Beyond Control Panels: Direct Manipulation for Visual Analytics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Endert, Alexander; Bradel, Lauren; North, Chris
2013-07-19
Information Visualization strives to provide visual representations through which users can think about and gain insight into information. By leveraging the visual and cognitive systems of humans, complex relationships and phenomena occurring within datasets can be uncovered by exploring information visually. Interaction metaphors for such visualizations are designed to give users direct control over the filters, queries, and other parameters controlling how the data is visually represented. Through the evolution of information visualization, more complex mathematical and data analytic models are being used to visualize relationships and patterns in data, creating the field of Visual Analytics. However, the expectations for how users interact with these visualizations have remained largely unchanged, focused primarily on the direct manipulation of parameters of the underlying mathematical models. In this article we present an opportunity to evolve the methodology for user interaction from the direct manipulation of parameters through visual control panels to interactions designed specifically for visual analytic systems. Instead of focusing on traditional direct manipulation of mathematical parameters, the evolution of the field can be realized through direct manipulation within the visual representation, where users can not only gain insight but also interact. This article describes future directions and research challenges that fundamentally change the meaning of direct manipulation with regard to visual analytics, advancing the Science of Interaction.
Azizian, Mahdi; Khoshnam, Mahta; Najmaei, Nima; Patel, Rajni V
2014-09-01
Intra-operative imaging is widely used to provide visual feedback to a clinician when he/she performs a procedure. In visual servoing, surgical instruments and parts of tissue/body are tracked by processing the acquired images. This information is then used within a control loop to manoeuvre a robotic manipulator during a procedure. A comprehensive search of electronic databases was completed for the period 2000-2013 to provide a survey of the visual servoing applications in medical robotics. The focus is on medical applications where image-based tracking is used for closed-loop control of a robotic system. Detailed classification and comparative study of various contributions in visual servoing using endoscopic or direct visual images are presented and summarized in tables and diagrams. The main challenges in using visual servoing for medical robotic applications are identified and potential future directions are suggested. 'Supervised automation of medical robotics' is found to be a major trend in this field. Copyright © 2013 John Wiley & Sons, Ltd.
Audio–visual interactions for motion perception in depth modulate activity in visual area V3A
Ogawa, Akitoshi; Macaluso, Emiliano
2013-01-01
Multisensory signals can enhance the spatial perception of objects and events in the environment. Changes of visual size and auditory intensity provide us with the main cues about motion direction in depth. However, frequency changes in audition and binocular disparity in vision also contribute to the perception of motion in depth. Here, we presented subjects with several combinations of auditory and visual depth-cues to investigate multisensory interactions during processing of motion in depth. The task was to discriminate the direction of auditory motion in depth according to increasing or decreasing intensity. Rising or falling auditory frequency provided an additional within-audition cue that matched or did not match the intensity change (i.e. intensity-frequency (IF) “matched vs. unmatched” conditions). In two-thirds of the trials, a task-irrelevant visual stimulus moved either in the same or opposite direction of the auditory target, leading to audio–visual “congruent vs. incongruent” between-modalities depth-cues. Furthermore, these conditions were presented either with or without binocular disparity. Behavioral data showed that the best performance was observed in the audio–visual congruent condition with IF matched. Brain imaging results revealed maximal response in visual area V3A when all cues provided congruent and reliable depth information (i.e. audio–visual congruent, IF-matched condition including disparity cues). Analyses of effective connectivity revealed increased coupling from auditory cortex to V3A specifically in audio–visual congruent trials. We conclude that within- and between-modalities cues jointly contribute to the processing of motion direction in depth, and that they do so via dynamic changes of connectivity between visual and auditory cortices. PMID:23333414
Saccadic Corollary Discharge Underlies Stable Visual Perception
Berman, Rebecca A.; Joiner, Wilsaan M.; Wurtz, Robert H.
2016-01-01
Saccadic eye movements direct the high-resolution foveae of our retinas toward objects of interest. With each saccade, the image jumps on the retina, causing a discontinuity in visual input. Our visual perception, however, remains stable. Philosophers and scientists over centuries have proposed that visual stability depends upon an internal neuronal signal that is a copy of the neuronal signal driving the eye movement, now referred to as a corollary discharge (CD) or efference copy. In the Old World monkey, such a CD circuit for saccades has been identified extending from superior colliculus through MD thalamus to frontal cortex, but there is little evidence that this circuit actually contributes to visual perception. We tested the influence of this CD circuit on visual perception by first training macaque monkeys to report their perceived eye direction, and then reversibly inactivating the CD as it passes through the thalamus. We found that the monkey's perception changed; during CD inactivation, there was a difference between where the monkey perceived its eyes to be directed and where they were actually directed. Perception and saccade were decoupled. We established that the perceived eye direction at the end of the saccade was not derived from proprioceptive input from eye muscles, and was not altered by contextual visual information. We conclude that the CD provides internal information contributing to the brain's creation of perceived visual stability. More specifically, the CD might provide the internal saccade vector used to unite separate retinal images into a stable visual scene. SIGNIFICANCE STATEMENT Visual stability is one of the most remarkable aspects of human vision. The eyes move rapidly several times per second, displacing the retinal image each time. The brain compensates for this disruption, keeping our visual perception stable.
A major hypothesis explaining this stability invokes a signal within the brain, a corollary discharge, that informs visual regions of the brain when and where the eyes are about to move. Such a corollary discharge circuit for eye movements has been identified in macaque monkey. We now show that selectively inactivating this brain circuit alters the monkey's visual perception. We conclude that this corollary discharge provides a critical signal that can be used to unite jumping retinal images into a consistent visual scene. PMID:26740647
How visual cues for when to listen aid selective auditory attention.
Varghese, Lenny A; Ozmeral, Erol J; Best, Virginia; Shinn-Cunningham, Barbara G
2012-06-01
Visual cues are known to aid auditory processing when they provide direct information about signal content, as in lip reading. However, some studies hint that visual cues also aid auditory perception by guiding attention to the target in a mixture of similar sounds. The current study directly tests this idea for complex, nonspeech auditory signals, using a visual cue providing only timing information about the target. Listeners were asked to identify a target zebra finch bird song played at a random time within a longer, competing masker. Two different maskers were used: noise and a chorus of competing bird songs. On half of all trials, a visual cue indicated the timing of the target within the masker. For the noise masker, the visual cue did not affect performance when target and masker were from the same location, but improved performance when target and masker were in different locations. In contrast, for the chorus masker, visual cues improved performance only when target and masker were perceived as coming from the same direction. These results suggest that simple visual cues for when to listen improve target identification by enhancing sounds near the threshold of audibility when the target is energetically masked and by enhancing segregation when it is difficult to direct selective attention to the target. Visual cues help little when target and masker already differ in attributes that enable listeners to engage selective auditory attention effectively, including differences in spectrotemporal structure and in perceived location.
Geoscience data visualization and analysis using GeoMapApp
NASA Astrophysics Data System (ADS)
Ferrini, Vicki; Carbotte, Suzanne; Ryan, William; Chan, Samantha
2013-04-01
Increased availability of geoscience data resources has resulted in new opportunities for developing visualization and analysis tools that not only promote data integration and synthesis, but also facilitate quantitative cross-disciplinary access to data. Interdisciplinary investigations, in particular, frequently require visualizations and quantitative access to specialized data resources across disciplines, which has historically required specialist knowledge of data formats and software tools. GeoMapApp (www.geomapapp.org) is a free online data visualization and analysis tool that provides direct quantitative access to a wide variety of geoscience data for a broad international interdisciplinary user community. While GeoMapApp provides access to online data resources, it can also be packaged to work offline through the deployment of a small portable hard drive. This mode of operation can be particularly useful during field programs to provide functionality and direct access to data when a network connection is not possible. Hundreds of data sets from a variety of repositories are directly accessible in GeoMapApp, without the need for the user to understand the specifics of file formats or data reduction procedures. Available data include global and regional gridded data, images, as well as tabular and vector datasets. In addition to basic visualization and data discovery functionality, users are provided with simple tools for creating customized maps and visualizations and to quantitatively interrogate data. Specialized data portals with advanced functionality are also provided for power users to further analyze data resources and access underlying component datasets. Users may import and analyze their own geospatial datasets by loading local versions of geospatial data and can access content made available through Web Feature Services (WFS) and Web Map Services (WMS). 
Once data are loaded in GeoMapApp, a variety of options are provided to export data and/or 2D/3D visualizations into common formats including grids, images, text files, and spreadsheets. Examples of interdisciplinary investigations that make use of GeoMapApp visualization and analysis functionality will be provided.
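The abstract above notes that GeoMapApp can consume content exposed through standard OGC Web Map Services (WMS). A minimal sketch of what such access looks like at the protocol level, building a WMS 1.1.1 GetMap request URL by hand; the endpoint and layer name here are hypothetical placeholders, not actual GeoMapApp services:

```python
from urllib.parse import urlencode

# Hypothetical WMS endpoint, for illustration only.
WMS_ENDPOINT = "https://example.org/wms"

def wms_getmap_url(layer, bbox, width=512, height=512):
    """Build an OGC WMS 1.1.1 GetMap request URL for a lat/lon bounding box.

    bbox is (minx, miny, maxx, maxy) in decimal degrees.
    """
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.1.1",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "SRS": "EPSG:4326",                      # geographic lat/lon
        "BBOX": ",".join(str(v) for v in bbox),  # minx,miny,maxx,maxy
        "WIDTH": str(width),
        "HEIGHT": str(height),
        "FORMAT": "image/png",
    }
    return WMS_ENDPOINT + "?" + urlencode(params)

# Request a map tile for a hypothetical "bathymetry" layer off the US east coast.
url = wms_getmap_url("bathymetry", (-75.0, 35.0, -70.0, 40.0))
print(url)
```

Fetching the resulting URL with any HTTP client would return a rendered PNG map from a real WMS server; clients like GeoMapApp construct requests of this shape internally.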
Visual receptive field properties of cells in the optic tectum of the archer fish.
Ben-Tov, Mor; Kopilevich, Ivgeny; Donchin, Opher; Ben-Shahar, Ohad; Giladi, Chen; Segev, Ronen
2013-08-01
The archer fish is well known for its extreme visual behavior in shooting water jets at prey hanging on vegetation above water. This fish is a promising model in the study of visual system function because it can be trained to respond to artificial targets and thus to provide valuable psychophysical data. Although much behavioral data have indeed been collected over the past two decades, little is known about the functional organization of the main visual area supporting this visual behavior, namely, the fish optic tectum. In this article we focus on a fundamental aspect of this functional organization and provide a detailed analysis of receptive field properties of cells in the archer fish optic tectum. Using extracellular measurements to record activities of single cells, we first measure their retinotectal mapping. We then determine their receptive field properties such as size, selectivity for stimulus direction and orientation, tuning for spatial frequency, and tuning for temporal frequency. Finally, on the basis of all these measurements, we demonstrate that optic tectum cells can be classified into three categories: orientation-tuned cells, direction-tuned cells, and direction-agnostic cells. Our results provide an essential basis for future investigations of information processing in the archer fish visual system.
Extended Wearing Trial of Trifield Lens Device for “Tunnel Vision”
Woods, Russell L.; Giorgi, Robert G.; Berson, Eliot L.; Peli, Eli
2009-01-01
Severe visual field constriction (tunnel vision) impairs the ability to navigate and walk safely. We evaluated Trifield glasses as a mobility rehabilitation device for tunnel vision in an extended wearing trial. Twelve patients with tunnel vision (5 to 22 degrees wide) due to retinitis pigmentosa or choroideremia participated in the 5-visit wearing trial. To expand the horizontal visual field, one spectacle lens was fitted with two apex-to-apex prisms that vertically bisected the pupil on primary gaze. This provides visual field expansion at the expense of visual confusion (two objects with the same visual direction). Patients were asked to wear these spectacles as much as possible for the duration of the wearing trial (median 8, range 6 to 60, weeks). Clinical success (continued wear, indicating perceived overall benefit), visual field expansion, perceived direction and perceived visual ability were measured. Of 12 patients, 9 chose to continue wearing the Trifield glasses at the end of the wearing trial. Of those 9 patients, at long-term follow-up (35 to 78 weeks), 3 reported still wearing the Trifield glasses. Visual field expansion (median 18, range 9 to 38, degrees) was demonstrated for all patients. No patient demonstrated adaptation to the change in visual direction produced by the Trifield glasses (prisms). For difficulty with obstacles, some differences between successful and non-successful wearers were found. Trifield glasses provided reported benefits in obstacle avoidance to 7 of the 12 patients completing the wearing trial. Crowded environments were particularly difficult for most wearers. Possible reasons for long-term discontinuation and lack of adaptation to perceived direction are discussed. PMID:20444130
Extended wearing trial of Trifield lens device for 'tunnel vision'.
Woods, Russell L; Giorgi, Robert G; Berson, Eliot L; Peli, Eli
2010-05-01
Severe visual field constriction (tunnel vision) impairs the ability to navigate and walk safely. We evaluated Trifield glasses as a mobility rehabilitation device for tunnel vision in an extended wearing trial. Twelve patients with tunnel vision (5-22 degrees wide) due to retinitis pigmentosa or choroideremia participated in the 5-visit wearing trial. To expand the horizontal visual field, one spectacle lens was fitted with two apex-to-apex prisms that vertically bisected the pupil on primary gaze. This provides visual field expansion at the expense of visual confusion (two objects with the same visual direction). Patients were asked to wear these spectacles as much as possible for the duration of the wearing trial (median 8, range 6-60 weeks). Clinical success (continued wear, indicating perceived overall benefit), visual field expansion, perceived direction and perceived visual ability were measured. Of 12 patients, nine chose to continue wearing the Trifield glasses at the end of the wearing trial. Of those nine patients, at long-term follow-up (35-78 weeks), three reported still wearing the Trifield glasses. Visual field expansion (median 18, range 9-38 degrees) was demonstrated for all patients. No patient demonstrated adaptation to the change in visual direction produced by the Trifield glasses (prisms). For reported difficulty with obstacles, some differences between successful and non-successful wearers were found. Trifield glasses provided reported benefits in obstacle avoidance to 7 of the 12 patients completing the wearing trial. Crowded environments were particularly difficult for most wearers. Possible reasons for long-term discontinuation and lack of adaptation to perceived direction are discussed.
What and where information in the caudate tail guides saccades to visual objects
Yamamoto, Shinya; Monosov, Ilya E.; Yasuda, Masaharu; Hikosaka, Okihide
2012-01-01
We understand the world by making saccadic eye movements to various objects. However, it is unclear how a saccade can be aimed at a particular object, because two kinds of visual information, what the object is and where it is, are processed separately in the dorsal and ventral visual cortical pathways. Here we provide evidence suggesting that a basal ganglia circuit through the tail of the monkey caudate nucleus (CDt) guides such object-directed saccades. First, many CDt neurons responded to visual objects depending on where and what the objects were. Second, electrical stimulation in the CDt induced saccades whose directions matched the preferred directions of neurons at the stimulation site. Third, many CDt neurons increased their activity before saccades directed to the neurons’ preferred objects and directions in a free-viewing condition. Our results suggest that CDt neurons receive both ‘what’ and ‘where’ information and guide saccades to visual objects. PMID:22875934
Scientific Visualization, Seeing the Unseeable
LBNL
2017-12-09
June 24, 2008 Berkeley Lab lecture: Scientific visualization transforms abstract data into readily comprehensible images, provides a vehicle for "seeing the unseeable," and plays a central role in both experimental and computational sciences. Wes Bethel, who heads the Scientific Visualization Group in the Computational Research Division, presents an overview of visualization and computer graphics, current research challenges, and future directions for the field.
Merlyn J. Paulson
1979-01-01
This paper outlines a project level process (V.I.S.) which utilizes very accurate and flexible computer algorithms in combination with contemporary site analysis and design techniques for visual evaluation, design and management. The process provides logical direction and connecting bridges through problem identification, information collection and verification, visual...
A survey of visualization systems for network security.
Shiravi, Hadi; Shiravi, Ali; Ghorbani, Ali A
2012-08-01
Security Visualization is a very young term. It expresses the idea that common visualization techniques were designed for use cases that do not lend themselves to security-related data, demanding novel techniques fine-tuned for the purpose of thorough analysis. A significant amount of work has been published in this area, but little has been done to study this emerging visualization discipline. We offer a comprehensive review of network security visualization and provide a taxonomy in the form of five use-case classes encompassing nearly all recent works in this area. We outline the incorporated visualization techniques and data sources and provide an informative table to display our findings. From the analysis of these systems, we examine issues and concerns regarding network security visualization and provide guidelines and directions for future researchers and visual system developers.
Saunders, Jeffrey A.
2014-01-01
Direction of self-motion during walking is indicated by multiple cues, including optic flow, nonvisual sensory cues, and motor prediction. I measured the reliability of perceived heading from visual and nonvisual cues during walking, and whether cues are weighted in an optimal manner. I used a heading alignment task to measure perceived heading during walking. Observers walked toward a target in a virtual environment with and without global optic flow. The target was simulated to be infinitely far away, so that it did not provide direct feedback about direction of self-motion. Variability in heading direction was low even without optic flow, with average RMS error of 2.4°. Global optic flow reduced variability to 1.9°–2.1°, depending on the structure of the environment. The small amount of variance reduction was consistent with optimal use of visual information. The relative contribution of visual and nonvisual information was also measured using cue conflict conditions. Optic flow specified a conflicting heading direction (±5°), and bias in walking direction was used to infer relative weighting. Visual feedback influenced heading direction by 16%–34% depending on scene structure, with more effect with dense motion parallax. The weighting of visual feedback was close to the predictions of an optimal integration model given the observed variability measures. PMID:24648194
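The "optimal integration" referenced in the abstract above is standard maximum-likelihood cue combination, in which each cue is weighted by its inverse variance. A minimal sketch, using the reported nonvisual variability (2.4°) and an assumed visual-only variability (the 3.5° value is hypothetical, not reported in the abstract):

```python
# Maximum-likelihood (inverse-variance) integration of two independent
# Gaussian cues: weights are proportional to 1/sigma^2, and the combined
# estimate has lower variance than either cue alone.

def ml_combine(sigma_a, sigma_b):
    """Return (weight_a, combined_sigma) for two independent Gaussian cues."""
    precision_a = 1 / sigma_a**2
    precision_b = 1 / sigma_b**2
    weight_a = precision_a / (precision_a + precision_b)
    combined_sigma = (1 / (precision_a + precision_b)) ** 0.5
    return weight_a, combined_sigma

sigma_nonvisual = 2.4  # reported heading variability without optic flow (deg)
sigma_visual = 3.5     # assumed visual-only variability (hypothetical)

w_visual, sigma_combined = ml_combine(sigma_visual, sigma_nonvisual)
print(round(w_visual, 2), round(sigma_combined, 2))  # prints: 0.32 1.98
```

With these illustrative numbers the model predicts a visual weight of about 32% and a combined variability of about 2.0°, which is in the same range as the 16%-34% weighting and 1.9°-2.1° variability reported in the abstract.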
Two-year-olds can begin to acquire verb meanings in socially impoverished contexts.
Arunachalam, Sudha
2013-12-01
By two years of age, toddlers are adept at recruiting social, observational, and linguistic cues to discover the meanings of words. Here, we ask how they fare in impoverished contexts in which linguistic cues are provided, but no social or visual information is available. Novel verbs are presented in a stream of syntactically informative sentences, but the sentences are not embedded in a social context, and no visual access to the verb's referent is provided until the test phase. The results provide insight into how toddlers may benefit from overhearing contexts in which they are not directly attending to the ambient speech, and in which no conversational context, visual referent, or child-directed conversation is available. Copyright © 2013 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Smith, Peter; Kempler, Steven; Leptoukh, Gregory; Chen, Aijun
2010-01-01
This poster paper describes a NASA-funded project to employ the latest three-dimensional visualization technology to explore and provide direct data access to heterogeneous A-Train datasets. Google Earth™ provides a foundation for organizing, visualizing, publishing, and synergizing Earth science data.
The role of visual and direct force feedback in robotics-assisted mitral valve annuloplasty.
Currie, Maria E; Talasaz, Ali; Rayman, Reiza; Chu, Michael W A; Kiaii, Bob; Peters, Terry; Trejos, Ana Luisa; Patel, Rajni
2017-09-01
The objective of this work was to determine the effect of both direct force feedback and visual force feedback on the amount of force applied to mitral valve tissue during ex vivo robotics-assisted mitral valve annuloplasty. A force feedback-enabled master-slave surgical system was developed to provide both visual and direct force feedback during robotics-assisted cardiac surgery. This system measured the amount of force applied by novice and expert surgeons to cardiac tissue during ex vivo mitral valve annuloplasty repair. The addition of visual (2.16 ± 1.67 N), direct (1.62 ± 0.86 N), or both visual and direct force feedback (2.15 ± 1.08 N) resulted in lower mean maximum force applied to mitral valve tissue while suturing compared with no force feedback (3.34 ± 1.93 N; P < 0.05). To achieve better control of interaction forces on cardiac tissue during robotics-assisted mitral valve annuloplasty suturing, force feedback may be required. Copyright © 2016 John Wiley & Sons, Ltd.
Nawroth, Christian; von Borell, Eberhard
2015-05-01
Recently, foraging strategies have been linked to the ability to use indirect visual information. More selective feeders should express a higher aversion to losses than non-selective feeders and should therefore be more prone to avoid empty food locations. To extend these findings, in this study we present a series of studies investigating the use of direct and indirect visual and auditory information by an omnivorous but selective feeder, the domestic pig. Subjects had to choose between two buckets, with only one containing a reward. Before making a choice, the subjects in Experiment 1 (N = 8) received full information regarding both the baited and non-baited location, either in the visual or the auditory domain. In this experiment, the subjects were able to use visual but not auditory cues to infer the location of the reward spontaneously. Additionally, four individuals learned to use auditory cues after a period of training. In Experiment 2 (N = 8), the pigs were given different amounts of visual information about the content of the buckets: lifting either both of the buckets (full information), the baited bucket (direct information), the empty bucket (indirect information), or no bucket at all (no information). The subjects as a group were able to use direct and indirect visual cues. However, over the course of the experiment, performance dropped to chance level when indirect information was provided. A final experiment (N = 3) provided preliminary results for pigs' use of indirect auditory information to infer the location of a reward. We conclude that pigs at a very young age are able to make decisions based on indirect information in the visual domain, whereas their performance in the use of indirect auditory information warrants further investigation.
Redundancy reduction explains the expansion of visual direction space around the cardinal axes.
Perrone, John A; Liston, Dorion B
2015-06-01
Motion direction discrimination in humans is worse for oblique directions than for the cardinal directions (the oblique effect). For some unknown reason, the human visual system makes systematic errors in the estimation of particular motion directions; a direction displacement near a cardinal axis appears larger than it really is, whereas the same displacement near an oblique axis appears smaller. Although the perceptual effects are robust and are clearly measurable in smooth pursuit eye movements, all attempts to identify the neural underpinnings of the oblique effect have failed. Here we show that a model of image velocity estimation based on the known properties of neurons in primary visual cortex (V1) and the middle temporal (MT) visual area of the primate brain produces the oblique effect. We also provide an explanation for the unusual asymmetric patterns of inhibition that have been found surrounding MT neurons. These patterns are consistent with a mechanism within the visual system that prevents redundant velocity signals from being passed on to the next motion-integration stage (dorsal medial superior temporal area, MSTd). We show that model redundancy-reduction mechanisms within the MT-MSTd pathway produce the oblique effect. Copyright © 2015 Elsevier Ltd. All rights reserved.
Localized direction selective responses in the dendrites of visual interneurons of the fly
2010-01-01
Background The various tasks of visual systems, including course control, collision avoidance and the detection of small objects, require at the neuronal level the dendritic integration and subsequent processing of many spatially distributed visual motion inputs. While much is known about the pooled output in these systems, as in the medial superior temporal cortex of monkeys or in the lobula plate of the insect visual system, the motion tuning of the elements that provide the input has so far received little attention. In order to visualize the motion tuning of these inputs we examined the dendritic activation patterns of neurons that are selective for the characteristic patterns of wide-field motion, the lobula-plate tangential cells (LPTCs) of the blowfly. These neurons are known to sample direction-selective motion information from large parts of the visual field and combine these signals into axonal and dendro-dendritic outputs. Results Fluorescence imaging of intracellular calcium concentration allowed us to take a direct look at the local dendritic activity and the resulting local preferred directions in LPTC dendrites during activation by wide-field motion in different directions. These 'calcium response fields' resembled a retinotopic dendritic map of local preferred directions in the receptive field, the layout of which is a distinguishing feature of different LPTCs. Conclusions Our study reveals how neurons acquire selectivity for distinct visual motion patterns by dendritic integration of the local inputs with different preferred directions. With their spatial layout of directional responses, the dendrites of the LPTCs we investigated thus served as matched filters for wide-field motion patterns. PMID:20384983
Primary Visual Cortex as a Saliency Map: A Parameter-Free Prediction and Its Test by Behavioral Data
Zhaoping, Li; Zhe, Li
2015-01-01
It has been hypothesized that neural activities in the primary visual cortex (V1) represent a saliency map of the visual field to exogenously guide attention. This hypothesis has so far provided only qualitative predictions and their confirmations. We report this hypothesis’ first quantitative prediction, derived without free parameters, and its confirmation by human behavioral data. The hypothesis provides a direct link between V1 neural responses to a visual location and the saliency of that location to guide attention exogenously. In a visual input containing many bars, one of them saliently different from all the other bars which are identical to each other, saliency at the singleton’s location can be measured by the shortness of the reaction time in a visual search for singletons. The hypothesis predicts quantitatively the whole distribution of the reaction times to find a singleton unique in color, orientation, and motion direction from the reaction times to find other types of singletons. The prediction matches human reaction time data. A requirement for this successful prediction is a data-motivated assumption that V1 lacks neurons tuned simultaneously to color, orientation, and motion direction of visual inputs. Since evidence suggests that extrastriate cortices do have such neurons, we discuss the possibility that the extrastriate cortices play no role in guiding exogenous attention so that they can be devoted to other functions like visual decoding and endogenous attention. PMID:26441341
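One common way to formalize a parameter-free prediction of the kind described above is a race model: if independent detectors tuned to color, orientation, and motion each produce a response time, a singleton unique in all three features is found when the fastest detector finishes. The study's actual derivation may differ in detail; the sketch below uses synthetic RT samples, not data from the paper:

```python
import random

random.seed(0)

# Race-model sketch: the RT for a triple-feature singleton is modeled as the
# minimum of three independent draws from the single-feature RT distributions.
# All RT samples below are synthetic placeholders (milliseconds).

def race_model_rts(rts_c, rts_o, rts_m, n=10000):
    """Sample predicted RTs for a color+orientation+motion singleton."""
    return [min(random.choice(rts_c), random.choice(rts_o), random.choice(rts_m))
            for _ in range(n)]

rts_color = [random.gauss(650, 80) for _ in range(500)]
rts_orient = [random.gauss(700, 90) for _ in range(500)]
rts_motion = [random.gauss(680, 85) for _ in range(500)]

predicted = race_model_rts(rts_color, rts_orient, rts_motion)
mean_predicted = sum(predicted) / len(predicted)

# Taking the minimum of three draws shifts the predicted distribution toward
# faster RTs than any single-feature distribution.
print(round(mean_predicted))
```

Because the prediction is built entirely from the measured single-feature RT distributions, it contains no free parameters, which is what makes the comparison with observed triple-singleton RTs a strong test of the underlying hypothesis.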
Mendoza-Halliday, Diego; Martinez-Trujillo, Julio C.
2017-01-01
The primate lateral prefrontal cortex (LPFC) encodes visual stimulus features while they are perceived and while they are maintained in working memory. However, it remains unclear whether perceived and memorized features are encoded by the same or different neurons and population activity patterns. Here we record LPFC neuronal activity while monkeys perceive the motion direction of a stimulus that remains visually available, or memorize the direction if the stimulus disappears. We find neurons with a wide variety of combinations of coding strength for perceived and memorized directions: some neurons encode both to similar degrees while others preferentially or exclusively encode either one. Reading out the combined activity of all neurons, a machine-learning algorithm reliably decodes the motion direction and determines whether it is perceived or memorized. Our results indicate that a functionally diverse population of LPFC neurons provides a substrate for discriminating between perceptual and mnemonic representations of visual features. PMID:28569756
Visualization of solidification front phenomena
NASA Technical Reports Server (NTRS)
Workman, Gary L.; Smith, Guy A.
1993-01-01
Directional solidification experiments have been utilized throughout the Materials Processing in Space Program to provide an experimental platform which minimizes variables in solidification experiments. Because of the wide-spread use of this experimental technique in space-based research, it has become apparent that the phenomena occurring during solidification could be better understood if direct visualization of the solidification interface were possible.
Neural Circuit to Integrate Opposing Motions in the Visual Field.
Mauss, Alex S; Pankova, Katarina; Arenz, Alexander; Nern, Aljoscha; Rubin, Gerald M; Borst, Alexander
2015-07-16
When navigating in their environment, animals use visual motion cues as feedback signals that are elicited by their own motion. Such signals are provided by wide-field neurons sampling motion directions at multiple image points as the animal maneuvers. Each one of these neurons responds selectively to a specific optic flow-field representing the spatial distribution of motion vectors on the retina. Here, we describe the discovery of a group of local, inhibitory interneurons in the fruit fly Drosophila that are key for filtering these cues. Using anatomy, molecular characterization, activity manipulation, and physiological recordings, we demonstrate that these interneurons convey direction-selective inhibition to wide-field neurons with opposite preferred direction and provide evidence for how their connectivity enables the computation required for integrating opposing motions. Our results indicate that, rather than sharpening directional selectivity per se, these circuit elements reduce noise by eliminating non-specific responses to complex visual information. Copyright © 2015 Elsevier Inc. All rights reserved.
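The computational effect described above can be caricatured in a few lines. This is a toy sketch of the integration principle, not the measured circuit: null-direction inhibition cancels excitation point-for-point, so only coherent, flow-field-like motion drives the wide-field cell.

```python
def wide_field_response(local_motion, preferred=+1):
    """Toy opposing-motion integration: each image point contributes excitation
    when its motion matches the wide-field cell's preferred direction; local
    inhibitory interneurons with the opposite preference subtract a matching
    amount for null-direction motion (+1 / -1 encode the two directions)."""
    excitation = sum(1 for m in local_motion if m == preferred)
    inhibition = sum(1 for m in local_motion if m == -preferred)
    return excitation - inhibition

coherent = [+1] * 10             # whole field moves in the preferred direction
opposing = [+1] * 5 + [-1] * 5   # half the field moves the opposite way

resp_coherent = wide_field_response(coherent)   # strong response to coherent flow
resp_opposing = wide_field_response(opposing)   # opposing motions cancel to zero
```

In this caricature the cell's selectivity for a specific optic flow-field comes not from sharper local tuning but from suppression of non-coherent input, matching the abstract's noise-reduction interpretation.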
ERIC Educational Resources Information Center
Berney, Sandra; Bétrancourt, Mireille; Molinari, Gaëlle; Hoyek, Nady
2015-01-01
The emergence of dynamic visualizations of three-dimensional (3D) models in anatomy curricula may be an adequate solution for spatial difficulties encountered with traditional static learning, as they provide direct visualization of change throughout the viewpoints. However, little research has explored the interplay between learning material…
Bethel, E. Wes [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Computational Research Division and Scientific Visualization Group
2018-05-07
Summer Lecture Series 2008: Scientific visualization transforms abstract data into readily comprehensible images, provides a vehicle for "seeing the unseeable," and plays a central role in both experimental and computational sciences. Wes Bethel, who heads the Scientific Visualization Group in the Computational Research Division, presents an overview of visualization and computer graphics, current research challenges, and future directions for the field.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen Bo; State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Science, Beijing 100101; Xia Jing
Physiological and behavioral studies have demonstrated that a number of visual functions such as visual acuity, contrast sensitivity, and motion perception can be impaired by acute alcohol exposure. The orientation- and direction-selective responses of cells in primary visual cortex are thought to participate in the perception of form and motion. To investigate how orientation selectivity and direction selectivity of neurons are influenced by acute alcohol exposure in vivo, we used the extracellular single-unit recording technique to examine the response properties of neurons in primary visual cortex (A17) of adult cats. We found that alcohol reduces spontaneous activity, visual evoked unit responses, the signal-to-noise ratio, and orientation selectivity of A17 cells. In addition, small but detectable changes in both the preferred orientation/direction and the bandwidth of the orientation tuning curve of strongly orientation-biased A17 cells were observed after acute alcohol administration. Our findings may provide physiological evidence for some alcohol-related deficits in visual function observed in behavioral studies.
Rotational dynamics of cargos at pauses during axonal transport.
Gu, Yan; Sun, Wei; Wang, Gufeng; Jeftinija, Ksenija; Jeftinija, Srdija; Fang, Ning
2012-01-01
Direct visualization of axonal transport in live neurons is essential for our understanding of the neuronal functions and the working mechanisms of microtubule-based motor proteins. Here we use the high-speed single particle orientation and rotational tracking technique to directly visualize the rotational dynamics of cargos in both active directional transport and pausing stages of axonal transport, with a temporal resolution of 2 ms. Both long and short pauses are imaged, and the correlations between the pause duration, the rotational behaviour of the cargo at the pause, and the moving direction after the pause are established. Furthermore, the rotational dynamics leading to switching tracks are visualized in detail. These first-time observations of cargo rotational dynamics provide new insights into how kinesin and dynein motors take the cargo through the alternating stages of active directional transport and pause.
Effects of visual attention on chromatic and achromatic detection sensitivities.
Uchikawa, Keiji; Sato, Masayuki; Kuwamura, Keiko
2014-05-01
Visual attention has a significant effect on various visual functions, such as response time, detection and discrimination sensitivity, and color appearance. It has been suggested that visual attention may affect visual functions in the early visual pathways. In this study we examined selective effects of visual attention on sensitivities of the chromatic and achromatic pathways to clarify whether visual attention modifies responses in the early visual system. We used a dual task paradigm in which the observer detected a peripheral test stimulus presented at 4 deg eccentricity while concurrently carrying out an attention task in the central visual field. Experiment 1 confirmed that, with the central attention task, peripheral spectral sensitivities were reduced more for short and long wavelengths than for middle wavelengths, so that visual attention changed the shape of the spectral sensitivity function. This indicated that visual attention affected the chromatic response more strongly than the achromatic response. Experiment 2 showed that, in the dual task condition, detection thresholds increased more in the red-green and yellow-blue chromatic directions than in the white-black achromatic direction. In experiment 3 we showed that the peripheral threshold elevations depended on the combination of color directions of the central and peripheral stimuli. Since the chromatic and achromatic responses are processed separately in the early visual pathways, the present results provide additional evidence that visual attention affects responses in the early visual pathways.
Effects of Hand Proximity and Movement Direction in Spatial and Temporal Gap Discrimination.
Wiemers, Michael; Fischer, Martin H
2016-01-01
Previous research on the interplay between static manual postures and visual attention revealed enhanced visual selection near the hands (near-hand effect). During active movements there is also superior visual performance when moving toward compared to away from the stimulus (direction effect). The "modulated visual pathways" hypothesis argues that differential involvement of magno- and parvocellular visual processing streams causes the near-hand effect. The key finding supporting this hypothesis is an increase in temporal and a reduction in spatial processing in near-hand space (Gozli et al., 2012). Since this hypothesis has, so far, only been tested with static hand postures, we provide a conceptual replication of Gozli et al.'s (2012) result with moving hands, thus also probing the generality of the direction effect. Participants performed temporal or spatial gap discriminations while their right hand was moving below the display. In contrast to Gozli et al. (2012), temporal gap discrimination was superior at intermediate and not near hand proximity. In spatial gap discrimination, a direction effect without hand proximity effect suggests that pragmatic attentional maps overshadowed temporal/spatial processing biases for far/near-hand space.
Local statistics of retinal optic flow for self-motion through natural sceneries.
Calow, Dirk; Lappe, Markus
2007-12-01
Image analysis in the visual system is well adapted to the statistics of natural scenes. Investigations of natural image statistics have so far mainly focused on static features. The present study is dedicated to the measurement and the analysis of the statistics of optic flow generated on the retina during locomotion through natural environments. Natural locomotion includes bouncing and swaying of the head and eye movement reflexes that stabilize gaze onto interesting objects in the scene while walking. We investigate the dependencies of the local statistics of optic flow on the depth structure of the natural environment and on the ego-motion parameters. To measure these dependencies we estimate the mutual information between correlated data sets. We analyze the results with respect to the variation of the dependencies over the visual field, since the visual motions in the optic flow vary depending on visual field position. We find that retinal flow direction and retinal speed show only minor statistical interdependencies. Retinal speed is statistically tightly connected to the depth structure of the scene. Retinal flow direction is statistically mostly driven by the relation between the direction of gaze and the direction of ego-motion. These dependencies differ at different visual field positions such that certain areas of the visual field provide more information about ego-motion and other areas provide more information about depth. The statistical properties of natural optic flow may be used to tune the performance of artificial vision systems based on human imitating behavior, and may be useful for analyzing properties of natural vision systems.
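The study's dependency measure, mutual information between correlated data sets, is straightforward to estimate from histograms. The sketch below uses invented toy data, not the natural-scenery measurements: retinal speed is made to follow a 1/depth relation (as for pure translation), while flow direction is drawn independently, reproducing the qualitative finding that speed is tightly coupled to depth while direction is not.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram estimate of I(X;Y) in bits: I = sum p(x,y) log2 p(x,y)/(p(x)p(y))."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    nz = p_xy > 0
    return float((p_xy[nz] * np.log2(p_xy[nz] / (p_x @ p_y)[nz])).sum())

rng = np.random.default_rng(1)
depth = rng.uniform(1.0, 10.0, 20000)                      # toy scene depths (m)
speed = 1.0 / depth + rng.normal(0, 0.005, 20000)          # retinal speed ~ 1/depth
direction = rng.uniform(-np.pi, np.pi, 20000)              # flow direction, independent

mi_speed_depth = mutual_information(speed, depth)          # strong dependency
mi_speed_direction = mutual_information(speed, direction)  # near zero (estimator bias only)
```

Note that a plain histogram estimator carries a positive bias for finite samples, which is why the independent pair still yields a small nonzero value; the study's conclusions rest on the relative magnitudes across visual field positions.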
Direct Visualization of Shock Waves in Supersonic Space Shuttle Flight
NASA Technical Reports Server (NTRS)
OFarrell, J. M.; Rieckhoff, T. J.
2011-01-01
Direct observation of shock boundaries is rare. This Technical Memorandum describes direct observation of shock waves produced by the space shuttle vehicle during STS-114 and STS-110 in imagery provided by NASA s tracking cameras.
Backman, Chantal; Bruce, Natalie; Marck, Patricia; Vanderloo, Saskia
2016-01-01
The purpose of this quality improvement project was to determine the feasibility of using provider-led participatory visual methods to scrutinize 4 hospital units' infection prevention and control practices. Methods included provider-led photo walkabouts, photo elicitation sessions, and postimprovement photo walkabouts. Nurses readily engaged in using the methods to examine and improve their units' practices and reorganize their work environment.
Influence of visual path information on human heading perception during rotation.
Li, Li; Chen, Jing; Peng, Xiaozhe
2009-03-31
How does visual path information influence people's perception of their instantaneous direction of self-motion (heading)? We have previously shown that humans can perceive heading without direct access to visual path information. Here we vary two key parameters for estimating heading from optic flow, the field of view (FOV) and the depth range of environmental points, to investigate the conditions under which visual path information influences human heading perception. The display simulated an observer traveling on a circular path. Observers used a joystick to rotate their line of sight until deemed aligned with true heading. Four FOV sizes (110 x 94 degrees, 48 x 41 degrees, 16 x 14 degrees, 8 x 7 degrees) and depth ranges (6-50 m, 6-25 m, 6-12.5 m, 6-9 m) were tested. Consistent with our computational modeling results, heading bias increased with the reduction of FOV or depth range when the display provided a sequence of velocity fields but no direct path information. When the display provided path information, heading bias was not influenced as much by the reduction of FOV or depth range. We conclude that human heading and path perception involve separate visual processes. Path helps heading perception when the display does not contain enough optic-flow information for heading estimation during rotation.
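For context, heading recovery from optic flow is simple in the rotation-free case: under pure translation every flow vector radiates from the focus of expansion (FOE), whose image position is the heading. The sketch below (a toy pinhole-camera model, not the authors' model) recovers the FOE by least squares; it is precisely the addition of rotation, as on the circular paths studied here, that breaks this direct method and makes FOV, depth range, and path information matter.

```python
import numpy as np

rng = np.random.default_rng(2)

def translational_flow(points, depths, T):
    """Image velocities for pure observer translation T = (Tx, Ty, Tz):
    u = (x*Tz - Tx)/Z, v = (y*Tz - Ty)/Z, vanishing at the FOE (Tx/Tz, Ty/Tz)."""
    x, y = points[:, 0], points[:, 1]
    u = (x * T[2] - T[0]) / depths
    v = (y * T[2] - T[1]) / depths
    return np.stack([u, v], axis=1)

def estimate_heading(points, flow):
    """Each flow vector lies on a line through the FOE, giving one linear
    constraint  v*x0 - u*y0 = v*x - u*y  per image point; solve in bulk."""
    A = np.stack([flow[:, 1], -flow[:, 0]], axis=1)
    b = flow[:, 1] * points[:, 0] - flow[:, 0] * points[:, 1]
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe

pts = rng.uniform(-1.0, 1.0, (200, 2))   # image locations across a wide FOV
Z = rng.uniform(6.0, 50.0, 200)          # depths spanning the largest tested range
T = np.array([0.1, 0.0, 1.0])            # heading slightly right of the optical axis
foe = estimate_heading(pts, translational_flow(pts, Z, T))
```

With observer rotation added, the velocity field no longer radiates from the heading point, and disambiguating the translational component requires exactly the FOV and depth-range cues the experiment manipulates.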
Visual probes and methods for placing visual probes into subsurface areas
Clark, Don T.; Erickson, Eugene E.; Casper, William L.; Everett, David M.
2004-11-23
Visual probes and methods for placing visual probes into subsurface areas in either contaminated or non-contaminated sites are described. In one implementation, the method includes driving at least a portion of a visual probe into the ground using direct push, sonic drilling, or a combination of direct push and sonic drilling. Such is accomplished without providing an open pathway for contaminants or fugitive gases to reach the surface. According to one implementation, the invention includes an entry segment configured for insertion into the ground or through difficult materials (e.g., concrete, steel, asphalt, metals, or items associated with waste), at least one extension segment configured to selectively couple with the entry segment, at least one push rod, and a pressure cap. Additional implementations are contemplated.
Pilot vision considerations : the effect of age on binocular fusion time.
DOT National Transportation Integrated Search
1966-10-01
The study provides data regarding the relationship between vision performance and age of the individual. It has direct application to pilot visual tasks with respect to instrument panel displays, and to controller visual tasks in association with rad...
Capture of visual direction in dynamic vergence is reduced with flashed monocular lines.
Jaschinski, Wolfgang; Jainta, Stephanie; Schürer, Michael
2006-08-01
The visual direction of a continuously presented monocular object is captured by the visual direction of a closely adjacent binocular object, which questions the reliability of nonius lines for measuring vergence. This was shown by Erkelens, C. J., and van Ee, R. (1997a,b) [Capture of the visual direction: An unexpected phenomenon in binocular vision. Vision Research, 37, 1193-1196; Capture of the visual direction of monocular objects by adjacent binocular objects. Vision Research, 37, 1735-1745] stimulating dynamic vergence by a counter phase oscillation of two square random-dot patterns (one to each eye) that contained a smaller central dot-free gap (of variable width) with a vertical monocular line oscillating in phase with the random-dot pattern of the respective eye; subjects adjusted the motion-amplitude of the line until it was perceived as (nearly) stationary. With a continuously presented monocular line, we replicated capture of visual direction provided the dot-free gap was narrow: the adjusted motion-amplitude of the line was similar as the motion-amplitude of the random-dot pattern, although large vergence errors occurred. However, when we flashed the line for 67 ms at the moments of maximal and minimal disparity of the vergence stimulus, we found that the adjusted motion-amplitude of the line was smaller; thus, the capture effect appeared to be reduced with flashed nonius lines. Accordingly, we found that the objectively measured vergence gain was significantly correlated (r=0.8) with the motion-amplitude of the flashed monocular line when the separation between the line and the fusion contour was at least 32 min arc. In conclusion, if one wishes to estimate the dynamic vergence response with psychophysical methods, effects of capture of visual direction can be reduced by using flashed nonius lines.
ERIC Educational Resources Information Center
Smith, Linda B.; Yu, Chen; Yoshida, Hanako; Fausey, Caitlin M.
2015-01-01
Head-mounted video cameras (with and without an eye camera to track gaze direction) are being increasingly used to study infants' and young children's visual environments and provide new and often unexpected insights about the visual world from a child's point of view. The challenge in using head cameras is principally conceptual and concerns the…
Paleomagnetism.org: An online multi-platform open source environment for paleomagnetic data analysis
NASA Astrophysics Data System (ADS)
Koymans, Mathijs R.; Langereis, Cor G.; Pastor-Galán, Daniel; van Hinsbergen, Douwe J. J.
2016-08-01
This contribution provides an overview of Paleomagnetism.org, an open-source, multi-platform online environment for paleomagnetic data analysis. Paleomagnetism.org provides an interactive environment where paleomagnetic data can be interpreted, evaluated, visualized, and exported. The Paleomagnetism.org application is split into an interpretation portal, a statistics portal, and a portal for miscellaneous paleomagnetic tools. In the interpretation portal, principal component analysis can be performed on visualized demagnetization diagrams. Interpreted directions and great circles can be combined to find great circle solutions. These directions can be used in the statistics portal, or exported as data and figures. The tools in the statistics portal cover standard Fisher statistics for directions and VGPs, as well as other statistical parameters used as reliability criteria. Other available tools include an eigenvector-approach foldtest, two reversal tests (including a Monte Carlo simulation on mean directions), and a coordinate bootstrap on the original data. An implementation is included for the detection and correction of inclination shallowing in sediments following TK03.GAD. Finally, we provide a module to visualize VGPs and expected paleolatitudes, declinations, and inclinations relative to widely used global apparent polar wander path models in coordinates of major continent-bearing plates. The tools in the miscellaneous portal include a net tectonic rotation (NTR) analysis to restore a body to its paleo-vertical and a bootstrapped oroclinal test using linear regression techniques, including a modified foldtest around a vertical axis. Paleomagnetism.org provides an integrated approach for researchers to work with visualized (e.g. hemisphere projections, Zijderveld diagrams) paleomagnetic data. The application constructs a custom exportable file that can be shared freely and included in public databases. This exported file contains all data and can later be imported to the application by other researchers. The accessibility and simplicity with which paleomagnetic data can be interpreted, analyzed, visualized, and shared make Paleomagnetism.org of interest to the community.
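The Fisher statistics at the heart of the statistics portal are compact enough to sketch. This is a standard Fisher (1953) computation, not code from Paleomagnetism.org itself, and the site directions below are invented illustrative values.

```python
import numpy as np

def fisher_mean(decs, incs):
    """Fisher statistics for paleomagnetic directions: mean declination and
    inclination, precision parameter k, and the 95% confidence cone a95 (deg)."""
    d, i = np.radians(decs), np.radians(incs)
    # Unit vectors in the paleomagnetic convention: x north, y east, z down.
    x = np.cos(i) * np.cos(d)
    y = np.cos(i) * np.sin(d)
    z = np.sin(i)
    N = len(decs)
    R = np.sqrt(x.sum() ** 2 + y.sum() ** 2 + z.sum() ** 2)  # resultant length
    mdec = np.degrees(np.arctan2(y.sum(), x.sum())) % 360
    minc = np.degrees(np.arcsin(z.sum() / R))
    k = (N - 1) / (N - R)
    a95 = np.degrees(np.arccos(1 - (N - R) / R * ((1 / 0.05) ** (1 / (N - 1)) - 1)))
    return mdec, minc, k, a95

# Toy site directions scattered around dec = 350, inc = 45.
decs = np.array([348.0, 352.0, 351.0, 349.0, 347.0, 353.0])
incs = np.array([44.0, 46.0, 43.0, 47.0, 45.0, 45.0])
mdec, minc, k, a95 = fisher_mean(decs, incs)
```

Tightly clustered directions give a resultant length R close to N, hence a large precision parameter k and a small a95 cone, which is how such parameters serve as reliability criteria.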
14 CFR 139.323 - Traffic and wind direction indicators.
Code of Federal Regulations, 2014 CFR
2014-01-01
14 Aeronautics and Space … Certification of Airports, Operations, § 139.323 Traffic and wind direction indicators. In a manner authorized by … a wind cone that visually provides surface wind direction information to pilots. For each runway …
Optical Docking Aid Containing Fresnel Lenses
NASA Technical Reports Server (NTRS)
Pierce, Cole J.
1995-01-01
Proposed device provides self-contained visual cues to aid in docking. Similar to devices used to guide pilots in landing on aircraft carriers. Positions and directions of beams of light give observer visual cues of position relative to docking target point. Optical assemblies generate directed, diverging beams of light that, together, mark approach path to docking point. Conceived for use in docking spacecraft at Space Station Freedom, device adapted to numerous industrial docking and alignment applications.
The Human is the Loop: New Directions for Visual Analytics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Endert, Alexander; Hossain, Shahriar H.; Ramakrishnan, Naren
2014-01-28
Visual analytics is the science of marrying interactive visualizations and analytic algorithms to support exploratory knowledge discovery in large datasets. We argue for a shift from a ‘human in the loop’ philosophy for visual analytics to a ‘human is the loop’ viewpoint, where the focus is on recognizing analysts’ work processes, and seamlessly fitting analytics into that existing interactive process. We survey a range of projects that provide visual analytic support contextually in the sensemaking loop, and outline a research agenda along with future challenges.
Eye Contact Is Crucial for Referential Communication in Pet Dogs.
Savalli, Carine; Resende, Briseida; Gaunet, Florence
2016-01-01
Dogs discriminate human direction of attention cues, such as body, gaze, head and eye orientation, in several circumstances. Eye contact particularly seems to provide information on human readiness to communicate; when there is such an ostensive cue, dogs tend to follow human communicative gestures more often. However, little is known about how such cues influence the production of communicative signals (e.g. gaze alternation and sustained gaze) in dogs. In the current study, in order to get unreachable food, dogs needed to communicate with their owners in several conditions that differed according to the direction of the owners' visual cues, namely gaze, head, eyes, and availability to make eye contact. Results provided evidence that pet dogs did not rely on details of their owners' direction of visual attention. Instead, they relied on the whole combination of visual cues and especially on the owners' availability to make eye contact. Dogs increased visual communicative behaviors when they established eye contact with their owners, a different strategy from that of apes and baboons, which intensify vocalizations and gestures when a human is not visually attending. The difference in strategy is possibly due to their distinct status: domesticated vs. wild. Results are discussed taking into account the ecological relevance of the task, since pet dogs live in a human environment and face similar situations daily throughout their lives.
Mapping the navigational knowledge of individually foraging ants, Myrmecia croslandi
Narendra, Ajay; Gourmaud, Sarah; Zeil, Jochen
2013-01-01
Ants are efficient navigators, guided by path integration and visual landmarks. Path integration is the primary strategy in landmark-poor habitats, but landmarks are readily used when available. The landmark panorama provides reliable information about heading direction, routes and specific location. Visual memories for guidance are often acquired along routes or near to significant places. Over what area can such locally acquired memories provide information for reaching a place? This question is unusually approachable in the solitary foraging Australian jack jumper ant, since individual foragers typically travel to one or two nest-specific foraging trees. We find that within 10 m from the nest, ants both with and without home vector information available from path integration return directly to the nest from all compass directions, after briefly scanning the panorama. By reconstructing panoramic views within the successful homing range, we show that in the open woodland habitat of these ants, snapshot memories acquired close to the nest provide sufficient navigational information to determine nest-directed heading direction over a surprisingly large area, including areas that animals may have not visited previously. PMID:23804615
Torsional ARC Effectively Expands the Visual Field in Hemianopia
Satgunam, PremNandhini; Peli, Eli
2012-01-01
Purpose Exotropia in congenital homonymous hemianopia has been reported to provide field expansion that is more useful when accompanied with harmonious anomalous retinal correspondence (HARC). Torsional strabismus with HARC provides a similar functional advantage. In a subject with hemianopia demonstrating a field expansion consistent with torsion we documented torsional strabismus and torsional HARC. Methods Monocular visual fields under binocular fixation conditions were plotted using a custom dichoptic visual field perimeter (DVF). The DVF was also modified to measure perceived visual directions under dissociated and associated conditions across the central 50° diameter field. The field expansion and retinal correspondence of a subject with torsional strabismus (along with exotropia and right hypertropia) with congenital homonymous hemianopia was compared to that of another exotropic subject with acquired homonymous hemianopia without torsion and to a control subject with minimal phoria. Torsional rotations of the eyes were calculated from fundus photographs and perimetry. Results Torsional ARC documented in the subject with congenital homonymous hemianopia provided a functional binocular field expansion up to 18°. Normal retinal correspondence was mapped for the full 50° visual field in the control subject and for the seeing field of the acquired homonymous hemianopia subject, limiting the functional field expansion benefit. Conclusions Torsional strabismus with ARC, when occurring with homonymous hemianopia, provides useful field expansion in the lower and upper fields. Dichoptic perimetry permits documentation of ocular alignment (lateral, vertical and torsional) and perceived visual direction under binocular and monocular viewing conditions. Evaluating patients with congenital or early strabismus for HARC is useful when considering surgical correction, particularly in the presence of congenital homonymous hemianopia. PMID:22885782
Vessel, Edward A; Biederman, Irving; Subramaniam, Suresh; Greene, Michelle R
2016-07-01
An L-vertex, the point at which two contours coterminate, provides highly reliable evidence that a surface terminates at that vertex, thus providing the strongest constraint on the extraction of shape from images (Guzman, 1968). Such vertices are pervasive in our visual world but the importance of a statistical regularity about them has been underappreciated: The contours defining the vertex are (almost) always of the same direction of contrast with respect to the background (i.e., both darker or both lighter). Here we show that when the two contours are of different directions of contrast, the capacity of the L-vertex to signal the termination of a surface, as reflected in object recognition, is markedly reduced. Although image statistics have been implicated in determining the connectivity in the earliest cortical visual stage (V1) and in grouping during visual search, this finding provides evidence that such statistics are involved in later stages where object representations are derived from two-dimensional images.
The effect of saccade metrics on the corollary discharge contribution to perceived eye location
Bansal, Sonia; Jayet Bray, Laurence C.; Peterson, Matthew S.
2015-01-01
Corollary discharge (CD) is hypothesized to provide the movement information (direction and amplitude) required to compensate for the saccade-induced disruptions to visual input. Here, we investigated to what extent these conveyed metrics influence perceptual stability in human subjects with a target-displacement detection task. Subjects made saccades to targets located at different amplitudes (4°, 6°, or 8°) and directions (horizontal or vertical). During the saccade, the target disappeared and then reappeared at a shifted location either in the same direction or opposite to the movement vector. Subjects reported the target displacement direction, and from these reports we determined the perceptual threshold for shift detection and estimate of target location. Our results indicate that the thresholds for all amplitudes and directions generally scaled with saccade amplitude. Additionally, subjects on average produced hypometric saccades with an estimated CD gain <1. Finally, we examined the contribution of different error signals to perceptual performance, the saccade error (movement-to-movement variability in saccade amplitude) and visual error (distance between the fovea and the shifted target location). Perceptual judgment was not influenced by the fluctuations in movement amplitude, and performance was largely the same across movement directions for different magnitudes of visual error. Importantly, subjects reported the correct direction of target displacement above chance level for very small visual errors (<0.75°), even when these errors were opposite the target-shift direction. Collectively, these results suggest that the CD-based compensatory mechanisms for visual disruptions are highly accurate and comparable for saccades with different metrics. PMID:25761955
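The compensation logic can be made concrete with a toy model (illustrative, not the authors' analysis; the function names and numbers are invented). The CD signal predicts the post-saccadic retinal location of the target from the gain-scaled eye movement; any residual between the actual and predicted location is perceived as a target displacement, with a detection threshold that scales with saccade amplitude.

```python
def perceived_shift(retinal_before, retinal_after, saccade_vector, cd_gain=0.9):
    """Predicted post-saccadic retinal location = pre-saccadic location minus
    the CD copy of the eye movement (scaled by cd_gain); the returned value
    is the residual the visual system would attribute to target motion."""
    predicted = tuple(r - cd_gain * s for r, s in zip(retinal_before, saccade_vector))
    return tuple(a - p for a, p in zip(retinal_after, predicted))

def detects_shift(shift, saccade_amplitude, threshold_fraction=0.1):
    """Displacement is reported when the residual exceeds a threshold that
    scales with saccade amplitude, as the measured thresholds did."""
    magnitude = sum(c * c for c in shift) ** 0.5
    return magnitude > threshold_fraction * saccade_amplitude

# An accurate 8 deg rightward saccade to a stationary target, with veridical CD:
stationary = perceived_shift((8.0, 0.0), (0.0, 0.0), (8.0, 0.0), cd_gain=1.0)
# With cd_gain < 1 (as estimated for the subjects), the same stationary target
# appears displaced against the saccade direction by (1 - gain) * amplitude:
undershoot = perceived_shift((8.0, 0.0), (0.0, 0.0), (8.0, 0.0), cd_gain=0.9)
```

The cd_gain < 1 case shows why gain estimates matter: an imperfect CD copy converts a stationary world into an apparent backward target shift, which is the kind of systematic error the displacement-detection task is designed to expose.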
Hasegawa, Sumitaka; Maruyama, Kouichi; Takenaka, Hikaru; Furukawa, Takako; Saga, Tsuneo
2009-08-18
The recent success with small fish as an animal model of cancer with the aid of fluorescence technique has attracted cancer modelers' attention because it would be possible to directly visualize tumor cells in vivo in real time. Here, we report a medaka model capable of allowing the observation of various cell behaviors of transplanted tumor cells, such as cell proliferation and metastasis, which were visualized easily in vivo. We established medaka melanoma (MM) cells stably expressing GFP and transplanted them into nonirradiated and irradiated medaka. The tumor cells were grown at the injection sites in medaka, and the spatiotemporal changes were visualized under a fluorescence stereoscopic microscope at a cellular-level resolution, and even at a single-cell level. Tumor dormancy and metastasis were also observed. Interestingly, in irradiated medaka, accelerated tumor growth and metastasis of the transplanted tumor cells were directly visualized. Our medaka model provides an opportunity to visualize in vivo tumor cells "as seen in a culture dish" and would be useful for in vivo tumor cell biology.
Advancing Water Science through Data Visualization
NASA Astrophysics Data System (ADS)
Li, X.; Troy, T.
2014-12-01
As water scientists, we are increasingly handling larger and larger datasets with many variables, making it easy to lose ourselves in the details. Advanced data visualization will play an increasingly significant role in propelling the development of water science in research, economics, policy, and education. It can enable analysis within research, further data scientists' understanding of behavior and processes, and potentially shape how the public, whom we often want to inform, understands our work. Unfortunately for water scientists, data visualization is approached in an ad hoc manner, when a more formal methodology or understanding could significantly improve both research within the academy and outreach to the public. First, to broaden and deepen scientific understanding, data visualization allows many analysis targets to be processed simultaneously and represents variables effectively, revealing patterns, trends, and relationships; it can even open new research directions or branches of water science. Through visualization, we can more clearly detect and separate the pivotal from the trivial influential factors when abstracting the original complex target system. By providing direct visual perception of the differences between observational data and model predictions, data visualization allows researchers to quickly assess the quality of models in water science. Second, data visualization can improve public awareness and perhaps influence behavior. By offering decision makers clearer perspectives on the potential benefits of water, data visualization can amplify the economic value of water science and increase relevant employment. By providing policymakers with compelling visuals of the role of water in social and natural systems, data visualization can advance water management and water-conservation legislation. By letting the public build their own data visualizations through apps and games about water science, they can absorb knowledge about water indirectly and become more aware of water problems.
The Meaning of Visual Thinking Strategies for Nursing Students
ERIC Educational Resources Information Center
Moorman, Margaret M.
2013-01-01
Nurse educators are called upon to provide creative, innovative experiences for students in order to prepare nurses to work in complex healthcare settings. As part of this preparation, teaching observational and communication skills is critical for nurses and can directly affect patient outcomes. Visual thinking strategies (VTS) are a teaching…
Optical projectors simulate human eyes to establish operator's field of view
NASA Technical Reports Server (NTRS)
Beam, R. A.
1966-01-01
Device projects visual pattern limits of the field of view of an operator as his eyes are directed at a given point on a control panel. The device, which consists of two projectors, provides instant evaluation of visual ability at a point on a panel.
Realistic tissue visualization using photoacoustic image
NASA Astrophysics Data System (ADS)
Cho, Seonghee; Managuli, Ravi; Jeon, Seungwan; Kim, Jeesu; Kim, Chulhong
2018-02-01
Visualization methods are very important in biomedical imaging. As a technology for understanding life, biomedical imaging has the unique advantage of providing highly intuitive information directly in the image, and this advantage can be greatly enhanced by choosing an appropriate visualization method. The situation is more complicated for volumetric data. Volume data has the advantage of containing 3D spatial information; unfortunately, the data itself cannot directly represent its potential value. Because images are always displayed in 2D space, visualization is the key that creates the real value of volume data. However, visualizing 3D data requires complicated algorithms and a high computational burden, so specialized algorithms and computational optimization are important issues for volume data. Photoacoustic imaging is a unique imaging modality that can visualize the optical properties of deep tissue. Because the color of an organism is mainly determined by its light-absorbing components, photoacoustic data can provide color information that is close to the real tissue color. In this research, we developed realistic tissue visualization using acoustic-resolution photoacoustic volume data. To achieve realistic visualization, we designed a specialized color transfer function that depends on the depth of the tissue below the skin. We used a direct ray casting method and applied the color mapping while computing the shader parameters. In the rendering results, we succeeded in obtaining realistic tissue texture from the photoacoustic data: rays reflected at the surface were visualized in white, while returns from deep tissue were visualized in red, like skin tissue. We also implemented the CUDA algorithm in an OpenGL environment for real-time interactive imaging.
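The depth-dependent coloring described in this record can be sketched as a minimal front-to-back ray caster. This is an illustrative reconstruction, not the paper's implementation: the white-to-red mapping, the normalization depth, and the opacity scale are all assumptions.

```python
def depth_color(depth, max_depth=10.0):
    """Depth-dependent color transfer: white at the surface, skin-red at depth.
    (Illustrative mapping; the paper's actual transfer function may differ.)"""
    t = min(depth / max_depth, 1.0)
    surface, deep = (1.0, 1.0, 1.0), (0.8, 0.2, 0.2)
    return tuple((1 - t) * s + t * d for s, d in zip(surface, deep))

def cast_ray(samples, opacity_scale=0.3):
    """Front-to-back alpha compositing of photoacoustic samples along one ray."""
    color, alpha = [0.0, 0.0, 0.0], 0.0
    for depth, value in enumerate(samples):
        a = min(value * opacity_scale, 1.0)      # opacity from signal strength
        c = depth_color(depth)                   # color from tissue depth
        for i in range(3):                       # accumulate premultiplied color
            color[i] += (1.0 - alpha) * a * value * c[i]
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:                         # early ray termination
            break
    return color, alpha
```

A ray hitting a strong surface return composites white, while the same return at depth composites red, which is the qualitative behavior the abstract describes.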
The Human Retrosplenial Cortex and Thalamus Code Head Direction in a Global Reference Frame.
Shine, Jonathan P; Valdés-Herrera, José P; Hegarty, Mary; Wolbers, Thomas
2016-06-15
Spatial navigation is a multisensory process involving integration of visual and body-based cues. In rodents, head direction (HD) cells, which are most abundant in the thalamus, integrate these cues to code facing direction. Human fMRI studies examining HD coding in virtual environments (VE) have reported effects in retrosplenial complex and (pre-)subiculum, but not the thalamus. Furthermore, HD coding appeared insensitive to global landmarks. These tasks, however, provided only visual cues for orientation, and attending to global landmarks did not benefit task performance. In the present study, participants explored a VE comprising four separate locales, surrounded by four global landmarks. To provide body-based cues, participants wore a head-mounted display so that physical rotations changed facing direction in the VE. During subsequent MRI scanning, subjects saw stationary views of the environment and judged whether their orientation was the same as in the preceding trial. Parameter estimates extracted from retrosplenial cortex and the thalamus revealed significantly reduced BOLD responses when HD was repeated. Moreover, consistent with rodent findings, the signal did not continue to adapt over repetitions of the same HD. These results were supported by a whole-brain analysis showing additional repetition suppression in the precuneus. Together, our findings suggest that: (1) consistent with the rodent literature, the human thalamus may integrate visual and body-based orientation cues; (2) global reference frame cues can be used to integrate HD across separate individual locales; and (3) immersive training procedures providing full body-based cues may help to elucidate the neural mechanisms supporting spatial navigation. In rodents, head direction (HD) cells signal facing direction in the environment via increased firing when the animal assumes a certain orientation. 
Distinct brain regions, the retrosplenial cortex (RSC) and thalamus, code for visual and vestibular cues of orientation, respectively. Putative HD signals have been observed in human RSC but not the thalamus, potentially because body-based cues were not provided. Here, participants encoded HD in a novel virtual environment while wearing a head-mounted display to provide body-based cues for orientation. In subsequent fMRI scanning, we found evidence of an HD signal in RSC, thalamus, and precuneus. These findings harmonize rodent and human data, and suggest that immersive training procedures provide a viable way to examine the neural basis of navigation. Copyright © 2016 the authors 0270-6474/16/366371-11$15.00/0.
The role of peripheral vision in saccade planning: learning from people with tunnel vision.
Luo, Gang; Vargas-Martin, Fernando; Peli, Eli
2008-12-22
Both visually salient and top-down information are important in eye movement control, but their relative roles in the planning of daily saccades are unclear. We investigated the effect of peripheral vision loss on saccadic behaviors in patients with tunnel vision (visual field diameters 7 degrees-16 degrees) in visual search and real-world walking experiments. The patients made up to two saccades per second to their pre-saccadic blind areas, about half of which had no overlap between the post- and pre-saccadic views. In the visual search experiment, visual field size and the background (blank or picture) did not affect the saccade sizes and direction of patients (n = 9). In the walking experiment, the patients (n = 5) and normal controls (n = 3) had similar distributions of saccade sizes and directions. These findings might provide a clue about the large extent of the top-down mechanism influence on eye movement control.
Role of peripheral vision in saccade planning: Learning from people with tunnel vision
Luo, Gang; Vargas-Martin, Fernando; Peli, Eli
2008-01-01
Both visually salient and top-down information are important in eye movement control, but their relative roles in the planning of daily saccades are unclear. We investigated the effect of peripheral vision loss on saccadic behaviors in patients with tunnel vision (visual field diameters 7°–16°) in visual search and real-world walking experiments. The patients made up to two saccades per second to their pre-saccadic blind areas, about half of which had no overlap between the post- and pre-saccadic views. In the visual search experiment, visual field size and the background (blank or picture) did not affect the saccade sizes and direction of patients (n=9). In the walking experiment, the patients (n=5) and normal controls (n=3) had similar distributions of saccade sizes and directions. These findings might provide a clue about the extent of the top-down mechanism influence on eye movement control. PMID:19146326
Heyers, Dominik; Manns, Martina; Luksch, Harald; Güntürkün, Onur; Mouritsen, Henrik
2007-09-26
The magnetic compass of migratory birds has been suggested to be light-dependent. Retinal cryptochrome-expressing neurons and a forebrain region, "Cluster N", show high neuronal activity when night-migratory songbirds perform magnetic compass orientation. By combining neuronal tracing with behavioral experiments leading to sensory-driven gene expression of the neuronal activity marker ZENK during magnetic compass orientation, we demonstrate a functional neuronal connection between the retinal neurons and Cluster N via the visual thalamus. Thus, the two areas of the central nervous system being most active during magnetic compass orientation are part of an ascending visual processing stream, the thalamofugal pathway. Furthermore, Cluster N seems to be a specialized part of the visual wulst. These findings strongly support the hypothesis that migratory birds use their visual system to perceive the reference compass direction of the geomagnetic field and that migratory birds "see" the reference compass direction provided by the geomagnetic field.
Decoding and disrupting left midfusiform gyrus activity during word reading
Hirshorn, Elizabeth A.; Ward, Michael J.; Fiez, Julie A.; Ghuman, Avniel Singh
2016-01-01
The nature of the visual representation for words has been fiercely debated for over 150 y. We used direct brain stimulation, pre- and postsurgical behavioral measures, and intracranial electroencephalography to provide support for, and elaborate upon, the visual word form hypothesis. This hypothesis states that activity in the left midfusiform gyrus (lmFG) reflects visually organized information about words and word parts. In patients with electrodes placed directly in their lmFG, we found that disrupting lmFG activity through stimulation, and later surgical resection in one of the patients, led to impaired perception of whole words and letters. Furthermore, using machine-learning methods to analyze the electrophysiological data from these electrodes, we found that information contained in early lmFG activity was consistent with an orthographic similarity space. Finally, the lmFG contributed to at least two distinguishable stages of word processing, an early stage that reflects gist-level visual representation sensitive to orthographic statistics, and a later stage that reflects more precise representation sufficient for the individuation of orthographic word forms. These results provide strong support for the visual word form hypothesis and demonstrate that across time the lmFG is involved in multiple stages of orthographic representation. PMID:27325763
Decoding and disrupting left midfusiform gyrus activity during word reading.
Hirshorn, Elizabeth A; Li, Yuanning; Ward, Michael J; Richardson, R Mark; Fiez, Julie A; Ghuman, Avniel Singh
2016-07-19
The nature of the visual representation for words has been fiercely debated for over 150 y. We used direct brain stimulation, pre- and postsurgical behavioral measures, and intracranial electroencephalography to provide support for, and elaborate upon, the visual word form hypothesis. This hypothesis states that activity in the left midfusiform gyrus (lmFG) reflects visually organized information about words and word parts. In patients with electrodes placed directly in their lmFG, we found that disrupting lmFG activity through stimulation, and later surgical resection in one of the patients, led to impaired perception of whole words and letters. Furthermore, using machine-learning methods to analyze the electrophysiological data from these electrodes, we found that information contained in early lmFG activity was consistent with an orthographic similarity space. Finally, the lmFG contributed to at least two distinguishable stages of word processing, an early stage that reflects gist-level visual representation sensitive to orthographic statistics, and a later stage that reflects more precise representation sufficient for the individuation of orthographic word forms. These results provide strong support for the visual word form hypothesis and demonstrate that across time the lmFG is involved in multiple stages of orthographic representation.
Park, Hyojin; Kayser, Christoph; Thut, Gregor; Gross, Joachim
2016-01-01
During continuous speech, lip movements provide visual temporal signals that facilitate speech processing. Here, using MEG we directly investigated how these visual signals interact with rhythmic brain activity in participants listening to and seeing the speaker. First, we investigated coherence between oscillatory brain activity and speaker’s lip movements and demonstrated significant entrainment in visual cortex. We then used partial coherence to remove contributions of the coherent auditory speech signal from the lip-brain coherence. Comparing this synchronization between different attention conditions revealed that attending visual speech enhances the coherence between activity in visual cortex and the speaker’s lips. Further, we identified a significant partial coherence between left motor cortex and lip movements and this partial coherence directly predicted comprehension accuracy. Our results emphasize the importance of visually entrained and attention-modulated rhythmic brain activity for the enhancement of audiovisual speech processing. DOI: http://dx.doi.org/10.7554/eLife.14521.001 PMID:27146891
Flow visualization methods for field test verification of CFD analysis of an open gloveport
Strons, Philip; Bailey, James L.
2017-01-01
Anemometer readings alone cannot provide a complete picture of air flow patterns at an open gloveport. A means to visualize air flow in field tests provides greater insight by indicating the direction, in addition to the magnitude, of the air flow velocities in the region of interest. Furthermore, flow visualization is essential for Computational Fluid Dynamics (CFD) verification, where important modeling assumptions play a significant role in analyzing the chaotic nature of low-velocity air flow. A good example is shown in Figure 1, where an unexpected vortex pattern occurred during a field test that could not have been measured relying only on anemometer readings. Here, observing and measuring the patterns of the smoke flowing into the gloveport allowed the CFD model to be updated to match the actual flow velocities in both magnitude and direction.
Williams, Melonie; Hong, Sang W; Kang, Min-Suk; Carlisle, Nancy B; Woodman, Geoffrey F
2013-04-01
Recent research using change-detection tasks has shown that a directed-forgetting cue, indicating that a subset of the information stored in memory can be forgotten, significantly benefits the other information stored in visual working memory. How do these directed-forgetting cues aid the memory representations that are retained? We addressed this question in the present study by using a recall paradigm to measure the nature of the retained memory representations. Our results demonstrated that a directed-forgetting cue leads to higher-fidelity representations of the remaining items and a lower probability of dropping these representations from memory. Next, we showed that this is made possible by the to-be-forgotten item being expelled from visual working memory following the cue, allowing maintenance mechanisms to be focused on only the items that remain in visual working memory. Thus, the present findings show that cues to forget benefit the remaining information in visual working memory by fundamentally improving their quality relative to conditions in which just as many items are encoded but no cue is provided.
Intuitive Visualization of Transient Flow: Towards a Full 3D Tool
NASA Astrophysics Data System (ADS)
Michel, Isabel; Schröder, Simon; Seidel, Torsten; König, Christoph
2015-04-01
Visualization of geoscientific data is a challenging task especially when targeting a non-professional audience. In particular, the graphical presentation of transient vector data can be a significant problem. With STRING Fraunhofer ITWM (Kaiserslautern, Germany) in collaboration with delta h Ingenieurgesellschaft mbH (Witten, Germany) developed a commercial software for intuitive 2D visualization of 3D flow problems. Through the intuitive character of the visualization experts can more easily transport their findings to non-professional audiences. In STRING pathlets moving with the flow provide an intuition of velocity and direction of both steady-state and transient flow fields. The visualization concept is based on the Lagrangian view of the flow which means that the pathlets' movement is along the direction given by pathlines. In order to capture every detail of the flow an advanced method for intelligent, time-dependent seeding of the pathlets is implemented based on ideas of the Finite Pointset Method (FPM) originally conceived at and continuously developed by Fraunhofer ITWM. Furthermore, by the same method pathlets are removed during the visualization to avoid visual cluttering. Additional scalar flow attributes, for example concentration or potential, can either be mapped directly to the pathlets or displayed in the background of the pathlets on the 2D visualization plane. The extensive capabilities of STRING are demonstrated with the help of different applications in groundwater modeling. We will discuss the strengths and current restrictions of STRING which have surfaced during daily use of the software, for example by delta h. Although the software focusses on the graphical presentation of flow data for non-professional audiences its intuitive visualization has also proven useful to experts when investigating details of flow fields. Due to the popular reception of STRING and its limitation to 2D, the need arises for the extension to a full 3D tool. 
Currently STRING can generate animations of single 2D cuts, either planar or curved surfaces, through 3D simulation domains. To provide a general tool for experts enabling also direct exploration and analysis of large 3D flow fields the software needs to be extended to intuitive as well as interactive visualizations of entire 3D flow domains. The current research concerning this project, which is funded by the Federal Ministry for Economic Affairs and Energy (Germany), is presented.
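The Lagrangian pathlet movement described for STRING can be sketched with a standard fourth-order Runge-Kutta particle-advection step. The transient velocity field below (a vortex whose center drifts over time) is a made-up example for illustration; STRING's actual FPM-based seeding and removal logic is not reproduced here.

```python
def velocity(x, y, t):
    """Hypothetical transient 2D flow: rotation about a center that drifts in x."""
    cx = 0.1 * t                      # vortex center drifts with time
    return (-y, x - cx)

def advect_pathlet(x, y, t0, dt=0.01, steps=100):
    """Trace one pathlet along its Lagrangian pathline using RK4 integration."""
    path = [(x, y)]
    t = t0
    for _ in range(steps):
        k1 = velocity(x, y, t)
        k2 = velocity(x + 0.5 * dt * k1[0], y + 0.5 * dt * k1[1], t + 0.5 * dt)
        k3 = velocity(x + 0.5 * dt * k2[0], y + 0.5 * dt * k2[1], t + 0.5 * dt)
        k4 = velocity(x + dt * k3[0], y + dt * k3[1], t + dt)
        x += dt / 6.0 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        y += dt / 6.0 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        t += dt
        path.append((x, y))
    return path
```

Rendering many such pathlets, seeded and retired over time, yields the moving-particle impression of speed and direction that the record attributes to STRING.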
van Kerkoerle, Timo; Self, Matthew W.; Dagnino, Bruno; Gariel-Mathis, Marie-Alice; Poort, Jasper; van der Togt, Chris; Roelfsema, Pieter R.
2014-01-01
Cognitive functions rely on the coordinated activity of neurons in many brain regions, but the interactions between cortical areas are not yet well understood. Here we investigated whether low-frequency (α) and high-frequency (γ) oscillations characterize different directions of information flow in monkey visual cortex. We recorded from all layers of the primary visual cortex (V1) and found that γ-waves are initiated in input layer 4 and propagate to the deep and superficial layers of cortex, whereas α-waves propagate in the opposite direction. Simultaneous recordings from V1 and downstream area V4 confirmed that γ- and α-waves propagate in the feedforward and feedback direction, respectively. Microstimulation in V1 elicited γ-oscillations in V4, whereas microstimulation in V4 elicited α-oscillations in V1, thus providing causal evidence for the opposite propagation of these rhythms. Furthermore, blocking NMDA receptors, thought to be involved in feedback processing, suppressed α while boosting γ. These results provide new insights into the relation between brain rhythms and cognition. PMID:25205811
van Kerkoerle, Timo; Self, Matthew W; Dagnino, Bruno; Gariel-Mathis, Marie-Alice; Poort, Jasper; van der Togt, Chris; Roelfsema, Pieter R
2014-10-07
Cognitive functions rely on the coordinated activity of neurons in many brain regions, but the interactions between cortical areas are not yet well understood. Here we investigated whether low-frequency (α) and high-frequency (γ) oscillations characterize different directions of information flow in monkey visual cortex. We recorded from all layers of the primary visual cortex (V1) and found that γ-waves are initiated in input layer 4 and propagate to the deep and superficial layers of cortex, whereas α-waves propagate in the opposite direction. Simultaneous recordings from V1 and downstream area V4 confirmed that γ- and α-waves propagate in the feedforward and feedback direction, respectively. Microstimulation in V1 elicited γ-oscillations in V4, whereas microstimulation in V4 elicited α-oscillations in V1, thus providing causal evidence for the opposite propagation of these rhythms. Furthermore, blocking NMDA receptors, thought to be involved in feedback processing, suppressed α while boosting γ. These results provide new insights into the relation between brain rhythms and cognition.
The primary visual cortex in the neural circuit for visual orienting
NASA Astrophysics Data System (ADS)
Zhaoping, Li
The primary visual cortex (V1) is traditionally viewed as remote from influencing the brain's motor outputs. However, V1 provides the most abundant cortical inputs directly to the sensory layers of the superior colliculus (SC), a midbrain structure that commands visual orienting such as shifting gaze and turning the head. I will show physiological, anatomical, and behavioral data suggesting that V1 transforms visual input into a saliency map to guide a class of visual orienting that is reflexive or involuntary. In particular, V1 receives a retinotopic map of visual features, such as the orientation, color, and motion direction of local visual inputs; local interactions between V1 neurons perform a local-to-global computation to arrive at a saliency map that highlights conspicuous visual locations with higher V1 responses. The conspicuous locations are usually, but not always, where visual input statistics change. The population of V1 outputs to the SC, which is also retinotopic, enables the SC to locate, by lateral inhibition between SC neurons, the most salient location as the saccadic target. Experimental tests of this hypothesis will be shown. Variations of the neural circuit for visual orienting across animal species, with more or less V1 involvement, will be discussed. Supported by the Gatsby Charitable Foundation.
Cognitive and psychological science insights to improve climate change data visualization
NASA Astrophysics Data System (ADS)
Harold, Jordan; Lorenzoni, Irene; Shipley, Thomas F.; Coventry, Kenny R.
2016-12-01
Visualization of climate data plays an integral role in the communication of climate change findings to both expert and non-expert audiences. The cognitive and psychological sciences can provide valuable insights into how to improve visualization of climate data based on knowledge of how the human brain processes visual and linguistic information. We review four key research areas to demonstrate their potential to make data more accessible to diverse audiences: directing visual attention, visual complexity, making inferences from visuals, and the mapping between visuals and language. We present evidence-informed guidelines to help climate scientists increase the accessibility of graphics to non-experts, and illustrate how the guidelines can work in practice in the context of Intergovernmental Panel on Climate Change graphics.
Functional Dissociation between Perception and Action Is Evident Early in Life
ERIC Educational Resources Information Center
Hadad, Bat-Sheva; Avidan, Galia; Ganel, Tzvi
2012-01-01
The functional distinction between vision for perception and vision for action is well documented in the mature visual system. Ganel and colleagues recently provided direct evidence for this dissociation, showing that while visual processing for perception follows Weber's fundamental law of psychophysics, action violates this law. We tracked the…
Creating visual explanations improves learning.
Bobek, Eliza; Tversky, Barbara
2016-01-01
Many topics in science are notoriously difficult for students to learn. Mechanisms and processes outside student experience present particular challenges. While instruction typically involves visualizations, students usually explain in words. Because visual explanations can show parts and processes of complex systems directly, creating them should have benefits beyond creating verbal explanations. We compared learning from creating visual or verbal explanations for two STEM domains, a mechanical system (bicycle pump) and a chemical system (bonding). Both kinds of explanations were analyzed for content, and learning was assessed by a post-test. For the mechanical system, creating a visual explanation increased understanding, particularly for participants of low spatial ability. For the chemical system, creating both visual and verbal explanations improved learning without new teaching. Creating a visual explanation was superior and benefitted participants of both high and low spatial ability. Visual explanations often included crucial yet invisible features. The greater effectiveness of visual explanations appears attributable to the checks they provide for completeness and coherence, as well as to their roles as platforms for inference. The benefits should generalize to other domains like the social sciences, history, and archeology, where important information can be visualized. Together, the findings provide support for the use of learner-generated visual explanations as a powerful learning tool.
Houtenbos, M; de Winter, J C F; Hale, A R; Wieringa, P A; Hagenzieker, M P
2017-04-01
A large portion of road traffic crashes occur at intersections because drivers lack necessary visual information. This research examined the effects of an audio-visual display that provides real-time sonification and visualization of the speed and direction of another car approaching the crossroads on an intersecting road. The location of red blinking lights (left vs. right on the speedometer) and the lateral input direction of beeps (left vs. right ear in headphones) corresponded to the direction from which the other car approached, and the blink and beep rates were a function of the approaching car's speed. Two driving simulators were linked so that the participant and the experimenter drove in the same virtual world. Participants (N = 25) completed four sessions (two with the audio-visual display on, two with the audio-visual display off), each session consisting of 22 intersections at which the experimenter approached from the left or right and either maintained speed or slowed down. Compared to driving with the display off, the audio-visual display resulted in enhanced traffic efficiency (i.e., greater mean speed, less coasting) while not compromising safety (i.e., the time gap between the two vehicles was equivalent). A post-experiment questionnaire showed that the beeps were regarded as more useful than the lights. It is argued that the audio-visual display is a promising means of supporting drivers until fully automated driving is technically feasible. Copyright © 2016. Published by Elsevier Ltd.
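The rate coding this record describes (blink/beep rate as a function of the approaching car's speed, lateralized to the approach side) might be sketched as follows. The linear mapping, its rate bounds, and the speed cap are assumptions; the study does not report the exact rates used.

```python
def warning_rate(speed_kmh, min_rate=1.0, max_rate=8.0, max_speed=100.0):
    """Map the approaching car's speed to a blink/beep rate in Hz.
    (Hypothetical linear mapping, clamped to [min_rate, max_rate].)"""
    s = max(0.0, min(speed_kmh, max_speed))
    return min_rate + (max_rate - min_rate) * s / max_speed

def display_side(approach_direction):
    """Lateralize the cue: light and beep on the side the car approaches from."""
    assert approach_direction in ("left", "right")
    return approach_direction
```

A car approaching faster thus produces faster blinking and beeping, and the side of the cue immediately tells the driver which way to look.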
A Bayesian Account of Visual-Vestibular Interactions in the Rod-and-Frame Task.
Alberts, Bart B G T; de Brouwer, Anouk J; Selen, Luc P J; Medendorp, W Pieter
2016-01-01
Panoramic visual cues, as generated by the objects in the environment, provide the brain with important information about gravity direction. To derive an optimal, i.e., Bayesian, estimate of gravity direction, the brain must combine panoramic information with gravity information detected by the vestibular system. Here, we examined the individual sensory contributions to this estimate psychometrically. We asked human subjects to judge the orientation (clockwise or counterclockwise relative to gravity) of a briefly flashed luminous rod, presented within an oriented square frame (rod-in-frame). Vestibular contributions were manipulated by tilting the subject's head, whereas visual contributions were manipulated by changing the viewing distance of the rod and frame. Results show a cyclical modulation of the frame-induced bias in perceived verticality across a 90° range of frame orientations. The magnitude of this bias decreased significantly with larger viewing distance, as if visual reliability was reduced. Biases increased significantly when the head was tilted, as if vestibular reliability was reduced. A Bayesian optimal integration model, with distinct vertical and horizontal panoramic weights, a gain factor to allow for visual reliability changes, and ocular counterroll in response to head tilt, provided a good fit to the data. We conclude that subjects flexibly weigh visual panoramic and vestibular information based on their orientation-dependent reliability, resulting in the observed verticality biases and the associated response variabilities.
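The reliability-weighted combination at the heart of the Bayesian account above can be illustrated with inverse-variance weighting of the two cues. This is the textbook form of optimal cue integration, not the authors' full model, which additionally includes separate vertical and horizontal panoramic weights, a visual gain factor, and ocular counterroll.

```python
def fuse_verticality(visual_deg, visual_sigma, vestibular_deg, vestibular_sigma):
    """Bayesian (inverse-variance weighted) fusion of a visual and a vestibular
    estimate of gravity direction; returns the fused estimate and its sigma."""
    w_vis = 1.0 / visual_sigma ** 2       # reliability of the panoramic cue
    w_ves = 1.0 / vestibular_sigma ** 2   # reliability of the vestibular cue
    est = (w_vis * visual_deg + w_ves * vestibular_deg) / (w_vis + w_ves)
    sigma = (1.0 / (w_vis + w_ves)) ** 0.5
    return est, sigma
```

Increasing vestibular noise (as with head tilt) pulls the fused estimate toward the visual frame, reproducing the larger frame-induced biases the record reports.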
He, Longjun; Ming, Xing; Liu, Qian
2014-04-01
With computing capability and display size growing, mobile devices have been used as tools to help clinicians view patient information and medical images anywhere and anytime. However, for direct interactive 3D visualization, which plays an important role in radiological diagnosis, the mobile device cannot provide a satisfactory quality of experience for radiologists. This paper developed a medical system that retrieves medical images from the picture archiving and communication system (PACS) on a mobile device over a wireless network. In the proposed application, the mobile device obtained patient information and medical images through a proxy server connected to the PACS server. Meanwhile, the proxy server integrated a range of 3D visualization techniques, including maximum intensity projection, multi-planar reconstruction and direct volume rendering, to provide shape, brightness, depth and location information generated from the original sectional images for radiologists. Furthermore, an algorithm that changes remote render parameters automatically to adapt to the network status was employed to improve the quality of experience. Finally, performance issues regarding the remote 3D visualization of medical images over the wireless network were also discussed. The results demonstrated that the proposed medical application can provide a smooth interactive experience on WLAN and 3G networks.
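Of the rendering modes this record lists, maximum intensity projection is the simplest to sketch, and the network-adaptive parameter selection can be illustrated with a toy rule. Both are illustrative reconstructions: the bandwidth thresholds, resolutions, and quality settings below are hypothetical, not the paper's actual adaptation algorithm.

```python
def max_intensity_projection(volume):
    """MIP along the depth axis of a volume given as volume[z][y][x];
    each output pixel is the brightest sample along its ray."""
    depth = len(volume)
    rows, cols = len(volume[0]), len(volume[0][0])
    return [[max(volume[z][y][x] for z in range(depth))
             for x in range(cols)] for y in range(rows)]

def adapt_render_params(bandwidth_kbps):
    """Pick remote-render resolution and compression from measured bandwidth.
    (Hypothetical thresholds standing in for the paper's adaptive algorithm.)"""
    if bandwidth_kbps > 2000:      # WLAN-class link
        return {"resolution": (512, 512), "jpeg_quality": 90}
    if bandwidth_kbps > 300:       # 3G-class link
        return {"resolution": (256, 256), "jpeg_quality": 70}
    return {"resolution": (128, 128), "jpeg_quality": 50}
```

In a proxy-server design like the one described, the server would render at the adapted resolution and stream compressed frames, keeping interaction smooth as the link quality varies.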
Borgersen, Nanna Jo; Henriksen, Mikael Johannes Vuokko; Konge, Lars; Sørensen, Torben Lykke; Thomsen, Ann Sofia Skou; Subhi, Yousif
2016-01-01
Direct ophthalmoscopy is well-suited for video-based instruction, particularly if the videos enable the student to see what the examiner sees when performing direct ophthalmoscopy. We evaluated the pedagogical effectiveness of instructional YouTube videos on direct ophthalmoscopy by assessing their content and approach to visualization. In order to synthesize main themes and points for direct ophthalmoscopy, we formed a broad panel consisting of a medical student, junior and senior physicians, and took into consideration book chapters targeting medical students and physicians in general. We then systematically searched YouTube. Two authors reviewed videos to assess eligibility and extract data on video statistics, content, and approach to visualization. Correlations between video statistics and contents were investigated using two-tailed Spearman's correlation. We screened 7,640 videos, of which 27 were found eligible for this study. Overall, a median of 12 out of 18 points (interquartile range: 8-14 key points) were covered; no videos covered all of the 18 points assessed. Visualization was most deficient in showing how to approach the patient and how to examine the fundus. Time spent on fundus examination correlated with the number of views per week (Spearman's ρ=0.53; P=0.029). Videos may help overcome the pedagogical issues in teaching direct ophthalmoscopy; however, the few available videos on YouTube fail to address this particular issue adequately. There is a need for high-quality videos that include relevant points, provide realistic visualization of the examiner's view, and give particular emphasis on fundus examination.
Animating streamlines with repeated asymmetric patterns for steady flow visualization
NASA Astrophysics Data System (ADS)
Yeh, Chih-Kuo; Liu, Zhanping; Lee, Tong-Yee
2012-01-01
Animation provides intuitive cueing for revealing essential spatial-temporal features of data in scientific visualization. This paper explores the design of Repeated Asymmetric Patterns (RAPs) in animating evenly-spaced color-mapped streamlines for dense accurate visualization of complex steady flows. We present a smooth cyclic variable-speed RAP animation model that performs velocity (magnitude) integral luminance transition on streamlines. This model is extended with inter-streamline synchronization in luminance varying along the tangential direction to emulate orthogonal advancing waves from a geometry-based flow representation, and then with evenly-spaced hue differing in the orthogonal direction to construct tangential flow streaks. To weave these two mutually dual sets of patterns, we propose an energy-decreasing strategy that adopts an iterative yet efficient procedure for determining the luminance phase and hue of each streamline in HSL color space. We also employ adaptive luminance interleaving in the direction perpendicular to the flow to increase the contrast between streamlines.
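The energy-decreasing strategy for assigning a per-streamline luminance phase can be pictured with a minimal sketch: a greedy per-streamline update that never increases a neighbour-phase energy. The energy term, neighbour graph, and candidate phases below are assumptions for illustration, not the paper's actual formulation.

```python
# Illustrative sketch of an energy-decreasing assignment: give each
# streamline a luminance phase so that neighbouring streamlines stay
# out of phase. Energy term and neighbour graph are assumed, not the
# paper's actual procedure.
import math

def energy(phases, neighbours):
    # Penalise neighbouring streamlines whose phases coincide
    # (cos(0) + 1 = 2 per aligned pair; 0 when fully out of phase).
    return sum(math.cos(phases[i] - phases[j]) + 1.0
               for i, js in neighbours.items() for j in js)

def relax(phases, neighbours, candidates, sweeps=10):
    """Greedy coordinate descent; total energy never increases."""
    for _ in range(sweeps):
        for i in list(phases):
            best_phase, best_e = phases[i], energy(phases, neighbours)
            for c in candidates:
                phases[i] = c
                e = energy(phases, neighbours)
                if e < best_e:
                    best_phase, best_e = c, e
            phases[i] = best_phase
    return phases
```

Because each per-streamline update only ever accepts a strictly lower energy, the iteration terminates at a local minimum, mirroring the "iterative yet efficient" character described in the abstract.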
ERIC Educational Resources Information Center
Koh, Hwan Cui; Milne, Elizabeth; Dobkins, Karen
2010-01-01
The magnocellular (M) pathway hypothesis proposes that impaired visual motion perception observed in individuals with Autism Spectrum Disorders (ASD) might be mediated by atypical functioning of the subcortical M pathway, as this pathway provides the bulk of visual input to cortical motion detectors. To test this hypothesis, we measured luminance…
Visual display aid for orbital maneuvering - Design considerations
NASA Technical Reports Server (NTRS)
Grunwald, Arthur J.; Ellis, Stephen R.
1993-01-01
This paper describes the development of an interactive proximity operations planning system that allows on-site planning of fuel-efficient multiburn maneuvers in a potential multispacecraft environment. Although this display system most directly assists planning by providing visual feedback to aid visualization of the trajectories and constraints, its most significant features include: (1) the use of an 'inverse dynamics' algorithm that removes control nonlinearities facing the operator, and (2) a trajectory planning technique that separates, through a 'geometric spreadsheet', the normally coupled complex problems of planning orbital maneuvers and allows solution by an iterative sequence of simple independent actions. The visual feedback of trajectory shapes and operational constraints, provided by user-transparent and continuously active background computations, allows the operator to make fast, iterative design changes that rapidly converge to fuel-efficient solutions. The planning tool provides an example of operator-assisted optimization of nonlinear cost functions.
Oculometric Assessment of Dynamic Visual Processing
NASA Technical Reports Server (NTRS)
Liston, Dorion Bryce; Stone, Lee
2014-01-01
Eye movements are the most frequent (3 per second), shortest-latency (150-250 ms), and biomechanically simplest (1 joint, no inertial complexities) voluntary motor behavior in primates, providing a model system to assess sensorimotor disturbances arising from trauma, fatigue, aging, or disease states (e.g., Diefendorf and Dodge, 1908). We developed a 15-minute behavioral tracking protocol consisting of randomized step-ramp radial target motion to assess several aspects of the behavioral response to dynamic visual motion, including pursuit initiation, steady-state tracking, direction-tuning, and speed-tuning thresholds. This set of oculomotor metrics provides valid and reliable measures of dynamic visual performance (Stone and Krauzlis, 2003; Krukowski and Stone, 2005; Stone et al, 2009; Liston and Stone, 2014), and may prove to be a useful assessment tool for functional impairments of dynamic visual processing.
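As one hedged illustration of such an oculomotor metric, pursuit latency can be estimated as the first time eye velocity stays above a threshold after target-motion onset; the threshold and hold criterion below are assumed values, not those of the actual protocol.

```python
# Illustrative sketch: estimate pursuit latency as the first time eye
# velocity remains above a threshold for a sustained run of samples.
# Threshold and hold criterion are assumptions, not the protocol's.

def pursuit_latency(velocity, dt_ms=1.0, threshold=2.0, hold=20):
    """Return latency in ms, or None if pursuit never starts.

    velocity: eye speed samples (deg/s) from target-motion onset;
    hold: consecutive supra-threshold samples required.
    """
    run = 0
    for i, v in enumerate(velocity):
        run = run + 1 if v > threshold else 0
        if run == hold:
            return (i - hold + 1) * dt_ms
    return None
```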
Is goal-directed attentional guidance just intertrial priming? A review.
Lamy, Dominique F; Kristjánsson, Arni
2013-07-01
According to most models of selective visual attention, our goals at any given moment and saliency in the visual field determine attentional priority. But selection is not carried out in isolation--we typically track objects through space and time. This is not well captured within the distinction between goal-directed and saliency-based attentional guidance. Recent studies have shown that selection is strongly facilitated when the characteristics of the objects to be attended and of those to be ignored remain constant between consecutive selections. These studies have generated the proposal that goal-directed or top-down effects are best understood as intertrial priming effects. Here, we provide a detailed overview and critical appraisal of the arguments, experimental strategies, and findings that have been used to promote this idea, along with a review of studies providing potential counterarguments. We divide this review according to different types of attentional control settings that observers are thought to adopt during visual search: feature-based settings, dimension-based settings, and singleton detection mode. We conclude that priming accounts for considerable portions of effects attributed to top-down guidance, but that top-down guidance can be independent of intertrial priming.
Yu, Chen; Smith, Linda B.
2013-01-01
The coordination of visual attention among social partners is central to many components of human behavior and human development. Previous research has focused on one pathway to the coordination of looking behavior by social partners, gaze following. The extant evidence shows that even very young infants follow the direction of another's gaze but they do so only in highly constrained spatial contexts because gaze direction is not a spatially precise cue as to the visual target and not easily used in spatially complex social interactions. Our findings, derived from the moment-to-moment tracking of eye gaze of one-year-olds and their parents as they actively played with toys, provide evidence for an alternative pathway, through the coordination of hands and eyes in goal-directed action. In goal-directed actions, the hands and eyes of the actor are tightly coordinated both temporally and spatially, and thus, in contexts including manual engagement with objects, hand movements and eye movements provide redundant information about where the eyes are looking. Our findings show that one-year-olds rarely look to the parent's face and eyes in these contexts but rather infants and parents coordinate looking behavior without gaze following by attending to objects held by the self or the social partner. This pathway, through eye-hand coupling, leads to coordinated joint switches in visual attention and to an overall high rate of looking at the same object at the same time, and may be the dominant pathway through which physically active toddlers align their looking behavior with a social partner. PMID:24236151
Use of an augmented-vision device for visual search by patients with tunnel vision.
Luo, Gang; Peli, Eli
2006-09-01
To study the effect of an augmented-vision device that superimposes minified contour images over natural vision on visual search performance of patients with tunnel vision. Twelve subjects with tunnel vision searched for targets presented outside their visual fields (VFs) on a blank background under three cue conditions (with contour cues provided by the device, with auditory cues, and without cues). Three subjects (VF, 8°-11° wide) carried out the search over a 90° × 74° area, and nine subjects (VF, 7°-16° wide) carried out the search over a 66° × 52° area. Eye and head movements were recorded for performance analyses that included directness of search path, search time, and gaze speed. Directness of the search path was greatly and significantly improved when the contour or auditory cues were provided in the larger and the smaller area searches. When using the device, a significant reduction in search time (28%-74%) was demonstrated by all three subjects in the larger area search and by subjects with VFs wider than 10° in the smaller area search (average, 22%). Directness and gaze speed accounted for 90% of the variability of search time. Although performance improvement with the device for the larger search area was obvious, whether it was helpful for the smaller search area depended on VF and gaze speed. Because improvement in directness was demonstrated, increased gaze speed, which could result from further training and adaptation to the device, might enable patients with small VFs to benefit from the device for visual search tasks.
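The "directness of search path" measure can be sketched as the ratio of straight-line distance to actual path length (1.0 = perfectly direct); the study's exact definition may differ, so treat this as an assumption.

```python
# Illustrative sketch: directness of a gaze/search path as the ratio of
# straight-line distance to total travelled distance. The study's exact
# metric may differ; this is an assumed formulation.
import math

def directness(path):
    """path: list of (x, y) gaze samples from start to target."""
    straight = math.dist(path[0], path[-1])
    travelled = sum(math.dist(a, b) for a, b in zip(path, path[1:]))
    return straight / travelled if travelled else 1.0
```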
NASA Astrophysics Data System (ADS)
Seki, T.; Iguchi, R.; Takanashi, K.; Uchida, K.
2018-04-01
Spatial distribution of temperature modulation due to the anomalous Ettingshausen effect (AEE) is visualized in a ferromagnetic FePt thin film with in-plane and out-of-plane magnetizations using the lock-in thermography technique. Comparing the AEE of FePt with the spin Peltier effect (SPE) of a Pt/yttrium iron garnet junction provides direct evidence of the different symmetries of AEE and SPE. Our experiments and numerical calculations reveal that the distribution of heat sources induced by AEE strongly depends on the direction of magnetization, leading to remarkably different temperature profiles in the FePt thin film between the in-plane and perpendicularly magnetized configurations.
Experimenter's Laboratory for Visualized Interactive Science
NASA Technical Reports Server (NTRS)
Hansen, Elaine R.; Rodier, Daniel R.; Klemp, Marjorie K.
1994-01-01
ELVIS (Experimenter's Laboratory for Visualized Interactive Science) is an interactive visualization environment that enables scientists, students, and educators to visualize and analyze large, complex, and diverse sets of scientific data. It accomplishes this by presenting the data sets as 2-D, 3-D, color, stereo, and graphic images with movable and multiple light sources combined with displays of solid-surface, contours, wire-frame, and transparency. By simultaneously rendering diverse data sets acquired from multiple sources, formats, and resolutions and by interacting with the data through an intuitive, direct-manipulation interface, ELVIS provides an interactive and responsive environment for exploratory data analysis.
Lance R. Williams; Melvin L. Warren; Susan B. Adams; Joseph L. Arvai; Christopher M. Taylor
2004-01-01
Basin Visual Estimation Techniques (BVET) are used to estimate abundance for fish populations in small streams. With BVET, independent samples are drawn from natural habitat units in the stream rather than sampling "representative reaches." This sampling protocol provides an alternative to traditional reach-level surveys, which are criticized for their lack...
ERIC Educational Resources Information Center
American Foundation for the Blind, New York, NY.
This directory is a broad-based compilation of schools, agencies, organizations, and programs in the governmental and private, non-profit sectors that provide a wide variety of direct and indirect services, information, and other assistance to individuals with blindness or visual impairments. Organized information on producers and distributors of…
Visual imagery without visual perception: lessons from blind subjects
NASA Astrophysics Data System (ADS)
Bértolo, Helder
2014-08-01
The question of how visual imagery relates to visual perception remains open. Many studies have tried to determine whether the two processes share the same mechanisms or are independent, using different neural substrates. Most research has been directed towards whether activation of primary visual areas is necessary during imagery. Here we review some of the work providing evidence for both claims. Studying visual imagery in blind subjects offers a way of answering some of those questions, namely whether it is possible to have visual imagery without visual perception. We present results from the work of our group using visual activation in dreams and its relation with the EEG's spectral components, showing that congenitally blind subjects have visual content in their dreams and are able to draw it; furthermore, their Visual Activation Index is negatively correlated with EEG alpha power. This study supports the hypothesis that it is possible to have visual imagery without visual experience.
Modelling individual difference in visual categorization.
Shen, Jianhong; Palmeri, Thomas J
Recent years have seen growing interest in understanding, characterizing, and explaining individual differences in visual cognition. We focus here on individual differences in visual categorization. Categorization is the fundamental visual ability to group different objects together as the same kind of thing. Research on visual categorization and category learning has been significantly informed by computational modeling, so our review focuses both on how formal models of visual categorization have captured individual differences and on how individual differences have informed the development of formal models. We first examine the potential sources of individual differences in leading models of visual categorization, providing a brief review of a range of different models. We then describe several examples of how computational models have captured individual differences in visual categorization. This review also provides a bit of historical perspective, starting with models that predicted no individual differences, to those that captured group differences, to those that predict true individual differences, and to more recent hierarchical approaches that can simultaneously capture both group and individual differences in visual categorization. Via this selective review, we see how considerations of individual differences can lead to important theoretical insights into how people visually categorize objects in the world around them. We also consider new directions for work examining individual differences in visual categorization.
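How a formal model carries an individual-difference parameter can be pictured with a minimal exemplar (GCM-style) model in which each observer has their own sensitivity c; the stimuli, exemplars, and parameter values below are made up for illustration.

```python
# Illustrative sketch: a minimal exemplar (GCM-style) categorization
# model with a per-observer sensitivity parameter c. Stimuli, exemplar
# sets, and parameter values are invented for illustration only.
import math

def gcm_prob_a(stimulus, exemplars_a, exemplars_b, c):
    """P(category A | stimulus) for a 1-D stimulus under an exemplar model."""
    def summed_similarity(exemplars):
        # Exponential similarity gradient; larger c = sharper tuning.
        return sum(math.exp(-c * abs(stimulus - x)) for x in exemplars)
    sa = summed_similarity(exemplars_a)
    sb = summed_similarity(exemplars_b)
    return sa / (sa + sb)
```

A high-sensitivity observer (large c) produces sharper category boundaries than a low-sensitivity one even with identical exemplars, which is one simple way a single fitted parameter can capture an individual difference.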
RIAD visual imaging branch assessment
NASA Technical Reports Server (NTRS)
Beam, Sherilee F.
1993-01-01
Every year the demand to visualize research efforts increases. The visualization provides the means to effectively analyze data and present the results. The technology support for visualization is constantly changing, improving, and being made available to users everywhere. As such, many researchers are entering into the practice of doing their own visualization in house - sometimes successfully, sometimes not. In an effort to keep pace with the visualization needs of researchers, the Visual Imaging Branch of the Research, Information, and Applications Division at NASA Langley Research Center has conducted an investigation into the current status of imaging technology and imaging production throughout the various research branches at the center. This investigation will allow the branch to evaluate its current resources and personnel in an effort to identify future directions for meeting the needs of the researchers at the center. The investigation team, which consisted of the ASEE fellow, the head of the video section, and the head of the photo section, developed an interview format that could be completed in a brief session with researchers and yet still provide adequate statistics about items such as in-house equipment and usage.
NASA Astrophysics Data System (ADS)
Niemeijer, Sander
2017-04-01
The ESA Atmospheric Toolbox (BEAT) is one of the ESA Sentinel Toolboxes. It consists of a set of software components to read, analyze, and visualize a wide range of atmospheric data products. In addition to the upcoming Sentinel-5P mission it supports a wide range of other atmospheric data products, including those of previous ESA missions, ESA Third Party missions, Copernicus Atmosphere Monitoring Service (CAMS), ground based data, etc. The toolbox consists of three main components that are called CODA, HARP and VISAN. CODA provides interfaces for direct reading of data from earth observation data files. These interfaces consist of command line applications, libraries, direct interfaces to scientific applications (IDL and MATLAB), and direct interfaces to programming languages (C, Fortran, Python, and Java). CODA provides a single interface to access data in a wide variety of data formats, including ASCII, binary, XML, netCDF, HDF4, HDF5, CDF, GRIB, RINEX, and SP3. HARP is a toolkit for reading, processing and inter-comparing satellite remote sensing data, model data, in-situ data, and ground based remote sensing data. The main goal of HARP is to assist in the inter-comparison of datasets. By appropriately chaining calls to HARP command line tools one can pre-process datasets such that two datasets that need to be compared end up having the same temporal/spatial grid, same data format/structure, and same physical unit. The toolkit comes with its own data format conventions, the HARP format, which is based on netcdf/HDF. Ingestion routines (based on CODA) allow conversion from a wide variety of atmospheric data products to this common format. In addition, the toolbox provides a wide range of operations to perform conversions on the data such as unit conversions, quantity conversions (e.g. number density to volume mixing ratios), regridding, vertical smoothing using averaging kernels, collocation of two datasets, etc. 
VISAN is a cross-platform visualization and analysis application for atmospheric data and can be used to visualize and analyze the data that you retrieve using the CODA and HARP interfaces. The application uses the Python language as the means through which you provide commands to the application. The Python interfaces for CODA and HARP are included so you can directly ingest product data from within VISAN. Powerful visualization functionality for 2D plots and geographical plots in VISAN will allow you to directly visualize the ingested data. All components from the ESA Atmospheric Toolbox are Open Source and freely available. Software packages can be downloaded from the BEAT website: http://stcorp.nl/beat/
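The HARP-style chaining of pre-processing operations (unit conversion, then regridding onto a common grid) can be pictured with a generic sketch; the helper functions below are hypothetical stand-ins, not the real CODA/HARP API.

```python
# Generic sketch of the pre-processing idea behind HARP-style chaining:
# bring a dataset onto a common unit and grid before inter-comparison.
# All helper names here are hypothetical, not the real HARP interface.

def convert_unit(values, factor):
    """Scale values into the target unit (factor is assumed known)."""
    return [v * factor for v in values]

def regrid(values, src_edges, dst_edges):
    """Average source cells into coarser destination cells."""
    out = []
    for lo, hi in zip(dst_edges, dst_edges[1:]):
        cells = [v for v, e in zip(values, src_edges) if lo <= e < hi]
        out.append(sum(cells) / len(cells) if cells else float("nan"))
    return out

# Chain the operations, as one would chain HARP command-line tools:
raw = [1.0, 2.0, 3.0, 4.0]
src = [0.0, 1.0, 2.0, 3.0]          # source cell lower edges
common = regrid(convert_unit(raw, 10.0), src, [0.0, 2.0, 4.0])
```

After such a chain, two datasets share the same grid, unit, and structure, which is the precondition for collocation and comparison that the toolbox automates.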
Can walking motions improve visually induced rotational self-motion illusions in virtual reality?
Riecke, Bernhard E; Freiberg, Jacob B; Grechkin, Timofey Y
2015-02-04
Illusions of self-motion (vection) can provide compelling sensations of moving through virtual environments without the need for complex motion simulators or large tracked physical walking spaces. Here we explore the interaction between biomechanical cues (stepping along a rotating circular treadmill) and visual cues (viewing simulated self-rotation) for providing stationary users a compelling sensation of rotational self-motion (circular vection). When tested individually, biomechanical and visual cues were similarly effective in eliciting self-motion illusions. However, in combination they yielded significantly more intense self-motion illusions. These findings provide the first compelling evidence that walking motions can be used to significantly enhance visually induced rotational self-motion perception in virtual environments (and vice versa) without having to provide for physical self-motion or motion platforms. This is noteworthy, as linear treadmills have been found to actually impair visually induced translational self-motion perception (Ash, Palmisano, Apthorp, & Allison, 2013). Given the predominant focus on linear walking interfaces for virtual-reality locomotion, our findings suggest that investigating circular and curvilinear walking interfaces offers a promising direction for future research and development and can help to enhance self-motion illusions, presence and immersion in virtual-reality systems. © 2015 ARVO.
Binocular vision in amblyopia: structure, suppression and plasticity.
Hess, Robert F; Thompson, Benjamin; Baker, Daniel H
2014-03-01
The amblyopic visual system was once considered to be structurally monocular. However, it is now evident that the capacity for binocular vision is present in many observers with amblyopia. This has led to new techniques for quantifying suppression that have provided insights into the relationship between suppression and the monocular and binocular visual deficits experienced by amblyopes. Furthermore, new treatments are emerging that directly target suppressive interactions within the visual cortex and, on the basis of initial data, appear to improve both binocular and monocular visual function, even in adults with amblyopia. The aim of this review is to provide an overview of recent studies that have investigated the structure, measurement and treatment of binocular vision in observers with strabismic, anisometropic and mixed amblyopia. © 2014 The Authors Ophthalmic & Physiological Optics © 2014 The College of Optometrists.
Ding, Zhaofeng; Li, Jinrong; Spiegel, Daniel P.; Chen, Zidong; Chan, Lily; Luo, Guangwei; Yuan, Junpeng; Deng, Daming; Yu, Minbin; Thompson, Benjamin
2016-01-01
Amblyopia is a neurodevelopmental disorder of vision that occurs when the visual cortex receives decorrelated inputs from the two eyes during an early critical period of development. Amblyopic eyes are subject to suppression from the fellow eye, generate weaker visual evoked potentials (VEPs) than fellow eyes and have multiple visual deficits including impairments in visual acuity and contrast sensitivity. Primate models and human psychophysics indicate that stronger suppression is associated with greater deficits in amblyopic eye contrast sensitivity and visual acuity. We tested whether transcranial direct current stimulation (tDCS) of the visual cortex would modulate VEP amplitude and contrast sensitivity in adults with amblyopia. tDCS can transiently alter cortical excitability and may influence suppressive neural interactions. Twenty-one patients with amblyopia and twenty-seven controls completed separate sessions of anodal (a-), cathodal (c-) and sham (s-) visual cortex tDCS. A-tDCS transiently and significantly increased VEP amplitudes for amblyopic, fellow and control eyes and contrast sensitivity for amblyopic and control eyes. C-tDCS decreased VEP amplitude and contrast sensitivity and s-tDCS had no effect. These results suggest that tDCS can modulate visual cortex responses to information from adult amblyopic eyes and provide a foundation for future clinical studies of tDCS in adults with amblyopia. PMID:26763954
A Bayesian Account of Visual–Vestibular Interactions in the Rod-and-Frame Task
de Brouwer, Anouk J.; Medendorp, W. Pieter
2016-01-01
Abstract Panoramic visual cues, as generated by the objects in the environment, provide the brain with important information about gravity direction. To derive an optimal, i.e., Bayesian, estimate of gravity direction, the brain must combine panoramic information with gravity information detected by the vestibular system. Here, we examined the individual sensory contributions to this estimate psychometrically. We asked human subjects to judge the orientation (clockwise or counterclockwise relative to gravity) of a briefly flashed luminous rod, presented within an oriented square frame (rod-in-frame). Vestibular contributions were manipulated by tilting the subject’s head, whereas visual contributions were manipulated by changing the viewing distance of the rod and frame. Results show a cyclical modulation of the frame-induced bias in perceived verticality across a 90° range of frame orientations. The magnitude of this bias decreased significantly with larger viewing distance, as if visual reliability was reduced. Biases increased significantly when the head was tilted, as if vestibular reliability was reduced. A Bayesian optimal integration model, with distinct vertical and horizontal panoramic weights, a gain factor to allow for visual reliability changes, and ocular counterroll in response to head tilt, provided a good fit to the data. We conclude that subjects flexibly weigh visual panoramic and vestibular information based on their orientation-dependent reliability, resulting in the observed verticality biases and the associated response variabilities. PMID:27844055
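The optimal-integration principle can be illustrated with textbook inverse-variance cue weighting; the paper's full model is richer (distinct panoramic weights, a visual gain factor, ocular counterroll), so this sketch shows only the reliability-weighting idea.

```python
# Textbook sketch of Bayesian (inverse-variance) cue combination for a
# verticality estimate. The paper's actual model includes additional
# terms; this shows only the reliability-weighting principle.

def combine(mu_vis, var_vis, mu_vest, var_vest):
    """Optimally fuse visual and vestibular estimates (deg, deg^2)."""
    w_vis = (1 / var_vis) / (1 / var_vis + 1 / var_vest)
    mu = w_vis * mu_vis + (1 - w_vis) * mu_vest
    var = 1 / (1 / var_vis + 1 / var_vest)  # fused variance shrinks
    return mu, var
```

With a larger viewing distance, visual reliability drops (var_vis grows), so the combined estimate shifts toward the vestibular cue, matching the reduced frame-induced bias reported above.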
Small numbers are sensed directly, high numbers constructed from size and density.
Zimmermann, Eckart
2018-04-01
Two theories compete to explain how we estimate the numerosity of visual object sets. The first suggests that apparent numerosity is derived from an analysis of more low-level features like the size and density of the set. The second suggests that numbers are sensed directly. Consistent with the latter claim is the existence of neurons in parietal cortex which are specialized for processing the numerosity of elements in the visual scene. However, recent evidence suggests that only low numbers can be sensed directly, whereas the perception of high numbers is supported by the analysis of low-level features. Processing of low and high numbers, being located at different levels of the neural hierarchy, should involve different receptive field sizes. Here, I tested this idea with visual adaptation. I measured the spatial spread of number adaptation for low and high numerosities. A focused adaptation spread for high numerosities suggested the involvement of early neural levels, where receptive fields are comparably small, and the broad spread for low numerosities was consistent with processing by number neurons, which have larger receptive fields. These results provide evidence for the claim that different mechanisms exist for generating the perception of visual numerosity. Whereas low numbers are sensed directly as a primary visual attribute, the estimation of high numbers likely depends on the size of the area over which the objects are spread. Copyright © 2017 Elsevier B.V. All rights reserved.
Cicmil, Nela; Krug, Kristine
2015-01-01
Vision research has the potential to reveal fundamental mechanisms underlying sensory experience. Causal experimental approaches, such as electrical microstimulation, provide a unique opportunity to test the direct contributions of visual cortical neurons to perception and behaviour. But in spite of their importance, causal methods constitute a minority of the experiments used to investigate the visual cortex to date. We reconsider the function and organization of visual cortex according to results obtained from stimulation techniques, with a special emphasis on electrical stimulation of small groups of cells in awake subjects who can report their visual experience. We compare findings from humans and monkeys, striate and extrastriate cortex, and superficial versus deep cortical layers, and identify a number of revealing gaps in the ‘causal map’ of visual cortex. Integrating results from different methods and species, we provide a critical overview of the ways in which causal approaches have been used to further our understanding of circuitry, plasticity and information integration in visual cortex. Electrical stimulation not only elucidates the contributions of different visual areas to perception, but also contributes to our understanding of neuronal mechanisms underlying memory, attention and decision-making. PMID:26240421
Smartphone-Based Escalator Recognition for the Visually Impaired
Nakamura, Daiki; Takizawa, Hotaka; Aoyagi, Mayumi; Ezaki, Nobuo; Mizuno, Shinji
2017-01-01
It is difficult for visually impaired individuals to recognize escalators in everyday environments. If the individuals ride on escalators in the wrong direction, they will stumble on the steps. This paper proposes a novel method to assist visually impaired individuals in finding available escalators by the use of smartphone cameras. Escalators are recognized by analyzing optical flows in video frames captured by the cameras, and auditory feedback is provided to the individuals. The proposed method was implemented on an Android smartphone and applied to actual escalator scenes. The experimental results demonstrate that the proposed method is promising for helping visually impaired individuals use escalators. PMID:28481270
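A minimal sketch of how a dense optical-flow field might be turned into an escalator travel direction, in the spirit of the method above. The thresholds, the region handling, and the decision rule are assumptions for illustration, not the published algorithm (which works on flows extracted from smartphone video, e.g. via OpenCV).

```python
import numpy as np

def escalator_direction(flow, mag_thresh=0.5, min_frac=0.05):
    """Classify step motion from a dense optical-flow field of shape (H, W, 2).

    flow[..., 0] is horizontal, flow[..., 1] vertical displacement per frame;
    image y grows downward, so negative mean dy means upward motion.
    mag_thresh and min_frac are hypothetical tuning parameters.
    """
    mag = np.linalg.norm(flow, axis=2)
    moving = mag > mag_thresh
    if moving.mean() < min_frac:
        return "unknown"  # too few coherently moving pixels
    mean_dy = flow[..., 1][moving].mean()
    return "up" if mean_dy < 0 else "down"

# Synthetic flow: every pixel drifting 2 px upward per frame
flow = np.zeros((120, 160, 2))
flow[..., 1] = -2.0
```

In a real system the flow field would come from consecutive camera frames (e.g. Farnebäck dense flow), restricted to a region of interest around the detected steps, with the result spoken back to the user.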
Aytemür, Ali; Almeida, Nathalia; Lee, Kwang-Hyuk
2017-02-01
Adaptation to delayed sensory feedback following an action produces a subjective time compression between the action and the feedback (temporal recalibration effect, TRE). TRE is important for sensory delay compensation to maintain a relationship between causally related events. It is unclear whether TRE is a sensory modality-specific phenomenon. In 3 experiments employing a sensorimotor synchronization task, we investigated this question using cathodal transcranial direct-current stimulation (tDCS). We found that cathodal tDCS over the visual cortex, and to a lesser extent over the auditory cortex, produced decreased visual TRE. However, both auditory and visual cortex tDCS did not produce any measurable effects on auditory TRE. Our study revealed different nature of TRE in auditory and visual domains. Visual-motor TRE, which is more variable than auditory TRE, is a sensory modality-specific phenomenon, modulated by the auditory cortex. The robustness of auditory-motor TRE, unaffected by tDCS, suggests the dominance of the auditory system in temporal processing, by providing a frame of reference in the realignment of sensorimotor timing signals. Copyright © 2017 Elsevier Ltd. All rights reserved.
Effects of visual motion consistent or inconsistent with gravity on postural sway.
Balestrucci, Priscilla; Daprati, Elena; Lacquaniti, Francesco; Maffei, Vincenzo
2017-07-01
Vision plays an important role in postural control, and visual perception of the gravity-defined vertical helps maintain upright stance. In addition, the influence of the gravity field on objects' motion is known to provide a reference for motor and non-motor behavior. However, the role of dynamic visual cues related to gravity in the control of postural balance has been little investigated. To understand whether visual cues about gravitational acceleration are relevant for postural control, we assessed the relation between postural sway and visual motion congruent or incongruent with gravitational acceleration. Postural sway of 44 healthy volunteers was recorded by means of force platforms while they watched virtual targets moving in different directions and with different accelerations. Small but significant differences emerged in sway parameters with respect to the characteristics of target motion. Namely, for vertically accelerated targets, gravitational motion (GM) was associated with smaller oscillations of the center of pressure than anti-GM. The present findings support the hypothesis that not only static but also dynamic visual cues about the direction and magnitude of the gravitational field are relevant for balance control during upright stance.
Background: Preflight Screening, In-flight Capabilities, and Postflight Testing
NASA Technical Reports Server (NTRS)
Gibson, Charles Robert; Duncan, James
2009-01-01
Recommendations for minimal in-flight capabilities: Retinal Imaging - provide in-flight capability for the visual monitoring of ocular health (specifically, imaging of the retina and optic nerve head) with the capability of downlinking video/still images. Tonometry - provide more accurate and reliable in-flight capability for measuring intraocular pressure. Ultrasound - explore capabilities of the current on-board system for monitoring ocular health. We currently have limited in-flight capabilities on board the International Space Station for performing an internal ocular health assessment: Visual Acuity, Direct Ophthalmoscope, Ultrasound, and Tonometry (Tonopen).
NASA Technical Reports Server (NTRS)
Klumpar, D. M.; Lapolla, M. V.; Horblit, B.
1995-01-01
A prototype system has been developed to aid the experimental space scientist in the display and analysis of spaceborne data acquired from direct measurement sensors in orbit. We explored the implementation of a rule-based environment for the semi-automatic generation of visualizations that assist domain scientists in exploring their data. The goal has been to enable rapid generation of visualizations that enhance the scientist's ability to thoroughly mine the data. Transferring the task of visualization generation from the human programmer to the computer produced a rapid prototyping environment for visualizations. The visualization and analysis environment has been tested against a set of data obtained from the Hot Plasma Composition Experiment on the AMPTE/CCE satellite, creating new visualizations that provided new insight into the data.
Ahlfors, Seppo P.; Jones, Stephanie R.; Ahveninen, Jyrki; Hämäläinen, Matti S.; Belliveau, John W.; Bar, Moshe
2014-01-01
Identifying inter-area communication in terms of the hierarchical organization of functional brain areas is of considerable interest in human neuroimaging. Previous studies have suggested that the direction of magneto- and electroencephalography (MEG, EEG) source currents depends on the layer-specific input patterns into a cortical area. We examined the direction in MEG source currents in a visual object recognition experiment in which there were specific expectations of activation in the fusiform region being driven by either feedforward or feedback inputs. The source for the early non-specific visual evoked response, presumably corresponding to feedforward driven activity, pointed outward, i.e., away from the white matter. In contrast, the source for the later, object-recognition related signals, expected to be driven by feedback inputs, pointed inward, toward the white matter. Associating specific features of the MEG/EEG source waveforms to feedforward and feedback inputs could provide unique information about the activation patterns within hierarchically organized cortical areas. PMID:25445356
Buschman, Timothy J.; Miller, Earl K.
2009-01-01
Attention regulates the flood of sensory information into a manageable stream, and so understanding how attention is controlled is central to understanding cognition. Competing theories suggest visual search involves serial and/or parallel allocation of attention, but there is little direct, neural, evidence for either mechanism. Two monkeys were trained to covertly search an array for a target stimulus under visual search (endogenous) and pop-out (exogenous) conditions. Here we present neural evidence in the frontal eye fields (FEF) for serial, covert shifts of attention during search but not pop-out. Furthermore, attention shifts reflected in FEF spiking activity were correlated with 18–34 Hz oscillations in the local field potential, suggesting a ‘clocking’ signal. This provides direct neural evidence that primates can spontaneously adopt a serial search strategy and that these serial covert shifts of attention are directed by the FEF. It also suggests that neuron population oscillations may regulate the timing of cognitive processing. PMID:19679077
Escobar, Gina M.; Maffei, Arianna; Miller, Paul
2014-01-01
The computation of direction selectivity requires that a cell respond to joint spatial and temporal characteristics of the stimulus that cannot be separated into independent components. Direction selectivity in ferret visual cortex is not present at the time of eye opening but instead develops in the days and weeks following eye opening in a process that requires visual experience with moving stimuli. Classic Hebbian or spike timing-dependent modification of excitatory feed-forward synaptic inputs is unable to produce direction-selective cells from unselective or weakly directionally biased initial conditions because inputs eventually grow so strong that they can independently drive cortical neurons, violating the joint spatial-temporal activation requirement. Furthermore, without some form of synaptic competition, cells cannot develop direction selectivity in response to training with bidirectional stimulation, as cells in ferret visual cortex do. We show that imposing a maximum lateral geniculate nucleus (LGN)-to-cortex synaptic weight allows neurons to develop direction-selective responses that maintain the requirement for joint spatial and temporal activation. We demonstrate that a novel form of inhibitory plasticity, postsynaptic activity-dependent long-term potentiation of inhibition (POSD-LTPi), which operates in the developing cortex at the time of eye opening, can provide synaptic competition and enables robust development of direction-selective receptive fields with unidirectional or bidirectional stimulation. We propose a general model of the development of spatiotemporal receptive fields that consists of two phases: an experience-independent establishment of initial biases, followed by an experience-dependent amplification or modification of these biases via correlation-based plasticity of excitatory inputs that compete against gradually increasing feed-forward inhibition. PMID:24598528
Use of an augmented-vision device for visual search by patients with tunnel vision
Luo, Gang; Peli, Eli
2006-01-01
Purpose: To study the effect of an augmented-vision device that superimposes minified contour images over natural vision on the visual search performance of patients with tunnel vision. Methods: Twelve subjects with tunnel vision searched for targets presented outside their visual fields (VF) on a blank background under three cue conditions (with contour cues provided by the device, with auditory cues, and without cues). Three subjects (VF: 8° to 11° wide) carried out the search over a 90°×74° area, and nine subjects (VF: 7° to 16° wide) over a 66°×52° area. Eye and head movements were recorded for performance analyses that included directness of search path, search time, and gaze speed. Results: Directness of the search path was greatly and significantly improved when the contour or auditory cues were provided, in both the larger and smaller area searches. When using the device, a significant reduction in search time (28% to 74%) was demonstrated by all three subjects in the larger area search and by subjects with VF wider than 10° in the smaller area search (average 22%). Directness and gaze speed accounted for 90% of the variability in search time. Conclusions: While the performance improvement with the device for the larger search area was obvious, whether it was helpful for the smaller search area depended on VF and gaze speed. As improvement in directness was demonstrated, increased gaze speed, which could result from further training and adaptation to the device, might enable patients with small VFs to benefit from the device for visual search tasks. PMID:16936136
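One plausible formalization of the "directness of search path" measure used above is the ratio of straight-line displacement to total gaze-path length; the paper's exact definition may differ, so treat this as an illustrative sketch.

```python
import numpy as np

def path_directness(points):
    """Ratio of start-to-end straight-line distance to total path length.

    points: sequence of (x, y) gaze positions in degrees, in temporal order.
    Returns 1.0 for a perfectly direct path, smaller values for detours.
    """
    points = np.asarray(points, dtype=float)
    steps = np.diff(points, axis=0)
    path_len = np.linalg.norm(steps, axis=1).sum()
    straight = np.linalg.norm(points[-1] - points[0])
    return straight / path_len if path_len > 0 else 0.0

# A detour reduces directness relative to a straight sweep:
direct = path_directness([(0, 0), (10, 0)])
detour = path_directness([(0, 0), (5, 5), (10, 0)])
```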
Visual grouping under isoluminant condition: impact of mental fatigue
NASA Astrophysics Data System (ADS)
Pladere, Tatjana; Bete, Diana; Skilters, Jurgis; Krumina, Gunta
2016-09-01
Instead of selecting arbitrary elements, our visual perception prefers only certain groupings of information. There is ample evidence that visual attention and perception are substantially impaired in the presence of mental fatigue. The question is how visual grouping, which can be considered a bottom-up controlled neuronal gain mechanism, is influenced. The main purpose of our study is to determine the influence of mental fatigue on the visual grouping of specific information - the color and configuration of stimuli - in a psychophysical experiment. Individuals provided subjective data by filling in a questionnaire about their health and general feeling. The objective evidence was obtained in a specially designed visual search task where achromatic and chromatic isoluminant stimuli were used in order to avoid the so-called pop-out effect due to differences in light intensity. Each individual was instructed to identify the symbols with an aperture in the same direction in four tasks. The color component differed between the visual search tasks according to the goals of the study. The results reveal that visual grouping is completed faster when visual stimuli have the same color and aperture direction. The shortest reaction time is in the evening. What is more, the reaction time results suggest that two grouping processes compete for selective attention in the visual system when similarity in color conflicts with similarity in configuration of stimuli. The described effect increases significantly in the presence of mental fatigue, but it does not have a strong influence on the accuracy of task accomplishment.
Attention biases visual activity in visual short-term memory.
Kuo, Bo-Cheng; Stokes, Mark G; Murray, Alexandra M; Nobre, Anna Christina
2014-07-01
In the current study, we tested whether representations in visual STM (VSTM) can be biased via top-down attentional modulation of visual activity in retinotopically specific locations. We manipulated attention using retrospective cues presented during the retention interval of a VSTM task. Retrospective cues triggered activity in a large-scale network implicated in attentional control and led to retinotopically specific modulation of activity in early visual areas V1-V4. Importantly, shifts of attention during VSTM maintenance were associated with changes in functional connectivity between pFC and retinotopic regions within V4. Our findings provide new insights into top-down control mechanisms that modulate VSTM representations for flexible and goal-directed maintenance of the most relevant memoranda.
Self-reflection Orients Visual Attention Downward
Liu, Yi; Tong, Yu; Li, Hong
2017-01-01
Previous research has demonstrated abstract concepts associated with spatial location (e.g., God in the Heavens) could direct visual attention upward or downward, because thinking about the abstract concepts activates the corresponding vertical perceptual symbols. For self-concept, there are similar metaphors (e.g., “I am above others”). However, whether thinking about the self can induce visual attention orientation is still unknown. Therefore, the current study tested whether self-reflection can direct visual attention. Individuals often display the tendency of self-enhancement in social comparison, which reminds the individual of the higher position one possesses relative to others within the social environment. As the individual is the agent of the attention orientation, and high status tends to make an individual look down upon others to obtain a sense of pride, it was hypothesized that thinking about the self would lead to a downward attention orientation. Using reflection of personality traits and a target discrimination task, Study 1 found that, after self-reflection, visual attention was directed downward. Similar effects were also found after friend-reflection, with the level of downward attention being correlated with the likability rating scores of the friend. Thus, in Study 2, a disliked other was used as a control and the positive self-view was measured with above-average judgment task. We found downward attention orientation after self-reflection, but not after reflection upon the disliked other. Moreover, the attentional bias after self-reflection was correlated with above-average self-view. The current findings provide the first evidence that thinking about the self could direct visual-spatial attention downward, and suggest that this effect is probably derived from a positive self-view within the social context. PMID:28928694
75 FR 4823 - Proposed Data Collections Submitted for Public Comment and Recommendations
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-29
... Prevention (CDC). Background and Brief Description Colorectal cancer (CRC) is the second leading cause of... procedures currently recommended as colorectal cancer screening tests, provide direct visualization of the... early cancers. Both of these tests require specialized training. Flexible sigmoidoscopy provides a view...
Attention modulates perception of visual space
Zhou, Liu; Deng, Chenglong; Ooi, Teng Leng; He, Zijiang J.
2017-01-01
Attention readily facilitates the detection and discrimination of objects, but it is not known whether it helps to form the vast volume of visual space that contains the objects and where actions are implemented. Conventional wisdom suggests not, given the effortless ease with which we perceive three-dimensional (3D) scenes on opening our eyes. Here, we show evidence to the contrary. In Experiment 1, the observer judged the location of a briefly presented target, placed either on the textured ground or ceiling surface. Judged location was more accurate for a target on the ground, provided that the ground was visible and that the observer directed attention to the lower visual field, not the upper field. This reveals that attention facilitates space perception with reference to the ground. Experiment 2 showed that judged location of a target in mid-air, with both ground and ceiling surfaces present, was more accurate when the observer directed their attention to the lower visual field; this indicates that the attention effect extends to visual space above the ground. These findings underscore the role of attention in anchoring visual orientation in space, which is arguably a primal event that enhances one’s ability to interact with objects and surface layouts within the visual space. The fact that the effect of attention was contingent on the ground being visible suggests that our terrestrial visual system is best served by its ecological niche. PMID:29177198
The Role of Direct and Visual Force Feedback in Suturing Using a 7-DOF Dual-Arm Teleoperated System.
Talasaz, Ali; Trejos, Ana Luisa; Patel, Rajni V
2017-01-01
The lack of haptic feedback in robotics-assisted surgery can result in tissue damage or accidental tool-tissue hits. This paper focuses on exploring the effect of haptic feedback via direct force reflection and visual presentation of force magnitudes on performance during suturing in robotics-assisted minimally invasive surgery (RAMIS). For this purpose, a haptics-enabled dual-arm master-slave teleoperation system capable of measuring tool-tissue interaction forces in all seven Degrees-of-Freedom (DOFs) was used. Two suturing tasks, tissue puncturing and knot-tightening, were chosen to assess user skills when suturing on phantom tissue. Sixteen subjects participated in the trials and their performance was evaluated from various points of view: force consistency, number of accidental hits with tissue, amount of tissue damage, quality of the suture knot, and the time required to accomplish the task. According to the results, visual force feedback was not very useful during the tissue puncturing task as different users needed different amounts of force depending on the penetration of the needle into the tissue. Direct force feedback, however, was more useful for this task to apply less force and to minimize the amount of damage to the tissue. Statistical results also reveal that both visual and direct force feedback were required for effective knot tightening: direct force feedback could reduce the number of accidental hits with the tissue and also the amount of tissue damage, while visual force feedback could help to securely tighten the suture knots and maintain force consistency among different trials/users. These results provide evidence of the importance of 7-DOF force reflection when performing complex tasks in a RAMIS setting.
Yang, Jin; Lee, Joonyeol; Lisberger, Stephen G.
2012-01-01
Sensory-motor behavior results from a complex interaction of noisy sensory data with priors based on recent experience. By varying the stimulus form and contrast for the initiation of smooth pursuit eye movements in monkeys, we show that visual motion inputs compete with two independent priors: one prior biases eye speed toward zero; the other prior attracts eye direction according to the past several days’ history of target directions. The priors bias the speed and direction of the initiation of pursuit for the weak sensory data provided by the motion of a low-contrast sine wave grating. However, the priors have relatively little effect on pursuit speed and direction when the visual stimulus arises from the coherent motion of a high-contrast patch of dots. For any given stimulus form, the mean and variance of eye speed co-vary in the initiation of pursuit, as expected for signal-dependent noise. This relationship suggests that pursuit implements a trade-off between movement accuracy and variation, reducing both when the sensory signals are noisy. The trade-off is implemented as a competition of sensory data and priors that follows the rules of Bayesian estimation. Computer simulations show that the priors can be understood as direction specific control of the strength of visual-motor transmission, and can be implemented in a neural-network model that makes testable predictions about the population response in the smooth eye movement region of the frontal eye fields. PMID:23223286
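The shrinkage of pursuit speed toward a zero-speed prior under weak sensory evidence follows directly from Bayesian estimation with Gaussian distributions. The sketch below uses illustrative variances, not the paper's fitted values: lower stimulus contrast is modeled as a noisier sensory likelihood.

```python
def pursuit_speed_estimate(target_speed, sensory_sigma, prior_sigma=5.0):
    """Posterior mean of eye speed given a zero-mean Gaussian speed prior.

    target_speed: true target speed (deg/s) carried by the sensory signal.
    sensory_sigma: sensory noise s.d.; large for low-contrast stimuli.
    prior_sigma: assumed width of the zero-speed prior (hypothetical value).
    """
    w_sense = 1.0 / sensory_sigma ** 2
    w_prior = 1.0 / prior_sigma ** 2
    # Prior mean is zero, so only the sensory term contributes to the numerator.
    return w_sense * target_speed / (w_sense + w_prior)

high_contrast = pursuit_speed_estimate(20.0, sensory_sigma=1.0)   # near veridical
low_contrast = pursuit_speed_estimate(20.0, sensory_sigma=10.0)   # shrunk toward 0
```

The same weighting scheme, with the prior mean moved to the recent history of target directions, captures the direction bias described above.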
[Comparison of noise characteristics of direct and indirect conversion flat panel detectors].
Murai, Masami; Kishimoto, Kenji; Tanaka, Katsuhisa; Oota, Kenji; Ienaga, Akinori
2010-11-20
Flat-panel detector (FPD) digital radiography systems use either direct or indirect conversion, and the two conversion systems differ in imaging performance. We measured several imaging performance metrics [input-output characteristic, presampled modulation transfer function (presampled MTF), and noise power spectrum (NPS)] of direct and indirect FPD systems. Moreover, image samples corresponding to the NPSs were visually evaluated by the paired comparison method. As a result, the presampled MTF of the direct FPD system was substantially higher than that of the indirect FPD system. The NPS of the direct FPD system had a high value at all spatial frequencies. In contrast, the NPS of the indirect FPD system decreased as the spatial frequency increased. The visual evaluations showed the same tendency as the NPSs. We elucidated the cause of the difference in the NPSs in a simulation study, and determined that the difference in the noise components of the direct and indirect FPD systems was closely related to the presampled MTF.
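A simplified NPS estimator of the kind used in such measurements can be sketched as follows: average the squared FFT magnitude over mean-subtracted regions of a uniform-exposure image and scale by pixel area over ROI samples. Standard lab protocols add 2-D detrending, overlapping ROIs, and radial averaging, and the pixel pitch here is an assumed value.

```python
import numpy as np

def noise_power_spectrum(image, roi=64, pixel_pitch=0.143):
    """2-D NPS estimate from a uniform-exposure image (simplified).

    image: 2-D array of detector values; roi: ROI side in pixels;
    pixel_pitch: detector pixel pitch in mm (assumed value here).
    Returns an (roi, roi) array in units of value^2 * mm^2.
    """
    h, w = image.shape
    spectra = []
    for y in range(0, h - roi + 1, roi):
        for x in range(0, w - roi + 1, roi):
            block = image[y:y + roi, x:x + roi].astype(float)
            block -= block.mean()  # remove the DC (mean signal) component
            spectra.append(np.abs(np.fft.fft2(block)) ** 2)
    return np.mean(spectra, axis=0) * pixel_pitch ** 2 / (roi * roi)

# White noise yields an approximately flat NPS equal to variance * pixel area:
rng = np.random.default_rng(0)
nps = noise_power_spectrum(rng.normal(0.0, 1.0, (256, 256)))
```

With such an estimator, the flat-versus-falling NPS shapes reported for the direct and indirect systems show up directly as the frequency profile of the returned array.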
Visual feedback system to reduce errors while operating roof bolting machines
Steiner, Lisa J.; Burgess-Limerick, Robin; Eiter, Brianna; Porter, William; Matty, Tim
2015-01-01
Problem: Operators of roof bolting machines in underground coal mines do so in confined spaces and in very close proximity to the moving equipment. Errors in the operation of these machines can have serious consequences, and the design of the equipment interface has a critical role in reducing the probability of such errors. Methods: An experiment was conducted to explore coding and directional compatibility on actual roof bolting equipment and to determine the feasibility of a visual feedback system to alert operators of critical movements and to alert other workers in close proximity to the equipment of pending machine movement. Results: The quantitative results of the study confirmed the potential for both selection errors and direction errors to be made, particularly during training. Subjective data confirmed a potential benefit of providing visual feedback of the intended operations and movements of the equipment. Impact: This research may influence the design of these and other similar control systems by providing evidence for the use of warning systems to improve operator situational awareness. PMID:23398703
Scientific Assistant Virtual Laboratory (SAVL)
NASA Astrophysics Data System (ADS)
Alaghband, Gita; Fardi, Hamid; Gnabasik, David
2007-03-01
The Scientific Assistant Virtual Laboratory (SAVL) is a scientific discovery environment, an interactive simulated virtual laboratory for learning physics and mathematics. The purpose of this computer-assisted intervention is to improve middle and high school students' interest, insight, and scores in physics and mathematics. SAVL develops scientific and mathematical imagination in a visual, symbolic, and experimental simulation environment. It directly addresses the issues of scientific and technological competency by providing critical thinking training through integrated modules. This on-going research provides a virtual laboratory environment in which the student directs the building of the experiment rather than observing a packaged simulation. SAVL:
* Engages the persistent interest of young minds in physics and math by visually linking simulation objects and events with mathematical relations.
* Teaches integrated concepts through hands-on exploration and focused visualization of classic physics experiments within software.
* Systematically and uniformly assesses and scores students by their ability to answer their own questions within the context of a Master Question Network.
We will demonstrate how the Master Question Network uses polymorphic interfaces and C# lambda expressions to manage simulation objects.
NASA Technical Reports Server (NTRS)
Ellis, Stephen R.; Liston, Dorion B.
2011-01-01
Visual motion and other visual cues are used by tower controllers to provide important support for their control tasks at and near airports. These cues are particularly important for anticipated separation. Some of them, which we call visual features, have been identified from structured interviews and discussions with 24 active air traffic controllers or supervisors. The visual information that these features provide has been analyzed with respect to possible ways it could be presented at a remote tower that does not allow a direct view of the airport. Two types of remote towers are possible. One could be based on a plan-view, map-like computer-generated display of the airport and its immediate surroundings. An alternative would present a composite perspective view of the airport and its surroundings, possibly provided by an array of radially mounted cameras positioned at the airport in lieu of a tower. An initial, more detailed analysis of one of the specific landing cues identified by the controllers, landing deceleration, is provided as a basis for evaluating how controllers might detect and use it. Understanding other such cues will help identify the information that may be degraded or lost in a remote or virtual tower not located at the airport. Some initial suggestions for how some of this lost visual information may be presented in displays are mentioned. Many of the cues considered involve visual motion, though some important static cues are also discussed.
Accessible engineering drawings for visually impaired machine operators.
Ramteke, Deepak; Kansal, Gayatri; Madhab, Benu
2014-01-01
An engineering drawing provides manufacturing information to a machine operator. An operator plans and executes machining operations based on this information. A visually impaired (VI) operator does not have direct access to the drawings. Drawing information is provided to them verbally or by using sample parts. Both methods have limitations that affect the quality of output. Use of engineering drawings is a standard practice in every industry, and this hampers the employment of VI operators. Accessible engineering drawings are required to increase both the independence and the employability of VI operators. Today, Computer Aided Design (CAD) software is used for making engineering drawings, which are saved in CAD files. Required information is extracted from the CAD files and converted into Braille or voice. The authors of this article propose a method to make engineering drawing information directly accessible to a VI operator.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strons, Philip; Bailey, James L.
Anemometer readings alone cannot provide a complete picture of air flow patterns at an open gloveport. Having a means to visualize air flow in field tests provides greater insight by indicating direction in addition to the magnitude of the air flow velocities in the region of interest. Furthermore, flow visualization is essential for Computational Fluid Dynamics (CFD) verification, where important modeling assumptions play a significant role in analyzing the chaotic nature of low-velocity air flow. A good example is shown in Figure 1, where an unexpected vortex pattern occurred during a field test that could not have been measured relying only on anemometer readings. Here, observing and measuring the patterns of the smoke flowing into the gloveport allowed the CFD model to be appropriately updated to match the actual flow velocities in both magnitude and direction.
NASA Astrophysics Data System (ADS)
Vitali, Ettore; Shi, Hao; Qin, Mingpu; Zhang, Shiwei
2017-12-01
Experiments with ultracold atoms provide a highly controllable laboratory setting with many unique opportunities for precision exploration of quantum many-body phenomena. The nature of such systems, with strong interaction and quantum entanglement, makes reliable theoretical calculations challenging. Especially difficult are excitation and dynamical properties, which are often the most directly relevant to experiment. We carry out exact numerical calculations, by Monte Carlo sampling of imaginary-time propagation of Slater determinants, to compute the pairing gap in the two-dimensional Fermi gas from first principles. Applying state-of-the-art analytic continuation techniques, we obtain the spectral function and the density and spin structure factors, providing unique tools to visualize the BEC-BCS crossover. These quantities will allow for a direct comparison with experiments.
Ramón, María L; Piñero, David P; Pérez-Cambrodí, Rafael J
2012-02-01
To examine the visual performance of a rotationally asymmetric multifocal intraocular lens (IOL) by correlating the defocus curve of the IOL-implanted eye with the intraocular aberrometric profile and impact on the quality of life. A prospective, consecutive, case series study including 26 eyes from 13 patients aged between 50 and 83 years (mean: 65.54±7.59 years) was conducted. All patients underwent bilateral cataract surgery with implantation of a rotationally asymmetric multifocal IOL (Lentis Mplus LS-312 MF30, Oculentis GmbH). Distance and near visual acuity outcomes, intraocular aberrations, defocus curve, and quality of life (assessed using the National Eye Institute Visual Functioning Questionnaire-25) were evaluated postoperatively (mean follow-up: 6.42±2.24 months). A significant improvement in distance visual acuity was found postoperatively (P<.01). Mean postoperative logMAR distance-corrected near visual acuity was 0.19±0.12 (∼20/30). Corrected distance visual acuity and near visual acuity of 20/20 or better were achieved by 30.8% and 7.7% of eyes, respectively. Of all eyes, 96.2% had a postoperative addition between 0 and 1.00 diopter (D). The defocus curve showed two peaks of maximum visual acuity (0 and 3.00 D of defocus), with an acceptable range of intermediate vision. LogMAR visual acuity corresponding to near defocus was directly correlated with some higher order intraocular aberrations (r⩾0.44, P⩽.04). Some difficulties evaluated with the quality of life test correlated directly with near and intermediate visual acuity (r⩾0.50, P⩽.01). The Lentis Mplus multifocal IOL provides good distance, intermediate, and near visual outcomes; however, the induced intraocular aberrometric profile may limit the potential visual benefit. Copyright 2012, SLACK Incorporated.
Integrating visual learning within a model-based ATR system
NASA Astrophysics Data System (ADS)
Carlotto, Mark; Nebrich, Mark
2017-05-01
Automatic target recognition (ATR) systems, like human photo-interpreters, rely on a variety of visual information for detecting, classifying, and identifying manmade objects in aerial imagery. We describe the integration of a visual learning component into the Image Data Conditioner (IDC) for target/clutter and other visual classification tasks. The component is based on an implementation of a model of the visual cortex developed by Serre, Wolf, and Poggio. Visual learning in an ATR context requires the ability to recognize objects independent of location, scale, and rotation. Our method uses IDC to extract, rotate, and scale image chips at candidate target locations. A bootstrap learning method effectively extends the operation of the classifier beyond the training set and provides a measure of confidence. We show how the classifier can be used to learn other features that are difficult to compute from imagery such as target direction, and to assess the performance of the visual learning process itself.
Provide Natural Light | Efficient Windows Collaborative
illumination when desired. Providing Balanced Lighting: A balance of light is important both for visual comfort and for protection from excessive light levels. The balance of light in a space depends on the overall number of windows and the furnishings. An improved balance of light can be created by providing light from at least two directions.
ERIC Educational Resources Information Center
Sturges, Linda W.
2010-01-01
The present study investigated the extent to which providing students with individualized performance feedback informed and directed their learning behavior. Individualized performance feedback was delivered to students using curriculum-based measurement progress indicators, either as a visual representation of ongoing performance in the form of a…
Visualizing deep neural network by alternately image blurring and deblurring.
Wang, Feng; Liu, Haijun; Cheng, Jian
2018-01-01
Visualization from trained deep neural networks has drawn massive public attention in recent years. One visualization approach is to train images that maximize the activation of specific neurons. However, directly maximizing the activation would lead to unrecognizable images, which cannot provide any meaningful information. In this paper, we introduce a simple but effective technique to constrain the optimization route of the visualization. By adding two mutually inverse transformations, image blurring and deblurring, to the optimization procedure, recognizable images can be created. Our algorithm is good at extracting the details in the images, which are usually filtered out by previous methods in the visualizations. Extensive experiments on AlexNet, VGGNet and GoogLeNet illustrate that we can better understand neural networks by utilizing the knowledge obtained through visualization. Copyright © 2017 Elsevier Ltd. All rights reserved.
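The blur-constrained activation maximization described above can be sketched on a toy problem. The snippet below is an illustrative sketch, not the paper's code: a fixed smooth pattern stands in for a trained neuron, and gradient ascent on its activation alternates with a blur step and a partial restoration of the removed detail.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

# Toy "neuron": its activation is the correlation of the image with a
# fixed smooth pattern (a stand-in for a trained network unit).
pattern = gaussian_filter(rng.standard_normal((32, 32)), sigma=2)

def activation(img):
    return float(np.sum(img * pattern))

img = rng.standard_normal((32, 32)) * 0.01
a0 = activation(img)

lr = 0.1
for step in range(100):
    img += lr * pattern                        # gradient ascent on the activation
    blurred = gaussian_filter(img, sigma=1.0)  # blur: suppress high-frequency noise
    img = blurred + 0.5 * (img - blurred)      # "deblur": restore part of the detail

print(activation(img) > a0)  # True: the optimized image activates the neuron
```

The blur step suppresses the high-frequency noise that pure activation maximization tends to amplify, while adding back a fraction of the removed detail keeps recognizable structure in the optimized image.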
Tanahashi, Shigehito; Ashihara, Kaoru; Ujike, Hiroyasu
2015-01-01
Recent studies have found that self-motion perception induced by simultaneous presentation of visual and auditory motion is facilitated when the directions of visual and auditory motion stimuli are identical. They did not, however, examine possible contributions of auditory motion information for determining direction of self-motion perception. To examine this, a visual stimulus projected on a hemisphere screen and an auditory stimulus presented through headphones were presented separately or simultaneously, depending on experimental conditions. The participant continuously indicated the direction and strength of self-motion during the 130-s experimental trial. When the visual stimulus with a horizontal shearing rotation and the auditory stimulus with a horizontal one-directional rotation were presented simultaneously, the duration and strength of self-motion perceived in the opposite direction of the auditory rotation stimulus were significantly longer and stronger than those perceived in the same direction of the auditory rotation stimulus. However, the auditory stimulus alone could not sufficiently induce self-motion perception, and if it did, its direction was not consistent within each experimental trial. We concluded that auditory motion information can determine perceived direction of self-motion during simultaneous presentation of visual and auditory motion information, at least when visual stimuli moved in opposing directions (around the yaw-axis). We speculate that the contribution of auditory information depends on the plausibility and information balance of visual and auditory information. PMID:26113828
Rentschler, M E; Dumpert, J; Platt, S R; Ahmed, S I; Farritor, S M; Oleynikov, D
2006-01-01
The use of small incisions in laparoscopy reduces patient trauma, but also limits the surgeon's ability to view and touch the surgical environment directly. These limitations generally restrict the application of laparoscopy to procedures less complex than those performed during open surgery. Although current robot-assisted laparoscopy improves the surgeon's ability to manipulate and visualize the target organs, the instruments and cameras remain fundamentally constrained by the entry incisions. This limits tool tip orientation and optimal camera placement. The current work focuses on developing a new miniature mobile in vivo adjustable-focus camera robot to provide sole visual feedback to surgeons during laparoscopic surgery. A miniature mobile camera robot was inserted through a trocar into the insufflated abdominal cavity of an anesthetized pig. The mobile robot allowed the surgeon to explore the abdominal cavity remotely and view trocar and tool insertion and placement without entry incision constraints. The surgeon then performed a cholecystectomy using the robot camera alone for visual feedback. This successful trial has demonstrated that miniature in vivo mobile robots can provide surgeons with sufficient visual feedback to perform common procedures while reducing patient trauma.
ERIC Educational Resources Information Center
Hendrickson, Homer
1988-01-01
Spelling problems arise from difficulties with form discrimination and inadequate visualization. A child's sequence of visual development involves learning motor control and coordination, with vision directing and monitoring the movements; learning visual comparison of size, shape, directionality, and solidity; developing visual memory or recall;…
Lip Movement Exaggerations During Infant-Directed Speech
Green, Jordan R.; Nip, Ignatius S. B.; Wilson, Erin M.; Mefferd, Antje S.; Yunusova, Yana
2011-01-01
Purpose: Although a growing body of literature has identified the positive effects of visual speech on speech and language learning, oral movements of infant-directed speech (IDS) have rarely been studied. This investigation used 3-dimensional motion capture technology to describe how mothers modify their lip movements when talking to their infants. Method: Lip movements were recorded from 25 mothers as they spoke to their infants and other adults. Lip shapes were analyzed for differences across speaking conditions. The maximum fundamental frequency, duration, acoustic intensity, and first and second formant frequency of each vowel also were measured. Results: Lip movements were significantly larger during IDS than during adult-directed speech, although the exaggerations were vowel specific. All of the vowels produced during IDS were characterized by an elevated vocal pitch and a slowed speaking rate when compared with vowels produced during adult-directed speech. Conclusion: The pattern of lip-shape exaggerations did not provide support for the hypothesis that mothers produce exemplar visual models of vowels during IDS. Future work is required to determine whether the observed increases in vertical lip aperture engender visual and acoustic enhancements that facilitate the early learning of speech. PMID:20699342
Infants learn better from left to right: a directional bias in infants' sequence learning.
Bulf, Hermann; de Hevia, Maria Dolores; Gariboldi, Valeria; Macchi Cassia, Viola
2017-05-26
A wealth of studies show that human adults map ordered information onto a directional spatial continuum. We asked whether mapping ordinal information onto a directional space constitutes an early predisposition, already functional prior to the acquisition of symbolic knowledge and language. While it is known that preverbal infants represent numerical order along a left-to-right spatial continuum, no studies have yet investigated whether infants, like adults, organize any kind of ordinal information onto a directional space. We investigated whether 7-month-olds' ability to learn high-order rule-like patterns from visual sequences of geometric shapes was affected by the spatial orientation of the sequences (left-to-right vs. right-to-left). Results showed that infants readily learn rule-like patterns when visual sequences were presented from left to right, but not when presented from right to left. This result provides evidence that spatial orientation critically determines preverbal infants' ability to perceive and learn ordered information in visual sequences, lending support to the idea that a left-to-right spatially organized mental representation of ordered dimensions might be rooted in biologically determined constraints on human brain development.
Three-dimensional device characterization by high-speed cinematography
NASA Astrophysics Data System (ADS)
Maier, Claus; Hofer, Eberhard P.
2001-10-01
Testing of micro-electro-mechanical systems (MEMS) for optimization purposes or reliability checks can be supported by device visualization whenever optical access is available. The difficulty in such an investigation is the short duration of dynamical phenomena in micro devices. This paper presents a test setup to visualize movements within MEMS in real time and in two perpendicular directions. A three-dimensional view is achieved by combining a commercial high-speed camera system, which can take up to 8 images of the same process with a minimum interframe time of 10 ns for the first direction, with a second visualization system consisting of a highly sensitive CCD camera working with multiple-exposure LED illumination in the perpendicular direction. When well synchronized, this provides 3-D information which is treated by digital image processing to correct image distortions and to detect object contours. Symmetric and asymmetric binary collisions of micro drops are chosen as test experiments, featuring coalescence and surface rupture. Another application shown here is the investigation of sprays produced by an atomizer. The second direction of view is a prerequisite for this measurement to select an intended plane of focus.
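The fusion of two synchronized perpendicular views into 3-D coordinates can be sketched as follows (a minimal illustration with invented numbers, not the authors' processing pipeline): the front camera reports (x, y), the side camera reports (z, y), and the shared y coordinate serves as a consistency check between views.

```python
def fuse_views(front_xy, side_zy, tol=0.5):
    """Fuse one (x, y) front-view point and one (z, y) side-view point
    into a 3-D coordinate, averaging the shared y coordinate."""
    (x, y1), (z, y2) = front_xy, side_zy
    if abs(y1 - y2) > tol:
        raise ValueError("views disagree on the shared coordinate")
    return (x, (y1 + y2) / 2.0, z)

# A droplet seen at (1.2, 3.0) from the front and (0.8, 3.1) from the side:
point = fuse_views((1.2, 3.0), (0.8, 3.1))
print(point)  # (1.2, 3.05, 0.8)
```

The tolerance check is the important design point: because the two cameras observe the same vertical axis, disagreement in y signals a synchronization or calibration error rather than valid data.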
Visualization of stratospheric ozone depletion and the polar vortex
NASA Technical Reports Server (NTRS)
Treinish, Lloyd A.
1995-01-01
Direct analysis of spacecraft observations of stratospheric ozone yields information about the morphology of annual austral depletion. Visual correlation of ozone with other atmospheric data illustrates the diurnal dynamics of the polar vortex and contributions from the upper troposphere, including the formation and breakup of the depletion region each spring. These data require care in their presentation to minimize the introduction of visualization artifacts that are erroneously interpreted as data features. Data of differing mesh structures that are not geographically registered can be visually correlated via cartographic warping of base geometries without interpolation. Because this approach is independent of the realization technique, it provides a framework for experimenting with many visualization strategies. This methodology preserves the fidelity of the original data sets in a coordinate system suitable for three-dimensional, dynamic examination of atmospheric phenomena.
Qualitative similarities in the visual short-term memory of pigeons and people.
Gibson, Brett; Wasserman, Edward; Luck, Steven J
2011-10-01
Visual short-term memory plays a key role in guiding behavior, and individual differences in visual short-term memory capacity are strongly predictive of higher cognitive abilities. To provide a broader evolutionary context for understanding this memory system, we directly compared the behavior of pigeons and humans on a change detection task. Although pigeons had a lower storage capacity and a higher lapse rate than humans, both species stored multiple items in short-term memory and conformed to the same basic performance model. Thus, despite their very different evolutionary histories and neural architectures, pigeons and humans have functionally similar visual short-term memory systems, suggesting that the functional properties of visual short-term memory are subject to similar selective pressures across these distant species.
Engaging Patients With Advance Directives Using an Information Visualization Approach.
Woollen, Janet; Bakken, Suzanne
2016-01-01
Despite the benefits of advance directives (AD) to patients and care providers, they are often not completed due to lack of patient awareness. The purpose of the current article is to advocate for creation and use of an innovative information visualization (infovisual) as a health communication tool aimed at improving AD dissemination and engagement. The infovisual would promote AD awareness by encouraging patients to learn about their options and inspire contemplation and conversation regarding their end-of-life (EOL) journey. An infovisual may be able to communicate insights that are often communicated in words, but are much more powerfully communicated by example. Furthermore, an infovisual could facilitate vivid understanding of options and inspire the beginning of often difficult conversations among care providers, patients, and loved ones. It may also save clinicians time, as care providers may be able to spend less time explaining details of EOL care options. Use of an infovisual could assist in ensuring a well-planned EOL journey. Copyright 2016, SLACK Incorporated.
Salter, Phia S; Kelley, Nicholas J; Molina, Ludwin E; Thai, Luyen T
2017-09-01
Photographs provide critical retrieval cues for personal remembering, but few studies have considered this phenomenon at the collective level. In this research, we examined the psychological consequences of visual attention to the presence (or absence) of racially charged retrieval cues within American racial segregation photographs. We hypothesised that attention to racial retrieval cues embedded in historical photographs would increase social justice concept accessibility. In Study 1, we recorded gaze patterns with an eye-tracker among participants viewing images that contained racial retrieval cues or were digitally manipulated to remove them. In Study 2, we manipulated participants' gaze behaviour by either directing visual attention toward racial retrieval cues, away from racial retrieval cues, or directing attention within photographs where racial retrieval cues were missing. Across Studies 1 and 2, visual attention to racial retrieval cues in photographs documenting historical segregation predicted social justice concept accessibility.
Kinesthetic information disambiguates visual motion signals.
Hu, Bo; Knill, David C
2010-05-25
Numerous studies have shown that extra-retinal signals can disambiguate motion information created by movements of the eye or head. We report a new form of cross-modal sensory integration in which the kinesthetic information generated by active hand movements essentially captures ambiguous visual motion information. Several previous studies have shown that active movement can bias observers' percepts of bi-stable stimuli; however, these effects seem to be best explained by attentional mechanisms. We show that kinesthetic information can change an otherwise stable perception of motion, providing evidence of genuine fusion between visual and kinesthetic information. The experiments take advantage of the aperture problem, in which the motion of a one-dimensional grating pattern behind an aperture, while geometrically ambiguous, appears to move stably in the grating normal direction. When actively moving the pattern, however, the observer sees the motion to be in the hand movement direction. Copyright 2010 Elsevier Ltd. All rights reserved.
Direct Administration of Nerve-Specific Contrast to Improve Nerve Sparing Radical Prostatectomy
Barth, Connor W.; Gibbs, Summer L.
2017-01-01
Nerve damage remains a major morbidity following nerve sparing radical prostatectomy, significantly affecting quality of life post-surgery. Nerve-specific fluorescence guided surgery offers a potential solution by enhancing nerve visualization intraoperatively. However, the prostate is highly innervated and only the cavernous nerve structures require preservation to maintain continence and potency. Systemic administration of a nerve-specific fluorophore would lower nerve signal to background ratio (SBR) in vital nerve structures, making them difficult to distinguish from all nervous tissue in the pelvic region. A direct administration methodology to enable selective nerve highlighting for enhanced nerve SBR in a specific nerve structure has been developed herein. The direct administration methodology demonstrated equivalent nerve-specific contrast to systemic administration at optimal exposure times. However, the direct administration methodology provided a brighter fluorescent nerve signal, facilitating nerve-specific fluorescence imaging at video rate, which was not possible following systemic administration. Additionally, the direct administration methodology required a significantly lower fluorophore dose than systemic administration, that when scaled to a human dose falls within the microdosing range. Furthermore, a dual fluorophore tissue staining method was developed that alleviates fluorescence background signal from adipose tissue accumulation using a spectrally distinct adipose tissue specific fluorophore. These results validate the use of the direct administration methodology for specific nerve visualization with fluorescence image-guided surgery, which would improve vital nerve structure identification and visualization during nerve sparing radical prostatectomy. PMID:28255352
Reaching to virtual targets: The oblique effect reloaded in 3-D.
Kaspiris-Rousellis, Christos; Siettos, Constantinos I; Evdokimidis, Ioannis; Smyrnis, Nikolaos
2017-02-20
Perceiving and reproducing direction of visual stimuli in 2-D space produces the visual oblique effect, which manifests as increased precision in the reproduction of cardinal compared to oblique directions. A second cognitive oblique effect emerges when stimulus information is degraded (such as when reproducing stimuli from memory) and manifests as a systematic distortion where reproduced directions close to the cardinal axes deviate toward the oblique, leading to space expansion at cardinal and contraction at oblique axes. We studied the oblique effect in 3-D using a virtual reality system to present a large number of stimuli, covering the surface of an imaginary half sphere, to which subjects had to reach. We used two conditions, one with no delay (no-memory condition) and one where a three-second delay intervened between stimulus presentation and movement initiation (memory condition). A visual oblique effect was observed for the reproduction of cardinal directions compared to oblique, which did not differ with memory condition. A cognitive oblique effect also emerged, which was significantly larger in the memory compared to the no-memory condition, leading to distortion of directional space with expansion near the cardinal axes and compression near the oblique axes on the hemispherical surface. This effect provides evidence that existing models of 2-D directional space categorization could be extended in the natural 3-D space. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.
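The precision difference underlying the visual oblique effect can be quantified very simply: compare the spread of reproduction errors at cardinal versus oblique directions. The snippet below uses simulated errors with invented spread values, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated reproduction errors (degrees): tighter at cardinal directions,
# noisier at obliques - the signature of the visual oblique effect.
cardinal_err = rng.normal(0.0, 2.0, 200)
oblique_err = rng.normal(0.0, 5.0, 200)

def precision(errors):
    """Precision expressed as the standard deviation of reproduction errors."""
    return float(np.std(errors))

print(precision(cardinal_err) < precision(oblique_err))  # True: cardinal is tighter
```

The cognitive oblique effect would show up in the same data as a systematic bias (a nonzero mean error pointing away from the cardinal axes), separate from this precision measure.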
Connecting Swath Satellite Data With Imagery in Mapping Applications
NASA Astrophysics Data System (ADS)
Thompson, C. K.; Hall, J. R.; Penteado, P. F.; Roberts, J. T.; Zhou, A. Y.
2016-12-01
Visualizations of gridded science data products (referred to as Level 3 or Level 4) typically provide a straightforward correlation between image pixels and the source science data. This direct relationship allows users to make initial inferences based on imagery values, facilitating additional operations on the underlying data values, such as data subsetting and analysis. However, that same pixel-to-data relationship for ungridded science data products (referred to as Level 2) is significantly more challenging. These products, also referred to as "swath products", are in orbital "instrument space" and raster visualization pixels do not directly correlate to science data values. Interpolation algorithms are often employed during the gridding or projection of a science dataset prior to image generation, introducing intermediary values that separate the image from the source data values. NASA's Global Imagery Browse Services (GIBS) is researching techniques for efficiently serving "image-ready" data allowing client-side dynamic visualization and analysis capabilities. This presentation will cover some GIBS prototyping work designed to maintain connectivity between Level 2 swath data and its corresponding raster visualizations. Specifically, we discuss the DAta-to-Image-SYstem (DAISY), an indexing approach for Level 2 swath data, and the mechanisms whereby a client may dynamically visualize the data in raster form.
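One way a pixel-to-data link for Level 2 swath products could be maintained is a nearest-sample index raster, where each image pixel stores the index of the swath sample behind it. The sketch below is a simplified illustration of that general idea; all names and values are invented, and it is not the DAISY implementation.

```python
import numpy as np

# Hypothetical swath: a 1-D list of samples with lat/lon and a science value.
rng = np.random.default_rng(1)
n = 500
lat = rng.uniform(-10, 10, n)
lon = rng.uniform(100, 120, n)
value = rng.uniform(200, 300, n)  # e.g. a brightness temperature

# Build the index raster: each grid cell records which swath sample fell
# into it, so raster pixels stay linked to the original (ungridded) data.
nlat, nlon = 50, 100
ilat = np.clip(((lat + 10) / 20 * nlat).astype(int), 0, nlat - 1)
ilon = np.clip(((lon - 100) / 20 * nlon).astype(int), 0, nlon - 1)
index = np.full((nlat, nlon), -1, dtype=int)
index[ilat, ilon] = np.arange(n)  # on collisions, the last sample wins

def pixel_to_value(r, c):
    """Return the source science value behind a raster pixel, or None."""
    i = index[r, c]
    return None if i < 0 else value[i]
```

Because the raster stores indices rather than interpolated values, subsetting and analysis can always go back to the exact instrument-space samples, which is the connectivity the abstract describes.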
SNP ID-info: SNP ID searching and visualization platform.
Yang, Cheng-Hong; Chuang, Li-Yeh; Cheng, Yu-Huei; Wen, Cheng-Hao; Chang, Phei-Lang; Chang, Hsueh-Wei
2008-09-01
Many association studies report relationships between single nucleotide polymorphisms (SNPs), diseases, and cancers without, however, giving a SNP ID. Here, we developed the SNP ID-info freeware to provide SNP IDs from input genetic and physical information of genomes. The program provides an "SNP-ePCR" function to generate the full sequence using primer and template inputs. In "SNPosition," the sequence from SNP-ePCR or direct input is matched against SNP IDs from the SNP fasta sequence. In the "SNP search" and "SNP fasta" functions, information on SNPs within a cytogenetic band, a contig position, and keyword input are acceptable. Finally, the SNP ID neighboring environment for the inputs is completely visualized in order of contig position and marked with SNP and flanking hits. The SNP identification problems inherent in NCBI SNP BLAST are also avoided. In conclusion, SNP ID-info provides a visualized SNP ID environment for multiple inputs and assists systematic SNP association studies. The server and user manual are available at http://bio.kuas.edu.tw/snpid-info.
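The SNP-ePCR step described above can be illustrated with a toy e-PCR-style lookup: locate a primer pair on a template sequence, then report which catalogued SNP positions fall inside the amplicon. The sequences, primers, and SNP IDs below are invented for illustration; this is not the actual SNP ID-info code.

```python
def revcomp(seq):
    """Reverse complement of a DNA sequence."""
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def epcr(template, fwd, rev):
    """Return (start, end) of the amplicon delimited by a primer pair,
    or None if either primer fails to bind the template."""
    start = template.find(fwd)                # forward primer binds as-is
    hit = template.find(revcomp(rev))         # reverse primer binds the other strand
    if start < 0 or hit < 0:
        return None
    return start, hit + len(rev)

template = "AAGCTTACGGATCCGTAGCTAGGCTTAA"
span = epcr(template, "AAGCTT", "TTAAGC")     # reverse primer's site is "GCTTAA"
snp_catalog = {"rs111": 14, "rs222": 40}      # hypothetical IDs and positions

start, end = span
in_amplicon = {rsid for rsid, pos in snp_catalog.items() if start <= pos < end}
print(in_amplicon)  # {'rs111'}
```

A real implementation would additionally handle mismatches, multiple binding sites, and genome-scale coordinates, but the core mapping from primer pair to covered SNP IDs is the same.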
A direct comparison of short-term audiomotor and visuomotor memory.
Ward, Amanda M; Loucks, Torrey M; Ofori, Edward; Sosnoff, Jacob J
2014-04-01
Audiomotor and visuomotor short-term memory are required for an important variety of skilled movements but have not been compared in a direct manner previously. Audiomotor memory capacity might be greater to accommodate auditory goals that are less directly related to movement outcome than for visually guided tasks. Subjects produced continuous isometric force with the right index finger under auditory and visual feedback. During the first 10 s of each trial, subjects received continuous auditory or visual feedback. For the following 15 s, feedback was removed but the force had to be maintained accurately. An internal effort condition was included to test memory capacity in the same manner but without external feedback. Similar decay times of ~5-6 s were found for vision and audition but the decay time for internal effort was ~4 s. External feedback thus provides an advantage in maintaining a force level after feedback removal, but may not exclude some contribution from a sense of effort. Short-term memory capacity appears longer than certain previous reports but there may not be strong distinctions in capacity across different sensory modalities, at least for isometric force.
Real-time Visualization of Tissue Dynamics during Embryonic Development and Malignant Transformation
NASA Astrophysics Data System (ADS)
Yamada, Kenneth
Tissues undergo dramatic changes in organization during embryonic development, as well as during cancer progression and invasion. Recent advances in microscopy now allow us to visualize and track directly the dynamic movements of tissues, their constituent cells, and cellular substructures. This behavior can now be visualized not only in regular tissue culture on flat surfaces (`2D' environments), but also in a variety of 3D environments that may provide physiological cues relevant to understanding dynamics within living organisms. Acquisition of imaging data using various microscopy modalities will provide rich opportunities for determining the roles of physical factors and for computational modeling of complex processes in living tissues. Direct visualization of real-time motility is providing insight into biology spanning multiple spatio-temporal scales. Many cells in our body are known to be in contact with connective tissue and other forms of extracellular matrix. They do so through microscopic cellular adhesions that bind to matrix proteins. In particular, fluorescence microscopy has revealed that cells dynamically probe and bend the matrix at the sites of cell adhesions, and that 3D matrix architecture, stiffness, and elasticity can each regulate migration of the cells. Conversely, cells remodel their local matrix as organs form or tumors invade. Cancer cells can invade tissues using microscopic protrusions that degrade the surrounding matrix; in this case, the local matrix protein concentration is more important for inducing the micro-invasive protrusions than stiffness. On the length scales of tissues, transiently high rates of individual cell movement appear to help establish organ architecture. In fact, isolated cells can self-organize to form tissue structures. 
In all of these cases, in-depth real-time visualization will ultimately provide the extensive data needed for computer modeling and for testing hypotheses in which physical forces interact closely with cell signaling to form organs or promote tumor invasion.
Sounds Activate Visual Cortex and Improve Visual Discrimination
Störmer, Viola S.; Martinez, Antigona; McDonald, John J.; Hillyard, Steven A.
2014-01-01
A recent study in humans (McDonald et al., 2013) found that peripheral, task-irrelevant sounds activated contralateral visual cortex automatically as revealed by an auditory-evoked contralateral occipital positivity (ACOP) recorded from the scalp. The present study investigated the functional significance of this cross-modal activation of visual cortex, in particular whether the sound-evoked ACOP is predictive of improved perceptual processing of a subsequent visual target. A trial-by-trial analysis showed that the ACOP amplitude was markedly larger preceding correct than incorrect pattern discriminations of visual targets that were colocalized with the preceding sound. Dipole modeling of the scalp topography of the ACOP localized its neural generators to the ventrolateral extrastriate visual cortex. These results provide direct evidence that the cross-modal activation of contralateral visual cortex by a spatially nonpredictive but salient sound facilitates the discriminative processing of a subsequent visual target event at the location of the sound. Recordings of event-related potentials to the targets support the hypothesis that the ACOP is a neural consequence of the automatic orienting of visual attention to the location of the sound. PMID:25031419
Effects of Peripheral Visual Field Loss on Eye Movements During Visual Search
Wiecek, Emily; Pasquale, Louis R.; Fiser, Jozsef; Dakin, Steven; Bex, Peter J.
2012-01-01
Natural vision involves sequential eye movements that bring the fovea to locations selected by peripheral vision. How peripheral visual field loss (PVFL) affects this process is not well understood. We examine how the location and extent of PVFL affect eye movement behavior in a naturalistic visual search task. Ten patients with PVFL and 13 normally sighted subjects with full visual fields (FVF) completed 30 visual searches monocularly. Subjects located a 4° × 4° target, pseudo-randomly selected within a 26° × 11° natural image. Eye positions were recorded at 50 Hz. Search duration, fixation duration, saccade size, and number of saccades per trial were not significantly different between PVFL and FVF groups (p > 0.1). A χ2 test showed that the distributions of saccade directions for PVFL and FVF subjects were significantly different in 8 out of 10 cases (p < 0.01). Humphrey Visual Field pattern deviations for each subject were compared with the spatial distribution of eye movement directions. There were no significant correlations between saccade directional bias and visual field sensitivity across the 10 patients. Visual search performance was not significantly affected by PVFL. An analysis of eye movement directions revealed that patients with PVFL show a biased directional distribution that was not directly related to the locus of vision loss, challenging feed-forward models of eye movement control. Consequently, many patients do not optimally compensate for visual field loss during visual search. PMID:23162511
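The χ2 comparison of saccade-direction distributions described above can be illustrated with a small sketch. The direction bins and counts below are invented for illustration and are not the study's data; only the test procedure is as described.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical saccade counts binned into 8 direction sectors (45° each)
# for one PVFL patient and one FVF control; numbers are illustrative only.
pvfl_counts = np.array([52, 30, 18, 12, 40, 25, 15, 10])
fvf_counts = np.array([28, 25, 24, 26, 27, 23, 25, 24])

# Chi-square test of homogeneity on the 2 x 8 contingency table
table = np.vstack([pvfl_counts, fvf_counts])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```

With a strongly biased patient distribution like the one above, the test rejects homogeneity at p < 0.01, mirroring the significance criterion reported in the abstract.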
Zhang, Suge; Sun, Hongxia; Chen, Hongbo; Li, Qian; Guan, Aijiao; Wang, Lixia; Shi, Yunhua; Xu, Shujuan; Liu, Meirong; Tang, Yalin
2018-05-01
Direct detection of G-quadruplexes in human cells has become an important issue due to the vital role of G-quadruplexes in biological functions. Although several probes have been developed for detecting G-quadruplexes in the cytoplasm or in whole cells, a probe for monitoring nucleolar G-quadruplexes is still lacking. Formation of the nucleolar G-quadruplex structures was confirmed by using circular dichroism (CD) spectroscopy. The binding affinity and selectivity of Thioflavin T (ThT) towards various DNA/RNA motifs in solution and in a gel system were measured by using fluorescence spectroscopy and polyacrylamide gel electrophoresis (PAGE), respectively. G-quadruplex imaging in live cells was directly captured by using confocal laser scanning microscopy (CLSM). Formation of the rDNA and rRNA G-quadruplex structures is demonstrated in vitro. ThT is found to show much higher affinity and selectivity towards these G-quadruplex structures versus other nucleic acid motifs, either in solution or in a gel system. The nucleolar G-quadruplexes in living cells are visualized by using ThT as a fluorescent probe. G-quadruplex-ligand treatments in live cells lead to a sharp decrease in the ThT signal. The natural existence of G-quadruplex structures in the nucleoli of living cells is directly visualized by using ThT as an indicator. The research provides substantive evidence for formation of the rRNA G-quadruplex structures, and also offers an effective probe for direct visualization of the nucleolar G-quadruplexes in living cells. Copyright © 2018. Published by Elsevier B.V.
3D widgets for exploratory scientific visualization
NASA Technical Reports Server (NTRS)
Herndon, Kenneth P.; Meyer, Tom
1995-01-01
Computational fluid dynamics (CFD) techniques are used to simulate flows of fluids like air or water around such objects as airplanes and automobiles. These techniques usually generate very large amounts of numerical data which are difficult to understand without using graphical scientific visualization techniques. There are a number of commercial scientific visualization applications available today which allow scientists to control visualization tools via textual and/or 2D user interfaces. However, these user interfaces are often difficult to use. We believe that 3D direct-manipulation techniques for interactively controlling visualization tools will provide opportunities for powerful and useful interfaces with which scientists can more effectively explore their datasets. A few systems have been developed which use these techniques. In this paper, we will present a variety of 3D interaction techniques for manipulating parameters of visualization tools used to explore CFD datasets, and discuss in detail various techniques for positioning tools in a 3D scene.
Neuronal Representation of Ultraviolet Visual Stimuli in Mouse Primary Visual Cortex
Tan, Zhongchao; Sun, Wenzhi; Chen, Tsai-Wen; Kim, Douglas; Ji, Na
2015-01-01
The mouse has become an important model for understanding the neural basis of visual perception. Although it has long been known that the mouse lens transmits ultraviolet (UV) light and mouse opsins have absorption in the UV band, little is known about how UV visual information is processed in the mouse brain. Using a custom UV stimulation system and in vivo calcium imaging, we characterized the feature selectivity of layer 2/3 neurons in mouse primary visual cortex (V1). In adult mice, a comparable percentage of the neuronal population responds to UV and visible stimuli, with similar pattern selectivity and receptive field properties. In young mice, orientation selectivity for UV stimuli, but not direction selectivity, increased steadily during development. Our results suggest that, by expanding the spectral window through which the mouse can acquire visual information, UV sensitivity provides an important component of mouse vision. PMID:26219604
Long-term Live-cell Imaging to Assess Cell Fate in Response to Paclitaxel.
Bolgioni, Amanda F; Vittoria, Marc A; Ganem, Neil J
2018-05-14
Live-cell imaging is a powerful technique that can be used to directly visualize biological phenomena in single cells over extended periods of time. Over the past decade, new and innovative technologies have greatly enhanced the practicality of live-cell imaging. Cells can now be kept in focus and continuously imaged over several days while maintained under 37 °C and 5% CO2 cell culture conditions. Moreover, multiple fields of view representing different experimental conditions can be acquired simultaneously, thus providing high-throughput experimental data. Live-cell imaging provides a significant advantage over fixed-cell imaging by allowing for the direct visualization and temporal quantitation of dynamic cellular events. Live-cell imaging can also identify variation in the behavior of single cells that would otherwise have been missed using population-based assays. Here, we describe live-cell imaging protocols to assess cell fate decisions following treatment with the anti-mitotic drug paclitaxel. We demonstrate methods to visualize whether mitotically arrested cells die directly from mitosis or slip back into interphase. We also describe how the fluorescent ubiquitination-based cell cycle indicator (FUCCI) system can be used to assess the fraction of interphase cells born from mitotic slippage that are capable of re-entering the cell cycle. Finally, we describe a live-cell imaging method to identify nuclear envelope rupture events.
Ensemble coding remains accurate under object and spatial visual working memory load.
Epstein, Michael L; Emmanouil, Tatiana A
2017-10-01
A number of studies have provided evidence that the visual system statistically summarizes large amounts of information that would exceed the limitations of attention and working memory (ensemble coding). However, the necessity of working memory resources for ensemble coding has not yet been tested directly. In the current study, we used a dual task design to test the effect of object and spatial visual working memory load on size averaging accuracy. In Experiment 1, we tested participants' accuracy in comparing the mean size of two sets under various levels of object visual working memory load. Although the accuracy of average size judgments depended on the difference in mean size between the two sets, we found no effect of working memory load. In Experiment 2, we tested the same average size judgment while participants were under spatial visual working memory load, again finding no effect of load on averaging accuracy. Overall, our results reveal that ensemble coding can proceed unimpeded and highly accurately under both object and spatial visual working memory load, providing further evidence that ensemble coding reflects a basic perceptual process distinct from that of individual object processing.
BiNA: A Visual Analytics Tool for Biological Network Data
Gerasch, Andreas; Faber, Daniel; Küntzer, Jan; Niermann, Peter; Kohlbacher, Oliver; Lenhof, Hans-Peter; Kaufmann, Michael
2014-01-01
Interactive visual analysis of biological high-throughput data in the context of the underlying networks is an essential task in modern biomedicine with applications ranging from metabolic engineering to personalized medicine. The complexity and heterogeneity of data sets require flexible software architectures for data analysis. Concise and easily readable graphical representation of data and interactive navigation of large data sets are essential in this context. We present BiNA - the Biological Network Analyzer - a flexible open-source software for analyzing and visualizing biological networks. Highly configurable visualization styles for regulatory and metabolic network data offer sophisticated drawings and intuitive navigation and exploration techniques using hierarchical graph concepts. The generic projection and analysis framework provides powerful functionalities for visual analyses of high-throughput omics data in the context of networks, in particular for the differential analysis and the analysis of time series data. A direct interface to an underlying data warehouse provides fast access to a wide range of semantically integrated biological network databases. A plugin system allows simple customization and integration of new analysis algorithms or visual representations. BiNA is available under the 3-clause BSD license at http://bina.unipax.info/. PMID:24551056
A neural measure of precision in visual working memory.
Ester, Edward F; Anderson, David E; Serences, John T; Awh, Edward
2013-05-01
Recent studies suggest that the temporary storage of visual detail in working memory is mediated by sensory recruitment or sustained patterns of stimulus-specific activation within feature-selective regions of visual cortex. According to a strong version of this hypothesis, the relative "quality" of these patterns should determine the clarity of an individual's memory. Here, we provide a direct test of this claim. We used fMRI and a forward encoding model to characterize population-level orientation-selective responses in visual cortex while human participants held an oriented grating in memory. This analysis, which enables a precise quantitative description of multivoxel, population-level activity measured during working memory storage, revealed graded response profiles whose amplitudes were greatest for the remembered orientation and fell monotonically as the angular distance from this orientation increased. Moreover, interparticipant differences in the dispersion-but not the amplitude-of these response profiles were strongly correlated with performance on a concurrent memory recall task. These findings provide important new evidence linking the precision of sustained population-level responses in visual cortex and memory acuity.
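The forward encoding analysis described in this abstract can be sketched with simulated data. This is a schematic reconstruction under stated assumptions, not the authors' pipeline: the six half-wave-rectified sinusoidal channels, the voxel and trial counts, and the noise level are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def channel_responses(orientations_deg, n_channels=6):
    """Idealized orientation channels: half-wave-rectified sinusoids raised
    to the 5th power, tuning centers every 30 degrees. Orientation is
    180-deg periodic, so differences are mapped onto a full cosine cycle."""
    centers = np.arange(n_channels) * 180.0 / n_channels
    d = np.pi * (np.asarray(orientations_deg)[:, None] - centers[None, :]) / 90.0
    return np.maximum(np.cos(d), 0.0) ** 5        # trials x channels

# Simulated training data: 50 voxels, 120 trials with random orientations
n_vox, n_train = 50, 120
train_ori = rng.uniform(0, 180, n_train)
C_train = channel_responses(train_ori)            # trials x channels
W_true = rng.normal(size=(n_vox, C_train.shape[1]))
B_train = C_train @ W_true.T + 0.1 * rng.normal(size=(n_train, n_vox))

# Step 1: estimate each voxel's channel weights by least squares
W_hat, *_ = np.linalg.lstsq(C_train, B_train, rcond=None)   # channels x voxels

# Step 2: invert the model on a held-out activity pattern to recover a
# population-level orientation response profile
B_test = channel_responses(np.array([60.0])) @ W_true.T
C_hat, *_ = np.linalg.lstsq(W_hat.T, B_test.T, rcond=None)
profile = C_hat.ravel()
peak = int(np.argmax(profile)) * 30               # channel center, degrees
print("response profile peaks at", peak, "deg")
```

The recovered profile is greatest at the remembered orientation and falls off with angular distance, which is the graded response shape the study relates to memory precision.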
Effects of cholinergic deafferentation of the rhinal cortex on visual recognition memory in monkeys.
Turchi, Janita; Saunders, Richard C; Mishkin, Mortimer
2005-02-08
Excitotoxic lesion studies have confirmed that the rhinal cortex is essential for visual recognition ability in monkeys. To evaluate the mnemonic role of cholinergic inputs to this cortical region, we compared the visual recognition performance of monkeys given rhinal cortex infusions of a selective cholinergic immunotoxin, ME20.4-SAP, with the performance of monkeys given control infusions into this same tissue. The immunotoxin, which leads to selective cholinergic deafferentation of the infused cortex, yielded recognition deficits of the same magnitude as those produced by excitotoxic lesions of this region, providing the most direct demonstration to date that cholinergic activation of the rhinal cortex is essential for storing the representations of new visual stimuli and thereby enabling their later recognition.
High-level user interfaces for transfer function design with semantics.
Salama, Christof Rezk; Keller, Maik; Kohlmann, Peter
2006-01-01
Many sophisticated techniques for the visualization of volumetric data such as medical data have been published. While existing techniques are mature from a technical point of view, managing the complexity of visual parameters is still difficult for non-expert users. To this end, this paper presents new ideas to facilitate the specification of optical properties for direct volume rendering. We introduce an additional level of abstraction for parametric models of transfer functions. The proposed framework allows visualization experts to design high-level transfer function models which can intuitively be used by non-expert users. The results are user interfaces which provide semantic information for specialized visualization problems. The proposed method is based on principal component analysis as well as on concepts borrowed from computer animation.
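The PCA-based abstraction mentioned in the last sentence can be loosely sketched as follows. The parameter vectors, their interpretation, and the slider mapping are invented for illustration and are not taken from the paper; only the idea of reducing expert-designed transfer-function parameters to a low-dimensional semantic control is as described.

```python
import numpy as np

# Hypothetical: each row is a transfer-function parameter vector
# (e.g. opacity ramp center, width, color ramp endpoints) designed by a
# visualization expert for one reference dataset; values are invented.
P = np.array([
    [0.20, 0.10, 0.80, 0.30],
    [0.25, 0.12, 0.78, 0.35],
    [0.30, 0.15, 0.75, 0.40],
    [0.35, 0.18, 0.72, 0.45],
])

# PCA via SVD of the mean-centered parameter matrix
mean = P.mean(axis=0)
U, S, Vt = np.linalg.svd(P - mean, full_matrices=False)

# The first principal axis becomes a single semantic slider: a non-expert
# moves t, and a full parameter vector is regenerated for the renderer.
def slider_to_params(t):
    return mean + t * Vt[0]

params = slider_to_params(0.1)
```

The design point is that the non-expert never touches raw optical properties; the high-level model constrains them to the subspace the expert's examples span.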
Visual enhancing of tactile perception in the posterior parietal cortex.
Ro, Tony; Wallace, Ruth; Hagedorn, Judith; Farnè, Alessandro; Pienkos, Elizabeth
2004-01-01
The visual modality typically dominates over our other senses. Here we show that after inducing an extreme conflict in the left hand between vision of touch (present) and the feeling of touch (absent), sensitivity to touch increases for several minutes after the conflict. Transcranial magnetic stimulation of the posterior parietal cortex after this conflict not only eliminated the enduring visual enhancement of touch, but also impaired normal tactile perception. This latter finding demonstrates a direct role of the parietal lobe in modulating tactile perception as a result of the conflict between these senses. These results provide evidence for visual-to-tactile perceptual modulation and demonstrate effects of illusory vision of touch on touch perception through a long-lasting modulatory process in the posterior parietal cortex.
Wang, Xiaoying; Peelen, Marius V; Han, Zaizhu; He, Chenxi; Caramazza, Alfonso; Bi, Yanchao
2015-09-09
Classical animal visual deprivation studies and human neuroimaging studies have shown that visual experience plays a critical role in shaping the functionality and connectivity of the visual cortex. Interestingly, recent studies have additionally reported circumscribed regions in the visual cortex in which functional selectivity was remarkably similar in individuals with and without visual experience. Here, by directly comparing resting-state and task-based fMRI data in congenitally blind and sighted human subjects, we obtained large-scale continuous maps of the degree to which connectional and functional "fingerprints" of ventral visual cortex depend on visual experience. We found a close agreement between connectional and functional maps, pointing to a strong interdependence of connectivity and function. Visual experience (or the absence thereof) had a pronounced effect on the resting-state connectivity and functional response profile of occipital cortex and the posterior lateral fusiform gyrus. By contrast, connectional and functional fingerprints in the anterior medial and posterior lateral parts of the ventral visual cortex were statistically indistinguishable between blind and sighted individuals. These results provide a large-scale mapping of the influence of visual experience on the development of both functional and connectivity properties of visual cortex, which serves as a basis for the formulation of new hypotheses regarding the functionality and plasticity of specific subregions. Significance statement: How is the functionality and connectivity of the visual cortex shaped by visual experience? By directly comparing resting-state and task-based fMRI data in congenitally blind and sighted subjects, we obtained large-scale continuous maps of the degree to which connectional and functional "fingerprints" of ventral visual cortex depend on visual experience. 
In addition to revealing regions that are strongly dependent on visual experience (early visual cortex and posterior fusiform gyrus), our results showed regions in which connectional and functional patterns are highly similar in blind and sighted individuals (anterior medial and posterior lateral ventral occipital temporal cortex). These results serve as a basis for the formulation of new hypotheses regarding the functionality and plasticity of specific subregions of the visual cortex. Copyright © 2015 the authors.
Intelligent Visualization of Geo-Information on the Future Web
NASA Astrophysics Data System (ADS)
Slusallek, P.; Jochem, R.; Sons, K.; Hoffmann, H.
2012-04-01
Visualization is a key component of the "Observation Web" and will become even more important in the future as geo data becomes more widely accessible. The common statement that "Data that cannot be seen, does not exist" is especially true for non-experts, like most citizens. The Web provides the most interesting platform for making data easily and widely available. However, today's Web is not well suited for the interactive visualization and exploration that is often needed for geo data. Support for 3D data was added only recently and at an extremely low level (WebGL), but even the 2D visualization capabilities of HTML (e.g., images, canvas, SVG) are rather limited, especially regarding interactivity. We have developed XML3D as an extension to HTML-5. It allows for compactly describing 2D and 3D data directly as elements of an HTML-5 document. All graphics elements are part of the Document Object Model (DOM) and can be manipulated via the same set of DOM events and methods that millions of Web developers use on a daily basis. Thus, XML3D makes highly interactive 2D and 3D visualization easily usable, not only for geo data. XML3D is supported by any WebGL-capable browser, but we also provide native implementations in Firefox and Chromium. As an example, we show how OpenStreetMap data can be mapped directly to XML3D and visualized interactively in any Web page. We show how this data can be easily augmented with additional data from the Web via a few lines of JavaScript. We also show how embedded semantic data (via RDFa) allows for linking the visualization back to the data's origin, thus providing an immersive interface for interacting with and modifying the original data. XML3D is used as key input for standardization within the W3C Community Group on "Declarative 3D for the Web" chaired by the DFKI and has recently been selected as one of the Generic Enablers for the EU Future Internet initiative.
Hoshi, Eiji
2013-01-01
Action is often executed according to information provided by a visual signal. As this type of behavior integrates two distinct neural representations, perception and action, it has been thought that identification of the neural mechanisms underlying this process will yield deeper insights into the principles underpinning goal-directed behavior. Based on a framework derived from conditional visuomotor association, prior studies have identified neural mechanisms in the dorsal premotor cortex (PMd), dorsolateral prefrontal cortex (dlPFC), ventrolateral prefrontal cortex (vlPFC), and basal ganglia (BG). However, applications resting solely on this conceptualization encounter problems related to generalization and flexibility, essential processes in executive function, because the association mode involves a direct one-to-one mapping of each visual signal onto a particular action. To overcome this problem, we extend this conceptualization and postulate a more general framework, conditional visuo-goal association. According to this new framework, the visual signal identifies an abstract behavioral goal, and an action is subsequently selected and executed to meet this goal. Neuronal activity recorded from the four key areas of the brains of monkeys performing a task involving conditional visuo-goal association revealed three major mechanisms underlying this process. First, visual-object signals are represented primarily in the vlPFC and BG. Second, all four areas are involved in initially determining the goals based on the visual signals, with the PMd and dlPFC playing major roles in maintaining the salience of the goals. Third, the cortical areas play major roles in specifying action, whereas the role of the BG in this process is restrictive. These new lines of evidence reveal that the four areas involved in conditional visuomotor association contribute to goal-directed behavior mediated by conditional visuo-goal association in an area-dependent manner. PMID:24155692
Visualizer: 3D Gridded Data Visualization Software for Geoscience Education and Research
NASA Astrophysics Data System (ADS)
Harwood, C.; Billen, M. I.; Kreylos, O.; Jadamec, M.; Sumner, D. Y.; Kellogg, L. H.; Hamann, B.
2008-12-01
In both research and education, learning is an interactive and iterative process of exploring and analyzing data or model results. However, visualization software often presents challenges on the path to learning because it assumes the user already knows the locations and types of features of interest, instead of enabling flexible and intuitive examination of results. We present examples of research and teaching using the software Visualizer, specifically designed to create an effective and intuitive environment for interactive, scientific analysis of 3D gridded data. Visualizer runs in a range of 3D virtual reality environments (e.g., GeoWall, ImmersaDesk, or CAVE), but also provides a similar level of real-time interactivity on a desktop computer. When using Visualizer in a 3D-enabled environment, the software allows the user to interact with the data images as real objects, grabbing, rotating or walking around the data to gain insight and perspective. On the desktop, simple features, such as a set of cross-bars marking the plane of the screen, provide extra 3D spatial cues that allow the user to more quickly understand geometric relationships within the data. This platform portability allows the user to more easily integrate research results into classroom demonstrations and exercises, while the interactivity provides an engaging environment for self-directed and inquiry-based learning by students. Visualizer software is freely available for download (www.keckcaves.org) and runs on Mac OSX and Linux platforms.
Map LineUps: Effects of spatial structure on graphical inference.
Beecham, Roger; Dykes, Jason; Meulemans, Wouter; Slingsby, Aidan; Turkay, Cagatay; Wood, Jo
2017-01-01
Fundamental to the effective use of visualization as an analytic and descriptive tool is the assurance that presenting data visually provides the capability of making inferences from what we see. This paper explores two related approaches to quantifying the confidence we may have in making visual inferences from mapped geospatial data. We adapt Wickham et al.'s 'Visual Line-up' method as a direct analogy with Null Hypothesis Significance Testing (NHST) and propose a new approach for generating more credible spatial null hypotheses. Rather than using as a spatial null hypothesis the unrealistic assumption of complete spatial randomness, we propose spatially autocorrelated simulations as alternative nulls. We conduct a set of crowdsourced experiments (n=361) to determine the just noticeable difference (JND) between pairs of choropleth maps of geographic units controlling for spatial autocorrelation (Moran's I statistic) and geometric configuration (variance in spatial unit area). Results indicate that people's abilities to perceive differences in spatial autocorrelation vary with baseline autocorrelation structure and the geometric configuration of geographic units. These results allow us, for the first time, to construct a visual equivalent of statistical power for geospatial data. Our JND results add to those provided in recent years by Klippel et al. (2011), Harrison et al. (2014) and Kay & Heer (2015) for correlation visualization. Importantly, they provide an empirical basis for an improved construction of visual line-ups for maps and the development of theory to inform geospatial tests of graphical inference.
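Moran's I, the autocorrelation statistic the experiments above control for, is straightforward to compute from a spatial weights matrix. A minimal sketch on a toy 4 x 4 grid with binary rook-contiguity weights (the grid and attribute values are illustrative, not the study's stimuli):

```python
import numpy as np

def rook_weights(rows, cols):
    """Binary rook-contiguity weights for a rows x cols grid of cells."""
    n = rows * cols
    w = np.zeros((n, n))
    for r in range(rows):
        for c in range(cols):
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    w[r * cols + c, rr * cols + cc] = 1.0
    return w

def morans_i(values, w):
    """Moran's I = (n / W) * (z' w z) / (z' z), with z mean-centered."""
    z = np.asarray(values, dtype=float).ravel()
    z = z - z.mean()
    return z.size * (z @ w @ z) / (w.sum() * (z @ z))

w = rook_weights(4, 4)
gradient = [r for r in range(4) for _ in range(4)]    # smooth spatial trend
checker = [(r + c) % 2 for r in range(4) for c in range(4)]  # alternating
print(morans_i(gradient, w))   # positive: neighbors resemble each other
print(morans_i(checker, w))    # negative: neighbors systematically differ
```

Simulated null maps with a chosen Moran's I, rather than completely random ones, are what make the line-up decoys spatially plausible.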
Fung, David C Y; Wilkins, Marc R; Hart, David; Hong, Seok-Hee
2010-07-01
The force-directed layout is commonly used in computer-generated visualizations of protein-protein interaction networks. While it is good for providing a visual outline of the protein complexes and their interactions, it has two limitations when used as a visual analysis method. The first is poor reproducibility: repeated running of the algorithm does not necessarily generate the same layout, thereby demanding cognitive readaptation on the investigator's part. The second limitation is that it does not explicitly display complementary biological information, e.g. Gene Ontology annotations, other than the protein names or gene symbols. Here, we present an alternative layout called the clustered circular layout. Using the human DNA replication protein-protein interaction network as a case study, we compared the two network layouts for their merits and limitations in supporting visual analysis.
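The reproducibility limitation described above stems from random initial node positions, and in practice it can be sidestepped by seeding the layout's random state. A sketch using networkx (the interaction edges below are made up for illustration, not the study's DNA-replication network):

```python
import networkx as nx

# Toy stand-in for a protein-protein interaction network (illustrative edges)
G = nx.Graph([("MCM2", "MCM3"), ("MCM3", "CDC45"), ("CDC45", "GINS1"),
              ("GINS1", "MCM2"), ("PCNA", "POLD1"), ("POLD1", "CDC45")])

# Unseeded force-directed runs start from random positions, so the
# resulting layouts generally differ between calls...
layout_a = nx.spring_layout(G)
layout_b = nx.spring_layout(G)

# ...whereas fixing the random seed makes the layout fully reproducible.
layout_c = nx.spring_layout(G, seed=42)
layout_d = nx.spring_layout(G, seed=42)
print(all((layout_c[n] == layout_d[n]).all() for n in G))
```

Seeding fixes reproducibility between runs of the same tool, though unlike the clustered circular layout it still conveys no complementary biological information.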
[Artificial sight: recent developments].
Zeitz, O; Keserü, M; Hornig, R; Richard, G
2009-03-01
The implantation of electronic retina stimulators appears to be a future possibility for restoring vision, at least partially, in patients with retinal degeneration. The idea of such visual prostheses is not new, but due to general technical progress it has become more likely that a functioning implant will be on the market soon. Visual prostheses may be integrated into the visual system at various places. Thus there are subretinal and epiretinal implants, as well as implants that are connected directly to the optic nerve or the visual cortex. The epiretinal approach is the most promising at the moment, but the problem of appropriate modulation of the image information remains unsolved. This modulation will be necessary to provide interpretable visual information to the brain. The present article summarises these concepts and includes some of the latest information from recent conferences.
Adaptive Acceleration of Visually Evoked Smooth Eye Movements in Mice
2016-01-01
The optokinetic response (OKR) consists of smooth eye movements following global motion of the visual surround, which suppress image slip on the retina for visual acuity. The effective performance of the OKR is limited to rather slow and low-frequency visual stimuli, although it can be adaptably improved by cerebellum-dependent mechanisms. To better understand circuit mechanisms constraining OKR performance, we monitored how distinct kinematic features of the OKR change over the course of OKR adaptation, and found that eye acceleration at stimulus onset primarily limited OKR performance but could be dramatically potentiated by visual experience. Eye acceleration in the temporal-to-nasal direction depended more on the ipsilateral floccular complex of the cerebellum than did that in the nasal-to-temporal direction. Gaze-holding following the OKR was also modified in parallel with eye-acceleration potentiation. Optogenetic manipulation revealed that synchronous excitation and inhibition of floccular complex Purkinje cells could effectively accelerate eye movements in the nasotemporal and temporonasal directions, respectively. These results collectively delineate multiple motor pathways subserving distinct aspects of the OKR in mice and constrain hypotheses regarding cellular mechanisms of the cerebellum-dependent tuning of movement acceleration. SIGNIFICANCE STATEMENT Although visually evoked smooth eye movements, known as the optokinetic response (OKR), have been studied in various species for decades, circuit mechanisms of oculomotor control and adaptation remain elusive. In the present study, we assessed kinematics of the mouse OKR through the course of adaptation training. Our analyses revealed that eye acceleration at visual-stimulus onset primarily limited working velocity and frequency range of the OKR, yet could be dramatically potentiated during OKR adaptation. 
Potentiation of eye acceleration exhibited different properties between the nasotemporal and temporonasal OKRs, indicating distinct visuomotor circuits underlying the two. Lesions and optogenetic manipulation of the cerebellum provide constraints on neural circuits mediating visually driven eye acceleration and its adaptation. PMID:27335412
A measure of short-term visual memory based on the WISC-R coding subtest.
Collaer, M L; Evans, J R
1982-07-01
Adapted the Coding subtest of the WISC-R to provide a measure of visual memory. Three hundred and five children, aged 8 through 12, were administered the Coding test using standard directions. A few seconds after completion the key was taken away, and each was given a paper with only the digits and asked to write the appropriate matching symbol below each. This was termed "Coding Recall." To provide validity data, a subgroup of 50 Ss also was administered the Attention Span for Letters subtest from the Detroit Tests of Learning Aptitude (as a test of visual memory for sequences of letters) and a Bender Gestalt recall test (as a measure of visual memory for geometric forms). Coding Recall means and standard deviations are reported separately by sex and age level. Implications for clinicians are discussed. Reservations about clinical use of the data are given in view of the possible lack of representativeness of the sample used and the limited reliability and validity of Coding Recall.
Balikou, Panagiota; Gourtzelidis, Pavlos; Mantas, Asimakis; Moutoussis, Konstantinos; Evdokimidis, Ioannis; Smyrnis, Nikolaos
2015-11-01
The representation of visual orientation is more accurate for cardinal orientations compared to oblique ones, and this anisotropy has been hypothesized to reflect a low-level visual process (visual, "class 1" oblique effect). The reproduction of directional and orientation information also leads to a mean error away from cardinal orientations or directions. This anisotropy has been hypothesized to reflect a high-level cognitive process of space categorization (cognitive, "class 2" oblique effect). This space categorization process would be more prominent when the visual representation of orientation degrades, such as in the case of working memory with increasing cognitive load, leading to an increasing magnitude of the "class 2" oblique effect, while the "class 1" oblique effect would remain unchanged. Two experiments were performed in which an array of orientation stimuli (1-4 items) was presented and then subjects had to realign a probe stimulus within the previously presented array. In the first experiment, the delay between stimulus presentation and probe varied, while in the second experiment, the stimulus presentation time varied. The variable error was larger for oblique compared to cardinal orientations in both experiments, reproducing the visual "class 1" oblique effect. The mean error also reproduced the tendency away from cardinal and toward the oblique orientations in both experiments (cognitive "class 2" oblique effect). The accuracy of the reproduced orientation degraded (increasing variable error) and the cognitive "class 2" oblique effect increased with increasing memory load (number of items) in both experiments and with presentation time in the second experiment. In contrast, the visual "class 1" oblique effect was not significantly modulated by any of these experimental factors.
These results confirmed the theoretical predictions for the two anisotropies in visual orientation reproduction and provided support for models proposing the categorization of orientation in visual working memory.
Just one look: Direct gaze briefly disrupts visual working memory.
Wang, J Jessica; Apperly, Ian A
2017-04-01
Direct gaze is a salient social cue that affords rapid detection. A body of research suggests that direct gaze enhances performance on memory tasks (e.g., Hood, Macrae, Cole-Davies, & Dias, Developmental Science, 1, 67-71, 2003). Nonetheless, other studies highlight the disruptive effect direct gaze has on concurrent cognitive processes (e.g., Conty, Gimmig, Belletier, George, & Huguet, Cognition, 115(1), 133-139, 2010). This discrepancy raises questions about the effects direct gaze may have on concurrent memory tasks. We addressed this topic by employing a change detection paradigm, where participants retained information about the color of small sets of agents. Experiment 1 revealed that, despite the irrelevance of the agents' eye gaze to the memory task at hand, participants were worse at detecting changes when the agents looked directly at them compared to when the agents looked away. Experiment 2 showed that the disruptive effect was relatively short-lived. Prolonged presentation of direct gaze led to recovery from the initial disruption, rather than a sustained disruption on change detection performance. The present study provides the first evidence that direct gaze impairs visual working memory with a rapidly-developing yet short-lived effect even when there is no need to attend to agents' gaze.
The analysis of image motion by the rabbit retina
Oyster, C. W.
1968-01-01
1. Micro-electrode recordings were made from rabbit retinal ganglion cells or their axons. Of particular interest were direction-selective units; the common on-off type represented 20.6% of the total sample (762 units), and the on-type comprised 5% of the total. 2. From the large sample of direction-selective units, it was found that on-off units were maximally sensitive to only four directions of movement; these directions, in the visual field, were, roughly, anterior, superior, posterior and inferior. The on-type units were maximally sensitive to only three directions: anterior, superior and inferior. 3. The direction-selective unit's responses vary with stimulus velocity; both unit types are more sensitive to velocity change than to absolute speed. On-off units respond to movement at speeds from 6′/sec to 10°/sec; the on-type units responded as slowly as 30″/sec up to about 2°/sec. On-type units are clearly slow-movement detectors. 4. The distribution of direction-selective units depends on the retinal locality. On-off units are more common outside the 'visual streak' (area centralis) than within it, while the reverse is true for the on-type units. 5. A stimulus configuration was found which would elicit responses from on-type units when the stimulus was moved in the null direction. This 'paradoxical response' was shown to be associated with the silent receptive field surround. 6. The four preferred directions of the on-off units were shown to correspond to the directions of retinal image motion produced by contractions of the four rectus eye muscles. This fact, combined with data on velocity sensitivity and retinal distribution of on-off units, suggests that the on-off units are involved in control of reflex eye movements. 7. The on-off direction-selective units may provide error signals to a visual servo system which minimizes retinal image motion.
This hypothesis agrees with the known characteristics of the rabbit's visual following reflexes, specifically, the slow phase of optokinetic nystagmus. PMID:5710424
Heberle, Henry; Carazzolle, Marcelo Falsarella; Telles, Guilherme P; Meirelles, Gabriela Vaz; Minghim, Rosane
2017-09-13
The advent of "omics" science has brought new perspectives in contemporary biology through the high-throughput analyses of molecular interactions, providing new clues in protein/gene function and in the organization of biological pathways. Biomolecular interaction networks, or graphs, are simple abstract representations where the components of a cell (e.g. proteins, metabolites etc.) are represented by nodes and their interactions are represented by edges. An appropriate visualization of data is crucial for understanding such networks, since pathways are related to functions that occur in specific regions of the cell. The force-directed layout is an important and widely used technique to draw networks according to their topologies. Placing the networks into cellular compartments helps to quickly identify where network elements are located and, more specifically, concentrated. Currently, only a few tools provide the capability of visually organizing networks by cellular compartments. Most of them cannot handle large and dense networks. Even for small networks with hundreds of nodes the available tools are not able to reposition the network while the user is interacting, limiting the visual exploration capability. Here we propose CellNetVis, a web tool to easily display biological networks in a cell diagram employing a constrained force-directed layout algorithm. The tool is freely available and open-source. It was originally designed for networks generated by the Integrated Interactome System and can be used with networks from other databases, such as InnateDB. CellNetVis has been shown to be applicable for dynamic investigation of complex networks over a consistent representation of a cell on the Web, with capabilities not matched elsewhere.
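The constrained force-directed layout idea can be sketched compactly. The following is a toy illustration, not CellNetVis code: the force constants, the step size, and the single circular "compartment" boundary are all assumptions made for demonstration.

```python
import math
import random

def constrained_force_layout(nodes, edges, radius=1.0, iters=200, k=0.1):
    """Toy constrained force-directed layout: spring attraction along
    edges, pairwise repulsion between all nodes, and a constraint that
    clamps every node inside a circle of the given radius (a stand-in
    for a cellular compartment boundary)."""
    pos = {n: [random.uniform(-0.5, 0.5), random.uniform(-0.5, 0.5)]
           for n in nodes}
    for _ in range(iters):
        force = {n: [0.0, 0.0] for n in nodes}
        # Repulsion between every pair of nodes (inverse-square style).
        for i, a in enumerate(nodes):
            for b in nodes[i + 1:]:
                dx = pos[a][0] - pos[b][0]
                dy = pos[a][1] - pos[b][1]
                d2 = dx * dx + dy * dy + 1e-9
                f = 0.01 / d2
                force[a][0] += f * dx; force[a][1] += f * dy
                force[b][0] -= f * dx; force[b][1] -= f * dy
        # Spring attraction along edges.
        for a, b in edges:
            dx = pos[b][0] - pos[a][0]
            dy = pos[b][1] - pos[a][1]
            force[a][0] += k * dx; force[a][1] += k * dy
            force[b][0] -= k * dx; force[b][1] -= k * dy
        # Integrate, then project back inside the compartment circle.
        for n in nodes:
            pos[n][0] += 0.1 * force[n][0]
            pos[n][1] += 0.1 * force[n][1]
            r = math.hypot(pos[n][0], pos[n][1])
            if r > radius:
                pos[n][0] *= radius / r
                pos[n][1] *= radius / r
    return pos
```

A real tool would use one such constraint region per compartment (nucleus, cytoplasm, membrane) rather than a single circle.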
Real-time decoding of the direction of covert visuospatial attention
NASA Astrophysics Data System (ADS)
Andersson, Patrik; Ramsey, Nick F.; Raemaekers, Mathijs; Viergever, Max A.; Pluim, Josien P. W.
2012-08-01
Brain-computer interfaces (BCIs) make it possible to translate a person’s intentions into actions without depending on the muscular system. Brain activity is measured and classified into commands, thereby creating a direct link between the mind and the environment, enabling, e.g., cursor control or navigation of a wheelchair or robot. Most BCI research is conducted with scalp EEG, but recent developments move toward intracranial electrodes for paralyzed people. The vast majority of BCI studies focus on the motor system as the appropriate target for recording and decoding movement intentions. However, properties of the visual system may make it an attractive and intuitive alternative. We report on a study investigating the feasibility of decoding covert visuospatial attention in real time, exploiting the full potential of a 7 T MRI scanner to obtain the necessary signal quality and capitalizing on earlier fMRI studies indicating that covert visuospatial attention changes activity in the visual areas that respond to stimuli presented in the attended area of the visual field. Healthy volunteers were instructed to shift their attention from the center of the screen to one of four static targets in the periphery, without moving their eyes from the center. During the first part of the fMRI run, the relevant brain regions were located using incremental statistical analysis. During the second part, the activity in these regions was extracted and classified, and the subject was given visual feedback of the result. Performance was assessed as the number of trials in which the real-time classifier correctly identified the direction of attention. On average, 80% of trials were correctly classified (chance level <25%) based on a single image volume, indicating very high decoding performance.
While we restricted the experiment to five attention target regions (four peripheral and one central), the number of directions can be higher provided the brain activity patterns can be distinguished. In summary, the visual system promises to be an effective target for BCI control.
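A rough illustration of the decoding step described above, not the authors' actual pipeline (which used incremental statistical analysis at 7 T): a nearest-mean classifier over region-of-interest activity. The array shapes and function names here are assumptions.

```python
import numpy as np

def train_roi_means(trials, labels):
    """Mean ROI activity pattern per attention direction.
    trials: (n_trials, n_rois) array; labels: direction per trial."""
    return {d: trials[labels == d].mean(axis=0) for d in np.unique(labels)}

def classify_volume(volume, means):
    """Assign a single image volume (one ROI activity vector) to the
    direction whose mean pattern is closest in Euclidean distance."""
    return min(means, key=lambda d: np.linalg.norm(volume - means[d]))
```

Per-volume classification is what makes such a decoder usable for real-time feedback, since no trial averaging is required.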
Kindlmann, Gordon; Chiw, Charisee; Seltzer, Nicholas; Samuels, Lamont; Reppy, John
2016-01-01
Many algorithms for scientific visualization and image analysis are rooted in the world of continuous scalar, vector, and tensor fields, but are programmed in low-level languages and libraries that obscure their mathematical foundations. Diderot is a parallel domain-specific language that is designed to bridge this semantic gap by providing the programmer with a high-level, mathematical programming notation that allows direct expression of mathematical concepts in code. Furthermore, Diderot provides parallel performance that takes advantage of modern multicore processors and GPUs. The high-level notation allows a concise and natural expression of the algorithms and the parallelism allows efficient execution on real-world datasets.
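To illustrate the semantic gap Diderot addresses: probing the gradient of a continuously reconstructed field, which a Diderot program expresses directly as ∇F(pos), takes noticeably more machinery in a general-purpose language. A minimal Python sketch follows; bilinear reconstruction and central differences are assumed choices for illustration (Diderot supports higher-order reconstruction kernels).

```python
import numpy as np

def probe_gradient(field, x, y, h=1e-3):
    """Probe the gradient of a 2D scalar field at a real-valued
    position: reconstruct a continuous field from the discrete grid by
    bilinear interpolation, then take central differences."""
    def sample(px, py):
        i, j = int(px), int(py)          # cell indices
        fx, fy = px - i, py - j          # fractional offsets
        return ((1 - fx) * (1 - fy) * field[j, i]
                + fx * (1 - fy) * field[j, i + 1]
                + (1 - fx) * fy * field[j + 1, i]
                + fx * fy * field[j + 1, i + 1])
    gx = (sample(x + h, y) - sample(x - h, y)) / (2 * h)
    gy = (sample(x, y + h) - sample(x, y - h)) / (2 * h)
    return gx, gy
```

The bookkeeping of indices, offsets, and stencils is exactly what a field-based language hides behind a single mathematical operator.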
Laryngeal videostroboscopy in the dog model: a simplified technique and applications
NASA Astrophysics Data System (ADS)
Coleman, John R., Jr.; Reinisch, Lou; Smith, Shane; Deriso, Walter; Ossoff, Jacob; Huang, Shan; Garrett, C. Gaelyn
1998-07-01
Laryngeal videostroboscopy (LVS) allows the physician to examine the vibratory free edge of the vocal fold, providing direct visualization of the vocal fold surface and indirect visualization of the substance of the vocal fold. Previously in dog LVS, electrical stimulation of the superior and recurrent laryngeal nerves, or painful stimuli in the lightly anesthetized animal, provided the impetus for glottic closure. In this paper we present a new technique for LVS in the dog model that involves mechanical traction on arytenoid adduction sutures to achieve vocal fold adduction. This method is safe, effective, and reproducible, and the potential applications are numerous.
STRING 3: An Advanced Groundwater Flow Visualization Tool
NASA Astrophysics Data System (ADS)
Schröder, Simon; Michel, Isabel; Biedert, Tim; Gräfe, Marius; Seidel, Torsten; König, Christoph
2016-04-01
The visualization of 3D groundwater flow is a challenging task. Previous versions of our software STRING [1] solely focused on intuitive visualization of complex flow scenarios for non-professional audiences. STRING, developed by Fraunhofer ITWM (Kaiserslautern, Germany) and delta h Ingenieurgesellschaft mbH (Witten, Germany), provides the necessary means for visualization of both 2D and 3D data on planar and curved surfaces. In this contribution we discuss how to extend this approach to a full 3D tool and its challenges, in continuation of Michel et al. [2]. This elevates STRING from a post-production to an exploration tool for experts. In STRING, moving pathlets provide an intuition of velocity and direction of both steady-state and transient flows. The visualization concept is based on the Lagrangian view of the flow. To capture every detail of the flow, an advanced method for intelligent, time-dependent seeding is used, building on the Finite Pointset Method (FPM) developed by Fraunhofer ITWM. Lifting our visualization approach from 2D into 3D provides many new challenges. With the implementation of a seeding strategy for 3D, one of the major problems has already been solved (see Schröder et al. [3]). As pathlets only provide an overview of the velocity field, other means are required for the visualization of additional flow properties. We suggest the use of Direct Volume Rendering and isosurfaces for scalar features. In this regard we were able to develop an efficient approach for combining the rendering through raytracing of the volume and regular OpenGL geometries. This is achieved through the use of Depth Peeling or A-Buffers for the rendering of transparent geometries. Animation of pathlets requires a strict boundary of the simulation domain. Hence, STRING needs to extract the boundary, even from unstructured data, if it is not provided. In 3D we additionally need a good visualization of the boundary itself.
For this, the silhouette based on the angle of neighboring faces is extracted. Similar algorithms help to find the 2D boundary of cuts through the 3D model. As interactivity plays a big role for an exploration tool, the speed of the drawing routines is also important. To achieve this, different pathlet rendering solutions have been developed and benchmarked. These provide a trade-off between the usage of geometry and fragment shaders. We show that point sprite shaders have superior performance and visual quality over geometry-based approaches. Admittedly, the point sprite-based approach has many non-trivial problems in joining the different parts of the pathlet geometry. This research is funded by the Federal Ministry for Economic Affairs and Energy (Germany). [1] T. Seidel, C. König, M. Schäfer, I. Ostermann, T. Biedert, D. Hietel (2014). Intuitive visualization of transient groundwater flow. Computers & Geosciences, Vol. 67, pp. 173-179. [2] I. Michel, S. Schröder, T. Seidel, C. König (2015). Intuitive Visualization of Transient Flow: Towards a Full 3D Tool. Geophysical Research Abstracts, Vol. 17, EGU2015-1670. [3] S. Schröder, I. Michel, T. Seidel, C.M. König (2015). STRING 3: Full 3D visualization of groundwater flow. In Proceedings of IAMG 2015 Freiberg, pp. 813-822.
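The Lagrangian pathlet concept at the heart of STRING can be sketched as simple particle advection. This toy explicit Euler integrator is an illustration only; it says nothing about STRING's actual FPM-based seeding, transient fields, or GPU rendering.

```python
def advect_pathlet(velocity, seed, dt=0.05, steps=20):
    """Trace a short pathlet from a seed point through a steady 2D
    velocity field using explicit Euler steps. `velocity` maps a
    position (x, y) to a velocity vector (vx, vy)."""
    path = [seed]
    x, y = seed
    for _ in range(steps):
        vx, vy = velocity(x, y)
        x, y = x + dt * vx, y + dt * vy
        path.append((x, y))
    return path
```

Animating many such short pathlets, continually reseeded, is what conveys both direction and speed of the flow at a glance; production codes would use a higher-order integrator such as RK4.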
Edge directed image interpolation with Bamberger pyramids
NASA Astrophysics Data System (ADS)
Rosiles, Jose Gerardo
2005-08-01
Image interpolation is a standard feature in digital image editing software, digital camera systems and printers. Classical methods for resizing produce blurred images with unacceptable quality. Bamberger pyramids and filter banks have been successfully used for texture and image analysis. They provide excellent multiresolution and directional selectivity. In this paper we present an edge-directed image interpolation algorithm which takes advantage of the simultaneous spatial-directional edge localization at the subband level. The proposed algorithm outperforms classical schemes such as bilinear and bicubic interpolation from both the visual and numerical points of view.
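The core intuition of edge-directed interpolation, interpolating along a detected edge rather than across it, can be shown with a toy pixel estimator. The actual algorithm operates on Bamberger subbands; this per-pixel simplification is an assumption for illustration.

```python
def edge_directed_pixel(nw, ne, sw, se):
    """Estimate the missing centre pixel of a 2x2 neighbourhood by
    interpolating along the diagonal with the smaller intensity
    difference, i.e. along the likely edge direction rather than
    across it. Averaging across an edge is what blurs it."""
    d1 = abs(nw - se)  # variation along the NW-SE diagonal
    d2 = abs(ne - sw)  # variation along the NE-SW diagonal
    if d1 <= d2:       # edge runs NW-SE: average along that diagonal
        return (nw + se) / 2.0
    return (ne + sw) / 2.0
```

A plain bilinear estimate would average all four neighbours, mixing values from both sides of the edge and softening it.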
Thaler, Lore; Goodale, Melvyn A.
2011-01-01
Neuropsychological evidence suggests that different brain areas may be involved in movements that are directed at visual targets (e.g., pointing or reaching), and movements that are based on allocentric visual information (e.g., drawing or copying). Here we used fMRI to investigate the neural correlates of these two types of movements in healthy volunteers. Subjects (n = 14) performed right hand movements in either a target-directed task (moving a cursor to a target dot) or an allocentric task (moving a cursor to reproduce the distance and direction between two distal target dots) with or without visual feedback about their hand movement. Movements were monitored with an MR compatible touch panel. A whole brain analysis revealed that movements in allocentric conditions led to an increase in activity in the fundus of the left intra-parietal sulcus (IPS), in posterior IPS, in bilateral dorsal premotor cortex (PMd), and in the lateral occipital complex (LOC). Visual feedback in both target-directed and allocentric conditions led to an increase in activity in area MT+, superior parietal–occipital cortex (SPOC), and posterior IPS (all bilateral). In addition, we found that visual feedback affected brain activity differently in target-directed as compared to allocentric conditions, particularly in the pre-supplementary motor area, PMd, IPS, and parieto-occipital cortex. Our results, in combination with previous findings, suggest that the LOC is essential for allocentric visual coding and that SPOC is involved in visual feedback control. The differences in brain activity between target-directed and allocentric visual feedback conditions may be related to behavioral differences in visual feedback control. Our results advance the understanding of the visual coordinate frame used by the LOC. In addition, because of the nature of the allocentric task, our results have relevance for the understanding of neural substrates of magnitude estimation and vector coding of movements. 
PMID:21941474
Using basic, easily attainable GIS data, AGWA provides a simple, direct, and repeatable methodology for hydrologic model setup, execution, and visualization. AGWA sees activity from over 170 countries and has been downloaded over 11,000 times.
Bourgeois, Alexia; Neveu, Rémi; Vuilleumier, Patrik
2016-01-01
In order to behave adaptively, attention can be directed in space either voluntarily (i.e., endogenously) according to strategic goals, or involuntarily (i.e., exogenously) through reflexive capture by salient or novel events. The emotional or motivational value of stimuli can also strongly influence attentional orienting. However, little is known about how reward-related effects compete or interact with endogenous and exogenous attention mechanisms, particularly outside of awareness. Here we developed a visual search paradigm to study subliminal value-based attentional orienting. We systematically manipulated goal-directed or stimulus-driven attentional orienting and examined whether an irrelevant, but previously rewarded, stimulus could compete with both types of spatial attention during search. Critically, reward was learned without conscious awareness in a preceding phase where one among several visual symbols was consistently paired with a subliminal monetary reinforcement cue. Our results demonstrated that symbols previously associated with a monetary reward received higher attentional priority in the subsequent visual search task, even though these stimuli and reward were no longer task-relevant, and despite reward being unconsciously acquired. Thus, motivational processes operating independently of conscious awareness may exert powerful influences on mechanisms of attentional selection, modulating both stimulus-driven and goal-directed shifts of attention. PMID:27483371
Neurophysiological intraoperative monitoring during an optic nerve schwannoma removal.
San-Juan, Daniel; Escanio Cortés, Manuel; Tena-Suck, Martha; Orozco Garduño, Adolfo Josué; López Pizano, Jesús Alejandro; Villanueva Domínguez, Jonathan; Fernández Gónzalez-Aragón, Maricarmen; Gómez-Amador, Juan Luis
2017-10-01
This paper reports the case of a patient with optic nerve schwannoma and the first use of neurophysiological intraoperative monitoring of visual evoked potentials during the removal of such a tumor, with no postoperative visual damage. Schwannomas are benign neoplasms of the peripheral nervous system arising from neural crest-derived Schwann cells; these tumors are rarely located in the optic nerve, and treatment consists of surgical removal, which carries a high risk of damage to the visual pathway. We report the case of a thirty-year-old woman with an optic nerve schwannoma. The patient underwent surgery for tumor removal on the left optic nerve through a left orbitozygomatic approach, with intraoperative monitoring of the left II and III cranial nerves. We used the Nicolet Endeavour CR IOM (Carefusion, Middleton WI, USA) to perform visual evoked potentials, stimulating binocularly with LED flash goggles with the patient's eyes closed, and direct epidural optic nerve stimulation, delivering a rectangular current pulse rostral to the tumor. At follow-up examination 7 months later, left eye visual acuity was 20/60; the Ishihara score was 8/8 in both eyes; the right eye photomotor reflex was normal while the left eye was mydriatic and areflexic; the optokinetic reflex and conjugate eye movements were normal. In this case, direct epidural electrical stimulation of the optic nerve provided stable waveforms during optic nerve schwannoma resection without visual loss.
Imitation and matching of meaningless gestures: distinct involvement from motor and visual imagery.
Lesourd, Mathieu; Navarro, Jordan; Baumard, Josselin; Jarry, Christophe; Le Gall, Didier; Osiurak, François
2017-05-01
The aim of the present study was to understand the cognitive processes underlying imitation and matching of meaningless gestures. Neuropsychological evidence obtained in brain-damaged patients has shown that distinct cognitive processes support imitation and matching of meaningless gestures. Left-brain-damaged (LBD) patients failed to imitate, while right-brain-damaged (RBD) patients failed to match meaningless gestures. Moreover, other studies with brain-damaged patients showed that LBD patients were impaired in motor imagery while RBD patients were impaired in visual imagery. Thus, we hypothesized that imitation of meaningless gestures might rely on motor imagery, whereas matching of meaningless gestures might be based on visual imagery. In a first experiment, using a correlational design, we demonstrated that posture imitation relies on motor imagery but not on visual imagery (Experiment 1a) and that posture matching relies on visual imagery but not on motor imagery (Experiment 1b). In a second experiment, by directly manipulating the body posture of the participants, we demonstrated that this manipulation produced a difference only in the imitation task, not in the matching task. In conclusion, the present study provides direct evidence that the way we imitate or compare postures depends on motor imagery or visual imagery, respectively. Our results are discussed in the light of recent findings about the mechanisms underlying meaningful and meaningless gestures.
Perceived state of self during motion can differentially modulate numerical magnitude allocation.
Arshad, Q; Nigmatullina, Y; Roberts, R E; Goga, U; Pikovsky, M; Khan, S; Lobo, R; Flury, A-S; Pettorossi, V E; Cohen-Kadosh, R; Malhotra, P A; Bronstein, A M
2016-09-01
Although a direct relationship between numerical allocation and spatial attention has been proposed, recent research suggests that these processes are not directly coupled. In keeping with this, spatial attention shifts induced either via visual or vestibular motion can modulate numerical allocation in some circumstances but not in others. In addition to shifting spatial attention, visual or vestibular motion paradigms also (i) elicit compensatory eye movements which themselves can influence numerical processing and (ii) alter the perceptual state of 'self', inducing changes in bodily self-consciousness impacting upon cognitive mechanisms. Thus, the precise mechanism by which motion modulates numerical allocation remains unknown. We sought to investigate the influence that different perceptual experiences of motion have upon numerical magnitude allocation while controlling for both eye movements and task-related effects. We first used optokinetic visual motion stimulation (OKS) to elicit the perceptual experience of either 'visual world' or 'self' motion, during which eye movements were identical. In a second experiment, we used a vestibular protocol examining the effects of perceived and subliminal angular rotations in darkness, which also provoked identical eye movements. We observed that during the perceptual experience of 'visual world' motion, rightward OKS biased judgments towards smaller numbers, whereas leftward OKS biased judgments towards larger numbers. During the perceptual experience of 'self-motion', judgments were biased towards larger numbers irrespective of the OKS direction. Contrastingly, vestibular motion perception was found not to modulate numerical magnitude allocation, nor was there any differential modulation when comparing 'perceived' vs. 'subliminal' rotations.
We provide a novel demonstration that numerical magnitude allocation can be differentially modulated by the perceptual state of self during visual but not vestibular mediated motion. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Transcranial direct current stimulation enhances recovery of stereopsis in adults with amblyopia.
Spiegel, Daniel P; Li, Jinrong; Hess, Robert F; Byblow, Winston D; Deng, Daming; Yu, Minbin; Thompson, Benjamin
2013-10-01
Amblyopia is a neurodevelopmental disorder of vision caused by abnormal visual experience during early childhood that is often considered to be untreatable in adulthood. Recently, it has been shown that a novel dichoptic videogame-based treatment for amblyopia can improve visual function in adult patients, at least in part, by reducing inhibition of inputs from the amblyopic eye to the visual cortex. Non-invasive anodal transcranial direct current stimulation has been shown to reduce the activity of inhibitory cortical interneurons when applied to the primary motor or visual cortex. In this double-blind, sham-controlled cross-over study we tested the hypothesis that anodal transcranial direct current stimulation of the visual cortex would enhance the therapeutic effects of dichoptic videogame-based treatment. A homogeneous group of 16 young adults (mean age 22.1 ± 1.1 years) with amblyopia were studied to compare the effect of dichoptic treatment alone and dichoptic treatment combined with visual cortex direct current stimulation on measures of binocular (stereopsis) and monocular (visual acuity) visual function. The combined treatment led to greater improvements in stereoacuity than dichoptic treatment alone, indicating that direct current stimulation of the visual cortex boosts the efficacy of dichoptic videogame-based treatment. This intervention warrants further evaluation as a novel therapeutic approach for adults with amblyopia.
NASA Astrophysics Data System (ADS)
Mann, Christopher; Narasimhamurthi, Natarajan
1998-08-01
This paper discusses a specific implementation of a web- and component-based simulation system. The overall simulation container is implemented within a web page viewed with Microsoft's Internet Explorer 4.0 web browser. Microsoft's ActiveX/Distributed Component Object Model object interfaces are used in conjunction with the Microsoft DirectX graphics APIs to provide visualization functionality for the simulation. The MathWorks' Matlab computer-aided control system design program is used as an ActiveX automation server to provide the compute engine for the simulations.
Emergency vehicle alert system, phase 2
NASA Technical Reports Server (NTRS)
Barr, Tom; Harper, Warren; Reed, Bill; Wallace, David
1993-01-01
The EVAS provides warning for hearing-impaired motor vehicle drivers that an emergency vehicle is in the local vicinity. Direction and distance to the emergency vehicle are presented visually to the driver. This is accomplished by a special RF transmission/reception system. During this phase the receiver and transmitter from Phase 1 were updated and modified and a directional antenna developed. The system was then field tested with good results. Static and dynamic (moving vehicle) tests were made with the direction determined correctly 98 percent of the time.
Top-down influence on the visual cortex of the blind during sensory substitution
Murphy, Matthew C.; Nau, Amy C.; Fisher, Christopher; Kim, Seong-Gi; Schuman, Joel S.; Chan, Kevin C.
2017-01-01
Visual sensory substitution devices provide a non-surgical and flexible approach to vision rehabilitation in the blind. These devices convert images taken by a camera into cross-modal sensory signals that are presented as a surrogate for direct visual input. While previous work has demonstrated that the visual cortex of blind subjects is recruited during sensory substitution, the cognitive basis of this activation remains incompletely understood. To test the hypothesis that top-down input provides a significant contribution to this activation, we performed functional MRI scanning in 11 blind (7 acquired and 4 congenital) and 11 sighted subjects under two conditions: passive listening of image-encoded soundscapes before sensory substitution training and active interpretation of the same auditory sensory substitution signals after a 10-minute training session. We found that the modulation of visual cortex activity due to active interpretation was significantly stronger in the blind over sighted subjects. In addition, congenitally blind subjects showed stronger task-induced modulation in the visual cortex than acquired blind subjects. In a parallel experiment, we scanned 18 blind (11 acquired and 7 congenital) and 18 sighted subjects at rest to investigate alterations in functional connectivity due to visual deprivation. The results demonstrated that visual cortex connectivity of the blind shifted away from sensory networks and toward known areas of top-down input. Taken together, our data support the model of the brain, including the visual system, as a highly flexible task-based and not sensory-based machine. PMID:26584776
Effects of kinesthetic and cutaneous stimulation during the learning of a viscous force field.
Rosati, Giulio; Oscari, Fabio; Pacchierotti, Claudio; Prattichizzo, Domenico
2014-01-01
Haptic stimulation can help humans learn perceptual motor skills, but the precise way in which it influences the learning process has not yet been clarified. This study investigates the role of the kinesthetic and cutaneous components of haptic feedback during the learning of a viscous curl field, also taking into account the influence of visual feedback. We present the results of an experiment in which 17 subjects were asked to make reaching movements while grasping a joystick and wearing a pair of cutaneous devices. Each device was able to provide cutaneous contact forces through a moving platform. The subjects received visual feedback about the joystick's position. During the experiment, the system delivered a perturbation through (1) full haptic stimulation, (2) kinesthetic stimulation alone, (3) cutaneous stimulation alone, (4) altered visual feedback, or (5) altered visual feedback plus cutaneous stimulation. Conditions 1, 2, and 3 were also tested with the cancellation of the visual feedback of position error. Results indicate that kinesthetic stimuli played a primary role during motor adaptation to the viscous field, which is a fundamental premise to motor learning and rehabilitation. On the other hand, cutaneous stimulation alone appeared not to bring significant direct or adaptation effects, although it helped in reducing direct effects when used in addition to kinesthetic stimulation. The experimental conditions with visual cancellation of position error showed slower adaptation rates, indicating that visual feedback actively contributes to the formation of internal models. However, modest learning effects were detected when the visual information was used to render the viscous field.
The contributions of visual and central attention to visual working memory.
Souza, Alessandra S; Oberauer, Klaus
2017-10-01
We investigated the role of two kinds of attention-visual and central attention-for the maintenance of visual representations in working memory (WM). In Experiment 1 we directed attention to individual items in WM by presenting cues during the retention interval of a continuous delayed-estimation task, and instructing participants to think of the cued items. Attending to items improved recall commensurate with the frequency with which items were attended (0, 1, or 2 times). Experiments 1 and 3 further tested which kind of attention-visual or central-was involved in WM maintenance. We assessed the dual-task costs of two types of distractor tasks, one tapping sustained visual attention and one tapping central attention. Only the central attention task yielded substantial dual-task costs, implying that central attention substantially contributes to maintenance of visual information in WM. Experiment 2 confirmed that the visual-attention distractor task was demanding enough to disrupt performance in a task relying on visual attention. We combined the visual-attention and the central-attention distractor tasks with a multiple object tracking (MOT) task. Distracting visual attention, but not central attention, impaired MOT performance. Jointly, the three experiments provide a double dissociation between visual and central attention, and between visual WM and visual object tracking: Whereas tracking multiple targets across the visual field depends on visual attention, visual WM depends mostly on central attention.
Inferring the direction of implied motion depends on visual awareness
Faivre, Nathan; Koch, Christof
2014-01-01
Visual awareness of an event, object, or scene is, in essence, an integrated experience, whereby the different visual features composing an object (e.g., orientation, color, shape) appear as a unified percept and are processed as a whole. Here, we tested in human observers whether perceptual integration of static motion cues depends on awareness by measuring the capacity to infer the direction of motion implied by a static visible or invisible image under continuous flash suppression. Using measures of directional adaptation, we found that visible but not invisible implied-motion adaptors biased the perception of real motion probes. In a control experiment, we found that invisible adaptors implying motion primed the perception of subsequent probes when they were identical (i.e., repetition priming), but not when they only shared the same direction (i.e., direction priming). Furthermore, using a model of visual processing, we argue that repetition priming effects are likely to arise as early as the primary visual cortex. We conclude that although invisible images implying motion undergo some form of nonconscious processing, visual awareness is necessary to make inferences about motion direction. PMID:24706951
Matsunaka, Kumiko; Shibata, Yuki; Yamamoto, Toshikazu
2008-08-01
Study 1 investigated individual differences in spatial cognition amongst visually impaired students and sighted controls, as well as the extent to which visual status contributes to these individual differences. Fifty-eight visually impaired and 255 sighted university students evaluated their sense of direction via self-ratings. Visual impairment contributed to the factors associated with the use and understanding of maps, confirming that maps are generally unfamiliar to visually impaired people. The relationship between psychological stress associated with mobility and individual differences in sense of direction was investigated in Study 2. A stress checklist was administered to the 51 visually impaired students who participated in Study 1. Psychological stress level was related to the understanding and use of maps, as well as to orientation and reorientation, that is, course correction after getting lost. Central visual field deficits were associated with greater mobility-related stress levels than peripheral visual field deficits.
de la Rosa, Stephan; Ekramnia, Mina; Bülthoff, Heinrich H.
2016-01-01
The ability to discriminate between different actions is essential for action recognition and social interactions. Surprisingly, previous research has often probed action recognition mechanisms with tasks that did not require participants to discriminate between actions, e.g., left-right direction discrimination tasks. It is not known to what degree visual processes in direction discrimination tasks are also involved in the discrimination of actions, e.g., when telling apart a handshake from a high-five. Here, we examined whether action discrimination is influenced by movement direction and whether direction discrimination depends on the type of action. We used an action adaptation paradigm to target the visual processes specific to action and direction discrimination. In separate conditions, participants visually adapted to forward- and backward-moving handshake and high-five actions. Participants subsequently categorized either the action or the movement direction of an ambiguous action. The results showed that direction discrimination adaptation effects were modulated by the type of action, but action discrimination adaptation effects were unaffected by movement direction. These results suggest that action discrimination and direction categorization rely on partly different visual information. We propose that action discrimination tasks should be considered for the exploration of visual action recognition mechanisms. PMID:26941633
Ansorge, Ulrich; Francis, Gregory; Herzog, Michael H; Oğmen, Haluk
2008-07-15
The 1990s, the "decade of the brain," witnessed major advances in the study of visual perception, cognition, and consciousness. Impressive techniques in neurophysiology, neuroanatomy, neuropsychology, electrophysiology, psychophysics and brain-imaging were developed to address how the nervous system transforms and represents visual inputs. Many of these advances have dealt with the steady-state properties of processing. To complement this "steady-state approach," more recent research emphasized the importance of dynamic aspects of visual processing. Visual masking has been a paradigm of choice for more than a century when it comes to the study of dynamic vision. A recent workshop (http://lpsy.epfl.ch/VMworkshop/), held in Delmenhorst, Germany, brought together an international group of researchers to present state-of-the-art research on dynamic visual processing with a focus on visual masking. This special issue presents peer-reviewed contributions by the workshop participants and provides a contemporary synthesis of how visual masking can inform the dynamics of human perception, cognition, and consciousness.
CollaborationViz: Interactive Visual Exploration of Biomedical Research Collaboration Networks
Bian, Jiang; Xie, Mengjun; Hudson, Teresa J.; Eswaran, Hari; Brochhausen, Mathias; Hanna, Josh; Hogan, William R.
2014-01-01
Social network analysis (SNA) helps us understand patterns of interaction between social entities. A number of SNA studies have shed light on the characteristics of research collaboration networks (RCNs). In particular, within the Clinical and Translational Science Award (CTSA) community, SNA provides a set of effective tools to quantitatively assess research collaborations and the impact of CTSA. However, descriptive network statistics are difficult for non-experts to understand. In this article, we present our experiences of building meaningful network visualizations to facilitate a series of visual analysis tasks. The basis of our design is multidimensional, visual aggregation of network dynamics. The resulting visualizations can help uncover hidden structures in the networks, elicit new observations of the network dynamics, compare different investigators and investigator groups, determine critical factors in the network evolution, and help direct further analyses. We applied our visualization techniques to explore the biomedical RCNs at the University of Arkansas for Medical Sciences, a CTSA institution. We also created CollaborationViz, an open-source visual analytical tool to help network researchers and administrators understand the network dynamics of research collaborations through interactive visualization. PMID:25405477
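At the heart of such RCN analyses are descriptive statistics like degree centrality, which the article notes are hard for non-experts to read off directly. As a minimal illustrative sketch (not code from CollaborationViz; the function name and the toy co-authorship pairs are invented for illustration), the normalized degree centrality of a collaboration network can be computed as:

```python
from collections import defaultdict

def degree_centrality(edges):
    """Normalized degree centrality for an undirected co-authorship network.

    edges: iterable of (author_a, author_b) collaboration pairs.
    Returns {node: degree / (n - 1)}, the standard SNA normalization.
    """
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    n = len(adj)
    return {node: len(neigh) / (n - 1) for node, neigh in adj.items()}

# Toy network: author C has co-authored with every other author.
collabs = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D")]
print(degree_centrality(collabs)["C"])  # prints 1.0
```

A visualization front end would then map such values to node size or color rather than presenting the raw numbers to non-expert users.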
Vernier But Not Grating Acuity Contributes to an Early Stage of Visual Word Processing.
Tan, Yufei; Tong, Xiuhong; Chen, Wei; Weng, Xuchu; He, Sheng; Zhao, Jing
2018-03-28
The process of reading words depends heavily on efficient visual skills, including analyzing and decomposing basic visual features. Surprisingly, previous reading-related studies have almost exclusively focused on gross aspects of visual skills, while very few have investigated the role of finer skills. The present study filled this gap and examined the relations of two finer visual skills, measured by grating acuity (the ability to resolve periodic luminance variations across space) and Vernier acuity (the ability to detect and discriminate the relative locations of features), to Chinese character processing, as measured by character form-matching and lexical decision tasks in skilled adult readers. The results showed that Vernier acuity was significantly correlated with performance in character form-matching but not visual symbol form-matching, while no correlation was found between grating acuity and character processing. Interestingly, we found no correlation of the two visual skills with lexical decision performance. These findings provide the first empirical evidence that finer visual skills, particularly as reflected in Vernier acuity, may directly contribute to an early stage of hierarchical word processing.
NASA Astrophysics Data System (ADS)
Tavakkol, Sasan; Lynett, Patrick
2017-08-01
In this paper, we introduce an interactive coastal wave simulation and visualization software package called Celeris. Celeris is open-source software that requires minimal preparation to run on a Windows machine. The software solves the extended Boussinesq equations using a hybrid finite volume-finite difference method and supports moving shoreline boundaries. The simulation and visualization are performed on the GPU using Direct3D libraries, which enables the software to run faster than real time. Celeris provides a first-of-its-kind interactive modeling platform for coastal wave applications, and it supports simultaneous visualization with both photorealistic and colormapped rendering capabilities. We validate our software through comparison with three standard benchmarks for non-breaking and breaking waves.
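Celeris solves the extended Boussinesq equations on the GPU; reproducing that is beyond a short example, but the flavor of such depth-averaged wave solvers can be sketched with a much simpler model, a 1-D linear shallow-water system on a periodic grid (all parameters and function names here are illustrative assumptions, not Celeris code):

```python
import numpy as np

# Vastly simplified 1-D *linear* shallow-water step (not the extended
# Boussinesq equations Celeris solves); it illustrates the staggered-grid
# finite-difference update at the core of such wave solvers.
def step(eta, u, H=10.0, g=9.81, dx=1.0, dt=0.01):
    # continuity: d(eta)/dt = -H du/dx ; momentum: du/dt = -g d(eta)/dx
    eta = eta - H * dt * (np.roll(u, -1) - u) / dx   # periodic domain
    u = u - g * dt * (eta - np.roll(eta, 1)) / dx    # forward-backward update
    return eta, u

n = 128
x = np.arange(n)
eta = np.exp(-0.05 * (x - n / 2) ** 2)  # initial Gaussian hump of water
u = np.zeros(n)
mass0 = eta.sum()
for _ in range(200):
    eta, u = step(eta, u)
print(abs(eta.sum() - mass0) < 1e-8)  # total water volume is conserved: True
```

The forward-backward update conserves total water volume on a periodic domain, a basic sanity check that production solvers such as Celeris must also pass against their benchmarks.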
Sounds activate visual cortex and improve visual discrimination.
Feng, Wenfeng; Störmer, Viola S; Martinez, Antigona; McDonald, John J; Hillyard, Steven A
2014-07-16
A recent study in humans (McDonald et al., 2013) found that peripheral, task-irrelevant sounds activated contralateral visual cortex automatically as revealed by an auditory-evoked contralateral occipital positivity (ACOP) recorded from the scalp. The present study investigated the functional significance of this cross-modal activation of visual cortex, in particular whether the sound-evoked ACOP is predictive of improved perceptual processing of a subsequent visual target. A trial-by-trial analysis showed that the ACOP amplitude was markedly larger preceding correct than incorrect pattern discriminations of visual targets that were colocalized with the preceding sound. Dipole modeling of the scalp topography of the ACOP localized its neural generators to the ventrolateral extrastriate visual cortex. These results provide direct evidence that the cross-modal activation of contralateral visual cortex by a spatially nonpredictive but salient sound facilitates the discriminative processing of a subsequent visual target event at the location of the sound. Recordings of event-related potentials to the targets support the hypothesis that the ACOP is a neural consequence of the automatic orienting of visual attention to the location of the sound. Copyright © 2014 the authors.
The visual white matter: The application of diffusion MRI and fiber tractography to vision science
Rokem, Ariel; Takemura, Hiromasa; Bock, Andrew S.; Scherf, K. Suzanne; Behrmann, Marlene; Wandell, Brian A.; Fine, Ione; Bridge, Holly; Pestilli, Franco
2017-01-01
Visual neuroscience has traditionally focused much of its attention on understanding the response properties of single neurons or neuronal ensembles. The visual white matter and the long-range neuronal connections it supports are fundamental in establishing such neuronal response properties and visual function. This review article provides an introduction to measurements and methods to study the human visual white matter using diffusion MRI. These methods allow us to measure the microstructural and macrostructural properties of the white matter in living human individuals; they allow us to trace long-range connections between neurons in different parts of the visual system and to measure the biophysical properties of these connections. We also review a range of findings from recent studies on connections between different visual field maps, the effects of visual impairment on the white matter, and the properties underlying networks that process visual information supporting visual face recognition. Finally, we discuss a few promising directions for future studies. These include new methods for analysis of MRI data, open datasets that are becoming available to study brain connectivity and white matter properties, and open source software for the analysis of these data. PMID:28196374
Progressive Visual Analytics: User-Driven Visual Exploration of In-Progress Analytics.
Stolper, Charles D; Perer, Adam; Gotz, David
2014-12-01
As datasets grow and analytic algorithms become more complex, the typical workflow of analysts launching an analytic, waiting for it to complete, inspecting the results, and then re-launching the computation with adjusted parameters is not realistic for many real-world tasks. This paper presents an alternative workflow, progressive visual analytics, which enables an analyst to inspect partial results of an algorithm as they become available and interact with the algorithm to prioritize subspaces of interest. Progressive visual analytics depends on adapting analytical algorithms to produce meaningful partial results and enable analyst intervention without sacrificing computational speed. The paradigm also depends on adapting information visualization techniques to incorporate the constantly refining results without overwhelming analysts, and on providing interactions to support an analyst directing the analytic. The contributions of this paper include: a description of the progressive visual analytics paradigm; design goals for both the algorithms and visualizations in progressive visual analytics systems; an example progressive visual analytics system (Progressive Insights) for analyzing common patterns in a collection of event sequences; and an evaluation of Progressive Insights and the progressive visual analytics paradigm by clinical researchers analyzing electronic medical records.
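The core idea, an analytic that emits meaningful partial results the interface can render immediately, can be sketched in a few lines (a toy running-mean "analytic", not code from Progressive Insights):

```python
def progressive_mean(points, n_chunks=10):
    """Toy progressive analytic: refine a running mean chunk by chunk,
    yielding a usable partial result after each chunk so a visualization
    can update and the analyst can steer or cancel early."""
    chunk = max(1, len(points) // n_chunks)
    total, count = 0.0, 0
    for i in range(0, len(points), chunk):
        for p in points[i:i + chunk]:
            total += p
            count += 1
        yield total / count  # partial estimate, available immediately

data = list(range(1, 101))  # true mean is 50.5
estimates = list(progressive_mean(data))
print(len(estimates), estimates[-1])  # prints: 10 50.5
```

In a real progressive system, each yield point is also where analyst interventions (re-prioritizing subspaces of interest, cancelling the run) would be honored.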
Visual ergonomics and computer work--is it all about computer glasses?
Jonsson, Christina
2012-01-01
The Swedish Provisions on Work with Display Screen Equipment and the EU Directive on the minimum safety and health requirements for work with display screen equipment cover several important visual ergonomics aspects. However, a review of cases and questions submitted to the Swedish Work Environment Authority clearly shows that most attention is given to the demands for eyesight tests and special computer glasses. Other important visual ergonomics factors are at risk of being neglected. Today computers are used everywhere, both at work and at home. Computers can be laptops, PDAs, tablet computers, smart phones, etc. The demands on eyesight tests and computer glasses still apply, but the visual demands and the visual ergonomics conditions are quite different compared to the use of a stationary computer. Based on this review, we raise the question of whether the demand on the employer to provide employees with computer glasses is outdated.
Fan, Zhao; Harris, John
2010-10-12
In a recent study (Fan, Z., & Harris, J. (2008). Perceived spatial displacement of motion-defined contours in peripheral vision. Vision Research, 48(28), 2793-2804), we demonstrated that virtual contours defined by two regions of dots moving in opposite directions were displaced perceptually in the direction of motion of the dots in the more eccentric region when the contours were viewed in the right visual field. Here, we show that the magnitude and/or direction of these displacements varies in different quadrants of the visual field. When contours were presented in the lower visual field, the direction of perceived contour displacement was consistent with that when both contours were presented in the right visual field. However, this illusory motion-induced spatial displacement disappeared when both contours were presented in the upper visual field. Also, perceived contour displacement in the direction of the more eccentric dots was larger in the right than in the left visual field, perhaps because of a hemispheric asymmetry in attentional allocation. Quadrant-based analyses suggest that the pattern of results arises from opposite directions of perceived contour displacement in the upper-left and lower-right visual quadrants, which depend on the relative strengths of two effects: a greater sensitivity to centripetal motion, and an asymmetry in the allocation of spatial attention. Copyright © 2010 Elsevier Ltd. All rights reserved.
Kiefer, Markus; Ansorge, Ulrich; Haynes, John-Dylan; Hamker, Fred; Mattler, Uwe; Verleger, Rolf; Niedeggen, Michael
2011-01-01
Psychological and neuroscience approaches have promoted much progress in elucidating the cognitive and neural mechanisms that underlie phenomenal visual awareness during the last decades. In this article, we provide an overview of the latest research investigating important phenomena in conscious and unconscious vision. We identify general principles to characterize conscious and unconscious visual perception, which may serve as important building blocks for a unified model to explain the plethora of findings. We argue that in particular the integration of principles from both conscious and unconscious vision is advantageous and provides critical constraints for developing adequate theoretical models. Based on the principles identified in our review, we outline essential components of a unified model of conscious and unconscious visual perception. We propose that awareness refers to consolidated visual representations, which are accessible to the entire brain and therefore globally available. However, visual awareness not only depends on consolidation within the visual system, but is additionally the result of a post-sensory gating process, which is mediated by higher-level cognitive control mechanisms. We further propose that amplification of visual representations by attentional sensitization is not exclusive to the domain of conscious perception, but also applies to visual stimuli, which remain unconscious. Conscious and unconscious processing modes are highly interdependent with influences in both directions. We therefore argue that exactly this interdependence renders a unified model of conscious and unconscious visual perception valuable. Computational modeling jointly with focused experimental research could lead to a better understanding of the plethora of empirical phenomena in consciousness research. PMID:22253669
Visual direction finding by fishes
NASA Technical Reports Server (NTRS)
Waterman, T. H.
1972-01-01
The use of visual orientation, in the absence of landmarks, for underwater direction finding by fishes is reviewed. Celestial directional cues observed directly near the water surface, or indirectly at an asymptotic depth, are suggested as possible orientation aids.
Microreact: visualizing and sharing data for genomic epidemiology and phylogeography
Argimón, Silvia; Abudahab, Khalil; Goater, Richard J. E.; Fedosejev, Artemij; Bhai, Jyothish; Glasner, Corinna; Feil, Edward J.; Holden, Matthew T. G.; Yeats, Corin A.; Grundmann, Hajo; Spratt, Brian G.
2016-01-01
Visualization is frequently used to aid our interpretation of complex datasets. Within microbial genomics, visualizing the relationships between multiple genomes as a tree provides a framework onto which associated data (geographical, temporal, phenotypic and epidemiological) are added to generate hypotheses and to explore the dynamics of the system under investigation. Selected static images are then used within publications to highlight the key findings to a wider audience. However, these images are a very inadequate way of exploring and interpreting the richness of the data. There is, therefore, a need for flexible, interactive software that presents the population genomic outputs and associated data in a user-friendly manner for a wide range of end users, from trained bioinformaticians to front-line epidemiologists and health workers. Here, we present Microreact, a web application for the easy visualization of datasets consisting of any combination of trees, geographical, temporal and associated metadata. Data files can be uploaded to Microreact directly via the web browser or by linking to their location (e.g. from Google Drive/Dropbox or via API), and an integrated visualization via trees, maps, timelines and tables provides interactive querying of the data. The visualization can be shared as a permanent web link among collaborators, or embedded within publications to enable readers to explore and download the data. Microreact can act as an end point for any tool or bioinformatic pipeline that ultimately generates a tree, and provides a simple, yet powerful, visualization method that will aid research and discovery and the open sharing of datasets. PMID:28348833
Cue-recruitment for extrinsic signals after training with low information stimuli.
Jain, Anshul; Fuller, Stuart; Backus, Benjamin T
2014-01-01
Cue-recruitment occurs when a previously ineffective signal comes to affect the perceptual appearance of a target object, in a manner similar to the trusted cues with which the signal was put into correlation during training. Jain, Fuller and Backus reported that extrinsic signals, those not carried by the target object itself, were not recruited even after extensive training. However, recent studies have shown that training using weakened trusted cues can facilitate recruitment of intrinsic signals. The current study was designed to examine whether extrinsic signals can be recruited by putting them in correlation with weakened trusted cues. Specifically, we tested whether an extrinsic visual signal, the rotary motion direction of an annulus of random dots, and an extrinsic auditory signal, direction of an auditory pitch glide, can be recruited as cues for the rotation direction of a Necker cube. We found learning, albeit weak, for visual but not for auditory signals. These results extend the generality of the cue-recruitment phenomenon to an extrinsic signal and provide further evidence that the visual system learns to use new signals most quickly when other, long-trusted cues are unavailable or unreliable.
Low Cost Embedded Stereo System for Underwater Surveys
NASA Astrophysics Data System (ADS)
Nawaf, M. M.; Boï, J.-M.; Merad, D.; Royer, J.-P.; Drap, P.
2017-11-01
This paper details the hardware and software design and realization of a hand-held embedded stereo system for underwater imaging. The designed system can run most image processing techniques smoothly in real time. The developed functions provide direct visual feedback on the quality of the captured images, which helps in taking appropriate actions in terms of movement speed and lighting conditions. The proposed functionalities can be easily customized or upgraded, and new functions can be easily added thanks to the available supported libraries. Furthermore, by connecting the designed system to a more powerful computer, real-time visual odometry can be run on the captured images to provide live navigation and a site coverage map. We use a visual odometry method adapted to systems with low computational resources and long autonomy. The system was tested in a real context and showed its robustness, with promising perspectives for further work.
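The paper's own odometry method is not reproduced here, but the geometric core of feature-based visual odometry, recovering the camera's rigid motion from matched feature points between two frames, can be sketched with a 2-D Kabsch/Procrustes fit (the function name and test data are illustrative assumptions):

```python
import numpy as np

def rigid_2d(p, q):
    """Estimate rotation R and translation t such that q ~= R p + t
    (Kabsch/Procrustes), the per-frame pose step of feature-based
    visual odometry. p, q: (N, 2) matched feature coordinates."""
    pc, qc = p - p.mean(0), q - q.mean(0)
    U, _, Vt = np.linalg.svd(pc.T @ qc)            # cross-covariance SVD
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflection
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = q.mean(0) - R @ p.mean(0)
    return R, t

# Synthetic check: rotate and translate random "features", then recover.
theta = 0.1
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
p = np.random.default_rng(1).random((20, 2))
q = p @ R_true.T + np.array([0.5, -0.2])
R, t = rigid_2d(p, q)
print(np.allclose(R, R_true))  # recovered rotation matches: True
```

An embedded pipeline would chain such per-frame estimates, typically with outlier rejection (e.g., RANSAC) before the fit, to build the live navigation and coverage map the abstract describes.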
Emotion and Perception: The Role of Affective Information
Zadra, Jonathan R.; Clore, Gerald L.
2011-01-01
Visual perception and emotion are traditionally considered separate domains of study. In this article, however, we review research showing them to be less separable than usually assumed. In fact, emotions routinely affect how and what we see. Fear, for example, can affect low-level visual processes, sad moods can alter susceptibility to visual illusions, and goal-directed desires can change the apparent size of goal-relevant objects. In addition, perception of the layout of the physical environment, including the apparent steepness of a hill and the distance to the ground from a balcony, can be affected by emotional states. We propose that emotions provide embodied information about the costs and benefits of anticipated action, information that can be used automatically and immediately, circumventing the need for cogitating on the possible consequences of potential actions. Emotions thus provide a strong motivating influence on how the environment is perceived. PMID:22039565
Keil, Andreas; Sabatinelli, Dean; Ding, Mingzhou; Lang, Peter J.; Ihssen, Niklas; Heim, Sabine
2013-01-01
Re-entrant modulation of visual cortex has been suggested as a critical process for enhancing perception of emotionally arousing visual stimuli. This study explores how the time information inherent in large-scale electrocortical measures can be used to examine the functional relationships among the structures involved in emotional perception. Granger causality analysis was conducted on steady-state visual evoked potentials elicited by emotionally arousing pictures flickering at a rate of 10 Hz. This procedure allows one to examine the direction of neural connections. Participants viewed pictures that varied in emotional content, depicting people in neutral contexts, erotica, or interpersonal attack scenes. Results demonstrated increased coupling between visual and cortical areas when viewing emotionally arousing content. Specifically, intraparietal to inferotemporal and precuneus to calcarine connections were stronger for emotionally arousing picture content. Thus, we provide evidence for re-entrant signal flow during emotional perception, which originates from higher tiers and enters lower tiers of visual cortex. PMID:18095279
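Granger causality asks whether one signal's past improves prediction of another beyond that signal's own past; the direction with the larger effect indicates the dominant signal flow. A minimal one-lag sketch of the idea (not the authors' steady-state VEP pipeline; variable names and the synthetic data are invented for illustration):

```python
import numpy as np

def granger_f(x, y, lag=1):
    """Minimal one-lag Granger test: does past x help predict y beyond
    y's own past? Returns the F statistic; large values indicate a
    directed x -> y influence."""
    Y, y_past, x_past = y[lag:], y[:-lag], x[:-lag]
    ones = np.ones_like(Y)
    # restricted model: y_t ~ y_{t-1}
    Xr = np.column_stack([ones, y_past])
    rss_r = np.sum((Y - Xr @ np.linalg.lstsq(Xr, Y, rcond=None)[0]) ** 2)
    # unrestricted model: y_t ~ y_{t-1} + x_{t-1}
    Xu = np.column_stack([ones, y_past, x_past])
    rss_u = np.sum((Y - Xu @ np.linalg.lstsq(Xu, Y, rcond=None)[0]) ** 2)
    n, q = len(Y), 1  # q = number of restrictions
    return ((rss_r - rss_u) / q) / (rss_u / (n - Xu.shape[1]))

rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
y = np.roll(x, 1) * 0.8 + rng.standard_normal(2000) * 0.1  # y driven by past x
print(granger_f(x, y) > granger_f(y, x))  # influence is directional: True
```

The study applies this logic to source-localized 10 Hz steady-state responses, where such an asymmetry between area pairs is taken as evidence of re-entrant flow from higher-tier into lower-tier visual cortex.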
Visual and tactile interfaces for bi-directional human robot communication
NASA Astrophysics Data System (ADS)
Barber, Daniel; Lackey, Stephanie; Reinerman-Jones, Lauren; Hudson, Irwin
2013-05-01
Seamless integration of unmanned systems and Soldiers in the operational environment requires robust communication capabilities. Multi-Modal Communication (MMC) facilitates achieving this goal through redundancy and levels of communication superior to single-mode interaction, using auditory, visual, and tactile modalities. Visual signaling using arm and hand gestures is a natural method of communication between people. Visual signals standardized within the U.S. Army Field Manual and in use by Soldiers provide a foundation for developing gestures for human-to-robot communication. Emerging technologies using Inertial Measurement Units (IMUs) enable classification of arm and hand gestures for communication with a robot without the line-of-sight requirement of computer vision techniques. These devices improve the robustness of interpreting gestures in noisy environments and are capable of classifying signals relevant to operational tasks. Closing the communication loop between Soldiers and robots requires that robots have the ability to return equivalent messages. Existing visual signals from robots to humans typically require highly anthropomorphic features not present on military vehicles. Tactile displays tap into an unused modality for robot-to-human communication. Typically used for hands-free navigation and cueing, existing tactile display technologies are used here to deliver equivalents of the visual signals in the U.S. Army Field Manual. This paper describes ongoing research to collaboratively develop tactile communication methods with Soldiers, measure classification accuracy of visual signal interfaces, and provide an integration example including two robotic platforms.
Can You Hear That Peak? Utilization of Auditory and Visual Feedback at Peak Limb Velocity.
Loria, Tristan; de Grosbois, John; Tremblay, Luc
2016-09-01
At rest, the central nervous system combines and integrates multisensory cues to yield an optimal percept. When engaging in action, the relative weighting of sensory modalities has been shown to be altered. Because the timing of peak velocity is the critical moment in some goal-directed movements (e.g., overarm throwing), the current study sought to test whether visual and auditory cues are optimally integrated at that specific kinematic marker when it is the critical part of the trajectory. Participants performed an upper-limb movement in which they were required to reach their peak limb velocity when the right index finger intersected a virtual target (i.e., a flinging movement). Brief auditory, visual, or audiovisual feedback (i.e., 20 ms in duration) was provided to participants at peak limb velocity. Performance was assessed primarily through the resultant position of peak limb velocity and the variability of that position. Relative to when no feedback was provided, auditory feedback significantly reduced the resultant endpoint variability of the finger position at peak limb velocity. However, no such reductions were found for the visual or audiovisual feedback conditions. Further, providing both auditory and visual cues concurrently also failed to yield the theoretically predicted improvements in endpoint variability. Overall, the central nervous system can make significant use of an auditory cue but may not optimally integrate visual and auditory cues at peak limb velocity, when peak velocity is the critical part of the trajectory.
Using visuo-kinetic virtual reality to induce illusory spinal movement: the MoOVi Illusion
Harvie, Daniel S.; Smith, Ross T.; Hunter, Estin V.; Davis, Miles G.; Sterling, Michele; Moseley, G. Lorimer
2017-01-01
Background: Illusions that alter perception of the body provide novel opportunities to target brain-based contributions to problems such as persistent pain. One example of this, mirror therapy, uses vision to augment perceived movement of a painful limb to treat pain. Since mirrors can't be used to induce augmented neck or other spinal movement, we aimed to test whether such an illusion could be achieved using virtual reality, in advance of testing its potential therapeutic benefit. We hypothesised that perceived head rotation would depend on visually suggested movement. Method: In a within-subjects repeated measures experiment, 24 healthy volunteers performed neck movements to 50° of rotation, while a virtual reality system delivered corresponding visual feedback that was offset by a factor of 50%-200% (the Motor Offset Visual Illusion, MoOVi), thus simulating more or less movement than that actually occurring. At 50° of real-world head rotation, participants pointed in the direction that they perceived they were facing. The discrepancy between actual and perceived direction was measured and compared between conditions. The impact of including multisensory (auditory and visual) feedback, the presence of a virtual body reference, and the use of 360° immersive virtual reality with and without three-dimensional properties was also investigated. Results: Perception of head movement was dependent on visual-kinaesthetic feedback (p = 0.001, partial eta squared = 0.17). That is, altered visual feedback caused a kinaesthetic drift in the direction of the visually suggested movement. The magnitude of the drift was not moderated by secondary variables such as the addition of illusory auditory feedback, the presence of a virtual body reference, or three-dimensionality of the scene. Discussion: Virtual reality can be used to augment perceived movement and body position, such that one can perform a small movement, yet perceive a large one.
The MoOVi technique tested here has clear potential for assessment and therapy of people with spinal pain. PMID:28243537
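The offset manipulation in the MoOVi study reduces to a simple model: the headset renders a rotation equal to the actual rotation times a gain between 0.5 and 2.0, and kinaesthetic drift is the gap between perceived and actual heading. A minimal sketch of that relationship; the function names and the example numbers are illustrative, not part of the study's software:

```python
def moovi_feedback(actual_rotation_deg, gain):
    """Return the visually displayed rotation for a given gain.

    gain < 1 understates the movement; gain > 1 overstates it
    (the study used gains from 0.5 to 2.0, i.e. 50%-200%).
    """
    return actual_rotation_deg * gain

def kinaesthetic_drift(perceived_deg, actual_deg):
    """Drift: discrepancy between where the participant points
    (perceived facing direction) and where they actually face."""
    return perceived_deg - actual_deg

# At 50 degrees of real rotation with a 1.5x gain, the headset
# shows 75 degrees of virtual rotation.
assert moovi_feedback(50, 1.5) == 75.0
```

The study's finding is that drift tends to follow the sign of the gain offset: gains above 1 pull perceived rotation beyond the actual movement.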
The economic burden of visual impairment and blindness: a systematic review.
Köberlein, Juliane; Beifus, Karolina; Schaffert, Corinna; Finger, Robert P
2013-11-07
Visual impairment and blindness (VI&B) cause a considerable and increasing economic burden in all high-income countries due to population ageing. Thus, we conducted a review of the literature to better understand all relevant costs associated with VI&B and to develop a multiperspective overview. Systematic review: Two independent reviewers searched the relevant literature and assessed the studies for inclusion and exclusion criteria as well as quality. Interventional, non-interventional and cost of illness studies, conducted prior to May 2012, investigating direct and indirect costs as well as intangible effects related to visual impairment and blindness were included. We followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement approach to identify the relevant studies. A meta-analysis was not performed due to the variability of the reported cost categories and varying definition of visual impairment. A total of 22 studies were included. Hospitalisation and use of medical services around diagnosis and treatment at the onset of VI&B were the largest contributor to direct medical costs. The mean annual expenses per patient were found to be US$ purchasing power parities (PPP) 12 175-14 029 for moderate visual impairment, US$ PPP 13 154-16 321 for severe visual impairment and US$ PPP 14 882-24 180 for blindness, almost twofold the costs for non-blind patients. Informal care was the major contributor to other direct costs, with the time spent by caregivers increasing from 5.8 h/week (or US$ PPP 263) for persons with vision >20/32 up to 94.1 h/week (or US$ PPP 55 062) for persons with vision ≤20/250. VI&B caused considerable indirect costs due to productivity losses, premature mortality and dead-weight losses. VI&B cause a considerable economic burden for affected persons, their caregivers and society at large, which increases with the degree of visual impairment. 
This review provides insight into the distribution of costs and the economic impact of VI&B.
Causal evidence for retina dependent and independent visual motion computations in mouse cortex
Hillier, Daniel; Fiscella, Michele; Drinnenberg, Antonia; Trenholm, Stuart; Rompani, Santiago B.; Raics, Zoltan; Katona, Gergely; Juettner, Josephine; Hierlemann, Andreas; Rozsa, Balazs; Roska, Botond
2017-01-01
How neuronal computations in the sensory periphery contribute to computations in the cortex is not well understood. We examined this question in the context of visual-motion processing in the retina and primary visual cortex (V1) of mice. We disrupted retinal direction selectivity – either exclusively along the horizontal axis using FRMD7 mutants or along all directions by ablating starburst amacrine cells – and monitored neuronal activity in layer 2/3 of V1 during stimulation with visual motion. In control mice, we found an overrepresentation of cortical cells preferring posterior visual motion, the dominant motion direction an animal experiences when it moves forward. In mice with disrupted retinal direction selectivity, the overrepresentation of posterior-motion-preferring cortical cells disappeared, and their response at higher stimulus speeds was reduced. This work reveals the existence of two functionally distinct, sensory-periphery-dependent and -independent computations of visual motion in the cortex. PMID:28530661
Kumru, Hatice; Pelayo, Raul; Vidal, Joan; Tormos, Josep Maria; Fregni, Felipe; Navarro, Xavier; Pascual-Leone, Alvaro
2010-01-01
The aim of this study was to evaluate the analgesic effect of transcranial direct current stimulation of the motor cortex and techniques of visual illusion, applied isolated or combined, in patients with neuropathic pain following spinal cord injury. In a sham controlled, double-blind, parallel group design, 39 patients were randomized into four groups receiving transcranial direct current stimulation with walking visual illusion or with control illusion and sham stimulation with visual illusion or with control illusion. For transcranial direct current stimulation, the anode was placed over the primary motor cortex. Each patient received ten treatment sessions during two consecutive weeks. Clinical assessment was performed before, after the last day of treatment, after 2 and 4 weeks follow-up and after 12 weeks. Clinical assessment included overall pain intensity perception, Neuropathic Pain Symptom Inventory and Brief Pain Inventory. The combination of transcranial direct current stimulation and visual illusion reduced the intensity of neuropathic pain significantly more than any of the single interventions. Patients receiving transcranial direct current stimulation and visual illusion experienced a significant improvement in all pain subtypes, while patients in the transcranial direct current stimulation group showed improvement in continuous and paroxysmal pain, and those in the visual illusion group improved only in continuous pain and dysaesthesias. At 12 weeks after treatment, the combined treatment group still presented significant improvement on the overall pain intensity perception, whereas no improvements were reported in the other three groups. Our results demonstrate that transcranial direct current stimulation and visual illusion can be effective in the management of neuropathic pain following spinal cord injury, with minimal side effects and with good tolerability. PMID:20685806
Whitwell, Robert L.; Ganel, Tzvi; Byrne, Caitlin M.; Goodale, Melvyn A.
2015-01-01
Investigators study the kinematics of grasping movements (prehension) under a variety of conditions to probe visuomotor function in normal and brain-damaged individuals. “Natural” prehensile acts are directed at the goal object and are executed using real-time vision. Typically, they also entail the use of tactile, proprioceptive, and kinesthetic sources of haptic feedback about the object (“haptics-based object information”) once contact with the object has been made. Natural and simulated (pantomimed) forms of prehension are thought to recruit different cortical structures: patient DF, who has visual form agnosia following bilateral damage to her temporal-occipital cortex, loses her ability to scale her grasp aperture to the size of targets (“grip scaling”) when her prehensile movements are based on a memory of a target previewed 2 s before the cue to respond or when her grasps are directed towards a visible virtual target but she is denied haptics-based information about the target. In the first of two experiments, we show that when DF performs real-time pantomimed grasps towards a 7.5 cm displaced imagined copy of a visible object such that her fingers make contact with the surface of the table, her grip scaling is in fact quite normal. This finding suggests that real-time vision and terminal tactile feedback are sufficient to preserve DF’s grip scaling slopes. In the second experiment, we examined an “unnatural” grasping task variant in which a tangible target (along with any proxy such as the surface of the table) is denied (i.e., no terminal tactile feedback). To do this, we used a mirror-apparatus to present virtual targets with and without a spatially coincident copy for the participants to grasp. 
We compared the grasp kinematics from trials with and without terminal tactile feedback to a real-time-pantomimed grasping task (one without tactile feedback) in which participants visualized a copy of the visible target as instructed in our laboratory in the past. Compared to natural grasps, removing tactile feedback increased RT, slowed the velocity of the reach, reduced in-flight grip aperture, increased the slopes relating grip aperture to target width, and reduced the final grip aperture (FGA). All of these effects were also observed in the real-time pantomime grasping task. These effects seem to be independent of those that arise from using the mirror in general, as we also compared grasps directed towards virtual targets to those directed at real ones viewed directly through a pane of glass. These comparisons showed that the grasps directed at virtual targets increased grip aperture, slowed the velocity of the reach, and reduced the slopes relating grip aperture to the widths of the target. Thus, using the mirror has real consequences on grasp kinematics, reflecting the importance of task-relevant sources of online visual information for the programming and updating of natural prehensile movements. Taken together, these results provide compelling support for the view that removing terminal tactile feedback, even when the grasps are target-directed, induces a switch from real-time visual control towards one that depends more on visual perception and cognitive supervision. Providing terminal tactile feedback and real-time visual information can evidently keep the dorsal visuomotor system operating normally for prehensile acts. PMID:25999834
The Dorsal Visual System Predicts Future and Remembers Past Eye Position
Morris, Adam P.; Bremmer, Frank; Krekelberg, Bart
2016-01-01
Eye movements are essential to primate vision but introduce potentially disruptive displacements of the retinal image. To maintain stable vision, the brain is thought to rely on neurons that carry both visual signals and information about the current direction of gaze in their firing rates. We have shown previously that these neurons provide an accurate representation of eye position during fixation, but whether they are updated fast enough during saccadic eye movements to support real-time vision remains controversial. Here we show that not only do these neurons carry a fast and accurate eye-position signal, but also that they support in parallel a range of time-lagged variants, including predictive and postdictive signals. We recorded extracellular activity in four areas of the macaque dorsal visual cortex during a saccade task, including the lateral and ventral intraparietal areas (LIP, VIP), and the middle temporal (MT) and medial superior temporal (MST) areas. As reported previously, neurons showed tonic eye-position-related activity during fixation. In addition, they showed a variety of transient changes in activity around the time of saccades, including relative suppression, enhancement, and pre-saccadic bursts for one saccade direction over another. We show that a hypothetical neuron that pools this rich population activity through a weighted sum can produce an output that mimics the true spatiotemporal dynamics of the eye. Further, with different pooling weights, this downstream eye position signal (EPS) could be updated long before (<100 ms) or after (<200 ms) an eye movement. The results suggest a flexible coding scheme in which downstream computations have access to past, current, and future eye positions simultaneously, providing a basis for visual stability and delay-free visually-guided behavior. PMID:26941617
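The weighted-sum read-out described above can be illustrated with synthetic data: if each unit's rate is a noisy function of eye position, a single downstream unit pooling the population with fitted weights recovers eye position. This numpy sketch uses an assumed linear tuning model and least-squares weights, a stand-in for the recorded LIP/VIP/MT/MST activity, not the paper's actual data or fitting procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy population: each neuron's firing rate is a noisy affine function
# of eye position (a crude stand-in for the tonic eye-position-related
# activity the study reports in LIP, VIP, MT, and MST).
n_neurons, n_samples = 50, 200
eye_pos = np.linspace(-20.0, 20.0, n_samples)        # degrees
gains = rng.normal(1.0, 0.3, n_neurons)
offsets = rng.normal(10.0, 2.0, n_neurons)
rates = gains[:, None] * eye_pos[None, :] + offsets[:, None]
rates += rng.normal(0.0, 0.5, rates.shape)           # firing-rate noise

# Downstream "hypothetical neuron": a weighted sum of the population
# (plus a bias term), with weights fit here by least squares -- one
# possible pooling rule, not a claim about how the brain sets weights.
X = np.vstack([rates, np.ones(n_samples)]).T         # (samples, units + 1)
weights, *_ = np.linalg.lstsq(X, eye_pos, rcond=None)
decoded = X @ weights                                # pooled eye-position signal

assert np.abs(decoded - eye_pos).mean() < 1.0        # close tracking
```

The paper's further point is that different weight vectors over the same transient population activity yield read-outs lagged or advanced in time relative to the true eye position.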
Experience Report: Visual Programming in the Real World
NASA Technical Reports Server (NTRS)
Baroth, E.; Hartsough, C
1994-01-01
This paper reports direct experience with two commercial, widely used visual programming environments. While neither of these systems is object-oriented, the tools have transformed the development process and indicate a direction for visual object-oriented tools to proceed.
Direct Manipulation in Virtual Reality
NASA Technical Reports Server (NTRS)
Bryson, Steve
2003-01-01
Virtual Reality interfaces offer several advantages for scientific visualization such as the ability to perceive three-dimensional data structures in a natural way. The focus of this chapter is direct manipulation, the ability for a user in virtual reality to control objects in the virtual environment in a direct and natural way, much as objects are manipulated in the real world. Direct manipulation provides many advantages for the exploration of complex, multi-dimensional data sets, by allowing the investigator the ability to intuitively explore the data environment. Because direct manipulation is essentially a control interface, it is better suited for the exploration and analysis of a data set than for the publishing or communication of features found in that data set. Thus direct manipulation is most relevant to the analysis of complex data that fills a volume of three-dimensional space, such as a fluid flow data set. Direct manipulation allows the intuitive exploration of that data, which facilitates the discovery of data features that would be difficult to find using more conventional visualization methods. Using a direct manipulation interface in virtual reality, an investigator can, for example, move a data probe about in space, watching the results and getting a sense of how the data varies within its spatial volume.
Rallis, Austin; Fercho, Kelene A; Bosch, Taylor J; Baugh, Lee A
2018-01-31
Tool use is associated with three visual streams: the dorso-dorsal, ventro-dorsal, and ventral streams. These streams are involved in processing online motor planning, action semantics, and tool semantics, respectively. Little is known about the way in which the brain represents virtual tools. To directly assess this question, a virtual tool paradigm was created that provided the ability to manipulate tool components in isolation from one another. During functional magnetic resonance imaging (fMRI), adult participants performed a series of virtual tool manipulation tasks in which vision and movement kinematics of the tool were manipulated. Reaction time and hand movement direction were monitored while the tasks were performed. Functional imaging revealed that activity within all three visual streams was present, in a similar pattern to what would be expected with physical tool use. However, a previously unreported network of right-hemisphere activity was found, including the right inferior parietal lobule, middle and superior temporal gyri, and supramarginal gyrus, regions well known to be associated with tool processing within the left hemisphere. These results provide evidence that both virtual and physical tools are processed within the same brain regions, though virtual tools recruit bilateral tool-processing regions to a greater extent than physical tools. Copyright © 2017 Elsevier Ltd. All rights reserved.
Vision for perception and vision for action in the primate brain.
Goodale, M A
1998-01-01
Visual systems first evolved not to enable animals to see, but to provide distal sensory control of their movements. Vision as 'sight' is a relative newcomer to the evolutionary landscape, but its emergence has enabled animals to carry out complex cognitive operations on perceptual representations of the world. The two streams of visual processing that have been identified in the primate cerebral cortex are a reflection of these two functions of vision. The dorsal 'action' stream projecting from primary visual cortex to the posterior parietal cortex provides flexible control of more ancient subcortical visuomotor modules for the production of motor acts. The ventral 'perceptual' stream projecting from the primary visual cortex to the temporal lobe provides the rich and detailed representation of the world required for cognitive operations. Both streams process information about the structure of objects and about their spatial locations--and both are subject to the modulatory influences of attention. Each stream, however, uses visual information in different ways. Transformations carried out in the ventral stream permit the formation of perceptual representations that embody the enduring characteristics of objects and their relations; those carried out in the dorsal stream which utilize moment-to-moment information about objects within egocentric frames of reference, mediate the control of skilled actions. Both streams work together in the production of goal-directed behaviour.
Quantifying and visualizing variations in sets of images using continuous linear optimal transport
NASA Astrophysics Data System (ADS)
Kolouri, Soheil; Rohde, Gustavo K.
2014-03-01
Modern advancements in imaging devices have enabled us to explore the subcellular structure of living organisms and extract vast amounts of information. However, interpreting the biological information mined in the captured images is not a trivial task. Utilizing predetermined numerical features is usually the only hope for quantifying this information. Nonetheless, direct visual or biological interpretation of results obtained from these selected features is non-intuitive and difficult. In this paper, we describe an automatic method for modeling visual variations in a set of images, which allows for direct visual interpretation of the most significant differences, without the need for predefined features. The method is based on a linearized version of the continuous optimal transport (OT) metric, which provides a natural linear embedding for the image data set, in which a linear combination of images leads to a visually meaningful image. This enables us to apply linear geometric data analysis techniques, such as principal component analysis and linear discriminant analysis, in the linearly embedded space and visualize the most prominent modes, as well as the most discriminant modes, of variation in the dataset. Using the continuous OT framework, we are able to analyze variations in shape and texture in a set of images utilizing each image at full resolution, which cannot be done by existing methods. The proposed method is applied to a set of nuclei images segmented from Feulgen-stained liver tissues in order to investigate the major visual differences in chromatin distribution of Fetal-Type Hepatoblastoma (FHB) cells compared to normal cells.
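The pipeline described above has two stages: a linear embedding of each image (its linearized transport map), then ordinary linear statistics in that space. The second stage can be sketched as follows, assuming the transport-map embeddings have already been computed (that computation is the paper's substantive contribution and is not reproduced here); all names are illustrative:

```python
import numpy as np

def principal_modes(embeddings, n_modes=2):
    """PCA on linearly embedded images.

    `embeddings` is an (n_images, n_features) array in which each row
    is assumed to be the linearized optimal-transport map of one image.
    Returns the mean embedding and the top principal directions; because
    the embedding is linear, points along mean + t * direction remain
    valid embeddings and can be mapped back to meaningful images.
    """
    mean = embeddings.mean(axis=0)
    centered = embeddings - mean
    # Rows of vt are orthonormal principal directions (SVD-based PCA).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_modes]

# Stand-in data: 30 "images" embedded as 100-dimensional vectors.
emb = np.random.default_rng(1).normal(size=(30, 100))
mean, modes = principal_modes(emb)
assert modes.shape == (2, 100)
```

Linear discriminant analysis slots into the same place as the SVD step when the goal is the most discriminant, rather than most prominent, mode of variation.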
Basic quantitative assessment of visual performance in patients with very low vision.
Bach, Michael; Wilke, Michaela; Wilhelm, Barbara; Zrenner, Eberhart; Wilke, Robert
2010-02-01
A variety of approaches to developing visual prostheses are being pursued: subretinal, epiretinal, via the optic nerve, or via the visual cortex. This report presents a method of comparing their efficacy at genuinely improving visual function, starting at no light perception (NLP). A test battery (a computer program, Basic Assessment of Light and Motion [BaLM]) was developed to test four basic visual dimensions: (1) light perception (light/no light), with an unstructured large-field stimulus; (2) temporal resolution, with single versus double flash discrimination; (3) localization of light, where a wedge extends from the center into four possible directions; and (4) motion, with a coarse pattern moving in one of four directions. Two- or four-alternative forced-choice paradigms were used. The participants' responses were self-paced and delivered with a keypad. The feasibility of the BaLM was tested in 73 eyes of 51 patients with low vision. The light and time test modules discriminated between NLP and light perception (LP). The localization and motion modules showed no significant response for NLP but discriminated between LP and hand movement (HM). All four modules reached their ceilings in the acuity categories higher than HM. BaLM results systematically differed between the very-low-acuity categories NLP, LP, and HM. Light and time yielded similar results, as did localization and motion; still, for assessing visual prostheses with differing temporal characteristics, they are not redundant. The results suggest that this simple test battery provides a quantitative assessment of visual function in the very-low-vision range from NLP to HM.
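Forced-choice modules like those above are scored against chance level (50% for two alternatives, 25% for four): a module "discriminates" only if the hit count is unlikely under pure guessing. A minimal one-sided binomial check of that kind, with hypothetical function names and example numbers (BaLM's actual statistics are not specified in the abstract):

```python
from math import comb

def above_chance_p(correct, trials, alternatives):
    """One-sided binomial test: probability of scoring `correct` or
    better by guessing among `alternatives` equally likely choices.
    A small value suggests the module is measuring genuine perception.
    """
    p = 1.0 / alternatives  # chance level, e.g. 0.25 for a 4AFC task
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(correct, trials + 1))

# 14/20 correct on a 4AFC localization run is far above the 25%
# chance level, so guessing is an implausible explanation.
assert above_chance_p(14, 20, 4) < 0.001
```

The same check applies to the two-alternative light and time modules with `alternatives=2`.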
Top-down influence on the visual cortex of the blind during sensory substitution.
Murphy, Matthew C; Nau, Amy C; Fisher, Christopher; Kim, Seong-Gi; Schuman, Joel S; Chan, Kevin C
2016-01-15
Visual sensory substitution devices provide a non-surgical and flexible approach to vision rehabilitation in the blind. These devices convert images taken by a camera into cross-modal sensory signals that are presented as a surrogate for direct visual input. While previous work has demonstrated that the visual cortex of blind subjects is recruited during sensory substitution, the cognitive basis of this activation remains incompletely understood. To test the hypothesis that top-down input provides a significant contribution to this activation, we performed functional MRI scanning in 11 blind (7 acquired and 4 congenital) and 11 sighted subjects under two conditions: passive listening of image-encoded soundscapes before sensory substitution training and active interpretation of the same auditory sensory substitution signals after a 10-minute training session. We found that the modulation of visual cortex activity due to active interpretation was significantly stronger in the blind over sighted subjects. In addition, congenitally blind subjects showed stronger task-induced modulation in the visual cortex than acquired blind subjects. In a parallel experiment, we scanned 18 blind (11 acquired and 7 congenital) and 18 sighted subjects at rest to investigate alterations in functional connectivity due to visual deprivation. The results demonstrated that visual cortex connectivity of the blind shifted away from sensory networks and toward known areas of top-down input. Taken together, our data support the model of the brain, including the visual system, as a highly flexible task-based and not sensory-based machine. Copyright © 2015 Elsevier Inc. All rights reserved.
Selective Attention Modulates the Direction of Audio-Visual Temporal Recalibration
Ikumi, Nara; Soto-Faraco, Salvador
2014-01-01
Temporal recalibration of cross-modal synchrony has been proposed as a mechanism to compensate for timing differences between sensory modalities. However, far from the rich complexity of everyday-life sensory environments, most studies to date have examined recalibration on isolated cross-modal pairings. Here, we hypothesize that selective attention might provide an effective filter to help resolve which stimuli are selected when multiple events compete for recalibration. We addressed this question by testing audio-visual recalibration following an adaptation phase where two opposing audio-visual asynchronies were present. The direction of voluntary visual attention, and therefore attention to one of the two possible asynchronies (flash leading or flash lagging), was manipulated using colour as a selection criterion. We found a shift in the point of subjective audio-visual simultaneity as a function of whether the observer had focused attention on audio-then-flash or on flash-then-audio groupings during the adaptation phase. A baseline adaptation condition revealed that this effect of endogenous attention was only effective toward the lagging flash. This hints at the role of exogenous capture and/or additional endogenous effects producing an asymmetry toward the leading flash. We conclude that selective attention helps promote selected audio-visual pairings to be combined and subsequently adjusted in time, but stimulus organization exerts a strong impact on recalibration. We tentatively hypothesize that the resolution of recalibration in complex scenarios involves the orchestration of top-down selection mechanisms and stimulus-driven processes. PMID:25004132
Hairy Slices: Evaluating the Perceptual Effectiveness of Cutting Plane Glyphs for 3D Vector Fields.
Stevens, Andrew H; Butkiewicz, Thomas; Ware, Colin
2017-01-01
Three-dimensional vector fields are common datasets throughout the sciences. Visualizing these fields is inherently difficult due to issues such as visual clutter and self-occlusion. Cutting planes are often used to overcome these issues by presenting more manageable slices of data. The existing literature provides many techniques for visualizing the flow through these cutting planes; however, there is a lack of empirical studies focused on the underlying perceptual cues that make popular techniques successful. This paper presents a quantitative human factors study that evaluates static monoscopic depth and orientation cues in the context of cutting plane glyph designs for exploring and analyzing 3D flow fields. The goal of the study was to ascertain the relative effectiveness of various techniques for portraying the direction of flow through a cutting plane at a given point, and to identify the visual cues and combinations of cues involved, and how they contribute to accurate performance. It was found that increasing the dimensionality of line-based glyphs into tubular structures enhances their ability to convey orientation through shading, and that increasing their diameter intensifies this effect. These tube-based glyphs were also less sensitive to visual clutter issues at higher densities. Adding shadows to lines was also found to increase perception of flow direction. Implications of the experimental results are discussed and extrapolated into a number of guidelines for designing more perceptually effective glyphs for 3D vector field visualizations.
Focal damage to macaque photoreceptors produces persistent visual loss
Strazzeri, Jennifer M.; Hunter, Jennifer J.; Masella, Benjamin D.; Yin, Lu; Fischer, William S.; DiLoreto, David A.; Libby, Richard T.; Williams, David R.; Merigan, William H.
2014-01-01
Insertion of light-gated channels into inner retina neurons restores neural light responses, light-evoked potentials, visual optomotor responses, and visually guided maze behavior in mice blinded by retinal degeneration. This method of vision restoration bypasses the damaged outer retina, providing stimulation directly to retinal ganglion cells in the inner retina. The approach is similar to that of electronic visual prostheses, but may offer some advantages, such as avoidance of complex surgery and direct targeting of many thousands of neurons. However, the promise of this technique for restoring human vision remains uncertain because the rodent models in which it has largely been developed are not ideal for evaluating visual perception. In contrast, psychophysical vision studies in macaque can be used to evaluate different approaches to vision restoration in humans. However, it has not been possible to test vision restoration in macaques, the optimal model for human-like vision, because there has been no macaque model of outer retina degeneration. In this study, we describe the development of a macaque model of photoreceptor degeneration that can be used in future studies to test restoration of perception by visual prostheses. Our results show that perceptual deficits caused by focal light damage are restricted to locations at which photoreceptors are damaged, that optical coherence tomography (OCT) can be used to track such lesions, and that adaptive optics retinal imaging, which we recently used for in vivo recording of ganglion cell function, can be used in future studies to examine these lesions. PMID:24316158
NASA Technical Reports Server (NTRS)
Wehner, R.
1972-01-01
Experimental data, on the visual orientation of desert ants toward astromenotactic courses and horizon landmarks involving the cooperation of different direction finding systems, are given. Attempts were made to: (1) determine if the ants choose a compromise direction between astromenotactic angles and the direction toward horizon landmarks when both angles compete with each other or whether they decide alternatively; (2) analyze adaptations of the visual system to the special demands of direction finding by astromenotactic orientation or pattern recognition; and (3) determine parameters of visual learning behavior. Results show separate orientation mechanisms are responsible for the orientation of the ant toward astromenotactic angles and horizon landmarks. If both systems compete with each other, the ants switch over from one system to the other and do not perform a compromise direction.
Perea, Daniel E.; Liu, Jia; Bartrand, Jonah A. G.; ...
2016-02-29
In this study, we report the atomic-scale analysis of biological interfaces using atom probe tomography. Embedding the protein ferritin in an organic polymer resin lacking nitrogen provided chemical contrast to visualize atomic distributions and distinguish organic-organic and organic-inorganic interfaces. The sample preparation method can be directly extended to further enhance the study of biological, organic and inorganic nanomaterials relevant to health, energy or the environment.
Direct imaging of isofrequency contours in photonic structures
Regan, E. C.; Igarashi, Y.; Zhen, B.; ...
2016-11-25
The isofrequency contours of a photonic crystal are important for predicting and understanding exotic optical phenomena that are not apparent from high-symmetry band structure visualizations. We demonstrate a method to directly visualize the isofrequency contours of high-quality photonic crystal slabs that show quantitatively good agreement with numerical results throughout the visible spectrum. Our technique relies on resonance-enhanced photon scattering from generic fabrication disorder and surface roughness, so it can be applied to general photonic and plasmonic crystals or even quasi-crystals. We also present an analytical model of the scattering process, which explains the observation of isofrequency contours in our technique. Furthermore, the isofrequency contours provide information about the characteristics of the disorder and therefore serve as a feedback tool to improve fabrication processes.
The look of royalty: visual and odour signals of reproductive status in a paper wasp
Tannure-Nascimento, Ivelize C; Nascimento, Fabio S; Zucchi, Ronaldo
2008-01-01
Reproductive conflicts within animal societies occur when all females can potentially reproduce. In social insects, these conflicts are regulated largely by behaviour and chemical signalling. There is evidence that the presence of signals providing direct information about the quality of reproductive females would increase the fitness of all parties. In this study, we present an association between visual and chemical signals in the paper wasp Polistes satan. Our results showed that in nest-founding-phase colonies, variation in visual signals is linked to relative fertility, while chemical signals are related to dominance status. In addition, experiments revealed that higher hierarchical positions were occupied by subordinates with distinct proportions of cuticular hydrocarbons and distinct visual marks. Therefore, these wasps present cues that convey reliable information about their reproductive status. PMID:18682372
Ultrasound Imaging in Teaching Cardiac Physiology
ERIC Educational Resources Information Center
Johnson, Christopher D.; Montgomery, Laura E. A.; Quinn, Joe G.; Roe, Sean M.; Stewart, Michael T.; Tansey, Etain A.
2016-01-01
This laboratory session provides hands-on experience for students to visualize the beating human heart with ultrasound imaging. Simple views are obtained from which students can directly measure important cardiac dimensions in systole and diastole. This allows students to derive, from first principles, important measures of cardiac function, such…
Quantitative analysis of diffusion tensor orientation: theoretical framework.
Wu, Yu-Chien; Field, Aaron S; Chung, Moo K; Badie, Benham; Alexander, Andrew L
2004-11-01
Diffusion-tensor MRI (DT-MRI) yields information about the magnitude, anisotropy, and orientation of water diffusion of brain tissues. Although white matter tractography and eigenvector color maps provide visually appealing displays of white matter tract organization, they do not easily lend themselves to quantitative and statistical analysis. In this study, a set of visual and quantitative tools for the investigation of tensor orientations in the human brain was developed. Visual tools included rose diagrams, which are spherical coordinate histograms of the major eigenvector directions, and 3D scatterplots of the major eigenvector angles. A scatter matrix of major eigenvector directions was used to describe the distribution of major eigenvectors in a defined anatomic region. A measure of eigenvector dispersion was developed to describe the degree of eigenvector coherence in the selected region. These tools were used to evaluate directional organization and the interhemispheric symmetry of DT-MRI data in five healthy human brains and two patients with infiltrative diseases of the white matter tracts. In normal anatomical white matter tracts, a high degree of directional coherence and interhemispheric symmetry was observed. The infiltrative diseases appeared to alter the eigenvector properties of affected white matter tracts, showing decreased eigenvector coherence and interhemispheric symmetry. This novel approach distills the rich, 3D information available from the diffusion tensor into a form that lends itself to quantitative analysis and statistical hypothesis testing. (c) 2004 Wiley-Liss, Inc.
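The scatter-matrix approach described above has a simple computational core: collect the major eigenvector of each voxel's diffusion tensor, form their mean dyadic (scatter) matrix, and derive a dispersion measure from its largest eigenvalue. The sketch below illustrates one common formulation under stated assumptions; the paper's exact definition may differ, and all function names here are invented for illustration.

```python
import numpy as np

def major_eigenvectors(tensors):
    """Return unit major eigenvectors (largest eigenvalue) of 3x3 diffusion tensors."""
    vals, vecs = np.linalg.eigh(tensors)          # eigenvalues in ascending order
    return vecs[..., -1]                          # eigenvector column for the largest

def dispersion(vectors):
    """Eigenvector dispersion: 0 for a perfectly coherent bundle, approaching
    2/3 for isotropically distributed directions.

    Built from the scatter (mean dyadic) matrix of major eigenvectors, so it is
    sign-invariant: diffusion directions are axial (v and -v are equivalent).
    """
    T = np.einsum('ni,nj->ij', vectors, vectors) / len(vectors)
    beta = np.linalg.eigvalsh(T)[-1]              # largest eigenvalue, in [1/3, 1]
    return 1.0 - beta

# The major eigenvector of a tensor with strongest diffusion along x is the x axis:
D = np.diag([3.0, 1.0, 1.0])
print(major_eigenvectors(D[None])[0])             # ~[1, 0, 0] (up to sign)

rng = np.random.default_rng(0)
coherent = np.tile([0.0, 0.0, 1.0], (500, 1))     # all eigenvectors along z
random_dirs = rng.normal(size=(500, 3))
random_dirs /= np.linalg.norm(random_dirs, axis=1, keepdims=True)

print(round(dispersion(coherent), 3))             # 0.0 for a fully coherent region
print(dispersion(random_dirs))                    # near 2/3 for isotropic directions
```

A region-of-interest comparison (e.g. left vs. right hemisphere tracts) would then reduce to comparing these scalar dispersion values statistically.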
The visual light field in real scenes
Xia, Ling; Pont, Sylvia C.; Heynderickx, Ingrid
2014-01-01
Human observers' ability to infer the light field in empty space is known as the “visual light field.” While most relevant studies were performed using images on computer screens, we investigate the visual light field in a real scene by using a novel experimental setup. A “probe” and a scene were mixed optically using a semitransparent mirror. Twenty participants were asked to judge whether the probe fitted the scene with regard to the illumination intensity, direction, and diffuseness. Both smooth and rough probes were used to test whether observers use the additional cues for the illumination direction and diffuseness provided by the 3D texture over the rough probe. The results confirmed that observers are sensitive to the intensity, direction, and diffuseness of the illumination also in real scenes. For some lighting combinations on scene and probe, the awareness of a mismatch between the probe and scene was found to depend on which lighting condition was on the scene and which on the probe, which we called the “swap effect.” For these cases, the observers judged the fit to be better if the average luminance of the visible parts of the probe was closer to the average luminance of the visible parts of the scene objects. The use of a rough instead of smooth probe was found to significantly improve observers' abilities to detect mismatches in lighting diffuseness and directions. PMID:25926970
Visual Communication and Cognition in Everyday Decision-Making.
Jaenichen, Claudine
2017-01-01
Understanding cognition and the context of decision-making should be prioritized in the design process in order to accurately anticipate outcomes for intended audiences. A thorough understanding of cognition has been excluded from the foundational design principles of visual communication. Defining leisure, direct, urgent, and emergency scenarios, and providing examples of work that deeply considers the viewer's relationship to the design solution in the context of these scenarios, allows us to affirm the relevance of cognition as a design variable and the importance of projects that advocate public utility.
Hertrich, Ingo; Dietrich, Susanne; Ackermann, Hermann
2011-01-01
During speech communication, visual information may interact with the auditory system at various processing stages. Most noteworthy, recent magnetoencephalography (MEG) data provided first evidence for early and preattentive phonetic/phonological encoding of the visual data stream--prior to its fusion with auditory phonological features [Hertrich, I., Mathiak, K., Lutzenberger, W., & Ackermann, H. Time course of early audiovisual interactions during speech and non-speech central-auditory processing: An MEG study. Journal of Cognitive Neuroscience, 21, 259-274, 2009]. Using functional magnetic resonance imaging, the present follow-up study aims to further elucidate the topographic distribution of visual-phonological operations and audiovisual (AV) interactions during speech perception. Ambiguous acoustic syllables--disambiguated to /pa/ or /ta/ by the visual channel (speaking face)--served as test materials, concomitant with various control conditions (nonspeech AV signals, visual-only and acoustic-only speech, and nonspeech stimuli). (i) Visual speech yielded an AV-subadditive activation of primary auditory cortex and the anterior superior temporal gyrus (STG), whereas the posterior STG responded both to speech and nonspeech motion. (ii) The inferior frontal and the fusiform gyrus of the right hemisphere showed a strong phonetic/phonological impact (differential effects of visual /pa/ vs. /ta/) upon hemodynamic activation during presentation of speaking faces. Taken together with the previous MEG data, these results point at a dual-pathway model of visual speech information processing: On the one hand, access to the auditory system via the anterior supratemporal “what" path may give rise to direct activation of "auditory objects." On the other hand, visual speech information seems to be represented in a right-hemisphere visual working memory, providing a potential basis for later interactions with auditory information such as the McGurk effect.
Clinical and Laboratory Evaluation of Peripheral Prism Glasses for Hemianopia
Giorgi, Robert G.; Woods, Russell L.; Peli, Eli
2008-01-01
Purpose Homonymous hemianopia (the loss of vision on the same side in each eye) impairs the ability to navigate and walk safely. We evaluated peripheral prism glasses as a low vision optical device for hemianopia in an extended wearing trial. Methods Twenty-three patients with complete hemianopia (13 right) with neither visual neglect nor cognitive deficit enrolled in the 5-visit study. To expand the horizontal visual field, patients’ spectacles were fitted with both upper and lower Press-On™ Fresnel prism segments (each 40 prism diopters) across the upper and lower portions of the lens on the hemianopic (“blind”) side. Patients were asked to wear these spectacles as much as possible for the duration of the study, which averaged 9 (range: 5 to 13) weeks. Clinical success (continued wear, indicating perceived overall benefit), visual field expansion, perceived direction and perceived quality of life were measured. Results Clinical Success: 14 of 21 (67%) patients chose to continue to wear the peripheral prism glasses at the end of the study (2 patients did not complete the study for non-vision reasons). At long-term follow-up (8 to 51 months), 5 of 12 (42%) patients reported still wearing the device. Visual Field Expansion: Expansion of about 22 degrees in both the upper and lower quadrants was demonstrated for all patients (binocular perimetry, Goldmann V4e). Perceived Direction: Two patients demonstrated a transient adaptation to the change in visual direction produced by the peripheral prism glasses. Quality of Life: At study end, reduced difficulty noticing obstacles on the hemianopic side was reported. Conclusions The peripheral prism glasses provided reported benefits (usually in obstacle avoidance) to 2/3 of the patients completing the study, a very good success rate for a vision rehabilitation device. Possible reasons for long-term discontinuation and limited adaptation of perceived direction are discussed. PMID:19357552
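The ~22 degree field expansion reported above follows directly from the prism power: by definition, one prism diopter deflects a light ray 1 cm at a distance of 1 m, so a 40-diopter segment deflects by arctan(40/100). A quick arithmetic check (generic optics formula, not code from the study):

```python
import math

def prism_diopters_to_degrees(power_pd):
    """One prism diopter (PD) deflects a ray 1 cm at 1 m,
    so the deflection angle is arctan(PD / 100)."""
    return math.degrees(math.atan(power_pd / 100.0))

# 40 PD, as used for the peripheral prism segments:
print(round(prism_diopters_to_degrees(40), 1))  # 21.8, i.e. the ~22 degrees reported
```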
Stobbe, Nina; Westphal-Fitch, Gesche; Aust, Ulrike; Fitch, W. Tecumseh
2012-01-01
Artificial grammar learning (AGL) provides a useful tool for exploring rule learning strategies linked to general purpose pattern perception. To be able to directly compare performance of humans with other species with different memory capacities, we developed an AGL task in the visual domain. Presenting entire visual patterns simultaneously instead of sequentially minimizes the amount of required working memory. This approach allowed us to evaluate performance levels of two bird species, kea (Nestor notabilis) and pigeons (Columba livia), in direct comparison to human participants. After being trained to discriminate between two types of visual patterns generated by rules at different levels of computational complexity and presented on a computer screen, birds and humans received further training with a series of novel stimuli that followed the same rules, but differed in various visual features from the training stimuli. Most avian and all human subjects continued to perform well above chance during this initial generalization phase, suggesting that they were able to generalize learned rules to novel stimuli. However, detailed testing with stimuli that violated the intended rules regarding the exact number of stimulus elements indicates that neither bird species was able to successfully acquire the intended pattern rule. Our data suggest that, in contrast to humans, these birds were unable to master a simple rule above the finite-state level, even with simultaneous item presentation and despite intensive training. PMID:22688635
Interactions of Top-Down and Bottom-Up Mechanisms in Human Visual Cortex
McMains, Stephanie; Kastner, Sabine
2011-01-01
Multiple stimuli present in the visual field at the same time compete for neural representation by mutually suppressing their evoked activity throughout visual cortex, providing a neural correlate for the limited processing capacity of the visual system. Competitive interactions among stimuli can be counteracted by top-down, goal-directed mechanisms such as attention, and by bottom-up, stimulus-driven mechanisms. Because these two processes cooperate in everyday life to bias processing toward behaviorally relevant or particularly salient stimuli, it has proven difficult to study interactions between top-down and bottom-up mechanisms. Here, we used an experimental paradigm in which we first isolated the effects of a bottom-up influence on neural competition by parametrically varying the degree of perceptual grouping in displays that were not attended. Second, we probed the effects of directed attention on the competitive interactions induced with the parametric design. We found that the amount of attentional modulation varied linearly with the degree of competition left unresolved by bottom-up processes, such that attentional modulation was greatest when neural competition was little influenced by bottom-up mechanisms and smallest when competition was strongly influenced by bottom-up mechanisms. These findings suggest that the strength of attentional modulation in the visual system is constrained by the degree to which competitive interactions have been resolved by bottom-up processes related to the segmentation of scenes into candidate objects. PMID:21228167
Effect of eye position during human visual-vestibular integration of heading perception.
Crane, Benjamin T
2017-09-01
Visual and inertial stimuli provide heading discrimination cues. Integration of these multisensory stimuli has been demonstrated to depend on their relative reliability. However, the reference frame of visual stimuli is eye centered while inertia is head centered, and it remains unclear how these are reconciled with combined stimuli. Seven human subjects completed a heading discrimination task consisting of a 2-s translation with a peak velocity of 16 cm/s. Eye position was varied between 0° and ±25° left/right. Experiments were done with inertial motion, visual motion, or a combined visual-inertial motion. Visual motion coherence varied between 35% and 100%. Subjects reported whether their perceived heading was left or right of the midline in a forced-choice task. With the inertial stimulus the eye position had an effect such that the point of subjective equality (PSE) shifted 4.6 ± 2.4° in the gaze direction. With the visual stimulus the PSE shift was 10.2 ± 2.2° opposite the gaze direction, consistent with retinotopic coordinates. Thus with eccentric eye positions the perceived inertial and visual headings were offset ~15°. During the visual-inertial conditions the PSE varied consistently with the relative reliability of these stimuli such that at low visual coherence the PSE was similar to that of the inertial stimulus and at high coherence it was closer to the visual stimulus. On average, the inertial stimulus was weighted near Bayesian ideal predictions, but there was significant deviation from ideal in individual subjects. These findings support visual and inertial cue integration occurring in independent coordinate systems. NEW & NOTEWORTHY In multiple cortical areas visual heading is represented in retinotopic coordinates while inertial heading is in body coordinates. It remains unclear whether multisensory integration occurs in a common coordinate system. 
The experiments address this using a multisensory integration task with eccentric gaze positions making the effect of coordinate systems clear. The results indicate that the coordinate systems remain separate to the perceptual level and that during the multisensory task the perception depends on relative stimulus reliability. Copyright © 2017 the American Physiological Society.
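The "Bayesian ideal" against which the weighting was compared has a standard closed form for two independent Gaussian cues: each cue is weighted by its reliability (inverse variance). A minimal sketch with illustrative numbers (not the paper's data; the example offsets merely echo the direction of the reported PSE shifts):

```python
def fuse_cues(mu_vis, var_vis, mu_inert, var_inert):
    """Maximum-likelihood (Bayesian ideal) fusion of two independent Gaussian
    heading cues; each cue's weight is proportional to its inverse variance."""
    w_vis = (1.0 / var_vis) / (1.0 / var_vis + 1.0 / var_inert)
    mu = w_vis * mu_vis + (1.0 - w_vis) * mu_inert
    var = 1.0 / (1.0 / var_vis + 1.0 / var_inert)
    return mu, var, w_vis

# A reliable (high-coherence) visual cue dominates the fused heading estimate:
mu, var, w = fuse_cues(-10.0, 1.0, 4.6, 9.0)
print(round(w, 3))   # 0.9: the visual cue carries 90% of the weight
print(round(mu, 2))  # -8.54: the fused estimate sits close to the visual heading
```

Lowering visual coherence (raising `var_vis`) shifts the fused estimate toward the inertial cue, which is the qualitative pattern the study reports.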
Diffusion fMRI detects white-matter dysfunction in mice with acute optic neuritis
Lin, Tsen-Hsuan; Spees, William M.; Chiang, Chia-Wen; Trinkaus, Kathryn; Cross, Anne H.; Song, Sheng-Kwei
2014-01-01
Optic neuritis is a frequent and early symptom of multiple sclerosis (MS). Conventional magnetic resonance (MR) techniques provide means to assess multiple MS-related pathologies, including axonal injury, demyelination, and inflammation. A method to directly and non-invasively probe white-matter function could further elucidate the interplay of underlying pathologies and functional impairments. Previously, we demonstrated a significant 27% activation-associated decrease in the apparent diffusion coefficient of water perpendicular to the axonal fibers (ADC⊥) in normal C57BL/6 mouse optic nerve with visual stimulation using diffusion fMRI. Here we apply this approach to explore the relationship between visual acuity, optic nerve pathology, and diffusion fMRI in the experimental autoimmune encephalomyelitis (EAE) mouse model of optic neuritis. Visual stimulation produced a significant 25% (vs. baseline) ADC⊥ decrease in sham EAE optic nerves, while only a 7% (vs. baseline) ADC⊥ decrease was seen in EAE mice with acute optic neuritis. The reduced activation-associated ADC⊥ response correlated with post-MRI immunohistochemistry determined pathologies (including inflammation, demyelination, and axonal injury). The negative correlation between activation-associated ADC⊥ response and visual acuity was also found when pooling EAE-affected and sham groups under our experimental criteria. Results suggest that reduction in diffusion fMRI directly reflects impaired axonal-activation in EAE mice with optic neuritis. Diffusion fMRI holds promise for directly gauging in vivo white-matter dysfunction or therapeutic responses in MS patients. PMID:24632420
Discrepant visual speech facilitates covert selective listening in "cocktail party" conditions.
Williams, Jason A
2012-06-01
The presence of congruent visual speech information facilitates the identification of auditory speech, while the addition of incongruent visual speech information often impairs accuracy. This latter arrangement occurs naturally when one is being directly addressed in conversation but listens to a different speaker. Under these conditions, performance may diminish since: (a) one is bereft of the facilitative effects of the corresponding lip motion and (b) one becomes subject to visual distortion by incongruent visual speech; by contrast, speech intelligibility may be improved due to (c) bimodal localization of the central unattended stimulus. Participants were exposed to centrally presented visual and auditory speech while attending to a peripheral speech stream. In some trials, the lip movements of the central visual stimulus matched the unattended speech stream; in others, the lip movements matched the attended peripheral speech. Accuracy for the peripheral stimulus was nearly one standard deviation greater with incongruent visual information, compared to the congruent condition which provided bimodal pattern recognition cues. Likely, the bimodal localization of the central stimulus further differentiated the stimuli and thus facilitated intelligibility. Results are discussed with regard to similar findings in an investigation of the ventriloquist effect, and the relative strength of localization and speech cues in covert listening.
Neural mechanisms of limb position estimation in the primate brain.
Shi, Ying; Buneo, Christopher A
2011-01-01
Understanding the neural mechanisms of limb position estimation is important both for comprehending the neural control of goal-directed arm movements and for developing neuroprosthetic systems designed to replace lost limb function. Here we examined the role of area 5 of the posterior parietal cortex in estimating limb position based on visual and somatic (proprioceptive, efference copy) signals. Single-unit recordings were obtained as monkeys reached to visual targets presented in a semi-immersive virtual reality environment. On half of the trials animals were required to maintain their limb position at these targets while receiving both visual and non-visual feedback of their arm position, while on the other trials visual feedback was withheld. When examined individually, many area 5 neurons were tuned to the position of the limb in the workspace, but very few neurons modulated their firing rates based on the presence/absence of visual feedback. At the population level, however, decoding of limb position was somewhat more accurate when visual feedback was provided. These findings support a role for area 5 in limb position estimation but also suggest that visual signals regarding limb position are only weakly represented in this area, and only at the population level.
47 CFR 80.293 - Check bearings by authorized ship personnel.
Code of Federal Regulations, 2010 CFR
2010-10-01
....293 Section 80.293 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL... comparison of simultaneous visual and radio direction finder bearings. At least one comparison bearing must... visual bearing relative to the ship's heading and the difference between the visual and radio direction...
Pereira, G M; Heins, B J; Endres, M I
2018-03-01
The objective of this study was to validate an ear-tag accelerometer sensor (CowManager SensOor, Agis Automatisering BV, Harmelen, the Netherlands) using direct visual observations in a grazing dairy herd. Lactating crossbred cows (n = 24) were used for this experiment at the University of Minnesota West Central Research and Outreach Center grazing dairy (Morris, MN) during the summer of 2016. A single trained observer recorded behavior every minute for 6 h for each cow (24 cows × 6 h = 144 h of observation total). Direct visual observation was compared with sensor data during August and September 2016. The sensor detected and identified ear and head movements, and through algorithms the sensor classified each minute as one of the following behaviors: rumination, eating, not active, active, and high active. A 2-sided t-test was conducted with PROC TTEST of SAS (SAS Institute Inc., Cary, NC) to compare the percentage of time each cow's behavior was recorded by direct visual observation and sensor data. For total recorded time, the percentage of time of direct visual observation compared with sensor data was 17.9 and 19.1% for rumination, 52.8 and 51.9% for eating, 17.4 and 11.9% for not active, and 7.9 and 21.1% for active. Pearson correlations (PROC CORR of SAS) were used to evaluate associations between direct visual observations and sensor data. Furthermore, concordance correlation coefficient (CCC), bias correction factors, location shift, and scale shift (epiR package of R version 3.3.1; R Foundation for Statistical Computing, Vienna, Austria) were calculated to provide a measure of accuracy and precision. Correlations between visual observations and sensor data ranged from high to weak across the 4 behaviors (rumination: r = 0.72, CCC = 0.71; eating: r = 0.88, CCC = 0.88; not active: r = 0.65, CCC = 0.52; and active: r = 0.20, CCC = 0.19).
The results suggest that the sensor accurately monitors rumination and eating behavior of grazing dairy cattle. However, active behaviors may be more difficult for the sensor to record than others. Copyright © 2018 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
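Lin's concordance correlation coefficient used in the study penalizes Pearson's r for location (mean) and scale (variance) shifts between the two measurement methods, which is why CCC falls below r for the less-concordant behaviors. A plain-Python sketch of the standard formulas (a generic illustration, not the epiR implementation, with made-up numbers):

```python
import math

def pearson(x, y):
    """Pearson correlation: covariance normalized by the two standard deviations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x) / n)
    sy = math.sqrt(sum((b - my) ** 2 for b in y) / n)
    return sxy / (sx * sy)

def ccc(x, y):
    """Lin's concordance correlation coefficient: like Pearson's r, but also
    penalized for mean (location) and variance (scale) differences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * sxy / (vx + vy + (mx - my) ** 2)

obs = [10.0, 20.0, 30.0, 40.0]       # e.g. % of time by visual observation
sensor = [12.0, 22.0, 32.0, 42.0]    # a sensor that consistently reads 2 points high
print(round(pearson(obs, sensor), 6))  # 1.0: perfectly correlated...
print(round(ccc(obs, sensor), 3))      # 0.984: ...but not perfectly concordant
```

The gap between r and CCC thus flags systematic bias that correlation alone would hide, which is the point of reporting both in a validation study.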
Absence of Visual Input Results in the Disruption of Grid Cell Firing in the Mouse.
Chen, Guifen; Manson, Daniel; Cacucci, Francesca; Wills, Thomas Joseph
2016-09-12
Grid cells are spatially modulated neurons within the medial entorhinal cortex whose firing fields are arranged at the vertices of tessellating equilateral triangles [1]. The exquisite periodicity of their firing has led to the suggestion that they represent a path integration signal, tracking the organism's position by integrating speed and direction of movement [2-10]. External sensory inputs are required to reset any errors that the path integrator would inevitably accumulate. Here we probe the nature of the external sensory inputs required to sustain grid firing, by recording grid cells as mice explore familiar environments in complete darkness. The absence of visual cues results in a significant disruption of grid cell firing patterns, even when the quality of the directional information provided by head direction cells is largely preserved. Darkness alters the expression of velocity signaling within the entorhinal cortex, with changes evident in grid cell firing rate and the local field potential theta frequency. Short-term (<1.5 s) spike timing relationships between grid cell pairs are preserved in the dark, indicating that network patterns of excitatory and inhibitory coupling between grid cells exist independently of visual input and of spatially periodic firing. However, we find no evidence of preserved hexagonal symmetry in the spatial firing of single grid cells at comparable short timescales. Taken together, these results demonstrate that visual input is required to sustain grid cell periodicity and stability in mice and suggest that grid cells in mice cannot perform accurate path integration in the absence of reliable visual cues. Copyright © 2016 The Author(s). Published by Elsevier Ltd. All rights reserved.
Vision: a moving hill for spatial updating on the fly.
Stanford, Terrence R
2015-02-02
A recent study reveals a dynamic neural map that provides a continuous representation of remembered visual stimulus locations with respect to constantly changing gaze. This finding suggests a new mechanistic framework for understanding the spatiotemporal dynamics of goal-directed action. Copyright © 2015 Elsevier Ltd. All rights reserved.
Arshad, Q; Siddiqui, S; Ramachandran, S; Goga, U; Bonsu, A; Patel, M; Roberts, R E; Nigmatullina, Y; Malhotra, P; Bronstein, A M
2015-12-17
Right hemisphere dominance for visuo-spatial attention is characteristically observed in most right-handed individuals. This dominance has been attributed to both an anatomically larger right fronto-parietal network and the existence of asymmetric parietal interhemispheric connections. Previously it has been demonstrated that interhemispheric conflict, which induces left hemisphere inhibition, results in the modulation of both (i) the excitability of the early visual cortex (V1) and (ii) the brainstem-mediated vestibular-ocular reflex (VOR) via top-down control mechanisms. However to date, it remains unknown whether the degree of an individual's right hemisphere dominance for visuospatial function can influence, (i) the baseline excitability of the visual cortex and (ii) the extent to which the right hemisphere can exert top-down modulation. We directly tested this by correlating line bisection error (or pseudoneglect), taken as a measure of right hemisphere dominance, with both (i) visual cortical excitability measured using phosphene perception elicited via single-pulse occipital trans-cranial magnetic stimulation (TMS) and (ii) the degree of trans-cranial direct current stimulation (tDCS)-mediated VOR suppression, following left hemisphere inhibition. We found that those individuals with greater right hemisphere dominance had a less excitable early visual cortex at baseline and demonstrated a greater degree of vestibular nystagmus suppression following left hemisphere cathodal tDCS. To conclude, our results provide the first demonstration that individual differences in right hemisphere dominance can directly predict both the baseline excitability of low-level brain structures and the degree of top-down modulation exerted over them. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Bates, Lisa M.; Hanson, Dennis P.; Kall, Bruce A.; Meyer, Frederic B.; Robb, Richard A.
1998-06-01
An important clinical application of biomedical imaging and visualization techniques is the provision of image-guided neurosurgical planning and navigation using interactive computer display systems in the operating room. Current systems provide interactive display of orthogonal images and 3D surface or volume renderings integrated with and guided by the location of a surgical probe. However, structures in the 'line-of-sight' path which lead to the surgical target cannot be directly visualized, presenting difficulty in obtaining full understanding of the 3D volumetric anatomic relationships necessary for effective neurosurgical navigation below the cortical surface. Complex vascular relationships and histologic boundaries like those found in arteriovenous malformations (AVMs) also contribute to the difficulty in determining optimal approaches prior to actual surgical intervention. These difficulties demonstrate the need for interactive oblique imaging methods to provide 'line-of-sight' visualization. Capabilities for 'line-of-sight' interactive oblique sectioning are present in several current neurosurgical navigation systems. However, our implementation is novel, in that it utilizes a completely independent software toolkit, AVW (A Visualization Workshop) developed at the Mayo Biomedical Imaging Resource, integrated with a current neurosurgical navigation system, the COMPASS stereotactic system at Mayo Foundation. The toolkit is a comprehensive, C-callable imaging toolkit containing over 500 optimized imaging functions and structures. The powerful functionality and versatility of the AVW imaging toolkit provided facile integration and implementation of desired interactive oblique sectioning using a finite set of functions. The implementation of the AVW-based code resulted in higher-level functions for complete 'line-of-sight' visualization.
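The 'line-of-sight' oblique sectioning described above reduces, at its core, to resampling the image volume on an arbitrarily oriented plane perpendicular to the probe direction. A minimal NumPy sketch of that operation (this is not the AVW toolkit's implementation; the function name, nearest-neighbour sampling, and parameters are illustrative):

```python
import numpy as np

def oblique_slice(volume, center, normal, size=64, spacing=1.0):
    """Sample an oblique 2D slice of a 3D volume, perpendicular to `normal`
    (e.g., the surgical probe's line-of-sight), centered at `center`.
    Nearest-neighbour sampling keeps the sketch dependency-free."""
    n = np.asarray(normal, float)
    n /= np.linalg.norm(n)
    # Build two in-plane axes orthogonal to the viewing direction.
    helper = np.array([1.0, 0.0, 0.0])
    if abs(n @ helper) > 0.9:                # avoid a degenerate cross product
        helper = np.array([0.0, 1.0, 0.0])
    u = np.cross(n, helper)
    u /= np.linalg.norm(u)
    v = np.cross(n, u)
    # A size x size grid of sample points lying on the oblique plane.
    r = (np.arange(size) - size // 2) * spacing
    gu, gv = np.meshgrid(r, r)
    pts = (np.asarray(center, float)[:, None]
           + np.outer(u, gu.ravel()) + np.outer(v, gv.ravel()))
    # Round to voxel indices and clamp to the volume bounds.
    idx = np.clip(np.rint(pts).astype(int).T, 0, np.array(volume.shape) - 1).T
    return volume[idx[0], idx[1], idx[2]].reshape(size, size)
```

A production system would instead use trilinear or higher-order interpolation and would derive `center` and `normal` from the tracked pose of the surgical probe.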
37 CFR 202.3 - Registration of copyright.
Code of Federal Regulations, 2014 CFR
2014-07-01
...) Class VA: Works of the visual arts. This class includes all published and unpublished pictorial, graphic... permission and under the direction of the Visual Arts Division, the application may be submitted... published photographs after consultation and with the permission and under the direction of the Visual Arts...
Bhirde, Ashwin A; Sousa, Alioscka A; Patel, Vyomesh; Azari, Afrouz A; Gutkind, J Silvio; Leapman, Richard D; Rusling, James F
2009-01-01
Aims: To image the distribution of drug molecules attached to single-wall carbon nanotubes (SWNTs). Materials & methods: Herein we report the use of scanning transmission electron microscopy (STEM) for atomic scale visualization and quantitation of single platinum-based drug molecules attached to SWNTs designed for targeted drug delivery. Fourier transform infrared spectroscopy and energy-dispersive x-ray spectroscopy were used for characterization of the SWNT drug conjugates. Results: Z-contrast STEM imaging enabled visualization of the first-line anticancer drug cisplatin on the nanotubes at single molecule level. The identity and presence of cisplatin on the nanotubes was confirmed using energy-dispersive x-ray spectroscopy and Fourier transform infrared spectroscopy. STEM tomography was also used to provide additional insights concerning the nanotube conjugates. Finally, our observations provide a rationale for exploring the use of SWNT bioconjugates to selectively target and kill squamous cancer cells. Conclusion: Z-contrast STEM imaging provides a means for direct visualization of heavy metal containing molecules (i.e., cisplatin) attached to surfaces of carbon SWNTs along with distribution and quantitation. PMID:19839812
Accessibility limits recall from visual working memory.
Rajsic, Jason; Swan, Garrett; Wilson, Daryl E; Pratt, Jay
2017-09-01
In this article, we demonstrate limitations of accessibility of information in visual working memory (VWM). Recently, cued-recall has been used to estimate the fidelity of information in VWM, where the feature of a cued object is reproduced from memory (Bays, Catalao, & Husain, 2009; Wilken & Ma, 2004; Zhang & Luck, 2008). Response error in these tasks has been largely studied with respect to failures of encoding and maintenance; however, the retrieval operations used in these tasks remain poorly understood. By varying the number and type of object features provided as a cue in a visual delayed-estimation paradigm, we directly assess the nature of retrieval errors in delayed estimation from VWM. Our results demonstrate that providing additional object features in a single cue reliably improves recall, largely by reducing swap, or misbinding, responses. In addition, performance simulations using the binding pool model (Swan & Wyble, 2014) were able to mimic this pattern of performance across a large span of parameter combinations, demonstrating that the binding pool provides a possible mechanism underlying this pattern of results that is not merely a symptom of one particular parametrization. We conclude that accessing visual working memory is a noisy process, and can lead to errors over and above those of encoding and maintenance limitations. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Gravity dependence of the effect of optokinetic stimulation on the subjective visual vertical.
Ward, Bryan K; Bockisch, Christopher J; Caramia, Nicoletta; Bertolini, Giovanni; Tarnutzer, Alexander Andrea
2017-05-01
Accurate and precise estimates of direction of gravity are essential for spatial orientation. According to Bayesian theory, multisensory vestibular, visual, and proprioceptive input is centrally integrated in a weighted fashion based on the reliability of the component sensory signals. For otolithic input, a decreasing signal-to-noise ratio was demonstrated with increasing roll angle. We hypothesized that the weights of vestibular (otolithic) and extravestibular (visual/proprioceptive) sensors are roll-angle dependent and predicted an increased weight of extravestibular cues with increasing roll angle, potentially following the Bayesian hypothesis. To probe this concept, the subjective visual vertical (SVV) was assessed in different roll positions (≤ ± 120°, steps = 30°, n = 10) with/without presenting an optokinetic stimulus (velocity = ± 60°/s). The optokinetic stimulus biased the SVV toward the direction of stimulus rotation for roll angles ≥ ± 30° (P < 0.005). Offsets grew from 3.9 ± 1.8° (upright) to 22.1 ± 11.8° (±120° roll tilt, P < 0.001). Trial-to-trial variability increased with roll angle, demonstrating a nonsignificant increase when providing optokinetic stimulation. Variability and optokinetic bias were correlated (R² = 0.71, slope = 0.71, 95% confidence interval = 0.57-0.86). An optimal-observer model combining an optokinetic bias with vestibular input reproduced measured errors closely. These findings support the hypothesis of a weighted multisensory integration when estimating direction of gravity with optokinetic stimulation. Visual input was weighted more when vestibular input became less reliable, i.e., at larger roll-tilt angles. However, according to Bayesian theory, the variability of combined cues is always lower than the variability of each source cue.
If the observed increase in variability, although nonsignificant, is true, either it must depend on an additional source of variability, added after SVV computation, or it would conflict with the Bayesian hypothesis. NEW & NOTEWORTHY Applying a rotating optokinetic stimulus while recording the subjective visual vertical in different whole body roll angles, we noted the optokinetic-induced bias to correlate with the roll angle. These findings allow the hypothesis that the established optimal weighting of single-sensory cues depending on their reliability to estimate direction of gravity could be extended to a bias caused by visual self-motion stimuli. Copyright © 2017 the American Physiological Society.
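The weighted integration invoked in the abstract above is standard inverse-variance (reliability-weighted) cue combination. A minimal sketch under that assumption; the function name and numbers are illustrative, not the study's fitted parameters:

```python
import numpy as np

def combine_cues(mu_v, sigma_v, mu_o, sigma_o):
    """Inverse-variance weighted combination of two cues.

    mu_*: each cue's estimate of tilt (degrees); sigma_*: its standard
    deviation (lower sigma = more reliable = more weight).
    Returns the combined estimate and its standard deviation.
    """
    w_v = 1.0 / sigma_v ** 2          # weight of vestibular (otolithic) cue
    w_o = 1.0 / sigma_o ** 2          # weight of visual (optokinetic) cue
    mu = (w_v * mu_v + w_o * mu_o) / (w_v + w_o)
    sigma = np.sqrt(1.0 / (w_v + w_o))
    return mu, sigma

# At a large roll angle the vestibular cue is noisy (sigma_v = 10), so the
# visual cue (sigma_o = 5) pulls the combined estimate strongly its way.
est, sd = combine_cues(mu_v=0.0, sigma_v=10.0, mu_o=20.0, sigma_o=5.0)
```

Note that `sd` here is always below the smaller of the two input sigmas; this is exactly the Bayesian prediction that the abstract contrasts with the observed (nonsignificant) increase in variability under optokinetic stimulation.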
In situ AFM imaging of apolipoprotein A-I directly derived from plasma HDL.
Gan, Chaoye; Wang, Zhexuan; Chen, Yong
2017-04-01
The major apolipoproteins of plasma lipoproteins play vital roles in the structural integrity and physiological functions of lipoproteins. More than ten structural models of apolipoprotein A-I (apoA-I), the major apolipoprotein of high-density lipoprotein (HDL), have been developed successively. In these models, apoA-I was supposed to organize in a ring-shaped form. To date, however, there is no direct evidence under physiological condition. Here, atomic force microscopy (AFM) was used to in situ visualize the organization of apoA-I, which was exposed via depletion of the lipid component of plasma HDL pre-immobilized on functionalized mica sheets. For the first time, the ring-shaped coarse structure and three detailed structures (crescent-shaped, gapped "O"-shaped, and parentheses-shaped structures, respectively) of apoA-I in plasma HDL, which have the ability of binding scavenger receptors, were directly observed and quantitatively measured by AFM. The three detailed structures probably represent the different extents to which the lipid component of HDL was depleted. Data on lipid depletion of HDL may provide clues to understand lipid insertion of HDL. These data provide important information for the understanding of the structure/maturation of plasma HDL. Moreover, they suggest a powerful method for directly visualizing the major apolipoproteins of plasma lipoproteins or the protein component of lipoprotein-like lipid-protein complexes. Copyright © 2017 Elsevier B.V. All rights reserved.
Direction of attentional focus in biofeedback treatment for /r/ misarticulation.
McAllister Byun, Tara; Swartz, Michelle T; Halpin, Peter F; Szeredi, Daniel; Maas, Edwin
2016-07-01
Maintaining an external direction of focus during practice is reported to facilitate acquisition of non-speech motor skills, but it is not known whether these findings also apply to treatment for speech errors. This question has particular relevance for treatment incorporating visual biofeedback, where clinician cueing can direct the learner's attention either internally (i.e., to the movements of the articulators) or externally (i.e., to the visual biofeedback display). This study addressed two objectives. First, it aimed to use single-subject experimental methods to collect additional evidence regarding the efficacy of visual-acoustic biofeedback treatment for children with /r/ misarticulation. Second, it compared the efficacy of this biofeedback intervention under two cueing conditions. In the external focus (EF) condition, participants' attention was directed exclusively to the external biofeedback display. In the internal focus (IF) condition, participants viewed a biofeedback display, but they also received articulatory cues encouraging an internal direction of attentional focus. Nine school-aged children were pseudo-randomly assigned to receive either IF or EF cues during 8 weeks of visual-acoustic biofeedback intervention. Accuracy in /r/ production at the word level was probed in three to five pre-treatment baseline sessions and in three post-treatment maintenance sessions. Outcomes were assessed using visual inspection and calculation of effect sizes for individual treatment trajectories. In addition, a mixed logistic model was used to examine across-subjects effects including phase (pre/post-treatment), /r/ variant (treated/untreated), and focus cue condition (internal/external). Six out of nine participants showed sustained improvement on at least one treated /r/ variant; these six participants were evenly divided across EF and IF treatment groups. 
Regression results indicated that /r/ productions were significantly more likely to be rated accurate post- than pre-treatment. Internal versus external direction of focus cues was not a significant predictor of accuracy, nor did it interact significantly with other predictors. The results are consistent with previous literature reporting that visual-acoustic biofeedback can produce measurable treatment gains in children who have not responded to previous intervention. These findings are also in keeping with previous research suggesting that biofeedback may be sufficient to establish an external attentional focus, independent of verbal cues provided. The finding that explicit articulator placement cues were not necessary for progress in treatment has implications for intervention practices for speech-sound disorders in children. © 2016 Royal College of Speech and Language Therapists.
Pellicano, Antonello; Koch, Iring; Binkofski, Ferdinand
2017-09-01
An increasing number of studies have shown a close link between perception and action, which is supposed to be responsible for the automatic activation of actions compatible with objects' properties, such as the orientation of their graspable parts. It has been observed that left and right hand responses to objects (e.g., cups) are faster and more accurate if the handle orientation corresponds to the response location than when it does not. Two alternative explanations have been proposed for this handle-to-hand correspondence effect: location coding and affordance activation. The aim of the present study was to provide disambiguating evidence on the origin of this effect by employing object sets for which the visually salient portion was separated from, and opposite to, the graspable one, and vice versa. Seven experiments were conducted employing both single objects and object pairs as visual stimuli to enhance the contextual information about objects' graspability and usability. Notwithstanding these manipulations intended to favor affordance activation, results fully supported the location-coding account displaying significant Simon-like effects that involved the orientation of the visually salient portion of the object stimulus and the location of the response. Crucially, we provided evidence of Simon-like effects based on higher-level cognitive, iconic representations of action directions rather than based on lower-level spatial coding of the pure position of protruding portions of the visual stimuli. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Cowley, Benjamin R.; Kaufman, Matthew T.; Churchland, Mark M.; Ryu, Stephen I.; Shenoy, Krishna V.; Yu, Byron M.
2013-01-01
The activity of tens to hundreds of neurons can be succinctly summarized by a smaller number of latent variables extracted using dimensionality reduction methods. These latent variables define a reduced-dimensional space in which we can study how population activity varies over time, across trials, and across experimental conditions. Ideally, we would like to visualize the population activity directly in the reduced-dimensional space, whose optimal dimensionality (as determined from the data) is typically greater than 3. However, direct plotting can only provide a 2D or 3D view. To address this limitation, we developed a Matlab graphical user interface (GUI) that allows the user to quickly navigate through a continuum of different 2D projections of the reduced-dimensional space. To demonstrate the utility and versatility of this GUI, we applied it to visualize population activity recorded in premotor and motor cortices during reaching tasks. Examples include single-trial population activity recorded using a multi-electrode array, as well as trial-averaged population activity recorded sequentially using single electrodes. Because any single 2D projection may provide a misleading impression of the data, being able to see a large number of 2D projections is critical for intuition- and hypothesis-building during exploratory data analysis. The GUI includes a suite of additional interactive tools, including playing out population activity timecourses as a movie and displaying summary statistics, such as covariance ellipses and average timecourses. The use of visualization tools like the GUI developed here, in tandem with dimensionality reduction methods, has the potential to further our understanding of neural population activity. PMID:23366954
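The GUI's central operation is projecting latent trajectories onto many different 2D viewing planes. A minimal NumPy sketch of that core step (the published tool is a Matlab GUI; the function names and dimensions here are illustrative):

```python
import numpy as np

def random_projection_2d(dim, rng):
    """A random 2D orthonormal viewing plane in a dim-dimensional latent space.
    QR decomposition orthonormalizes two random direction vectors."""
    q, _ = np.linalg.qr(rng.standard_normal((dim, 2)))
    return q  # columns are the two orthonormal axes of the plane

def project(latents, plane):
    """Project latent trajectories (n_times x dim) onto a 2D viewing plane."""
    return latents @ plane

rng = np.random.default_rng(0)
latents = rng.standard_normal((200, 8))  # e.g., 8 latents from PCA/factor analysis
plane = random_projection_2d(8, rng)
view = project(latents, plane)           # 200 x 2 points, ready to plot
```

Sweeping smoothly between such planes (as the GUI's continuum of projections does) amounts to interpolating between orthonormal bases; viewing many planes guards against the misleading impression any single 2D projection can give.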
Stroupe, Kevin T; Stelmack, Joan A; Tang, X Charlene; Wei, Yongliang; Sayers, Scott; Reda, Domenic J; Kwon, Ellen; Massof, Robert W
2018-05-01
Examining costs and consequences of different low-vision (LV) programs provides important information about resources needed to expand treatment options efficiently. To examine the costs and consequences of LV rehabilitation or basic LV services. The US Department of Veterans Affairs (VA) Low Vision Intervention Trial (LOVIT) II was conducted from September 27, 2010, to July 31, 2014, at 9 VA facilities and included 323 veterans with macular diseases and a best-corrected distance visual acuity of 20/50 to 20/200. Veterans were randomized to receive basic LV services that provided LV devices without therapy, or LV rehabilitation that added a therapist to LV services who provided instruction and homework on using LV devices, eccentric viewing, and environmental modification. We compared costs and consequences between these groups. Low-vision devices without therapy and LV devices with therapy. Costs of providing basic LV services or LV rehabilitation were assessed. We measured consequences as changes in functional visual ability from baseline to follow-up 4 months after randomization using the VA Low Vision Visual Functioning Questionnaire. Visual ability was measured in dimensionless log odds units (logits). Of 323 randomized patients, the mean (SD) age was 80 (10.5) years, 314 (97.2%) were men, and 292 (90.4%) were white. One hundred sixty (49.5%) received basic LV services and 163 (50.1%) received LV rehabilitation. The mean (SD) total direct health care costs per patient were similar between patients who were randomized to receive basic LV services ($1662 [$671]) or LV rehabilitation ($1788 [$864]) (basic LV services, $126 lower; 95% CI, $299 lower to $35 higher; P = .15). However, basic LV services required less time and had lower transportation costs. Patients receiving LV rehabilitation had greater improvements in overall visual ability, reading ability, visual information processing, and visual motor skill scores.
Preparatory attention in visual cortex.
Battistoni, Elisa; Stein, Timo; Peelen, Marius V
2017-05-01
Top-down attention is the mechanism that allows us to selectively process goal-relevant aspects of a scene while ignoring irrelevant aspects. A large body of research has characterized the effects of attention on neural activity evoked by a visual stimulus. However, attention also includes a preparatory phase before stimulus onset in which the attended dimension is internally represented. Here, we review neurophysiological, functional magnetic resonance imaging, magnetoencephalography, electroencephalography, and transcranial magnetic stimulation (TMS) studies investigating the neural basis of preparatory attention, both when attention is directed to a location in space and when it is directed to nonspatial stimulus attributes (content-based attention) ranging from low-level features to object categories. Results show that both spatial and content-based attention lead to increased baseline activity in neural populations that selectively code for the attended attribute. TMS studies provide evidence that this preparatory activity is causally related to subsequent attentional selection and behavioral performance. Attention thus acts by preactivating selective neurons in the visual cortex before stimulus onset. This appears to be a general mechanism that can operate on multiple levels of representation. We discuss the functional relevance of this mechanism, its limitations, and its relation to working memory, imagery, and expectation. We conclude by outlining open questions and future directions. © 2017 New York Academy of Sciences.
Breaking cover: neural responses to slow and fast camouflage-breaking motion.
Yin, Jiapeng; Gong, Hongliang; An, Xu; Chen, Zheyuan; Lu, Yiliang; Andolina, Ian M; McLoughlin, Niall; Wang, Wei
2015-08-22
Primates need to detect and recognize camouflaged animals in natural environments. Camouflage-breaking movements are often the only visual cue available to accomplish this. Specifically, sudden movements are often detected before full recognition of the camouflaged animal is made, suggesting that initial processing of motion precedes the recognition of motion-defined contours or shapes. What are the neuronal mechanisms underlying this initial processing of camouflaged motion in the primate visual brain? We investigated this question using intrinsic-signal optical imaging of macaque V1, V2 and V4, along with computer simulations of the neural population responses. We found that camouflaged motion at low speed was processed as a direction signal by both direction- and orientation-selective neurons, whereas at high-speed camouflaged motion was encoded as a motion-streak signal primarily by orientation-selective neurons. No population responses were found to be invariant to the camouflage contours. These results suggest that the initial processing of camouflaged motion at low and high speeds is encoded as direction and motion-streak signals in primate early visual cortices. These processes are consistent with a spatio-temporal filter mechanism that provides for fast processing of motion signals, prior to full recognition of camouflage-breaking animals. © 2015 The Authors.
Tactile-Foot Stimulation Can Assist the Navigation of People with Visual Impairment
Velázquez, Ramiro; Pissaloux, Edwige; Lay-Ekuakille, Aimé
2015-01-01
Background. Tactile interfaces that stimulate the plantar surface with vibrations could represent a step forward toward the development of wearable, inconspicuous, unobtrusive, and inexpensive assistive devices for people with visual impairments. Objective. To study how people understand information through their feet and to maximize the capabilities of tactile-foot perception for assisting human navigation. Methods. Based on the physiology of the plantar surface, three prototypes of electronic tactile interfaces for the foot have been developed. With important technological improvements between them, all three prototypes essentially consist of a set of vibrating actuators embedded in a foam shoe-insole. Perceptual experiments involving direction recognition and real-time navigation in space were conducted with a total of 60 voluntary subjects. Results. The developed prototypes demonstrated that they are capable of transmitting tactile information that is easy and fast to understand. Average direction recognition rates were 76%, 88.3%, and 94.2% for subjects wearing the first, second, and third prototype, respectively. Exhibiting significant advances in tactile-foot stimulation, the third prototype was evaluated in navigation tasks. Results show that subjects were capable of following directional instructions useful for navigating spaces. Conclusion. Footwear providing tactile stimulation can be considered for assisting the navigation of people with visual impairments. PMID:27019593
Smelling directions: Olfaction modulates ambiguous visual motion perception
Kuang, Shenbing; Zhang, Tao
2014-01-01
Senses of smell are often accompanied by simultaneous visual sensations. Previous studies have documented enhanced olfactory performance with the concurrent presence of congruent color- or shape-related visual cues, and facilitated visual object perception when congruent smells are simultaneously present. These visual object-olfaction interactions suggest the existence of couplings between the olfactory pathway and the visual ventral processing stream. However, it is not known if olfaction can modulate visual motion perception, a function that is related to the visual dorsal stream. We tested this possibility by examining the influence of olfactory cues on the perception of ambiguous visual motion signals. We showed that, after introducing an association between motion directions and olfactory cues, olfaction could indeed bias ambiguous visual motion perception. Our result that olfaction modulates visual motion processing adds to the current knowledge of cross-modal interactions and implies a possible functional linkage between the olfactory system and the visual dorsal pathway. PMID:25052162
Invertebrate neurobiology: visual direction of arm movements in an octopus.
Niven, Jeremy E
2011-03-22
An operant task in which octopuses learn to locate food by a visual cue in a three-choice maze shows that they are capable of integrating visual and mechanosensory information to direct their arm movements to a goal. Copyright © 2011 Elsevier Ltd. All rights reserved.
Visual face-movement sensitive cortex is relevant for auditory-only speech recognition.
Riedel, Philipp; Ragert, Patrick; Schelinski, Stefanie; Kiebel, Stefan J; von Kriegstein, Katharina
2015-07-01
It is commonly assumed that the recruitment of visual areas during audition is not relevant for performing auditory tasks ('auditory-only view'). According to an alternative view, however, the recruitment of visual cortices is thought to optimize auditory-only task performance ('auditory-visual view'). This alternative view is based on functional magnetic resonance imaging (fMRI) studies. These studies have shown, for example, that even if there is only auditory input available, face-movement sensitive areas within the posterior superior temporal sulcus (pSTS) are involved in understanding what is said (auditory-only speech recognition). This is particularly the case when speakers are known audio-visually, that is, after brief voice-face learning. Here we tested whether the left pSTS involvement is causally related to performance in auditory-only speech recognition when speakers are known by face. To test this hypothesis, we applied cathodal transcranial direct current stimulation (tDCS) to the pSTS during (i) visual-only speech recognition of a speaker known only visually to participants and (ii) auditory-only speech recognition of speakers they learned by voice and face. We defined the cathode as active electrode to down-regulate cortical excitability by hyperpolarization of neurons. tDCS to the pSTS interfered with visual-only speech recognition performance compared to a control group without pSTS stimulation (tDCS to BA6/44 or sham). Critically, compared to controls, pSTS stimulation additionally decreased auditory-only speech recognition performance selectively for voice-face learned speakers. These results are important in two ways. First, they provide direct evidence that the pSTS is causally involved in visual-only speech recognition; this confirms a long-standing prediction of current face-processing models. Secondly, they show that visual face-sensitive pSTS is causally involved in optimizing auditory-only speech recognition. 
These results are in line with the 'auditory-visual view' of auditory speech perception, which assumes that auditory speech recognition is optimized by using predictions from previously encoded speaker-specific audio-visual internal models. Copyright © 2015 Elsevier Ltd. All rights reserved.
Boshkovikj, Veselin; Fluke, Christopher J; Crawford, Russell J; Ivanova, Elena P
2014-02-28
There has been a growing interest in understanding the ways in which bacteria interact with nano-structured surfaces. As a result, there is a need for innovative approaches to enable researchers to visualize the biological processes taking place, despite the fact that it is not possible to directly observe these processes. We present a novel approach for the three-dimensional visualization of bacterial interactions with nano-structured surfaces using the software package Autodesk Maya. Our approach comprises a semi-automated stage, where actual surface topographic parameters, obtained using an atomic force microscope, are imported into Maya via a custom Python script, followed by a 'creative stage', where the bacterial cells and their interactions with the surfaces are visualized using available experimental data. The 'Dynamics' and 'nDynamics' capabilities of the Maya software allowed the construction and visualization of plausible interaction scenarios. This capability provides a practical aid to knowledge discovery, assists in the dissemination of research results, and provides an opportunity for improved public understanding. We validated our approach by graphically depicting the interactions of the two bacteria used for modeling purposes, Staphylococcus aureus and Pseudomonas aeruginosa, with different titanium substrate surfaces that are routinely used in the production of biomedical devices.
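The semi-automated stage described above (AFM-derived surface statistics driving scene construction via a custom Python script) can be sketched outside Maya. The parameter names and values below are invented for illustration, not the authors' script; in the real pipeline, each generated (x, y, height) tuple would be handed to Maya's Python API to instantiate geometry.

```python
import random

# Hypothetical AFM-derived surface statistics (illustrative values only):
# mean nanofeature height (nm), its spread, and feature density per square micron.
params = {"mean_height_nm": 200.0, "height_sd_nm": 35.0, "features_per_um2": 20}

def synthesize_height_field(params, side_um=1.0, seed=42):
    """Place nanofeatures at random positions with heights drawn from the AFM
    statistics; a Maya-side script would turn each (x, y, h) tuple into geometry."""
    rng = random.Random(seed)
    n = round(params["features_per_um2"] * side_um ** 2)
    features = []
    for _ in range(n):
        x, y = rng.uniform(0.0, side_um), rng.uniform(0.0, side_um)
        h = max(0.0, rng.gauss(params["mean_height_nm"], params["height_sd_nm"]))
        features.append((x, y, h))
    return features

features = synthesize_height_field(params)
print(len(features))  # 20 features on a 1 um x 1 um patch
```

Fixing the random seed keeps the generated surface reproducible across runs, which matters when the same substrate model is reused for different bacterial interaction scenarios.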
Moonlight Makes Owls More Chatty
Penteriani, Vincenzo; Delgado, María del Mar; Campioni, Letizia; Lourenço, Rui
2010-01-01
Background Lunar cycles seem to affect many of the rhythms, temporal patterns and behaviors of living things on Earth. Ambient light is known to affect visual communication in animals, with the conspicuousness of visual signals being largely determined by the light available for reflection by the sender. Although most previous studies in this context have focused on diurnal light, moonlight should not be neglected from the perspective of visual communication among nocturnal species. We recently discovered that eagle owls Bubo bubo communicate with conspecifics using a patch of white throat plumage that is repeatedly exposed during each call and is only visible during vocal displays. Methodology/Principal Findings Here we provide evidence that this species uses moonlight to increase the conspicuousness of this visual signal during call displays. We found that call displays are directly influenced by the amount of moonlight, with silent nights being more frequent during moonless periods than moonlit ones. Furthermore, high numbers of calling bouts were more frequent on moonlit nights. Finally, call posts were located at higher positions on moonlit nights. Conclusions/Significance Our results support the idea that moon phase affects the visual signaling behavior of this species, and provide a starting point for examination of this method of communication by nocturnal species. PMID:20098700
Testing of visual field with virtual reality goggles in manual and visual grasp modes.
Wroblewski, Dariusz; Francis, Brian A; Sadun, Alfredo; Vakili, Ghazal; Chopra, Vikas
2014-01-01
Automated perimetry is used for the assessment of visual function in a variety of ophthalmic and neurologic diseases. We report development and clinical testing of a compact, head-mounted, and eye-tracking perimeter (VirtualEye) that provides a more comfortable test environment than the standard instrumentation. VirtualEye performs the equivalent of a full threshold 24-2 visual field in two modes: (1) manual, with the patient response registered with a mouse click, and (2) visual grasp, where the eye tracker senses a change in gaze direction as evidence of target acquisition. Fifty-nine patients successfully completed the test in manual mode and 40 in visual grasp mode, with 59 undergoing standard Humphrey field analyzer (HFA) testing. Large visual field defects were reliably detected by VirtualEye. Point-by-point comparison between the results obtained with the different modalities indicates: (1) minimal systematic differences between measurements taken in visual grasp and manual modes, (2) an average standard deviation of the difference distributions of about 5 dB, and (3) a systematic shift (of 4-6 dB) to lower sensitivities for the VirtualEye device, observed mostly in the high dB range. The usability survey suggested patients' acceptance of the head-mounted device. The study appears to validate the concepts of a head-mounted perimeter and the visual grasp mode.
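The point-by-point comparison reported above reduces to simple difference statistics over matching test locations. A minimal sketch with invented dB sensitivities (chosen to mimic the reported systematic shift of 4-6 dB; these are not real patient data):

```python
from statistics import mean, stdev

# Hypothetical dB sensitivities at ten matching 24-2 test locations
# (illustrative values only).
hfa        = [30, 28, 31, 26, 29, 33, 25, 27, 32, 30]
virtualeye = [25, 23, 26, 22, 24, 28, 21, 22, 27, 25]

# Point-by-point differences: VirtualEye minus HFA at each location.
diffs = [v - h for v, h in zip(virtualeye, hfa)]
shift = mean(diffs)    # systematic offset between the two devices
spread = stdev(diffs)  # variability of the point-by-point differences
print(shift, spread)
```

A negative mean difference of this kind would correspond to the study's finding of lower sensitivities for the head-mounted device; the standard deviation of the differences is the analogue of the reported ~5 dB figure.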
Latychevskaia, Tatiana; Wicki, Flavio; Longchamp, Jean-Nicolas; Escher, Conrad; Fink, Hans-Werner
2016-09-14
Visualizing individual charges confined to molecules and observing their dynamics with high spatial resolution is a challenge for advancing various fields in science, ranging from mesoscopic physics to electron transfer events in biological molecules. We show here that the high sensitivity of low-energy electrons to local electric fields can be employed to directly visualize individual charged adsorbates and to study their behavior in a quantitative way. This makes electron holography a unique probing tool for directly visualizing charge distributions with a sensitivity of a fraction of an elementary charge. Moreover, spatial resolution in the nanometer range and fast data acquisition inherent to lens-less low-energy electron holography allow for direct visual inspection of charge transfer processes.
Ciani, Cesare; Doty, Stephen B.; Fritton, Susannah P.
2009-01-01
Bone is a composite porous material with two functional levels of porosity: the vascular porosity that surrounds blood vessels and the lacunar-canalicular porosity that surrounds the osteocytes. Both the vascular porosity and lacunar-canalicular porosity are directly involved in interstitial fluid flow, thought to play an important role in bone’s maintenance. Because of the small dimensions of the lacunar-canalicular porosity, interstitial fluid space has been difficult to visualize and quantify. We report a new staining protocol that is reliable and easily reproducible, using fluorescein isothiocyanate (FITC) as a probe visualized by confocal microscopy. Reconstructed FITC-stained cross sections enable effective visualization of bone microstructure and microporosities. This new staining process can be used to analyze interstitial fluid space, providing high-resolution quantification of the vascular pores and the lacunar-canalicular network of cortical and cancellous bone. PMID:19442607
Flow, affect and visual creativity.
Cseh, Genevieve M; Phillips, Louise H; Pearson, David G
2015-01-01
Flow (being in the zone) is purported to have positive consequences in terms of affect and performance; however, there is no empirical evidence about these links in visual creativity. Positive affect often, but inconsistently, facilitates creativity, and both may be linked to experiencing flow. This study aimed to determine relationships between these variables within visual creativity. Participants performed the creative mental synthesis task to simulate the creative process. Affect change (pre- vs. post-task) and flow were measured via questionnaires. The creativity of synthesis drawings was rated objectively and subjectively by judges. Findings empirically demonstrate that flow is related to affect improvement during visual creativity. Affect change was linked to productivity and self-rated creativity, but no other objective or subjective performance measures. Flow was unrelated to all external performance measures but was highly correlated with self-rated creativity; flow may therefore motivate perseverance towards eventual excellence rather than provide direct cognitive enhancement.
Three-dimensional computer visualization of forensic pathology data.
March, Jack; Schofield, Damian; Evison, Martin; Woodford, Noel
2004-03-01
Despite a decade of use in US courtrooms, it is only recently that forensic computer animations have become an increasingly important form of communication in legal spheres within the United Kingdom. Aims Research at the University of Nottingham has been influential in the critical investigation of forensic computer graphics reconstruction methodologies and techniques and in raising the profile of this novel form of data visualization within the United Kingdom. The case study presented demonstrates research undertaken by Aims Research and the Department of Forensic Pathology at the University of Sheffield, which aims to apply, evaluate, and develop novel 3-dimensional computer graphics (CG) visualization and virtual reality (VR) techniques in the presentation and investigation of forensic information concerning the human body. The inclusion of such visualizations within other CG or VR environments may ultimately provide the potential for alternative exploratory directions, processes, and results within forensic pathology investigations.
Heinen, Klaartje; Feredoes, Eva; Weiskopf, Nikolaus; Ruff, Christian C; Driver, Jon
2014-11-01
Voluntary selective attention can prioritize different features in a visual scene. The frontal eye-fields (FEF) are one potential source of such feature-specific top-down signals, but causal evidence for influences on visual cortex (as was shown for "spatial" attention) has remained elusive. Here, we show that transcranial magnetic stimulation (TMS) applied to right FEF increased the blood oxygen level-dependent (BOLD) signals in visual areas processing the "target feature" but not in "distracter feature"-processing regions. TMS induced BOLD signal increases in motion-responsive visual cortex (MT+) when motion was attended in a display with moving dots superimposed on face stimuli, but in the face-responsive fusiform area (FFA) when faces were attended. These TMS effects on BOLD signal in both regions were negatively related to performance (on the motion task), supporting the behavioral relevance of this pathway. Our findings provide new causal evidence for the human FEF in the control of nonspatial "feature"-based attention, mediated by dynamic influences on feature-specific visual cortex that vary with the currently attended property. © The Author 2013. Published by Oxford University Press.
2017-01-01
The superior colliculus (SC) receives direct input from the retina and integrates it with information about sound, touch, and state of the animal that is relayed from other parts of the brain to initiate specific behavioral outcomes. The superficial SC layers (sSC) contain cells that respond to visual stimuli, whereas the deep SC layers (dSC) contain cells that also respond to auditory and somatosensory stimuli. Here, we used a large-scale silicon probe recording system to examine the visual response properties of SC cells of head-fixed and alert male mice. We found cells with diverse response properties including: (1) orientation/direction-selective (OS/DS) cells with a firing rate that is suppressed by drifting sinusoidal gratings (negative OS/DS cells); (2) suppressed-by-contrast cells; (3) cells with complex-like spatial summation nonlinearity; and (4) cells with Y-like spatial summation nonlinearity. We also found specific response properties that are enriched in different depths of the SC. The sSC is enriched with cells with small receptive fields (RFs), high evoked firing rates (FRs), and sustained temporal responses, whereas the dSC is enriched with the negative OS/DS cells and with cells with large RFs, low evoked FRs, and transient temporal responses. Locomotion modulates the activity of the SC cells both additively and multiplicatively and changes the preferred spatial frequency of some SC cells. These results provide the first description of the negative OS/DS cells and demonstrate that the SC segregates cells with different response properties and that the behavioral state of a mouse affects SC activity. SIGNIFICANCE STATEMENT The superior colliculus (SC) receives visual input from the retina in its superficial layers (sSC) and induces eye/head-orientating movements and innate defensive responses in its deeper layers (dSC). Despite their importance, very little is known about the visual response properties of dSC neurons. 
Using high-density electrode recordings and novel model-based analysis, we found several novel visual response properties of the SC cells, including encoding of a cell's preferred orientation or direction by suppression of the firing rate. The sSC and the dSC are enriched with cells with different visual response properties. Locomotion modulates the cells in the SC. These findings contribute to our understanding of how the SC processes visual inputs, a critical step in comprehending visually guided behaviors. PMID:28760858
Cognitive programs: software for attention's executive
Tsotsos, John K.; Kruijne, Wouter
2014-01-01
What are the computational tasks that an executive controller for visual attention must solve? This question is posed in the context of the Selective Tuning model of attention. The range of required computations goes beyond top-down bias signals or region-of-interest determinations, and must deal with overt and covert fixations, process timing and synchronization, information routing, memory, matching control to task, spatial localization, priming, and coordination of bottom-up with top-down information. During task execution, results must be monitored to ensure they match expectations. This description includes the kinds of elements that are common in the control of any kind of complex machine or system. We seek a mechanistic integration of the above, in other words, algorithms that accomplish control. Such algorithms operate on representations, transforming a representation of one kind into another, which then forms the input to yet another algorithm. Cognitive Programs (CPs) are hypothesized to capture exactly such representational transformations via stepwise sequences of operations. CPs, an updated and modernized offspring of Ullman's Visual Routines, impose an algorithmic structure to the set of attentional functions and play a role in the overall shaping of attentional modulation of the visual system so that it provides its best performance. This requires that we consider the visual system as a dynamic, yet general-purpose processor tuned to the task and input of the moment. This differs dramatically from the almost universal cognitive and computational views, which regard vision as a passively observing module to which simple questions about percepts can be posed, regardless of task. Differing from Visual Routines, CPs explicitly involve the critical elements of Visual Task Executive (vTE), Visual Attention Executive (vAE), and Visual Working Memory (vWM). 
Cognitive Programs provide the software that directs the actions of the Selective Tuning model of visual attention. PMID:25505430
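The core idea of Cognitive Programs, stepwise sequences in which each operation transforms one representation into the input of the next, can be sketched as a composed pipeline. The transforms below are toy stand-ins invented for illustration, not Selective Tuning's actual operators:

```python
from functools import reduce

def run_program(steps, representation):
    """Apply an ordered sequence of representational transforms, feeding each
    step's output to the next, as a Cognitive Program chains its operations."""
    return reduce(lambda rep, step: step(rep), steps, representation)

# Toy transforms over a 'retinal' list of (location, feature) tuples.
attend_motion = lambda rep: [r for r in rep if r[1] == "motion"]  # feature bias
localize = lambda rep: sorted(loc for loc, _ in rep)              # spatial localization
report = lambda rep: {"targets": rep, "count": len(rep)}          # output for monitoring

scene = [(3, "motion"), (1, "color"), (7, "motion"), (2, "form")]
result = run_program([attend_motion, localize, report], scene)
print(result)  # {'targets': [3, 7], 'count': 2}
```

The final dictionary plays the role of the monitored result: an executive could compare it against task expectations and re-run or re-parameterize the sequence, which is the control loop the abstract describes.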
Stephen, Ian D; Sturman, Daniel; Stevenson, Richard J; Mond, Jonathan; Brooks, Kevin R
2018-01-01
Body size misperception, the belief that one is larger or smaller than reality, affects a large and growing segment of the population. Recently, studies have shown that exposure to extreme body stimuli results in a shift in the point of subjective normality, suggesting that visual adaptation may be a mechanism by which body size misperception occurs. Yet, despite being exposed to a similar set of bodies, some individuals within a given geographical area will develop body size misperception and others will not. The reasons for these individual differences are currently unknown. One possible explanation stems from the observation that women with lower levels of body satisfaction have been found to pay more attention to images of thin bodies. However, while attention has been shown to enhance visual adaptation effects in low-level (e.g., rotational and linear motion) and high-level (e.g., facial gender) stimuli, it is not known whether this effect exists in visual adaptation to body size. Here, we test the hypothesis that there is an indirect effect of body satisfaction on the direction and magnitude of the body fat adaptation effect, mediated via visual attention (i.e., selectively attending to images of thin over fat bodies or vice versa). Significant mediation effects were found in both men and women, suggesting that observers' level of body satisfaction may influence selective visual attention to thin or fat bodies, which in turn influences the magnitude and direction of visual adaptation to body size. This may provide a potential mechanism by which some individuals develop body size misperception, a risk factor for eating disorders, compulsive exercise behaviour and steroid abuse, while others do not.
Higashiyama, A
1992-03-01
Three experiments investigated anisotropic perception of visual angle outdoors. In Experiment 1, scales for vertical and horizontal visual angles ranging from 20 degrees to 80 degrees were constructed with the method of angle production (in which the subject reproduced a visual angle with a protractor) and the method of distance production (in which the subject produced a visual angle by adjusting viewing distance). In Experiment 2, scales for vertical and horizontal visual angles of 5 degrees-30 degrees were constructed with the method of angle production and were compared with scales for orientation in the frontal plane. In Experiment 3, vertical and horizontal visual angles of 3 degrees-80 degrees were judged with the method of verbal estimation. The main results of the experiments were as follows: (1) The obtained angles for visual angle are described by a quadratic equation, θ′ = a + bθ + cθ² (where θ is the visual angle; θ′, the obtained angle; a, b, and c, constants). (2) The linear coefficient b is larger than unity and is larger for the vertical direction than for the horizontal direction. (3) The quadratic coefficient c is generally smaller than zero and is more negative for the vertical direction than for the horizontal direction. And (4) the obtained angle for visual angle is larger than that for orientation. From these results, it was possible to predict the horizontal-vertical illusion, over-constancy of size, and the moon illusion.
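The fitted quadratic can be evaluated directly to see the pattern the authors describe: with a linear coefficient above unity and a small negative quadratic coefficient, visual angles are overestimated, and the overestimation shrinks as the angle grows. The coefficient values below are illustrative assumptions consistent with the reported signs, not the paper's estimates:

```python
def obtained_angle(theta, a=0.0, b=1.2, c=-0.002):
    """theta' = a + b*theta + c*theta**2, with illustrative coefficients
    satisfying b > 1 and c < 0 (the signs reported in the experiments)."""
    return a + b * theta + c * theta ** 2

for theta in (20, 40, 60, 80):
    print(theta, round(obtained_angle(theta), 1))
```

With these values the obtained angles are 23.2, 44.8, 64.8 and 83.2 degrees: every angle is overestimated (b > 1), while the negative quadratic term makes the overestimation proportionally smaller at larger angles.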
Behavioral and Neural Representations of Spatial Directions across Words, Schemas, and Images.
Weisberg, Steven M; Marchette, Steven A; Chatterjee, Anjan
2018-05-23
Modern spatial navigation requires fluency with multiple representational formats, including visual scenes, signs, and words. These formats convey different information. Visual scenes are rich and specific but contain extraneous details. Arrows, as an example of signs, are schematic representations in which the extraneous details are eliminated, but analog spatial properties are preserved. Words eliminate all spatial information and convey spatial directions in a purely abstract form. How does the human brain compute spatial directions within and across these formats? To investigate this question, we conducted two experiments on men and women: a behavioral study that was preregistered and a neuroimaging study using multivoxel pattern analysis of fMRI data to uncover similarities and differences among representational formats. Participants in the behavioral study viewed spatial directions presented as images, schemas, or words (e.g., "left"), and responded to each trial, indicating whether the spatial direction was the same or different as the one viewed previously. They responded more quickly to schemas and words than images, despite the visual complexity of stimuli being matched. Participants in the fMRI study performed the same task but responded only to occasional catch trials. Spatial directions in images were decodable in the intraparietal sulcus bilaterally but were not in schemas and words. Spatial directions were also decodable between all three formats. These results suggest that intraparietal sulcus plays a role in calculating spatial directions in visual scenes, but this neural circuitry may be bypassed when the spatial directions are presented as schemas or words. SIGNIFICANCE STATEMENT Human navigators encounter spatial directions in various formats: words ("turn left"), schematic signs (an arrow showing a left turn), and visual scenes (a road turning left). The brain must transform these spatial directions into a plan for action. 
Here, we investigate similarities and differences between neural representations of these formats. We found that bilateral intraparietal sulci represent spatial directions in visual scenes and across the three formats. We also found that participants respond quickest to schemas, then words, then images, suggesting that spatial directions in abstract formats are easier to interpret than concrete formats. These results support a model of spatial direction interpretation in which spatial directions are either computed for real world action or computed for efficient visual comparison. Copyright © 2018 the authors.
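The cross-format decoding logic in the study above (train a pattern classifier on responses to one format, test it on another) can be sketched with synthetic "voxel" patterns and a nearest-centroid rule, a deliberately simpler stand-in for the study's multivoxel pattern analysis. All patterns and the shift value are fabricated for illustration:

```python
import random

def make_patterns(direction_means, fmt_shift, n=20, rng=None):
    """Synthetic trials: each trial is a direction-specific mean pattern plus a
    format-specific offset plus Gaussian noise."""
    rng = rng or random.Random(0)
    data = []
    for label, mean_pat in direction_means.items():
        for _ in range(n):
            data.append((label, [m + fmt_shift + rng.gauss(0, 0.5) for m in mean_pat]))
    return data

means = {"left": [1.0, 0.0, 1.0, 0.0], "right": [0.0, 1.0, 0.0, 1.0]}
train = make_patterns(means, fmt_shift=0.0, rng=random.Random(1))  # e.g. images
test = make_patterns(means, fmt_shift=0.3, rng=random.Random(2))   # e.g. schemas

def centroid(data, label):
    rows = [v for l, v in data if l == label]
    return [sum(col) / len(rows) for col in zip(*rows)]

cents = {label: centroid(train, label) for label in means}

def classify(v):
    # Nearest centroid by squared Euclidean distance.
    return min(cents, key=lambda l: sum((a - b) ** 2 for a, b in zip(cents[l], v)))

acc = sum(classify(v) == label for label, v in test) / len(test)
print(acc)
```

Above-chance accuracy on the held-out "format" is the operational meaning of "spatial directions were decodable between formats": the direction-specific structure survives the format-specific offset.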
Common Neural Representations for Visually Guided Reorientation and Spatial Imagery
Vass, Lindsay K.; Epstein, Russell A.
2017-01-01
Spatial knowledge about an environment can be cued from memory by perception of a visual scene during active navigation or by imagination of the relationships between nonvisible landmarks, such as when providing directions. It is not known whether these different ways of accessing spatial knowledge elicit the same representations in the brain. To address this issue, we scanned participants with fMRI, while they performed a judgment of relative direction (JRD) task that required them to retrieve real-world spatial relationships in response to either pictorial or verbal cues. Multivoxel pattern analyses revealed several brain regions that exhibited representations that were independent of the cues used to access spatial memory. Specifically, entorhinal cortex (ERC) in the medial temporal lobe and the retrosplenial complex (RSC) in the medial parietal lobe coded for the heading assumed on a particular trial, whereas the parahippocampal place area (PPA) contained information about the starting location of the JRD. These results demonstrate the existence of spatial representations in RSC, ERC, and PPA that are common to visually guided navigation and spatial imagery. PMID:26759482
Bulf, Hermann; de Hevia, Maria Dolores; Macchi Cassia, Viola
2016-05-01
Numbers are represented as ordered magnitudes along a spatially oriented number line. While culture and formal education modulate the direction of this number-space mapping, it is a matter of debate whether its emergence is entirely driven by cultural experience. By registering 8-9-month-old infants' eye movements, this study shows that numerical cues are critical in orienting infants' visual attention towards a peripheral region of space that is congruent with the number's relative position on a left-to-right oriented representational continuum. This finding provides the first direct evidence that, in humans, the association between numbers and oriented spatial codes occurs before the acquisition of symbols or exposure to formal education, suggesting that the number line is not merely a product of human invention. © 2015 John Wiley & Sons Ltd.
Eye movements in interception with delayed visual feedback.
Cámara, Clara; de la Malla, Cristina; López-Moliner, Joan; Brenner, Eli
2018-07-01
The increased reliance on electronic devices such as smartphones in our everyday life exposes us to various delays between our actions and their consequences. Whereas it is known that people can adapt to such delays, the mechanisms underlying such adaptation remain unclear. To better understand these mechanisms, the current study explored the role of eye movements in interception with delayed visual feedback. In two experiments, eye movements were recorded as participants tried to intercept a moving target with their unseen finger while receiving delayed visual feedback about their own movement. In Experiment 1, the target randomly moved in one of two different directions at one of two different velocities. The delay between the participant's finger movement and movement of the cursor that provided feedback about the finger movements was gradually increased. Despite the delay, participants followed the target with their gaze. They were quite successful at hitting the target with the cursor. Thus, they moved their finger to a position that was ahead of where they were looking. Removing the feedback showed that participants had adapted to the delay. In Experiment 2, the target always moved in the same direction and at the same velocity, while the cursor's delay varied across trials. Participants still always directed their gaze at the target. They adjusted their movement to the delay on each trial, often succeeding in intercepting the target with the cursor. Since their gaze was always directed at the target, and they could not know the delay until the cursor started moving, participants must have been using peripheral vision of the delayed cursor to guide it to the target. Thus, people deal with delays by directing their gaze at the target and using both experience from previous trials (Experiment 1) and peripheral visual information (Experiment 2) to guide their finger in a way that will make the cursor hit the target.
Silvanto, Juha; Cattaneo, Zaira
2010-05-01
Cortical areas involved in sensory analysis are also believed to be involved in short-term storage of that sensory information. Here we investigated whether transcranial magnetic stimulation (TMS) can reveal the content of visual short-term memory (VSTM) by bringing this information to visual awareness. Subjects were presented with two random-dot displays (moving either to the left or to the right) and they were required to maintain one of these in VSTM. In Experiment 1, TMS was applied over the motion-selective area V5/MT+ above phosphene threshold during the maintenance phase. The reported phosphene contained motion features of the memory item, when the phosphene spatially overlapped with the memory item. Specifically, phosphene motion was enhanced when the memory item moved in the same direction as the subjects' V5/MT+ baseline phosphene, whereas it was reduced when the motion direction of the memory item was incongruent with that of the baseline V5/MT+ phosphene. There was no effect on phosphene reports when there was no spatial overlap between the phosphene and the memory item. In Experiment 2, VSTM maintenance did not influence the appearance of phosphenes induced from the lateral occipital region. These interactions between VSTM maintenance and phosphene appearance demonstrate that activity in V5/MT+ reflects the motion qualities of items maintained in VSTM. Furthermore, these results also demonstrate that information in VSTM can modulate the pattern of visual activation reaching awareness, providing evidence for the view that overlapping neuronal populations are involved in conscious visual perception and VSTM. © 2010. Published by Elsevier Inc.
Linking Advanced Visualization and MATLAB for the Analysis of 3D Gene Expression Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruebel, Oliver; Keranen, Soile V.E.; Biggin, Mark
Three-dimensional gene expression PointCloud data generated by the Berkeley Drosophila Transcription Network Project (BDTNP) provides quantitative information about the spatial and temporal expression of genes in early Drosophila embryos at cellular resolution. The BDTNP team visualizes and analyzes PointCloud data using the software application PointCloudXplore (PCX). To maximize the impact of novel, complex data sets, such as PointClouds, the data needs to be accessible to biologists and comprehensible to developers of analysis functions. We address this challenge by linking PCX and Matlab via a dedicated interface, thereby providing biologists seamless access to advanced data analysis functions and giving bioinformatics researchers the opportunity to integrate their analysis directly into the visualization application. To demonstrate the usefulness of this approach, we computationally model parts of the expression pattern of the gene even skipped using a genetic algorithm implemented in Matlab and integrated into PCX via our Matlab interface.
Effects of walker gender and observer gender on biological motion walking direction discrimination.
Yang, Xiaoying; Cai, Peng; Jiang, Yi
2014-09-01
The ability to recognize the movements of other biological entities, such as whether a person is walking toward you, is essential for survival and social interaction. Previous studies have shown that the visual system is particularly sensitive to approaching biological motion. In this study, we examined whether the gender of walkers and observers influenced the walking direction discrimination of approaching point-light walkers in fine granularity. The observers were presented with a walker who walked in different directions and were asked to quickly judge the walking direction (left or right). The results showed that the observers demonstrated worse direction discrimination when the walker was depicted as male than when the walker was depicted as female, probably because the observers tended to perceive the male walkers as walking straight ahead. Intriguingly, male observers performed better than female observers at judging the walking directions of female walkers but not those of male walkers, a result indicating a perceptual advantage with evolutionary significance. These findings provide strong evidence that the gender of walkers and observers modulates biological motion perception and that an adaptive perceptual mechanism exists in the visual system to facilitate the survival of social organisms. © 2014 The Institute of Psychology, Chinese Academy of Sciences and Wiley Publishing Asia Pty Ltd.
Wystrach, Antoine; Dewar, Alex; Philippides, Andrew; Graham, Paul
2016-02-01
The visual systems of animals have to provide information to guide behaviour and the informational requirements of an animal's behavioural repertoire are often reflected in its sensory system. For insects, this is often evident in the optical array of the compound eye. One behaviour that insects share with many animals is the use of learnt visual information for navigation. As ants are expert visual navigators it may be that their vision is optimised for navigation. Here we take a computational approach in asking how the details of the optical array influence the informational content of scenes used in simple view matching strategies for orientation. We find that robust orientation is best achieved with low-resolution visual information and a large field of view, similar to the optical properties seen for many ant species. A lower resolution allows for a trade-off between specificity and generalisation for stored views. Additionally, our simulations show that orientation performance increases if different portions of the visual field are considered as discrete visual sensors, each giving an independent directional estimate. This suggests that ants might benefit by processing information from their two eyes independently.
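The view-matching strategy above can be sketched as a rotational image difference: the agent compares its current low-resolution panorama against a stored view at every possible rotation and takes the best-matching rotation as its directional estimate. A minimal one-dimensional sketch; the function names and the toy eight-pixel panorama are illustrative, not from the paper:

```python
def rotational_difference(stored, current):
    """Sum-of-squared-differences between two 1D panoramic views
    for every circular shift of the current view."""
    n = len(stored)
    return [sum((stored[i] - current[(i + shift) % n]) ** 2 for i in range(n))
            for shift in range(n)]

def best_heading(stored, current):
    """Return the shift (in pixels) that best aligns current with stored."""
    diffs = rotational_difference(stored, current)
    return min(range(len(diffs)), key=diffs.__getitem__)

# A coarse 8-pixel panoramic skyline, then the same scene
# viewed after the agent has rotated by 3 pixels.
stored = [0.1, 0.9, 0.8, 0.2, 0.1, 0.1, 0.5, 0.4]
rotated = [stored[(i - 3) % 8] for i in range(8)]
print(best_heading(stored, rotated))  # -> 3
```

Lowering the panorama's resolution before matching trades pixel-level specificity for generalisation across nearby viewpoints, which is the trade-off the simulations explore.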
A sLORETA study for gaze-independent BCI speller.
Xingwei An; Jinwen Wei; Shuang Liu; Dong Ming
2017-07-01
EEG-based BCI (brain-computer interface) spellers, especially gaze-independent BCI spellers, have become a hot topic in recent years. They provide a direct spelling device, via a non-muscular method, for people with severe motor impairments and limited gaze movement. The brain must conduct both stimuli-driven and stimuli-related attention in the fast-presented paradigms used for such BCI speller applications. Few researchers have studied the mechanism of the brain's response to such fast-presented BCI applications. In this study, we compared the distribution of brain activation in visual, auditory, and audio-visual combined stimulus paradigms using sLORETA (standardized low-resolution brain electromagnetic tomography). Between-group comparisons showed the importance of both visual and auditory stimuli in the audio-visual combined paradigm. Both contribute to the activation of brain regions, with visual stimuli being the predominant stimuli. Brain regions related to visual stimuli were mainly located in the parietal and occipital lobes, whereas responses in the frontal and temporal lobes might be caused by auditory stimuli. These regions played an important role in audio-visual bimodal paradigms. These new findings are important for future studies of ERP spellers as well as of the mechanism of fast-presented stimuli.
Audiovisual associations alter the perception of low-level visual motion
Kafaligonul, Hulusi; Oluk, Can
2015-01-01
Motion perception is a pervasive aspect of vision and is affected both by the immediate pattern of sensory inputs and by prior experiences acquired through associations. Recently, several studies reported that an association can be established quickly between directions of visual motion and static sounds of distinct frequencies. After the association is formed, sounds are able to change the perceived direction of visual motion. To determine whether such rapidly acquired audiovisual associations and their subsequent influences on visual motion perception depend on the involvement of higher-order attentive tracking mechanisms, we designed psychophysical experiments using regular and reverse-phi random dot motions isolating low-level pre-attentive motion processing. Our results show that an association between the directions of low-level visual motion and static sounds can be formed and that this audiovisual association alters the subsequent perception of low-level visual motion. These findings support the view that audiovisual associations are not restricted to the high-level attention-based motion system and that early-level visual motion processing plays some role. PMID:25873869
Dineen, Brendan; Gilbert, Clare E; Rabiu, Mansur; Kyari, Fatima; Mahdi, Abdull M; Abubakar, Tafida; Ezelum, Christian C; Gabriel, Entekume; Elhassan, Elizabeth; Abiose, Adenike; Faal, Hannah; Jiya, Jonathan Y; Ozemela, Chinenyem P; Lee, Pak Sang; Gudlavalleti, Murthy VS
2008-01-01
Background Despite having the largest population in Africa, Nigeria has no accurate population-based data to plan and evaluate eye care services. A national survey was undertaken to estimate the prevalence and determine the major causes of blindness and low vision. This paper presents the detailed methodology used during the survey. Methods A nationally representative sample of persons aged 40 years and above was selected. Children aged 10–15 years and individuals aged <10 or 16–39 years with visual impairment were also included if they lived in households with an eligible adult. All participants had their height, weight, and blood pressure measured, followed by assessment of presenting visual acuity, refractokeratometry, A-scan ultrasonography, visual fields, and best corrected visual acuity. The anterior and posterior segments of each eye were examined with a torch and direct ophthalmoscope. Participants with visual acuity of ≤6/12 in one or both eyes underwent detailed examination, including applanation tonometry, dilated slit lamp biomicroscopy, lens grading, and fundus photography. All those who had undergone cataract surgery were refracted and best corrected vision recorded. Causes of visual impairment by eye and for the individual were determined using a clinical algorithm recommended by the World Health Organization. In addition, 1 in 7 adults also underwent the complete work-up described for those with vision ≤6/12, to construct a normative database for Nigerian eyes. Discussion The field work for the study was completed in 30 months over the period 2005–2007 and covered 305 clusters across the entire country. Analysis of the data is currently underway. Conclusion The methodology used was robust and adequate to provide estimates of the prevalence and causes of blindness in Nigeria.
The survey would also provide information on barriers to accessing services, quality of life of visually impaired individuals and also provide normative data for Nigerian eyes. PMID:18808712
The Mechanisms of Manual Therapy in the Treatment of Musculoskeletal Pain: A Comprehensive Model
Bialosky, Joel E; Bishop, Mark D; Price, Don D; Robinson, Michael E; George, Steven Z
2009-01-01
Prior studies suggest manual therapy (MT) as effective in the treatment of musculoskeletal pain; however, the mechanisms through which MT exerts its effects are not established. In this paper we present a comprehensive model to direct future studies in MT. This model provides visualization of potential individual mechanisms of MT that the current literature suggests as pertinent and provides a framework for the consideration of the potential interaction between these individual mechanisms. Specifically, this model suggests that a mechanical force from MT initiates a cascade of neurophysiological responses from the peripheral and central nervous system which are then responsible for the clinical outcomes. This model provides clear direction so that future studies may provide appropriate methodology to account for multiple potential pertinent mechanisms. PMID:19027342
Effects of continuous visual feedback during sitting balance training in chronic stroke survivors.
Pellegrino, Laura; Giannoni, Psiche; Marinelli, Lucio; Casadio, Maura
2017-10-16
Postural control deficits are common in stroke survivors, and rehabilitation programs often include balance training based on visual feedback to improve control of body position or of the voluntary shift of body weight in space. In the present work, a group of chronic stroke survivors, while sitting on a force plate, exercised the ability to control their Center of Pressure with training based on continuous visual feedback. The goal of this study was to test if, and to what extent, chronic stroke survivors were able to learn the task and transfer the learned ability to a condition without visual feedback and to directions and displacement amplitudes different from those experienced during training. Eleven chronic stroke survivors (5 male, 6 female; age: 59.72 ± 12.84 years) participated in this study. Subjects were seated on a stool positioned on top of a custom-built force platform. Their Center of Pressure positions were mapped to the coordinates of a cursor on a computer monitor. During training, the cursor position was always displayed, and subjects were required to reach targets by shifting their Center of Pressure through trunk movements. Before and after training, subjects were required to reach, without visual feedback of the cursor, the training targets as well as other targets positioned at different directions and displacement amplitudes. During training, most stroke survivors were able to perform the required task and to improve their performance in terms of duration, smoothness, and movement extent, although not in terms of movement direction. However, when we removed the visual feedback, most of them showed no improvement with respect to their pre-training performance. This study suggests that postural training based exclusively on continuous visual feedback provides limited benefits for stroke survivors if administered alone.
However, the positive gains observed during training justify the integration of this technology-based protocol in a well-structured and personalized physiotherapy training, where the combination of the two approaches may lead to functional recovery.
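The mapping from Center of Pressure to cursor used in this protocol is, in the simplest case, a linear rescaling from force-plate coordinates to screen pixels. A minimal sketch, with made-up plate ranges and screen resolution rather than the study's actual calibration:

```python
def cop_to_cursor(cop_xy, cop_range, screen_px):
    """Linearly map a Center of Pressure position (m) on the force plate
    to cursor pixel coordinates on the monitor."""
    x, y = cop_xy
    (xlo, xhi), (ylo, yhi) = cop_range
    w, h = screen_px
    return ((x - xlo) / (xhi - xlo) * w,
            (y - ylo) / (yhi - ylo) * h)

# A CoP at the plate centre maps to the screen centre
# (plate range and resolution here are illustrative values)
print(cop_to_cursor((0.0, 0.0), ((-0.15, 0.15), (-0.15, 0.15)), (800, 600)))
# -> (400.0, 300.0)
```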
A framework for visualization of battlefield network behavior
NASA Astrophysics Data System (ADS)
Perzov, Yury; Yurcik, William
2006-05-01
An extensible network simulation application was developed to study wireless battlefield communications. The application monitors node mobility and depicts broadcast and unicast traffic as expanding rings and directed links. The network simulation was specially designed to support fault injection to show the impact of air strikes on disabling nodes. The application takes standard ns-2 trace files as an input and provides for performance data output in different graphical forms (histograms and x/y plots). Network visualization via animation of simulation output can be saved in AVI format that may serve as a basis for a real-time battlefield awareness system.
DeSantis, Diana
2014-06-01
Amblyopia refers to unilateral or bilateral reduction in best corrected visual acuity, not directly attributed to structural abnormality of the eye or posterior visual pathways. Early detection of amblyopia is crucial to obtaining the best response to treatment. Amblyopia responds best to treatment in the first few years of life. In the past several years a series of studies undertaken by the Pediatric Eye Disease Investigator Group (PEDIG) have been designed to evaluate traditional methods for treating amblyopia and provide evidence on which to base treatment decisions. This article summarizes and discusses the findings of the PEDIG studies to date. Copyright © 2014 Elsevier Inc. All rights reserved.
Direct Imaging of Long-Range Exciton Transport in Quantum Dot Superlattices by Ultrafast Microscopy.
Yoon, Seog Joon; Guo, Zhi; Dos Santos Claro, Paula C; Shevchenko, Elena V; Huang, Libai
2016-07-26
Long-range charge and exciton transport in quantum dot (QD) solids is a crucial challenge in utilizing QDs for optoelectronic applications. Here, we present a direct visualization of exciton diffusion in highly ordered CdSe QD superlattices by mapping exciton population using ultrafast transient absorption microscopy. A temporal resolution of ∼200 fs and a spatial precision of ∼50 nm of this technique provide a direct assessment of the upper limit for exciton transport in QD solids. An exciton diffusion length of ∼125 nm has been visualized in the 3 ns experimental time window, and an exciton diffusion coefficient of (2.5 ± 0.2) × 10⁻² cm² s⁻¹ has been measured for superlattices constructed from 3.6 nm CdSe QDs with a center-to-center distance of 6.7 nm. The measured exciton diffusion constant is in good agreement with Förster resonance energy transfer theory. We have found that exciton diffusion is greatly enhanced in the superlattices over disordered films, with an order of magnitude higher diffusion coefficient, pointing toward the role of disorder in limiting transport. This study provides important understanding of energy transport mechanisms in both the spatial and temporal domains in QD solids.
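The reported diffusion length and diffusion coefficient can be checked for mutual consistency with the textbook one-dimensional relation L = √(2Dt). The numerical prefactor (1, 2, or 4) depends on the assumed dimensionality, so this is only a rough consistency check, not the authors' analysis:

```python
import math

D = 2.5e-2 * 1e-4   # measured diffusion coefficient, cm^2/s converted to m^2/s
t = 3e-9            # the 3 ns experimental time window, in s

# One-dimensional diffusion length L = sqrt(2 D t), expressed in nm
L_nm = math.sqrt(2 * D * t) * 1e9
print(round(L_nm))  # -> 122, close to the ~125 nm visualized
```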
Students' Usability Evaluation of a Web-Based Tutorial Program for College Biology Problem Solving
ERIC Educational Resources Information Center
Kim, H. S.; Prevost, L.; Lemons, P. P.
2015-01-01
The understanding of core concepts and processes of science in solving problems is important to successful learning in biology. We have designed and developed a Web-based, self-directed tutorial program, "SOLVEIT," that provides various scaffolds (e.g., prompts, expert models, visual guidance) to help college students enhance their…
How-to-Do-It: A Simulation of the Blood Type Test.
ERIC Educational Resources Information Center
Sharp, John D., Sr.; Smailes, Deborah L.
1989-01-01
Explains an activity that allows students to visualize antigen-antibody type reactions and learn about antibodies and antigens without performing blood typing tests. Provides directions for students and a comparison chart of a blood typing simulation with procedure which is based on the reactions of certain ionic solutions when mixed. (RT)
NASA Technical Reports Server (NTRS)
Lackner, J. R.; Levine, M. S.
1979-01-01
Human experiments are carried out which support the observation of Goodwin (1973) and Goodwin et al. (1972) that vibration of skeletal muscles can elicit illusory limb motion. These experiments extend the class of possible myesthetic illusions by showing that vibration of the appropriate muscles can produce illusory body motion in nearly any desired direction. Such illusory changes in posture occur only when visual information about body orientation is absent; these changes in apparent posture are sometimes accompanied by a slow-phase nystagmus that compensates for the direction of apparent body motion. During illusory body motion, a stationary target light that is fixated will appear to move with the body at the same apparent velocity. However, this pattern of apparent body motion and conjoint visual motion - defined as the propriogyral illusion - is suppressed if the subject is in a fully illuminated environment providing cues about true body orientation. Persuasive evidence is thus provided for the contribution of both muscle afferent and touch-pressure information to the supraspinal mechanisms that determine apparent orientation on the basis of ongoing patterns of interoceptive and exteroceptive activity.
High Performance Molecular Visualization: In-Situ and Parallel Rendering with EGL.
Stone, John E; Messmer, Peter; Sisneros, Robert; Schulten, Klaus
2016-05-01
Large scale molecular dynamics simulations produce terabytes of data that is impractical to transfer to remote facilities. It is therefore necessary to perform visualization tasks in-situ as the data are generated, or by running interactive remote visualization sessions and batch analyses co-located with direct access to high performance storage systems. A significant challenge for deploying visualization software within clouds, clusters, and supercomputers involves the operating system software required to initialize and manage graphics acceleration hardware. Recently, it has become possible for applications to use the Embedded-system Graphics Library (EGL) to eliminate the requirement for windowing system software on compute nodes, thereby eliminating a significant obstacle to broader use of high performance visualization applications. We outline the potential benefits of this approach in the context of visualization applications used in the cloud, on commodity clusters, and supercomputers. We discuss the implementation of EGL support in VMD, a widely used molecular visualization application, and we outline benefits of the approach for molecular visualization tasks on petascale computers, clouds, and remote visualization servers. We then provide a brief evaluation of the use of EGL in VMD, with tests using developmental graphics drivers on conventional workstations and on Amazon EC2 G2 GPU-accelerated cloud instance types. We expect that the techniques described here will be of broad benefit to many other visualization applications.
Stuart, Samuel; Galna, Brook; Delicato, Louise S; Lord, Sue; Rochester, Lynn
2017-07-01
Gait impairment is a core feature of Parkinson's disease (PD) and has been linked to cognitive and visual deficits, but interactions between these features are poorly understood. Monitoring saccades allows investigation of real-time cognitive and visual processes and their impact on gait when walking. This study explored: (i) saccade frequency when walking under different attentional manipulations of turning and dual-task; and (ii) direct and indirect relationships between saccades, gait impairment, vision, and attention. Saccade frequency (number of fast eye movements per second) was measured during gait in 60 PD and 40 age-matched control participants using a mobile eye-tracker. Saccade frequency was significantly reduced in PD compared to controls during all conditions. However, saccade frequency increased with a turn and decreased under dual-task for both groups. Poorer attention directly related to saccade frequency, visual function, and gait impairment in PD, but not controls. Saccade frequency did not directly relate to gait in PD, but did in controls. Instead, saccade frequency and visual function deficits indirectly impacted gait impairment in PD, underpinned by their relationship with attention. In conclusion, our results suggest a vital role for attention, with direct and indirect influences on gait impairment in PD. Attention directly impacted saccade frequency, visual function, and gait impairment in PD, with implications for falls. It also underpinned the indirect impact of visual and saccadic impairment on gait. Attention therefore represents a key therapeutic target that should be considered in future research. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Matsui, Teppei; Ohki, Kenichi
2013-01-01
Higher order visual areas that receive input from the primary visual cortex (V1) are specialized for the processing of distinct features of visual information. However, it is still incompletely understood how this functional specialization is acquired. Here we used in vivo two-photon calcium imaging in the mouse visual cortex to investigate whether this functional distinction exists as early as the level of projections from V1 to two higher order visual areas, AL and LM. Specifically, we examined whether the sharpness of orientation and direction selectivity and the optimal spatial and temporal frequency of projection neurons from V1 to higher order visual areas match those of the target areas. We found that the V1 inputs to higher order visual areas were indeed functionally distinct: AL preferentially received inputs from V1 that were more orientation and direction selective and tuned for lower spatial frequency compared to projections from V1 to LM, consistent with functional differences between AL and LM. The present findings suggest that selective projections from V1 to higher order visual areas initiate parallel processing of sensory information in the visual cortical network. PMID:24068987
A multi-criteria approach to camera motion design for volume data animation.
Hsu, Wei-Hsien; Zhang, Yubo; Ma, Kwan-Liu
2013-12-01
We present an integrated camera motion design and path generation system for building volume data animations. Creating animations is an essential task in presenting complex scientific visualizations. Existing visualization systems use an established animation function based on keyframes selected by the user. This approach is limited in providing the optimal in-between views of the data. Alternatively, camera motion planning in computer graphics and virtual reality is frequently focused on collision-free movement in a virtual walkthrough. For semi-transparent, fuzzy, or blobby volume data, the collision-free objective becomes insufficient. Here, we provide a set of essential criteria focused on computing camera paths to establish effective animations of volume data. Our dynamic multi-criteria solver, coupled with a force-directed routing algorithm, enables rapid generation of camera paths. Once users review the resulting animation and evaluate the camera motion, they are able to determine how each criterion impacts path generation. In this paper, we demonstrate how incorporating this animation approach with an interactive volume visualization system reduces the effort in creating context-aware and coherent animations. This frees the user to focus on visualization tasks with the objective of gaining additional insight from the volume data.
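The multi-criteria idea can be illustrated as a weighted sum of per-path penalty terms, which a solver would then minimize over candidate paths. A toy sketch with camera positions reduced to scalars and two made-up criteria (smoothness and length), not the paper's actual criteria set:

```python
def path_cost(path, criteria):
    """Total cost of a camera path as a weighted sum of criterion scores."""
    return sum(weight * criterion(path) for criterion, weight in criteria)

def smoothness(path):
    """Penalize sharp turns: sum of squared second differences."""
    return sum((path[i + 1] - 2 * path[i] + path[i - 1]) ** 2
               for i in range(1, len(path) - 1))

def length(path):
    """Total distance travelled along the path."""
    return sum(abs(path[i + 1] - path[i]) for i in range(len(path) - 1))

# Illustrative weights; a real solver would tune these per criterion
criteria = [(smoothness, 2.0), (length, 1.0)]
straight = [0.0, 1.0, 2.0, 3.0]
jagged = [0.0, 2.0, 1.0, 3.0]
print(path_cost(straight, criteria) < path_cost(jagged, criteria))  # -> True
```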
Illusory motion reversal is caused by rivalry, not by perceptual snapshots of the visual field.
Kline, Keith; Holcombe, Alex O; Eagleman, David M
2004-10-01
In stroboscopic conditions--such as motion pictures--rotating objects may appear to rotate in the reverse direction due to under-sampling (aliasing). A seemingly similar phenomenon occurs in constant sunlight, which has been taken as evidence that the visual system processes discrete "snapshots" of the outside world. But if snapshots are indeed taken of the visual field, then when a rotating drum appears to transiently reverse direction, its mirror image should always appear to reverse direction simultaneously. Contrary to this hypothesis, we found that when observers watched a rotating drum and its mirror image, almost all illusory motion reversals occurred for only one image at a time. This result indicates that the motion reversal illusion cannot be explained by snapshots of the visual field. The same result is found when the two images are presented within one visual hemifield, further ruling out the possibility that discrete sampling of the visual field occurs separately in each hemisphere. The frequency distribution of illusory reversal durations approximates a gamma distribution, suggesting perceptual rivalry as a better explanation for illusory motion reversal. After adaptation of motion detectors coding for the correct direction, the activity of motion-sensitive neurons coding for motion in the reverse direction may intermittently become dominant and drive the perception of motion.
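The rivalry account makes exactly the prediction the mirror-image experiment exploits: if each image's reversals arise from its own gamma-distributed alternation process, reversals in two independent streams should rarely coincide. A small simulation sketch; the shape, scale, and coincidence window are arbitrary choices, not fitted to the paper's data:

```python
import random

random.seed(1)

def reversal_times(n, shape=3.0, scale=1.0):
    """Cumulative times of n perceptual reversals, with gamma-distributed
    dominance durations (shape and scale here are illustrative)."""
    t, times = 0.0, []
    for _ in range(n):
        t += random.gammavariate(shape, scale)
        times.append(t)
    return times

def coincidences(a, b, window=0.1):
    """Count reversals in stream a falling within `window` of one in b."""
    return sum(any(abs(x - y) < window for y in b) for x in a)

a = reversal_times(200)
b = reversal_times(200)
# Independent rivalry processes predict few simultaneous reversals
print(coincidences(a, b) / 200 < 0.2)  # -> True
```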
Semantic bifurcated importance field visualization
NASA Astrophysics Data System (ADS)
Lindahl, Eric; Petrov, Plamen
2007-04-01
While there are many good ways to map sensed reality to two-dimensional displays, mapping non-physical and possibilistic information can be challenging. The advent of faster-than-real-time systems allows the predictive and possibilistic exploration of important factors that can affect the decision maker. Visualizing a compressed picture of past and possible factors can assist the decision maker by summarizing information in a cognitively based model, thereby reducing clutter and perhaps related decision times. Our proposed semantic bifurcated importance field visualization (SBIFV) uses saccadic eye motion models to partition the display into possibilistic and sensed data vertically, and spatial and semantic data horizontally. Saccadic eye movement precedes and prepares decision makers before nearly every directed action. Cognitive models of saccadic eye movement show that people prefer lateral to vertical saccadic movement. Studies have suggested that saccades may be coupled to momentary problem-solving strategies. Also, the central 1.5 degrees of the visual field has roughly 100 times greater resolution than the peripheral field, so concentrating factors there can reduce unnecessary saccades. By packing information according to saccadic models, we can relate important decision factors, reduce factor dimensionality, and present the dense summary dimensions of semantics and importance. Inter- and intra-saccade ballistics of the SBIFV provide important clues on how semantic packing assists in decision making. Future directions for the SBIFV are to make the visualization reactive and conformal to saccades, specializing targets to ballistics, such as dynamically filtering and highlighting verbal targets for left saccades and spatial targets for right saccades.
Joint representation of translational and rotational components of optic flow in parietal cortex
Sunkara, Adhira; DeAngelis, Gregory C.; Angelaki, Dora E.
2016-01-01
Terrestrial navigation naturally involves translations within the horizontal plane and eye rotations about a vertical (yaw) axis to track and fixate targets of interest. Neurons in the macaque ventral intraparietal (VIP) area are known to represent heading (the direction of self-translation) from optic flow in a manner that is tolerant to rotational visual cues generated during pursuit eye movements. Previous studies have also reported that eye rotations modulate the response gain of heading tuning curves in VIP neurons. We tested the hypothesis that VIP neurons simultaneously represent both heading and horizontal (yaw) eye rotation velocity by measuring heading tuning curves for a range of rotational velocities of either real or simulated eye movements. Three findings support the hypothesis of a joint representation. First, we show that rotation velocity selectivity based on gain modulations of visual heading tuning is similar to that measured during pure rotations. Second, gain modulations of heading tuning are similar for self-generated eye rotations and visually simulated rotations, indicating that the representation of rotation velocity in VIP is multimodal, driven by both visual and extraretinal signals. Third, we show that roughly one-half of VIP neurons jointly represent heading and rotation velocity in a multiplicatively separable manner. These results provide the first evidence, to our knowledge, for a joint representation of translation direction and rotation velocity in parietal cortex and show that rotation velocity can be represented based on visual cues, even in the absence of efference copy signals. PMID:27095846
Kanowski, M; Voges, J; Buentjen, L; Stadler, J; Heinze, H-J; Tempelmann, C
2014-09-01
The morphology of the human thalamus shows high interindividual variability. Therefore, direct visualization of landmarks within the thalamus is essential for an improved definition of electrode positions for deep brain stimulation. The aim of this study was to provide anatomic detail in the thalamus by using inversion recovery TSE imaging at 7T. The MR imaging protocol was optimized on 1 healthy subject to segment thalamic nuclei from one another. Final images, acquired with 0.5 × 0.5 mm² in-plane resolution and 3-mm section thickness, were compared with stereotactic brain atlases to assign visualized details to known anatomy. The robustness of the visualization of thalamic nuclei was assessed with 4 healthy subjects at lower image resolution. Thalamic subfields were successfully delineated in the dorsal aspect of the lateral thalamus. T1-weighting was essential. MR images had an appearance very similar to that of myelin-stained sections seen in brain atlases. Visualized intrathalamic structures were, among others, the lamella medialis, the external medullary lamina, the reticulatum thalami, the nucleus centre médian, the boundary between the nuclei dorso-oralis internus and externus, and the boundary between the nuclei dorso-oralis internus and zentrolateralis intermedius internus. Inversion recovery-prepared TSE imaging at 7T has a high potential to reveal fine anatomic detail in the thalamus, which may be helpful in enhancing the planning of stereotactic neurosurgery in the future. © 2014 by American Journal of Neuroradiology.
Decoding conjunctions of direction-of-motion and binocular disparity from human visual cortex.
Seymour, Kiley J; Clifford, Colin W G
2012-05-01
Motion and binocular disparity are two features in our environment that share a common correspondence problem. Decades of psychophysical research dedicated to understanding stereopsis suggest that these features interact early in human visual processing to disambiguate depth. Single-unit recordings in the monkey also provide evidence for the joint encoding of motion and disparity across much of the dorsal visual stream. Here, we used functional MRI and multivariate pattern analysis to examine where in the human brain conjunctions of motion and disparity are encoded. Subjects sequentially viewed two stimuli that could be distinguished only by their conjunctions of motion and disparity. Specifically, each stimulus contained the same feature information (leftward and rightward motion and crossed and uncrossed disparity) but differed exclusively in the way these features were paired. Our results revealed that a linear classifier could accurately decode which stimulus a subject was viewing based on voxel activation patterns throughout the dorsal visual areas and as early as V2. This decoding success was conditional on some voxels being individually sensitive to the unique conjunctions comprising each stimulus, thus a classifier could not rely on independent information about motion and binocular disparity to distinguish these conjunctions. This study expands on evidence that disparity and motion interact at many levels of human visual processing, particularly within the dorsal stream. It also lends support to the idea that stereopsis is subserved by early mechanisms also tuned to direction of motion.
Pasqualotto, Achille; Esenkaya, Tayfun
2016-01-01
Visual-to-auditory sensory substitution is used to convey visual information through audition, and it was initially created to compensate for blindness; it consists of software converting the visual images captured by a video-camera into the equivalent auditory images, or "soundscapes". Here, it was used by blindfolded sighted participants to learn the spatial position of simple shapes depicted in images arranged on the floor. Very few studies have used sensory substitution to investigate spatial representation, while it has been widely used to investigate object recognition. Additionally, with sensory substitution we could study the performance of participants actively exploring the environment through audition, rather than passively localizing sound sources. Blindfolded participants egocentrically learnt the position of six images by using sensory substitution and then a judgment of relative direction task (JRD) was used to determine how this scene was represented. This task consists of imagining being in a given location, oriented in a given direction, and pointing towards the required image. Before performing the JRD task, participants explored a map that provided allocentric information about the scene. Although spatial exploration was egocentric, surprisingly we found that performance in the JRD task was better for allocentric perspectives. This suggests that the egocentric representation of the scene was updated. This result is in line with previous studies using visual and somatosensory scenes, thus supporting the notion that different sensory modalities produce equivalent spatial representation(s). Moreover, our results have practical implications to improve training methods with sensory substitution devices (SSD).
Majerus, Steve; Cowan, Nelson; Péters, Frédéric; Van Calster, Laurens; Phillips, Christophe; Schrouff, Jessica
2016-01-01
Recent studies suggest common neural substrates involved in verbal and visual working memory (WM), interpreted as reflecting shared attention-based, short-term retention mechanisms. We used a machine-learning approach to determine more directly the extent to which common neural patterns characterize retention in verbal WM and visual WM. Verbal WM was assessed via a standard delayed probe recognition task for letter sequences of variable length. Visual WM was assessed via a visual array WM task involving the maintenance of variable amounts of visual information in the focus of attention. We trained a classifier to distinguish neural activation patterns associated with high- and low-visual WM load and tested the ability of this classifier to predict verbal WM load (high–low) from their associated neural activation patterns, and vice versa. We observed significant between-task prediction of load effects during WM maintenance, in posterior parietal and superior frontal regions of the dorsal attention network; in contrast, between-task prediction in sensory processing cortices was restricted to the encoding stage. Furthermore, between-task prediction of load effects was strongest in those participants presenting the highest capacity for the visual WM task. This study provides novel evidence for common, attention-based neural patterns supporting verbal and visual WM. PMID:25146374
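The between-task prediction logic described above (train a classifier on high- vs. low-load patterns from one WM task, test it on the other) can be sketched with simulated voxel patterns. Everything below is an assumption for illustration: the data are synthetic, the dimensions are arbitrary, and a nearest-centroid linear classifier stands in for whatever machine-learning model the authors used:

```python
import numpy as np

# Hedged sketch of cross-task decoding: if verbal and visual WM share a
# neural "load" pattern, a classifier trained on one task should
# transfer to the other. Simulated data only; names are illustrative.
rng = np.random.default_rng(42)
n_voxels, n_trials = 50, 40
load_axis = rng.standard_normal(n_voxels)   # shared load-related pattern

def simulate_task(labels, noise=1.0):
    """Patterns = shared load component * label (+1 high, -1 low) + noise."""
    signal = np.outer(labels, load_axis)
    return signal + noise * rng.standard_normal((len(labels), n_voxels))

labels = np.array([1, -1] * (n_trials // 2))
X_visual = simulate_task(labels)            # training task (visual WM)
X_verbal = simulate_task(labels)            # held-out task (verbal WM)

# Nearest-centroid linear classifier fit on the visual task only.
m_hi = X_visual[labels == 1].mean(0)
m_lo = X_visual[labels == -1].mean(0)
w = m_hi - m_lo
b = -w @ (m_hi + m_lo) / 2
pred = np.sign(X_verbal @ w + b)            # apply to the verbal task
accuracy = (pred == labels).mean()
print(f"between-task decoding accuracy: {accuracy:.2f}")
```

Above-chance transfer accuracy is the signature of a shared representation; if the two tasks engaged disjoint patterns, transfer would hover near 0.5.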
Premotor cortex is sensitive to auditory-visual congruence for biological motion.
Wuerger, Sophie M; Parkes, Laura; Lewis, Penelope A; Crocker-Buque, Alex; Rutschmann, Roland; Meyer, Georg F
2012-03-01
The auditory and visual perception systems have developed special processing strategies for ecologically valid motion stimuli, utilizing some of the statistical properties of the real world. A well-known example is the perception of biological motion, for example, the perception of a human walker. The aim of the current study was to identify the cortical network involved in the integration of auditory and visual biological motion signals. We first determined the cortical regions of auditory and visual coactivation (Experiment 1); a conjunction analysis based on unimodal brain activations identified four regions: middle temporal area, inferior parietal lobule, ventral premotor cortex, and cerebellum. The brain activations arising from bimodal motion stimuli (Experiment 2) were then analyzed within these regions of coactivation. Auditory footsteps were presented concurrently with either an intact visual point-light walker (biological motion) or a scrambled point-light walker; auditory and visual motion in depth (walking direction) could either be congruent or incongruent. Our main finding is that motion incongruency (across modalities) increases the activity in the ventral premotor cortex, but only if the visual point-light walker is intact. Our results extend our current knowledge by providing new evidence consistent with the idea that the premotor area assimilates information across the auditory and visual modalities by comparing the incoming sensory input with an internal representation.
Fujisawa, Junya; Touyama, Hideaki; Hirose, Michitaka
2008-01-01
In this paper, we focus on alpha band modulation during visual spatial attention in the absence of visual stimuli. Visual spatial attention has been expected to provide a new channel for non-invasive independent brain-computer interfaces (BCI), but little work has been done on this interfacing method. The flickering stimuli used in previous work reduce independency and are difficult to use in practice. We therefore investigated whether visual spatial attention could be detected without such stimuli. Furthermore, common spatial patterns (CSP) were applied for the first time to brain states during visual spatial attention. The performance evaluation was based on three brain states: attention to the left, right, and center directions. Thirty-channel scalp electroencephalographic (EEG) signals over occipital cortex were recorded from five subjects. Without CSP, the analyses yielded an average classification accuracy of 66.44% (range 55.42 to 72.27%) in discriminating the left and right attention classes. With CSP, the average classification accuracy was 75.39% (range 63.75 to 86.13%). These results suggest that CSP is useful in the context of visual spatial attention, and that alpha band modulation during visual spatial attention without flickering stimuli is a possible new channel for independent BCI, alongside motor imagery.
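CSP, the technique the study credits for the accuracy gain, finds spatial filters that maximize variance for one class while minimizing it for the other, via a generalized eigendecomposition of the two class covariance matrices. Below is a minimal numpy-only sketch under simulated data; the channel count, boosted channels, and trial structure are assumptions, not the study's montage:

```python
import numpy as np

# Minimal sketch of Common Spatial Patterns (CSP). Real use would apply
# this to band-passed EEG epochs; here two classes differ by which
# channel carries extra variance, purely for illustration.
def csp_filters(cov_a, cov_b):
    """Return spatial filters (rows of W) sorted by class-A variance."""
    # Whiten with respect to the composite covariance.
    evals, evecs = np.linalg.eigh(cov_a + cov_b)
    whiten = evecs @ np.diag(evals ** -0.5) @ evecs.T
    # Diagonalize class-A covariance in the whitened space.
    s, u = np.linalg.eigh(whiten @ cov_a @ whiten.T)
    order = np.argsort(s)[::-1]     # first rows favor class A, last class B
    return u[:, order].T @ whiten

rng = np.random.default_rng(1)
n_ch, n_samp = 8, 500
def trial(boost_ch):
    x = rng.standard_normal((n_ch, n_samp))
    x[boost_ch] *= 3.0              # class-specific variance boost
    return x

cov_a = np.mean([np.cov(trial(0)) for _ in range(20)], axis=0)
cov_b = np.mean([np.cov(trial(7)) for _ in range(20)], axis=0)
W = csp_filters(cov_a, cov_b)
# The first filter should emphasize channel 0, the last channel 7.
print(np.argmax(np.abs(W[0])), np.argmax(np.abs(W[-1])))
```

In a BCI pipeline, the log-variance of a few top and bottom CSP components per trial would then feed a simple classifier, which is what produces the kind of accuracy improvement the abstract reports.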
TU-G-BRA-02: Can We Extract Lung Function Directly From 4D-CT Without Deformable Image Registration?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kipritidis, J; Woodruff, H; Counter, W
Purpose: Dynamic CT ventilation imaging (CT-VI) visualizes air volume changes in the lung by evaluating breathing-induced lung motion using deformable image registration (DIR). Dynamic CT-VI could enable functionally adaptive lung cancer radiation therapy, but its sensitivity to DIR parameters poses challenges for validation. We hypothesize that a direct metric using CT parameters derived from Hounsfield units (HU) alone can provide similar ventilation images without DIR. We compare the accuracy of Direct and Dynamic CT-VIs versus positron emission tomography (PET) images of inhaled ⁶⁸Ga-labelled nanoparticles ('Galligas'). Methods: 25 patients with lung cancer underwent Galligas 4D-PET/CT scans prior to radiation therapy. For each patient we produced three CT-VIs. (i) Our novel method, Direct CT-VI, models blood-gas exchange as the product of air and tissue density at each lung voxel based on time-averaged 4D-CT HU values. Dynamic CT-VIs were produced by evaluating: (ii) regional HU changes, and (iii) regional volume changes between the exhale and inhale 4D-CT phase images using a validated B-spline DIR method. We assessed the accuracy of each CT-VI by computing the voxel-wise Spearman correlation with free-breathing Galligas PET, and also performed a visual analysis. Results: Surprisingly, Direct CT-VIs exhibited better global correlation with Galligas PET than either of the dynamic CT-VIs. The (mean ± SD) correlations were (0.55 ± 0.16), (0.41 ± 0.22) and (0.29 ± 0.27) for Direct, Dynamic HU-based and Dynamic volume-based CT-VIs respectively. Visual comparison of Direct CT-VI to PET demonstrated similarity for emphysema defects and ventral-to-dorsal gradients, but inability to identify decreased ventilation distal to tumor obstruction. Conclusion: Our data supports the hypothesis that Direct CT-VIs are as accurate as Dynamic CT-VIs in terms of global correlation with Galligas PET.
Visual analysis, however, demonstrated that different CT-VI algorithms might have varying accuracy depending on the underlying cause of ventilation abnormality. This research was supported by a National Health and Medical Research Council (NHMRC) Australia Fellowship, a Cancer Institute New South Wales Early Career Fellowship 13-ECF-1/15 and NHMRC scholarship APP1038399. No commercial funding was received for this work.
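The Direct CT-VI idea, blood-gas exchange modeled as the product of air and tissue density per voxel from time-averaged HU values, can be sketched directly. The HU-to-fraction model below (air at -1000 HU, tissue at ~0 HU, linear in between) is a standard assumption, not a transcription of the authors' implementation, and the "PET" image is simulated:

```python
import numpy as np

# Hedged sketch of a Direct CT ventilation metric from HU values alone,
# plus a voxel-wise Spearman correlation for validation against PET.
def direct_ctvi(hu):
    hu = np.clip(hu, -1000.0, 0.0)
    frac_air = -hu / 1000.0            # 1.0 at -1000 HU, 0.0 at 0 HU
    frac_tissue = 1.0 - frac_air
    return frac_air * frac_tissue      # peaks where air and tissue mix

def spearman(x, y):
    """Spearman rank correlation (assumes no ties), numpy only."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return (rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry))

# Toy voxel-wise comparison against a simulated ventilation PET image.
rng = np.random.default_rng(0)
hu = rng.uniform(-950.0, -150.0, size=1000)   # lung-like HU values
pet = direct_ctvi(hu) + 0.05 * rng.standard_normal(1000)
print(f"Spearman rho: {spearman(direct_ctvi(hu), pet):.2f}")
```

The appeal of the direct metric is visible in the code: it needs no exhale/inhale registration step, so it cannot inherit DIR parameter sensitivity.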
Flight directions of passerine migrants in daylight and darkness: A radar and direct visual study
NASA Technical Reports Server (NTRS)
Gauthreaux, S. A., Jr.
1972-01-01
The application of radar and visual techniques to determine the migratory habits of passerine birds during daylight and darkness is discussed. The effects of wind on the direction of migration are examined. Scatter diagrams of daytime and nocturnal migration track directions correlated with wind direction are presented. It is concluded that migratory birds will fly at altitudes where wind direction and migratory direction are nearly the same. The effects of cloud cover and solar obscuration are considered negligible.
NASA Astrophysics Data System (ADS)
Chee, T.; Nguyen, L.; Smith, W. L., Jr.; Spangenberg, D.; Palikonda, R.; Bedka, K. M.; Minnis, P.; Thieman, M. M.; Nordeen, M.
2017-12-01
Providing public access to research products, including cloud macro- and microphysical properties and satellite imagery, is a key concern for the NASA Langley Research Center Cloud and Radiation Group. This work describes a web-based visualization tool and API that allow end users to easily create customized cloud product and satellite imagery, ground site data, and satellite ground track information, all generated dynamically. The tool has two uses: one to visualize the dynamically created imagery, and the other to provide direct access to the dynamically generated imagery at a later time. Internally, we leverage our practical experience with large, scalable application practices to develop a system that has the greatest potential for scalability, as well as the ability to be deployed on the cloud to accommodate scalability issues. We build upon the NASA Langley Cloud and Radiation Group's experience with making real-time and historical satellite cloud product information, satellite imagery, ground site data and satellite track information accessible and easily searchable. This tool is the culmination of our prior experience with dynamic imagery generation and provides a way to build a "mash-up" of dynamically generated imagery and related kinds of information that are visualized together to add value to disparate but related information. In support of NASA strategic goals, our group aims to make as much scientific knowledge, observations and products as possible available to the citizen-science, research and interested communities, as well as to automated systems that can acquire the same information for data mining or other analytic purposes. This tool and the underlying APIs provide a valuable research tool to a wide audience, both as a standalone research tool and as an easily accessed data source that can be mined or used with existing tools.
Nakashima, Ryoichi; Iwai, Ritsuko; Ueda, Sayako; Kumada, Takatsune
2015-01-01
When observers perceive several objects in a space at the same time, they must also effectively perceive their own position as a viewpoint. However, little is known about observers' perception of their own spatial location based on the visual scene information viewed from that position. Previous studies indicate that two distinct visual spatial processes exist in the locomotion situation: egocentric position perception and egocentric direction perception. Those studies examined such perceptions in information-rich visual environments where much dynamic and static visual information was available. This study examined these two perceptions in information-impoverished environments, including only static lane edge information (i.e., limited information). We investigated the visual factors associated with static lane edge information that may affect these perceptions. Specifically, we examined the effects of two factors on egocentric direction and position perceptions. One is the "uprightness factor": "far" visual information is seen at a higher location than "near" visual information. The other is the "central vision factor": observers usually look at "far" visual information using central (i.e., foveal) vision, whereas they view "near" visual information using peripheral vision. Experiment 1 examined the effect of the uprightness factor using normal and inverted road images. Experiment 2 examined the effect of the central vision factor using normal and transposed road images, where the upper half of the normal image was presented below the lower half. Experiment 3 aimed to replicate the results of Experiments 1 and 2. Results showed that egocentric direction perception is impaired by image inversion or image transposition, whereas egocentric position perception is robust against these image transformations. That is, both the uprightness and central vision factors are important for egocentric direction perception, but not for egocentric position perception.
Therefore, the two visual spatial perceptions about observers’ own viewpoints are fundamentally dissociable. PMID:26648895
NASA Astrophysics Data System (ADS)
Kilb, D.; Reif, C.; Peach, C.; Keen, C. S.; Smith, B.; Mellors, R. J.
2003-12-01
Within the last year scientists and educators at the Scripps Institution of Oceanography (SIO), the Birch Aquarium at Scripps and San Diego State University have collaborated with education specialists to develop 3D interactive graphic teaching modules for use in the classroom and in teacher workshops at the SIO Visualization center (http://siovizcenter.ucsd.edu). The unique aspect of the SIO Visualization center is that the center is designed around a 120 degree curved Panoram floor-to-ceiling screen (8'6" by 28'4") that immerses viewers in a virtual environment. The center is powered by an SGI 3400 Onyx computer that is more powerful, by an order of magnitude in both speed and memory, than typical base systems currently used for education and outreach presentations. This technology allows us to display multiple 3D data layers (e.g., seismicity, high resolution topography, seismic reflectivity, draped interferometric synthetic aperture radar (InSAR) images, etc.) simultaneously, render them in 3D stereo, and take a virtual flight through the data as dictated on the spot by the user. This system can also render snapshots, images and movies that are too big for other systems, and then export smaller size end-products to more commonly used computer systems. Since early 2002, we have explored various ways to provide informal education and outreach focusing on current research presented directly by the researchers doing the work. The Center currently provides a centerpiece for instruction on southern California seismology for K-12 students and teachers for various Scripps education endeavors. Future plans are in place to use the Visualization Center at Scripps for extended K-12 and college educational programs. 
In particular, we will be identifying K-12 curriculum needs, assisting with teacher education, developing assessments of our programs and products, producing web-accessible teaching modules and facilitating the development of appropriate teaching tools to be used directly by classroom teachers.
Acton, Jennifer H; Molik, Bablin; Binns, Alison; Court, Helen; Margrain, Tom H
2016-02-24
Visual Rehabilitation Officers help people with a visual impairment maintain their independence. This intervention adopts a flexible, goal-centred approach, which may include training in mobility, use of optical and non-optical aids, and performance of activities of daily living. Although Visual Rehabilitation Officers are an integral part of the low vision service in the United Kingdom, evidence that they are effective is lacking. The purpose of this exploratory trial is to estimate the impact of a Visual Rehabilitation Officer on self-reported visual function, psychosocial and quality-of-life outcomes in individuals with low vision. In this exploratory, assessor-masked, parallel group, randomised controlled trial, participants will be allocated either to receive home visits from a Visual Rehabilitation Officer (n = 30) or to a waiting list control group (n = 30) in a 1:1 ratio. Adult volunteers with a visual impairment, who have been identified as needing rehabilitation officer input by a social worker, will take part. Those with an urgent need for a Visual Rehabilitation Officer or who have a cognitive impairment will be excluded. The primary outcome measure will be self-reported visual function (48-item Veterans Affairs Low Vision Visual Functioning Questionnaire). Secondary outcome measures will include psychological and quality-of-life metrics: the Patient Health Questionnaire (PHQ-9), the Warwick-Edinburgh Mental Well-being Scale (WEMWBS), the Adjustment to Age-related Visual Loss Scale (AVL-12), the Standardised Health-related Quality of Life Questionnaire (EQ-5D) and the UCLA Loneliness Scale. The interviewer collecting the outcomes will be masked to the group allocations. The analysis will be undertaken on a complete case and intention-to-treat basis. Analysis of covariance (ANCOVA) will be applied to follow-up questionnaire scores, with the baseline score as a covariate. 
This trial is expected to provide robust effect size estimates of the intervention effect. The data will be used to design a large-scale randomised controlled trial to evaluate fully the Visual Rehabilitation Officer intervention. A rigorous evaluation of Rehabilitation Officer input is vital to direct a future low vision rehabilitation strategy and to help direct government resources. The trial was registered (ISRCTN44807874) on 9 March 2015.
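The planned analysis above, ANCOVA on follow-up questionnaire scores with the baseline score as a covariate, amounts to an ordinary least-squares regression of follow-up score on group membership plus baseline. The sketch below uses simulated scores with a made-up effect size; only the analytic structure mirrors the protocol:

```python
import numpy as np

# Illustrative ANCOVA via OLS: follow-up ~ intercept + baseline + group.
# All numbers (sample size, effect, score scale) are hypothetical.
rng = np.random.default_rng(7)
n = 60
group = np.repeat([0, 1], n // 2)        # 0 = waiting list, 1 = rehab officer
baseline = rng.normal(50.0, 10.0, n)     # baseline questionnaire score
true_effect = 5.0                        # hypothetical benefit, in points
followup = 0.8 * baseline + true_effect * group + rng.normal(0.0, 5.0, n)

X = np.column_stack([np.ones(n), baseline, group])
coef, *_ = np.linalg.lstsq(X, followup, rcond=None)
intercept, b_baseline, b_group = coef
print(f"adjusted group effect: {b_group:.1f} points")
```

Adjusting for baseline in this way typically yields a more precise group-effect estimate than comparing raw change scores, which is why protocols like this one pre-specify ANCOVA.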
Retinal Origin of Direction Selectivity in the Superior Colliculus
Shi, Xuefeng; Barchini, Jad; Ledesma, Hector Acaron; Koren, David; Jin, Yanjiao; Liu, Xiaorong; Wei, Wei; Cang, Jianhua
2017-01-01
Detecting visual features in the environment, such as motion direction, is crucial for survival. The circuit mechanisms that give rise to direction selectivity in a major visual center, the superior colliculus (SC), are entirely unknown. Here, we optogenetically isolate the retinal inputs that individual direction-selective SC neurons receive and find that they are already selective as a result of precisely converging inputs from similarly tuned retinal ganglion cells. The direction-selective retinal input is linearly amplified by the intracollicular circuits without changing its preferred direction or level of selectivity. Finally, using two-photon calcium imaging, we show that SC direction selectivity is dramatically reduced in transgenic mice that have decreased retinal selectivity. Together, our studies demonstrate a retinal origin of direction selectivity in the SC and reveal a central visual deficit as a consequence of altered feature selectivity in the retina. PMID:28192394
Visualization of migration of human cortical neurons generated from induced pluripotent stem cells.
Bamba, Yohei; Kanemura, Yonehiro; Okano, Hideyuki; Yamasaki, Mami
2017-09-01
Neuronal migration is considered a key process in human brain development. However, direct observation of migrating human cortical neurons in the fetal brain raises ethical concerns and is a major obstacle to investigating human cortical neuronal migration. We established a novel system that enables direct visualization of migrating cortical neurons generated from human induced pluripotent stem cells (hiPSCs). We observed the migration of cortical neurons generated from hiPSCs derived from a control and from a patient with lissencephaly. Our system requires no viable brain tissue, which slice culture usually demands. The migratory behavior of human cortical neurons can be observed more easily and more vividly, via their fluorescence and glial scaffold, than with earlier methods. Our in vitro experimental system provides a new platform for investigating development of the human central nervous system and brain malformation. Copyright © 2017 Elsevier B.V. All rights reserved.
Harvey, Ben M; Dumoulin, Serge O
2016-02-15
Several studies demonstrate that visual stimulus motion affects neural receptive fields and fMRI response amplitudes. Here we unite results of these two approaches and extend them by examining the effects of visual motion on neural position preferences throughout the hierarchy of human visual field maps. We measured population receptive field (pRF) properties using high-field fMRI (7T), characterizing position preferences simultaneously over large regions of the visual cortex. We measured pRF properties using sine wave gratings in stationary apertures, moving at various speeds in either the direction of pRF measurement or the orthogonal direction. We find direction- and speed-dependent changes in pRF preferred position and size in all visual field maps examined, including V1, V3A, and the MT+ map TO1. These effects on pRF properties increase up the hierarchy of visual field maps. However, both within and between visual field maps the extent of pRF changes was approximately proportional to pRF size. This suggests that visual motion transforms the representation of visual space similarly throughout the visual hierarchy. Visual motion can also produce an illusory displacement of perceived stimulus position. We demonstrate perceptual displacements using the same stimulus configuration. In contrast to effects on pRF properties, perceptual displacements show only weak effects of motion speed, with far larger speed-independent effects. We describe a model where low-level mechanisms could underlie the observed effects on neural position preferences. We conclude that visual motion induces similar transformations of visuo-spatial representations throughout the visual hierarchy, which may arise through low-level mechanisms. Copyright © 2015 Elsevier Inc. All rights reserved.
Priming and the guidance by visual and categorical templates in visual search.
Wilschut, Anna; Theeuwes, Jan; Olivers, Christian N L
2014-01-01
Visual search is thought to be guided by top-down templates that are held in visual working memory. Previous studies have shown that a search-guiding template can be rapidly and strongly implemented from a visual cue, whereas templates are less effective when based on categorical cues. Direct visual priming from cue to target may underlie this difference. In two experiments we first asked observers to remember two possible target colors. A postcue then indicated which of the two would be the relevant color. The task was to locate a briefly presented and masked target of the cued color among irrelevant distractor items. Experiment 1 showed that overall search accuracy improved more rapidly on the basis of a direct visual postcue that carried the target color, compared to a neutral postcue that pointed to the memorized color. However, selectivity toward the target feature, i.e., the extent to which observers searched selectively among items of the cued vs. uncued color, was found to be relatively unaffected by the presence of the visual signal. In Experiment 2 we compared search that was based on either visual or categorical information, but now controlled for direct visual priming. This resulted in no differences in either overall performance or selectivity. Altogether the results suggest that perceptual processing of visual search targets is facilitated by priming from visual cues, whereas attentional selectivity is enhanced by a working memory template that can be formed from both visual and categorical input. Furthermore, if the priming is controlled for, categorical- and visual-based templates similarly enhance search guidance.
Spatial updating in area LIP is independent of saccade direction.
Heiser, Laura M; Colby, Carol L
2006-05-01
We explore the world around us by making rapid eye movements to objects of interest. Remarkably, these eye movements go unnoticed, and we perceive the world as stable. Spatial updating is one of the neural mechanisms that contributes to this perception of spatial constancy. Previous studies in macaque lateral intraparietal cortex (area LIP) have shown that individual neurons update, or "remap," the locations of salient visual stimuli at the time of an eye movement. The existence of remapping implies that neurons have access to visual information from regions far beyond the classically defined receptive field. We hypothesized that neurons have access to information located anywhere in the visual field. We tested this by recording the activity of LIP neurons while systematically varying the direction in which a stimulus location must be updated. Our primary finding is that individual neurons remap stimulus traces in multiple directions, indicating that LIP neurons have access to information throughout the visual field. At the population level, stimulus traces are updated in conjunction with all saccade directions, even when we consider direction as a function of receptive field location. These results show that spatial updating in LIP is effectively independent of saccade direction. Our findings support the hypothesis that the activity of LIP neurons contributes to the maintenance of spatial constancy throughout the visual field.
Implicit and Explicit Representations of Hand Position in Tool Use
Rand, Miya K.; Heuer, Herbert
2013-01-01
Understanding the interactions of visual and proprioceptive information in tool use is important as it is the basis for learning of the tool's kinematic transformation and thus skilled performance. This study investigated how the CNS combines seen cursor positions and felt hand positions under a visuo-motor rotation paradigm. Young and older adult participants performed aiming movements on a digitizer while looking at rotated visual feedback on a monitor. After each movement, they judged either the proprioceptively sensed hand direction or the visually sensed cursor direction. We identified asymmetric mutual biases with a strong visual dominance. Furthermore, we found a number of differences between explicit and implicit judgments of hand directions. The explicit judgments had considerably larger variability than the implicit judgments. The bias toward the cursor direction for the explicit judgments was about twice as strong as for the implicit judgments. The individual biases of explicit and implicit judgments were uncorrelated. Biases of these judgments exhibited opposite sequential effects. Moreover, age-related changes were also different between these judgments. The judgment variability was decreased and the bias toward the cursor direction was increased with increasing age only for the explicit judgments. These results indicate distinct explicit and implicit neural representations of hand direction, similar to the notion of distinct visual systems. PMID:23894307
Testing of Visual Field with Virtual Reality Goggles in Manual and Visual Grasp Modes
Wroblewski, Dariusz; Francis, Brian A.; Sadun, Alfredo; Vakili, Ghazal; Chopra, Vikas
2014-01-01
Automated perimetry is used for the assessment of visual function in a variety of ophthalmic and neurologic diseases. We report development and clinical testing of a compact, head-mounted, and eye-tracking perimeter (VirtualEye) that provides a more comfortable test environment than the standard instrumentation. VirtualEye performs the equivalent of a full threshold 24-2 visual field in two modes: (1) manual, with patient response registered with a mouse click, and (2) visual grasp, where the eye tracker senses change in gaze direction as evidence of target acquisition. 59 patients successfully completed the test in manual mode and 40 in visual grasp mode, with 59 undergoing the standard Humphrey field analyzer (HFA) testing. Large visual field defects were reliably detected by VirtualEye. Point-by-point comparison between the results obtained with the different modalities indicates: (1) minimal systematic differences between measurements taken in visual grasp and manual modes, (2) the average standard deviation of the difference distributions of about 5 dB, and (3) a systematic shift (of 4–6 dB) to lower sensitivities for VirtualEye device, observed mostly in high dB range. The usability survey suggested patients' acceptance of the head-mounted device. The study appears to validate the concepts of a head-mounted perimeter and the visual grasp mode. PMID:25050326
JOHN, KEVIN K.; JENSEN, JAKOB D.; KING, ANDY J.; RATCLIFF, CHELSEA L.; GROSSMAN, DOUGLAS
2017-01-01
Skin self-examination (SSE) consists of routinely checking the body for atypical moles that might be cancerous. Identifying atypical moles is a visual task; thus, SSE training materials utilize pattern-focused visuals to cultivate this skill. Despite widespread use, researchers have yet to explicate how pattern-focused visuals cultivate visual skill. Using eye tracking to capture the visual scanpaths of a sample of laypersons (N = 92), the current study employed a 2 (pattern: ABCDE vs. ugly duckling sign [UDS]) × 2 (presentation: photorealistic images vs. illustrations) factorial design to assess whether and how pattern-focused visuals can increase layperson accuracy in identifying atypical moles. Overall, illustrations resulted in greater sensitivity, while photos resulted in greater specificity. The UDS × photorealistic condition showed greatest specificity. For those in the photo condition with high self-efficacy, UDS increased specificity directly. For those in the photo condition with self-efficacy levels at the mean or lower, there was a conditional indirect effect such that these individuals spent a larger amount of their viewing time observing the atypical moles, and time on target was positively related to specificity. Illustrations provided significant gains in specificity for those with low-to-moderate self-efficacy by increasing total fixation time on the atypical moles. Findings suggest that maximizing visual processing efficiency could enhance existing SSE training techniques. PMID:28759333
Parabolic aircraft solidification experiments
NASA Technical Reports Server (NTRS)
Workman, Gary L. (Principal Investigator); Smith, Guy A.; OBrien, Susan
1996-01-01
A number of solidification experiments have been utilized throughout the Materials Processing in Space Program to provide an experimental environment which minimizes variables in solidification experiments. Two techniques of interest are directional solidification and isothermal casting. Because of the wide-spread use of these experimental techniques in space-based research, several MSAD experiments have been manifested for space flight. In addition to the microstructural analysis performed to interpret the experimental results from previous work with parabolic flights, it has become apparent that the phenomena occurring during solidification could be better understood if direct visualization of the solidification interface were possible. Our university has participated in several experimental studies of this kind in recent years. The most recent involved visualizing the effect of convective flow phenomena on the KC-135, and prior to that were several successive contracts to perform directional solidification and isothermal casting experiments on the KC-135. Included in this work was the modification and utilization of the Convective Flow Analyzer (CFA), the Aircraft Isothermal Casting Furnace (ICF), and the Three-Zone Directional Solidification Furnace. These studies have contributed heavily to the mission of the Microgravity Science and Applications Division's Materials Science Program.
Gaglianese, A; Costagli, M; Ueno, K; Ricciardi, E; Bernardi, G; Pietrini, P; Cheng, K
2015-01-22
The main visual pathway that conveys motion information to the middle temporal complex (hMT+) originates from the primary visual cortex (V1), which, in turn, receives spatial and temporal features of the perceived stimuli from the lateral geniculate nucleus (LGN). In addition, visual motion information reaches hMT+ directly from the thalamus, bypassing V1. We aimed at elucidating whether this direct route between LGN and hMT+ represents a 'fast lane' reserved for high-speed motion, as proposed previously, or is merely involved in processing motion information irrespective of speed. We evaluated functional magnetic resonance imaging (fMRI) responses elicited by moving visual stimuli and applied connectivity analyses to investigate the effect of motion speed on the causal influence between LGN and hMT+, independent of V1, using Conditional Granger Causality (CGC) in the presence of slow and fast visual stimuli. Our results showed that at least part of the visual motion information from LGN reaches hMT+, bypassing V1, in response to both slow and fast motion speeds of the perceived stimuli. We also investigated whether motion speeds have different effects on the connections between LGN and functional subdivisions within hMT+: direct connections between LGN and MT-proper carry mainly slow motion information, while connections between LGN and MST carry mainly fast motion information. The existence of a parallel pathway that connects the LGN directly to hMT+ in response to both slow and fast speeds may explain why MT and MST can still respond in the presence of V1 lesions. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.
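The logic behind a Granger-causality analysis can be illustrated with a minimal pairwise sketch: a signal x "Granger-causes" y if past values of x reduce the error of predicting y beyond what the past of y alone achieves. The function below is a simplified, hypothetical illustration only; the study itself used Conditional Granger Causality, which additionally partials out a third signal (here, V1).

```python
import numpy as np

def granger_gain(x, y, lag=2):
    # Log ratio of residual variances: how much the past of x improves
    # prediction of y over the past of y alone (pairwise sketch only;
    # conditional GC would also regress out a third signal).
    rows = range(lag, len(y))
    past_y = np.array([y[t - lag:t] for t in rows])
    past_xy = np.array([np.r_[y[t - lag:t], x[t - lag:t]] for t in rows])
    target = y[lag:]

    def resid_var(regressors):
        design = np.column_stack([np.ones(len(regressors)), regressors])
        beta, *_ = np.linalg.lstsq(design, target, rcond=None)
        return np.var(target - design @ beta)

    return np.log(resid_var(past_y) / resid_var(past_xy))
```

With synthetic data in which y lags x, the gain from x to y should clearly exceed the gain in the reverse direction.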
Learning semantic and visual similarity for endomicroscopy video retrieval.
Andre, Barbara; Vercauteren, Tom; Buchner, Anna M; Wallace, Michael B; Ayache, Nicholas
2012-06-01
Content-based image retrieval (CBIR) is a valuable computer vision technique which is increasingly being applied in the medical community for diagnosis support. However, traditional CBIR systems only deliver visual outputs, i.e., images having a similar appearance to the query, which is not directly interpretable by the physicians. Our objective is to provide a system for endomicroscopy video retrieval which delivers both visual and semantic outputs that are consistent with each other. In a previous study, we developed an adapted bag-of-visual-words method for endomicroscopy retrieval, called "Dense-Sift," that computes a visual signature for each video. In this paper, we present a novel approach to complement visual similarity learning with semantic knowledge extraction, in the field of in vivo endomicroscopy. We first leverage a semantic ground truth based on eight binary concepts, in order to transform these visual signatures into semantic signatures that reflect how much the presence of each semantic concept is expressed by the visual words describing the videos. Using cross-validation, we demonstrate that, in terms of semantic detection, our intuitive Fisher-based method transforming visual-word histograms into semantic estimations outperforms support vector machine (SVM) methods with statistical significance. In a second step, we propose to improve retrieval relevance by learning an adjusted similarity distance from a perceived similarity ground truth. As a result, our distance learning method allows us to statistically improve the correlation with the perceived similarity. We also demonstrate that, in terms of perceived similarity, the recall performance of the semantic signatures is close to that of the visual signatures and significantly better than those of several state-of-the-art CBIR methods. The semantic signatures are thus able to communicate high-level medical knowledge while remaining consistent with the low-level visual signatures, and they are much shorter.
In our resulting retrieval system, we use visual signatures for perceived similarity learning and retrieval, and semantic signatures to output additional information, expressed in the endoscopist's own language, which provides a relevant semantic translation of the visual retrieval outputs.
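The general idea of turning a visual-word histogram into a semantic signature can be sketched as a weighted projection: each visual word contributes to each binary concept in proportion to how often the two co-occurred in annotated training data. This is a hedged illustration of the idea, not the authors' Fisher-based estimator, and all names and numbers below are invented for the example.

```python
import numpy as np

def semantic_signature(histogram, cooccurrence, word_totals):
    # cooccurrence[w, c]: training count of visual word w in videos
    #                     annotated with semantic concept c
    # word_totals[w]:     total training count of visual word w
    weights = cooccurrence / word_totals[:, None]  # ~ P(concept | word)
    return histogram @ weights  # expected expression of each concept

# Toy example: 3 visual words, 2 binary concepts.
cooc = np.array([[2.0, 0.0],
                 [1.0, 1.0],
                 [0.0, 2.0]])
totals = np.array([2.0, 2.0, 2.0])
signature = semantic_signature(np.array([4.0, 2.0, 0.0]), cooc, totals)
```

The resulting signature has one entry per concept, which is why such signatures can be far shorter than the visual-word histograms they summarize.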
Directional analysis and filtering for dust storm detection in NOAA-AVHRR imagery
NASA Astrophysics Data System (ADS)
Janugani, S.; Jayaram, V.; Cabrera, S. D.; Rosiles, J. G.; Gill, T. E.; Rivera Rivera, N.
2009-05-01
In this paper, we propose spatio-spectral processing techniques for the detection of dust storms and the automatic determination of their transport direction in 5-band NOAA-AVHRR imagery. Previous methods that use simple band math analysis have produced promising results but have drawbacks in producing consistent results when low signal-to-noise ratio (SNR) images are used. Moreover, in seeking to automate dust storm detection, the presence of clouds in the vicinity of a dust storm makes it challenging to distinguish these two types of image texture. This paper not only addresses the detection of dust storms in the imagery but also attempts to find the transport direction and the location of the sources of a dust storm. We propose a spatio-spectral processing approach with two components: visualization and automation. Both are based on digital image processing techniques, including directional analysis and filtering. The visualization technique is intended to enhance the image in order to locate the dust sources. The automation technique is proposed to detect the transport direction of the dust storm. These techniques can be used in a system to provide timely warnings of dust storms or hazard assessments for transportation, aviation, environmental safety, and public health.
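Directional analysis of imagery is commonly built on a structure-tensor estimate of the dominant local orientation, computed from intensity gradients. The sketch below is a generic image-processing illustration of that building block, under the assumption of a single 2-D intensity patch; it is not the authors' AVHRR pipeline.

```python
import numpy as np

def dominant_direction(patch):
    # Central-difference gradients of a 2-D intensity patch.
    gy, gx = np.gradient(patch.astype(float))
    # The structure tensor (summed squared gradients) yields the
    # dominant edge/texture orientation, in radians, modulo pi.
    jxx = np.sum(gx * gx)
    jyy = np.sum(gy * gy)
    jxy = np.sum(gx * gy)
    return 0.5 * np.arctan2(2 * jxy, jxx - jyy)
```

For a patch of vertical stripes the estimate is near 0 rad; transposing the patch rotates the estimate by roughly pi/2, as expected for a 90-degree rotation of the texture.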
Markers of preparatory attention predict visual short-term memory performance.
Murray, Alexandra M; Nobre, Anna C; Stokes, Mark G
2011-05-01
Visual short-term memory (VSTM) is limited in capacity. Therefore, it is important to encode only visual information that is most likely to be relevant to behaviour. Here we asked which aspects of selective biasing of VSTM encoding predict subsequent memory-based performance. We measured EEG during a selective VSTM encoding task, in which we varied parametrically the memory load and the precision of recall required to compare a remembered item to a subsequent probe item. On half the trials, a spatial cue indicated that participants only needed to encode items from one hemifield. We observed a typical sequence of markers of anticipatory spatial attention: early directing attention negativity (EDAN), anterior directing attention negativity (ADAN), and late directing attention positivity (LDAP); as well as a marker of VSTM maintenance: contralateral delay activity (CDA). We found that individual differences in preparatory brain activity (EDAN/ADAN) predicted cue-related changes in recall accuracy, indexed by memory-probe discrimination sensitivity (d'). Importantly, our parametric manipulation of memory-probe similarity also allowed us to model the behavioural data for each participant, providing estimates of the quality of the memory representation and the probability that an item could be retrieved. We found that selective encoding primarily increased the probability of accurate memory recall, and that ERP markers of preparatory attention predicted the cue-related changes in recall probability. Copyright © 2011. Published by Elsevier Ltd.
Zeitoun, Jack H.; Kim, Hyungtae
2017-01-01
Binocular mechanisms for visual processing are thought to enhance spatial acuity by combining matched input from the two eyes. Studies in the primary visual cortex of carnivores and primates have confirmed that eye-specific neuronal response properties are largely matched. In recent years, the mouse has emerged as a prominent model for binocular visual processing, yet little is known about the spatial frequency tuning of binocular responses in mouse visual cortex. Using calcium imaging in awake mice of both sexes, we show that the spatial frequency preference of cortical responses to the contralateral eye is ∼35% higher than responses to the ipsilateral eye. Furthermore, we find that neurons in binocular visual cortex that respond only to the contralateral eye are tuned to higher spatial frequencies. Binocular neurons that are well matched in spatial frequency preference are also matched in orientation preference. In contrast, we observe that binocularly mismatched cells are more mismatched in orientation tuning. Furthermore, we find that contralateral responses are more direction-selective than ipsilateral responses and are strongly biased to the cardinal directions. The contralateral bias of high spatial frequency tuning was found in both awake and anesthetized recordings. The distinct properties of contralateral cortical responses may reflect the functional segregation of direction-selective, high spatial frequency-preferring neurons in earlier stages of the central visual pathway. Moreover, these results suggest that the development of binocularity and visual acuity may engage distinct circuits in the mouse visual system. SIGNIFICANCE STATEMENT Seeing through two eyes is thought to improve visual acuity by enhancing sensitivity to fine edges. Using calcium imaging of cellular responses in awake mice, we find surprising asymmetries in the spatial processing of eye-specific visual input in binocular primary visual cortex. 
The contralateral visual pathway is tuned to higher spatial frequencies than the ipsilateral pathway. At the highest spatial frequencies, the contralateral pathway strongly prefers to respond to visual stimuli along the cardinal (horizontal and vertical) axes. These results suggest that monocular, and not binocular, mechanisms set the limit of spatial acuity in mice. Furthermore, they suggest that the development of visual acuity and binocularity in mice involves different circuits. PMID:28924011
Y0: An innovative tool for spatial data analysis
NASA Astrophysics Data System (ADS)
Wilson, Jeremy C.
1993-08-01
This paper describes an advanced analysis and visualization tool, called Y0 (pronounced "Why not?!"), that has been developed to directly support the scientific process for earth and space science research. Y0 aids the scientific research process by enabling the user to formulate algorithms and models within an integrated environment, and then interactively explore the solution space with the aid of appropriate visualizations. Y0 has been designed to provide strong support for both quantitative analysis and rich visualization. The user's algorithm or model is defined in terms of algebraic formulas in cells on worksheets, in a similar fashion to spreadsheet programs. Y0 is specifically designed to provide the data types and rich function set necessary for effective analysis and manipulation of remote sensing data. This includes various types of arrays, geometric objects, and objects for representing geographic coordinate system mappings. Visualization of results is tailored to the needs of remote sensing, with straightforward methods of composing, comparing, and animating imagery and graphical information, with reference to geographical coordinate systems. Y0 is based on advanced object-oriented technology. It is implemented in C++ for use in Unix environments, with a user interface based on the X window system. Y0 has been delivered under contract to Unidata, a group which provides data and software support to atmospheric researchers at universities affiliated with UCAR. This paper will explore the key concepts in Y0, describe its utility for remote sensing analysis and visualization, and give a specific example of its application to the problem of measuring glacier flow rates from Landsat imagery.
Unaware Processing of Tools in the Neural System for Object-Directed Action Representation.
Tettamanti, Marco; Conca, Francesca; Falini, Andrea; Perani, Daniela
2017-11-01
The hypothesis that the brain constitutively encodes observed manipulable objects for the actions they afford is still debated. Yet, crucial evidence demonstrating that, even in the absence of perceptual awareness, the mere visual appearance of a manipulable object triggers a visuomotor coding in the action representation system including the premotor cortex, has hitherto not been provided. In this fMRI study, we instantiated reliable unaware visual perception conditions by means of continuous flash suppression, and we tested in 24 healthy human participants (13 females) whether the visuomotor object-directed action representation system that includes left-hemispheric premotor, parietal, and posterior temporal cortices is activated even under subliminal perceptual conditions. We found consistent activation in the target visuomotor cortices, both with and without perceptual awareness, specifically for pictures of manipulable versus non-manipulable objects. By means of a multivariate searchlight analysis, we also found that the brain activation patterns in this visuomotor network enabled the decoding of manipulable versus non-manipulable object picture processing, both with and without awareness. These findings demonstrate the intimate neural coupling between visual perception and motor representation that underlies manipulable object processing: manipulable object stimuli specifically engage the visuomotor object-directed action representation system, in a constitutive manner that is independent from perceptual awareness. This perceptuo-motor coupling endows the brain with an efficient mechanism for monitoring and planning reactions to external stimuli in the absence of awareness. SIGNIFICANCE STATEMENT Our brain constantly encodes the visual information that hits the retina, leading to a stimulus-specific activation of sensory and semantic representations, even for objects that we do not consciously perceive. 
Do these unconscious representations encompass the motor programming of actions that could be accomplished congruently with the objects' functions? In this fMRI study, we instantiated unaware visual perception conditions by dynamically suppressing the visibility of manipulable object pictures with Mondrian masks. Despite escaping conscious perception, manipulable objects activated an object-directed action representation system that includes left-hemispheric premotor, parietal, and posterior temporal cortices. This demonstrates that visuomotor encoding occurs independently of conscious object perception. Copyright © 2017 the authors 0270-6474/17/3710712-13$15.00/0.
Minimum viewing angle for visually guided ground speed control in bumblebees.
Baird, Emily; Kornfeldt, Torill; Dacke, Marie
2010-05-01
To control flight, flying insects extract information from the pattern of visual motion generated during flight, known as optic flow. To regulate their ground speed, insects such as honeybees and Drosophila hold the rate of optic flow in the axial direction (front-to-back) constant. A consequence of this strategy is that its performance varies with the minimum viewing angle (the deviation from the frontal direction of the longitudinal axis of the insect) at which changes in axial optic flow are detected. The greater this angle, the later changes in the rate of optic flow, caused by changes in the density of the environment, will be detected. The aim of the present study is to examine the mechanisms of ground speed control in bumblebees and to identify the extent of the visual range over which optic flow for ground speed control is measured. Bumblebees were trained to fly through an experimental tunnel consisting of parallel vertical walls. Flights were recorded when (1) the distance between the tunnel walls was either 15 or 30 cm, (2) the visual texture on the tunnel walls provided either strong or weak optic flow cues and (3) the distance between the walls changed abruptly halfway along the tunnel's length. The results reveal that bumblebees regulate ground speed using optic flow cues and that changes in the rate of optic flow are detected at a minimum viewing angle of 23-30 deg., with a visual field that extends to approximately 155 deg. By measuring optic flow over a visual field that has a low minimum viewing angle, bumblebees are able to detect and respond to changes in the proximity of the environment well before they are encountered.
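The strategy of holding the rate of axial optic flow constant implies that preferred ground speed scales with the distance to the walls: a wall at lateral distance d, seen by an insect flying at speed v, generates translational optic flow of roughly v/d rad/s, so keeping that rate at a set-point means flying slower in narrower tunnels. The sketch below is a toy model with illustrative numbers, not the study's measurements.

```python
def speed_for_constant_flow(wall_distance_m, flow_setpoint_rad_s):
    # Translational optic flow from a wall at lateral distance d,
    # viewed abeam by an insect flying at speed v, is roughly v / d.
    # Holding the flow at the set-point therefore gives v = setpoint * d.
    return flow_setpoint_rad_s * wall_distance_m

# Doubling the tunnel width (wall distance) doubles the chosen speed.
v_narrow = speed_for_constant_flow(0.075, 10.0)  # 15 cm tunnel, wall at 7.5 cm
v_wide = speed_for_constant_flow(0.15, 10.0)     # 30 cm tunnel, wall at 15 cm
```

This proportionality is exactly the behaviour the tunnel-width manipulation in the study is designed to probe.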
Interaction Junk: User Interaction-Based Evaluation of Visual Analytic Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Endert, Alexander; North, Chris
2012-10-14
With the growing need for visualization to aid users in understanding large, complex datasets, the ability for users to interact with and explore these datasets is critical. As visual analytic systems have advanced to leverage powerful computational models and data analytics capabilities, the modes by which users engage and interact with the information remain limited. Often, users are taxed with directly manipulating parameters of these models through traditional GUIs (e.g., using sliders to directly manipulate the value of a parameter). However, the purpose of user interaction in visual analytic systems is to enable visual data exploration, where users can focus on their task, as opposed to the tool or system. As a result, users can engage freely in data exploration and decision-making, for the purpose of gaining insight. In this position paper, we discuss how evaluating visual analytic systems can be approached through user interaction analysis, where the goal is to minimize the cognitive translation between the visual metaphor and the mode of interaction (i.e., reducing the “interaction junk”). We motivate this concept through a discussion of traditional GUIs used in visual analytics for direct manipulation of model parameters, and the importance of designing interactions that support visual data exploration.
Fast visual prediction and slow optimization of preferred walking speed.
O'Connor, Shawn M; Donelan, J Maxwell
2012-05-01
People prefer walking speeds that minimize energetic cost. This may be accomplished by directly sensing metabolic rate and adapting gait to minimize it, but only slowly due to the compounded effects of sensing delays and iterative convergence. Visual and other sensory information is available more rapidly and could help predict which gait changes reduce energetic cost, but only approximately, because it relies on prior experience and an indirect means to achieve economy. We used virtual reality to manipulate visually presented speed while 10 healthy subjects freely walked on a self-paced treadmill, to test whether the nervous system beneficially combines these two mechanisms. Rather than manipulating the speed of visual flow directly, we coupled it to the walking speed selected by the subject and then manipulated the ratio between these two speeds. We then quantified the dynamics of walking speed adjustments in response to perturbations of the visual speed. For step changes in visual speed, subjects responded with rapid speed adjustments (lasting <2 s) in a direction opposite to the perturbation, consistent with returning the visually presented speed toward their preferred walking speed: when visual speed was suddenly twice (one-half) the walking speed, subjects decreased (increased) their speed. Subjects did not maintain the new speed but instead gradually returned toward the speed preferred before the perturbation (lasting >300 s). The timing and direction of these responses strongly indicate that a rapid predictive process informed by visual feedback helps select preferred speed, perhaps to complement a slower optimization process that seeks to minimize energetic cost.
Grohar: Automated Visualization of Genome-Scale Metabolic Models and Their Pathways.
Moškon, Miha; Zimic, Nikolaj; Mraz, Miha
2018-05-01
Genome-scale metabolic models (GEMs) have become a powerful tool for investigating the entire metabolism of an organism in silico. These models are, however, often extremely hard to reconstruct and difficult to apply to a selected problem. Visualization of a GEM allows us to comprehend the model more easily, to perform graphical analysis, to find and correct faulty relations, to identify the parts of the system with a designated function, etc. Even though several approaches for the automatic visualization of GEMs have been proposed, metabolic maps are still drawn manually, or at least require a large amount of manual curation. We present Grohar, a computational tool for the automatic identification and visualization of GEM (sub)networks and their metabolic fluxes. These (sub)networks can be specified directly, by listing the metabolites of interest, or indirectly, by providing reference metabolic pathways from different sources, such as a KEGG, SBML, or Matlab file. These pathways are identified within the GEM using three different pathway alignment algorithms. Grohar also supports the visualization of model adjustments (e.g., activation or inhibition of metabolic reactions) after perturbations are induced.
A taxonomy of visualization tasks for the analysis of biological pathway data.
Murray, Paul; McGee, Fintan; Forbes, Angus G
2017-02-15
Understanding complicated networks of interactions and chemical components is essential to solving contemporary problems in modern biology, especially in domains such as cancer and systems research. In these domains, biological pathway data are used to represent chains of interactions that occur within a given biological process. Visual representations can help researchers understand, interact with, and reason about these complex pathways in a number of ways. At the same time, these datasets offer unique challenges for visualization, due to their complexity and heterogeneity. Here, we present a taxonomy of tasks that are regularly performed by researchers who work with biological pathway data. The generation of these tasks was done in conjunction with interviews with several domain experts in biology, and the tasks require a finer classification than existing taxonomies provide. We also examine existing visualization techniques that support each task, and we discuss gaps in the existing visualization space revealed by our taxonomy. Our taxonomy is designed to support the development and design of future biological pathway visualization applications. We conclude by suggesting future research directions based on our taxonomy and motivated by the comments received from our domain experts.
Visualizing Volcanic Clouds in the Atmosphere and Their Impact on Air Traffic.
Gunther, Tobias; Schulze, Maik; Friederici, Anke; Theisel, Holger
2016-01-01
Volcanic eruptions are not only hazardous in the direct vicinity of a volcano, but they also affect the climate and air travel over great distances. This article sheds light on the Grímsvötn, Puyehue-Cordón Caulle, and Nabro eruptions of 2011, which occurred in the span of approximately three weeks. The authors study the agreement of the complementary satellite data, reconstruct sulfate aerosol and volcanic ash clouds, visualize endangered flight routes, minimize occlusion in particle trajectory visualizations, and focus on the main pathways of Nabro's sulfate aerosol into the stratosphere. The results were developed for the 2014 IEEE Scientific Visualization Contest, which centers on the fusion of multiple satellite data modalities to reconstruct and assess the movement of volcanic ash and sulfate aerosol emissions. An accompanying video provides animations of the reconstructed ash clouds: https://youtu.be/D9DvJ5AvZAs.
Does constraining memory maintenance reduce visual search efficiency?
Buttaccio, Daniel R; Lange, Nicholas D; Thomas, Rick P; Dougherty, Michael R
2018-03-01
We examine whether constraining memory retrieval processes affects performance in a cued recall visual search task. In the visual search task, participants are first presented with a memory prompt followed by a search array. The memory prompt provides diagnostic information regarding a critical aspect of the target (its colour). We assume that upon the presentation of the memory prompt, participants retrieve and maintain hypotheses (i.e., potential target characteristics) in working memory in order to improve their search efficiency. By constraining retrieval through the manipulation of time pressure (Experiments 1A and 1B) or a concurrent working memory task (Experiments 2A, 2B, and 2C), we directly test the involvement of working memory in visual search. We find some evidence that visual search is less efficient under conditions in which participants were likely to be maintaining fewer hypotheses in working memory (Experiments 1A, 2A, and 2C), suggesting that the retrieval of representations from long-term memory into working memory can improve visual search. However, these results should be interpreted with caution, as the data from two experiments (Experiments 1B and 2B) did not lend support for this conclusion.
Seeing the hand while reaching speeds up on-line responses to a sudden change in target position
Reichenbach, Alexandra; Thielscher, Axel; Peer, Angelika; Bülthoff, Heinrich H; Bresciani, Jean-Pierre
2009-01-01
Goal-directed movements are executed under the permanent supervision of the central nervous system, which continuously processes sensory afferents and triggers on-line corrections if movement accuracy seems to be compromised. For arm reaching movements, visual information about the hand plays an important role in this supervision, notably improving reaching accuracy. Here, we tested whether visual feedback of the hand affects the latency of on-line responses to an external perturbation when reaching for a visual target. Two types of perturbation were used: visual perturbation consisted in changing the spatial location of the target and kinesthetic perturbation in applying a force step to the reaching arm. For both types of perturbation, the hand trajectory and the electromyographic (EMG) activity of shoulder muscles were analysed to assess whether visual feedback of the hand speeds up on-line corrections. Without visual feedback of the hand, on-line responses to visual perturbation exhibited the longest latency. This latency was reduced by about 10% when visual feedback of the hand was provided. On the other hand, the latency of on-line responses to kinesthetic perturbation was independent of the availability of visual feedback of the hand. In a control experiment, we tested the effect of visual feedback of the hand on visual and kinesthetic two-choice reaction times – for which coordinate transformation is not critical. Two-choice reaction times were never facilitated by visual feedback of the hand. Taken together, our results suggest that visual feedback of the hand speeds up on-line corrections when the position of the visual target with respect to the body must be re-computed during movement execution. This facilitation probably results from the possibility to map hand- and target-related information in a common visual reference frame. PMID:19675067
Misperception of exocentric directions in auditory space
Arthur, Joeanna C.; Philbeck, John W.; Sargent, Jesse; Dopkins, Stephen
2008-01-01
Previous studies have demonstrated large errors (over 30°) in visually perceived exocentric directions (the direction between two objects that are both displaced from the observer’s location; e.g., Philbeck et al., in press). Here, we investigated whether a similar pattern occurs in auditory space. Blindfolded participants either attempted to aim a pointer at auditory targets (an exocentric task) or gave a verbal estimate of the egocentric target azimuth. Targets were located at 20° to 160° azimuth in the right hemispace. For comparison, we also collected pointing and verbal judgments for visual targets. We found that exocentric pointing responses exhibited sizeable undershooting errors, for both auditory and visual targets, that tended to become more strongly negative as azimuth increased (up to −19° for visual targets at 160°). Verbal estimates of the auditory and visual target azimuths, however, showed a dramatically different pattern, with relatively small overestimations of azimuths in the rear hemispace. At least some of the differences between verbal and pointing responses appear to be due to the frames of reference underlying the responses; when participants used the pointer to reproduce the egocentric target azimuth rather than the exocentric target direction relative to the pointer, the pattern of pointing errors more closely resembled that seen in verbal reports. These results show that there are similar distortions in perceiving exocentric directions in visual and auditory space. PMID:18555205
Category-Selectivity in Human Visual Cortex Follows Cortical Topology: A Grouped icEEG Study
Conner, Christopher Richard; Whaley, Meagan Lee; Baboyan, Vatche George; Tandon, Nitin
2016-01-01
Neuroimaging studies suggest that category-selective regions in higher-order visual cortex are topologically organized around specific anatomical landmarks: the mid-fusiform sulcus (MFS) in the ventral temporal cortex (VTC) and lateral occipital sulcus (LOS) in the lateral occipital cortex (LOC). To derive precise structure-function maps from direct neural signals, we collected intracranial EEG (icEEG) recordings in a large human cohort (n = 26) undergoing implantation of subdural electrodes. A surface-based approach to grouped icEEG analysis was used to overcome challenges from sparse electrode coverage within subjects and variable cortical anatomy across subjects. The topology of category-selectivity in bilateral VTC and LOC was assessed for five classes of visual stimuli—faces, animate non-face (animals/body-parts), places, tools, and words—using correlational and linear mixed effects analyses. In the LOC, selectivity for living (faces and animate non-face) and non-living (places and tools) classes was arranged in a ventral-to-dorsal axis along the LOS. In the VTC, selectivity for living and non-living stimuli was arranged in a latero-medial axis along the MFS. Written word-selectivity was reliably localized to the intersection of the left MFS and the occipito-temporal sulcus. These findings provide direct electrophysiological evidence for topological information structuring of functional representations within higher-order visual cortex. PMID:27272936
Padula, William V; Subramanian, Prem; Spurling, April; Jenness, Jonathan
2015-01-01
Following a neurologic event such as traumatic brain injury (TBI) or cerebrovascular accident (CVA), or in chronic neurological conditions including Parkinson's disease, multiple sclerosis, and cerebral palsy, a shift in the visual midline (egocenter) can directly affect posture, balance, and spatial orientation. As a consequence, the risk of fall (RoF) and injury increases, imposing a major financial burden on the public health system. The aim was to determine whether there is a statistically significant change in balance with the intervention of yoked prisms to reduce the risk of fall in subjects with neurological impairments. Ambulation of thirty-six subjects was evaluated on a pressure-sensitive mat before and after intervention with yoked prisms. Changes in gait and balance were analyzed in the anterior-posterior (AP) and medial-lateral (ML) axes during ambulation. T-tests for each measure, comparing the difference-of-differences to a zero change at baseline, returned statistically significant reductions in both AP (p < 0.0001; 95% CI: 1.368-2.976) and ML (p = 0.0002; 95% CI: 1.472-4.173) imbalances using specifically directed yoked prisms to correct the visual midline deviation. These findings demonstrate that yoked prisms have the potential to provide a cost-effective means to restore the visual midline, thereby improving balance and reducing RoF and subsequent injury.
Aurally aided visual search performance in a dynamic environment
NASA Astrophysics Data System (ADS)
McIntire, John P.; Havig, Paul R.; Watamaniuk, Scott N. J.; Gilkey, Robert H.
2008-04-01
Previous research has repeatedly shown that people can find a visual target significantly faster if spatial (3D) auditory displays direct attention to the corresponding spatial location. However, previous research has only examined searches for static (non-moving) targets in static visual environments. Since motion has been shown to affect visual acuity, auditory acuity, and visual search performance, it is important to characterize aurally-aided search performance in environments that contain dynamic (moving) stimuli. In the present study, visual search performance in both static and dynamic environments is investigated with and without 3D auditory cues. Eight participants searched for a single visual target hidden among 15 distracting stimuli. In the baseline audio condition, no auditory cues were provided. In the 3D audio condition, a virtual 3D sound cue originated from the same spatial location as the target. In the static search condition, the target and distractors did not move. In the dynamic search condition, all stimuli moved on various trajectories at 10 deg/s. The results showed a clear benefit of 3D audio that was present in both static and dynamic environments, suggesting that spatial auditory displays continue to be an attractive option for a variety of aircraft, motor vehicle, and command & control applications.
Doerschner, K.; Boyaci, H.; Maloney, L. T.
2007-01-01
We investigated limits on the human visual system’s ability to discount directional variation in complex light fields when estimating Lambertian surface color. Directional variation in the light field was represented in the frequency domain using spherical harmonics. The bidirectional reflectance distribution function of a Lambertian surface acts as a low-pass filter on directional variation in the light field. Consequently, the visual system needs to discount only the low-pass component of the incident light corresponding to the first nine terms of a spherical harmonics expansion (Basri & Jacobs, 2001; Ramamoorthi & Hanrahan, 2001) to accurately estimate surface color. We test experimentally whether the visual system discounts directional variation in the light field up to this physical limit. Our results are consistent with the claim that the visual system can compensate for all of the complexity in the light field that affects the appearance of Lambertian surfaces. PMID:18053846
Ravens, Corvus corax, follow gaze direction of humans around obstacles.
Bugnyar, Thomas; Stöwe, Mareike; Heinrich, Bernd
2004-01-01
The ability to follow gaze (i.e. head and eye direction) has recently been shown for social mammals, particularly primates. In most studies, individuals could use gaze direction as a behavioural cue without understanding that the view of others may be different from their own. Here, we show that hand-raised ravens not only visually co-orient with the look-ups of a human experimenter but also reposition themselves to follow the experimenter's gaze around a visual barrier. Birds were capable of visual co-orientation already as fledglings but did not consistently track gaze direction behind obstacles until six months of age. These results raise the possibility that sub-adult and adult ravens can project a line of sight for the other person into the distance. To what extent ravens may attribute mental significance to the visual behaviour of others is discussed. PMID:15306330
Decoding ensemble activity from neurophysiological recordings in the temporal cortex.
Kreiman, Gabriel
2011-01-01
We study subjects with pharmacologically intractable epilepsy who undergo semi-chronic implantation of electrodes for clinical purposes. We record physiological activity from tens to more than one hundred electrodes implanted in different parts of neocortex. These recordings provide higher spatial and temporal resolution than non-invasive measures of human brain activity. Here we discuss our efforts to develop hardware and algorithms to interact with the human brain by decoding ensemble activity in single trials. We focus our discussion on decoding visual information during a variety of visual object recognition tasks but the same technologies and algorithms can also be directly applied to other cognitive phenomena.
Luft, Joseph R.; Wolfley, Jennifer R.; Snell, Edward H.
2011-01-01
Observations of crystallization experiments are classified as specific outcomes and integrated through a phase diagram to visualize solubility and thereby direct subsequent experiments. Specific examples are taken from our high-throughput crystallization laboratory, which provided a broad scope of data from 20 million crystallization experiments on 12,500 different biological macromolecules. The methods and rationale are broadly and generally applicable in any crystallization laboratory. Through a combination of incomplete factorial sampling of crystallization cocktails, standard outcome classifications, visualization of outcomes as they relate chemically, and application of a simple phase-diagram approach, we demonstrate how to logically design subsequent crystallization experiments. PMID:21643490
Discussion on the 3D visualizing of 1:200 000 geological map
NASA Astrophysics Data System (ADS)
Wang, Xiaopeng
2018-01-01
Using terrain data from the United States National Aeronautics and Space Administration Shuttle Radar Topography Mission (SRTM) as a digital elevation model (DEM), overlaying a scanned 1:200 000 scale geological map, and programming with Microsoft Direct3D in the C# language, the author realized a three-dimensional visualization of the standard-division geological map. Users can inspect the regional geology from arbitrary angles, with rotation and roaming, and can examine the composite stratigraphic column, map sections, and legend at any moment. This provides an intuitive analysis tool for geological practitioners to perform structural analysis with the assistance of landform, to plan field exploration routes, etc.
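The core of such a DEM-draped visualization is turning each elevation grid cell into textured triangles. The sketch below is a minimal Python/NumPy illustration of that step (the paper's implementation is in C# with Direct3D; all function and variable names here are illustrative assumptions): it builds vertex positions from SRTM heights and texture coordinates for draping the scanned map.

```python
import numpy as np

def dem_to_mesh(dem, cell_size=1.0):
    """Convert a DEM height grid into vertex positions, UV texture
    coordinates, and a triangle index list (illustrative sketch)."""
    rows, cols = dem.shape
    ys, xs = np.mgrid[0:rows, 0:cols]
    # World-space vertices: x/y from grid indices scaled by cell size, z from elevation.
    verts = np.stack([xs * cell_size, ys * cell_size, dem], axis=-1).reshape(-1, 3)
    # UVs in [0, 1] map each vertex onto the scanned geological-map image.
    uvs = np.stack([xs / (cols - 1), ys / (rows - 1)], axis=-1).reshape(-1, 2)
    # Two triangles per grid cell.
    tris = []
    for r in range(rows - 1):
        for c in range(cols - 1):
            i = r * cols + c
            tris.append([i, i + 1, i + cols])
            tris.append([i + 1, i + cols + 1, i + cols])
    return verts, uvs, np.array(tris)

dem = np.array([[0.0, 1.0], [2.0, 3.0]])  # tiny 2x2 height grid
verts, uvs, tris = dem_to_mesh(dem, cell_size=30.0)
print(verts.shape, uvs.shape, tris.shape)  # (4, 3) (4, 2) (2, 3)
```

In a Direct3D-style renderer these arrays would be uploaded as vertex and index buffers, with the scanned map bound as the texture, after which arbitrary-angle viewing, rotation, and roaming are camera transforms.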
Evidence for light perception in a bioluminescent organ
Tong, Deyan; Rozas, Natalia S.; Oakley, Todd H.; Mitchell, Jane; Colley, Nansi J.; McFall-Ngai, Margaret J.
2009-01-01
Here we show that bioluminescent organs of the squid Euprymna scolopes possess the molecular, biochemical, and physiological capability for light detection. Transcriptome analyses revealed expression of genes encoding key visual transduction proteins in light-organ tissues, including the same isoform of opsin that occurs in the retina. Electroretinograms demonstrated that the organ responds physiologically to light, and immunocytochemistry experiments localized multiple proteins of visual transduction cascades to tissues housing light-producing bacterial symbionts. These data provide evidence that the light-organ tissues harboring the symbionts serve as extraocular photoreceptors, with the potential to perceive directly the bioluminescence produced by their bacterial partners. PMID:19509343
Online Analysis Enhances Use of NASA Earth Science Data
NASA Technical Reports Server (NTRS)
Acker, James G.; Leptoukh, Gregory
2007-01-01
Giovanni, the Goddard Earth Sciences Data and Information Services Center (GES DISC) Interactive Online Visualization and Analysis Infrastructure, has provided researchers with advanced capabilities to perform data exploration and analysis with observational data from NASA Earth observation satellites. In the past 5-10 years, examining geophysical events and processes with remote-sensing data required a multistep process of data discovery, data acquisition, data management, and ultimately data analysis. Giovanni accelerates this process by enabling basic visualization and analysis directly on the World Wide Web. In the last two years, Giovanni has added new data acquisition functions and expanded analysis options to increase its usefulness to the Earth science research community.
Teramoto, Wataru; Watanabe, Hiroshi; Umemura, Hiroyuki
2008-01-01
The perceived temporal order of external successive events does not always follow their physical temporal order. We examined the contribution of self-motion mechanisms in the perception of temporal order in the auditory modality. We measured perceptual biases in the judgment of the temporal order of two short sounds presented successively, while participants experienced visually induced self-motion (yaw-axis circular vection) elicited by viewing long-lasting large-field visual motion. In experiment 1, a pair of white-noise patterns was presented to participants at various stimulus-onset asynchronies through headphones, while they experienced visually induced self-motion. Perceived temporal order of auditory events was modulated by the direction of the visual motion (or self-motion). Specifically, the sound presented to the ear in the direction opposite to the visual motion (ie heading direction) was perceived prior to the sound presented to the ear in the same direction. Experiments 2A and 2B were designed to reduce the contributions of decisional and/or response processes. In experiment 2A, the directional cueing of the background (left or right) and the response dimension (high pitch or low pitch) were not spatially associated. In experiment 2B, participants were additionally asked to report which of the two sounds was perceived 'second'. Almost the same results as in experiment 1 were observed, suggesting that the change in temporal order of auditory events during large-field visual motion reflects a change in perceptual processing. Experiment 3 showed that the biases in the temporal-order judgments of auditory events were caused by concurrent actual self-motion with a rotatory chair. In experiment 4, using a small display, we showed that 'pure' long exposure to visual motion without the sensation of self-motion was not responsible for this phenomenon. 
These results are consistent with previous studies reporting a change in the perceived temporal order of visual or tactile events depending on the direction of self-motion. Hence, large-field induced (ie optic flow) self-motion can affect the temporal order of successive external events across various modalities.
Choice reaction time to visual motion during prolonged rotary motion in airline pilots
NASA Technical Reports Server (NTRS)
Stewart, J. D.; Clark, B.
1975-01-01
Thirteen airline pilots were studied to determine the effect of preceding rotary accelerations on the choice reaction time to the horizontal acceleration of a vertical line on a cathode-ray tube. On each trial, one of three levels of rotary and visual acceleration was presented, with the rotary stimulus preceding the visual by one of seven periods. The two accelerations were always equal and were presented in the same or opposite directions. The reaction time was found to increase with increases in the time the rotary acceleration preceded the visual acceleration, and to decrease with increased levels of visual and rotary acceleration. The reaction time was found to be shorter when the accelerations were in the same direction than when they were in opposite directions. These findings appear to be a special case of a general effect that the authors have termed 'gyrovisual modulation'.
NASA Astrophysics Data System (ADS)
Fleming, Christine P.; Quan, Kara J.; Rollins, Andrew M.
2010-07-01
Radiofrequency ablation (RFA) is the standard of care to cure many cardiac arrhythmias. Epicardial ablation for the treatment of ventricular tachycardia has limited success rates, due in part to epicardial fat that prevents proper RF energy delivery, inadequate contact of the ablation catheter with tissue, and an increased likelihood of complications when energy is delivered in close proximity to coronary vessels. A method to directly visualize the epicardial surface during RFA could potentially provide feedback to reduce complications and titrate RF energy dose by detecting critical structures, assessing probe contact, and confirming energy delivery by visualizing lesion formation. Currently, there is no technology available for direct visualization of the heart surface during epicardial RFA therapy. We demonstrate that optical coherence tomography (OCT) imaging has the potential to fill this unmet need. Spectral domain OCT at 1310 nm is employed to image the epicardial surface of freshly excised swine hearts using a microscope-integrated bench-top scanner and a forward imaging catheter probe. OCT image features are observed that clearly distinguish untreated myocardium, ablation lesions, epicardial fat, and coronary vessels, and assess tissue contact with catheter-based imaging. These results support the potential for real-time guidance of epicardial RFA therapy using OCT imaging.
Looking for an accident: glider pilots' visual management and potentially dangerous final turns.
Jarvis, Steve; Harris, Don
2007-06-01
Accidents caused by spinning from low turns continue to kill glider pilots despite the introduction of specific exercises aimed at increasing pilot awareness and recognition of this issue. In-cockpit video cameras were used to analyze flying accuracy and log the areas of visual interest of 36 qualified glider pilots performing final turns in a training glider. Pilots were found to divide their attention between four areas of interest: the view directly ahead; the landing area (right); the airspeed indicator; and an area between the direct ahead view and the landing area. The mean fixation rate was 85 shifts per minute. Significant correlations were found between over-use of rudder and a lack of attention to the view ahead, as well as between the overall fixation rate and poorer coordination in the turn. The results provide some evidence that a relationship exists between pilots' visual management and making turns in a potentially dangerous manner. Pilots who monitor the view ahead for reasonable periods during the final turn while not allowing their scan to become over-busy are those who are most likely to prevent a potential spin.
Influence of Visual Prism Adaptation on Auditory Space Representation.
Pochopien, Klaudia; Fahle, Manfred
2017-01-01
Prisms shifting the visual input sideways produce a mismatch between the visual and felt positions of one's hand. Prism adaptation eliminates this mismatch, realigning hand proprioception with visual input. Whether this realignment concerns exclusively the visuo-(hand)motor system or generalizes to acoustic inputs is controversial. We here show that there is indeed a slight influence of visual adaptation on the perceived direction of acoustic sources. However, this shift in perceived auditory direction can be fully explained by a subconscious head rotation during prism exposure and by changes in arm proprioception. Hence, prism adaptation generalizes only indirectly to auditory space perception.
A mobile phone system to find crosswalks for visually impaired pedestrians
Shen, Huiying; Chan, Kee-Yip; Coughlan, James; Brabyn, John
2010-01-01
Urban intersections are the most dangerous parts of a blind or visually impaired pedestrian’s travel. A prerequisite for safely crossing an intersection is entering the crosswalk in the right direction and avoiding the danger of straying outside the crosswalk. This paper presents a proof of concept system that seeks to provide such alignment information. The system consists of a standard mobile phone with built-in camera that uses computer vision algorithms to detect any crosswalk visible in the camera’s field of view; audio feedback from the phone then helps the user align him/herself to it. Our prototype implementation on a Nokia mobile phone runs in about one second per image, and is intended for eventual use in a mobile phone system that will aid blind and visually impaired pedestrians in navigating traffic intersections. PMID:20411035
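The intuition behind detecting a zebra crosswalk can be illustrated in one dimension: viewed from the sidewalk, the painted stripes produce a regular alternation of bright and dark bands along an image scanline. The sketch below is a hedged illustration of that idea only, not the authors' published detector (which runs full 2-D computer-vision algorithms on camera frames); the threshold values are illustrative assumptions.

```python
import numpy as np

def count_stripe_transitions(scanline, threshold=128):
    """Count light/dark transitions along an image scanline.
    A zebra crosswalk yields a regular run of such transitions."""
    binary = (np.asarray(scanline) > threshold).astype(int)
    return int(np.abs(np.diff(binary)).sum())

def looks_like_crosswalk(scanline, min_transitions=6):
    """Heuristic: enough alternations along the line suggests stripes."""
    return count_stripe_transitions(scanline) >= min_transitions

# Synthetic scanline: four white stripes (value 220) on dark asphalt (value 30).
stripe = [30] * 10 + ([220] * 10 + [30] * 10) * 4
print(looks_like_crosswalk(stripe))  # True
```

A real detector must also handle perspective distortion, shadows, and worn paint, which is why the actual system processes full images rather than single scanlines; the audio-feedback stage then maps the detected crosswalk's position in the field of view to alignment cues for the user.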
Functionally distinct amygdala subregions identified using DTI and high-resolution fMRI
Balderston, Nicholas L.; Schultz, Douglas H.; Hopkins, Lauren
2015-01-01
Although the amygdala is often directly linked with fear and emotion, amygdala neurons are activated by a wide variety of emotional and non-emotional stimuli. Different subregions within the amygdala may be engaged preferentially by different aspects of emotional and non-emotional tasks. To test this hypothesis, we measured and compared the effects of novelty and fear on amygdala activity. We used high-resolution blood oxygenation level-dependent (BOLD) imaging and streamline tractography to subdivide the amygdala into three distinct functional subunits. We identified a laterobasal subregion connected with the visual cortex that responds generally to visual stimuli, a non-projecting region that responds to salient visual stimuli, and a centromedial subregion connected with the diencephalon that responds only when a visual stimulus predicts an aversive outcome. We provide anatomical and functional support for a model of amygdala function where information enters through the laterobasal subregion, is processed by intrinsic circuits in the interspersed tissue, and is then passed to the centromedial subregion, where activation leads to behavioral output. PMID:25969533
PathFinder: reconstruction and dynamic visualization of metabolic pathways.
Goesmann, Alexander; Haubrock, Martin; Meyer, Folker; Kalinowski, Jörn; Giegerich, Robert
2002-01-01
Beyond methods for gene-wise annotation and analysis of sequenced genomes, new automated methods for functional analysis at a higher level are needed. The identification of realized metabolic pathways provides valuable information on gene expression and regulation. Detection of incomplete pathways helps to improve a constantly evolving genome annotation or to discover alternative biochemical pathways. To utilize automated genome analysis at the level of metabolic pathways, new methods for the dynamic representation and visualization of pathways are needed. PathFinder is a tool for the dynamic visualization of metabolic pathways based on annotation data. Pathways are represented as directed acyclic graphs; graph layout algorithms accomplish the dynamic drawing and visualization of the metabolic maps. A more detailed analysis of the input data at the level of biochemical pathways helps to identify genes and detect improper parts of annotations. As a Relational Database Management System (RDBMS)-based internet application, PathFinder reads a list of EC numbers or a given annotation in EMBL or GenBank format and dynamically generates pathway graphs.
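Representing a pathway as a directed acyclic graph is what makes the layered, top-to-bottom drawing tractable. A standard first step is longest-path layering: sources go on layer 0 and every other node sits one layer below its deepest predecessor. The sketch below shows that step on a toy pathway fragment (an illustrative assumption for exposition; it is not PathFinder's actual layout code, whose algorithms the abstract does not detail).

```python
from functools import lru_cache

def layer_dag(edges):
    """Assign each node of a DAG to a layer via longest-path layering,
    a common first step in drawing metabolic maps top-to-bottom."""
    nodes = {n for e in edges for n in e}
    preds = {n: [] for n in nodes}
    for src, dst in edges:
        preds[dst].append(src)

    @lru_cache(maxsize=None)
    def layer(n):
        # Source nodes sit on layer 0; others one below their deepest predecessor.
        return 0 if not preds[n] else 1 + max(layer(p) for p in preds[n])

    return {n: layer(n) for n in nodes}

# Toy fragment with a shortcut edge, to show the longest path winning:
edges = [("glucose", "G6P"), ("G6P", "F6P"),
         ("F6P", "F16BP"), ("glucose", "F6P")]
print(layer_dag(edges))  # F6P lands on layer 2, below G6P, despite the shortcut
```

Once layers are fixed, a layout engine orders nodes within each layer to minimize edge crossings and routes the edges, yielding the dynamically drawn metabolic map.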
Plugin free remote visualization in the browser
NASA Astrophysics Data System (ADS)
Tamm, Georg; Slusallek, Philipp
2015-01-01
Today, users access information and rich media from anywhere using the web browser on their desktop computers, tablets or smartphones. But the web evolves beyond media delivery. Interactive graphics applications like visualization or gaming become feasible as browsers advance in the functionality they provide. However, to deliver large-scale visualization to thin clients like mobile devices, a dedicated server component is necessary. Ideally, the client runs directly within the browser the user is accustomed to, requiring no installation of a plugin or native application. In this paper, we present the state-of-the-art of technologies which enable plugin free remote rendering in the browser. Further, we describe a remote visualization system unifying these technologies. The system transfers rendering results to the client as images or as a video stream. We utilize the upcoming World Wide Web Consortium (W3C) conform Web Real-Time Communication (WebRTC) standard, and the Native Client (NaCl) technology built into Chrome, to deliver video with low latency.
Visual accumulation tube for size analysis of sands
Colby, B.C.; Christensen, R.P.
1956-01-01
The visual-accumulation-tube method was developed primarily for making size analyses of the sand fractions of suspended-sediment and bed-material samples. Because the fundamental property governing the motion of a sediment particle in a fluid is believed to be its fall velocity, the analysis is designed to determine the fall-velocity-frequency distribution of the individual particles of the sample. The analysis is based on a stratified sedimentation system in which the sample is introduced at the top of a transparent settling tube containing distilled water. The procedure involves the direct visual tracing of the height of sediment accumulation in a contracted section at the bottom of the tube. A pen records the height on a moving chart. The method is simple and fast, provides a continuous and permanent record, gives highly reproducible results, and accurately determines the fall-velocity characteristics of the sample. The apparatus, procedure, results, and accuracy of the visual-accumulation-tube method for determining the sedimentation-size distribution of sands are presented in this paper.
Remote visual analysis of large turbulence databases at multiple scales
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pulido, Jesus; Livescu, Daniel; Kanov, Kalin
2018-06-15
The remote analysis and visualization of raw large turbulence datasets is challenging. Current accurate direct numerical simulations (DNS) of turbulent flows generate datasets with billions of points per time-step and several thousand time-steps per simulation. Until recently, the analysis and visualization of such datasets was restricted to scientists with access to large supercomputers. The public Johns Hopkins Turbulence database simplifies access to multi-terabyte turbulence datasets and facilitates the computation of statistics and extraction of features through the use of commodity hardware. In this paper, we present a framework designed around wavelet-based compression for high-speed visualization of large datasets and methods supporting multi-resolution analysis of turbulence. By integrating common technologies, this framework enables remote access to tools available on supercomputers and over 230 terabytes of DNS data over the Web. Finally, the database toolset is expanded by providing access to exploratory data analysis tools, such as wavelet decomposition capabilities and coherent feature extraction.
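The wavelet-compression idea at the heart of such a framework can be illustrated with a one-level Haar transform: transform the signal, discard small detail coefficients, and reconstruct. This is a toy sketch of the general technique under simplifying assumptions (1-D signal, single decomposition level), not the framework's production codec.

```python
import numpy as np

def haar_1d(x):
    """One level of the orthonormal Haar wavelet transform."""
    x = np.asarray(x, dtype=float)
    avg = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # coarse approximation
    det = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail coefficients
    return avg, det

def inverse_haar_1d(avg, det):
    """Exact inverse of haar_1d."""
    x = np.empty(2 * len(avg))
    x[0::2] = (avg + det) / np.sqrt(2.0)
    x[1::2] = (avg - det) / np.sqrt(2.0)
    return x

def compress(x, keep=0.5):
    """Zero out the smallest detail coefficients, keeping the given
    fraction: a toy version of wavelet-thresholding compression."""
    avg, det = haar_1d(x)
    cutoff = np.quantile(np.abs(det), 1.0 - keep)
    det = np.where(np.abs(det) >= cutoff, det, 0.0)
    return inverse_haar_1d(avg, det)

signal = np.array([4.0, 4.1, 4.0, 3.9, 9.0, 1.0, 4.0, 4.05])
recon = compress(signal, keep=0.25)
```

Setting keep=1.0 retains every coefficient and reconstructs the signal exactly; smaller values smooth low-amplitude regions while preserving sharp features such as the 9.0/1.0 jump above, which is what makes thresholded wavelets attractive for multi-resolution visualization of turbulence fields.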
Visual cues and perceived reachability.
Gabbard, Carl; Ammar, Diala
2005-12-01
A rather consistent finding in studies of perceived (imagined) compared to actual movement in a reaching paradigm is the tendency to overestimate at midline. Explanations of such behavior have focused primarily on perceptions of postural constraints and the notion that individuals calibrate reachability in reference to multiple degrees of freedom, also known as the whole-body explanation. The present study examined the role of visual information in the form of binocular and monocular cues in perceived reachability. Right-handed participants judged the reachability of visual targets at midline with both eyes open, with the dominant eye occluded, and with the non-dominant eye occluded. Results indicated that participants were relatively accurate, with no significant differences among conditions in total error. Analysis of the direction of error (mean bias) revealed effective accuracy across conditions, with only a marginal distinction between monocular and binocular conditions. Therefore, within the task conditions of this experiment, it appears that binocular and monocular cues provide sufficient visual information for effective judgments of perceived reach at midline.
Modular Representation of Luminance Polarity In the Superficial Layers Of Primary Visual Cortex
Smith, Gordon B.; Whitney, David E.; Fitzpatrick, David
2016-01-01
The spatial arrangement of luminance increments (ON) and decrements (OFF) falling on the retina provides a wealth of information used by central visual pathways to construct coherent representations of visual scenes. But how the polarity of luminance change is represented in the activity of cortical circuits remains unclear. Using wide-field epifluorescence and two-photon imaging, we demonstrate a robust modular representation of luminance polarity (ON or OFF) in the superficial layers of ferret primary visual cortex. Polarity-specific domains are found with both uniform changes in luminance and single light/dark edges, and include neurons selective for orientation and direction of motion. The integration of orientation and polarity preference is evident in the selectivity and discrimination capabilities of most layer 2/3 neurons. We conclude that polarity selectivity is an integral feature of layer 2/3 neurons, ensuring that the distinction between light and dark stimuli is available for further processing in downstream extrastriate areas. PMID:26590348
What visual information is used for stereoscopic depth displacement discrimination?
Nefs, Harold T; Harris, Julie M
2010-01-01
There are two ways to detect a displacement in stereoscopic depth, namely by monitoring the change in disparity over time (CDOT) or by monitoring the interocular velocity difference (IOVD). Though previous studies have attempted to understand which cue is most significant for the visual system, none has designed stimuli that provide a comparison in terms of relative efficiency between them. Here we used two-frame motion and random-dot noise to deliver equivalent strengths of CDOT and IOVD information to the visual system. Using three kinds of random-dot stimuli, we were able to isolate CDOT or IOVD or deliver both simultaneously. The proportion of dots delivering CDOT or IOVD signals could be varied, and we defined the discrimination threshold as the proportion needed to detect the direction of displacement (towards or away). Thresholds were similar for stimuli containing CDOT only, and containing both CDOT and IOVD, but only one participant was able to consistently perceive the displacement for stimuli containing only IOVD. We also investigated the effect of disparity pedestals on discrimination. Performance was best when the displacement crossed the reference plane, but was not significantly different for stimuli containing CDOT only and those containing both CDOT and IOVD. When stimuli are specifically designed to provide equivalent two-frame motion or disparity-change, few participants can reliably detect displacement when IOVD is the only cue. This challenges the notion that IOVD is involved in the discrimination of direction of displacement in two-frame motion displays.
Effects of Visual Information on Wind-Evoked Escape Behavior of the Cricket, Gryllus bimaculatus.
Kanou, Masamichi; Matsuyama, Akane; Takuwa, Hiroyuki
2014-09-01
We investigated the effects of visual information on wind-evoked escape behavior in the cricket, Gryllus bimaculatus. Most agitated crickets were found to retreat into a shelter made of cardboard installed in the test arena within a short time. As this behavior was thought to be a type of escape, we examined how a visual image of a shelter affected wind-evoked escape behavior. Irrespective of the brightness of the visual background (black or white) or the absence or presence of a shelter, escape jumps were oriented almost 180° opposite to the source of the air puff stimulus. Therefore, the direction of wind-evoked escape depended solely on the direction of the stimulus air puff. In contrast, the turning direction of the crickets during the escape was affected by the position of the visual image of the shelter. During the wind-evoked escape jump, most crickets turned in the direction in which a shelter was presented. This behavior is presumably necessary for crickets to retreat into a shelter within a short time after their escape jump.
A Rapid Subcortical Amygdala Route for Faces Irrespective of Spatial Frequency and Emotion.
McFadyen, Jessica; Mermillod, Martial; Mattingley, Jason B; Halász, Veronika; Garrido, Marta I
2017-04-05
There is significant controversy over the existence and function of a direct subcortical visual pathway to the amygdala. It is thought that this pathway rapidly transmits low spatial frequency information to the amygdala independently of the cortex, and yet the directionality of this function has never been determined. We used magnetoencephalography to measure neural activity while human participants discriminated the gender of neutral and fearful faces filtered for low or high spatial frequencies. We applied dynamic causal modeling to demonstrate that the most likely underlying neural network consisted of a pulvinar-amygdala connection that was uninfluenced by spatial frequency or emotion, and a cortical-amygdala connection that conveyed high spatial frequencies. Crucially, data-driven neural simulations revealed a clear temporal advantage of the subcortical connection over the cortical connection in influencing amygdala activity. Thus, our findings support the existence of a rapid subcortical pathway that is nonselective in terms of the spatial frequency or emotional content of faces. We propose that that the "coarseness" of the subcortical route may be better reframed as "generalized." SIGNIFICANCE STATEMENT The human amygdala coordinates how we respond to biologically relevant stimuli, such as threat or reward. It has been postulated that the amygdala first receives visual input via a rapid subcortical route that conveys "coarse" information, namely, low spatial frequencies. For the first time, the present paper provides direction-specific evidence from computational modeling that the subcortical route plays a generalized role in visual processing by rapidly transmitting raw, unfiltered information directly to the amygdala. This calls into question a widely held assumption across human and animal research that fear responses are produced faster by low spatial frequencies. 
Our proposed mechanism suggests that organisms quickly generate fear responses to a wide range of visual properties, with important implications for future research on anxiety-prevention strategies. Copyright © 2017 the authors 0270-6474/17/373864-11$15.00/0.
Preisig, Basil C; Eggenberger, Noëmi; Zito, Giuseppe; Vanbellingen, Tim; Schumacher, Rahel; Hopfner, Simone; Nyffeler, Thomas; Gutbrod, Klemens; Annoni, Jean-Marie; Bohlhalter, Stephan; Müri, René M
2015-03-01
Co-speech gestures are part of nonverbal communication during conversations. They either support the verbal message or provide the interlocutor with additional information. Furthermore, as nonverbal cues they support the cooperative process of turn taking. In the present study, we investigated the influence of co-speech gestures on the perception of dyadic dialogue in aphasic patients. In particular, we analysed the impact of co-speech gestures on gaze direction (towards speaker or listener) and fixation of body parts. We hypothesized that aphasic patients, who are restricted in verbal comprehension, adapt their visual exploration strategies. Sixteen aphasic patients and 23 healthy control subjects participated in the study. Visual exploration behaviour was measured by means of a contact-free infrared eye-tracker while subjects were watching videos depicting spontaneous dialogues between two individuals. Cumulative fixation duration and mean fixation duration were calculated for the factors co-speech gesture (present and absent), gaze direction (to the speaker or to the listener), and region of interest (ROI), including hands, face, and body. Both aphasic patients and healthy controls mainly fixated the speaker's face. We found a significant co-speech gesture × ROI interaction, indicating that the presence of a co-speech gesture encouraged subjects to look at the speaker. Further, there was a significant gaze direction × ROI × group interaction revealing that aphasic patients showed reduced cumulative fixation duration on the speaker's face compared to healthy controls. Co-speech gestures guide the observer's attention towards the speaker, the source of semantic input. It is discussed whether an underlying semantic processing deficit or a deficit to integrate audio-visual information may cause aphasic patients to explore the speaker's face less. Copyright © 2014 Elsevier Ltd. All rights reserved.
Akhtar, Mehmooda; Ali, Zulfiqar; Hassan, Nelofar; Mehdi, Saqib; Wani, Gh Mohammad; Mir, Aabid Hussain
2017-01-01
Proper positioning of the head and neck is important for optimal laryngeal visualization. Traditionally, the sniffing position (SP) is recommended to provide superior glottic visualization during direct laryngoscopy, enhancing the ease of intubation. However, various studies in the last decade have challenged this belief, questioning the need for the sniffing position during intubation. We conducted a prospective study comparing the sniffing head position with simple head extension with respect to laryngoscopic view and intubation difficulty during direct laryngoscopy. Five hundred patients were included in this study and randomly assigned to SP or simple head extension. In the sniffing group, an incompressible head ring was placed under the head to raise its height by 7 cm from the neutral plane, followed by maximal extension of the head. In the simple extension group, no headrest was placed under the head; however, maximal head extension was applied at the time of laryngoscopy. Various factors such as ability to mask ventilate, laryngoscopic visualization, intubation difficulty, and posture of the anesthesiologist during laryngoscopy and tracheal intubation were noted. The incidence of difficult laryngoscopy (Cormack Grade III and IV) and the Intubation Difficulty Scale (IDS) score were compared between the two groups. There was no significant difference between the two groups in Cormack grades. The IDS score differed significantly between the sniffing group and the simple extension group (P = 0.000), with increased difficulty during intubation in the simple head extension group. Patients with simple head extension needed more lifting force, increased use of external laryngeal manipulation, and increased use of alternate techniques during intubation when compared to SP. We conclude that, compared to the simple head extension position, the SP should be used as the standard head position for intubation attempts under general anesthesia.
A framework for interactive visualization of digital medical images.
Koehring, Andrew; Foo, Jung Leng; Miyano, Go; Lobe, Thom; Winer, Eliot
2008-10-01
The visualization of medical images obtained from scanning techniques such as computed tomography and magnetic resonance imaging is a well-researched field. However, advanced tools and methods to manipulate these data for surgical planning and other tasks have not seen widespread use among medical professionals. Radiologists have begun using more advanced visualization packages on desktop computer systems, but most physicians continue to work with basic two-dimensional grayscale images or do not work directly with the data at all. In addition, new display technologies that are in use in other fields have yet to be fully applied in medicine. It is our estimation that usability is the key aspect keeping this new technology from being more widely used by the medical community at large. Therefore, we have developed a software and hardware framework that not only makes use of advanced visualization techniques, but also features powerful, yet simple-to-use, interfaces. A virtual reality system was created to display volume-rendered medical models in three dimensions. It was designed to run in many configurations, from a large cluster of machines powering a multiwalled display down to a single desktop computer. An augmented reality system was also created for, literally, hands-on interaction when viewing models of medical data. Last, a desktop application was designed to provide a simple visualization tool, which can be run on nearly any computer at a user's disposal. This research is directed toward improving the capabilities of medical professionals in the tasks of preoperative planning, surgical training, diagnostic assistance, and patient education.
A novel examination of exposure patterns and posttraumatic stress after a university mass murder.
Liu, Sabrina R; Kia-Keating, Maryam
2018-03-05
Occurring at an alarming rate in the United States, mass violence has been linked to posttraumatic stress symptoms (PTSS) in both direct victims and community members who are indirectly exposed. Identifying what distinct exposure patterns exist and their relation to later PTSS has important clinical implications. The present study determined classes of exposure to an event of mass violence, and if PTSS differed across classes. First- and second-year college students (N = 1,189) participated in a confidential online survey following a mass murder at their university, which assessed event exposure and PTSS 3 months later. Latent class analysis (LCA) was used to empirically determine distinct classes of exposure patterns and links between class membership and PTSS. The final model yielded 4 classes: minimal exposure (55.5% of sample), auditory exposure (29.4% of sample), visual exposure (10% of sample), and interpersonal exposure (5% of sample). More severe direct exposure (i.e., the visual exposure class) was associated with significantly higher levels of PTSS than the auditory exposure or minimal exposure classes, as was the interpersonal exposure class. There were no significant differences in PTSS between the auditory exposure and minimal exposure classes or the visual exposure and interpersonal exposure classes. Results point to the differential impact of exposure categories, and provide empirical evidence for distinguishing among auditory, visual, and interpersonal exposures to events of mass violence on college campuses. Clinical implications suggest that visual and interpersonal exposure may warrant targeted efforts following mass violence. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Pasqualotto, Achille; Esenkaya, Tayfun
2016-01-01
Visual-to-auditory sensory substitution is used to convey visual information through audition, and it was initially created to compensate for blindness; it consists of software converting the visual images captured by a video-camera into the equivalent auditory images, or “soundscapes”. Here, it was used by blindfolded sighted participants to learn the spatial position of simple shapes depicted in images arranged on the floor. Very few studies have used sensory substitution to investigate spatial representation, while it has been widely used to investigate object recognition. Additionally, with sensory substitution we could study the performance of participants actively exploring the environment through audition, rather than passively localizing sound sources. Blindfolded participants egocentrically learnt the position of six images by using sensory substitution and then a judgment of relative direction task (JRD) was used to determine how this scene was represented. This task consists of imagining being in a given location, oriented in a given direction, and pointing towards the required image. Before performing the JRD task, participants explored a map that provided allocentric information about the scene. Although spatial exploration was egocentric, surprisingly we found that performance in the JRD task was better for allocentric perspectives. This suggests that the egocentric representation of the scene was updated. This result is in line with previous studies using visual and somatosensory scenes, thus supporting the notion that different sensory modalities produce equivalent spatial representation(s). Moreover, our results have practical implications to improve training methods with sensory substitution devices (SSD). PMID:27148000
Majerus, Steve; Cowan, Nelson; Péters, Frédéric; Van Calster, Laurens; Phillips, Christophe; Schrouff, Jessica
2016-01-01
Recent studies suggest common neural substrates involved in verbal and visual working memory (WM), interpreted as reflecting shared attention-based, short-term retention mechanisms. We used a machine-learning approach to determine more directly the extent to which common neural patterns characterize retention in verbal WM and visual WM. Verbal WM was assessed via a standard delayed probe recognition task for letter sequences of variable length. Visual WM was assessed via a visual array WM task involving the maintenance of variable amounts of visual information in the focus of attention. We trained a classifier to distinguish neural activation patterns associated with high- and low-visual WM load and tested the ability of this classifier to predict verbal WM load (high-low) from their associated neural activation patterns, and vice versa. We observed significant between-task prediction of load effects during WM maintenance, in posterior parietal and superior frontal regions of the dorsal attention network; in contrast, between-task prediction in sensory processing cortices was restricted to the encoding stage. Furthermore, between-task prediction of load effects was strongest in those participants presenting the highest capacity for the visual WM task. This study provides novel evidence for common, attention-based neural patterns supporting verbal and visual WM. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
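The cross-task decoding logic described above can be illustrated with a minimal sketch (not the authors' pipeline): train a linear classifier to separate high- from low-load activation patterns in one WM task, then test it on patterns from the other task. Above-chance transfer implies a shared neural load code. All data here are synthetic and the variable names are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 50
# Load code assumed to be shared by both tasks (the hypothesis under test)
shared_load_pattern = rng.normal(size=n_voxels)

def simulate_task(task_offset):
    """Synthetic trial-by-voxel patterns: noise + task-specific offset + shared load signal."""
    labels = rng.integers(0, 2, n_trials)          # 0 = low load, 1 = high load
    X = rng.normal(size=(n_trials, n_voxels)) + task_offset
    X += np.outer(labels, shared_load_pattern)     # high-load trials carry the load code
    return X, labels

X_visual, y_visual = simulate_task(0.5)            # visual array WM "scans"
X_verbal, y_verbal = simulate_task(-0.5)           # letter-sequence WM "scans"

clf = LogisticRegression(max_iter=1000).fit(X_visual, y_visual)  # train on visual load
transfer_accuracy = clf.score(X_verbal, y_verbal)                # test on verbal load
print(f"between-task decoding accuracy: {transfer_accuracy:.2f}")
```

Because the simulated tasks share a load pattern, the classifier transfers well above the 0.5 chance level; with independent load codes, transfer would fall to chance, mirroring the study's contrast between the dorsal attention network and sensory cortices.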
Cell replacement and visual restoration by retinal sheet transplants
Seiler, Magdalene J.; Aramant, Robert B.
2012-01-01
Retinal diseases such as age-related macular degeneration (ARMD) and retinitis pigmentosa (RP) affect millions of people. Replacing lost cells with new cells that connect with the still functional part of the host retina might repair a degenerating retina and restore eyesight to an unknown extent. A unique model, subretinal transplantation of freshly dissected sheets of fetal-derived retinal progenitor cells, combined with its retinal pigment epithelium (RPE), has demonstrated successful results in both animals and humans. Most other approaches are restricted to rescuing endogenous retinal cells of the recipient in earlier disease stages through a ‘nursing’ role of the implanted cells and are not aimed at neural retinal cell replacement. Sheet transplants restore lost visual responses in several retinal degeneration models in the superior colliculus (SC) corresponding to the location of the transplant in the retina. They do not simply preserve visual performance – they increase visual responsiveness to light. Restoration of visual responses in the SC can be directly traced to neural cells in the transplant, demonstrating that synaptic connections between transplant and host contribute to the visual improvement. Transplant processes invade the inner plexiform layer of the host retina and form synapses with presumed host cells. In a Phase II trial of RP and ARMD patients, transplants of retina together with its RPE improved visual acuity. In summary, retinal progenitor sheet transplantation provides an excellent model to answer questions about how to repair and restore function of a degenerating retina. The supply of fetal donor tissue will always be limited, but the model can set a standard and provide an informative base for optimal cell replacement therapies such as embryonic stem cell (ESC)-derived therapy. PMID:22771454
High resolution iridocorneal angle imaging system by axicon lens assisted gonioscopy.
Perinchery, Sandeep Menon; Shinde, Anant; Fu, Chan Yiu; Jeesmond Hong, Xun Jie; Baskaran, Mani; Aung, Tin; Murukeshan, Vadakke Matham
2016-07-29
Direct visualization and assessment of the iridocorneal angle (ICA) region with high resolution is important for the clinical evaluation of glaucoma. However, current clinical imaging systems for the ICA do not provide sufficient structural detail due to their poor resolution. The key challenges in achieving high quality ICA imaging are its location in the anterior region of the eye and the occurrence of total internal reflection due to the refractive index difference between cornea and air. Here, we report an indirect axicon assisted gonioscopy imaging probe with white light illumination. Results obtained with this probe show significantly improved visualization of structures in the ICA, including the trabecular meshwork (TM) region, compared to currently available tools. It can reveal critical details of the ICA and is expected to aid clinical management by providing information that is complementary to angle photography and gonioscopy. PMID:27471000
How Ants Use Vision When Homing Backward.
Schwarz, Sebastian; Mangan, Michael; Zeil, Jochen; Webb, Barbara; Wystrach, Antoine
2017-02-06
Ants can navigate over long distances between their nest and food sites using visual cues [1, 2]. Recent studies show that this capacity is undiminished when walking backward while dragging a heavy food item [3-5]. This challenges the idea that ants use egocentric visual memories of the scene for guidance [1, 2, 6]. Can ants use their visual memories of terrestrial cues when going backward? Our results suggest that ants do not adjust their direction of travel based on the perceived scene while going backward. Instead, they maintain a straight direction using their celestial compass. This direction can be dictated by their path integrator [5] but can also be set using terrestrial visual cues after a forward peek. If the food item is too heavy to enable body rotations, ants moving backward drop their food on occasion, rotate and walk a few steps forward, return to the food, and drag it backward in a now-corrected direction defined by terrestrial cues. Furthermore, we show that ants can maintain their direction of travel independently of their body orientation. It thus appears that egocentric retinal alignment is required for visual scene recognition, but ants can translate this acquired directional information into a holonomic frame of reference, which enables them to decouple their travel direction from their body orientation and hence navigate backward. This reveals substantial flexibility and communication between different types of navigational information: from terrestrial to celestial cues and from egocentric to holonomic directional memories. VIDEO ABSTRACT. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dong, Han; Sharma, Diksha; Badano, Aldo, E-mail: aldo.badano@fda.hhs.gov
2014-12-15
Purpose: Monte Carlo simulations play a vital role in the understanding of the fundamental limitations, design, and optimization of existing and emerging medical imaging systems. Efforts in this area have resulted in the development of a wide variety of open-source software packages. One such package, hybridMANTIS, uses a novel hybrid concept to model indirect scintillator detectors by balancing the computational load using dual CPU and graphics processing unit (GPU) processors, obtaining computational efficiency with reasonable accuracy. In this work, the authors describe two open-source visualization interfaces, webMANTIS and visualMANTIS, to facilitate the setup of computational experiments via hybridMANTIS. Methods: The visualization tools visualMANTIS and webMANTIS enable the user to control simulation properties through a user interface. In the case of webMANTIS, control via a web browser allows access through mobile devices such as smartphones or tablets. webMANTIS acts as a server back-end and communicates with an NVIDIA GPU computing cluster that can support multiuser environments where users can execute different experiments in parallel. Results: The output consists of point response functions, pulse-height spectra, and optical transport statistics generated by hybridMANTIS. Users can download the output images and statistics as a zip file for future reference. In addition, webMANTIS provides a visualization window that displays a few selected optical photon paths as they are transported through the detector columns and allows the user to trace the history of the optical photons. Conclusions: The visualization tools visualMANTIS and webMANTIS provide features such as on-the-fly generation of pulse-height spectra and response functions for microcolumnar x-ray imagers while allowing users to save simulation parameters and results from prior experiments.
The graphical interfaces simplify the simulation setup and allow the user to go directly from specifying input parameters to receiving visual feedback for the model predictions.
Visual Imagery without Visual Perception?
ERIC Educational Resources Information Center
Bertolo, Helder
2005-01-01
The relationship between visual imagery and visual perception remains an open issue. Many studies have tried to determine whether the two processes share the same mechanisms or whether they are independent, using different neural substrates. Most research has been directed towards the need for activation of primary visual areas during imagery. Here we review…
The influence of attention, learning, and motivation on visual search.
Dodd, Michael D; Flowers, John H
2012-01-01
The 59th Annual Nebraska Symposium on Motivation (The Influence of Attention, Learning, and Motivation on Visual Search) took place April 7-8, 2011, on the University of Nebraska-Lincoln campus. The symposium brought together leading scholars who conduct research related to visual search at a variety of levels for a series of talks, poster presentations, panel discussions, and numerous additional opportunities for intellectual exchange. The Symposium was also streamed online for the first time in the history of the event, allowing individuals from around the world to view the presentations and submit questions. The present volume is intended both to commemorate the event itself and to allow our speakers additional opportunity to address issues and current research that have since arisen. Each of the speakers (and, in some cases, their graduate students and postdocs) has provided a chapter which both summarizes and expands on their original presentations. In this chapter, we sought to a) provide additional context as to how the Symposium came to be, b) discuss why we thought that this was an ideal time to organize a visual search symposium, and c) briefly address recent trends and potential future directions in the field. We hope you find the volume both enjoyable and informative, and we thank the authors who have contributed a series of engaging chapters.
Pollen structure visualization using high-resolution laboratory-based hard X-ray tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Qiong; Gluch, Jürgen; Krüger, Peter
A laboratory-based X-ray microscope is used to investigate the 3D structure of unstained whole pollen grains. For the first time, high-resolution laboratory-based hard X-ray microscopy is applied to study pollen grains. Based on the efficient acquisition of statistically relevant information-rich images using Zernike phase contrast, both surface and internal structures of pine pollen - including exine, intine and cellular structures - are clearly visualized. The specific volumes of these structures are calculated from the tomographic data. The systematic three-dimensional study of pollen grains provides morphological and structural information about taxonomic characters that are essential in palynology. Such studies have a direct impact on disciplines such as forestry, agriculture, horticulture, plant breeding and biodiversity. - Highlights: • Unstained whole pine pollen was visualized by high-resolution laboratory-based HXRM for the first time. • Pollen grains were compared across LM, SEM and high-resolution laboratory-based HXRM. • Phase contrast imaging provides significantly higher contrast in the raw images compared to absorption contrast imaging. • Surface and internal structures of the pine pollen, including exine, intine and cellular structures, are clearly visualized. • 3D volume data of unstained whole pollen grains were acquired and the specific volumes of the different layers were calculated.
Intuitive tactile zooming for graphics accessed by individuals who are blind and visually impaired.
Rastogi, Ravi; Pawluk, T V Dianne; Ketchum, Jessica
2013-07-01
One possibility for providing access to visual graphics for those who are visually impaired is to present them tactually; unfortunately, details easily available to vision need to be magnified to be accessible through touch. For this, we propose an "intuitive" zooming algorithm that avoids potential problems with directly applying visual zooming techniques to haptic displays, which sense the current location of a user on a virtual diagram with a position sensor and then provide the appropriate local information through force or tactile feedback. Our technique works by determining and then traversing the levels of an object tree hierarchy of a diagram. In this manner, the zoom steps adjust to the content to be viewed, avoid clipping, and do not zoom when no object is present. The algorithm was tested using a small, "mouse-like" display with tactile feedback on pictures representing houses in a community and boats on a lake. We asked the users to answer questions related to details in the pictures. Comparing our technique to linear and logarithmic step zooming, we found a significant increase in the correctness of responses (odds ratios of 2.64:1 and 2.31:1, respectively) and in usability (differences of 36% and 19%, respectively) using our "intuitive" zooming technique.
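The tree-traversal idea behind such content-adaptive zooming can be sketched as follows. This is a minimal illustration, not the authors' implementation: class names, bounding boxes, and the community/house/door example are invented, and a real system would tie each node to the haptic rendering of its level.

```python
# Sketch of "intuitive" zooming over an object tree hierarchy: zoom steps
# follow the diagram's content rather than fixed linear/logarithmic factors.

class Node:
    """One object in the diagram hierarchy, with an axis-aligned bounding box."""
    def __init__(self, name, bbox, children=None):
        self.name = name
        self.bbox = bbox                    # (xmin, ymin, xmax, ymax) in diagram units
        self.children = children or []

    def contains(self, x, y):
        xmin, ymin, xmax, ymax = self.bbox
        return xmin <= x <= xmax and ymin <= y <= ymax

def zoom_in(current, x, y):
    """Descend one hierarchy level toward the object under the cursor.

    Returns the child whose bounding box contains (x, y); stays put when the
    cursor is over empty space, so the view never zooms into nothing and each
    new view is clipped to a real object.
    """
    for child in current.children:
        if child.contains(x, y):
            return child
    return current                          # no object under cursor: do not zoom

def zoom_out(root, current):
    """Ascend one hierarchy level by locating the parent of the current node."""
    def parent_of(node, target):
        for child in node.children:
            if child is target:
                return node
            found = parent_of(child, target)
            if found is not None:
                return found
        return None
    return parent_of(root, current) or root

# Hypothetical diagram: a community containing a house, which contains a door.
door = Node("door", (2, 0, 3, 2))
house = Node("house", (0, 0, 6, 6), [door])
community = Node("community", (0, 0, 100, 100), [house])

view = zoom_in(community, 1, 1)             # cursor over the house -> zoom to house
view = zoom_in(view, 2.5, 1)                # cursor over the door  -> zoom to door
view = zoom_in(view, 2.5, 1)                # door has no children  -> stay put
```

The content-adaptive behavior falls out of the tree: the number and size of zoom steps depend only on how deeply objects are nested at the cursor position.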
Cichy, Radoslaw Martin; Pantazis, Dimitrios; Oliva, Aude
2016-01-01
Every human cognitive function, such as visual object recognition, is realized in a complex spatio-temporal activity pattern in the brain. Current brain imaging techniques in isolation cannot resolve the brain's spatio-temporal dynamics, because they provide either high spatial or temporal resolution but not both. To overcome this limitation, we developed an integration approach that uses representational similarities to combine measurements of magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) to yield a spatially and temporally integrated characterization of neuronal activation. Applying this approach to 2 independent MEG–fMRI data sets, we observed that neural activity first emerged in the occipital pole at 50–80 ms, before spreading rapidly and progressively in the anterior direction along the ventral and dorsal visual streams. Further region-of-interest analyses established that dorsal and ventral regions showed MEG–fMRI correspondence in representations later than early visual cortex. Together, these results provide a novel and comprehensive, spatio-temporally resolved view of the rapid neural dynamics during the first few hundred milliseconds of object vision. They further demonstrate the feasibility of spatially unbiased representational similarity-based fusion of MEG and fMRI, promising new insights into how the brain computes complex cognitive functions. PMID:27235099
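The fusion approach can be sketched with synthetic data: for each MEG time point, correlate the MEG representational dissimilarity matrix (RDM) with an fMRI region's RDM, and read off when the region's representational geometry emerges in time. This is an illustrative assumption-laden toy (invented condition counts, a hard-coded "onset", random data), not the authors' pipeline.

```python
# Representational-similarity fusion sketch: Spearman-correlate a time-resolved
# MEG RDM with a static fMRI ROI RDM, per time point.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_conditions, n_sensors, n_voxels, n_times = 12, 30, 80, 40

# One activation pattern per stimulus condition in the (hypothetical) ROI
fmri_patterns = rng.normal(size=(n_conditions, n_voxels))
fmri_rdm = pdist(fmri_patterns, metric="correlation")   # condensed RDM

fusion = np.empty(n_times)
for t in range(n_times):
    meg_patterns = rng.normal(size=(n_conditions, n_sensors))
    if t >= 10:  # after sample 10, MEG geometry partly mirrors the ROI's
        meg_patterns += 2.0 * fmri_patterns[:, :n_sensors]
    meg_rdm = pdist(meg_patterns, metric="correlation")
    fusion[t], _ = spearmanr(meg_rdm, fmri_rdm)         # MEG-fMRI correspondence at t
```

Plotting `fusion` against time would show the correspondence rising at the simulated onset; in the study, comparing such onset times across ROIs is what reveals the posterior-to-anterior progression.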
Task set induces dynamic reallocation of resources in visual short-term memory.
Sheremata, Summer L; Shomstein, Sarah
2017-08-01
Successful interaction with the environment requires the ability to flexibly allocate resources to different locations in the visual field. Recent evidence suggests that visual short-term memory (VSTM) resources are distributed asymmetrically across the visual field based upon task demands. Here, we propose that context, rather than the stimulus itself, determines the asymmetrical distribution of VSTM resources. To test whether context modulates the reallocation of resources to the right visual field, task set, defined by memory load, was manipulated to influence visual short-term memory performance. Performance was measured for single-feature objects embedded within predominantly single- or two-feature memory blocks. Therefore, context was varied to determine whether task set directly predicts changes in visual field biases. In accord with the dynamic reallocation of resources hypothesis, task set, rather than aspects of the physical stimulus, drove improvements in performance in the right visual field. Our results show, for the first time, that preparation for upcoming memory demands directly determines how resources are allocated across the visual field.
ERIC Educational Resources Information Center
Marshall, Lindsey; Meachem, Lester
2007-01-01
In this scoping study we have investigated the integration of subject-specific software into the structure of visual communications courses. There is a view that the response within visual communications courses to the rapid developments in technology has been linked to necessity rather than by design. Through perceptions of staff with day-to-day…
Representational Account of Memory: Insights from Aging and Synesthesia.
Pfeifer, Gaby; Ward, Jamie; Chan, Dennis; Sigala, Natasha
2016-12-01
The representational account of memory envisages perception and memory to be on a continuum rather than in discretely divided brain systems [Bussey, T. J., & Saksida, L. M. Memory, perception, and the ventral visual-perirhinal-hippocampal stream: Thinking outside of the boxes. Hippocampus, 17, 898-908, 2007]. We tested this account using a novel between-group design with young grapheme-color synesthetes, older adults, and young controls. We investigated how the disparate sensory-perceptual abilities between these groups translated into associative memory performance for visual stimuli that do not induce synesthesia. ROI analyses of the entire ventral visual stream showed that associative retrieval (a pair-associate retrieved in the absence of a visual stimulus) yielded enhanced activity in young and older adults' visual regions relative to synesthetes, whereas associative recognition (deciding whether a visual stimulus was the correct pair-associate) was characterized by enhanced activity in synesthetes' visual regions relative to older adults. Whole-brain analyses at associative retrieval revealed an effect of age in early visual cortex, with older adults showing enhanced activity relative to synesthetes and young adults. At associative recognition, the group effect was reversed: Synesthetes showed significantly enhanced activity relative to young and older adults in early visual regions. The inverted group effects observed between retrieval and recognition indicate that reduced sensitivity in visual cortex (as in aging) comes with increased activity during top-down retrieval and decreased activity during bottom-up recognition, whereas enhanced sensitivity (as in synesthesia) shows the opposite pattern. Our results provide novel evidence for the direct contribution of perceptual mechanisms to visual associative memory based on the examples of synesthesia and aging.
Chen, Chen; Schneps, Matthew H; Masyn, Katherine E; Thomson, Jennifer M
2016-11-01
Increasing evidence has shown visual attention span to be a factor, distinct from phonological skills, that explains single-word identification (pseudo-word/word reading) performance in dyslexia. Yet, little is known about how well visual attention span explains text comprehension. Observing reading comprehension in a sample of 105 high school students with dyslexia, we used a pathway analysis to examine the direct and indirect paths between visual attention span and reading comprehension while controlling for other factors such as phonological awareness, letter identification, short-term memory, IQ and age. Integrating phonemic decoding efficiency skills in the analytic model, this study aimed to disentangle how visual attention span and phonological skills work together in reading comprehension for readers with dyslexia. We found visual attention span to have a significant direct effect on the more difficult level of reading comprehension but not on the easier level. It also had a significant direct effect on pseudo-word identification but not on word identification. In addition, we found that visual attention span indirectly explains reading comprehension through pseudo-word reading and word reading skills. This study supports the hypothesis that at least part of the dyslexic profile can be explained by visual attention abilities. Copyright © 2016 John Wiley & Sons, Ltd.
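The direct-versus-indirect decomposition in such a pathway analysis can be sketched with the product-of-coefficients method on synthetic data. Variable names and effect sizes below are invented; a real analysis would add the study's covariates (phonological awareness, IQ, age) and bootstrap confidence intervals.

```python
# Toy mediation sketch: visual attention span (VAS) -> decoding -> comprehension,
# plus a direct VAS -> comprehension path. Indirect effect = a * b.
import numpy as np

rng = np.random.default_rng(2)
n = 105                                   # sample size matching the study
vas = rng.normal(size=n)                  # visual attention span
decoding = 0.6 * vas + rng.normal(scale=0.8, size=n)            # a-path
comprehension = 0.3 * vas + 0.5 * decoding + rng.normal(scale=0.8, size=n)

def slopes(y, predictors):
    """OLS coefficients via least squares, intercept dropped from the output."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

a = slopes(decoding, [vas])[0]                       # VAS -> decoding
b, c_direct = slopes(comprehension, [decoding, vas]) # decoding -> comp.; direct path
indirect = a * b                                     # mediated effect of VAS
```

With the simulated coefficients, `indirect` recovers roughly 0.6 × 0.5 = 0.3, while `c_direct` recovers the 0.3 direct path, mirroring how the study separates the two routes from VAS to comprehension.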
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perea, Daniel E.; Liu, Jia; Bartrand, Jonah A. G.
In this study, we report the atomic-scale analysis of biological interfaces using atom probe tomography. Embedding the protein ferritin in an organic polymer resin lacking nitrogen provided chemical contrast to visualize atomic distributions and distinguish organic-organic and organic-inorganic interfaces. The sample preparation method can be directly extended to further enhance the study of biological, organic and inorganic nanomaterials relevant to health, energy or the environment.
A Critical Review of the Use of Virtual Reality in Construction Engineering Education and Training.
Wang, Peng; Wu, Peng; Wang, Jun; Chi, Hung-Lin; Wang, Xiangyu
2018-06-08
Virtual Reality (VR) has been rapidly recognized and implemented in construction engineering education and training (CEET) in recent years due to its benefits of providing an engaging and immersive environment. The objective of this review is to critically collect and analyze the VR applications in CEET, covering all VR-related journal papers published from 1997 to 2017. The review follows a systematic three-stage analysis of VR technologies, applications, and future directions. It is found that the VR technologies adopted for CEET evolve over time, from desktop-based VR, immersive VR, 3D game-based VR, to Building Information Modelling (BIM)-enabled VR. A sibling technology, Augmented Reality (AR), has also emerged in CEET applications in recent years. These technologies have been applied in architecture and design visualization, construction health and safety training, equipment and operational task training, as well as structural analysis. Future research directions, including the integration of VR with emerging education paradigms and visualization technologies, have also been provided. The findings can help researchers and educators integrate VR into their education and training programs to improve training performance.
Kraemer, David J.M.; Schinazi, Victor R.; Cawkwell, Philip B.; Tekriwal, Anand; Epstein, Russell A.; Thompson-Schill, Sharon L.
2016-01-01
Using novel virtual cities, we investigated the influence of verbal and visual strategies on the encoding of navigation-relevant information in a large-scale virtual environment. In two experiments, participants watched videos of routes through four virtual cities and were subsequently tested on their memory for observed landmarks and on their ability to make judgments regarding the relative directions of the different landmarks along the route. In the first experiment, self-report questionnaires measuring visual and verbal cognitive styles were administered to examine correlations between cognitive styles, landmark recognition, and judgments of relative direction. Results demonstrate a tradeoff in which the verbal cognitive style is more beneficial for recognizing individual landmarks than for judging relative directions between them, whereas the visual cognitive style is more beneficial for judging relative directions than for landmark recognition. In a second experiment, we manipulated the use of verbal and visual strategies by varying task instructions given to separate groups of participants. Results confirm that a verbal strategy benefits landmark memory, whereas a visual strategy benefits judgments of relative direction. The manipulation of strategy by altering task instructions appears to trump individual differences in cognitive style. Taken together, we find that processing different details during route encoding, whether due to individual proclivities (Experiment 1) or task instructions (Experiment 2), results in benefits for different components of navigation relevant information. These findings also highlight the value of considering multiple sources of individual differences as part of spatial cognition investigations. PMID:27668486
Direct neural pathways convey distinct visual information to Drosophila mushroom bodies
Vogt, Katrin; Aso, Yoshinori; Hige, Toshihide; Knapek, Stephan; Ichinose, Toshiharu; Friedrich, Anja B; Turner, Glenn C; Rubin, Gerald M; Tanimoto, Hiromu
2016-01-01
Previously, we demonstrated that visual and olfactory associative memories of Drosophila share mushroom body (MB) circuits (Vogt et al., 2014). Unlike for odor representation, the MB circuit for visual information has not been characterized. Here, we show that a small subset of MB Kenyon cells (KCs) selectively responds to visual but not olfactory stimulation. The dendrites of these atypical KCs form a ventral accessory calyx (vAC), distinct from the main calyx that receives olfactory input. We identified two types of visual projection neurons (VPNs) directly connecting the optic lobes and the vAC. Strikingly, these VPNs are differentially required for visual memories of color and brightness. The segregation of visual and olfactory domains in the MB allows independent processing of distinct sensory memories and may be a conserved form of sensory representations among insects. DOI: http://dx.doi.org/10.7554/eLife.14009.001 PMID:27083044
Hänel, Claudia; Pieperhoff, Peter; Hentschel, Bernd; Amunts, Katrin; Kuhlen, Torsten
2014-01-01
The visualization of the progression of brain tissue loss in neurodegenerative diseases like corticobasal syndrome (CBS) can provide not only information about the localization and distribution of the volume loss, but also helps to understand the course and the causes of this neurodegenerative disorder. The visualization of such medical imaging data is often based on 2D sections because they show both internal and external structures in one image. Spatial information, however, is lost. 3D visualization of imaging data can solve this problem, but faces the difficulty that internally located structures may be occluded by structures near the surface. Here, we present an application with two designs for the 3D visualization of the human brain to address these challenges. In the first design, brain anatomy is displayed semi-transparently; it is supplemented by an anatomical section and cortical areas for spatial orientation, and the volumetric data of volume loss. The second design is guided by the principle of importance-driven volume rendering: a direct line-of-sight to the relevant structures in the deeper parts of the brain is provided by cutting out a frustum-like piece of brain tissue. The application was developed to run both in standard desktop environments and in immersive virtual reality environments with stereoscopic viewing for improved depth perception. We conclude that the presented application facilitates the perception of the extent of brain degeneration with respect to its localization and affected regions. PMID:24847243
NASA Astrophysics Data System (ADS)
Vinci, Matteo; Lipizer, Marina; Giorgetti, Alessandra
2016-04-01
The European Marine Observation and Data Network (EMODnet) initiative has the following purposes: to assemble marine metadata, data and products, to make these fragmented resources more easily available to public and private users and to provide quality-assured, standardised and harmonised marine data. EMODnet Chemistry was launched by DG MARE in 2009 to support the Marine Strategy Framework Directive (MSFD) requirements for the assessment of eutrophication and contaminants, following INSPIRE Directive rules. The aim is twofold: the first task is to make available and reusable the large amount of fragmented and inaccessible data hosted by European research institutes and environmental agencies. The second objective is to develop visualization services useful for the tasks of the MSFD. The technical set-up is based on the principle of adopting and adapting the SeaDataNet infrastructure for ocean and marine data, which are managed by National Oceanographic Data Centers, and relies on a distributed network of data centers. Data centers contribute to data harvesting and enrichment with the relevant metadata. Data are processed into interoperable formats (using agreed standards such as ISO XML and ODV) with the use of common vocabularies and standardized quality control procedures. Data quality control is a key issue when merging heterogeneous data coming from different sources; a data validation loop has been agreed within the EMODnet Chemistry community and is routinely performed. After data quality control done by the regional coordinators of the EU marine basins (Atlantic, Baltic, North, Mediterranean and Black Sea), validated regional datasets are used to develop data products useful for the requirements of the MSFD. EMODnet Chemistry provides interpolated seasonal maps of nutrients and services for the visualization of time series and profiles of several chemical parameters. All visualization services are developed following OGC standards such as WMS and WPS.
To test new strategies for data storage and reanalysis, and to upgrade infrastructure performance, EMODnet Chemistry has chosen the Cloud environment offered by Cineca (the Consortium of Italian Universities and research institutes), where both regional aggregated datasets and analysis and visualization services are hosted. Finally, besides the delivery of data and visualization products, the results of the data harvesting provide a useful tool to identify data gaps where future monitoring efforts should be focused.
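OGC visualization services of the kind described above are typically consumed through WMS GetMap requests. A minimal sketch of assembling such a request; the endpoint, layer name, and bounding box are placeholders, not the actual EMODnet Chemistry service addresses:

```python
from urllib.parse import urlencode

def getmap_url(base, layer, bbox, size=(800, 600), crs="EPSG:4326"):
    """Build a WMS 1.3.0 GetMap URL requesting a PNG rendering of one layer."""
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.3.0",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "CRS": crs,
        "BBOX": ",".join(str(v) for v in bbox),  # minx,miny,maxx,maxy
        "WIDTH": size[0],
        "HEIGHT": size[1],
        "FORMAT": "image/png",
        "TRANSPARENT": "TRUE",
    }
    return base + "?" + urlencode(params)

# Hypothetical endpoint and layer, Mediterranean-ish bounding box
url = getmap_url("https://example.org/wms", "nutrients_winter",
                 (-6.0, 30.0, 36.5, 46.0))
```

The same pattern extends to WPS Execute requests for server-side processing.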
Ecological Interface Design for Computer Network Defense.
Bennett, Kevin B; Bryant, Adam; Sushereba, Christen
2018-05-01
A prototype ecological interface for computer network defense (CND) was developed. Concerns about CND run high. Although there is a vast literature on CND, there is some indication that this research is not being translated into operational contexts. Part of the reason may be that CND has historically been treated as a strictly technical problem, rather than as a socio-technical problem. The cognitive systems engineering (CSE)/ecological interface design (EID) framework was used in the analysis and design of the prototype interface. A brief overview of CSE/EID is provided. EID principles of design (i.e., direct perception, direct manipulation and visual momentum) are described and illustrated through concrete examples from the ecological interface. Key features of the ecological interface include (a) a wide variety of alternative visual displays, (b) controls that allow easy, dynamic reconfiguration of these displays, (c) visual highlighting of functionally related information across displays, (d) control mechanisms to selectively filter massive data sets, and (e) the capability for easy expansion. Cyber attacks from a well-known data set are illustrated through screen shots. CND support needs to be developed with a triadic focus (i.e., humans interacting with technology to accomplish work) if it is to be effective. Iterative design and formal evaluation are also required. The discipline of human factors (HF) has a long tradition of success on both counts; it is time that HF became fully involved in CND. Direct application in supporting cyber analysts.
Senot, Patrice; Zago, Myrka; Lacquaniti, Francesco; McIntyre, Joseph
2005-12-01
Intercepting an object requires a precise estimate of its time of arrival at the interception point (time to contact or "TTC"). It has been proposed that knowledge about gravitational acceleration can be combined with first-order, visual-field information to provide a better estimate of TTC when catching falling objects. In this experiment, we investigated the relative role of visual and nonvisual information on motor-response timing in an interceptive task. Subjects were immersed in a stereoscopic virtual environment and asked to intercept with a virtual racket a ball falling from above or rising from below. The ball moved with different initial velocities and could accelerate, decelerate, or move at a constant speed. Depending on the direction of motion, the acceleration or deceleration of the ball could therefore be congruent or not with the acceleration that would be expected due to the force of gravity acting on the ball. Although the best success rate was observed for balls moving at a constant velocity, we systematically found a cross-effect of ball direction and acceleration on success rate and response timing. Racket motion was triggered on average 25 ms earlier when the ball fell from above than when it rose from below, whatever the ball's true acceleration. As visual-flow information was the same in both cases, this shift indicates an influence of the ball's direction relative to gravity on response timing, consistent with the anticipation of the effects of gravity on the flight of the ball.
Visualization of High-Resolution LiDAR Topography in Google Earth
NASA Astrophysics Data System (ADS)
Crosby, C. J.; Nandigam, V.; Arrowsmith, R.; Blair, J. L.
2009-12-01
The growing availability of high-resolution LiDAR (Light Detection And Ranging) topographic data has proven to be revolutionary for Earth science research. These data allow scientists to study the processes acting on the Earth's surface at resolutions not previously possible yet essential for their appropriate representation. In addition to their utility for research, the data have also been recognized as powerful tools for communicating earth science concepts for education and outreach purposes. Unfortunately, the massive volume of data produced by LiDAR mapping technology can be a barrier to their use. To facilitate access to these powerful data for research and educational purposes, we have been exploring the use of Keyhole Markup Language (KML) and Google Earth to deliver LiDAR-derived visualizations. The OpenTopography Portal (http://www.opentopography.org/) is a National Science Foundation-funded facility designed to provide access to Earth science-oriented LiDAR data. OpenTopography hosts a growing collection of LiDAR data for a variety of geologic domains, including many of the active faults in the western United States. We have found that LiDAR users span a wide spectrum of scientific applications, computing resources, and technical experience and thus require a data distribution system that provides various levels of access to the data. For users seeking a synoptic view of the data, and for education and outreach purposes, delivering full-resolution images derived from LiDAR topography into the Google Earth virtual globe is powerful. The virtual globe environment provides a freely available and easily navigated viewer and enables quick integration of the LiDAR visualizations with imagery, geographic layers, and other relevant data available in KML format. Through region-dependent network-linked KML, OpenTopography currently delivers over 20 GB of LiDAR-derived imagery to users via simple, easily downloaded KMZ files hosted at the Portal.
This method provides seamless access to hillshaded imagery for both bare earth and first return terrain models with various angles of illumination. Seamless access to LiDAR-derived imagery in Google Earth has proven to be the most popular product available in the OpenTopography Portal. The hillshade KMZ files have been downloaded over 3000 times by users ranging from earthquake scientists to K-12 educators who wish to introduce cutting edge real world data into their earth science lessons. OpenTopography also provides dynamically generated KMZ visualizations of LiDAR data products produced when users choose to use the OpenTopography point cloud access and processing system. These Google Earth compatible products allow users to quickly visualize the custom terrain products they have generated without the burden of loading the data into a GIS environment. For users who have installed the Google Earth browser plug-in, these visualizations can be launched directly from the OpenTopography results page and viewed directly in the browser.
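Region-dependent network links of the kind described above can be sketched as follows. The KML mechanism (a NetworkLink whose content loads only when its Region becomes active in the view) is standard, but the coordinates and tile URL here are illustrative placeholders, not OpenTopography's:

```python
# Minimal region-dependent NetworkLink; linked content streams in only
# when the Region's LatLonAltBox occupies at least minLodPixels on screen.
KML_TEMPLATE = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <NetworkLink>
    <Region>
      <LatLonAltBox>
        <north>{north}</north><south>{south}</south>
        <east>{east}</east><west>{west}</west>
      </LatLonAltBox>
      <Lod><minLodPixels>128</minLodPixels></Lod>
    </Region>
    <Link>
      <href>{href}</href>
      <viewRefreshMode>onRegion</viewRefreshMode>
    </Link>
  </NetworkLink>
</kml>"""

def region_link(north, south, east, west, href):
    """Return KML whose linked tile loads only when its Region is in view."""
    return KML_TEMPLATE.format(north=north, south=south,
                               east=east, west=west, href=href)

doc = region_link(34.1, 34.0, -118.0, -118.1,
                  "https://example.org/tiles/hillshade_12_34.kmz")
```

A full tile hierarchy nests such links so that higher-resolution tiles replace coarser ones as the user zooms in.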
Imaging-related medications: a class overview
2007-01-01
Imaging-related medications (contrast agents) are commonly utilized to improve visualization of radiographic, computed tomography (CT), and magnetic resonance (MR) images. While traditional medications are used specifically for their pharmacological actions, the ideal imaging agent provides enhanced contrast with little biological interaction. The radiopaque agents, barium sulfate and iodinated contrast agents, confer “contrast” to x-ray films by their physical ability to directly absorb x-rays. Gadolinium-based MR agents enhance visualization of tissues when exposed to a magnetic field. Ferrous-ferric oxide–based paramagnetic agents provide negative contrast for MR liver studies. This article provides an overview of clinically relevant information for the imaging-related medications commonly in use. It reviews the safety improvements in new generations of drugs; risk factors and precautions for the reduction of severe adverse reactions (i.e., extravasation, contrast-induced nephropathy, metformin-induced lactic acidosis, and nephrogenic fibrosing dermopathy/nephrogenic systemic fibrosis); and the significance of diligent patient screening before contrast exposure and appropriate monitoring after exposure. PMID:17948119
Visualizing polynucleotide polymerase machines at work
Steitz, Thomas A
2006-01-01
The structures of T7 RNA polymerase (T7 RNAP) captured in the initiation and elongation phases of transcription, that of φ29 DNA polymerase bound to a primer protein and those of the multisubunit RNAPs bound to initiating factors provide insights into how these proteins can initiate RNA synthesis and synthesize 6–10 nucleotides while remaining bound to the site of initiation. Structural insight into the translocation of the product transcript and the separation of the downstream duplex DNA is provided by the structures of the four states of nucleotide incorporation. Single molecule and biochemical studies show a distribution of primer terminus positions that is altered by the binding of NTP and PPi ligands. This article reviews the insights that imaging the structure of polynucleotide polymerases at different steps of the polymerization reaction has provided on the mechanisms of the polymerization reaction. Movies are shown that allow the direct visualization of the conformational changes that the polymerases undergo during the different steps of polymerization. PMID:16900098
A smart sensor architecture based on emergent computation in an array of outer-totalistic cells
NASA Astrophysics Data System (ADS)
Dogaru, Radu; Dogaru, Ioana; Glesner, Manfred
2005-06-01
A novel smart-sensor architecture is proposed, capable of segmenting and recognizing characters in a monochrome image and of providing a list of ASCII codes representing the characters recognized in the monochrome visual field. It can operate as an aid for the blind or in industrial applications. A bio-inspired cellular model with simple linear neurons was found best suited to perform the nontrivial task of cropping isolated compact objects such as handwritten digits or characters. By attaching a simple outer-totalistic cell to each pixel sensor, emergent computation in the resulting cellular automata lattice provides a straightforward and compact solution to the otherwise computationally intensive problem of character segmentation. A simple and robust recognition algorithm is built in a compact sequential controller accessing the array of cells so that the integrated device can provide directly a list of codes of the recognized characters. Preliminary simulation tests indicate good performance and robustness to various distortions of the visual field.
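An outer-totalistic cellular automaton updates each cell from its own state and the sum over its neighbourhood. A minimal sketch of one synchronous update step; the example rule (gap filling inside a character blob) is illustrative and not the rule used in the paper:

```python
import numpy as np

def neighbor_sum(grid):
    """8-neighbour sum with zero padding at the image border."""
    p = np.pad(grid, 1)
    return (p[:-2, :-2] + p[:-2, 1:-1] + p[:-2, 2:] +
            p[1:-1, :-2] +               p[1:-1, 2:] +
            p[2:, :-2]   + p[2:, 1:-1] + p[2:, 2:])

def outer_totalistic_step(grid, rule):
    """One synchronous update: next state = rule(own state, neighbour sum)."""
    return rule(grid, neighbor_sum(grid))

# Illustrative rule: active pixels persist, and an inactive pixel activates
# when at least 5 of its 8 neighbours are active.
fill = lambda c, s: ((c == 1) | (s >= 5)).astype(int)

img = np.zeros((5, 5), dtype=int)
img[1:4, 1:4] = 1
img[2, 2] = 0                      # a one-pixel hole in the blob
out = outer_totalistic_step(img, fill)
```

Iterating such a rule until the lattice stabilizes yields the emergent segmentation behaviour the abstract describes, with one cell per pixel sensor.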
Microfluidic Model Porous Media: Fabrication and Applications.
Anbari, Alimohammad; Chien, Hung-Ta; Datta, Sujit S; Deng, Wen; Weitz, David A; Fan, Jing
2018-05-01
Complex fluid flow in porous media is ubiquitous in many natural and industrial processes. Direct visualization of the fluid structure and flow dynamics is critical for understanding and eventually manipulating these processes. However, the opacity of realistic porous media makes such visualization very challenging. Micromodels, microfluidic model porous media systems, have been developed to address this challenge. They provide a transparent interconnected porous network that enables the optical visualization of the complex fluid flow occurring inside at the pore scale. In this Review, the materials and fabrication methods to make micromodels, the main research activities that are conducted with micromodels and their applications in petroleum, geologic, and environmental engineering, as well as in the food and wood industries, are discussed. The potential applications of micromodels in other areas are also discussed and the key issues that should be addressed in the near future are proposed. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
An indoor navigation system for the visually impaired.
Guerrero, Luis A; Vasquez, Francisco; Ochoa, Sergio F
2012-01-01
Navigation in indoor environments is highly challenging for the severely visually impaired, particularly in spaces visited for the first time. Several solutions have been proposed to deal with this challenge. Although some of them have shown to be useful in real scenarios, they involve a significant deployment effort or use artifacts that are not natural for blind users. This paper presents an indoor navigation system that was designed taking into consideration usability as the quality requirement to be maximized. This solution identifies a person's position and calculates the velocity and direction of their movements. Using this information, the system determines the user's trajectory, locates possible obstacles in that route, and offers navigation information to the user. The solution has been evaluated using two experimental scenarios. Although the results are not yet sufficient to support strong conclusions, they indicate that the system is suitable to guide visually impaired people through an unknown built environment.
Laparoscopic access with a visualizing trocar.
Wolf, J S
1997-01-01
Although useful in most situations, the standard laparoscopic access techniques of Veress needle insertion and Hasson-type cannula placement have several inherent disadvantages. Veress needle placement may be hazardous in patients at high risk for intraabdominal adhesions and difficult in patients who are obese. The usual alternative, the Hasson-type cannula, often does not provide a good gas seal. As another option, the use of a visualizing trocar (OPTIVIEW) has proven to be effective in the initial experience at the University of Michigan. The inner trocar of the visualizing trocar is hollow except for a clear plastic conical tip with two external ridges. The trocar-cannula assembly is passed through tissue layers to enter the operative space under direct vision from a 10-mm zero-degree laparoscope placed into the trocar. Results suggest that this technique is an excellent alternative to Veress needle placement when laparoscopic access is likely to be hazardous or difficult.
Dichoptic training enables the adult amblyopic brain to learn.
Li, Jinrong; Thompson, Benjamin; Deng, Daming; Chan, Lily Y L; Yu, Minbin; Hess, Robert F
2013-04-22
Adults with amblyopia, a common visual cortex disorder caused primarily by binocular disruption during an early critical period, do not respond to conventional therapy involving occlusion of one eye. But it is now clear that the adult human visual cortex has a significant degree of plasticity, suggesting that something must be actively preventing the adult brain from learning to see through the amblyopic eye. One possibility is an inhibitory signal from the contralateral eye that suppresses cortical inputs from the amblyopic eye. Such a gating mechanism could explain the apparent lack of plasticity within the adult amblyopic visual cortex. Here we provide direct evidence that alleviating suppression of the amblyopic eye through dichoptic stimulus presentation induces greater levels of plasticity than forced use of the amblyopic eye alone. This indicates that suppression is a key gating mechanism that prevents the amblyopic brain from learning to see. Copyright © 2013 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Han, Mengdi; Zhang, Xiao-Sheng; Sun, Xuming; Meng, Bo; Liu, Wen; Zhang, Haixia
2014-04-01
The triboelectric nanogenerator (TENG) is a promising device in energy harvesting and self-powered sensing. In this work, we demonstrate a magnetic-assisted TENG, utilizing the magnetic force for electric generation. Maximum power density of 541.1 mW/m2 is obtained at 16.67 MΩ for the triboelectric part, while the electromagnetic part can provide power density of 649.4 mW/m2 at 16 Ω. Through theoretical calculation and experimental measurement, linear relationship between the tilt angle and output voltage at large angles is observed. On this basis, a self-powered omnidirectional tilt sensor is realized by two magnetic-assisted TENGs, which can measure the magnitude and direction of the tilt angle at the same time. For visualized sensing of the tilt angle, a sensing system is established, which is portable, intuitive, and self-powered. This visualized system greatly simplifies the measure process, and promotes the development of self-powered systems.
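Given the reported linear voltage-angle relationship at large angles, two orthogonal sensors suffice to recover both tilt magnitude and direction. A sketch of that readout, assuming a hypothetical calibration slope k (volts per degree); the values are illustrative, not the device's measured calibration:

```python
import math

def tilt_from_voltages(vx, vy, k=0.05):
    """vx, vy: outputs of two orthogonal TENGs; k: assumed volts per degree."""
    ax, ay = vx / k, vy / k                       # per-axis tilt angles (degrees)
    magnitude = math.hypot(ax, ay)                # overall tilt angle
    direction = math.degrees(math.atan2(ay, ax))  # azimuth of the tilt
    return magnitude, direction

mag, az = tilt_from_voltages(0.5, 0.5)   # equal voltages -> 45-degree azimuth
```

In practice each axis would be calibrated separately and the linear model applied only in the large-angle regime the abstract reports.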
Accessing long-term memory representations during visual change detection.
Beck, Melissa R; van Lamsweerde, Amanda E
2011-04-01
In visual change detection tasks, providing a cue to the change location concurrent with the test image (post-cue) can improve performance, suggesting that, without a cue, not all encoded representations are automatically accessed. Our studies examined the possibility that post-cues can encourage the retrieval of representations stored in long-term memory (LTM). Participants detected changes in images composed of familiar objects. Performance was better when the cue directed attention to the post-change object. Supporting the role of LTM in the cue effect, the effect was similar regardless of whether the cue was presented during the inter-stimulus interval, concurrent with the onset of the test image, or after the onset of the test image. Furthermore, the post-cue effect and LTM performance were similarly influenced by encoding time. These findings demonstrate that monitoring the visual world for changes does not automatically engage LTM retrieval.
Personal sleep pattern visualization using sequence-based kernel self-organizing map on sound data.
Wu, Hongle; Kato, Takafumi; Yamada, Tomomi; Numao, Masayuki; Fukui, Ken-Ichi
2017-07-01
We propose a method to discover sleep patterns via clustering of sound events recorded during sleep. The proposed method extends the conventional self-organizing map algorithm by kernelization and sequence-based techniques to obtain a fine-grained map that visualizes the distribution and changes of sleep-related events. We applied features widely used in sound processing and popular kernel functions within the proposed method to evaluate and compare performance. The proposed method provides a new aspect of sleep monitoring because the results demonstrate that sound events can be directly correlated to an individual's sleep patterns. In addition, by visualizing the transition of cluster dynamics, sleep-related sound events were found to relate to the various stages of sleep. Therefore, these results empirically warrant future study into the assessment of personal sleep quality using sound data. Copyright © 2017 Elsevier B.V. All rights reserved.
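For context, the classical self-organizing map update that the paper extends (the kernelization and sequence-based distance are not reproduced here) can be sketched with synthetic feature vectors:

```python
import numpy as np

rng = np.random.default_rng(1)
grid_h, grid_w, dim = 6, 6, 8
weights = rng.normal(size=(grid_h, grid_w, dim))      # one prototype per map node
coords = np.dstack(np.mgrid[0:grid_h, 0:grid_w])      # node positions on the map

def som_step(x, weights, lr=0.5, sigma=1.5):
    """Move the best-matching unit and its map neighbours toward sample x."""
    bmu = np.unravel_index(np.argmin(np.linalg.norm(weights - x, axis=2)),
                           (grid_h, grid_w))
    d2 = ((coords - np.array(bmu)) ** 2).sum(axis=2)  # squared map distance to BMU
    h = np.exp(-d2 / (2 * sigma ** 2))                # neighbourhood function
    weights += lr * h[..., None] * (x - weights)      # in-place update
    return bmu

# Train on 200 synthetic "sound event" feature vectors
for x in rng.normal(size=(200, dim)):
    som_step(x, weights)
```

The kernelized, sequence-aware variant replaces the Euclidean distance above with a kernel-induced distance over event sequences; in a full implementation, lr and sigma would also decay over training.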
Tracking the allocation of attention using human pupillary oscillations
Naber, Marnix; Alvarez, George A.; Nakayama, Ken
2013-01-01
The muscles that control the pupil are richly innervated by the autonomic nervous system. While there are central pathways that drive pupil dilations in relation to arousal, there is no anatomical evidence that cortical centers involved with visual selective attention innervate the pupil. In this study, we show that such connections must exist. Specifically, we demonstrate a novel Pupil Frequency Tagging (PFT) method, where oscillatory changes in stimulus brightness over time are mirrored by pupil constrictions and dilations. We find that the luminance-induced pupil oscillations are enhanced when covert attention is directed to the flicker stimulus and when targets are correctly detected in an attentional tracking task. These results suggest that the amplitudes of pupil responses closely follow the allocation of focal visual attention and the encoding of stimuli. PFT provides a new opportunity to study top-down visual attention itself as well as to identify the pathways and mechanisms that support this unexpected phenomenon. PMID:24368904
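The frequency-tagging readout amounts to measuring the amplitude of the pupil trace at the tagged flicker frequency. A sketch with a simulated trace; the sampling rate, duration, tag frequency, and amplitudes are illustrative values, not the study's parameters:

```python
import numpy as np

fs, dur, f_tag = 60.0, 10.0, 1.2       # sample rate (Hz), duration (s), tag (Hz)
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(2)

# Simulated pupil trace: an oscillation locked to the tagged flicker
# plus measurement noise.
pupil = 0.3 * np.sin(2 * np.pi * f_tag * t) + 0.1 * rng.normal(size=t.size)

spec = 2 * np.abs(np.fft.rfft(pupil)) / t.size   # single-sided amplitude spectrum
freqs = np.fft.rfftfreq(t.size, 1 / fs)
amp_at_tag = spec[np.argmin(np.abs(freqs - f_tag))]
```

Comparing amp_at_tag between attended and unattended conditions is the logic of the attention index described in the abstract.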
Bosworth, Rain G.; Petrich, Jennifer A.; Dobkins, Karen R.
2012-01-01
In order to investigate differences in the effects of spatial attention between the left visual field (LVF) and the right visual field (RVF), we employed a full/poor attention paradigm using stimuli presented in the LVF vs. RVF. In addition, to investigate differences in the effects of spatial attention between the Dorsal and Ventral processing streams, we obtained motion thresholds (motion coherence thresholds and fine direction discrimination thresholds) and orientation thresholds, respectively. The results of this study showed negligible effects of attention on the orientation task, in either the LVF or RVF. In contrast, for both motion tasks, there was a significant effect of attention in the LVF, but not in the RVF. These data provide psychophysical evidence for greater effects of spatial attention in the LVF/right hemisphere, specifically, for motion processing in the Dorsal stream. PMID:22051893
Wang, Qinghua; Ri, Shien; Tsuda, Hiroshi; Kodera, Masako; Suguro, Kyoichi; Miyashita, Naoto
2017-09-19
Quantitative detection of defects in atomic structures is of great significance for evaluating product quality and guiding quality-improvement processes. In this study, a Fourier transform filtered sampling Moire technique was proposed to visualize and detect defects in atomic arrays in a large field of view. Defect distributions, defect numbers and defect densities could be visually and quantitatively determined from a single atomic structure image at low cost. The effectiveness of the proposed technique was verified through numerical simulations. As an application, the dislocation distributions in a GaN/AlGaN atomic structure in two directions were magnified and displayed in Moire phase maps, and defect locations and densities were detected automatically. The proposed technique can provide valuable references to materials scientists and engineers for checking the effect of various treatments for defect reduction. © 2017 IOP Publishing Ltd.
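The core of sampling moire can be sketched in one dimension: a fringe signal sampled at a pitch close to its own period yields a slowly varying moire whose phase is recovered by a discrete Fourier sum over the sampling phases. The signal and pitch below are synthetic, and this omits the paper's Fourier-transform filtering stage:

```python
import numpy as np

T = 10                                       # sampling pitch in pixels
x = np.arange(400)
signal = np.cos(2 * np.pi * x / 10.5)        # fringe period slightly != T

# Build T phase-shifted copies: downsample starting at offset k, then
# linearly interpolate back to every pixel.
shifted = np.stack([np.interp(x, x[k::T], signal[k::T]) for k in range(T)])

# First-harmonic discrete Fourier sum over the shift index k extracts the
# moire phase at every pixel; dislocations would appear as discontinuities
# in this phase map.
k = np.arange(T)[:, None]
phase = np.angle(np.sum(shifted * np.exp(-2j * np.pi * k / T), axis=0))
```

For the synthetic values above the recovered phase ramps at 2*pi*(1/10.5 - 1/10) radians per pixel, i.e. the moire magnifies the small mismatch between fringe period and sampling pitch.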
Figure-ground activity in V1 and guidance of saccadic eye movements.
Supèr, Hans
2006-01-01
Every day we shift our gaze about 150,000 times, mostly without noticing it. The directions of these gaze shifts are not random but are guided by sensory information and internal factors. After each movement the eyes hold still for a brief moment so that visual information at the center of our gaze can be processed in detail. This means that visual information at the saccade target location is sufficient to accurately guide the gaze shift, yet is not sufficiently processed to be fully perceived. In this paper I will discuss the possible role of activity in the primary visual cortex (V1), in particular figure-ground activity, in oculo-motor behavior. Figure-ground activity occurs during the late response period of V1 neurons and correlates with perception. The strength of figure-ground responses predicts the direction and moment of saccadic eye movements. The superior colliculus, a gaze control center that integrates visual and motor signals, receives direct anatomical connections from V1. These projections may convey the perceptual information that is required for appropriate gaze shifts. In conclusion, figure-ground activity in V1 may act as an intermediate component linking visual and motor signals.
Motion Direction Biases and Decoding in Human Visual Cortex
Wang, Helena X.; Merriam, Elisha P.; Freeman, Jeremy
2014-01-01
Functional magnetic resonance imaging (fMRI) studies have relied on multivariate analysis methods to decode visual motion direction from measurements of cortical activity. Above-chance decoding has been commonly used to infer the motion-selective response properties of the underlying neural populations. Moreover, patterns of reliable response biases across voxels that underlie decoding have been interpreted to reflect maps of functional architecture. Using fMRI, we identified a direction-selective response bias in human visual cortex that: (1) predicted motion-decoding accuracy; (2) depended on the shape of the stimulus aperture rather than the absolute direction of motion, such that response amplitudes gradually decreased with distance from the stimulus aperture edge corresponding to motion origin; and (3) was present in V1, V2, and V3, but not evident in MT+, explaining the higher motion-decoding accuracies reported previously in early visual cortex. These results demonstrate that fMRI-based motion decoding has little or no dependence on the underlying functional organization of motion selectivity. PMID:25209297
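The decoding step such studies rely on can be illustrated with a toy simulation. This is a hedged sketch only: the voxel count, noise level, response-bias model, and nearest-centroid read-out are assumptions for the example, not the authors' analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n_vox, n_trials = 50, 40

# Hypothetical voxel bias: each voxel responds slightly more to one of two
# motion directions (an aperture-edge bias of the kind the abstract describes).
bias = rng.normal(0, 1, n_vox)

def simulate(direction):
    """Simulate n_trials voxel-pattern responses for direction in {+1, -1}."""
    return direction * bias + rng.normal(0, 2.0, (n_trials, n_vox))

train = np.vstack([simulate(+1), simulate(-1)])
labels = np.array([+1] * n_trials + [-1] * n_trials)
centroids = {d: train[labels == d].mean(axis=0) for d in (+1, -1)}

def decode(pattern):
    # Nearest-centroid read-out of a single voxel response pattern.
    return min(centroids, key=lambda d: np.linalg.norm(pattern - centroids[d]))

test_set = np.vstack([simulate(+1), simulate(-1)])
pred = np.array([decode(p) for p in test_set])
accuracy = (pred == labels).mean()
```

Above-chance accuracy here arises entirely from the simulated response bias, mirroring the paper's point that decoding need not reflect fine-scale functional organization.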
Contextual effects on smooth-pursuit eye movements.
Spering, Miriam; Gegenfurtner, Karl R
2007-02-01
Segregating a moving object from its visual context is particularly relevant for the control of smooth-pursuit eye movements. We examined the interaction between a moving object and a stationary or moving visual context to determine the role of the context motion signal in driving pursuit. Eye movements were recorded from human observers to a medium-contrast Gaussian dot that moved horizontally at constant velocity. A peripheral context consisted of two vertically oriented sinusoidal gratings, one above and one below the stimulus trajectory, that were either stationary or drifted in the same or the opposite direction as the target at different velocities. We found that a stationary context impaired pursuit acceleration and velocity and prolonged pursuit latency. A drifting context enhanced pursuit performance, irrespective of its motion direction. This effect was modulated by context contrast and orientation. When a context was briefly perturbed to move faster or slower, eye velocity changed accordingly, but only when the context was drifting along with the target. Perturbing a context in the direction orthogonal to target motion evoked a deviation of the eye opposite to the perturbation direction. We therefore provide evidence for the use of absolute and relative motion cues, or motion assimilation and motion contrast, for the control of smooth-pursuit eye movements.
Distributions of experimental protein structures on coarse-grained free energy landscapes
Liu, Jie; Jernigan, Robert L.
2015-01-01
Predicting conformational changes of proteins is needed in order to fully comprehend functional mechanisms. With the large number of available structures in sets of related proteins, it is now possible to directly visualize the clusters of conformations and their conformational transitions through the use of principal component analysis. The most striking observation about the distributions of the structures along the principal components is their highly non-uniform distributions. In this work, we use principal component analysis of experimental structures of 50 diverse proteins to extract the most important directions of their motions, sample structures along these directions, and estimate their free energy landscapes by combining knowledge-based potentials and entropy computed from elastic network models. When these resulting motions are visualized upon their coarse-grained free energy landscapes, the basis for conformational pathways becomes readily apparent. Using three well-studied proteins, T4 lysozyme, serum albumin, and sarco-endoplasmic reticular Ca2+ adenosine triphosphatase (SERCA), as examples, we show that such free energy landscapes of conformational changes provide meaningful insights into the functional dynamics and suggest transition pathways between different conformational states. As a further example, we also show that Monte Carlo simulations on the coarse-grained landscape of HIV-1 protease can directly yield pathways for force-driven conformational changes. PMID:26723638
Visual gene developer: a fully programmable bioinformatics software for synthetic gene optimization.
Jung, Sang-Kyu; McDonald, Karen
2011-08-16
Direct gene synthesis is becoming more popular owing to decreases in gene synthesis pricing. Compared with using natural genes, gene synthesis provides a good opportunity to optimize gene sequence for specific applications. In order to facilitate gene optimization, we have developed a stand-alone software called Visual Gene Developer. The software not only provides general functions for gene analysis and optimization along with an interactive user-friendly interface, but also includes unique features such as programming capability, dedicated mRNA secondary structure prediction, artificial neural network modeling, network & multi-threaded computing, and user-accessible programming modules. The software allows a user to analyze and optimize a sequence using main menu functions or specialized module windows. Alternatively, gene optimization can be initiated by designing a gene construct and configuring an optimization strategy. A user can choose several predefined or user-defined algorithms to design a complicated strategy. The software provides expandable functionality as platform software supporting module development using popular script languages such as VBScript and JScript in the software programming environment. Visual Gene Developer is useful for both researchers who want to quickly analyze and optimize genes, and those who are interested in developing and testing new algorithms in bioinformatics. The software is available for free download at http://www.visualgenedeveloper.net.
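One of the simplest optimization strategies such a tool might offer, back-translating a protein sequence using the most frequent synonymous codon, can be sketched as follows. The codon-usage table here is a truncated, hypothetical example, not Visual Gene Developer's data or its actual algorithm.

```python
# Hypothetical (truncated) codon-usage table: relative frequencies of
# synonymous codons for a few residues in some host organism.
CODON_USAGE = {
    "M": {"ATG": 1.00},
    "K": {"AAA": 0.74, "AAG": 0.26},
    "F": {"TTT": 0.57, "TTC": 0.43},
    "*": {"TAA": 0.61, "TGA": 0.30, "TAG": 0.09},
}

def optimize(protein):
    """Back-translate `protein`, choosing the most frequent codon per
    residue (the simplest of the predefined strategies such software
    might provide)."""
    return "".join(max(CODON_USAGE[aa], key=CODON_USAGE[aa].get)
                   for aa in protein)

dna = optimize("MKF*")  # -> "ATGAAATTTTAA"
```

A real optimizer would combine this with constraints such as restriction-site avoidance and mRNA secondary-structure checks, which is where a programmable strategy layer like the one described becomes useful.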
Method and apparatus for accurately manipulating an object during microelectrophoresis
Parvin, Bahram A.; Maestre, Marcos F.; Fish, Richard H.; Johnston, William E.
1997-01-01
An apparatus using electrophoresis provides accurate manipulation of an object on a microscope stage for further manipulations and reactions. The present invention also provides an inexpensive and easily accessible means to move an object without damage to the object. A plurality of electrodes are coupled to the stage in an array, whereby the electrode array allows for distinct manipulations of the electric field for accurate manipulations of the object. There is an electrode array control coupled to the plurality of electrodes for manipulating the electric field. In an alternative embodiment, a chamber is provided on the stage to hold the object. The plurality of electrodes are positioned in the chamber, and the chamber is filled with fluid. The system can be automated using visual servoing, which manipulates the control parameters, i.e., the x, y stage, applying the field, etc., after extracting the significant features directly from image data. Visual servoing includes an imaging device and computer system to determine the location of the object. A second stage, having a plurality of tubes positioned on top of it, can be accurately positioned by visual servoing so that one end of one of the tubes surrounds at least part of the object on the first stage.
Spatio-temporal visualization of air-sea CO2 flux and carbon budget using volume rendering
NASA Astrophysics Data System (ADS)
Du, Zhenhong; Fang, Lei; Bai, Yan; Zhang, Feng; Liu, Renyi
2015-04-01
This paper presents a novel visualization method to show the spatio-temporal dynamics of carbon sinks and sources, and carbon fluxes in the ocean carbon cycle. The air-sea carbon budget and its process of accumulation are demonstrated in the spatial dimension, while the distribution pattern and variation of CO2 flux are expressed by color changes. In this way, we unite spatial and temporal characteristics of satellite data through visualization. A GPU-based direct volume rendering technique using half-angle slicing is adopted to dynamically visualize the released or absorbed CO2 gas with shadow effects. A data model is designed to generate four-dimensional (4D) data from satellite-derived air-sea CO2 flux products, and an out-of-core scheduling strategy is also proposed for on-the-fly rendering of time series of satellite data. The presented 4D visualization method is implemented on graphics cards with vertex, geometry and fragment shaders. It provides a visually realistic simulation and user interaction for real-time rendering. This approach has been integrated into the Information System of Ocean Satellite Monitoring for Air-sea CO2 Flux (IssCO2) for the research and assessment of air-sea CO2 flux in the China Seas.
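The emission-absorption compositing at the heart of direct volume rendering can be sketched in a few lines. This is a CPU sketch of front-to-back compositing along one axis; the GPU half-angle slicing and shadow effects used in the paper are omitted, and the volume and transfer function below are toy assumptions.

```python
import numpy as np

def composite(volume, transfer, axis=0):
    """Front-to-back emission-absorption compositing of a scalar volume.
    `transfer` maps a scalar slab to (emitted_intensity, alpha)."""
    vol = np.moveaxis(volume, axis, 0)
    color = np.zeros(vol.shape[1:])
    trans = np.ones(vol.shape[1:])   # accumulated transparency per pixel
    for slab in vol:                 # march front to back
        c, a = transfer(slab)
        color += trans * a * c       # add light attenuated by what is in front
        trans *= (1.0 - a)           # occlude what lies behind
    return color, trans

# Toy CO2-flux-like field: one absorbing/emitting blob in an 8^3 volume.
vol = np.zeros((8, 8, 8))
vol[2:5, 3:6, 3:6] = 1.0
img, transparency = composite(vol, lambda s: (s, 0.3 * s))
```

Time series of such volumes, streamed with an out-of-core scheduler as the paper proposes, give the 4-D animation of released or absorbed gas.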
Multisensory integration across the senses in young and old adults
Mahoney, Jeannette R.; Li, Po Ching Clara; Oh-Park, Mooyeon; Verghese, Joe; Holtzer, Roee
2011-01-01
Stimuli are processed concurrently across multiple sensory inputs. Here we directly compared the effect of multisensory integration (MSI) on reaction time across three paired sensory inputs in eighteen young (M=19.17 yrs) and eighteen old (M=76.44 yrs) individuals. Participants were determined to be non-demented and without any medical or psychiatric conditions that would affect their performance. Participants responded to randomly presented unisensory (auditory, visual, somatosensory) stimuli and three paired sensory inputs consisting of auditory-somatosensory (AS), auditory-visual (AV), and visual-somatosensory (VS) stimuli. Results revealed that reaction time (RT) to all multisensory pairings was significantly faster than RTs elicited to the constituent unisensory conditions across age groups, findings that could not be accounted for by simple probability summation. Both young and old participants responded fastest to multisensory pairings containing somatosensory input. Compared to younger adults, older adults demonstrated a significantly greater RT benefit when processing concurrent VS information. In terms of co-activation, older adults demonstrated a significant increase in the magnitude of visual-somatosensory co-activation (i.e., multisensory integration), while younger adults demonstrated a significant increase in the magnitude of auditory-visual and auditory-somatosensory co-activation. This study provides the first evidence in support of the facilitative effect of pairing somatosensory with visual stimuli in older adults. PMID:22024545
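The probability-summation test mentioned above is commonly formalized as Miller's race-model inequality, which can be sketched on simulated reaction times. The RT distributions below are hypothetical, chosen only to show how a violation of the bound is detected, and are not the study's data.

```python
import numpy as np

rng = np.random.default_rng(2)

def cdf(rts, t):
    """Empirical cumulative RT distribution evaluated at times t."""
    return (rts[:, None] <= t[None, :]).mean(axis=0)

# Hypothetical RTs (ms): the bimodal condition is faster than either
# unisensory condition, as reported in the study.
visual = rng.normal(350, 40, 500)
somato = rng.normal(330, 40, 500)
bimodal = rng.normal(280, 35, 500)

t = np.linspace(150, 500, 200)
# Miller's race-model bound: P(RT_vs <= t) must not exceed
# P(RT_v <= t) + P(RT_s <= t) if simple probability summation holds.
violation = cdf(bimodal, t) - np.minimum(cdf(visual, t) + cdf(somato, t), 1.0)
max_violation = violation.max()
```

A positive `max_violation` over some time window is the signature of co-activation, i.e., integration beyond what parallel independent channels can explain.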
Flom, Ross; Johnson, Sarah
2011-03-01
Between 12 and 14 months of age, infants begin to use another's direction of gaze and affective expression in learning about various objects and events. What is not well understood is how long infants' behaviour towards a previously unfamiliar object continues to be influenced following their participation in circumstances of social referencing. In this experiment, we examined infants' sensitivity to an adult's direction of gaze and their visual preference for one of two objects following a 5-min, 1-day, or 1-month delay. Ninety-six 12-month-olds participated. For half of the infants during habituation (i.e., familiarization), the adult's direction of gaze was directed towards an unfamiliar object (look condition). For the remaining half of the infants during habituation, the adult's direction of gaze was directed away from the unfamiliar object (look-away condition). All infants were habituated to two events. One event consisted of an adult looking towards (look condition) or away from (look-away condition) an object while facially and vocally conveying a positive affective expression. The second event consisted of the same adult looking towards or away from a different object while conveying a disgusted affective expression. Following the habituation phase and a 5-min, 1-day, or 1-month delay, infants' visual preference was assessed. During the visual preference phase, infants saw the two objects side by side while the adult conveying the affective expression was not visible. Results of the visual preference phase indicate that infants in the look condition showed a significant preference for the object previously paired with the positive affect following a 5-min and a 1-day delay. No significant visual preference was found in the look condition following a 1-month delay. No significant preferences were found at any retention interval in the look-away condition. Results are discussed in terms of early learning, social referencing, and early memory.
©2010 The British Psychological Society.
Gordo, D G M; Espigolan, R; Tonussi, R L; Júnior, G A F; Bresolin, T; Magalhães, A F Braga; Feitosa, F L; Baldi, F; Carvalheiro, R; Tonhati, H; de Oliveira, H N; Chardulo, L A L; de Albuquerque, L G
2016-05-01
The objective of this study was to determine whether visual scores used as selection criteria in Nellore breeding programs are effective indicators of carcass traits measured after slaughter. Additionally, this study evaluated the effect of different structures of the relationship matrix on the estimation of genetic parameters and on the prediction accuracy of breeding values. There were 13,524 animals for visual scores of conformation (CS), finishing precocity (FP), and muscling (MS) and 1,753, 1,747, and 1,564 for LM area (LMA), backfat thickness (BF), and HCW, respectively. Of these, 1,566 animals were genotyped using a high-density panel containing 777,962 SNP. Six analyses were performed using multitrait animal models, each including the 3 visual scores and 1 carcass trait. For the visual scores, the model included direct additive genetic and residual random effects and the fixed effects of contemporary group (defined by year of birth, management group at yearling, and farm) and the linear effect of age of animal at yearling. The same model was used for the carcass traits, replacing the effect of age of animal at yearling with the linear effect of age of animal at slaughter. The variance and covariance components were estimated by the REML method in analyses using either the numerator relationship matrix alone or a matrix combining the genomic and numerator relationship matrices. The heritability estimates for the visual scores obtained with the 2 methods were similar and of moderate magnitude (0.23-0.34), indicating that these traits should respond to direct selection. The heritabilities for LMA, BF, and HCW were 0.13, 0.07, and 0.17, respectively, using the numerator relationship matrix alone and 0.29, 0.16, and 0.23, respectively, using the combined matrix. The genetic correlations between the visual scores and carcass traits were positive, and higher correlations were generally obtained when the combined matrix was used.
Considering the difficulties and cost of measuring carcass traits postmortem, visual scores of CS, FP, and MS could be used as selection criteria to improve HCW, BF, and LMA. The use of genomic information permitted the detection of greater additive genetic variability for LMA and BF. For HCW, the high magnitude of the genetic correlations with visual scores was probably sufficient to recover genetic variability. The methods provided similar breeding value accuracies, especially for the visual scores.
Can understanding the neurobiology of body dysmorphic disorder (BDD) inform treatment?
Rossell, Susan L; Harrison, Ben J; Castle, David
2015-08-01
We aim to provide a clinically focused review of the neurobiological literature in body dysmorphic disorder (BDD), with a focus on structural and functional neuroimaging. There has been a recent influx of studies examining the underlying neurobiology of BDD using structural and functional neuroimaging methods. Despite obvious symptom similarities with obsessive-compulsive disorder (OCD), no study to date has directly compared the two groups using neuroimaging techniques. Studies have established that there are limbic and visual cortex abnormalities in BDD, in contrast to the fronto-striatal differences seen in OCD. Such data suggest that affect or visual training may be useful in BDD. © The Royal Australian and New Zealand College of Psychiatrists 2015.
de la Vega de León, Antonio; Bajorath, Jürgen
2016-09-01
The concept of chemical space is of fundamental relevance for medicinal chemistry and chemical informatics. Multidimensional chemical space representations are coordinate-based. Chemical space networks (CSNs) have been introduced as a coordinate-free representation. A computational approach is presented for the transformation of multidimensional chemical space into CSNs. The design of transformation CSNs (TRANS-CSNs) is based upon a similarity function that directly reflects distance relationships in original multidimensional space. TRANS-CSNs provide an immediate visualization of coordinate-based chemical space and do not require the use of dimensionality reduction techniques. At low network density, TRANS-CSNs are readily interpretable and make it possible to evaluate structure-activity relationship information originating from multidimensional chemical space.
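The construction of a CSN from a distance-derived similarity function can be sketched as follows. This is a minimal illustration: the compounds, their 2-D coordinates, and the distance-to-similarity mapping are assumptions for the example, not the authors' TRANS-CSN design.

```python
import itertools

# Hypothetical compounds placed in a 2-D "chemical space"; the similarity
# function converts their Euclidean distance into a value in (0, 1].
compounds = {"c1": (0.0, 0.0), "c2": (0.1, 0.1),
             "c3": (2.0, 2.0), "c4": (2.1, 2.0)}

def similarity(a, b):
    d = sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return 1.0 / (1.0 + d)  # monotone in the original-space distance

def build_csn(points, threshold):
    """Coordinate-free network: an edge joins two compounds whenever
    their distance-derived similarity meets `threshold`."""
    edges = set()
    for (na, pa), (nb, pb) in itertools.combinations(points.items(), 2):
        if similarity(pa, pb) >= threshold:
            edges.add(frozenset((na, nb)))
    return edges

csn = build_csn(compounds, threshold=0.8)
```

Raising the threshold lowers the network density, which is the regime in which the abstract reports TRANS-CSNs become readily interpretable.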
Self-motion Perception Training: Thresholds Improve in the Light but not in the Dark
Hartmann, Matthias; Furrer, Sarah; Herzog, Michael H.; Merfeld, Daniel M.; Mast, Fred W.
2014-01-01
We investigated perceptual learning in self-motion perception. Blindfolded participants were displaced leftward or rightward by means of a motion platform and asked to indicate the direction of motion. A total of eleven participants underwent 3360 practice trials, distributed over twelve (Experiment 1) or six days (Experiment 2). We found no improvement in motion discrimination in either experiment. These results are surprising, since perceptual learning has been demonstrated for visual, auditory, and somatosensory discrimination. Improvements in the same task were found when visual input was provided (Experiment 3). The multisensory nature of vestibular information is discussed as a possible explanation for the absence of perceptual learning in darkness. PMID:23392475
Applications of multiphoton microscopy in the field of colorectal cancer
NASA Astrophysics Data System (ADS)
Wang, Shu; Li, Lianhuang; Zhu, Xiaoqin; Zheng, Liqin; Zhuo, Shuangmu; Chen, Jianxin
2018-06-01
Multiphoton microscopy (MPM) is a powerful tool for visualizing cellular and subcellular details within living tissue, owing to its unique advantages: it is label-free, has intrinsic optical sectioning ability, uses near-infrared excitation for deep penetration into tissue, reduces photobleaching and phototoxicity in out-of-focus regions, and can provide quantitative information. In this review, we focus on applications of MPM in the field of colorectal cancer, including monitoring cancer progression, detecting tumor metastasis and the tumor microenvironment, evaluating the response to cancer therapy, and visualizing and ablating pre-invasive cancer cells. We also discuss a major challenge and a future research direction: the development of a colorectal multiphoton endoscope.
Visualization of risk of radiogenic second cancer in the organs and tissues of the human body.
Zhang, Rui; Mirkovic, Dragan; Newhauser, Wayne D
2015-04-28
Radiogenic second cancer is a common late effect in long-term cancer survivors. Currently there are few methods or tools available to visually evaluate the spatial distribution of risks of radiogenic late effects in the human body. We developed a risk-visualization method and demonstrated it for radiogenic second cancers in the tissues and organs of one patient treated with photon volumetric modulated arc therapy and one patient treated with proton craniospinal irradiation. Treatment plans were generated, and dose information obtained, using radiotherapy treatment planning systems (TPS). Linear non-threshold risk coefficients for organs at risk of second-cancer incidence were taken from the Biological Effects of Ionizing Radiation (BEIR) VII report. Alternative risk models, including the linear-exponential model and the linear-plateau model, were also examined. The predicted absolute lifetime risk distributions were visualized together with images of the patient anatomy. The risk distributions of second cancer for the two patients were visually presented. The risk distributions varied with tissue, dose, and the dose-risk model used, and the risk distribution could be similar to or very different from the dose distribution. Our method provides a convenient way to directly visualize and evaluate the risks of radiogenic second cancer in the organs and tissues of the human body. In the future, visual assessment of risk distribution could be an influential determinant for treatment plan scoring.
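The risk models compared in such studies have simple closed forms, which can be sketched as follows. The coefficient values below are hypothetical placeholders for illustration, not BEIR VII values, and the functional forms are the generic textbook versions of the models the abstract names.

```python
import math

# beta is a linear risk coefficient (per unit dose); alpha and delta
# shape the high-dose behaviour. All values here are hypothetical.
def linear(dose, beta=0.05):
    """Linear non-threshold: risk grows proportionally with dose."""
    return beta * dose

def linear_exponential(dose, beta=0.05, alpha=0.1):
    """Linear at low dose, exponentially suppressed at high dose
    (e.g., cell sterilization)."""
    return beta * dose * math.exp(-alpha * dose)

def linear_plateau(dose, beta=0.05, delta=0.1):
    """Linear at low dose, saturating to a plateau at high dose."""
    return beta / delta * (1.0 - math.exp(-delta * dose))

# A per-voxel risk "map" is then just the model applied to the dose map.
doses = [0.0, 1.0, 10.0, 40.0]
risk_map = [linear_exponential(d) for d in doses]
```

Because the three models diverge strongly at high dose, the choice of model visibly reshapes the rendered risk distribution, which is why the visualization varies with the dose-risk model used.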
Heiberg, Thomas; Hagen, Espen; Halnes, Geir; Einevoll, Gaute T
2016-05-01
Despite its prominent placement between the retina and primary visual cortex in the early visual pathway, the role of the dorsal lateral geniculate nucleus (dLGN) in molding and regulating the visual signals entering the brain is still poorly understood. A striking feature of the dLGN circuit is that relay cells (RCs) and interneurons (INs) form so-called triadic synapses, where an IN dendritic terminal can be simultaneously postsynaptic to a retinal ganglion cell (GC) input and presynaptic to an RC dendrite, allowing for so-called triadic inhibition. Taking advantage of a recently developed biophysically detailed multicompartmental model for an IN, we here investigate putative effects of these different inhibitory actions of INs, i.e., triadic inhibition and standard axonal inhibition, on the response properties of RCs. We compute and investigate so-called area-response curves, that is, trial-averaged visual spike responses vs. spot size, for circular flashing spots in a network of RCs and INs. The model parameters are grossly tuned to give results in qualitative accordance with previous in vivo data of responses to such stimuli for cat GCs and RCs. We particularly investigate how the model ingredients affect salient response properties such as the receptive-field center size of RCs and INs, maximal responses and center-surround antagonisms. For example, while triadic inhibition not involving firing of IN action potentials was found to provide only a non-linear gain control of the conversion of input spikes to output spikes by RCs, axonal inhibition was in contrast found to substantially affect the receptive-field center size: the larger the inhibition, the more the RC center size shrinks compared to the GC providing the feedforward excitation. 
Thus, a possible role of the different inhibitory actions from INs to RCs in the dLGN circuit is to provide separate mechanisms for overall gain control (direct triadic inhibition) and regulation of spatial resolution (axonal inhibition) of visual signals sent to cortex.
Bailey, James A; Casanova, Ruby S; Bufkin, Kim
2006-07-01
In using infrared or infrared-enhanced photography to examine gunshot residue (GSR) on dark-colored clothing, the GSR particles are microscopically examined directly on the fabric, followed by the modified Griess test (MGT) for nitrites. In the MGT, the GSR is transferred to treated photographic paper for visualization; a positive reaction yields an orange color on the specially treated paper. The examiner also evaluates the size of the powder pattern based on the distribution or density of nitrite reaction sites. A false-positive reaction can occur with the MGT when contaminants or dyes also produce an orange cloud reaction. The pattern produced by burned and partially unburned powder can be enhanced by treating the fabric with a solution of sodium hypochlorite. To evaluate the results of sodium hypochlorite treatment for GSR visualization, the MGT was used as a reference pattern. GSR patterns on dark or multicolored clothing were enhanced by treating the fabric with a 5.25% solution of sodium hypochlorite. Bleaching the dyes in the fabric enhances visualization of the GSR pattern by eliminating the background color. Some dyes are not affected by sodium hypochlorite; therefore, bleaching may not enhance the GSR patterns in some fabrics. Sodium hypochlorite provides the investigator with a method for enhancing GSR patterns directly on the fabric. However, this method is not intended to substitute for the MGT or the sodium rhodizonate test.
Lateral Spread of Orientation Selectivity in V1 is Controlled by Intracortical Cooperativity
Chavane, Frédéric; Sharon, Dahlia; Jancke, Dirk; Marre, Olivier; Frégnac, Yves; Grinvald, Amiram
2011-01-01
Neurons in the primary visual cortex receive subliminal information originating from the periphery of their receptive fields (RF) through a variety of cortical connections. In the cat primary visual cortex, long-range horizontal axons have been reported to preferentially bind to distant columns of similar orientation preferences, whereas feedback connections from higher visual areas provide a more diverse functional input. To understand the role of these lateral interactions, it is crucial to characterize their effective functional connectivity and tuning properties. However, the overall functional impact of cortical lateral connections, whatever their anatomical origin, is unknown since it has never been directly characterized. Using direct measurements of postsynaptic integration in cat areas 17 and 18, we performed multi-scale assessments of the functional impact of visually driven lateral networks. Voltage-sensitive dye imaging showed that local oriented stimuli evoke an orientation-selective activity that remains confined to the cortical feedforward imprint of the stimulus. Beyond a distance of one hypercolumn, the lateral spread of cortical activity gradually lost its orientation preference approximated as an exponential with a space constant of about 1 mm. Intracellular recordings showed that this loss of orientation selectivity arises from the diversity of converging synaptic input patterns originating from outside the classical RF. In contrast, when the stimulus size was increased, we observed orientation-selective spread of activation beyond the feedforward imprint. We conclude that stimulus-induced cooperativity enhances the long-range orientation-selective spread. PMID:21629708
Sexual Orientation-Related Differences in Virtual Spatial Navigation and Spatial Search Strategies.
Rahman, Qazi; Sharp, Jonathan; McVeigh, Meadhbh; Ho, Man-Ling
2017-07-01
Spatial abilities are generally hypothesized to differ between men and women, and people with different sexual orientations. According to the cross-sex shift hypothesis, gay men are hypothesized to perform in the direction of heterosexual women and lesbian women in the direction of heterosexual men on cognitive tests. This study investigated sexual orientation differences in spatial navigation and strategy during a virtual Morris water maze task (VMWM). Forty-four heterosexual men, 43 heterosexual women, 39 gay men, and 34 lesbian/bisexual women (aged 18-54 years) navigated a desktop VMWM and completed measures of intelligence, handedness, and childhood gender nonconformity (CGN). We quantified spatial learning (hidden platform trials), probe trial performance, and cued navigation (visible platform trials). Spatial strategies during hidden and probe trials were classified into visual scanning, landmark use, thigmotaxis/circling, and enfilading. In general, heterosexual men scored better than women and gay men on some spatial learning and probe trial measures and used more visual scan strategies. However, some differences disappeared after controlling for age and estimated IQ (e.g., in visual scanning heterosexual men differed from women but not gay men). Heterosexual women did not differ from lesbian/bisexual women. For both sexes, visual scanning predicted probe trial performance. More feminine CGN scores were associated with lower performance among men and greater performance among women on specific spatial learning or probe trial measures. These results provide mixed evidence for the cross-sex shift hypothesis of sexual orientation-related differences in spatial cognition.
FGF /FGFR Signal Induces Trachea Extension in the Drosophila Visual System
Chu, Wei-Chen; Lee, Yuan-Ming; Henry Sun, Yi
2013-01-01
The Drosophila compound eye is a large sensory organ that places a high demand on oxygen supplied by the tracheal system. Although the development and function of the Drosophila visual system have been extensively studied, the development and contribution of its tracheal system have not been systematically examined. To address this issue, we studied the tracheal patterns and developmental process in the Drosophila visual system. We found that the retinal tracheae are derived from air sacs in the head, and that the ingrowth of retinal tracheae begins at the mid-pupal stage. Tracheal development has three stages. First, the air sacs form near the optic lobe at 42-47% of pupal development (pd). Second, at 47-52% pd, the air sacs extend branches along the base of the retina in a posterior-to-anterior direction and form the tracheal network under the fenestrated membrane (TNUFM). Third, after 60% pd, the TNUFM extends fine branches into the retina in a proximal-to-distal direction. Furthermore, we found that tracheal extension in both the retina and the TNUFM depends on FGF(Bnl)/FGFR(Btl) signaling. Our results also provide strong evidence that the photoreceptors are the source of the Bnl ligand that guides tracheal ingrowth. Our work is the first systematic study of tracheal development in the visual system, and also the first to demonstrate interactions between two well-studied systems: the eye and the trachea. PMID:23991208
The locus of impairment in English developmental letter position dyslexia
Kezilas, Yvette; Kohnen, Saskia; McKague, Meredith; Castles, Anne
2014-01-01
Many children with reading difficulties display phonological deficits and struggle to acquire non-lexical reading skills. However, not all children with reading difficulties have these problems, such as children with selective letter position dyslexia (LPD), who make excessive migration errors (such as reading slime as “smile”). Previous research has explored three possible loci for the deficit – the phonological output buffer, the orthographic input lexicon, and the orthographic-visual analysis stage of reading. While there is compelling evidence against a phonological output buffer and orthographic input lexicon deficit account of English LPD, the evidence in support of an orthographic-visual analysis deficit is currently limited. In this multiple single-case study with three English-speaking children with developmental LPD, we aimed to both replicate and extend previous findings regarding the locus of impairment in English LPD. First, we ruled out a phonological output buffer and an orthographic input lexicon deficit by administering tasks that directly assess phonological processing and lexical guessing. We then went on to directly assess whether or not children with LPD have an orthographic-visual analysis deficit by modifying two tasks that have previously been used to localize processing at this level: a same-different decision task and a non-word reading task. The results from these tasks indicate that LPD is most likely caused by a deficit specific to the coding of letter positions at the orthographic-visual analysis stage of reading. These findings provide further evidence for the heterogeneity of dyslexia and its underlying causes. PMID:24917802
Visualization of flow by vector analysis of multidirectional cine MR velocity mapping.
Mohiaddin, R H; Yang, G Z; Kilner, P J
1994-01-01
We describe a noninvasive method for visualization of flow and demonstrate its application in a flow phantom and in the great vessels of healthy volunteers and patients with aortic and pulmonary arterial disease. The technique uses multidirectional MR velocity mapping acquired in selected planes. Maps of orthogonal velocity components were then processed into a graphic form immediately recognizable as flow. Cine MR velocity maps of orthogonal velocity components in selected planes were acquired in a flow phantom, 10 healthy volunteers, and 13 patients with dilated great vessels. Velocities were presented by multiple computer-generated streaks whose orientation, length, and movement corresponded to velocity vectors in the chosen plane. The velocity vector maps allowed visualization of complex patterns of primary and secondary flow in the thoracic aorta and pulmonary arteries. The technique revealed coherent, helical forward blood movements in the normal thoracic aorta during midsystole and a reverse flow during early diastole. Abnormal flow patterns with secondary vortices were seen in patients with dilated arteries. The potential of MR velocity vector mapping for in vitro and in vivo visualization of flow patterns is demonstrated. Although this study was limited to two-directional flow in a single anatomical plane, the method provides information that might advance our understanding of the human vascular system in health and disease. Further developments to reduce the acquisition time and to improve the handling and presentation of three-directional velocity data are required to enhance the capability of this method.
Visual scan-path analysis with feature space transient fixation moments
NASA Astrophysics Data System (ADS)
Dempere-Marco, Laura; Hu, Xiao-Peng; Yang, Guang-Zhong
2003-05-01
The study of eye movements provides useful insight into the cognitive processes underlying visual search tasks. The analysis of eye-movement dynamics has often been approached from a purely spatial perspective. In many cases, however, it may not be possible to define meaningful or consistent dynamics without considering the features underlying the scan paths. In this paper, the feature space is defined through the concepts of visual similarity and non-linear low-dimensional embedding, which provide a mapping from the image space into a low-dimensional feature manifold that preserves the intrinsic similarity of image patterns. This enables the definition of perceptually meaningful features without the use of domain-specific knowledge. On this basis, the paper introduces a new concept called feature-space Transient Fixation Moments (TFM) and uses it to tackle the problem of feature-space representation of visual search. We demonstrate the practical value of this concept for characterizing the dynamics of eye movements in goal-directed visual search tasks, and illustrate how the model can elucidate the fundamental steps involved in skilled search tasks through the evolution of transient fixation moments.
Differential effects of delay upon visually and haptically guided grasping and perceptual judgments.
Pettypiece, Charles E; Culham, Jody C; Goodale, Melvyn A
2009-05-01
Experiments with visual illusions have revealed a dissociation between the systems that mediate object perception and those responsible for object-directed action. More recently, an experiment on a haptic version of the visual size-contrast illusion has provided evidence for the notion that the haptic modality shows a similar dissociation when grasping and estimating the size of objects in real-time. Here we present evidence suggesting that the similarities between the two modalities begin to break down once a delay is introduced between when people feel the target object and when they perform the grasp or estimation. In particular, when grasping after a delay in a haptic paradigm, people scale their grasps differently when the target is presented with a flanking object of a different size (although the difference does not reflect a size-contrast effect). When estimating after a delay, however, it appears that people ignore the size of the flanking objects entirely. This does not fit well with the results commonly found in visual experiments. Thus, introducing a delay reveals important differences in the way in which haptic and visual memories are stored and accessed.
Kermani, Mojtaba; Verghese, Ashika; Vidyasagar, Trichur R
2018-02-01
A major controversy regarding dyslexia is whether any of the many visual and phonological deficits found to be correlated with reading difficulty cause the impairment or result from the reduced amount of reading done by dyslexics. We studied this question by comparing a visual capacity in the left and right visual hemifields in people habitually reading scripts written right-to-left or left-to-right. Selective visual attention is necessary for efficient visual search and also for the sequential recognition of letters in words. Because such attentional allocation during reading depends on the direction in which one is reading, asymmetries in search efficiency may reflect biases arising from the habitual direction of reading. We studied this by examining search performance in three cohorts: (a) left-to-right readers who read English fluently; (b) right-to-left readers fluent in reading Farsi but not any left-to-right script; and (c) bilingual readers fluent in English and in Farsi, Arabic, or Hebrew. Left-to-right readers showed better search performance in the right hemifield and right-to-left readers in the left hemifield, but bilingual readers showed no such asymmetries. Thus, reading experience biases search performance in the direction of reading, which has implications for the cause and effect relationships between reading and cognitive functions. Copyright © 2017 John Wiley & Sons, Ltd.
Visualization and mechanisms of splashing erosion of electrodes in a DC air arc
NASA Astrophysics Data System (ADS)
Wu, Yi; Cui, Yufei; Rong, Mingzhe; Murphy, Anthony B.; Yang, Fei; Sun, Hao; Niu, Chunping; Fan, Shaodi
2017-11-01
The splashing erosion of electrodes in a DC atmospheric-pressure air arc has been investigated by visualization of the electrode surface and the sputtered droplets, and tracking of the droplet trajectories, using image processing techniques. A particle tracking velocimetry algorithm has been introduced to measure the sputtering velocity distribution. Erosion of both tungsten-copper and tungsten-ceria electrodes is studied; in both cases electrode erosion is found to be dominated by droplet splashing rather than metal evaporation. Erosion is directly influenced by both melting and the formation of plasma jets, and can be reduced by tuning the plasma jet and the electrode material. The results provide an understanding of the mechanisms that lead to the long lifetime of tungsten-copper electrodes, and may inform the design of electrode systems subjected to electric arcs so as to minimize erosion.
ePMV embeds molecular modeling into professional animation software environments.
Johnson, Graham T; Autin, Ludovic; Goodsell, David S; Sanner, Michel F; Olson, Arthur J
2011-03-09
Increasingly complex research has made it more difficult to prepare data for publication, education, and outreach. Many scientists must also wade through black-box code to interface computational algorithms from diverse sources to supplement their bench work. To reduce these barriers we have developed an open-source plug-in, embedded Python Molecular Viewer (ePMV), that runs molecular modeling software directly inside of professional 3D animation applications (hosts) to provide simultaneous access to the capabilities of these newly connected systems. Uniting host and scientific algorithms into a single interface allows users from varied backgrounds to assemble professional quality visuals and to perform computational experiments with relative ease. By enabling easy exchange of algorithms, ePMV can facilitate interdisciplinary research, smooth communication between broadly diverse specialties, and provide a common platform to frame and visualize the increasingly detailed intersection(s) of cellular and molecular biology. Copyright © 2011 Elsevier Ltd. All rights reserved.
Avoiding Focus Shifts in Surgical Telementoring Using an Augmented Reality Transparent Display.
Andersen, Daniel; Popescu, Voicu; Cabrera, Maria Eugenia; Shanghavi, Aditya; Gomez, Gerardo; Marley, Sherri; Mullis, Brian; Wachs, Juan
2016-01-01
Conventional surgical telementoring systems require the trainee to shift focus away from the operating field to a nearby monitor to receive mentor guidance. This paper presents the next generation of telementoring systems. Our system, STAR (System for Telementoring with Augmented Reality) avoids focus shifts by placing mentor annotations directly into the trainee's field of view using augmented reality transparent display technology. This prototype was tested with pre-medical and medical students. Experiments were conducted where participants were asked to identify precise operating field locations communicated to them using either STAR or a conventional telementoring system. STAR was shown to improve accuracy and to reduce focus shifts. The initial STAR prototype only provides an approximate transparent display effect, without visual continuity between the display and the surrounding area. The current version of our transparent display provides visual continuity by showing the geometry and color of the operating field from the trainee's viewpoint.
Milburn, Evelyn; Warren, Tessa; Dickey, Michael Walsh
There has been considerable debate over whether or not linguistic knowledge and world knowledge are separable and used differently during processing (Hagoort, Hald, Bastiaansen, & Petersson, 2004; Matsuki et al., 2011; Paczynski & Kuperberg, 2012; Warren & McConnell, 2007; Warren, McConnell, & Rayner, 2008). Previous investigations into this question have provided mixed evidence as to whether violations of selectional restrictions are detected earlier than violations of world knowledge. We report a visual-world eye-tracking study comparing the timing of facilitation contributed by selectional restrictions versus world knowledge. College-aged adults (n=36) viewed photographs of natural scenes while listening to sentences. Participants anticipated upcoming direct objects similarly regardless of whether facilitation was provided by world knowledge alone or by a combination of selectional restrictions and world knowledge. These results suggest that selectional restrictions are not available earlier in comprehension than world knowledge.
Direction discriminating hearing aid system
NASA Technical Reports Server (NTRS)
Jhabvala, M.; Lin, H. C.; Ward, G.
1991-01-01
A visual display was developed for people with substantial hearing loss in either one or both ears. The system consists of three discrete units: an eyeglass assembly for the visual display of the origin or direction of sounds; a stationary general purpose noise alarm; and a noise seeker wand.
GOGrapher: A Python library for GO graph representation and analysis.
Muller, Brian; Richards, Adam J; Jin, Bo; Lu, Xinghua
2009-07-07
The Gene Ontology is the most commonly used controlled vocabulary for annotating proteins. The concepts in the ontology are organized as a directed acyclic graph, in which a node corresponds to a biological concept and a directed edge denotes the parent-child semantic relationship between a pair of terms. A large number of protein annotations further create links between proteins and their functional annotations, reflecting the contemporary knowledge about proteins and their functional relationships. This leads to a complex graph consisting of interleaved biological concepts and their associated proteins. What is needed is a simple, open source library that provides tools to not only create and view the Gene Ontology graph, but to analyze and manipulate it as well. Here we describe the development and use of GOGrapher, a Python library that can be used for the creation, analysis, manipulation, and visualization of Gene Ontology related graphs. An object-oriented approach was adopted to organize the hierarchy of the graphs types and associated classes. An Application Programming Interface is provided through which different types of graphs can be pragmatically created, manipulated, and visualized. GOGrapher has been successfully utilized in multiple research projects, e.g., a graph-based multi-label text classifier for protein annotation. The GOGrapher project provides a reusable programming library designed for the manipulation and analysis of Gene Ontology graphs. The library is freely available for the scientific community to use and improve.
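The directed-acyclic-graph structure described in the abstract (nodes as ontology terms, directed edges as child-to-parent semantic relations) can be sketched in a few lines of plain Python. The class, method names, and GO identifiers below are illustrative only and are not GOGrapher's actual API:

```python
# Minimal sketch of a GO-style directed acyclic graph with ancestor
# traversal. Hypothetical API, not GOGrapher's; term IDs are examples.

class GoGraph:
    def __init__(self):
        self.parents = {}  # child term -> set of parent terms

    def add_edge(self, child, parent):
        """Record a directed child -> parent (is-a) edge."""
        self.parents.setdefault(child, set()).add(parent)
        self.parents.setdefault(parent, set())

    def ancestors(self, term):
        """All terms reachable by following child -> parent edges."""
        seen, stack = set(), [term]
        while stack:
            for p in self.parents.get(stack.pop(), ()):
                if p not in seen:
                    seen.add(p)
                    stack.append(p)
        return seen

g = GoGraph()
g.add_edge("GO:0006915", "GO:0012501")  # apoptosis -> programmed cell death
g.add_edge("GO:0012501", "GO:0008219")  # programmed cell death -> cell death
print(sorted(g.ancestors("GO:0006915")))  # ['GO:0008219', 'GO:0012501']
```

GOGrapher, as the abstract describes it, layers protein annotations, analysis routines, and visualization on top of this kind of graph traversal.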
Takeda, Kenta; Mani, Hiroki; Hasegawa, Naoya; Sato, Yuki; Tanaka, Shintaro; Maejima, Hiroshi; Asaka, Tadayoshi
2017-07-19
The benefit of visual feedback of the center of pressure (COP) on quiet standing is still debatable. This study aimed to investigate the adaptation effects of visual feedback training using both the COP and center of gravity (COG) during quiet standing. Thirty-four healthy young adults were randomly divided into three groups (COP+COG, COP, and control groups). A force plate was used to calculate the coordinates of the COP in the anteroposterior (COP_AP) and mediolateral (COP_ML) directions. A motion analysis system was used to calculate the coordinates of the center of mass (COM) in both directions (COM_AP and COM_ML). The coordinates of the COG in the AP direction (COG_AP) were obtained from the force plate signals. Augmented visual feedback was presented on a screen in the form of fluctuation circles in the vertical direction that moved upward as the COP_AP and/or COG_AP moved forward, and vice versa. The COP+COG group received real-time COP_AP and COG_AP feedback simultaneously, whereas the COP group received real-time COP_AP feedback only. The control group received no visual feedback. In the training session, the COP+COG group was required to maintain an even distance between the COP_AP and COG_AP and to reduce the COG_AP fluctuation, whereas the COP group was required to reduce the COP_AP fluctuation while standing on a foam pad. In the test sessions, participants were instructed to keep their standing posture as quiet as possible on the foam pad before (pre-session) and after (post-session) the training sessions. In the post-session, the velocity and root mean square of COM_AP in the COP+COG group were lower than those in the control group. In addition, the absolute value of the sum of the COP-COM distances in the COP+COG group was lower than that in the COP group. Furthermore, positive correlations were found between the COM_AP velocity and the COP-COM parameters. The results suggest that the novel visual feedback training incorporating the COP_AP-COG_AP interaction reduces postural sway better than training using the COP_AP alone during quiet standing. That is, even COP_AP fluctuation around the COG_AP would be effective in reducing the COM_AP velocity.
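The COP coordinates referred to above are standard force-plate quantities. As a minimal, illustrative sketch (the signal values are made up, and the sign convention varies between plates), the anteroposterior COP can be computed from the vertical force and the moment about the mediolateral axis:

```python
# Illustrative sketch: anteroposterior center of pressure (COP_AP) from
# force-plate signals via the common relation COP_AP = -My / Fz, plus the
# RMS fluctuation often used as a simple sway measure. Signal values and
# sign convention are hypothetical, not taken from the study.
import math

My = [-0.50, -0.60, -0.40, -0.55]    # moment about the ML (y) axis, N*m
Fz = [700.0, 702.0, 698.0, 701.0]    # vertical ground reaction force, N

cop_ap = [-m / f for m, f in zip(My, Fz)]   # AP coordinate per sample, m
mean_ap = sum(cop_ap) / len(cop_ap)
rms_ap = math.sqrt(sum((x - mean_ap) ** 2 for x in cop_ap) / len(cop_ap))
```

A lower `rms_ap` in a post-training test session would correspond to the reduced sway variability the study reports.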
ERIC Educational Resources Information Center
Pippert, Timothy D.; Essenburg, Laura J.; Matchett, Edward J.
2013-01-01
Colleges and universities have expanded their use of the internet and social media in marketing strategies, but the direct mailing of admissions brochures continues to be at the heart of recruitment efforts. Because admissions brochures often serve as a potential student's introduction to the campus, they are carefully crafted to provide a…
Elevator Illusion and Gaze Direction in Hypergravity
NASA Technical Reports Server (NTRS)
Cohen, Malcolm M.; Hargens, Alan (Technical Monitor)
1995-01-01
A luminous visual target in a dark hypergravity (Gz greater than 1) environment appears to be elevated above its true physical position. This "elevator illusion" has been attributed to changes in oculomotor control caused by increased stimulation of the otolith organs. Data relating the magnitude of the illusion to the magnitude of the changes in oculomotor control have been lacking. The present study provides such data.
Attention stabilizes the shared gain of V4 populations
Rabinowitz, Neil C; Goris, Robbe L; Cohen, Marlene; Simoncelli, Eero P
2015-01-01
Responses of sensory neurons represent stimulus information, but are also influenced by internal state. For example, when monkeys direct their attention to a visual stimulus, the response gain of specific subsets of neurons in visual cortex changes. Here, we develop a functional model of population activity to investigate the structure of this effect. We fit the model to the spiking activity of bilateral neural populations in area V4, recorded while the animal performed a stimulus discrimination task under spatial attention. The model reveals four separate time-varying shared modulatory signals, the dominant two of which each target task-relevant neurons in one hemisphere. In attention-directed conditions, the associated shared modulatory signal decreases in variance. This finding provides an interpretable and parsimonious explanation for previous observations that attention reduces variability and noise correlations of sensory neurons. Finally, the recovered modulatory signals reflect previous reward, and are predictive of subsequent choice behavior. DOI: http://dx.doi.org/10.7554/eLife.08998.001 PMID:26523390
Deficit in visual temporal integration in autism spectrum disorders.
Nakano, Tamami; Ota, Haruhisa; Kato, Nobumasa; Kitazawa, Shigeru
2010-04-07
Individuals with autism spectrum disorders (ASD) are superior in processing local features. Frith and Happe conceptualize this cognitive bias as 'weak central coherence', implying that a local enhancement derives from a weakness in integrating local elements into a coherent whole. The suggested deficit has been challenged, however, because individuals with ASD were not found to be inferior to normal controls in holistic perception. In these opposing studies, however, subjects were encouraged to ignore local features and attend to the whole. Therefore, no one has directly tested whether individuals with ASD are able to integrate local elements over time into a whole image. Here, we report a weakness of individuals with ASD in naming familiar objects moved behind a narrow slit, which was worsened by the absence of local salient features. The results indicate that individuals with ASD have a clear deficit in integrating local visual information over time into a global whole, providing direct evidence for the weak central coherence hypothesis.
Lin, Chih-Yung; Chuang, Chao-Chun; Hua, Tzu-En; Chen, Chun-Chao; Dickson, Barry J; Greenspan, Ralph J; Chiang, Ann-Shyn
2013-05-30
How the brain perceives sensory information and generates meaningful behavior depends critically on its underlying circuitry. The protocerebral bridge (PB) is a major part of the insect central complex (CX), a premotor center that may be analogous to the human basal ganglia. Here, by deconstructing hundreds of PB single neurons and reconstructing them into a common three-dimensional framework, we have constructed a comprehensive map of PB circuits with labeled polarity and predicted directions of information flow. Our analysis reveals a highly ordered information processing system that involves directed information flow among CX subunits through 194 distinct PB neuron types. Circuitry properties such as mirroring, convergence, divergence, tiling, reverberation, and parallel signal propagation were observed; their functional and evolutional significance is discussed. This layout of PB neuronal circuitry may provide guidelines for further investigations on transformation of sensory (e.g., visual) input into locomotor commands in fly brains. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.
NASA's Lunar and Planetary Mapping and Modeling Program
NASA Astrophysics Data System (ADS)
Law, E.; Day, B. H.; Kim, R. M.; Bui, B.; Malhotra, S.; Chang, G.; Sadaqathullah, S.; Arevalo, E.; Vu, Q. A.
2016-12-01
NASA's Lunar and Planetary Mapping and Modeling Program produces a suite of online visualization and analysis tools. Originally designed for mission planning and science, these portals offer great benefits for education and public outreach (EPO), providing access to data from a wide range of instruments aboard a variety of past and current missions. As a component of NASA's Science EPO Infrastructure, they are available as resources for NASA STEM EPO programs, and to the greater EPO community. As new missions are planned to a variety of planetary bodies, these tools are facilitating the public's understanding of the missions and engaging the public in the process of identifying and selecting where these missions will land. There are currently three web portals in the program: the Lunar Mapping and Modeling Portal or LMMP (http://lmmp.nasa.gov), Vesta Trek (http://vestatrek.jpl.nasa.gov), and Mars Trek (http://marstrek.jpl.nasa.gov). Portals for additional planetary bodies are planned. As web-based toolsets, the portals do not require users to purchase or install any software beyond current web browsers. The portals provide analysis tools for measurement and study of planetary terrain. They allow data to be layered and adjusted to optimize visualization. Visualizations are easily stored and shared. The portals provide 3D visualization and give users the ability to mark terrain for generation of STL files that can be directed to 3D printers. Such 3D prints are valuable tools in museums, public exhibits, and classrooms - especially for the visually impaired. Along with the web portals, the program supports additional clients, web services, and APIs that facilitate dissemination of planetary data to a range of external applications and venues. NASA challenges and hackathons are also providing members of the software development community opportunities to participate in tool development and leverage data from the portals.
Dementia alters standing postural adaptation during a visual search task in older adult men.
Jor'dan, Azizah J; McCarten, J Riley; Rottunda, Susan; Stoffregen, Thomas A; Manor, Brad; Wade, Michael G
2015-04-23
This study investigated the effects of dementia on standing postural adaptation during performance of a visual search task. We recruited 16 older adults with dementia and 15 without dementia. Postural sway was assessed by recording medial-lateral (ML) and anterior-posterior (AP) center-of-pressure when standing with and without a visual search task; i.e., counting target letter frequency within a block of displayed randomized letters. ML sway variability was significantly higher in those with dementia during visual search as compared to those without dementia and compared to both groups during the control condition. AP sway variability was significantly greater in those with dementia as compared to those without dementia, irrespective of task condition. In the ML direction, the absolute and percent changes in sway variability between the control condition and visual search (i.e., postural adaptation) were greater in those with dementia as compared to those without. In contrast, postural adaptation to visual search was similar between groups in the AP direction. As compared to those without dementia, those with dementia identified fewer letters on the visual task. In the non-dementia group only, greater increases in postural adaptation in both the ML and AP directions correlated with lower performance on the visual task. The observed relationship between postural adaptation during the visual search task and visual search task performance--in the non-dementia group only--suggests a critical link between perception and action. Dementia reduces the capacity to perform a visual-based task while standing and thus appears to disrupt this perception-action synergy. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Direct Imaging of Exciton Transport in Tubular Porphyrin Aggregates by Ultrafast Microscopy.
Wan, Yan; Stradomska, Anna; Knoester, Jasper; Huang, Libai
2017-05-31
Long-range exciton transport is a key challenge in achieving efficient solar energy harvesting in both organic solar cells and photosynthetic systems. Self-assembled molecular aggregates provide the potential for attaining long-range exciton transport through strong intermolecular coupling. However, there currently lacks an experimental tool to directly characterize exciton transport in space and in time to elucidate mechanisms. Here we report a direct visualization of exciton diffusion in tubular molecular aggregates by transient absorption microscopy with ~200 fs time resolution and ~50 nm spatial precision. These direct measurements provide exciton diffusion constants of 3-6 cm^2 s^-1 for the tubular molecular aggregates, which are 3-5 times higher than a theoretical lower bound obtained by assuming incoherent hopping. These results suggest that coherent effects play a role, despite the fact that exciton states near the band bottom crucial for transport are only weakly delocalized (over <10 molecules). The methods presented here establish a direct approach for unraveling the mechanisms and main parameters underlying exciton transport in large molecular assemblies.
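Diffusion constants like those quoted above are typically extracted from the growth of the exciton distribution's mean-squared displacement over delay time. A minimal sketch under the one-dimensional relation MSD(t) = 2Dt, using synthetic numbers rather than the paper's measurements:

```python
# Illustrative sketch: estimating a 1D diffusion constant D from
# mean-squared-displacement data via MSD(t) = 2*D*t, using a
# least-squares slope through the origin. The data points are synthetic,
# chosen so that D falls in the cm^2/s range discussed in the abstract.

times = [0.5, 1.0, 1.5, 2.0]          # delay times, ps
msd = [400.0, 800.0, 1200.0, 1600.0]  # MSD, nm^2 (here exactly 2*D*t)

# Least-squares slope of a line through the origin: sum(t*m) / sum(t*t)
slope = sum(t * m for t, m in zip(times, msd)) / sum(t * t for t in times)
D_nm2_per_ps = slope / 2.0            # diffusion constant in nm^2/ps

# Unit conversion: 1 nm^2/ps = 1e-14 cm^2 / 1e-12 s = 1e-2 cm^2/s
D_cm2_per_s = D_nm2_per_ps * 1e-2
print(D_cm2_per_s)  # 4.0, within the 3-6 cm^2/s range reported above
```

In practice the MSD values come from Gaussian fits to the spatially resolved transient absorption profiles at each delay time.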
Cooper, Emily A.; Norcia, Anthony M.
2015-01-01
The nervous system has evolved in an environment with structure and predictability. One of the ubiquitous principles of sensory systems is the creation of circuits that capitalize on this predictability. Previous work has identified predictable non-uniformities in the distributions of basic visual features in natural images that are relevant to the encoding tasks of the visual system. Here, we report that the well-established statistical distributions of visual features -- such as visual contrast, spatial scale, and depth -- differ between bright and dark image components. Following this analysis, we go on to trace how these differences in natural images translate into different patterns of cortical input that arise from the separate bright (ON) and dark (OFF) pathways originating in the retina. We use models of these early visual pathways to transform natural images into statistical patterns of cortical input. The models include the receptive fields and non-linear response properties of the magnocellular (M) and parvocellular (P) pathways, with their ON and OFF pathway divisions. The results indicate that there are regularities in visual cortical input beyond those that have previously been appreciated from the direct analysis of natural images. In particular, several dark/bright asymmetries provide a potential account for recently discovered asymmetries in how the brain processes visual features, such as violations of classic energy-type models. On the basis of our analysis, we expect that the dark/bright dichotomy in natural images plays a key role in the generation of both cortical and perceptual asymmetries. PMID:26020624
Visual attention for a desktop virtual environment with ambient scent
Toet, Alexander; van Schaik, Martin G.
2013-01-01
In the current study, participants explored a desktop virtual environment (VE) representing a suburban neighborhood with signs of public disorder (neglect, vandalism, and crime), while being exposed to either room air (control group), or subliminal levels of tar (unpleasant; typically associated with burned or waste material) or freshly cut grass (pleasant; typically associated with natural or fresh material) ambient odor. They reported all signs of disorder they noticed during their walk together with their associated emotional response. Based on recent evidence that odors reflexively direct visual attention to (either semantically or affectively) congruent visual objects, we hypothesized that participants would notice more signs of disorder in the presence of ambient tar odor (since this odor may bias attention to unpleasant and negative features), and fewer signs of disorder in the presence of ambient grass odor (since this odor may bias visual attention toward the vegetation in the environment and away from the signs of disorder). Contrary to our expectations, the results provide no indication that the presence of an ambient odor affected the participants’ visual attention for signs of disorder or their emotional response. However, the paradigm used in the present study does not allow us to draw any conclusions in this respect. We conclude that a closer affective, semantic, or spatiotemporal link between the contents of a desktop VE and ambient scents may be required to effectively establish diagnostic associations that guide a user’s attention. In the absence of these direct links, ambient scent may be more diagnostic for the physical environment of the observer as a whole than for the particular items in that environment (or, in this case, items represented in the VE). PMID:24324453
Arcaro, Michael J; Thaler, Lore; Quinlan, Derek J; Monaco, Simona; Khan, Sarah; Valyear, Kenneth F; Goebel, Rainer; Dutton, Gordon N; Goodale, Melvyn A; Kastner, Sabine; Culham, Jody C
2018-05-09
Patients with injury to early visual cortex or its inputs can display the Riddoch phenomenon: preserved awareness for moving but not stationary stimuli. We provide a detailed case report of a patient with the Riddoch phenomenon, MC. MC has extensive bilateral lesions to occipitotemporal cortex that include most early visual cortex, and she is completely blind in visual field perimetry testing with static targets. Nevertheless, she shows a remarkably robust preserved ability to perceive motion, enabling her to navigate through cluttered environments and perform actions such as catching moving balls. Comparisons of MC's structural magnetic resonance imaging (MRI) data to a probabilistic atlas based on controls reveal that MC's lesions encompass the posterior, lateral, and ventral early visual cortex bilaterally (V1, V2, V3A/B, LO1/2, TO1/2, hV4 and VO1 in both hemispheres) as well as more extensive damage to right parietal (inferior parietal lobule) and left ventral occipitotemporal cortex (VO1, PHC1/2). She shows some sparing of anterior occipital cortex, which may account for her ability to see moving targets beyond ~15 degrees eccentricity during perimetry. Most strikingly, functional and structural MRI revealed robust and reliable spared functionality of the middle temporal motion complex (MT+) bilaterally. Moreover, consistent with her preserved ability to discriminate motion direction in psychophysical testing, MC also shows direction-selective adaptation in MT+. A variety of tests did not enable us to discern whether input to MT+ was driven by her spared anterior occipital cortex or subcortical inputs. Nevertheless, MC shows rich motion perception despite profoundly impaired static and form vision, combined with clear preservation of activation in MT+, thus supporting the role of MT+ in the Riddoch phenomenon. Copyright © 2018 Elsevier Ltd. All rights reserved.
Solnik, Stanislaw; Qiao, Mu; Latash, Mark L.
2017-01-01
This study tested two hypotheses on the nature of unintentional force drifts elicited by removing visual feedback during accurate force production tasks. The role of working memory (memory hypothesis) was explored in tasks with continuous force production, intermittent force production, and rest intervals over the same time interval. The assumption of unintentional drifts in referent coordinate for the fingertips was tested using manipulations of visual feedback: Young healthy subjects performed accurate steady-state force production tasks by pressing with the two index fingers on individual force sensors with visual feedback on the total force, sharing ratio, both, or none. Predictions based on the memory hypothesis have been falsified. In particular, we observed consistent force drifts to lower force values during continuous force production trials only. No force drift or drifts to higher forces were observed during intermittent force production trials and following rest intervals. The hypotheses based on the idea of drifts in referent finger coordinates have been confirmed. In particular, we observed superposition of two drift processes: A drift of total force to lower magnitudes and a drift of the sharing ratio to 50:50. When visual feedback on total force only was provided, the two finger forces showed drifts in opposite directions. We interpret the findings as evidence for the control of motor actions with changes in referent coordinates for participating effectors. Unintentional drifts in performance are viewed as natural relaxation processes in the involved systems; their typical time reflects stability in the direction of the drift. The magnitude of the drift was higher in the right (dominant) hand, which is consistent with the dynamic dominance hypothesis. PMID:28168396
Visual short-term memory load reduces retinotopic cortex response to contrast.
Konstantinou, Nikos; Bahrami, Bahador; Rees, Geraint; Lavie, Nilli
2012-11-01
Load Theory of attention suggests that high perceptual load in a task leads to reduced sensory visual cortex response to task-unrelated stimuli resulting in "load-induced blindness" [e.g., Lavie, N. Attention, distraction and cognitive control under load. Current Directions in Psychological Science, 19, 143-148, 2010; Lavie, N. Distracted and confused?: Selective attention under load. Trends in Cognitive Sciences, 9, 75-82, 2005]. Consideration of the findings that visual STM (VSTM) involves sensory recruitment [e.g., Pasternak, T., & Greenlee, M. Working memory in primate sensory systems. Nature Reviews Neuroscience, 6, 97-107, 2005] within Load Theory led us to a new hypothesis regarding the effects of VSTM load on visual processing. If VSTM load draws on sensory visual capacity, then similar to perceptual load, high VSTM load should also reduce visual cortex response to incoming stimuli leading to a failure to detect them. We tested this hypothesis with fMRI and behavioral measures of visual detection sensitivity. Participants detected the presence of a contrast increment during the maintenance delay in a VSTM task requiring maintenance of color and position. Increased VSTM load (manipulated by increased set size) led to reduced retinotopic visual cortex (V1-V3) responses to contrast as well as reduced detection sensitivity, as we predicted. Additional visual detection experiments established a clear tradeoff between the amount of information maintained in VSTM and detection sensitivity, while ruling out alternative accounts for the effects of VSTM load in terms of differential spatial allocation strategies or task difficulty. These findings extend Load Theory to demonstrate a new form of competitive interactions between early visual cortex processing and visual representations held in memory under load and provide a novel line of support for the sensory recruitment hypothesis of VSTM.
An algorithm for automatic reduction of complex signal flow graphs
NASA Technical Reports Server (NTRS)
Young, K. R.; Hoberock, L. L.; Thompson, J. G.
1976-01-01
A computer algorithm is developed that provides efficient means to compute transmittances directly from a signal flow graph or a block diagram. Signal flow graphs are cast as directed graphs described by adjacency matrices. Nonsearch computation, designed for compilers without symbolic capability, is used to identify all arcs that are members of simple cycles for use with Mason's gain formula. The routine does not require the visual acumen of an interpreter to reduce the topology of the graph, and it is particularly useful for analyzing control systems described for computer analyses by means of interactive graphics.
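The workflow the abstract describes (cast the graph as an adjacency structure, enumerate simple cycles, apply Mason's gain formula) can be sketched compactly. This is a minimal illustration, not the NASA routine itself; the dict-based graph encoding, node labels, and example gains are assumptions:

```python
from itertools import combinations
from math import prod

def find_cycles(adj):
    """All simple loops of a directed graph {u: {v: gain}}.
    Each loop is returned as (frozenset_of_nodes, loop_gain).
    Straightforward DFS (not Johnson's algorithm); each cycle is
    counted once by requiring its smallest node to be the start."""
    loops = []
    def dfs(start, node, path_nodes, g):
        for succ, w in adj.get(node, {}).items():
            if succ == start:
                loops.append((frozenset(path_nodes), g * w))
            elif succ not in path_nodes and succ > start:
                dfs(start, succ, path_nodes | {succ}, g * w)
    for s in sorted(adj):
        dfs(s, s, {s}, 1.0)
    return loops

def find_paths(adj, src, sink):
    """All simple forward paths src -> sink as (node_set, path_gain)."""
    paths = []
    def dfs(node, seen, g):
        if node == sink:
            paths.append((frozenset(seen), g))
            return
        for succ, w in adj.get(node, {}).items():
            if succ not in seen:
                dfs(succ, seen | {succ}, g * w)
    dfs(src, {src}, 1.0)
    return paths

def delta(loops):
    """Graph determinant: 1 - sum(L_i) + sum(L_i*L_j, non-touching) - ..."""
    d, sign = 1.0, -1.0
    for k in range(1, len(loops) + 1):
        for combo in combinations(loops, k):
            sets = [s for s, _ in combo]
            if all(a.isdisjoint(b) for a, b in combinations(sets, 2)):
                d += sign * prod(g for _, g in combo)
        sign = -sign
    return d

def mason(adj, src, sink):
    """Transmittance src -> sink via Mason's gain formula."""
    loops = find_cycles(adj)
    num = 0.0
    for path_nodes, path_gain in find_paths(adj, src, sink):
        untouched = [(s, g) for s, g in loops if s.isdisjoint(path_nodes)]
        num += path_gain * delta(untouched)
    return num / delta(loops)

# Classic single-loop feedback: x1 --G--> x2, x2 --(-H)--> x1
sfg = {1: {2: 2.0}, 2: {1: -3.0}}
print(mason(sfg, 1, 2))  # G/(1+GH) = 2/7 ≈ 0.2857142857142857
```

For the single-loop feedback graph with forward gain G and feedback gain -H this reproduces the textbook result G/(1 + GH), which is an easy sanity check for the cycle-enumeration step the abstract emphasizes.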
A Computational Model of Spatial Visualization Capacity
ERIC Educational Resources Information Center
Lyon, Don R.; Gunzelmann, Glenn; Gluck, Kevin A.
2008-01-01
Visualizing spatial material is a cornerstone of human problem solving, but human visualization capacity is sharply limited. To investigate the sources of this limit, we developed a new task to measure visualization accuracy for verbally-described spatial paths (similar to street directions), and implemented a computational process model to…
Attentional Processes in Young Children with Congenital Visual Impairment
ERIC Educational Resources Information Center
Tadic, Valerie; Pring, Linda; Dale, Naomi
2009-01-01
The study investigated attentional processes of 32 preschool children with congenital visual impairment (VI). Children with profound visual impairment (PVI) and severe visual impairment (SVI) were compared to a group of typically developing sighted children in their ability to respond to adult directed attention in terms of establishing,…
The Mechanism for Processing Random-Dot Motion at Various Speeds in Early Visual Cortices
An, Xu; Gong, Hongliang; McLoughlin, Niall; Yang, Yupeng; Wang, Wei
2014-01-01
All moving objects generate sequential retinotopic activations representing a series of discrete locations in space and time (motion trajectory). How direction-selective neurons in mammalian early visual cortices process motion trajectory remains to be clarified. Using single-cell recording and optical imaging of intrinsic signals along with mathematical simulation, we studied response properties of cat visual areas 17 and 18 to random dots moving at various speeds. We found that the motion trajectory at low speed was encoded primarily as a direction signal by groups of neurons preferring that motion direction. Above certain transition speeds, the motion trajectory was perceived as a spatial orientation representing the motion axis of the moving dots. In both areas studied, above these speeds, other groups of direction-selective neurons with perpendicular direction preferences were activated to encode the motion trajectory as motion-axis information. This applied to both simple and complex neurons. The average transition speed for switching between encoding motion direction and motion axis was about 31°/s in area 18 and 15°/s in area 17. A spatio-temporal energy model predicted the transition speeds accurately in both areas, but not the direction-selective indexes to random-dot stimuli in area 18. In addition, above transition speeds, the change of direction preferences of population responses recorded by optical imaging could be revealed using the vector-maximum but not the vector-summation method. Together, this combined processing of motion direction and axis by neurons with orthogonal direction preferences associated with speed may serve as a common principle of early visual motion processing. PMID:24682033
Six Myths about Spatial Thinking
ERIC Educational Resources Information Center
Newcombe, Nora S.; Stieff, Mike
2012-01-01
Visualizations are an increasingly important part of scientific education and discovery. However, users often do not gain knowledge from them in a complete or efficient way. This article aims to direct research on visualizations in science education in productive directions by reviewing the evidence for widespread assumptions that learning styles,…
A conditioned visual orientation requires the ellipsoid body in Drosophila
Guo, Chao; Du, Yifei; Yuan, Deliang; Li, Meixia; Gong, Haiyun; Gong, Zhefeng
2015-01-01
Orientation, the spatial organization of animal behavior, is an essential faculty of animals. Bacteria and lower animals such as insects exhibit taxis, innate orientation behavior, directly toward or away from a directional cue. Organisms can also orient themselves at a specific angle relative to the cues. In this study, using Drosophila as a model system, we established a visual orientation conditioning paradigm based on a flight simulator in which a stationary flying fly could control the rotation of a visual object. By coupling aversive heat shocks to a fly's orientation toward one side of the visual object, we found that the fly could be conditioned to orient toward the left or right side of the frontal visual object and retain this conditioned visual orientation. The lower and upper visual fields have different roles in conditioned visual orientation. Transfer experiments showed that conditioned visual orientation could generalize between visual targets of different sizes, compactness, or vertical positions, but not of contour orientation. Rut (Type I adenylyl cyclase) and Dnc (phosphodiesterase) were dispensable for visual orientation conditioning. Normal activity and scb signaling in R3/R4d neurons of the ellipsoid body were required for visual orientation conditioning. Our studies established a visual orientation conditioning paradigm and examined the behavioral properties and neural circuitry of visual orientation, an important component of the insect's spatial navigation. PMID:25512578
Keep your eyes on the ball: smooth pursuit eye movements enhance prediction of visual motion.
Spering, Miriam; Schütz, Alexander C; Braun, Doris I; Gegenfurtner, Karl R
2011-04-01
Success of motor behavior often depends on the ability to predict the path of moving objects. Here we asked whether tracking a visual object with smooth pursuit eye movements helps to predict its motion direction. We developed a paradigm, "eye soccer," in which observers had to either track or fixate a visual target (ball) and judge whether it would have hit or missed a stationary vertical line segment (goal). Ball and goal were presented briefly for 100-500 ms and disappeared from the screen together before the perceptual judgment was prompted. In pursuit conditions, the ball moved towards the goal; in fixation conditions, the goal moved towards the stationary ball, resulting in similar retinal stimulation during pursuit and fixation. We also tested the condition in which the goal was fixated and the ball moved. Motion direction prediction was significantly better in pursuit than in fixation trials, regardless of whether ball or goal served as fixation target. In both fixation and pursuit trials, prediction performance was better when eye movements were accurate. Performance also increased with shorter ball-goal distance and longer presentation duration. A longer trajectory did not affect performance. During pursuit, an efference copy signal might provide additional motion information, leading to the advantage in motion prediction.
Spiegel, Daniel P.; Hansen, Bruce C.; Byblow, Winston D.; Thompson, Benjamin
2012-01-01
Transcranial direct current stimulation (tDCS) is a safe, non-invasive technique for transiently modulating the balance of excitation and inhibition within the human brain. It has been reported that anodal tDCS can reduce both GABA mediated inhibition and GABA concentration within the human motor cortex. As GABA mediated inhibition is thought to be a key modulator of plasticity within the adult brain, these findings have broad implications for the future use of tDCS. It is important, therefore, to establish whether tDCS can exert similar effects within non-motor brain areas. The aim of this study was to assess whether anodal tDCS could reduce inhibitory interactions within the human visual cortex. Psychophysical measures of surround suppression were used as an index of inhibition within V1. Overlay suppression, which is thought to originate within the lateral geniculate nucleus (LGN), was also measured as a control. Anodal stimulation of the occipital poles significantly reduced psychophysical surround suppression, but had no effect on overlay suppression. This effect was specific to anodal stimulation as cathodal stimulation had no effect on either measure. These psychophysical results provide the first evidence for tDCS-induced reductions of intracortical inhibition within the human visual cortex. PMID:22563485
The Complex Structure of Receptive Fields in the Middle Temporal Area
Richert, Micah; Albright, Thomas D.; Krekelberg, Bart
2012-01-01
Neurons in the middle temporal area (MT) are often viewed as motion detectors that prefer a single direction of motion in a single region of space. This assumption plays an important role in our understanding of visual processing, and models of motion processing in particular. We used extracellular recordings in area MT of awake, behaving monkeys (M. mulatta) to test this assumption with a novel reverse correlation approach. Nearly half of the MT neurons in our sample deviated significantly from the classical view. First, in many cells, direction preference changed with the location of the stimulus within the receptive field. Second, the spatial response profile often had multiple peaks with apparent gaps in between. This shows that visual motion analysis in MT has access to motion detectors that are more complex than commonly thought. This complexity could be a mere byproduct of imperfect development, but can also be understood as the natural consequence of the non-linear, recurrent interactions among laterally connected MT neurons. An important direction for future research is to investigate whether these inhomogeneities are advantageous, how they can be incorporated into models of motion detection, and whether they can provide quantitative insight into the underlying effective connectivity. PMID:23508640
Usukura, Eiji; Narita, Akihiro; Yagi, Akira; Ito, Shuichi; Usukura, Jiro
2016-01-01
An improved unroofing method enabled the cantilever of an atomic force microscope (AFM) to reach directly into a cell to visualize the intracellular cytoskeletal actin filaments, microtubules, clathrin coats, and caveolae in phosphate-buffered saline (PBS) at a higher resolution than conventional electron microscopy. All of the actin filaments clearly exhibited a short periodicity of approximately 5–6 nm, which was derived from globular actins linked to each other to form filaments, as well as a long helical periodicity. The polarity of the actin filaments appeared to be determined by the shape of the periodic striations. Microtubules were identified based on their thickness. Clathrin coats and caveolae were observed on the cytoplasmic surface of cell membranes. The area containing clathrin molecules and their terminal domains was directly visualized. Characteristic ridge structures located at the surface of the caveolae were observed at high resolution, similar to those observed with electron microscopy (EM). Overall, unroofing allowed intracellular AFM imaging in a liquid environment with a level of quality equivalent or superior to that of EM. Thus, AFMs are anticipated to provide cutting-edge findings in cell biology and histology. PMID:27273367
Thermal feature extraction of servers in a datacenter using thermal image registration
NASA Astrophysics Data System (ADS)
Liu, Hang; Ran, Jian; Xie, Ting; Gao, Shan
2017-09-01
Thermal cameras provide fine-grained thermal information that enhances monitoring and enables automatic thermal management in large datacenters. Recent approaches employing mobile robots or thermal camera networks can already identify the physical locations of hot spots. Other distribution information used to optimize datacenter management can also be obtained automatically using pattern recognition technology. However, most of the features extracted from thermal images, such as shape and gradient, may be affected by changes in the position and direction of the thermal camera. This paper presents a method for extracting the thermal features of a hot spot or a server in a container datacenter. First, thermal and visual images are registered based on textural characteristics extracted from images acquired in datacenters. Then, the thermal distribution of each server is standardized. The features of a hot spot or server extracted from the standard distribution can reduce the impact of camera position and direction. The results of experiments show that image registration is efficient for aligning the corresponding visual and thermal images in the datacenter, and the standardization procedure reduces the impacts of camera position and direction on hot spot or server features.
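As an illustration of the standardization step, here is one plausible reading of "the thermal distribution of each server is standardized": a zero-mean, unit-variance rescaling of a server's thermal patch, so that extracted features no longer depend on absolute, camera-dependent intensity. The paper's exact normalization is not specified in the abstract, so treat this as an assumption:

```python
import statistics

def standardize(patch):
    """Zero-mean / unit-variance standardization of a server's thermal
    patch (a 2D list of temperature readings). One plausible reading of
    the paper's 'standardized thermal distribution'; the authors' exact
    procedure may differ."""
    vals = [t for row in patch for t in row]
    mu = statistics.fmean(vals)
    sd = statistics.pstdev(vals) or 1.0  # guard against a flat patch
    return [[(t - mu) / sd for t in row] for row in patch]

hot = [[30.0, 32.0], [34.0, 36.0]]
flat = [v for row in standardize(hot) for v in row]
print(abs(round(sum(flat), 9)))  # 0.0 -- the mean has been removed
```

After this rescaling, two patches of the same server imaged from different camera positions (and hence with different absolute intensity ranges) yield comparable feature values.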
Dong, Han; Sharma, Diksha; Badano, Aldo
2014-12-01
Monte Carlo simulations play a vital role in the understanding of the fundamental limitations, design, and optimization of existing and emerging medical imaging systems. Efforts in this area have resulted in the development of a wide variety of open-source software packages. One such package, hybridmantis, uses a novel hybrid concept to model indirect scintillator detectors by balancing the computational load using dual CPU and graphics processing unit (GPU) processors, obtaining computational efficiency with reasonable accuracy. In this work, the authors describe two open-source visualization interfaces, webmantis and visualmantis, to facilitate the setup of computational experiments via hybridmantis. The visualization tools visualmantis and webmantis enable the user to control simulation properties through a user interface. In the case of webmantis, control via a web browser allows access through mobile devices such as smartphones or tablets. webmantis acts as a server back-end and communicates with an NVIDIA GPU computing cluster that can support multiuser environments where users can execute different experiments in parallel. The output consists of point response functions, pulse-height spectra, and optical transport statistics generated by hybridmantis. The users can download the output images and statistics through a zip file for future reference. In addition, webmantis provides a visualization window that displays a few selected optical photon paths as they are transported through the detector columns and allows the user to trace the history of the optical photons. The visualization tools visualmantis and webmantis provide features such as on-the-fly generation of pulse-height spectra and response functions for microcolumnar x-ray imagers while allowing users to save simulation parameters and results from prior experiments.
The graphical interfaces simplify the simulation setup and allow the user to go directly from specifying input parameters to receiving visual feedback for the model predictions.
Comparison on driving fatigue related hemodynamics activated by auditory and visual stimulus
NASA Astrophysics Data System (ADS)
Deng, Zishan; Gao, Yuan; Li, Ting
2018-02-01
As one of the main causes of traffic accidents, driving fatigue deserves researchers' attention, and its detection and monitoring during long-term driving require new techniques. Since functional near-infrared spectroscopy (fNIRS) can be applied to detect cerebral hemodynamic responses, we can promisingly expect its application in fatigue level detection. Here, we performed three different kinds of experiments on a driver and recorded his cerebral hemodynamic responses during long hours of driving, utilizing our fNIRS-based device. Each experiment lasted for 7 hours, and one of three specific experimental tests, detecting the driver's response to sounds, traffic lights, and direction signs respectively, was done every hour. The results showed that, in the first few hours, visual stimuli induced fatigue more readily than auditory stimuli, and among visual stimuli, traffic-light scenes induced fatigue more readily than direction-sign scenes. We also found that fatigue-related hemodynamic responses to auditory stimuli increased fastest, followed by traffic-light scenes, with direction-sign scenes slowest. Our study compared auditory, visual color (traffic light), and visual character (direction sign) stimuli in terms of how readily they induce driving fatigue, which is meaningful for driving safety management.
[Review of visual display system in flight simulator].
Xie, Guang-hui; Wei, Shao-ning
2003-06-01
The visual display system is a key part of flight simulators and flight training devices, in which it plays a very important role. The development history of visual display systems is reviewed, and the principles and characteristics of several systems, including collimated display systems and back-projected collimated display systems, are described. Future directions for visual display systems are analyzed.
Route Network Construction with Location-Direction-Enabled Photographs
NASA Astrophysics Data System (ADS)
Fujita, Hideyuki; Sagara, Shota; Ohmori, Tadashi; Shintani, Takahiko
2018-05-01
We propose a method for constructing a geometric graph for generating routes that summarize a geographical area and also have visual continuity, using a set of location-direction-enabled photographs. A location-direction-enabled photograph is a photograph that carries information about the location (position of the camera at the time of shooting) and the direction (direction of the camera at the time of shooting). Each node of the graph corresponds to a location-direction-enabled photograph. The location of each node is the location of the corresponding photograph, and a route on the graph corresponds to both a route in the geographic area and a sequence of photographs. The proposed graph is constructed to represent characteristic spots and the paths linking those spots, and it can be regarded as a spatial summarization of the area with the photographs. We therefore call routes on this graph spatial summary routes. Each route on the proposed graph also has visual continuity, meaning that a viewer can understand the spatial relationship between consecutive photographs on the route, such as moving forward, moving backward, or turning right. In this study, a route was defined to have visual continuity when the changes in shooting position and shooting direction between consecutive photographs stayed within given thresholds. By presenting the photographs in order along a generated route, information can be presented sequentially while maintaining visual continuity to a great extent.
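The visual-continuity rule described above (consecutive photographs whose changes in shooting position and direction stay within thresholds) can be sketched as a simple edge test. The threshold values and the tuple encoding of a photograph below are illustrative assumptions, not the paper's parameters:

```python
import math

def visually_continuous(p, q, max_dist=30.0, max_turn=45.0):
    """Continuity test between two location-direction-enabled photographs,
    each encoded as (x, y, heading_deg). An edge is allowed when both the
    positional change and the change in camera heading stay below the
    thresholds (illustrative values, not the paper's)."""
    dist = math.hypot(q[0] - p[0], q[1] - p[1])
    turn = abs((q[2] - p[2] + 180) % 360 - 180)  # smallest angular difference
    return dist <= max_dist and turn <= max_turn

def build_graph(photos, **kw):
    """Undirected edge list over photo indices: one edge per
    visually continuous pair."""
    edges = []
    for i in range(len(photos)):
        for j in range(i + 1, len(photos)):
            if visually_continuous(photos[i], photos[j], **kw):
                edges.append((i, j))
    return edges

photos = [(0, 0, 90), (20, 0, 95), (25, 0, 170), (200, 0, 90)]
print(build_graph(photos))  # [(0, 1)] -- only the first pair is continuous
```

Photos 0 and 1 are linked (small move, 5° turn); photo 2 fails the heading threshold and photo 3 the distance threshold, so a route through them would lose visual continuity.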
Direct visualization of hemolymph flow in the heart of a grasshopper (Schistocerca americana)
Lee, Wah-Keat; Socha, John J
2009-01-01
Background Hemolymph flow patterns in opaque insects have never been directly visualized due to the lack of an appropriate imaging technique. The required spatial and temporal resolutions, together with the lack of contrast between the hemolymph and the surrounding soft tissue, are major challenges. Previously, indirect techniques have been used to infer insect heart motion and hemolymph flow, but such methods fail to reveal fine-scale kinematics of heartbeat and details of intra-heart flow patterns. Results With the use of microbubbles as high contrast tracer particles, we directly visualized hemolymph flow in a grasshopper (Schistocerca americana) using synchrotron x-ray phase-contrast imaging. In-vivo intra-heart flow patterns and the relationship between respiratory (tracheae and air sacs) and circulatory (heart) systems were directly observed for the first time. Conclusion Synchrotron x-ray phase contrast imaging is the only generally applicable technique that has the necessary spatial, temporal resolutions and sensitivity to directly visualize heart dynamics and flow patterns inside opaque animals. This technique has the potential to illuminate many long-standing questions regarding small animal circulation, encompassing topics such as retrograde heart flow in some insects and the development of flow in embryonic vertebrates. PMID:19272159
Ultrasound-directed robotic system for thermal ablation of liver tumors: a preliminary report
NASA Astrophysics Data System (ADS)
Zheng, Jian; Tian, Jie; Dai, Yakang; Zhang, Xing; Dong, Di; Xu, Min
2010-03-01
Thermal ablation has been proved safe and effective as a treatment for liver tumors that are not suitable for resection. Currently, manually performed thermal ablation depends heavily on the surgeon's needle-placement skill and is vulnerable to hand tremor. In addition, inaccurate or inappropriate placement of the applicator directly degrades the final treatment effect. To reduce the influence of hand tremor and to provide accurate, appropriate guidance for better treatment, we have developed an ultrasound-directed robotic system for thermal ablation of liver tumors. In this paper, we give a brief preliminary report of our system. In particular, three innovative techniques are proposed to solve critical problems in the system: accurate ultrasound calibration in the presence of artifacts, real-time reconstruction and visualization using graphics processing unit (GPU) acceleration, and 2D-3D ultrasound image registration. To reduce the error of point extraction in the presence of artifacts, we propose a novel point extraction method that minimizes an error function defined from the geometric properties of our N-fiducial phantom. Real-time reconstruction with GPU-accelerated visualization then provides fast 3D ultrasound volume acquisition with a dynamic display of reconstruction progress. After that, coarse 2D-3D ultrasound image registration is performed based on landmark point correspondences, followed by accurate 2D-3D registration based on the Euclidean distance transform (EDT). The effectiveness of the proposed techniques is demonstrated in phantom experiments.
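EDT-based registration of this kind typically scores a candidate alignment by how far the moving image's feature points lie from the fixed image's features, a value normally read off a precomputed distance map. A brute-force stand-in for that lookup (the point sets below are hypothetical, and a real EDT would be precomputed once over the image grid):

```python
import math

def edt_cost(fixed_pts, moving_pts):
    """Mean distance from each moving feature point to its nearest fixed
    feature point: a brute-force stand-in for reading the value out of a
    precomputed Euclidean distance transform (EDT) of the fixed image.
    An optimizer would minimize this cost over candidate 2D-3D poses."""
    total = 0.0
    for mx, my in moving_pts:
        total += min(math.hypot(mx - fx, my - fy) for fx, fy in fixed_pts)
    return total / len(moving_pts)

# A perfectly aligned candidate pose gives zero cost.
print(edt_cost([(0, 0), (5, 5)], [(0, 0), (5, 5)]))  # 0.0
```

The advantage of the actual distance-transform formulation is that, once the EDT of the fixed feature image is computed, each pose evaluation is a cheap table lookup per point rather than a nearest-neighbour search.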
Semantic Enrichment of Movement Behavior with Foursquare--A Visual Analytics Approach.
Krueger, Robert; Thom, Dennis; Ertl, Thomas
2015-08-01
In recent years, many approaches have been developed that efficiently and effectively visualize movement data, e.g., by providing suitable aggregation strategies to reduce visual clutter. Analysts can use them to identify distinct movement patterns, such as trajectories with similar direction, form, length, and speed. However, less effort has been spent on finding the semantics behind movements, i.e., why somebody or something is moving. This can be of great value for different applications, such as product usage and consumer analysis, understanding urban dynamics, and improving situational awareness. Unfortunately, semantic information often gets lost when data is recorded. Thus, we suggest enriching trajectory data with POI information using social media services and show how semantic insights can be gained. Furthermore, we show how to handle semantic uncertainties in time and space, which result from noisy, imprecise, and missing data, by introducing a POI decision model in combination with highly interactive visualizations. Finally, we evaluate our approach with two case studies on a large electric scooter data set and test our model on data with known ground truth.
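A deliberately minimal stand-in for the POI enrichment step: assign each trajectory stop to the nearest candidate POI within a distance threshold, or to none at all. The paper's actual decision model additionally weighs spatial and temporal uncertainty, so the POI names and threshold below are illustrative assumptions only:

```python
import math

def assign_poi(stop, pois, max_dist=50.0):
    """Pick the nearest candidate POI within max_dist metres of a
    trajectory stop (x, y); return None when no POI is close enough.
    A minimal stand-in for a probabilistic POI decision model that
    would also account for uncertainty in time and space."""
    x, y = stop
    best, best_d = None, max_dist
    for name, px, py in pois:
        d = math.hypot(px - x, py - y)
        if d <= best_d:
            best, best_d = name, d
    return best

pois = [("cafe", 10, 0), ("charging station", 200, 0)]
print(assign_poi((0, 0), pois))    # cafe
print(assign_poi((500, 0), pois))  # None -- no POI within range
```

Returning None for ambiguous or distant stops is one simple way to surface the semantic uncertainty that the interactive visualizations are then meant to resolve.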
Weimer, Jill M.; Custer, Andrew W.; Benedict, Jared W.; Alexander, Noreen A.; Kingsley, Evan; Federoff, Howard J.; Cooper, Jonathan D.; Pearce, David A.
2013-01-01
Juvenile neuronal ceroid lipofuscinosis (JNCL) is an autosomal recessive disorder of childhood caused by mutations in CLN3. Although visual deterioration is typically the first clinical sign to manifest in affected children, loss of Cln3 in a mouse model of JNCL does not recapitulate this retinal deterioration. This suggests either that the loss of CLN3 does not directly affect retinal cell survival or that nuclei involved in visual processing are affected prior to retinal degeneration. Having previously demonstrated that Cln3−/− mice have decreased optic nerve axonal density, we now demonstrate a decrease in nerve conduction. Examination of retino-recipient regions revealed a decreased number of neurons within the dorsal lateral geniculate nucleus (LGNd). We demonstrate decreased transport of amino acids from the retina to the LGN, suggesting an impediment in communication between the retina and projection nuclei. This study defines a novel path of degeneration within the LGNd, providing a mechanistic explanation for the visual deficits of JNCL. PMID:16412658
Route visualization using detail lenses.
Karnick, Pushpak; Cline, David; Jeschke, Stefan; Razdan, Anshuman; Wonka, Peter
2010-01-01
We present a method designed to address some limitations of typical route map displays of driving directions. The main goal of our system is to generate a printable version of a route map that shows the overview and detail views of the route within a single, consistent visual frame. Our proposed visualization provides a more intuitive spatial context than a simple list of turns. We present a novel multifocus technique to achieve this goal, where the foci are defined by points of interest (POI) along the route. A detail lens that encapsulates the POI at a finer geospatial scale is created for each focus. The lenses are laid out on the map to avoid occlusion with the route and each other, and to optimally utilize the free space around the route. We define a set of layout metrics to evaluate the quality of a lens layout for a given route map visualization. We compare standard lens layout methods to our proposed method and demonstrate the effectiveness of our method in generating aesthetically pleasing layouts. Finally, we perform a user study to evaluate the effectiveness of our layout choices.
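One of the layout metrics the abstract mentions, occlusion avoidance, can be sketched as counting overlaps between detail lenses and between lenses and the route. This is an assumed simplification of the paper's metric set, with axis-aligned boxes standing in for lens shapes:

```python
def overlaps(a, b):
    """Axis-aligned rectangle overlap test; rect = (x, y, w, h)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def occlusion_count(lenses, route_boxes):
    """A sketch of an occlusion metric: count lens/lens and lens/route
    overlaps (lower is better). A layout optimizer would penalize or
    reject layouts with a nonzero count."""
    bad = 0
    for i, a in enumerate(lenses):
        bad += sum(overlaps(a, b) for b in lenses[i + 1:])
        bad += sum(overlaps(a, r) for r in route_boxes)
    return bad

# Two overlapping lenses and no route boxes -> one violation.
print(occlusion_count([(0, 0, 2, 2), (1, 1, 2, 2)], []))  # 1
```

In a full implementation the route would be rasterized into many small boxes along its polyline, and this count would be one term in a weighted objective alongside metrics for free-space utilization and lens-to-POI distance.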
The trait of sensory processing sensitivity and neural responses to changes in visual scenes
Xu, Xiaomeng; Aron, Arthur; Aron, Elaine; Cao, Guikang; Feng, Tingyong; Weng, Xuchu
2011-01-01
This exploratory study examined the extent to which individual differences in sensory processing sensitivity (SPS), a temperament/personality trait characterized by social, emotional and physical sensitivity, are associated with neural responses in visual areas to subtle changes in visual scenes. Sixteen participants completed the Highly Sensitive Person questionnaire, a standard measure of SPS. Subsequently, they were tested on a change detection task while undergoing functional magnetic resonance imaging (fMRI). SPS was associated with significantly greater activation in brain areas involved in high-order visual processing (i.e. right claustrum, left occipitotemporal, bilateral temporal and medial and posterior parietal regions) as well as in the right cerebellum, when detecting minor (vs major) changes in stimuli. These findings remained strong and significant after controlling for neuroticism and introversion, traits that are often correlated with SPS. These results provide the first evidence of neural differences associated with SPS, the first direct support for the sensory aspect of this trait that has been studied primarily for its social and affective implications, and preliminary evidence for heightened sensory processing in individuals high in SPS. PMID:20203139
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, Kwan-Liu
In this project, we have developed techniques for visualizing large-scale time-varying multivariate particle and field data produced by the GPS_TTBP team. Our basic approach to particle data visualization is to provide the user with an intuitive interactive interface for exploring the data. We have designed a multivariate filtering interface for scientists to effortlessly isolate particles of interest, revealing structures in densely packed particles as well as the temporal behaviors of selected particles. With such a visualization system, scientists on the GPS-TTBP project can validate known relationships and temporal trends, and possibly gain new insights into their simulations. We have tested the system with several million particles on a single PC. We will also need to address the scalability of the system to handle billions of particles using a cluster of PCs. To visualize the field data, we chose direct volume rendering. Because the data provided by PPPL is on a curvilinear mesh that follows the shape of a deformed torus, several processing steps are required. Additionally, to properly interpolate between the given slices we cannot use simple linear interpolation in Cartesian space but instead have to interpolate along the magnetic field lines given to us by the scientists. With these constraints, building a system that provides an accurate visualization of the dataset is quite a challenge. In the end, we use deformation methods such as deformation textures to fit a normal torus to the deformed torus, allowing us to store the data in toroidal coordinates and take advantage of modern GPUs to perform the interpolation along the field lines. The resulting new rendering capability produces visualizations at a quality and detail level previously not available to the scientists at the PPPL.
In summary, in this project we have successfully created new capabilities for the scientists to visualize their 3D data at higher accuracy and quality, enhancing their ability to evaluate the simulations and understand the modeled phenomena.
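The coordinate transformation underlying this rendering approach can be sketched as follows. Assuming an idealized circular torus centerline of major radius R0 in the z = 0 plane (a simplification; the actual PPPL geometry is a deformed torus handled via deformation textures), Cartesian points map to toroidal coordinates (r, theta, phi):

```python
import math

def cartesian_to_toroidal(x, y, z, R0):
    """Map a Cartesian point to toroidal coordinates (r, theta, phi)
    about a circular centerline of major radius R0 in the z = 0 plane."""
    phi = math.atan2(y, x)          # toroidal angle around the main axis
    rho = math.hypot(x, y) - R0     # radial offset from the centerline circle
    r = math.hypot(rho, z)          # minor radius (distance to centerline)
    theta = math.atan2(z, rho)      # poloidal angle around the tube
    return r, theta, phi

def toroidal_to_cartesian(r, theta, phi, R0):
    """Inverse mapping, e.g. for resampling data stored in toroidal texture space."""
    rho = R0 + r * math.cos(theta)
    return rho * math.cos(phi), rho * math.sin(phi), r * math.sin(theta)
```

Storing the field in (r, theta, phi) texture space is what lets the GPU's hardware interpolation operate along the torus rather than across it.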
Heading Tuning in Macaque Area V6.
Fan, Reuben H; Liu, Sheng; DeAngelis, Gregory C; Angelaki, Dora E
2015-12-16
Cortical areas, such as the dorsal subdivision of the medial superior temporal area (MSTd) and the ventral intraparietal area (VIP), have been shown to integrate visual and vestibular self-motion signals. Area V6 is interconnected with areas MSTd and VIP, allowing for the possibility that V6 also integrates visual and vestibular self-motion cues. An alternative hypothesis in the literature is that V6 does not use these sensory signals to compute heading but instead discounts self-motion signals to represent object motion. However, the responses of V6 neurons to visual and vestibular self-motion cues have never been studied, thus leaving the functional roles of V6 unclear. We used a virtual reality system to examine the 3D heading tuning of macaque V6 neurons in response to optic flow and inertial motion stimuli. We found that the majority of V6 neurons are selective for heading defined by optic flow. However, unlike areas MSTd and VIP, V6 neurons are almost universally unresponsive to inertial motion in the absence of optic flow. We also explored the spatial reference frames of heading signals in V6 by measuring heading tuning for different eye positions, and we found that the visual heading tuning of most V6 cells was eye-centered. Similar to areas MSTd and VIP, the population of V6 neurons was best able to discriminate small variations in heading around forward and backward headings. Our findings support the idea that V6 is involved primarily in processing visual motion signals and does not appear to play a role in visual-vestibular integration for self-motion perception. To understand how we successfully navigate our world, it is important to understand which parts of the brain process cues used to perceive our direction of self-motion (i.e., heading). Cortical area V6 has been implicated in heading computations based on human neuroimaging data, but direct measurements of heading selectivity in individual V6 neurons have been lacking. 
We provide the first demonstration that V6 neurons carry 3D visual heading signals, which are represented in an eye-centered reference frame. In contrast, we found almost no evidence for vestibular heading signals in V6, indicating that V6 is unlikely to contribute to multisensory integration of heading signals, unlike other cortical areas. These findings provide important constraints on the roles of V6 in self-motion perception. Copyright © 2015 the authors.
NASA Astrophysics Data System (ADS)
Mundermann, Lars; Mundermann, Annegret; Chaudhari, Ajit M.; Andriacchi, Thomas P.
2005-01-01
Anthropometric parameters are fundamental for a wide variety of applications in biomechanics, anthropology, medicine and sports. Recent technological advancements provide methods for constructing 3D surfaces directly. Of these new technologies, visual hull construction may be the most cost-effective yet sufficiently accurate method. However, the conditions influencing the accuracy of anthropometric measurements based on visual hull reconstruction are unknown. The purpose of this study was to evaluate the conditions that influence the accuracy of 3D shape-from-silhouette reconstruction of body segments dependent on the number of cameras, camera resolution and object contours. The results demonstrate that the visual hulls lacked accuracy in concave regions and narrow spaces, but setups with a high number of cameras reconstructed a human form with an average accuracy of 1.0 mm. In general, setups with fewer than 8 cameras yielded largely inaccurate visual hull constructions, while setups with 16 or more cameras provided good volume estimations. Body segment volumes were obtained with an average error of 10% at a 640x480 resolution using 8 cameras. Changes in resolution did not significantly affect the average error. However, substantial decreases in error were observed with increasing number of cameras (33.3% using 4 cameras; 10.5% using 8 cameras; 4.1% using 16 cameras; 1.2% using 64 cameras).
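The shape-from-silhouette (visual hull) principle evaluated in this study can be illustrated with a minimal voxel-carving sketch. The version below assumes orthographic views along the coordinate axes on an integer grid; a real multi-camera setup uses calibrated perspective projections, but the carving rule is the same: a voxel survives only if it projects inside the silhouette in every view.

```python
from itertools import product

def carve_visual_hull(grid_size, silhouettes):
    """Voxel carving on an integer grid.

    `silhouettes` maps a dropped-axis index (an orthographic view along
    x, y, or z) to the set of 2D pixels covered by the silhouette in that
    view. A voxel is kept only if all views agree it lies inside the object.
    """
    hull = []
    for v in product(range(grid_size), repeat=3):
        if all(tuple(c for i, c in enumerate(v) if i != axis) in sil
               for axis, sil in silhouettes.items()):
            hull.append(v)
    return hull
```

This also makes the paper's accuracy findings intuitive: concavities never change any silhouette, so no number of views can carve them away.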
MotionFlow: Visual Abstraction and Aggregation of Sequential Patterns in Human Motion Tracking Data.
Jang, Sujin; Elmqvist, Niklas; Ramani, Karthik
2016-01-01
Pattern analysis of human motions, which is useful in many research areas, requires understanding and comparison of different styles of motion patterns. However, working with human motion tracking data to support such analysis poses great challenges. In this paper, we propose MotionFlow, a visual analytics system that provides an effective overview of various motion patterns based on an interactive flow visualization. This visualization formulates a motion sequence as transitions between static poses, and aggregates these sequences into a tree diagram to construct a set of motion patterns. The system also allows the users to directly reflect the context of data and their perception of pose similarities in generating representative pose states. We provide local and global controls over the partition-based clustering process. To support the users in organizing unstructured motion data into pattern groups, we designed a set of interactions that enables searching for similar motion sequences from the data, detailed exploration of data subsets, and creating and modifying the group of motion patterns. To evaluate the usability of MotionFlow, we conducted a user study with six researchers with expertise in gesture-based interaction design. They used MotionFlow to explore and organize unstructured motion tracking data. Results show that the researchers were able to easily learn how to use MotionFlow, and the system effectively supported their pattern analysis activities, including leveraging their perception and domain knowledge.
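The aggregation of pose sequences into a tree of motion patterns can be sketched with a simple prefix tree whose node counts record how many sequences share each transition. This is an assumed simplification of MotionFlow's flow aggregation, not its actual data structure:

```python
def aggregate_sequences(sequences):
    """Aggregate pose-state sequences into a prefix tree.

    Each node stores how many sequences pass through it, so heavily
    shared prefixes (common motion patterns) get large counts while
    rare variations branch off with small ones.
    """
    root = {"count": 0, "children": {}}
    for seq in sequences:
        node = root
        node["count"] += 1
        for pose in seq:
            node = node["children"].setdefault(pose, {"count": 0, "children": {}})
            node["count"] += 1
    return root
```

Rendering such a tree with edge widths proportional to the counts yields exactly the kind of flow diagram the abstract describes.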
Feature-based attentional modulations in the absence of direct visual stimulation.
Serences, John T; Boynton, Geoffrey M
2007-07-19
When faced with a crowded visual scene, observers must selectively attend to behaviorally relevant objects to avoid sensory overload. Often this selection process is guided by prior knowledge of a target-defining feature (e.g., the color red when looking for an apple), which enhances the firing rate of visual neurons that are selective for the attended feature. Here, we used functional magnetic resonance imaging and a pattern classification algorithm to predict the attentional state of human observers as they monitored a visual feature (one of two directions of motion). We find that feature-specific attention effects spread across the visual field, even to regions of the scene that do not contain a stimulus. This spread of feature-based attention to empty regions of space may facilitate the perception of behaviorally relevant stimuli by increasing sensitivity to attended features at all locations in the visual field.
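Pattern classification of this kind can be illustrated with a nearest-centroid correlation classifier: correlate an observed voxel pattern with each condition's mean training pattern and pick the best match. This is a minimal stand-in; the study's actual classification algorithm is not specified here.

```python
import math

def pearson(u, v):
    """Pearson correlation between two equal-length activation patterns."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = math.sqrt(sum((a - mu) ** 2 for a in u))
    sv = math.sqrt(sum((b - mv) ** 2 for b in v))
    return cov / (su * sv)

def classify(centroids, pattern):
    """Return the label whose mean pattern best correlates with `pattern`."""
    return max(centroids, key=lambda label: pearson(centroids[label], pattern))
```

Applied voxel-wise within each visual-field region, such a decoder can test whether the attended motion direction is recoverable even from regions containing no stimulus.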
Dynamic and predictive links between touch and vision.
Gray, Rob; Tan, Hong Z
2002-07-01
We investigated crossmodal links between vision and touch for moving objects. In experiment 1, observers discriminated visual targets presented randomly at one of five locations on their forearm. Tactile pulses simulating motion along the forearm preceded visual targets. At short tactile-visual ISIs, discriminations were more rapid when the final tactile pulse and visual target were at the same location. At longer ISIs, discriminations were more rapid when the visual target was offset in the motion direction and were slower for offsets opposite to the motion direction. In experiment 2, speeded tactile discriminations at one of three random locations on the forearm were preceded by a visually simulated approaching object. Discriminations were more rapid when the object approached the location of the tactile stimulation and discrimination performance was dependent on the approaching object's time to contact. These results demonstrate dynamic links in the spatial mapping between vision and touch.
Human infrared vision is triggered by two-photon chromophore isomerization
Palczewska, Grazyna; Vinberg, Frans; Stremplewski, Patrycjusz; Bircher, Martin P.; Salom, David; Komar, Katarzyna; Zhang, Jianye; Cascella, Michele; Wojtkowski, Maciej; Kefalov, Vladimir J.; Palczewski, Krzysztof
2014-01-01
Vision relies on photoactivation of visual pigments in rod and cone photoreceptor cells of the retina. The human eye structure and the absorption spectra of pigments limit our visual perception of light; our visual perception is most responsive to stimulating light in the 400- to 720-nm (visible) range. First, we demonstrate by psychophysical experiments that humans can perceive infrared laser emission as visible light. Moreover, we show that mammalian photoreceptors can be directly activated by near infrared light, with a sensitivity that paradoxically increases at wavelengths above 900 nm and a quadratic dependence on laser power, indicating a nonlinear optical process. Biochemical experiments with rhodopsin, cone visual pigments, and a chromophore model compound, 11-cis-retinyl-propylamine Schiff base, demonstrate direct two-photon isomerization of the visual chromophore. Indeed, quantum mechanics modeling indicates the feasibility of this mechanism. Together, these findings clearly show that human visual perception of near infrared light occurs by two-photon isomerization of visual pigments. PMID:25453064
Suplatov, Dmitry; Sharapova, Yana; Timonina, Daria; Kopylov, Kirill; Švedas, Vytas
2018-04-01
The visualCMAT web-server was designed to assist experimental research in the fields of protein/enzyme biochemistry, protein engineering, and drug discovery by providing an intuitive and easy-to-use interface to the analysis of correlated mutations/co-evolving residues. Sequence and structural information describing homologous proteins are used to predict correlated substitutions by the Mutual information-based CMAT approach, classify them into spatially close co-evolving pairs, which either form a direct physical contact or interact with the same ligand (e.g. a substrate or a crystallographic water molecule), and long-range correlations, and to annotate and rank binding sites on the protein surface by the presence of statistically significant co-evolving positions. The results of the visualCMAT are organized for a convenient visual analysis and can be downloaded to a local computer as a content-rich all-in-one PyMol session file with multiple layers of annotation corresponding to bioinformatic, statistical and structural analyses of the predicted co-evolution, or further studied online using the built-in interactive analysis tools. The online interactivity is implemented in HTML5, and therefore neither plugins nor Java are required. The visualCMAT web-server is integrated with the Mustguseal web-server, which is capable of constructing large structure-guided sequence alignments of protein families and superfamilies using all available information about their structures and sequences in public databases. The visualCMAT web-server can be used to understand the relationship between structure and function in proteins, applied to select hotspots and compensatory mutations for rational design and directed evolution experiments aimed at producing novel enzymes with improved properties, and employed to study the mechanisms of selective ligand binding and allosteric communication between topologically independent sites in protein structures.
The web-server is freely available at https://biokinet.belozersky.msu.ru/visualcmat and there are no login requirements.
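The mutual-information statistic at the core of CMAT-style correlated-mutation analysis can be sketched for a pair of alignment columns. This is a bare counts-based estimate in nats; the web-server's actual implementation includes normalizations and significance corrections not shown here.

```python
import math
from collections import Counter

def column_mi(col_a, col_b):
    """Mutual information (nats) between two alignment columns,
    each given as a sequence of residue characters (one per homolog)."""
    assert len(col_a) == len(col_b)
    n = len(col_a)
    pa = Counter(col_a)                 # marginal residue counts, column A
    pb = Counter(col_b)                 # marginal residue counts, column B
    pab = Counter(zip(col_a, col_b))    # joint residue-pair counts
    mi = 0.0
    for (a, b), nab in pab.items():
        # (nab/n) * log( p(a,b) / (p(a) p(b)) ), with counts substituted
        mi += (nab / n) * math.log((nab * n) / (pa[a] * pb[b]))
    return mi
```

Columns that mutate in lockstep score high; statistically independent columns score near zero, which is the signal used to flag co-evolving residue pairs.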
The neural basis of visual word form processing: a multivariate investigation.
Nestor, Adrian; Behrmann, Marlene; Plaut, David C
2013-07-01
Current research on the neurobiological bases of reading points to the privileged role of a ventral cortical network in visual word processing. However, the properties of this network and, in particular, its selectivity for orthographic stimuli such as words and pseudowords remain topics of significant debate. Here, we approached this issue from a novel perspective by applying pattern-based analyses to functional magnetic resonance imaging data. Specifically, we examined whether, where, and how orthographic stimuli elicit distinct patterns of activation in the human cortex. First, at the category level, multivariate mapping found extensive sensitivity throughout the ventral cortex for words relative to false-font strings. Second, at the identity level, multi-voxel pattern classification provided direct evidence that different pseudowords are encoded by distinct neural patterns. Third, a comparison of pseudoword and face identification revealed that both stimulus types exploit common neural resources within the ventral cortical network. These results provide novel evidence regarding the involvement of the left ventral cortex in orthographic stimulus processing and shed light on its selectivity and discriminability profile. In particular, our findings support the existence of sublexical orthographic representations within the left ventral cortex while arguing for the continuity of reading with other visual recognition skills.