NASA Technical Reports Server (NTRS)
Berchem, J.; Raeder, J.; Walker, R. J.; Ashour-Abdalla, M.
1995-01-01
We report on the development of an interactive system for visualizing and analyzing numerical simulation results. This system is based on visualization modules which use the Application Visualization System (AVS) and the NCAR graphics packages. Examples from recent simulations are presented to illustrate how these modules can be used for displaying and manipulating simulation results to facilitate their comparison with phenomenological model results and observations.
Smelling directions: Olfaction modulates ambiguous visual motion perception
Kuang, Shenbing; Zhang, Tao
2014-01-01
Smells are often accompanied by simultaneous visual sensations. Previous studies have documented enhanced olfactory performance with the concurrent presence of congruent color- or shape-related visual cues, and facilitated visual object perception when congruent smells are simultaneously present. These visual object-olfaction interactions suggest the existence of couplings between the olfactory pathway and the visual ventral processing stream. However, it is not known whether olfaction can modulate visual motion perception, a function that is related to the visual dorsal stream. We tested this possibility by examining the influence of olfactory cues on the perception of ambiguous visual motion signals. We showed that, after introducing an association between motion directions and olfactory cues, olfaction could indeed bias ambiguous visual motion perception. Our result that olfaction modulates visual motion processing adds to the current knowledge of cross-modal interactions and implies a possible functional linkage between the olfactory system and the visual dorsal pathway. PMID:25052162
Visualization and Tracking of Parallel CFD Simulations
NASA Technical Reports Server (NTRS)
Vaziri, Arsi; Kremenetsky, Mark
1995-01-01
We describe a system for interactive visualization and tracking of a 3-D unsteady computational fluid dynamics (CFD) simulation on a parallel computer. CM/AVS, a distributed, parallel implementation of a visualization environment (AVS), runs on the CM-5 parallel supercomputer. A CFD solver is run as a CM/AVS module on the CM-5. Data communication between the solver, other parallel visualization modules, and a graphics workstation, which is running AVS, is handled by CM/AVS. Partitioning of the visualization task between the CM-5 and the workstation can be done interactively in the visual programming environment provided by AVS. Flow solver parameters can also be altered by programmable interactive widgets. This system partially removes the requirement of storing large solution files at frequent time steps, a characteristic of the traditional 'simulate (yields) store (yields) visualize' post-processing approach.
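The alternative to the 'simulate-store-visualize' pipeline described above is what is now called in-situ or co-processing visualization. As a minimal, hedged sketch of that idea only (not the CM/AVS API; all names below are illustrative), a solver loop can hand selected time steps to a visualization callback instead of writing every step to disk:

    import numpy as np

    def visualize(step, field):
        """Illustrative stand-in for a visualization module. In the system
        described above this role is played by CM/AVS forwarding data to AVS
        on a graphics workstation; here we only report a summary statistic."""
        print(f"step {step}: max |u| = {np.abs(field).max():.3f}")

    def run_solver(n_steps=100, vis_every=10, n=64):
        """Toy unsteady 'solver' loop illustrating co-processing:
        no intermediate solution files are stored on disk."""
        u = np.random.rand(n, n)                 # placeholder flow field
        for step in range(n_steps):
            u = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                        np.roll(u, 1, 1) + np.roll(u, -1, 1))  # toy smoothing update
            if step % vis_every == 0:
                visualize(step, u)               # visualize in place of post-processing

    if __name__ == "__main__":
        run_solver()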
Li, Guipeng; Li, Ming; Zhang, Yiwei; Wang, Dong; Li, Rong; Guimerà, Roger; Gao, Juntao Tony; Zhang, Michael Q
2014-01-01
Rapidly increasing amounts of (physical and genetic) protein-protein interaction (PPI) data are produced by various high-throughput techniques, and interpretation of these data remains a major challenge. To gain insight into the organization and structure of the resultant large complex networks formed by interacting molecules, we developed ModuleRole, a user-friendly web server tool that uses simulated annealing, a method based on node connectivity, to find modules in a PPI network, define the role of every node, and produce files for visualization in Cytoscape and Pajek. For given proteins, it analyzes the PPI network from the BioGRID database, finds and visualizes the modules these proteins form, and then defines the role every node plays in this network, based on two topological parameters, the Participation Coefficient and the Z-score. This is the first program that provides an interactive and very friendly interface for biologists to find and visualize modules and the roles of proteins in a PPI network. It can be tested online at http://www.bioinfo.org/modulerole/index.php, which is free and open to all users with no login requirement; demo data are provided under "User Guide" in the Help menu. A non-server application of this program should be considered for high-throughput data with more than 200 nodes or for users' own interaction datasets. Users are able to bookmark the web link to the result page and access it at a later time. As an interactive and highly customizable application, ModuleRole requires no expert knowledge in graph theory on the user side and can be used on both Linux and Windows systems, making it a very useful tool for biologists to analyze and visualize PPI networks from databases such as BioGRID. ModuleRole is implemented in Java and C, and is freely available at http://www.bioinfo.org/modulerole/index.php. Supplementary information (user guide, demo data) is also available at this website. The API used for ModuleRole can be obtained upon request.
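The two topological parameters named above, the Participation Coefficient and the within-module Z-score, have standard definitions. A minimal sketch of computing them in Python with networkx follows; the module partition here comes from a generic community-detection routine rather than ModuleRole's simulated-annealing step, so treat it as an illustration of the measures, not of the server's implementation.

    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    def node_roles(G):
        """Within-module degree z-score and participation coefficient per node."""
        communities = list(greedy_modularity_communities(G))
        module_of = {n: i for i, c in enumerate(communities) for n in c}

        # within-module degree of every node
        within = {n: sum(1 for m in G[n] if module_of[m] == module_of[n]) for n in G}

        z, p = {}, {}
        for c in communities:
            vals = [within[n] for n in c]
            mean = sum(vals) / len(vals)
            std = (sum((v - mean) ** 2 for v in vals) / len(vals)) ** 0.5
            for n in c:
                z[n] = (within[n] - mean) / std if std > 0 else 0.0

        for n in G:
            k = G.degree(n)
            k_is = {}                             # edges from n into each module s
            for m in G[n]:
                s = module_of[m]
                k_is[s] = k_is.get(s, 0) + 1
            p[n] = (1.0 - sum((ks / k) ** 2 for ks in k_is.values())) if k else 0.0
        return z, p

    # usage sketch on a toy interaction list
    G = nx.Graph([("A", "B"), ("B", "C"), ("A", "C"),
                  ("C", "D"), ("D", "E"), ("E", "F"), ("D", "F")])
    z, p = node_roles(G)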
A Novel Interhemispheric Interaction: Modulation of Neuronal Cooperativity in the Visual Areas
Carmeli, Cristian; Lopez-Aguado, Laura; Schmidt, Kerstin E.; De Feo, Oscar; Innocenti, Giorgio M.
2007-01-01
Background The cortical representation of the visual field is split along the vertical midline, with the left and the right hemi-fields projecting to separate hemispheres. Connections between the visual areas of the two hemispheres are abundant near the representation of the visual midline. It was suggested that they re-establish the functional continuity of the visual field by controlling the dynamics of the responses in the two hemispheres. Methods/Principal Findings To understand if and how the interactions between the two hemispheres participate in processing visual stimuli, the synchronization of responses to identical or different moving gratings in the two hemi-fields was studied in anesthetized ferrets. The responses were recorded by multiple electrodes in the primary visual areas, and the synchronization of local field potentials across the electrodes was analyzed with a recent method derived from dynamical systems theory. Inactivating the visual areas of one hemisphere modulated the synchronization of the stimulus-driven activity in the other hemisphere. The modulation was stimulus-specific and was consistent with the fine morphology of callosal axons, in particular with the spatio-temporal pattern of activity that axonal geometry can generate. Conclusions/Significance These findings describe a new kind of interaction between the cerebral hemispheres and highlight the role of axonal geometry in modulating aspects of cortical dynamics responsible for stimulus detection and/or categorization. PMID:18074012
Towards a Comprehensive Computational Simulation System for Turbomachinery
NASA Technical Reports Server (NTRS)
Shih, Ming-Hsin
1994-01-01
The objective of this work is to develop algorithms associated with a comprehensive computational simulation system for turbomachinery flow fields. This development is accomplished in a modular fashion. These modules include grid generation, visualization, network, simulation, toolbox, and flow modules. An interactive grid generation module is customized to facilitate the grid generation process associated with complicated turbomachinery configurations. With its user-friendly graphical user interface, the user may interactively manipulate the default settings to obtain a quality grid within a fraction of the time that is usually required for building a grid about the same geometry with a general-purpose grid generation code. Non-Uniform Rational B-Spline formulations are utilized in the algorithm to maintain geometry fidelity while redistributing grid points on the solid surfaces. A Bezier curve formulation is used to allow interactive construction of inner boundaries, and it is also utilized to allow interactive point distribution. Cascade surfaces are transformed from three-dimensional surfaces of revolution into two-dimensional parametric planes for easy manipulation. Such a transformation allows these manipulated plane grids to be mapped to surfaces of revolution by any generatrix definition. A sophisticated visualization module is developed to allow visualization of both the grid and the flow solution, steady or unsteady. A network module is built to allow data transfer in a heterogeneous environment. A flow module is integrated into this system, using an existing turbomachinery flow code. A simulation module is developed to combine the network, flow, and visualization modules to achieve near real-time flow simulation about turbomachinery geometries. A toolbox module is developed to support the overall task. A batch version of the grid generation module is developed to allow portability and has been extended to allow dynamic grid generation for pitch-changing turbomachinery configurations. Various applications with different characteristics are presented to demonstrate the success of this system.
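As a concrete illustration of the Bezier-curve formulation mentioned above (a generic sketch, not code from the turbomachinery system), a curve in Bernstein form can be evaluated from a few interactively placed control points and sampled to redistribute points along an inner boundary:

    from math import comb
    import numpy as np

    def bezier(control_points, n_points=50):
        """Evaluate a Bezier curve of arbitrary degree at n_points parameter values
        using the Bernstein form B(t) = sum_i C(n,i) t^i (1-t)^(n-i) P_i."""
        P = np.asarray(control_points, dtype=float)
        n = len(P) - 1                                  # curve degree
        t = np.linspace(0.0, 1.0, n_points)[:, None]    # parameter samples
        return sum(comb(n, i) * t**i * (1 - t)**(n - i) * P[i] for i in range(n + 1))

    # e.g. an inner-boundary segment defined by four interactively placed control points,
    # sampled at 101 points to serve as a grid-point distribution along the boundary
    boundary = bezier([(0, 0), (0.3, 0.5), (0.7, 0.5), (1, 0)], n_points=101)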
NASA Astrophysics Data System (ADS)
Oliver, Joseph Steve; Hodges, Georgia W.; Moore, James N.; Cohen, Allan; Jang, Yoonsun; Brown, Scott A.; Kwon, Kyung A.; Jeong, Sophia; Raven, Sara P.; Jurkiewicz, Melissa; Robertson, Tom P.
2017-11-01
Research into the efficacy of modules featuring dynamic visualizations, case studies, and interactive learning environments is reported here. This quasi-experimental 2-year study examined the implementation of three interactive computer-based instructional modules within a curricular unit covering cellular biology concepts in an introductory high school biology course. The modules featured dynamic visualizations and focused on three processes that underlie much of cellular biology: diffusion, osmosis, and filtration. Pre-tests and post-tests were used to assess knowledge growth across the unit. A mixture Rasch model analysis of the post-test data revealed two groups of students. In both years of the study, a large proportion of the students were classified as low-achieving based on their pre-test scores. The use of the modules in the Cell Unit in year 2 was associated with a much larger proportion of the students having transitioned to the high-achieving group than in year 1. In year 2, the same teachers taught the same concepts as year 1 but incorporated the interactive computer-based modules into the cell biology unit of the curriculum. In year 2, 67% of students initially classified as low-achieving were classified as high-achieving at the end of the unit. Examination of responses to assessments embedded within the modules as well as post-test items linked transition to the high-achieving group with correct responses to items that both referenced the visualization and the contextualization of that visualization within the module. This study points to the importance of dynamic visualization within contextualized case studies as a means to support student knowledge acquisition in biology.
SOCRAT Platform Design: A Web Architecture for Interactive Visual Analytics Applications
Kalinin, Alexandr A.; Palanimalai, Selvam; Dinov, Ivo D.
2018-01-01
The modern web is a successful platform for large scale interactive web applications, including visualizations. However, there are no established design principles for building complex visual analytics (VA) web applications that could efficiently integrate visualizations with data management, computational transformation, hypothesis testing, and knowledge discovery. This imposes a time-consuming design and development process on many researchers and developers. To address these challenges, we consider the design requirements for the development of a module-based VA system architecture, adopting existing practices of large scale web application development. We present the preliminary design and implementation of an open-source platform for Statistics Online Computational Resource Analytical Toolbox (SOCRAT). This platform defines: (1) a specification for an architecture for building VA applications with multi-level modularity, and (2) methods for optimizing module interaction, re-usage, and extension. To demonstrate how this platform can be used to integrate a number of data management, interactive visualization, and analysis tools, we implement an example application for simple VA tasks including raw data input and representation, interactive visualization and analysis. PMID:29630069
SOCRAT Platform Design: A Web Architecture for Interactive Visual Analytics Applications.
Kalinin, Alexandr A; Palanimalai, Selvam; Dinov, Ivo D
2017-04-01
The modern web is a successful platform for large scale interactive web applications, including visualizations. However, there are no established design principles for building complex visual analytics (VA) web applications that could efficiently integrate visualizations with data management, computational transformation, hypothesis testing, and knowledge discovery. This imposes a time-consuming design and development process on many researchers and developers. To address these challenges, we consider the design requirements for the development of a module-based VA system architecture, adopting existing practices of large scale web application development. We present the preliminary design and implementation of an open-source platform for Statistics Online Computational Resource Analytical Toolbox (SOCRAT). This platform defines: (1) a specification for an architecture for building VA applications with multi-level modularity, and (2) methods for optimizing module interaction, re-usage, and extension. To demonstrate how this platform can be used to integrate a number of data management, interactive visualization, and analysis tools, we implement an example application for simple VA tasks including raw data input and representation, interactive visualization and analysis.
Amira: Multi-Dimensional Scientific Visualization for the GeoSciences in the 21st Century
NASA Astrophysics Data System (ADS)
Bartsch, H.; Erlebacher, G.
2003-12-01
amira (www.amiravis.com) is a general purpose framework for 3D scientific visualization that meets the needs of the non-programmer, the script writer, and the advanced programmer alike. Provided modules may be visually assembled in an interactive manner to create complex visual displays. These modules and their associated user interfaces are controlled either through a mouse, or via an interactive scripting mechanism based on Tcl. We provide interactive demonstrations of the various features of Amira and explain how these may be used to enhance the comprehension of datasets in use in the Earth Sciences community. Its features will be illustrated on scalar and vector fields on grid types ranging from Cartesian to fully unstructured. Specialized extension modules developed by some of our collaborators will be illustrated [1]. These include a module to automatically choose values for salient isosurface identification and extraction, and color maps suitable for volume rendering. During the session, we will present several demonstrations of remote networking, processing of very large spatio-temporal datasets, and various other projects that are underway. In particular, we will demonstrate WEB-IS, a java-applet interface to Amira that allows script editing via the web, and selected data analysis [2]. [1] G. Erlebacher, D. A. Yuen, F. Dubuffet, "Case Study: Visualization and Analysis of High Rayleigh Number -- 3D Convection in the Earth's Mantle", Proceedings of Visualization 2002, pp. 529--532. [2] Y. Wang, G. Erlebacher, Z. A. Garbow, D. A. Yuen, "Web-Based Service of a Visualization Package 'amira' for the Geosciences", Visual Geosciences, 2003.
Interactions of Top-Down and Bottom-Up Mechanisms in Human Visual Cortex
McMains, Stephanie; Kastner, Sabine
2011-01-01
Multiple stimuli present in the visual field at the same time compete for neural representation by mutually suppressing their evoked activity throughout visual cortex, providing a neural correlate for the limited processing capacity of the visual system. Competitive interactions among stimuli can be counteracted by top-down, goal-directed mechanisms such as attention, and by bottom-up, stimulus-driven mechanisms. Because these two processes cooperate in everyday life to bias processing toward behaviorally relevant or particularly salient stimuli, it has proven difficult to study interactions between top-down and bottom-up mechanisms. Here, we used an experimental paradigm in which we first isolated the effects of a bottom-up influence on neural competition by parametrically varying the degree of perceptual grouping in displays that were not attended. Second, we probed the effects of directed attention on the competitive interactions induced with the parametric design. We found that the amount of attentional modulation varied linearly with the degree of competition left unresolved by bottom-up processes, such that attentional modulation was greatest when neural competition was little influenced by bottom-up mechanisms and smallest when competition was strongly influenced by bottom-up mechanisms. These findings suggest that the strength of attentional modulation in the visual system is constrained by the degree to which competitive interactions have been resolved by bottom-up processes related to the segmentation of scenes into candidate objects. PMID:21228167
Top-down alpha oscillatory network interactions during visuospatial attention orienting.
Doesburg, Sam M; Bedo, Nicolas; Ward, Lawrence M
2016-05-15
Neuroimaging and lesion studies indicate that visual attention is controlled by a distributed network of brain areas. The covert control of visuospatial attention has also been associated with retinotopic modulation of alpha-band oscillations within early visual cortex, which are thought to underlie inhibition of ignored areas of visual space. The relation between distributed networks mediating attention control and more focal oscillatory mechanisms, however, remains unclear. The present study evaluated the hypothesis that alpha-band, directed, network interactions within the attention control network are systematically modulated by the locus of visuospatial attention. We localized brain areas involved in visuospatial attention orienting using magnetoencephalographic (MEG) imaging and investigated alpha-band Granger-causal interactions among activated regions using narrow-band transfer entropy. The deployment of attention to one side of visual space was indexed by lateralization of alpha power changes between about 400 ms and 700 ms post-cue onset. The changes in alpha power were associated, in the same time period, with lateralization of anterior-to-posterior information flow in the alpha band from various brain areas involved in attention control, including the anterior cingulate cortex, left middle and inferior frontal gyri, left superior temporal gyrus, right insula, and inferior parietal lobule, to early visual areas. We interpreted these results to indicate that distributed network interactions mediated by alpha oscillations exert top-down influences on early visual cortex to modulate inhibition of processing for ignored areas of visual space. Copyright © 2016. Published by Elsevier Inc.
An Interactive Assessment Framework for Visual Engagement: Statistical Analysis of a TEDx Video
ERIC Educational Resources Information Center
Farhan, Muhammad; Aslam, Muhammad
2017-01-01
This study aims to assess the visual engagement of video lectures. This analysis can help the presenter and students gauge the overall visual attention drawn by the videos. For this purpose, a new algorithm and a data collection module are developed. Videos can be transformed into a dataset with the help of the data collection module. The…
NASA Technical Reports Server (NTRS)
Saganti, P. B.; Zapp, E. N.; Wilson, J. W.; Cucinotta, F. A.
2001-01-01
The US Lab module of the International Space Station (ISS) is a primary working area where the crewmembers are expected to spend the majority of their time. Because of the directionality of radiation fields caused by the Earth shadow, the trapped-radiation pitch angle distribution, and inherent variations in the ISS shielding, a model is needed to account for these local variations in the radiation distribution. We present the calculated radiation dose (rem/yr) values for over 3,000 different points in the working area of the Lab module and estimated radiation dose values for over 25,000 different points in the human body for a given ambient radiation environment. These estimated radiation dose values are presented in a three-dimensional animated interactive visualization format. Such interactive animated visualization of the radiation distribution can be generated in near real-time to track changes in the radiation environment during the orbit precession of the ISS.
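A minimal sketch of the kind of three-dimensional, color-coded dose display described above, using matplotlib with synthetic points standing in for the calculated dose values (the actual ISS geometry and dose data are not reproduced here):

    import numpy as np
    import matplotlib.pyplot as plt

    # synthetic stand-in for ~3,000 dose points (x, y, z in m; dose in rem/yr)
    rng = np.random.default_rng(0)
    xyz = rng.uniform([-1, -1, -4], [1, 1, 4], size=(3000, 3))
    dose = 8 + 2 * np.sin(xyz[:, 2]) + rng.normal(0, 0.3, 3000)

    fig = plt.figure()
    ax = fig.add_subplot(projection="3d")
    sc = ax.scatter(xyz[:, 0], xyz[:, 1], xyz[:, 2], c=dose, s=4, cmap="viridis")
    fig.colorbar(sc, label="dose (rem/yr)")
    ax.set_xlabel("x (m)"); ax.set_ylabel("y (m)"); ax.set_zlabel("z (m)")
    plt.show()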
imDEV: a graphical user interface to R multivariate analysis tools in Microsoft Excel
USDA-ARS?s Scientific Manuscript database
Interactive modules for data exploration and visualization (imDEV) is a Microsoft Excel spreadsheet embedded application providing an integrated environment for the analysis of omics data sets with a user-friendly interface. Individual modules were designed to provide toolsets to enable interactive ...
The evolution of meaning: spatio-temporal dynamics of visual object recognition.
Clarke, Alex; Taylor, Kirsten I; Tyler, Lorraine K
2011-08-01
Research on the spatio-temporal dynamics of visual object recognition suggests a recurrent, interactive model whereby an initial feedforward sweep through the ventral stream to prefrontal cortex is followed by recurrent interactions. However, critical questions remain regarding the factors that mediate the degree of recurrent interactions necessary for meaningful object recognition. The novel prediction we test here is that recurrent interactivity is driven by increasing semantic integration demands as defined by the complexity of semantic information required by the task and driven by the stimuli. To test this prediction, we recorded magnetoencephalography data while participants named living and nonliving objects during two naming tasks. We found that the spatio-temporal dynamics of neural activity were modulated by the level of semantic integration required. Specifically, source reconstructed time courses and phase synchronization measures showed increased recurrent interactions as a function of semantic integration demands. These findings demonstrate that the cortical dynamics of object processing are modulated by the complexity of semantic information required from the visual input.
Design and implementation of a 3D ocean virtual reality and visualization engine
NASA Astrophysics Data System (ADS)
Chen, Ge; Li, Bo; Tian, Fenglin; Ji, Pengbo; Li, Wenqing
2012-12-01
In this study, a 3D virtual reality and visualization engine for rendering the ocean, named VV-Ocean, is designed for marine applications. The design goals of VV-Ocean aim at high fidelity simulation of ocean environment, visualization of massive and multidimensional marine data, and imitation of marine lives. VV-Ocean is composed of five modules, i.e. memory management module, resources management module, scene management module, rendering process management module and interaction management module. There are three core functions in VV-Ocean: reconstructing vivid virtual ocean scenes, visualizing real data dynamically in real time, imitating and simulating marine lives intuitively. Based on VV-Ocean, we establish a sea-land integration platform which can reproduce drifting and diffusion processes of oil spilling from sea bottom to surface. Environment factors such as ocean current and wind field have been considered in this simulation. On this platform oil spilling process can be abstracted as movements of abundant oil particles. The result shows that oil particles blend with water well and the platform meets the requirement for real-time and interactive rendering. VV-Ocean can be widely used in ocean applications such as demonstrating marine operations, facilitating maritime communications, developing ocean games, reducing marine hazards, forecasting the weather over oceans, serving marine tourism, and so on. Finally, further technological improvements of VV-Ocean are discussed.
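The particle abstraction of the oil-spill process described above can be sketched as a simple Lagrangian update in which each particle drifts with the current, a small fraction of the wind, and a random diffusion term; the coefficients below are illustrative assumptions, not VV-Ocean's actual parameters:

    import numpy as np

    def advect_particles(pos, current, wind, dt=1.0, wind_factor=0.03, diffusion=0.5):
        """One Lagrangian time step for oil particles: drift with the current,
        a small fraction of the wind, plus a random turbulent diffusion term."""
        rng = np.random.default_rng()
        turb = rng.normal(scale=np.sqrt(2.0 * diffusion * dt), size=pos.shape)
        return pos + (current + wind_factor * wind) * dt + turb

    # usage sketch: 10,000 particles released at the sea-bottom leak location
    pos = np.zeros((10_000, 3))
    current = np.array([0.2, 0.05, 0.0])   # m/s, illustrative ocean current
    wind = np.array([5.0, -1.0, 0.0])      # m/s, illustrative wind field
    for _ in range(3600):                   # one hour of 1-second steps
        pos = advect_particles(pos, current, wind)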
Visual Processing: Hungry Like the Mouse.
Piscopo, Denise M; Niell, Cristopher M
2016-09-07
In this issue of Neuron, Burgess et al. (2016) explore how motivational state interacts with visual processing, by examining hunger modulation of food-associated visual responses in postrhinal cortical neurons and their inputs from amygdala. Copyright © 2016 Elsevier Inc. All rights reserved.
Effects of Web-Based Interactive Modules on Engineering Students' Learning Motivations
ERIC Educational Resources Information Center
Bai, Haiyan; Aman, Amjad; Xu, Yunjun; Orlovskaya, Nina; Zhou, Mingming
2016-01-01
The purpose of this study is to assess the impact of a set of newly developed modules, the Interactive Web-Based Visualization Tools for Gluing Undergraduate Fuel Cell Systems Courses system (IGLU), on the learning motivations of engineering students, using two samples (n[subscript 1] = 144 and n[subscript 2] = 135) from senior engineering classes. The…
Visual adaptation dominates bimodal visual-motor action adaptation
de la Rosa, Stephan; Ferstl, Ylva; Bülthoff, Heinrich H.
2016-01-01
A long-standing debate revolves around the question whether visual action recognition primarily relies on visual or motor action information. Previous studies mainly examined the contribution of either visual or motor information to action recognition. Yet, the interaction of visual and motor action information is particularly important for understanding action recognition in social interactions, where humans often observe and execute actions at the same time. Here, we behaviourally examined the interaction of visual and motor action recognition processes when participants simultaneously observe and execute actions. We took advantage of behavioural action adaptation effects to investigate behavioural correlates of neural action recognition mechanisms. In line with previous results, we find that prolonged visual exposure (visual adaptation) and prolonged execution of the same action with closed eyes (non-visual motor adaptation) influence action recognition. However, when participants simultaneously adapted visually and motorically, akin to simultaneous execution and observation of actions in social interactions, adaptation effects were only modulated by visual but not motor adaptation. Action recognition, therefore, relies primarily on vision-based action recognition mechanisms in situations that require simultaneous action observation and execution, such as social interactions. The results suggest caution when associating social behaviour in social interactions with motor-based information. PMID:27029781
CyberMedVPS: visual programming for development of simulators.
Morais, Aline M; Machado, Liliane S
2011-01-01
Computer applications based on Virtual Reality (VR) have proven valuable for training and teaching in the medical field because of their ability to simulate realistic situations in which users can practice skills and decision making. Frameworks to develop such simulators are available, but their use demands knowledge of programming, which makes them hard for non-programmer users to work with. To address this problem we present CyberMedVPS, a graphical module for the CyberMed framework that implements Visual Programming concepts to allow the development of simulators by non-programmer professionals of the medical field.
PyPathway: Python Package for Biological Network Analysis and Visualization.
Xu, Yang; Luo, Xiao-Chun
2018-05-01
Life science studies represent one of the biggest generators of large data sets, mainly because of rapid advances in sequencing technology. Biological networks, including interaction networks and human-curated pathways, are essential for understanding these high-throughput data sets. Biological network analysis offers a method to explore systematically not only the molecular complexity of a particular disease but also the molecular relationships among apparently distinct phenotypes. Several packages for the Python community have been developed, such as BioPython and Goatools; however, tools to perform comprehensive network analysis and visualization are still needed. Here, we have developed PyPathway, an extensible, free and open-source Python package for functional enrichment analysis, network modeling, and network visualization. The network process module supports various interaction network and pathway databases such as Reactome, WikiPathway, STRING, and BioGRID. The network analysis module implements overrepresentation analysis, gene set enrichment analysis, network-based enrichment, and de novo network modeling. Finally, the visualization and data publishing modules enable users to share their analyses by using an easy web application. For package availability, see the first Reference.
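Overrepresentation analysis, one of the methods listed above, reduces to a hypergeometric test per gene set. Rather than guessing at PyPathway's own call signatures, the following self-contained sketch shows the underlying computation with scipy on toy gene lists:

    from scipy.stats import hypergeom

    def ora_pvalue(study_genes, pathway_genes, background_size):
        """P(X >= k) of drawing k pathway genes in a study list of size N,
        from a background of M genes of which n belong to the pathway."""
        study, pathway = set(study_genes), set(pathway_genes)
        k = len(study & pathway)
        return hypergeom.sf(k - 1, background_size, len(pathway), len(study))

    # toy example: 3 of 5 study genes fall in a 50-gene pathway, background of 20,000 genes
    p = ora_pvalue(["TP53", "BRCA1", "EGFR", "MYC", "KRAS"],
                   ["TP53", "EGFR", "MYC"] + [f"G{i}" for i in range(47)],
                   background_size=20_000)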
An Essay on Interactive Investigations of the Zeeman Effect in the Interstellar Medium
ERIC Educational Resources Information Center
Woolsey, Lauren
2015-01-01
The paper presents an interactive module created through the Wolfram Demonstrations Project that visualizes the Zeeman effect for the small magnetic field strengths present in the interstellar medium. The paper provides an overview of spectral lines and a few examples of strong and weak Zeeman splitting before discussing the module in depth.…
Cocchi, Luca; Sale, Martin V; L Gollo, Leonardo; Bell, Peter T; Nguyen, Vinh T; Zalesky, Andrew; Breakspear, Michael; Mattingley, Jason B
2016-01-01
Within the primate visual system, areas at lower levels of the cortical hierarchy process basic visual features, whereas those at higher levels, such as the frontal eye fields (FEF), are thought to modulate sensory processes via feedback connections. Despite these functional exchanges during perception, there is little shared activity between early and late visual regions at rest. How interactions emerge between regions encompassing distinct levels of the visual hierarchy remains unknown. Here we combined neuroimaging, non-invasive cortical stimulation and computational modelling to characterize changes in functional interactions across widespread neural networks before and after local inhibition of primary visual cortex or FEF. We found that stimulation of early visual cortex selectively increased feedforward interactions with FEF and extrastriate visual areas, whereas identical stimulation of the FEF decreased feedback interactions with early visual areas. Computational modelling suggests that these opposing effects reflect a fast-slow timescale hierarchy from sensory to association areas. DOI: http://dx.doi.org/10.7554/eLife.15252.001 PMID:27596931
Cocchi, Luca; Sale, Martin V; L Gollo, Leonardo; Bell, Peter T; Nguyen, Vinh T; Zalesky, Andrew; Breakspear, Michael; Mattingley, Jason B
2016-09-06
Within the primate visual system, areas at lower levels of the cortical hierarchy process basic visual features, whereas those at higher levels, such as the frontal eye fields (FEF), are thought to modulate sensory processes via feedback connections. Despite these functional exchanges during perception, there is little shared activity between early and late visual regions at rest. How interactions emerge between regions encompassing distinct levels of the visual hierarchy remains unknown. Here we combined neuroimaging, non-invasive cortical stimulation and computational modelling to characterize changes in functional interactions across widespread neural networks before and after local inhibition of primary visual cortex or FEF. We found that stimulation of early visual cortex selectively increased feedforward interactions with FEF and extrastriate visual areas, whereas identical stimulation of the FEF decreased feedback interactions with early visual areas. Computational modelling suggests that these opposing effects reflect a fast-slow timescale hierarchy from sensory to association areas.
The Habitable Zone Gallery 2.0: The Online Exoplanet System Visualization Suite
NASA Astrophysics Data System (ADS)
Chandler, C. O.; Kane, S. R.; Gelino, D. M.
2017-11-01
The Habitable Zone Gallery 2.0 provides new and improved visualization and data analysis tools to the exoplanet habitability community and beyond. Modules include interactive habitable zone plotting and downloadable 3D animations.
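Habitable-zone plots of this kind are typically built from the scaling d = sqrt(L / S_eff), the distance at which a planet receives a given effective stellar flux. A minimal sketch follows; the S_eff defaults approximate commonly used conservative limits and are assumptions here, not values taken from the Gallery's code:

    import math

    def habitable_zone(L_star, s_eff_inner=1.11, s_eff_outer=0.36):
        """Inner and outer habitable-zone distances (AU) for a star of luminosity
        L_star in solar units, via d = sqrt(L / S_eff). The S_eff defaults are
        approximate conservative-limit values and should be treated as assumptions."""
        return math.sqrt(L_star / s_eff_inner), math.sqrt(L_star / s_eff_outer)

    # usage: a star of 0.6 solar luminosities
    inner_au, outer_au = habitable_zone(0.6)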
imDEV: a graphical user interface to R multivariate analysis tools in Microsoft Excel.
Grapov, Dmitry; Newman, John W
2012-09-01
Interactive modules for Data Exploration and Visualization (imDEV) is a Microsoft Excel spreadsheet-embedded application providing an integrated environment for the analysis of omics data through a user-friendly interface. Individual modules enable interactive and dynamic analyses of large data sets by interfacing R's multivariate statistics and highly customizable visualizations with the spreadsheet environment, aiding robust inferences and generating information-rich data visualizations. This tool provides access to multiple comparisons with false discovery correction, hierarchical clustering, principal and independent component analyses, partial least squares regression and discriminant analysis, through an intuitive interface for creating high-quality two- and three-dimensional visualizations including scatter plot matrices, distribution plots, dendrograms, heat maps, biplots, trellis biplots and correlation networks. Freely available for download at http://sourceforge.net/projects/imdev/. Implemented in R and VBA and supported by Microsoft Excel (2003, 2007 and 2010).
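imDEV itself drives R from within Excel; purely as a language-neutral illustration of two of the analyses it exposes (principal component analysis and hierarchical clustering of an omics-style matrix), here is a short scikit-learn/scipy sketch on synthetic data:

    import numpy as np
    from sklearn.decomposition import PCA
    from scipy.cluster.hierarchy import linkage, dendrogram

    # synthetic omics-style matrix: 40 samples x 200 features, with two sample groups
    rng = np.random.default_rng(1)
    X = rng.normal(size=(40, 200))
    X[:20] += 1.5

    scores = PCA(n_components=2).fit_transform(X)   # principal component scores
    tree = linkage(X, method="ward")                # hierarchical clustering of samples
    # dendrogram(tree) draws the cluster tree; plotting scores[:, 0] against
    # scores[:, 1] gives the familiar two-dimensional PCA score plot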
Wagatsuma, Nobuhiko; Sakai, Ko
2017-01-01
Border ownership (BO) indicates which side of a contour owns a border, and it plays a fundamental role in figure-ground segregation. The majority of neurons in V2 and V4 areas of monkeys exhibit BO selectivity. A physiological work reported that the responses of BO-selective cells show a rapid transition when a presented square is flipped along its classical receptive field (CRF) so that the opposite BO is presented, whereas the transition is significantly slower when a square with a clear BO is replaced by an ambiguous edge, e.g., when the square is enlarged greatly. The rapid transition seemed to reflect the influence of feedforward processing on BO selectivity. Herein, we investigated the role of feedforward signals and cortical interactions for time-courses in BO-selective cells by modeling a visual cortical network comprising V1, V2, and posterior parietal (PP) modules. In our computational model, the recurrent pathways among these modules gradually established the visual progress and the BO assignments. Feedforward inputs mainly determined the activities of these modules. Surrounding suppression/facilitation of early-level areas modulates the activities of V2 cells to provide BO signals. Weak feedback signals from the PP module enhanced the contrast gain extracted in V1, which underlies the attentional modulation of BO signals. Model simulations exhibited time-courses depending on the BO ambiguity, which were caused by the integration delay of V1 and V2 cells and the local inhibition therein given the difference in input stimulus. However, our model did not fully explain the characteristics of crucially slow transition: the responses of BO-selective physiological cells indicated the persistent activation several times longer than that of our model after the replacement with the ambiguous edge. Furthermore, the time-course of BO-selective model cells replicated the attentional modulation of response time in human psychophysical experiments. These attentional modulations for time-courses were induced by selective enhancement of early-level features due to interactions between V1 and PP. Our proposed model suggests fundamental roles of surrounding suppression/facilitation based on feedforward inputs as well as the interactions between early and parietal visual areas with respect to the ambiguity dependence of the neural dynamics in intermediate-level vision. PMID:28163688
Wagatsuma, Nobuhiko; Sakai, Ko
2016-01-01
Border ownership (BO) indicates which side of a contour owns a border, and it plays a fundamental role in figure-ground segregation. The majority of neurons in V2 and V4 areas of monkeys exhibit BO selectivity. A physiological work reported that the responses of BO-selective cells show a rapid transition when a presented square is flipped along its classical receptive field (CRF) so that the opposite BO is presented, whereas the transition is significantly slower when a square with a clear BO is replaced by an ambiguous edge, e.g., when the square is enlarged greatly. The rapid transition seemed to reflect the influence of feedforward processing on BO selectivity. Herein, we investigated the role of feedforward signals and cortical interactions for time-courses in BO-selective cells by modeling a visual cortical network comprising V1, V2, and posterior parietal (PP) modules. In our computational model, the recurrent pathways among these modules gradually established the visual progress and the BO assignments. Feedforward inputs mainly determined the activities of these modules. Surrounding suppression/facilitation of early-level areas modulates the activities of V2 cells to provide BO signals. Weak feedback signals from the PP module enhanced the contrast gain extracted in V1, which underlies the attentional modulation of BO signals. Model simulations exhibited time-courses depending on the BO ambiguity, which were caused by the integration delay of V1 and V2 cells and the local inhibition therein given the difference in input stimulus. However, our model did not fully explain the characteristics of crucially slow transition: the responses of BO-selective physiological cells indicated the persistent activation several times longer than that of our model after the replacement with the ambiguous edge. Furthermore, the time-course of BO-selective model cells replicated the attentional modulation of response time in human psychophysical experiments. These attentional modulations for time-courses were induced by selective enhancement of early-level features due to interactions between V1 and PP. Our proposed model suggests fundamental roles of surrounding suppression/facilitation based on feedforward inputs as well as the interactions between early and parietal visual areas with respect to the ambiguity dependence of the neural dynamics in intermediate-level vision.
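A minimal rate-model sketch of the kind of recurrent module interaction described above, with feedforward input dominating the drive and weak feedback from the parietal module enhancing gain in the early module; the time constants and weights are illustrative assumptions, not the parameters of the published model:

    import numpy as np

    def simulate(T=300, dt=1.0, w_ff=1.0, w_fb=0.15):
        """Three interacting modules (V1, V2, PP) as leaky rate units.
        Feedforward input mainly drives the responses; weak PP feedback
        multiplicatively enhances V1 contrast gain, as described above."""
        tau = {"V1": 10.0, "V2": 20.0, "PP": 40.0}    # ms, illustrative
        r = {"V1": 0.0, "V2": 0.0, "PP": 0.0}
        stim = 1.0                                     # feedforward contrast input
        trace = []
        for _ in range(T):
            gain = 1.0 + w_fb * r["PP"]                # feedback-modulated gain
            dV1 = (-r["V1"] + gain * stim) / tau["V1"]
            dV2 = (-r["V2"] + w_ff * r["V1"]) / tau["V2"]
            dPP = (-r["PP"] + w_ff * r["V2"]) / tau["PP"]
            r = {"V1": r["V1"] + dt * dV1,
                 "V2": r["V2"] + dt * dV2,
                 "PP": r["PP"] + dt * dPP}
            trace.append(r["V2"])                      # V2 carries the BO-related signal
        return np.array(trace)

    responses = simulate()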
Affective and contextual values modulate spatial frequency use in object recognition
Caplette, Laurent; West, Gregory; Gomot, Marie; Gosselin, Frédéric; Wicker, Bruno
2014-01-01
Visual object recognition is of fundamental importance in our everyday interaction with the environment. Recent models of visual perception emphasize the role of top-down predictions facilitating object recognition via initial guesses that limit the number of object representations that need to be considered. Several results suggest that this rapid and efficient object processing relies on the early extraction and processing of low spatial frequencies (LSF). The present study aimed to investigate the SF content of visual object representations and its modulation by contextual and affective values of the perceived object during a picture-name verification task. Stimuli consisted of pictures of objects equalized in SF content and categorized as having low or high affective and contextual values. To access the SF content of stored visual representations of objects, SFs of each image were then randomly sampled on a trial-by-trial basis. Results reveal that intermediate SFs between 14 and 24 cycles per object (2.3–4 cycles per degree) are correlated with fast and accurate identification for all categories of objects. Moreover, there was a significant interaction between affective and contextual values over the SFs correlating with fast recognition. These results suggest that affective and contextual values of a visual object modulate the SF content of its internal representation, thus highlighting the flexibility of the visual recognition system. PMID:24904514
Spatial and Feature-Based Attention in a Layered Cortical Microcircuit Model
Wagatsuma, Nobuhiko; Potjans, Tobias C.; Diesmann, Markus; Sakai, Ko; Fukai, Tomoki
2013-01-01
Directing attention to the spatial location or the distinguishing feature of a visual object modulates neuronal responses in the visual cortex and the stimulus discriminability of subjects. However, the spatial and feature-based modes of attention differently influence visual processing by changing the tuning properties of neurons. Intriguingly, neurons' tuning curves are modulated similarly across different visual areas under both these modes of attention. Here, we explored the mechanism underlying the effects of these two modes of visual attention on the orientation selectivity of visual cortical neurons. To do this, we developed a layered microcircuit model. This model describes multiple orientation-specific microcircuits sharing their receptive fields and consisting of layers 2/3, 4, 5, and 6. These microcircuits represent a functional grouping of cortical neurons and mutually interact via lateral inhibition and excitatory connections between groups with similar selectivity. The individual microcircuits receive bottom-up visual stimuli and top-down attention in different layers. A crucial assumption of the model is that feature-based attention activates orientation-specific microcircuits for the relevant feature selectively, whereas spatial attention activates all microcircuits homogeneously, irrespective of their orientation selectivity. Consequently, our model simultaneously accounts for the multiplicative scaling of neuronal responses in spatial attention and the additive modulations of orientation tuning curves in feature-based attention, which have been observed widely in various visual cortical areas. Simulations of the model predict contrasting differences between excitatory and inhibitory neurons in the two modes of attentional modulations. Furthermore, the model replicates the modulation of the psychophysical discriminability of visual stimuli in the presence of external noise. Our layered model with a biologically suggested laminar structure describes the basic circuit mechanism underlying the attention-mode specific modulations of neuronal responses and visual perception. PMID:24324628
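The contrast reported above between the two attention modes can be written down directly: spatial attention scales an orientation tuning curve multiplicatively, whereas feature-based attention adds a feature-specific component. A short numerical sketch with an illustrative Gaussian tuning curve and arbitrary gain values:

    import numpy as np

    theta = np.linspace(-90, 90, 181)                        # orientation (deg)
    tuning = np.exp(-0.5 * (theta / 20.0) ** 2)              # baseline Gaussian tuning curve

    g_spatial = 1.3                                           # illustrative gain
    spatial_attention = g_spatial * tuning                    # multiplicative response scaling

    a_feature = 0.2                                           # illustrative additive strength
    feature_pref = np.exp(-0.5 * (theta / 20.0) ** 2)         # attended-feature profile
    feature_attention = tuning + a_feature * feature_pref     # additive tuning-curve modulation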
Emotion and anxiety potentiate the way attention alters visual appearance.
Barbot, Antoine; Carrasco, Marisa
2018-04-12
The ability to swiftly detect and prioritize the processing of relevant information around us is critical for the way we interact with our environment. Selective attention is a key mechanism that serves this purpose, improving performance in numerous visual tasks. Reflexively attending to sudden information helps detect impeding threat or danger, a possible reason why emotion modulates the way selective attention affects perception. For instance, the sudden appearance of a fearful face potentiates the effects of exogenous (involuntary, stimulus-driven) attention on performance. Internal states such as trait anxiety can also modulate the impact of attention on early visual processing. However, attention does not only improve performance; it also alters the way visual information appears to us, e.g. by enhancing perceived contrast. Here we show that emotion potentiates the effects of exogenous attention on both performance and perceived contrast. Moreover, we found that trait anxiety mediates these effects, with stronger influences of attention and emotion in anxious observers. Finally, changes in performance and appearance correlated with each other, likely reflecting common attentional modulations. Altogether, our findings show that emotion and anxiety interact with selective attention to truly alter how we see.
VPython: Writing Real-time 3D Physics Programs
NASA Astrophysics Data System (ADS)
Chabay, Ruth
2001-06-01
VPython (http://cil.andrew.cmu.edu/projects/visual) combines the Python programming language with an innovative 3D graphics module called Visual, developed by David Scherer. Designed to make 3D physics simulations accessible to novice programmers, VPython allows the programmer to write a purely computational program without any graphics code, and produces an interactive realtime 3D graphical display. In a program 3D objects are created and their positions modified by computational algorithms. Running in a separate thread, the Visual module monitors the positions of these objects and renders them many times per second. Using the mouse, one can zoom and rotate to navigate through the scene. After one hour of instruction, students in an introductory physics course at Carnegie Mellon University, including those who have never programmed before, write programs in VPython to model the behavior of physical systems and to visualize fields in 3D. The Numeric array processing module allows the construction of more sophisticated simulations and models as well. VPython is free and open source. The Visual module is based on OpenGL, and runs on Windows, Linux, and Macintosh.
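A few lines in the style of the classic Visual module described above, showing a purely computational update loop that nonetheless produces an interactive 3D display; the import path and attribute names follow that era's common usage and should be treated as an assumption rather than a current API reference:

    from visual import sphere, vector, rate   # classic VPython 'Visual' module

    ball = sphere(pos=vector(0, 5, 0), radius=0.5)   # 3D object created by the program
    ball.velocity = vector(2, 0, 0)
    g = vector(0, -9.8, 0)
    dt = 0.01

    while ball.pos.y > 0:        # pure computation; Visual renders the scene in parallel
        rate(100)                # limit the loop to about 100 iterations per second
        ball.velocity = ball.velocity + g * dt
        ball.pos = ball.pos + ball.velocity * dt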
ERIC Educational Resources Information Center
Poeylaut-Palena, Andres, A.; de los Angeles Laborde, Maria
2013-01-01
A learning module for molecular level analysis of protein structure and ligand/drug interaction through the visualization of X-ray diffraction is presented. Using DeepView as molecular model visualization software, students learn about the general concepts of protein structure. This Biochemistry classroom exercise is designed to be carried out by…
Multisensory effects on somatosensation: a trimodal visuo-vestibular-tactile interaction
Kaliuzhna, Mariia; Ferrè, Elisa Raffaella; Herbelin, Bruno; Blanke, Olaf; Haggard, Patrick
2016-01-01
Vestibular information about self-motion is combined with other sensory signals. Previous research described both visuo-vestibular and vestibular-tactile bilateral interactions, but the simultaneous interaction between all three sensory modalities has not been explored. Here we exploit a previously reported visuo-vestibular integration to investigate multisensory effects on tactile sensitivity in humans. Tactile sensitivity was measured during passive whole body rotations alone or in conjunction with optic flow, creating either purely vestibular or visuo-vestibular sensations of self-motion. Our results demonstrate that tactile sensitivity is modulated by perceived self-motion, as provided by a combined visuo-vestibular percept, and not by the visual and vestibular cues independently. We propose a hierarchical multisensory interaction that underpins somatosensory modulation: visual and vestibular cues are first combined to produce a multisensory self-motion percept. Somatosensory processing is then enhanced according to the degree of perceived self-motion. PMID:27198907
NASA Technical Reports Server (NTRS)
Walatka, Pamela P.; Clucas, Jean; McCabe, R. Kevin; Plessel, Todd; Potter, R.; Cooper, D. M. (Technical Monitor)
1994-01-01
The Flow Analysis Software Toolkit, FAST, is a software environment for visualizing data. FAST is a collection of separate programs (modules) that run simultaneously and allow the user to examine the results of numerical and experimental simulations. The user can load data files, perform calculations on the data, visualize the results of these calculations, construct scenes of 3D graphical objects, and plot, animate and record the scenes. Computational Fluid Dynamics (CFD) visualization is the primary intended use of FAST, but FAST can also assist in the analysis of other types of data. FAST combines the capabilities of such programs as PLOT3D, RIP, SURF, and GAS into one environment with modules that share data. Sharing data between modules eliminates the drudgery of transferring data between programs. All the modules in the FAST environment have a consistent, highly interactive graphical user interface. Most commands are entered by pointing and clicking. The modular construction of FAST makes it flexible and extensible. The environment can be custom configured and new modules can be developed and added as needed. The following modules have been developed for FAST: VIEWER, FILE IO, CALCULATOR, SURFER, TOPOLOGY, PLOTTER, TITLER, TRACER, ARCGRAPH, GQ, SURFERU, SHOTET, and ISOLEVU. A utility is also included to make the inclusion of user-defined modules in the FAST environment easy. The VIEWER module is the central control for the FAST environment. From VIEWER, the user can change object attributes, interactively position objects in three-dimensional space, define and save scenes, create animations, spawn new FAST modules, add additional view windows, and save and execute command scripts. The FAST User Guide uses text and FAST MAPS (graphical representations of the entire user interface) to guide the user through the use of FAST. Chapters include: Maps, Overview, Tips, Getting Started Tutorial, a separate chapter for each module, file formats, and system administration.
imDEV: a graphical user interface to R multivariate analysis tools in Microsoft Excel
Grapov, Dmitry; Newman, John W.
2012-01-01
Summary: Interactive modules for Data Exploration and Visualization (imDEV) is a Microsoft Excel spreadsheet-embedded application providing an integrated environment for the analysis of omics data through a user-friendly interface. Individual modules enable interactive and dynamic analyses of large data sets by interfacing R's multivariate statistics and highly customizable visualizations with the spreadsheet environment, aiding robust inferences and generating information-rich data visualizations. This tool provides access to multiple comparisons with false discovery correction, hierarchical clustering, principal and independent component analyses, partial least squares regression and discriminant analysis, through an intuitive interface for creating high-quality two- and three-dimensional visualizations including scatter plot matrices, distribution plots, dendrograms, heat maps, biplots, trellis biplots and correlation networks. Availability and implementation: Freely available for download at http://sourceforge.net/projects/imdev/. Implemented in R and VBA and supported by Microsoft Excel (2003, 2007 and 2010). Contact: John.Newman@ars.usda.gov Supplementary Information: Installation instructions, tutorials and users manual are available at http://sourceforge.net/projects/imdev/. PMID:22815358
Attractive faces temporally modulate visual attention
Nakamura, Koyo; Kawabata, Hideaki
2014-01-01
Facial attractiveness is an important biological and social signal in social interaction. Recent research has demonstrated that an attractive face captures greater spatial attention than an unattractive face does. Little is known, however, about the temporal characteristics of visual attention for facial attractiveness. In this study, we investigated the temporal modulation of visual attention induced by facial attractiveness using a rapid serial visual presentation. Fourteen male faces and two female faces were successively presented for 160 ms each, and participants were asked to identify the two female faces embedded in the series of male distractor faces. Identification of the second female target (T2) was impaired when the first target (T1) was attractive compared to neutral or unattractive faces at a 320 ms stimulus onset asynchrony (SOA); identification was improved when T1 was attractive compared to unattractive faces at a 640 ms SOA. These findings suggest that the spontaneous appraisal of facial attractiveness modulates temporal attention. PMID:24994994
US Army Research Laboratory Visualization Framework Design Document
2016-01-01
This section highlights each module in the ARL-VF, and subsequent sections provide details on how each module interacts. [Fig. 2 caption: ARL-VF with its components, including ConfigAgent, MultiTouch, VizDatabase, VizController, TUIO, User, VizDaemon, and TestPoint.] ... received by the destination. The sequence diagram in Fig. 4 shows this interaction.
Modulation of neuronal responses during covert search for visual feature conjunctions
Buracas, Giedrius T.; Albright, Thomas D.
2009-01-01
While searching for an object in a visual scene, an observer's attentional focus and eye movements are often guided by information about object features and spatial locations. Both spatial and feature-specific attention are known to modulate neuronal responses in visual cortex, but little is known of the dynamics and interplay of these mechanisms as visual search progresses. To address this issue, we recorded from directionally selective cells in visual area MT of monkeys trained to covertly search for targets defined by a unique conjunction of color and motion features and to signal target detection with an eye movement to the putative target. Two patterns of response modulation were observed. One pattern consisted of enhanced responses to targets presented in the receptive field (RF). These modulations occurred at the end-stage of search and were more potent during correct target identification than during erroneous saccades to a distractor in RF, thus suggesting that this modulation is not a mere presaccadic enhancement. A second pattern of modulation was observed when RF stimuli were nontargets that shared a feature with the target. The latter effect was observed during early stages of search and is consistent with a global feature-specific mechanism. This effect often terminated before target identification, thus suggesting that it interacts with spatial attention. This modulation was exhibited not only for motion but also for color cue, although MT neurons are known to be insensitive to color. Such cue-invariant attentional effects may contribute to a feature binding mechanism acting across visual dimensions. PMID:19805385
Modulation of neuronal responses during covert search for visual feature conjunctions.
Buracas, Giedrius T; Albright, Thomas D
2009-09-29
While searching for an object in a visual scene, an observer's attentional focus and eye movements are often guided by information about object features and spatial locations. Both spatial and feature-specific attention are known to modulate neuronal responses in visual cortex, but little is known of the dynamics and interplay of these mechanisms as visual search progresses. To address this issue, we recorded from directionally selective cells in visual area MT of monkeys trained to covertly search for targets defined by a unique conjunction of color and motion features and to signal target detection with an eye movement to the putative target. Two patterns of response modulation were observed. One pattern consisted of enhanced responses to targets presented in the receptive field (RF). These modulations occurred at the end-stage of search and were more potent during correct target identification than during erroneous saccades to a distractor in RF, thus suggesting that this modulation is not a mere presaccadic enhancement. A second pattern of modulation was observed when RF stimuli were nontargets that shared a feature with the target. The latter effect was observed during early stages of search and is consistent with a global feature-specific mechanism. This effect often terminated before target identification, thus suggesting that it interacts with spatial attention. This modulation was exhibited not only for motion but also for color cue, although MT neurons are known to be insensitive to color. Such cue-invariant attentional effects may contribute to a feature binding mechanism acting across visual dimensions.
Intelligent Visual Input: A Graphical Method for Rapid Entry of Patient-Specific Data
Bergeron, Bryan P.; Greenes, Robert A.
1987-01-01
Intelligent Visual Input (IVI) provides a rapid, graphical method of data entry for both expert system interaction and medical record keeping purposes. Key components of IVI include: a high-resolution graphic display; an interface supportive of rapid selection, i.e., one utilizing a mouse or light pen; algorithm simplification modules; and intelligent graphic algorithm expansion modules. A prototype IVI system, designed to facilitate entry of physical exam findings, is used to illustrate the potential advantages of this approach.
Interactions between attention, context and learning in primary visual cortex.
Gilbert, C; Ito, M; Kapadia, M; Westheimer, G
2000-01-01
Attention in early visual processing engages the higher order, context dependent properties of neurons. Even at the earliest stages of visual cortical processing neurons play a role in intermediate level vision - contour integration and surface segmentation. The contextual influences mediating this process may be derived from long range connections within primary visual cortex (V1). These influences are subject to perceptual learning, and are strongly modulated by visuospatial attention, which is itself a learning dependent process. The attentional influences may involve interactions between feedback and horizontal connections in V1. V1 is therefore a dynamic and active processor, subject to top-down influences.
A distributed analysis and visualization system for model and observational data
NASA Technical Reports Server (NTRS)
Wilhelmson, Robert B.
1994-01-01
Software was developed with NASA support to aid in the analysis and display of the massive amounts of data generated from satellites, observational field programs, and from model simulations. This software was developed in the context of the PATHFINDER (Probing ATmospHeric Flows in an Interactive and Distributed EnviRonment) Project. The overall aim of this project is to create a flexible, modular, and distributed environment for data handling, modeling simulations, data analysis, and visualization of atmospheric and fluid flows. Software completed with NASA support includes GEMPAK analysis, data handling, and display modules for which collaborators at NASA had primary responsibility, and prototype software modules for three-dimensional interactive and distributed control and display as well as data handling, for which NCSA was responsible. Overall process control was handled through a scientific and visualization application builder from Silicon Graphics known as the Iris Explorer. In addition, the GEMPAK related work (GEMVIS) was also ported to the Advanced Visualization System (AVS) application builder. Many modules were developed to enhance those already available in Iris Explorer including HDF file support, improved visualization and display, simple lattice math, and the handling of metadata through development of a new grid datatype. Complete source and runtime binaries along with on-line documentation are available via the World Wide Web at: http://redrock.ncsa.uiuc.edu/PATHFINDER/pathre12/top/top.html.
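As a rough illustration of the gridded-data handling described above, the sketch below reads a lattice of values from an HDF file and wraps it in a minimal grid datatype that a downstream display module could consume. It is not the PATHFINDER code: the dataset names, the Grid class, and the use of h5py/numpy as modern stand-ins for the HDF tooling of the era are all assumptions made for illustration.

```python
# Not the PATHFINDER code: a minimal sketch of reading a gridded field from an
# HDF file and wrapping it in a simple "grid datatype" that a downstream
# display module could consume. Dataset names ("temperature", "lat", "lon")
# are hypothetical; h5py and numpy stand in for the HDF tooling of the era.
import h5py
import numpy as np

class Grid:
    """Minimal grid container: data values plus coordinate axes."""
    def __init__(self, values, lat, lon):
        self.values = values
        self.lat = lat
        self.lon = lon

def read_grid(path, var="temperature"):
    with h5py.File(path, "r") as f:
        return Grid(np.array(f[var]), np.array(f["lat"]), np.array(f["lon"]))

def summarize(grid):
    # Stand-in for a display module: report basic lattice statistics.
    print(f"grid {grid.values.shape}, min={grid.values.min():.2f}, "
          f"max={grid.values.max():.2f}")

# Example usage (assumes a file containing the hypothetical datasets exists):
# summarize(read_grid("model_output.h5"))
```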
Prosody production networks are modulated by sensory cues and social context.
Klasen, Martin; von Marschall, Clara; Isman, Güldehen; Zvyagintsev, Mikhail; Gur, Ruben C; Mathiak, Klaus
2018-03-05
The neurobiology of emotional prosody production is not well investigated. In particular, the effects of cues and social context are not known. The present study sought to differentiate cued from free emotion generation and the effect of social feedback from a human listener. Online speech filtering enabled fMRI during prosodic communication in 30 participants. Emotional vocalizations were a) free, b) auditorily cued, c) visually cued, or d) with interactive feedback. In addition to distributed language networks, cued emotions increased activity in auditory and - in the case of visual stimuli - visual cortex. Responses were larger in the right pSTG and the ventral striatum when participants were listened to and received feedback from the experimenter. Sensory, language, and reward networks contributed to prosody production and were modulated by cues and social context. The right pSTG is a central hub for communication in social interactions - in particular for interpersonal evaluation of vocal emotions.
Cytoscape tools for the web age: D3.js and Cytoscape.js exporters
Ono, Keiichiro; Demchak, Barry; Ideker, Trey
2014-01-01
In this paper we present new data export modules for Cytoscape 3 that can generate network files for Cytoscape.js and D3.js. The Cytoscape.js exporter is implemented as a core feature of Cytoscape 3, and the D3.js exporter is available as a Cytoscape 3 app. These modules enable users to seamlessly export network and table data sets generated in Cytoscape to popular JavaScript library readable formats. In addition, we implemented template web applications for browser-based interactive network visualization that can be used as a basis for complex data visualization applications for bioinformatics research. Example web applications created with these tools demonstrate how Cytoscape works in modern data visualization workflows built with traditional desktop tools and emerging web-based technologies. This interactivity gives researchers more flexibility than static images, thereby greatly improving the quality of insights they can gain from them. PMID:25520778
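To make the export target concrete, here is a minimal sketch of the kind of structure such an exporter emits: a network serialized into the Cytoscape.js "elements" JSON layout, in which every node and edge carries a "data" object. The example network and identifiers are invented for illustration and are not taken from the paper.

```python
# A made-up network serialized into the Cytoscape.js "elements" JSON layout
# (each node and edge wrapped in a "data" object), to show the shape of the
# data such an exporter produces. Identifiers here are illustrative only.
import json

def to_cytoscape_js(nodes, edges):
    return {
        "elements": {
            "nodes": [{"data": {"id": n}} for n in nodes],
            "edges": [
                {"data": {"id": f"{s}-{t}", "source": s, "target": t}}
                for s, t in edges
            ],
        }
    }

network = to_cytoscape_js(["TP53", "MDM2", "CDKN1A"],
                          [("TP53", "MDM2"), ("TP53", "CDKN1A")])
print(json.dumps(network, indent=2))
```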
Design of Instrument Control Software for Solar Vector Magnetograph at Udaipur Solar Observatory
NASA Astrophysics Data System (ADS)
Gosain, Sanjay; Venkatakrishnan, P.; Venugopalan, K.
2004-04-01
A magnetograph is an instrument which measures the solar magnetic field by measuring Zeeman-induced polarization in solar spectral lines. In a typical filter-based magnetograph there are three main modules, namely a polarimeter, a narrow-band spectrometer (filter), and an imager (CCD camera). For successful operation of the magnetograph it is essential that these modules work in synchronization with each other. Here, we describe the design of the instrument control system implemented for the Solar Vector Magnetograph under development at Udaipur Solar Observatory. The control software is written in Visual Basic and exploits Component Object Model (COM) components for fast and flexible application development. The user can interact with the instrument modules through a Graphical User Interface (GUI) and can program the sequence of magnetograph operations. The integration of Interactive Data Language (IDL) ActiveX components in the interface provides a powerful tool for online visualization, analysis and processing of images.
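The sketch below illustrates only the synchronization idea described above: stepping the polarimeter through its modulation states, holding the filter at a fixed passband, and triggering the camera once per state. It is not the Visual Basic/COM implementation; all class names, method names, states, and the 630.25 nm setting are hypothetical placeholders.

```python
# Illustrative only (the actual system is written in Visual Basic with COM
# components): the core sequencing idea of keeping polarimeter, filter and
# camera in step. All class names, method names and values are hypothetical.
class Polarimeter:
    def set_state(self, state):
        print(f"polarimeter -> {state}")

class NarrowBandFilter:
    def tune(self, wavelength_nm):
        print(f"filter tuned to {wavelength_nm} nm")

class Camera:
    def expose(self, seconds):
        print(f"exposing for {seconds} s")
        return "frame"

def magnetogram_sequence(pol, flt, cam, states, wavelength_nm=630.25):
    """Acquire one frame per polarimeter modulation state at a fixed passband."""
    flt.tune(wavelength_nm)
    frames = []
    for state in states:
        pol.set_state(state)   # in a real system, wait for the modulator to settle
        frames.append((state, cam.expose(0.1)))
    return frames

frames = magnetogram_sequence(Polarimeter(), NarrowBandFilter(), Camera(),
                              states=["I+Q", "I-Q", "I+U", "I-U"])
```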
A Modern and Interactive Approach to Learning Laser and Optical Communications.
ERIC Educational Resources Information Center
Minasian, Robert; Alameh, Kamal
2002-01-01
Discusses challenges in teaching lasers and optical communications to engineers, including the prohibitive cost of laboratory experiments, and describes the development of a computer-based photonics simulation experiment module which provides students with an understanding and visualization of how lasers can be modulated in telecommunications.…
Kavadella, A; Kossioni, A E; Tsiklakis, K; Cowpe, J; Bullock, A; Barnes, E; Bailey, S; Thomas, H; Thomas, R; Karaharju-Suvanto, T; Suomalainen, K; Kersten, H; Povel, E; Giles, M; Walmsley, D; Soboleva, U; Liepa, A; Akota, I
2013-05-01
To provide evidence-based and peer-reviewed recommendations for the development of dental continuing professional development (CPD) learning e-modules. The present recommendations are consensus recommendations of the DentCPD project team and were informed by a literature review, consultation with e-learning and IT experts, discussions amongst the participants attending a special interest group during the 2012 ADEE meeting, and feedback from the evaluation procedures of the exemplar e-module (as described in a companion paper within this Supplement). The main focus of these recommendations is on the courses and modules organised and offered by dental schools. E-modules for dental CPD, as well as for other health professionals' continuing education, have been implemented and evaluated for a number of years. Research shows that the development of e-modules is a team process, undertaken by academics, subject experts, pedagogists, IT and web designers, learning technologists and librarians. The e-module must have clear learning objectives (outcomes), addressing the learners' individual needs, and must be visually attractive, relevant, interactive, promoting critical thinking and providing feedback. The text, graphics and animations must support the objectives and enable the learning process by creating an attractive, easy to navigate and interactive electronic environment. Technology is usually a concern for learners and tutors; therefore, it must be kept simple and interoperable within different systems and software. The pedagogical and technological proficiency of educators is of paramount importance, yet remains a challenge in many instances. The development of e-courses and modules for dental CPD is an endeavour undertaken by a group of professionals. It must be underpinned by sound pedagogical and e-learning principles and must incorporate elements for effective visual learning and visual design and a simple, consistent technology. © 2013 John Wiley & Sons A/S.
Ebisch, Sjoerd J H; Mantini, Dante; Romanelli, Roberta; Tommasi, Marco; Perrucci, Mauro G; Romani, Gian Luca; Colom, Roberto; Saggino, Aristide
2013-09-01
The brain is organized into functionally specific networks as characterized by intrinsic functional relationships within discrete sets of brain regions. However, it is poorly understood whether such functional networks are dynamically organized according to specific task-states. The anterior insular cortex (aIC)-dorsal anterior cingulate cortex (dACC)/medial frontal cortex (mFC) network has been proposed to play a central role in human cognitive abilities. The present functional magnetic resonance imaging (fMRI) study aimed at testing whether functional interactions of the aIC-dACC/mFC network in terms of temporally correlated patterns of neural activity across brain regions are dynamically modulated by transitory, ongoing task demands. For this purpose, functional interactions of the aIC-dACC/mFC network are compared during two distinguishable fluid reasoning tasks, Visualization and Induction. The results show an increased functional coupling of bilateral aIC with visual cortices in the occipital lobe during the Visualization task, whereas coupling of mFC with right anterior frontal cortex was enhanced during the Induction task. These task-specific modulations of functional interactions likely reflect ability related neural processing. Furthermore, functional connectivity strength between right aIC and right dACC/mFC reliably predicts general task performance. The findings suggest that the analysis of long-range functional interactions may provide complementary information about brain-behavior relationships. On the basis of our results, it is proposed that the aIC-dACC/mFC network contributes to the integration of task-common and task-specific information based on its within-network as well as its between-network dynamic functional interactions. Copyright © 2013 Elsevier Inc. All rights reserved.
Making memories: the development of long-term visual knowledge in children with visual agnosia.
Metitieri, Tiziana; Barba, Carmen; Pellacani, Simona; Viggiano, Maria Pia; Guerrini, Renzo
2013-01-01
There are few reports about the effects of perinatal acquired brain lesions on the development of visual perception. These studies demonstrate nonseverely impaired visual-spatial abilities and preserved visual memory. Longitudinal data analyzing the effects of compromised perceptions on long-term visual knowledge in agnosics are limited to lesions having occurred in adulthood. The study of children with focal lesions of the visual pathways provides a unique opportunity to assess the development of visual memory when perceptual input is degraded. We assessed visual recognition and visual memory in three children with lesions to the visual cortex having occurred in early infancy. We then explored the time course of visual memory impairment in two of them at 2 years and 3.7 years from the initial assessment. All children exhibited apperceptive visual agnosia and visual memory impairment. We observed a longitudinal improvement of visual memory modulated by the structural properties of objects. Our findings indicate that processing of degraded perceptions from birth results in impoverished memories. The dynamic interaction between perception and memory during development might modulate the long-term construction of visual representations, resulting in less severe impairment.
Trying to Move Your Unseen Static Arm Modulates Visually-Evoked Kinesthetic Illusion
Metral, Morgane; Blettery, Baptiste; Bresciani, Jean-Pierre; Luyat, Marion; Guerraz, Michel
2013-01-01
Although kinesthesia is known to largely depend on afferent inflow, recent data suggest that central signals originating from volitional control (efferent outflow) could also be involved and interact with the former to build up a coherent percept. Evidence derives from both clinical and experimental observations where vision, which is of primary importance in kinesthesia, was systematically precluded. The purpose of the present experiment was to assess the role of volitional effort in kinesthesia when visual information is available. Participants (n=20) produced isometric contractions (10-20% of maximal voluntary force) of their right arm while their left arm, the image of which was reflected in a mirror, was either passively moved into flexion/extension by a motorized manipulandum or remained static. The contraction of the right arm was either congruent with or opposite to the passive displacements of the left arm. Results revealed that in most trials, kinesthetic illusions were visually driven, and their occurrence and intensity were modulated by whether volitional effort was congruent or not with visual signals. These results confirm the impact of volitional effort in kinesthesia and demonstrate for the first time that these signals interact with visual afferents to offer a coherent and unified percept. PMID:24348909
Chen, Fu-Chen; Chen, Hsin-Lin; Tu, Jui-Hung; Tsai, Chia-Liang
2015-09-01
People often multi-task in their daily life. However, the mechanisms for the interaction between simultaneous postural and non-postural tasks have been controversial over the years. The present study investigated the effects of light digital touch on both postural sway and visual search accuracy for the purpose of assessing two hypotheses (functional integration and resource competition), which may explain the interaction between postural sway and the performance of a non-postural task. Participants (n=42, 20 male and 22 female) were asked to inspect a blank sheet of paper or visually search for target letters in a text block while a fingertip was in light contact with a stable surface (light touch, LT), or with both arms hanging at the sides of the body (no touch, NT). The results showed significant main effects of LT on reducing the magnitude of postural sway as well as enhancing visual search accuracy compared with the NT condition. The findings support the hypothesis of functional integration, demonstrating that postural sway can be modulated to improve the performance of a visual search task. Copyright © 2015 Elsevier B.V. All rights reserved.
Attention operates uniformly throughout the classical receptive field and the surround.
Verhoef, Bram-Ernst; Maunsell, John HR
2016-08-22
Shifting attention among visual stimuli at different locations modulates neuronal responses in heterogeneous ways, depending on where those stimuli lie within the receptive fields of neurons. Yet how attention interacts with the receptive-field structure of cortical neurons remains unclear. We measured neuronal responses in area V4 while monkeys shifted their attention among stimuli placed in different locations within and around neuronal receptive fields. We found that attention interacts uniformly with the spatially-varying excitation and suppression associated with the receptive field. This interaction explained the large variability in attention modulation across neurons, and a non-additive relationship among stimulus selectivity, stimulus-induced suppression and attention modulation that has not been previously described. A spatially-tuned normalization model precisely accounted for all observed attention modulations and for the spatial summation properties of neurons. These results provide a unified account of spatial summation and attention-related modulation across both the classical receptive field and the surround.
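As a purely illustrative companion to the abstract above, the toy script below implements the general class of model it refers to: a spatially tuned, attention-modulated normalization computation in which an attention field multiplies the stimulus drive before it is pooled by separate excitatory and suppressive spatial profiles (response = excitation / (suppression + constant)). All profiles, parameters, and the one-dimensional layout are invented; this is not the authors' fitted model.

```python
# Toy illustration of attention-modulated normalization, not the authors'
# fitted model: an attention field multiplies the stimulus drive, which is
# then pooled by separate excitatory and suppressive spatial profiles.
import numpy as np

def response(stim, attn, excite, suppress, sigma=1.0):
    drive = stim * attn
    return np.sum(drive * excite) / (np.sum(drive * suppress) + sigma)

x = np.linspace(-4, 4, 81)                     # positions spanning RF and surround
excite = np.exp(-x**2 / 2.0)                   # narrow excitatory field
suppress = np.exp(-x**2 / 8.0)                 # broader suppressive field
stim = (np.abs(x - 1.0) < 0.2).astype(float)   # a stimulus placed at x = 1

attend_stim = 1.0 + 2.0 * np.exp(-(x - 1.0)**2 / 0.5)  # attention on the stimulus
attend_away = np.ones_like(x)                           # attention elsewhere

print("attend stimulus:", round(response(stim, attend_stim, excite, suppress), 3))
print("attend away:   ", round(response(stim, attend_away, excite, suppress), 3))
```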
NASA Astrophysics Data System (ADS)
Mirel, Barbara; Kumar, Anuj; Nong, Paige; Su, Gang; Meng, Fan
2016-02-01
Life scientists increasingly use visual analytics to explore large data sets and generate hypotheses. Undergraduate biology majors should be learning these same methods. Yet visual analytics is one of the most underdeveloped areas of undergraduate biology education. This study sought to determine the feasibility of undergraduate biology majors conducting exploratory analysis using the same interactive data visualizations as practicing scientists. We examined 22 upper level undergraduates in a genomics course as they engaged in a case-based inquiry with an interactive heat map. We qualitatively and quantitatively analyzed students' visual analytic behaviors, reasoning and outcomes to identify student performance patterns, commonly shared efficiencies and task completion. We analyzed students' successes and difficulties in applying knowledge and skills relevant to the visual analytics case and related gaps in knowledge and skill to associated tool designs. Findings show that undergraduate engagement in visual analytics is feasible and could be further strengthened through tool usability improvements. We identify these improvements. We speculate, as well, on instructional considerations that our findings suggested may also enhance visual analytics in case-based modules.
Dores, A R; Almeida, I; Barbosa, F; Castelo-Branco, M; Monteiro, L; Reis, M; de Sousa, L; Caldas, A Castro
2013-01-01
Examining changes in brain activation linked with emotion-inducing stimuli is essential to the study of emotions. Due to the ecological potential of techniques such as virtual reality (VR), inspection of whether brain activation in response to emotional stimuli can be modulated by the three-dimensional (3D) properties of the images is important. The current study sought to test whether the activation of brain areas involved in the emotional processing of scenarios of different valences can be modulated by 3D. Therefore, the focus was on the interaction effect between emotion-inducing stimuli of different emotional valences (pleasant, unpleasant and neutral valences) and visualization types (2D, 3D). However, main effects were also analyzed. The effects of emotional valence and visualization type and their interaction were analyzed through a 3 × 2 repeated measures ANOVA. Post-hoc t-tests were performed under a ROI-analysis approach. The results show increased brain activation for the 3D affective-inducing stimuli in comparison with the same stimuli in 2D scenarios, mostly in cortical and subcortical regions that are related to emotional processing, in addition to visual processing regions. This study has the potential to clarify brain mechanisms involved in the processing of emotional stimuli (scenarios' valence) and their interaction with three-dimensionality.
Top-down modulation of ventral occipito-temporal responses during visual word recognition.
Twomey, Tae; Kawabata Duncan, Keith J; Price, Cathy J; Devlin, Joseph T
2011-04-01
Although interactivity is considered a fundamental principle of cognitive (and computational) models of reading, it has received far less attention in neural models of reading that instead focus on serial stages of feed-forward processing from visual input to orthographic processing to accessing the corresponding phonological and semantic information. In particular, the left ventral occipito-temporal (vOT) cortex is proposed to be the first stage where visual word recognition occurs prior to accessing nonvisual information such as semantics and phonology. We used functional magnetic resonance imaging (fMRI) to investigate whether there is evidence that activation in vOT is influenced top-down by the interaction of visual and nonvisual properties of the stimuli during visual word recognition tasks. Participants performed two different types of lexical decision tasks that focused on either visual or nonvisual properties of the word or word-like stimuli. The design allowed us to investigate how vOT activation during visual word recognition was influenced by a task change to the same stimuli and by a stimulus change during the same task. We found both stimulus- and task-driven modulation of vOT activation that can only be explained by top-down processing of nonvisual aspects of the task and stimuli. Our results are consistent with the hypothesis that vOT acts as an interface linking visual form with nonvisual processing in both bottom up and top down directions. Such interactive processing at the neural level is in agreement with cognitive and computational models of reading but challenges some of the assumptions made by current neuro-anatomical models of reading. Copyright © 2011 Elsevier Inc. All rights reserved.
Neural Dynamics Underlying Target Detection in the Human Brain
Bansal, Arjun K.; Madhavan, Radhika; Agam, Yigal; Golby, Alexandra; Madsen, Joseph R.
2014-01-01
Sensory signals must be interpreted in the context of goals and tasks. To detect a target in an image, the brain compares input signals and goals to elicit the correct behavior. We examined how target detection modulates visual recognition signals by recording intracranial field potential responses from 776 electrodes in 10 epileptic human subjects. We observed reliable differences in the physiological responses to stimuli when a cued target was present versus absent. Goal-related modulation was particularly strong in the inferior temporal and fusiform gyri, two areas important for object recognition. Target modulation started after 250 ms post stimulus, considerably after the onset of visual recognition signals. While broadband signals exhibited increased or decreased power, gamma frequency power showed predominantly increases during target presence. These observations support models where task goals interact with sensory inputs via top-down signals that influence the highest echelons of visual processing after the onset of selective responses. PMID:24553944
Reward modulates the effect of visual cortical microstimulation on perceptual decisions
Cicmil, Nela; Cumming, Bruce G; Parker, Andrew J; Krug, Kristine
2015-01-01
Effective perceptual decisions rely upon combining sensory information with knowledge of the rewards available for different choices. However, it is not known where reward signals interact with the multiple stages of the perceptual decision-making pathway and by what mechanisms this may occur. We combined electrical microstimulation of functionally specific groups of neurons in visual area V5/MT with performance-contingent reward manipulation, while monkeys performed a visual discrimination task. Microstimulation was less effective in shifting perceptual choices towards the stimulus preferences of the stimulated neurons when available reward was larger. Psychophysical control experiments showed this result was not explained by a selective change in response strategy on microstimulated trials. A bounded accumulation decision model, applied to analyse behavioural performance, revealed that the interaction of expected reward with microstimulation can be explained if expected reward modulates a sensory representation stage of perceptual decision-making, in addition to the better-known effects at the integration stage. DOI: http://dx.doi.org/10.7554/eLife.07832.001 PMID:26402458
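The sketch below is a generic bounded-accumulation (drift-diffusion style) simulation meant only to make the modelling idea concrete: noisy evidence drifts toward one of two bounds, and a reward-dependent gain applied to the sensory drift stands in for modulation at the sensory-representation stage. It is not the authors' fitted model; all parameters are invented.

```python
# Generic bounded-accumulation sketch (not the authors' fitted model): noisy
# evidence drifts to one of two bounds; a reward-dependent gain on the sensory
# drift stands in for modulation at the sensory-representation stage.
import numpy as np

def simulate_choice(drift, rng, bound=1.0, noise=1.0, dt=0.001):
    x, t = 0.0, 0.0
    while abs(x) < bound:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x > 0 else 0), t          # choice (1 = preferred) and decision time

def preferred_choice_rate(sensory_drift, reward_gain, n_trials=500, seed=0):
    rng = np.random.default_rng(seed)
    choices = [simulate_choice(reward_gain * sensory_drift, rng)[0]
               for _ in range(n_trials)]
    return np.mean(choices)

# A larger reward gain at the sensory stage makes choices track the stimulus more.
print("low reward gain :", preferred_choice_rate(0.5, reward_gain=1.0))
print("high reward gain:", preferred_choice_rate(0.5, reward_gain=2.0))
```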
Balconi, Michela; Vanutelli, Maria Elide
2016-01-01
The present research explored the effect of cross-modal integration of emotional cues (auditory and visual (AV)) compared with only visual (V) emotional cues in observing interspecies interactions. Brain activity was monitored while subjects processed AV and V situations, which represented an emotional (positive or negative), interspecies (human-animal) interaction. Congruence (emotionally congruous or incongruous visual and auditory patterns) was also manipulated. Electroencephalography (EEG) brain oscillations (from delta to beta) were analyzed and cortical source localization (by standardized Low Resolution Brain Electromagnetic Tomography) was applied to the data. Frequency band analysis (mainly low-frequency delta and theta) showed a significant increase in brain activity in response to negative compared with positive interactions within the right hemisphere. Moreover, differences were found based on stimulation type, with an increased effect for AV compared with V. Finally, the delta band supported a lateralized right dorsolateral prefrontal cortex (DLPFC) activity in response to negative and incongruous interspecies interactions, mainly for AV. The contribution of cross-modality, congruence (incongruous patterns), and lateralization (right DLPFC) in response to interspecies emotional interactions was discussed in light of a "negative lateralized effect."
Maestas, Gabrielle; Hu, Jiyao; Trevino, Jessica; Chunduru, Pranathi; Kim, Seung-Jae; Lee, Hyunglae
2018-01-01
The use of visual feedback in gait rehabilitation has been suggested to promote recovery of locomotor function by incorporating interactive visual components. Our prior work demonstrated that visual feedback distortion of changes in step length symmetry entails an implicit or unconscious adaptive process in the subjects’ spatial gait patterns. We investigated whether the effect of the implicit visual feedback distortion would persist at three different walking speeds (slow, self-preferred and fast speeds) and how different walking speeds would affect the amount of adaptation. In the visual feedback distortion paradigm, visual vertical bars portraying subjects’ step lengths were distorted so that subjects perceived their step lengths to be asymmetric during testing. Measuring the adjustments in step length during the experiment showed that healthy subjects made spontaneous modulations away from actual symmetry in response to the implicit visual distortion, regardless of the walking speed. In all walking scenarios, the effects of implicit distortion became more significant at higher distortion levels. In addition, the amount of adaptation induced by the visual distortion was significantly greater during walking at the preferred or slow speed than at the fast speed. These findings indicate that although a link exists between supraspinal function, via the visual system, and human locomotion, sensory feedback control for locomotion is speed-dependent. Ultimately, our results support the concept that implicit visual feedback can act as a dominant form of feedback in gait modulation, regardless of speed. PMID:29632481
ERIC Educational Resources Information Center
Varma, Keisha; Linn, Marcia C.
2012-01-01
In this work, we examine middle school students' understanding of the greenhouse effect and global warming. We designed and refined a technology-enhanced curriculum module called "Global Warming: Virtual Earth". In the module activities, students conduct virtual experiments with a visualization of the greenhouse effect. They analyze data and draw…
MONGKIE: an integrated tool for network analysis and visualization for multi-omics data.
Jang, Yeongjun; Yu, Namhee; Seo, Jihae; Kim, Sun; Lee, Sanghyuk
2016-03-18
Network-based integrative analysis is a powerful technique for extracting biological insights from multilayered omics data such as somatic mutations, copy number variations, and gene expression data. However, integrated analysis of multi-omics data is quite complicated and can hardly be done in an automated way. Thus, a powerful interactive visual mining tool supporting diverse analysis algorithms for identification of driver genes and regulatory modules is much needed. Here, we present a software platform that integrates network visualization with omics data analysis tools seamlessly. The visualization unit supports various options for displaying multi-omics data as well as unique network models for describing sophisticated biological networks such as complex biomolecular reactions. In addition, we implemented diverse in-house algorithms for network analysis including network clustering and over-representation analysis. Novel functions include facile definition and optimized visualization of subgroups, comparison of a series of data sets in an identical network by data-to-visual mapping and subsequent overlaying function, and management of custom interaction networks. Utility of MONGKIE for network-based visual data mining of multi-omics data was demonstrated by analysis of the TCGA glioblastoma data. MONGKIE was developed in Java based on the NetBeans plugin architecture, thus being OS-independent with intrinsic support of module extension by third-party developers. We believe that MONGKIE would be a valuable addition to network analysis software by supporting many unique features and visualization options, especially for analysing multi-omics data sets in cancer and other diseases.
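To make the over-representation analysis mentioned above concrete, the following minimal sketch computes the standard hypergeometric tail probability for the overlap between a network module's gene list and an annotated gene set. The gene symbols, set sizes, and background size are made-up examples, and the snippet is not MONGKIE code (which is written in Java).

```python
# Minimal sketch of the over-representation test that network-analysis tools
# of this kind typically apply to a module's gene list: a hypergeometric tail
# probability for the overlap with an annotated gene set. Numbers are made up.
from scipy.stats import hypergeom

def over_representation_p(module_genes, pathway_genes, background_size):
    overlap = len(set(module_genes) & set(pathway_genes))
    # P(X >= overlap) with population=background, successes=|pathway|,
    # draws=|module|
    return hypergeom.sf(overlap - 1, background_size,
                        len(pathway_genes), len(module_genes))

module = {"EGFR", "PTEN", "PIK3CA", "NF1", "RB1"}
pathway = {"EGFR", "PIK3CA", "AKT1", "MTOR", "PTEN"}
print("p =", over_representation_p(module, pathway, background_size=20000))
```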
The human mirror neuron system: A link between action observation and social skills
Pineda, Jaime A.; Ramachandran, Vilayanur S.
2007-01-01
The discovery of the mirror neuron system (MNS) has led researchers to speculate that this system evolved from an embodied visual recognition apparatus in monkey to a system critical for social skills in humans. It is accepted that the MNS is specialized for processing animate stimuli, although the degree to which social interaction modulates the firing of mirror neurons has not been investigated. In the current study, EEG mu wave suppression was used as an index of MNS activity. Data were collected while subjects viewed four videos: (1) Visual White Noise: baseline, (2) Non-interacting: three individuals tossed a ball up in the air to themselves, (3) Social Action, Spectator: three individuals tossed a ball to each other and (4) Social Action, Interactive: similar to video 3 except occasionally the ball would be thrown off the screen toward the viewer. The mu wave was modulated by the degree of social interaction, with the Non-interacting condition showing the least suppression, followed by the Social Action, Spectator condition and the Social Action, Interactive condition showing the most suppression. These data suggest that the human MNS is specialized not only for processing animate stimuli, but specifically stimuli with social relevance. PMID:18985120
Interactions Dominate the Dynamics of Visual Cognition
Stephen, Damian G.; Mirman, Daniel
2010-01-01
Many cognitive theories have described behavior as the summation of independent contributions from separate components. Contrasting views have emphasized the importance of multiplicative interactions and emergent structure. We describe a statistical approach to distinguishing additive and multiplicative processes and apply it to the dynamics of eye movements during classic visual cognitive tasks. The results reveal interaction-dominant dynamics in eye movements in each of the three tasks, and that fine-grained eye movements are modulated by task constraints. These findings reveal the interactive nature of cognitive processing and are consistent with theories that view cognition as an emergent property of processes that are broadly distributed over many scales of space and time rather than a componential assembly line. PMID:20070957
Compression and reflection of visually evoked cortical waves
Xu, Weifeng; Huang, Xiaoying; Takagaki, Kentaroh; Wu, Jian-young
2007-01-01
Neuronal interactions between primary and secondary visual cortical areas are important for visual processing, but the spatiotemporal patterns of the interaction are not well understood. We used voltage-sensitive dye imaging to visualize neuronal activity in rat visual cortex and found novel visually evoked waves propagating from V1 to other visual areas. A primary wave originated in the monocular area of V1 and was “compressed” when propagating to V2. A reflected wave initiated after compression and propagated backward into V1. The compression occurred at the V1/V2 border, and local GABAA inhibition is important for the compression. The compression/reflection pattern provides a two-phase modulation: V1 is first depolarized by the primary wave and then V1 and V2 are simultaneously depolarized by the reflected and primary waves, respectively. The compression/reflection pattern only occurred for evoked but not for spontaneous waves, suggesting that it is organized by an internal mechanism associated with visual processing. PMID:17610821
Frontal–Occipital Connectivity During Visual Search
Pantazatos, Spiro P.; Yanagihara, Ted K.; Zhang, Xian; Meitzler, Thomas
2012-01-01
Although expectation- and attention-related interactions between ventral and medial prefrontal cortex and stimulus category-selective visual regions have been identified during visual detection and discrimination, it is not known if similar neural mechanisms apply to other tasks such as visual search. The current work tested the hypothesis that high-level frontal regions, previously implicated in expectation and visual imagery of object categories, interact with visual regions associated with object recognition during visual search. Using functional magnetic resonance imaging, subjects searched for a specific object that varied in size and location within a complex natural scene. A model-free, spatial-independent component analysis isolated multiple task-related components, one of which included visual cortex, as well as a cluster within ventromedial prefrontal cortex (vmPFC), consistent with the engagement of both top-down and bottom-up processes. Analyses of psychophysiological interactions showed increased functional connectivity between vmPFC and object-sensitive lateral occipital cortex (LOC), and results from dynamic causal modeling and Bayesian Model Selection suggested bidirectional connections between vmPFC and LOC that were positively modulated by the task. Using image-guided diffusion-tensor imaging, functionally seeded, probabilistic white-matter tracts between vmPFC and LOC, which presumably underlie this effective interconnectivity, were also observed. These connectivity findings extend previous models of visual search processes to include specific frontal–occipital neuronal interactions during a natural and complex search task. PMID:22708993
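As a simplified illustration of the psychophysiological-interaction (PPI) analysis referred to above, the sketch below regresses a target region's timecourse on a seed timecourse, a task regressor, and their product, so that the interaction coefficient indexes task-dependent coupling. The data are simulated and the regressor construction is deliberately bare-bones (no HRF deconvolution), so this is an assumption-laden teaching sketch rather than the study's pipeline.

```python
# Bare-bones PPI sketch on simulated data: the interaction (seed x task) beta
# indexes task-dependent coupling between a seed region and a target region.
import numpy as np

rng = np.random.default_rng(1)
n = 200
task = (np.arange(n) // 20) % 2            # boxcar: search blocks on/off
seed = rng.standard_normal(n)              # "vmPFC" timecourse (simulated)
target = 0.2 * seed + 0.6 * task * seed + rng.standard_normal(n)  # "LOC"

# Design matrix: intercept, seed, task, and the psychophysiological interaction
X = np.column_stack([np.ones(n), seed, task, seed * task])
betas, *_ = np.linalg.lstsq(X, target, rcond=None)
print("PPI (interaction) beta:", round(betas[3], 2))
```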
Wallace, Deanna L.
2017-01-01
The neuromodulator acetylcholine modulates spatial integration in visual cortex by altering the balance of inputs that generate neuronal receptive fields. These cholinergic effects may provide a neurobiological mechanism underlying the modulation of visual representations by visual spatial attention. However, the consequences of cholinergic enhancement on visuospatial perception in humans are unknown. We conducted two experiments to test whether enhancing cholinergic signaling selectively alters perceptual measures of visuospatial interactions in human subjects. In Experiment 1, a double-blind placebo-controlled pharmacology study, we measured how flanking distractors influenced detection of a small contrast decrement of a peripheral target, as a function of target-flanker distance. We found that cholinergic enhancement with the cholinesterase inhibitor donepezil improved target detection, and modeling suggested that this was mainly due to a narrowing of the extent of facilitatory perceptual spatial interactions. In Experiment 2, we tested whether these effects were selective to the cholinergic system or would also be observed following enhancements of related neuromodulators dopamine or norepinephrine. Unlike cholinergic enhancement, dopamine (bromocriptine) and norepinephrine (guanfacine) manipulations did not improve performance or systematically alter the spatial profile of perceptual interactions between targets and distractors. These findings reveal mechanisms by which cholinergic signaling influences visual spatial interactions in perception and improves processing of a visual target among distractors, effects that are notably similar to those of spatial selective attention. SIGNIFICANCE STATEMENT Acetylcholine influences how visual cortical neurons integrate signals across space, perhaps providing a neurobiological mechanism for the effects of visual selective attention. However, the influence of cholinergic enhancement on visuospatial perception remains unknown. Here we demonstrate that cholinergic enhancement improves detection of a target flanked by distractors, consistent with sharpened visuospatial perceptual representations. Furthermore, whereas most pharmacological studies focus on a single neurotransmitter, many neuromodulators can have related effects on cognition and perception. Thus, we also demonstrate that enhancing noradrenergic and dopaminergic systems does not systematically improve visuospatial perception or alter its tuning. Our results link visuospatial tuning effects of acetylcholine at the neuronal and perceptual levels and provide insights into the connection between cholinergic signaling and visual attention. PMID:28336568
Graph Visualization for RDF Graphs with SPARQL-EndPoints
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sukumar, Sreenivas R; Bond, Nathaniel
2014-07-11
RDF graphs are hard to visualize as triples. This software module is a web interface that connects to a SPARQL endpoint and retrieves graph data that the user can explore interactively and seamlessly. The software, written in Python and JavaScript, has been tested to work on displays ranging from smart phones to large screens such as EVEREST.
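A minimal sketch of the retrieval half of such a tool is shown below: it pulls a handful of triples from a SPARQL endpoint and loads them into a graph object that could then be explored interactively. The endpoint URL, the catch-all query, and the use of SPARQLWrapper and networkx are illustrative assumptions, not details of the OSTI module.

```python
# Illustrative retrieval sketch: fetch a few triples from a SPARQL endpoint
# and load them into a graph for exploration. Endpoint and query are examples.
import networkx as nx
from SPARQLWrapper import SPARQLWrapper, JSON

def fetch_triples(endpoint, limit=50):
    sparql = SPARQLWrapper(endpoint)
    sparql.setQuery(f"SELECT ?s ?p ?o WHERE {{ ?s ?p ?o }} LIMIT {limit}")
    sparql.setReturnFormat(JSON)
    rows = sparql.query().convert()["results"]["bindings"]
    return [(r["s"]["value"], r["p"]["value"], r["o"]["value"]) for r in rows]

graph = nx.MultiDiGraph()
for s, p, o in fetch_triples("https://dbpedia.org/sparql"):
    graph.add_edge(s, o, predicate=p)
print(graph.number_of_nodes(), "nodes,", graph.number_of_edges(), "edges")
```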
Investigation of Interactive Online Visual Tools for the Learning of Mathematics
ERIC Educational Resources Information Center
Jacobs, K. L.
2005-01-01
For many years, educators have been discussing benefits of educational practices such as the use of real-world examples, visualisation, interactivity, constructivism, self-paced learning and self-paced testing. Macromedia Flash MX has been used to develop online modules for the course Differential Equations offered at the University of South…
Interactive BIM-Enabled Safety Training Piloted in Construction Education
ERIC Educational Resources Information Center
Clevenger, Caroline; Lopez del Puerto, Carla; Glick, Scott
2015-01-01
This paper documents and assesses the development of a construction safety training module featuring interactive, BIM-enabled, 3D visualizations to test if such a tool can enhance safety training related to scaffolds. This research documents the technical challenges and the lessons learned through the development and administration of a prototype…
USDA-ARS?s Scientific Manuscript database
Age-related macular degeneration (AMD) is a leading cause of visual impairment worldwide. Genetics and diet contribute to the relative risk for developing AMD, but their interactions are poorly understood. Genetic variations in Complement Factor H (CFH), and dietary glycemic index (GI) are major ris...
ERIC Educational Resources Information Center
Kesner, Michael H.; Linzey, Alicia V.
2005-01-01
InterActive Physiology (IAP) is one of a new generation of anatomy and physiology learning aids with a broader range of sensory inputs than is possible from a static textbook or moderately dynamic lecture. This best-selling software has modules covering the muscular, respiratory, urinary, cardiovascular, and nervous systems plus a module on fluids…
Network model of top-down influences on local gain and contextual interactions in visual cortex.
Piëch, Valentin; Li, Wu; Reeke, George N; Gilbert, Charles D
2013-10-22
The visual system uses continuity as a cue for grouping oriented line segments that define object boundaries in complex visual scenes. Many studies support the idea that long-range intrinsic horizontal connections in early visual cortex contribute to this grouping. Top-down influences in primary visual cortex (V1) play an important role in the processes of contour integration and perceptual saliency, with contour-related responses being task dependent. This suggests an interaction between recurrent inputs to V1 and intrinsic connections within V1 that enables V1 neurons to respond differently under different conditions. We created a network model that simulates parametrically the control of local gain by hypothetical top-down modification of local recurrence. These local gain changes, as a consequence of network dynamics in our model, enable modulation of contextual interactions in a task-dependent manner. Our model displays contour-related facilitation of neuronal responses and differential foreground vs. background responses over the neuronal ensemble, accounting for the perceptual pop-out of salient contours. It quantitatively reproduces the results of single-unit recording experiments in V1, highlighting salient contours and replicating the time course of contextual influences. We show by means of phase-plane analysis that the model operates stably even in the presence of large inputs. Our model shows how a simple form of top-down modulation of the effective connectivity of intrinsic cortical connections among biophysically realistic neurons can account for some of the response changes seen in perceptual learning and task switching.
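The toy rate model below is offered only to illustrate the general mechanism described above: a top-down gain term scales the local recurrent (horizontal) input, which changes how strongly a collinear "context" unit facilitates the "center" unit. It is a two-unit caricature with invented parameters, not the authors' network.

```python
# Two-unit caricature (not the authors' model): top-down gain scales the
# recurrent horizontal input, modulating contextual facilitation of the
# center unit. All parameters are invented for illustration.
import numpy as np

def steady_response(ff_center, ff_context, gain, w_h=0.6, steps=200, dt=0.1):
    r = np.zeros(2)                          # [center, context] firing rates
    ff = np.array([ff_center, ff_context])   # feedforward drive
    W = np.array([[0.0, w_h], [w_h, 0.0]])   # horizontal (contextual) coupling
    for _ in range(steps):
        drive = ff + gain * (W @ r)          # top-down gain scales recurrence
        r += dt * (-r + np.maximum(drive, 0.0))
    return r[0]

for gain in (0.2, 0.8):
    alone = steady_response(1.0, 0.0, gain)
    with_context = steady_response(1.0, 1.0, gain)
    print(f"gain={gain}: contextual facilitation = {with_context / alone:.2f}")
```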
Threat as a feature in visual semantic object memory.
Calley, Clifford S; Motes, Michael A; Chiang, H-Sheng; Buhl, Virginia; Spence, Jeffrey S; Abdi, Hervé; Anand, Raksha; Maguire, Mandy; Estevez, Leonardo; Briggs, Richard; Freeman, Thomas; Kraut, Michael A; Hart, John
2013-08-01
Threatening stimuli have been found to modulate visual processes related to perception and attention. The present functional magnetic resonance imaging (fMRI) study investigated whether threat modulates visual object recognition of man-made and naturally occurring categories of stimuli. Compared with nonthreatening pictures, threatening pictures of real items elicited larger fMRI BOLD signal changes in medial visual cortices extending inferiorly into the temporo-occipital (TO) "what" pathways. This region elicited greater signal changes for threatening items compared to nonthreatening from both the natural-occurring and man-made stimulus supraordinate categories, demonstrating a featural component to these visual processing areas. Two additional loci of signal changes within more lateral inferior TO areas (bilateral BA18 and 19 as well as the right ventral temporal lobe) were detected for a category-feature interaction, with stronger responses to man-made (category) threatening (feature) stimuli than to natural threats. The findings are discussed in terms of visual recognition of processing efficiently or rapidly groups of items that confer an advantage for survival. Copyright © 2012 Wiley Periodicals, Inc.
Towards a Web-Enabled Geovisualization and Analytics Platform for the Energy and Water Nexus
NASA Astrophysics Data System (ADS)
Sanyal, J.; Chandola, V.; Sorokine, A.; Allen, M.; Berres, A.; Pang, H.; Karthik, R.; Nugent, P.; McManamay, R.; Stewart, R.; Bhaduri, B. L.
2017-12-01
Interactive data analytics are playing an increasingly vital role in the generation of new, critical insights regarding the complex dynamics of the energy/water nexus (EWN) and its interactions with climate variability and change. Integration of impacts, adaptation, and vulnerability (IAV) science with emerging, and increasingly critical, data science capabilities offers promising potential to meet the needs of the EWN community. To enable the exploration of pertinent research questions, a web-based geospatial visualization platform is being built that integrates a data analysis toolbox with advanced data fusion and data visualization capabilities to create a knowledge discovery framework for the EWN. The system, when fully built out, will offer several geospatial visualization capabilities, including statistical visual analytics, clustering, principal-component analysis, and dynamic time warping; it will support uncertainty visualization and the exploration of data provenance, as well as machine learning discoveries, to render diverse types of geospatial data and facilitate interactive analysis. Key components in the system architecture include NASA's WebWorldWind, the Globus toolkit, postgresql, as well as other custom-built software modules.
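One of the analytics listed above, dynamic time warping, is shown below as a minimal reference implementation (the classic O(n*m) dynamic program). The two series are toy numbers, not energy-water observations, and the snippet is not part of the platform described.

```python
# Minimal dynamic time warping (DTW) reference implementation on toy series.
import numpy as np

def dtw_distance(a, b):
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

demand = np.array([1.0, 1.2, 1.6, 2.0, 1.4, 1.1])
supply = np.array([1.0, 1.1, 1.2, 1.7, 2.0, 1.3])
print("DTW distance:", round(dtw_distance(demand, supply), 3))
```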
Magosso, Elisa; Bertini, Caterina; Cuppini, Cristiano; Ursino, Mauro
2016-10-01
Hemianopic patients retain some abilities to integrate audiovisual stimuli in the blind hemifield, showing both modulation of visual perception by auditory stimuli and modulation of auditory perception by visual stimuli. Indeed, conscious detection of a visual target in the blind hemifield can be improved by a spatially coincident auditory stimulus (auditory enhancement of visual detection), while a visual stimulus in the blind hemifield can improve localization of a spatially coincident auditory stimulus (visual enhancement of auditory localization). To gain more insight into the neural mechanisms underlying these two perceptual phenomena, we propose a neural network model including areas of neurons representing the retina, primary visual cortex (V1), extrastriate visual cortex, auditory cortex and the Superior Colliculus (SC). The visual and auditory modalities in the network interact via both direct cortical-cortical connections and subcortical-cortical connections involving the SC; the latter, in particular, integrates visual and auditory information and projects back to the cortices. Hemianopic patients were simulated by unilaterally lesioning V1, and preserving spared islands of V1 tissue within the lesion, to analyze the role of residual V1 neurons in mediating audiovisual integration. The network is able to reproduce the audiovisual phenomena in hemianopic patients, linking perceptions to neural activations, and disentangles the individual contribution of specific neural circuits and areas via sensitivity analyses. The study suggests i) a common key role of SC-cortical connections in mediating the two audiovisual phenomena; ii) a different role of visual cortices in the two phenomena: auditory enhancement of conscious visual detection being conditional on surviving V1 islands, while visual enhancement of auditory localization persisting even after complete V1 damage. The present study may contribute to advance understanding of the audiovisual dialogue between cortical and subcortical structures in healthy and unisensory deficit conditions. Copyright © 2016 Elsevier Ltd. All rights reserved.
Kullmann, Stephanie; Pape, Anna-Antonia; Heni, Martin; Ketterer, Caroline; Schick, Fritz; Häring, Hans-Ulrich; Fritsche, Andreas; Preissl, Hubert; Veit, Ralf
2013-05-01
In order to adequately explore the neurobiological basis of human eating behavior and its changes with body weight, interactions between brain areas or networks need to be investigated. In the current functional magnetic resonance imaging study, we examined the modulating effects of stimulus category (food vs. nonfood), caloric content of food, and body weight on the time course and functional connectivity of 5 brain networks by means of independent component analysis in healthy lean and overweight/obese adults. These functional networks included motor sensory, default-mode, extrastriate visual, temporal visual association, and salience networks. We found an extensive modulation elicited by food stimuli in the 2 visual and salience networks, with a dissociable pattern in the time course and functional connectivity between lean and overweight/obese subjects. Specifically, only in lean subjects, the temporal visual association network was modulated by the stimulus category and the salience network by caloric content, whereas overweight and obese subjects showed a generalized augmented response in the salience network. Furthermore, overweight/obese subjects showed changes in functional connectivity in networks important for object recognition, motivational salience, and executive control. These alterations could potentially lead to top-down deficiencies driving the overconsumption of food in the obese population.
O'Shea, Jacinta; Jensen, Ole; Bergmann, Til O.
2015-01-01
Covertly directing visuospatial attention produces a frequency-specific modulation of neuronal oscillations in occipital and parietal cortices: anticipatory alpha (8–12 Hz) power decreases contralateral and increases ipsilateral to attention, whereas stimulus-induced gamma (>40 Hz) power is boosted contralaterally and attenuated ipsilaterally. These modulations must be under top-down control; however, the control mechanisms are not yet fully understood. Here we investigated the causal contribution of the human frontal eye field (FEF) by combining repetitive transcranial magnetic stimulation (TMS) with subsequent magnetoencephalography. Following inhibitory theta burst stimulation to the left FEF, right FEF, or vertex, participants performed a visual discrimination task requiring covert attention to either visual hemifield. Both left and right FEF TMS caused marked attenuation of alpha modulation in the occipitoparietal cortex. Notably, alpha modulation was consistently reduced in the hemisphere contralateral to stimulation, leaving the ipsilateral hemisphere relatively unaffected. Additionally, right FEF TMS enhanced gamma modulation in left visual cortex. Behaviorally, TMS caused a relative slowing of response times to targets contralateral to stimulation during the early task period. Our results suggest that left and right FEF are causally involved in the attentional top-down control of anticipatory alpha power in the contralateral visual system, whereas a right-hemispheric dominance seems to exist for control of stimulus-induced gamma power. These findings contrast with the assumption of primarily intrahemispheric connectivity between FEF and parietal cortex, emphasizing the relevance of interhemispheric interactions. The contralaterality of effects may result from a transient functional reorganization of the dorsal attention network after inhibition of either FEF. PMID:25632139
A web-based instruction module for interpretation of craniofacial cone beam CT anatomy.
Hassan, B A; Jacobs, R; Scarfe, W C; Al-Rawi, W T
2007-09-01
To develop a web-based module for learner instruction in the interpretation and recognition of osseous anatomy on craniofacial cone-beam CT (CBCT) images. Volumetric datasets from three CBCT systems were acquired (i-CAT, NewTom 3G and AccuiTomo FPD) for various subjects using equipment-specific scanning protocols. The datasets were processed using multiple software packages to provide two-dimensional (2D) multiplanar reformatted (MPR) images (e.g. sagittal, coronal and axial) and three-dimensional (3D) visual representations (e.g. maximum intensity projection, minimum intensity projection, ray sum, surface and volume rendering). Distinct didactic modules which illustrate the principles of CBCT systems, guided navigation of the volumetric dataset, and anatomic correlation of 3D models and 2D MPR graphics were developed using a hybrid combination of web authoring and image analysis techniques. Interactive web multimedia instruction was facilitated by the use of dynamic highlighting and labelling, and rendered video illustrations, supplemented with didactic textual material. HTML coding and Java scripting were heavily implemented for the blending of the educational modules. An interactive, multimedia educational tool for visualizing the morphology and interrelationships of osseous craniofacial anatomy, as depicted on CBCT MPR and 3D images, was designed and implemented. The present design of a web-based instruction module may assist radiologists and clinicians in learning how to recognize and interpret the craniofacial anatomy of CBCT-based images more efficiently.
BioSIGHT: Interactive Visualization Modules for Science Education
NASA Technical Reports Server (NTRS)
Wong, Wee Ling
1998-01-01
Redefining science education to harness emerging integrated media technologies with innovative pedagogical goals represents a unique challenge. The Integrated Media Systems Center (IMSC) is the only engineering research center in the area of multimedia and creative technologies sponsored by the National Science Foundation. The research program at IMSC is focused on developing advanced technologies that address human-computer interfaces, database management, and high-speed network capabilities. The BioSIGHT project at IMSC is a demonstration technology project in the area of education that seeks to address how such emerging multimedia technologies can make an impact on science education. The scope of this project will help solidify NASA's commitment to the development of innovative educational resources that promote science literacy for our students and the general population as well. These issues must be addressed as NASA marches toward the goal of enabling human space exploration, which requires an understanding of life sciences in space. The IMSC BioSIGHT lab was established with the purpose of developing a novel methodology that will map a high school biology curriculum into a series of interactive visualization modules that can be easily incorporated into a space biology curriculum. Fundamental concepts in general biology must be mastered in order to allow a better understanding and application of space biology. Interactive visualization is a powerful component that can capture the students' imagination, facilitate their assimilation of complex ideas, and help them develop integrated views of biology. These modules will augment the role of the teacher and will establish the value of student-centered interactivity, both in an individual setting as well as in a collaborative learning environment. Students will be able to interact with the content material, explore new challenges, and perform virtual laboratory simulations. The BioSIGHT effort is truly cross-disciplinary in nature and requires expertise from many areas including Biology, Computer Science, Electrical Engineering, Education, and the Cognitive Sciences. The BioSIGHT team includes a scientific illustrator, an educational software designer, and computer programmers, as well as IMSC graduate and undergraduate students.
Age-dependent modulation of the somatosensory network upon eye closure.
Brodoehl, Stefan; Klingner, Carsten; Witte, Otto W
2016-02-01
Eye closure, even in complete darkness, can improve somatosensory perception by switching the brain to a uni-sensory processing mode. This causes an increased information flow between the thalamus and the somatosensory cortex while decreasing modulation by the visual cortex. Previous work suggests that these modulations are age-dependent and that the benefit in somatosensory performance due to eye closing diminishes with age. The cause of this age-dependency, and the extent to which somatosensory processing is involved, remain unclear. Therefore, we intended to characterize the underlying age-dependent modifications in the interaction and connectivity of different sensory networks caused by eye closure. We performed functional MR-imaging with tactile stimulation of the right hand under the conditions of opened and closed eyes in healthy young and elderly participants. Conditional Granger causality analysis was performed to assess the somatosensory and visual networks, including the thalamus. Independent of age, eye closure improved the information transfer from the thalamus to and within the somatosensory cortex. However, beyond that, we found an age-dependent recruitment strategy. Whereas young participants were characterized by an optimized information flow within the relays of the somatosensory network, elderly participants revealed a stronger modulatory influence of the visual network upon the somatosensory cortex. Our results demonstrate that the modulation of the somatosensory and visual networks by eye closure diminishes with age and that the dominance of the visual system is more pronounced in the aging brain. Copyright © 2015 Elsevier B.V. All rights reserved.
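The study above applies conditional Granger causality to fMRI time series. The sketch below illustrates only the core idea behind Granger causality as a simplified bivariate comparison of restricted and full autoregressive models; it is not the conditional analysis used in the study, and the model order and test signals are placeholders.

```python
# Simplified illustration of the idea behind Granger causality (the study used
# conditional Granger causality on fMRI time series; this pairwise sketch only
# shows the underlying regression comparison).
import numpy as np

def granger_log_ratio(x, y, order=2):
    """Log ratio of residual variances: does the past of x help predict y?"""
    n = len(y)
    Y = y[order:]
    # Lagged design matrices (most recent lag first)
    lags_y = np.column_stack([y[order - k:n - k] for k in range(1, order + 1)])
    lags_x = np.column_stack([x[order - k:n - k] for k in range(1, order + 1)])
    # Restricted model: y's own past only
    beta_r = np.linalg.lstsq(lags_y, Y, rcond=None)[0]
    var_r = np.var(Y - lags_y @ beta_r)
    # Full model: y's past plus x's past
    X_full = np.hstack([lags_y, lags_x])
    beta_f = np.linalg.lstsq(X_full, Y, rcond=None)[0]
    var_f = np.var(Y - X_full @ beta_f)
    return np.log(var_r / var_f)  # > 0 suggests x Granger-causes y

rng = np.random.default_rng(0)
x = rng.standard_normal(500)
y = np.convolve(x, [0.0, 0.8, 0.3], mode="same") + 0.5 * rng.standard_normal(500)
print(granger_log_ratio(x, y), granger_log_ratio(y, x))
```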
Model of rhythmic ball bouncing using a visually controlled neural oscillator.
Avrin, Guillaume; Siegler, Isabelle A; Makarov, Maria; Rodriguez-Ayerbe, Pedro
2017-10-01
The present paper investigates the sensory-driven modulations of central pattern generator dynamics that can be expected to reproduce human behavior during rhythmic hybrid tasks. We propose a theoretical model of human sensorimotor behavior able to account for the observed data from the ball-bouncing task. The novel control architecture is composed of a Matsuoka neural oscillator coupled with the environment through visual sensory feedback. The architecture's ability to reproduce human-like performance during the ball-bouncing task in the presence of perturbations is quantified by comparison of simulated and recorded trials. The results suggest that human visual control of the task is achieved online. The adaptive behavior is made possible by a parametric and state control of the limit cycle emerging from the interaction of the rhythmic pattern generator, the musculoskeletal system, and the environment. NEW & NOTEWORTHY The study demonstrates that a behavioral model based on a neural oscillator controlled by visual information is able to accurately reproduce human modulations in a motor action with respect to sensory information during the rhythmic ball-bouncing task. The model attractor dynamics emerging from the interaction between the neuromusculoskeletal system and the environment met task requirements, environmental constraints, and human behavioral choices without relying on movement planning and explicit internal models of the environment. Copyright © 2017 the American Physiological Society.
An ERP investigation of visual word recognition in syllabary scripts.
Okano, Kana; Grainger, Jonathan; Holcomb, Phillip J
2013-06-01
The bimodal interactive-activation model has been successfully applied to understanding the neurocognitive processes involved in reading words in alphabetic scripts, as reflected in the modulation of ERP components in masked repetition priming. In order to test the generalizability of this approach, in the present study we examined word recognition in a different writing system, the Japanese syllabary scripts hiragana and katakana. Native Japanese participants were presented with repeated or unrelated pairs of Japanese words in which the prime and target words were both in the same script (within-script priming, Exp. 1) or were in the opposite script (cross-script priming, Exp. 2). As in previous studies with alphabetic scripts, in both experiments the N250 (sublexical processing) and N400 (lexical-semantic processing) components were modulated by priming, although the time course was somewhat delayed. The earlier N/P150 effect (visual feature processing) was present only in "Experiment 1: Within-script priming", in which the prime and target words shared visual features. Overall, the results provide support for the hypothesis that visual word recognition involves a generalizable set of neurocognitive processes that operate in similar manners across different writing systems and languages, as well as pointing to the viability of the bimodal interactive-activation framework for modeling such processes.
An ERP Investigation of Visual Word Recognition in Syllabary Scripts
Okano, Kana; Grainger, Jonathan; Holcomb, Phillip J.
2013-01-01
The bi-modal interactive-activation model has been successfully applied to understanding the neuro-cognitive processes involved in reading words in alphabetic scripts, as reflected in the modulation of ERP components in masked repetition priming. In order to test the generalizability of this approach, the current study examined word recognition in a different writing system, the Japanese syllabary scripts Hiragana and Katakana. Native Japanese participants were presented with repeated or unrelated pairs of Japanese words where the prime and target words were both in the same script (within-script priming, Experiment 1) or were in the opposite script (cross-script priming, Experiment 2). As in previous studies with alphabetic scripts, in both experiments the N250 (sub-lexical processing) and N400 (lexical-semantic processing) components were modulated by priming, although the time-course was somewhat delayed. The earlier N/P150 effect (visual feature processing) was present only in Experiment 1 where prime and target words shared visual features. Overall, the results provide support for the hypothesis that visual word recognition involves a generalizable set of neuro-cognitive processes that operate in a similar manner across different writing systems and languages, as well as pointing to the viability of the bi-modal interactive activation framework for modeling such processes. PMID:23378278
Störmer, Viola S; Passow, Susanne; Biesenack, Julia; Li, Shu-Chen
2012-05-01
Attention and working memory are fundamental for selecting and maintaining behaviorally relevant information. Not only do both processes closely intertwine at the cognitive level, but they implicate similar functional brain circuitries, namely the frontoparietal and the frontostriatal networks, which are innervated by cholinergic and dopaminergic pathways. Here we review the literature on cholinergic and dopaminergic modulations of visual-spatial attention and visual working memory processes to gain insights on aging-related changes in these processes. Some extant findings have suggested that the cholinergic system plays a role in the orienting of attention to enable the detection and discrimination of visual information, whereas the dopaminergic system has mainly been associated with working memory processes such as updating and stabilizing representations. However, since visual-spatial attention and working memory processes are not fully dissociable, there is also evidence of interacting cholinergic and dopaminergic modulations of both processes. We further review gene-cognition association studies that have shown that individual differences in visual-spatial attention and visual working memory are associated with acetylcholine- and dopamine-relevant genes. The efficiency of these 2 transmitter systems declines substantially during healthy aging. These declines, in part, contribute to age-related deficits in attention and working memory functions. We report novel data showing an effect of dopamine COMT gene on spatial updating processes in older but not in younger adults, indicating potential magnification of genetic effects in old age.
Nonretinotopic visual processing in the brain.
Melcher, David; Morrone, Maria Concetta
2015-01-01
A basic principle in visual neuroscience is the retinotopic organization of neural receptive fields. Here, we review behavioral, neurophysiological, and neuroimaging evidence for nonretinotopic processing of visual stimuli. A number of behavioral studies have shown perception depending on object or external-space coordinate systems, in addition to retinal coordinates. Both single-cell neurophysiology and neuroimaging have provided evidence for the modulation of neural firing by gaze position and processing of visual information based on craniotopic or spatiotopic coordinates. Transient remapping of the spatial and temporal properties of neurons contingent on saccadic eye movements has been demonstrated in visual cortex, as well as frontal and parietal areas involved in saliency/priority maps, and is a good candidate to mediate some of the spatial invariance demonstrated by perception. Recent studies suggest that spatiotopic selectivity depends on a low spatial resolution system of maps that operates over a longer time frame than retinotopic processing and is strongly modulated by high-level cognitive factors such as attention. The interaction of an initial and rapid retinotopic processing stage, tied to new fixations, and a longer lasting but less precise nonretinotopic level of visual representation could underlie the perception of both a detailed and a stable visual world across saccadic eye movements.
Differential effects of ADORA2A gene variations in pre-attentive visual sensory memory subprocesses.
Beste, Christian; Stock, Ann-Kathrin; Ness, Vanessa; Epplen, Jörg T; Arning, Larissa
2012-08-01
The ADORA2A gene encodes the adenosine A(2A) receptor that is highly expressed in the striatum where it plays a role in modulating glutamatergic and dopaminergic transmission. Glutamatergic signaling has been suggested to play a pivotal role in cognitive functions related to the pre-attentive processing of external stimuli. Yet, the precise molecular mechanism of these processes is poorly understood. Therefore, we aimed to investigate whether ADORA2A gene variation has modulating effects on visual pre-attentive sensory memory processing. Studying two polymorphisms, rs5751876 and rs2298383, in 199 healthy control subjects who performed a partial-report paradigm, we find that ADORA2A variation is associated with differences in the efficiency of pre-attentive sensory memory sub-processes. We show that especially the initial visual availability of stimulus information is rendered more efficiently in the homozygous rare genotype groups. Processes related to the transfer of information into working memory and the duration of visual sensory (iconic) memory are compromised in the homozygous rare genotype groups. Our results show a differential genotype-dependent modulation of pre-attentive sensory memory sub-processes. Hence, we assume that this modulation may be due to differential effects of increased adenosine A(2A) receptor signaling on glutamatergic transmission and striatal medium spiny neuron (MSN) interaction. Copyright © 2011 Elsevier B.V. and ECNP. All rights reserved.
ERIC Educational Resources Information Center
Isik-Ercan, Zeynep; Zeynep Inan, Hatice; Nowak, Jeffrey A.; Kim, Beomjin
2014-01-01
This qualitative case study describes (a) the ways 3D visualization, coupled with other science and literacy experiences, supported young children's first exploration of the Earth-Sun-Moon system and (b) the perspectives of classroom teachers and children on using 3D visualization. We created three interactive 3D software modules that simulate day…
Ibrahim, Leena A.; Mesik, Lukas; Ji, Xu-ying; Fang, Qi; Li, Hai-fu; Li, Ya-tang; Zingg, Brian; Zhang, Li I.; Tao, Huizhong Whit
2016-01-01
Cross-modality interaction in sensory perception is advantageous for animals’ survival. How cortical sensory processing is cross-modally modulated, and what the underlying neural circuits are, remain poorly understood. In mouse primary visual cortex (V1), we discovered that orientation selectivity of layer (L)2/3 but not L4 excitatory neurons was sharpened in the presence of sound or optogenetic activation of projections from primary auditory cortex (A1) to V1. The effect was manifested by decreased average visual responses yet increased responses at the preferred orientation. It was more pronounced at lower visual contrast, and was diminished by suppressing L1 activity. L1 neurons were strongly innervated by A1-V1 axons and excited by sound, while visual responses of L2/3 vasoactive intestinal peptide (VIP) neurons were suppressed by sound, both preferentially at the cell's preferred orientation. These results suggest that the cross-modality modulation is achieved primarily through L1 neuron and L2/3 VIP-cell mediated inhibitory and disinhibitory circuits. PMID:26898778
Cognitive processing in the primary visual cortex: from perception to memory.
Supèr, Hans
2002-01-01
The primary visual cortex is the first cortical area of the visual system that receives information from the external visual world. Based on the receptive field characteristics of the neurons in this area, it has been assumed that the primary visual cortex is a pure sensory area extracting basic elements of the visual scene. This information is then processed further upstream in the higher-order visual areas, providing us with perception and storage of the visual environment. However, recent findings show that neural correlates of these cognitive processes are also observed in the primary visual cortex. These neural correlates are expressed by the modulated activity of the late response of a neuron to a stimulus, and most likely depend on recurrent interactions between several areas of the visual system. This favors the concept of a distributed nature of visual processing in perceptual organization.
How Dynamic Visualization Technology can Support Molecular Reasoning
NASA Astrophysics Data System (ADS)
Levy, Dalit
2013-10-01
This paper reports the results of a study aimed at exploring the advantages of dynamic visualization for the development of better understanding of molecular processes. We designed a technology-enhanced curriculum module in which high school chemistry students conduct virtual experiments with dynamic molecular visualizations of solid, liquid, and gas. They interact with the visualizations and carry out inquiry activities to make and refine connections between observable phenomena and atomic level processes related to phase change. The explanations proposed by 300 pairs of students in response to pre/post-assessment items have been analyzed using a scale for measuring the level of molecular reasoning. Results indicate that from pretest to posttest, students make progress in their level of molecular reasoning and are better able to connect intermolecular forces and phase change in their explanations. The paper presents the results through the lens of improvement patterns and the metaphor of the "ladder of molecular reasoning," and discusses how this adds to our understanding of the benefits of interacting with dynamic molecular visualizations.
Marshall, Tom R; O'Shea, Jacinta; Jensen, Ole; Bergmann, Til O
2015-01-28
Covertly directing visuospatial attention produces a frequency-specific modulation of neuronal oscillations in occipital and parietal cortices: anticipatory alpha (8-12 Hz) power decreases contralateral and increases ipsilateral to attention, whereas stimulus-induced gamma (>40 Hz) power is boosted contralaterally and attenuated ipsilaterally. These modulations must be under top-down control; however, the control mechanisms are not yet fully understood. Here we investigated the causal contribution of the human frontal eye field (FEF) by combining repetitive transcranial magnetic stimulation (TMS) with subsequent magnetoencephalography. Following inhibitory theta burst stimulation to the left FEF, right FEF, or vertex, participants performed a visual discrimination task requiring covert attention to either visual hemifield. Both left and right FEF TMS caused marked attenuation of alpha modulation in the occipitoparietal cortex. Notably, alpha modulation was consistently reduced in the hemisphere contralateral to stimulation, leaving the ipsilateral hemisphere relatively unaffected. Additionally, right FEF TMS enhanced gamma modulation in left visual cortex. Behaviorally, TMS caused a relative slowing of response times to targets contralateral to stimulation during the early task period. Our results suggest that left and right FEF are causally involved in the attentional top-down control of anticipatory alpha power in the contralateral visual system, whereas a right-hemispheric dominance seems to exist for control of stimulus-induced gamma power. These findings contrast the assumption of primarily intrahemispheric connectivity between FEF and parietal cortex, emphasizing the relevance of interhemispheric interactions. The contralaterality of effects may result from a transient functional reorganization of the dorsal attention network after inhibition of either FEF. Copyright © 2015 the authors 0270-6474/15/351638-10$15.00/0.
Schendan, Haline E.; Ganis, Giorgio
2015-01-01
People categorize objects more slowly when visual input is highly impoverished instead of optimal. While bottom-up models may explain a decision with optimal input, perceptual hypothesis testing (PHT) theories implicate top-down processes with impoverished input. Brain mechanisms and the time course of PHT are largely unknown. This event-related potential study used a neuroimaging paradigm that implicated prefrontal cortex in top-down modulation of occipitotemporal cortex. Subjects categorized more impoverished and less impoverished real and pseudo objects. PHT theories predict larger impoverishment effects for real than pseudo objects because top-down processes modulate knowledge only for real objects, but different PHT variants predict different timing. Consistent with parietal-prefrontal PHT variants, around 250 ms, the earliest impoverished real object interaction started on an N3 complex, which reflects interactive cortical activity for object cognition. N3 impoverishment effects localized to both prefrontal and occipitotemporal cortex for real objects only. The N3 also showed knowledge effects by 230 ms that localized to occipitotemporal cortex. Later effects reflected (a) word meaning in temporal cortex during the N400, (b) internal evaluation of prior decision and memory processes and secondary higher-order memory involving anterotemporal parts of a default mode network during posterior positivity (P600), and (c) response related activity in posterior cingulate during an anterior slow wave (SW) after 700 ms. Finally, response activity in supplementary motor area during a posterior SW after 900 ms showed impoverishment effects that correlated with RTs. Convergent evidence from studies of vision, memory, and mental imagery which reflects purely top-down inputs, indicates that the N3 reflects the critical top-down processes of PHT. A hybrid multiple-state interactive, PHT and decision theory best explains the visual constancy of object cognition. PMID:26441701
Policing Fish at Boston's Museum of Science: Studying Audiovisual Interaction in the Wild
Sun, Yile; Hickey, Timothy J.; Shinn-Cunningham, Barbara; Sekuler, Robert
2015-01-01
Boston's Museum of Science supports researchers whose projects advance science and provide educational opportunities to the Museum's visitors. For our project, 60 visitors to the Museum played “Fish Police!!,” a video game that examines audiovisual integration, including the ability to ignore irrelevant sensory information. Players, who ranged in age from 6 to 82 years, made speeded responses to computer-generated fish that swam rapidly across a tablet display. Responses were to be based solely on the rate (6 or 8 Hz) at which a fish's size modulated, sinusoidally growing and shrinking. Accompanying each fish was a task-irrelevant broadband sound, amplitude modulated at either 6 or 8 Hz. The rates of visual and auditory modulation were either Congruent (both 6 Hz or 8 Hz) or Incongruent (6 and 8 or 8 and 6 Hz). Despite being instructed to ignore the sound, players of all ages responded more accurately and faster when a fish's auditory and visual signatures were Congruent. In a controlled laboratory setting, a related task produced comparable results, demonstrating the robustness of the audiovisual interaction reported here. Some suggestions are made for conducting research in public settings. PMID:27433321
Policing Fish at Boston's Museum of Science: Studying Audiovisual Interaction in the Wild.
Goldberg, Hannah; Sun, Yile; Hickey, Timothy J; Shinn-Cunningham, Barbara; Sekuler, Robert
2015-08-01
Boston's Museum of Science supports researchers whose projects advance science and provide educational opportunities to the Museum's visitors. For our project, 60 visitors to the Museum played "Fish Police!!," a video game that examines audiovisual integration, including the ability to ignore irrelevant sensory information. Players, who ranged in age from 6 to 82 years, made speeded responses to computer-generated fish that swam rapidly across a tablet display. Responses were to be based solely on the rate (6 or 8 Hz) at which a fish's size modulated, sinusoidally growing and shrinking. Accompanying each fish was a task-irrelevant broadband sound, amplitude modulated at either 6 or 8 Hz. The rates of visual and auditory modulation were either Congruent (both 6 Hz or 8 Hz) or Incongruent (6 and 8 or 8 and 6 Hz). Despite being instructed to ignore the sound, players of all ages responded more accurately and faster when a fish's auditory and visual signatures were Congruent. In a controlled laboratory setting, a related task produced comparable results, demonstrating the robustness of the audiovisual interaction reported here. Some suggestions are made for conducting research in public settings.
Oral contraceptive therapy modulates hemispheric asymmetry in spatial attention.
Cicinelli, Ettore; De Tommaso, Marina; Cianci, Antonio; Colacurci, Nicola; Rella, Leonarda; Loiudice, Luisa; Cicinelli, Maria Vittoria; Livrea, Paolo
2011-12-01
Functional cerebral asymmetries (FCAs) are known to fluctuate across the menstrual cycle. The visual line-bisection task administered to normally cycling women showed different patterns of the interhemispheric interactions during menses and the midluteal cycle phase. However, the contribution of estrogen and progestin hormones to this phenomenon is still unclear. The aim of our study was to show a variation of FCAs in women administered oral contraceptives (OCs) using the visual line-bisection task. A visual line-bisection task with three horizontal lines was administered to 36 healthy women taking a 21-day OC. Twenty-nine patients were right-handed. The task was administered during OC intake (day 10) and at the end of the pill-free period. The right-handed women showed a significant leftward bias of veridical center on the first and third lines during OC intake compared with an opposite rightward bias during the pill-free period. The same phenomenon of contralateral deviation was observed in left-handed women on day 10 of OC intake. The results of this study confirm a hormonal modulation of interhemispheric interaction and suggest that OCs may improve the interhemispheric interaction, reducing FCAs compared with the low hormone level period. This opens new insights into OC prescription and the choice of administration schedule in order to improve cognitive performance. Copyright © 2011 Elsevier Inc. All rights reserved.
Romero-Rivas, Carlos; Vera-Constán, Fátima; Rodríguez-Cuadrado, Sara; Puigcerver, Laura; Fernández-Prieto, Irune; Navarra, Jordi
2018-05-10
Musical melodies have "peaks" and "valleys". Although the vertical component of pitch and music is well-known, the mechanisms underlying its mental representation still remain elusive. We show evidence regarding the importance of previous experience with melodies for crossmodal interactions to emerge. The impact of these crossmodal interactions on other perceptual and attentional processes was also studied. Melodies including two tones with different frequency (e.g., E4 and D3) were repeatedly presented during the study. These melodies could either generate strong predictions (e.g., E4-D3-E4-D3-E4-[D3]) or not (e.g., E4-D3-E4-E4-D3-[?]). After the presentation of each melody, the participants had to judge the colour of a visual stimulus that appeared in a position that was, according to the traditional vertical connotations of pitch, either congruent (e.g., high-low-high-low-[up]), incongruent (high-low-high-low-[down]) or unpredicted with respect to the melody. Behavioural and electroencephalographic responses to the visual stimuli were obtained. Congruent visual stimuli elicited faster responses at the end of the experiment than at the beginning. Additionally, incongruent visual stimuli that broke the spatial prediction generated by the melody elicited larger P3b amplitudes (reflecting 'surprise' responses). Our results suggest that the passive (but repeated) exposure to melodies elicits spatial predictions that modulate the processing of other sensory events. Copyright © 2018 Elsevier Ltd. All rights reserved.
Selective attention determines emotional responses to novel visual stimuli.
Raymond, Jane E; Fenske, Mark J; Tavassoli, Nader T
2003-11-01
Distinct complex brain systems support selective attention and emotion, but connections between them suggest that human behavior should reflect reciprocal interactions of these systems. Although there is ample evidence that emotional stimuli modulate attentional processes, it is not known whether attention influences emotional behavior. Here we show that evaluation of the emotional tone (cheery/dreary) of complex but meaningless visual patterns can be modulated by the prior attentional state (attending vs. ignoring) used to process each pattern in a visual selection task. Previously ignored patterns were evaluated more negatively than either previously attended or novel patterns. Furthermore, this emotional devaluation of distracting stimuli was robust across different emotional contexts and response scales. Finding that negative affective responses are specifically generated for ignored stimuli points to a new functional role for attention and elaborates the link between attention and emotion. This finding also casts doubt on the conventional marketing wisdom that any exposure is good exposure.
Lebib, Riadh; Papo, David; de Bode, Stella; Baudonnière, Pierre Marie
2003-05-08
We investigated the existence of a cross-modal sensory gating reflected by the modulation of an early electrophysiological index, the P50 component. We analyzed event-related brain potentials elicited by audiovisual speech stimuli manipulated along two dimensions: congruency and discriminability. The results showed that the P50 was attenuated when visual and auditory speech information were redundant (i.e. congruent), in comparison with the same event-related potential component elicited by discrepant audiovisual dubbing. When hard to discriminate, however, bimodal incongruent speech stimuli elicited a similar pattern of P50 attenuation. We concluded that a visual-to-auditory cross-modal sensory gating phenomenon exists. These results corroborate previous findings revealing a very early audiovisual interaction during speech perception. Finally, we postulated that the sensory gating system includes a cross-modal dimension.
A neural model of the temporal dynamics of figure-ground segregation in motion perception.
Raudies, Florian; Neumann, Heiko
2010-03-01
How does the visual system manage to segment a visual scene into surfaces and objects and to attend to a target object? Based on psychological and physiological investigations, it has been proposed that the perceptual organization and segmentation of a scene is achieved by the processing at different levels of the visual cortical hierarchy. According to this, motion onset detection, motion-defined shape segregation, and target selection are accomplished by processes which bind together simple features into fragments of increasingly complex configurations at different levels in the processing hierarchy. As an alternative to this hierarchical processing hypothesis, it has been proposed that the processing stages for feature detection and segregation are reflected in different temporal episodes in the response patterns of individual neurons. Such temporal epochs have been observed in the activation pattern of neurons as low as in area V1. Here, we present a neural network model of motion detection, figure-ground segregation and attentive selection which explains these response patterns in a unifying framework. Based on known principles of functional architecture of the visual cortex, we propose that initial motion and motion boundaries are detected at different and hierarchically organized stages in the dorsal pathway. Visual shapes that are defined by boundaries, which were generated from juxtaposed opponent motions, are represented at different stages in the ventral pathway. Model areas in the different pathways interact through feedforward and modulating feedback, while mutual interactions enable the communication between motion and form representations. Selective attention is devoted to shape representations by sending modulating feedback signals from higher levels (working memory) to intermediate levels to enhance their responses. Areas in the motion and form pathway are coupled through top-down feedback with V1 cells at the bottom end of the hierarchy. We propose that the different temporal episodes in the response pattern of V1 cells, as recorded in recent experiments, reflect the strength of modulating feedback signals. This feedback results from the consolidated shape representations from coherent motion patterns and the attentive modulation of responses along the cortical hierarchy. The model makes testable predictions concerning the duration and delay of the temporal episodes of V1 cell responses as well as their response variations that were caused by modulating feedback signals. Copyright 2009 Elsevier Ltd. All rights reserved.
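One element of the model described above is modulating feedback, in which top-down signals enhance but do not create feedforward activity. The following minimal sketch illustrates that principle with a simple multiplicative gain rule; the gain value and toy activity profiles are assumptions, and this is not the published model.

```python
# Minimal sketch of the modulating-feedback principle described above:
# feedback from a higher stage enhances, but cannot create, feedforward
# activity at a lower stage (a simple multiplicative gain rule).
# Illustration only, not the full Raudies & Neumann model.
import numpy as np

def modulated_response(feedforward, feedback, gain=2.0):
    """Feedforward drive gated by top-down feedback: r = ff * (1 + gain * fb)."""
    return feedforward * (1.0 + gain * feedback)

ff = np.array([0.0, 0.2, 0.8, 0.2, 0.0])   # bottom-up responses along space
fb = np.array([0.0, 0.0, 1.0, 0.0, 0.0])   # attention/shape feedback at the figure
print(modulated_response(ff, fb))           # only locations with bottom-up drive are enhanced
```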
Asymmetric Dichoptic Masking in Visual Cortex of Amblyopic Macaque Monkeys
Shooner, Christopher; Hallum, Luke E.; García-Marín, Virginia; Kiorpes, Lynne
2017-01-01
In amblyopia, abnormal visual experience leads to an extreme form of eye dominance, in which vision through the nondominant eye is degraded. A key aspect of this disorder is perceptual suppression: the image seen by the stronger eye often dominates during binocular viewing, blocking the image of the weaker eye from reaching awareness. Interocular suppression is the focus of ongoing work aimed at understanding and treating amblyopia, yet its physiological basis remains unknown. We measured binocular interactions in visual cortex of anesthetized amblyopic monkeys (female Macaca nemestrina), using 96-channel “Utah” arrays to record from populations of neurons in V1 and V2. In an experiment reported recently (Hallum et al., 2017), we found that reduced excitatory input from the amblyopic eye (AE) revealed a form of balanced binocular suppression that is unaltered in amblyopia. Here, we report on the modulation of the gain of excitatory signals from the AE by signals from its dominant fellow eye (FE). Using a dichoptic masking technique, we found that AE responses to grating stimuli were attenuated by the presentation of a noise mask to the FE, as in a normal control animal. Responses to FE stimuli, by contrast, could not be masked from the AE. We conclude that a weakened ability of the amblyopic eye to modulate cortical response gain creates an imbalance of suppression that favors the dominant eye. SIGNIFICANCE STATEMENT In amblyopia, vision in one eye is impaired as a result of abnormal early visual experience. Behavioral observations in humans with amblyopia suggest that much of their visual loss is due to active suppression of their amblyopic eye. Here we describe experiments in which we studied binocular interactions in macaques with experimentally induced amblyopia. In normal monkeys, the gain of neuronal response to stimulation of one eye is modulated by contrast in the other eye, but in monkeys with amblyopia the balance of gain modulation is altered so that the weaker, amblyopic eye has little effect while the stronger fellow eye has a strong effect. This asymmetric suppression may be a key component of the perceptual losses in amblyopia. PMID:28760867
Asymmetric Dichoptic Masking in Visual Cortex of Amblyopic Macaque Monkeys.
Shooner, Christopher; Hallum, Luke E; Kumbhani, Romesh D; García-Marín, Virginia; Kelly, Jenna G; Majaj, Najib J; Movshon, J Anthony; Kiorpes, Lynne
2017-09-06
In amblyopia, abnormal visual experience leads to an extreme form of eye dominance, in which vision through the nondominant eye is degraded. A key aspect of this disorder is perceptual suppression: the image seen by the stronger eye often dominates during binocular viewing, blocking the image of the weaker eye from reaching awareness. Interocular suppression is the focus of ongoing work aimed at understanding and treating amblyopia, yet its physiological basis remains unknown. We measured binocular interactions in visual cortex of anesthetized amblyopic monkeys (female Macaca nemestrina ), using 96-channel "Utah" arrays to record from populations of neurons in V1 and V2. In an experiment reported recently (Hallum et al., 2017), we found that reduced excitatory input from the amblyopic eye (AE) revealed a form of balanced binocular suppression that is unaltered in amblyopia. Here, we report on the modulation of the gain of excitatory signals from the AE by signals from its dominant fellow eye (FE). Using a dichoptic masking technique, we found that AE responses to grating stimuli were attenuated by the presentation of a noise mask to the FE, as in a normal control animal. Responses to FE stimuli, by contrast, could not be masked from the AE. We conclude that a weakened ability of the amblyopic eye to modulate cortical response gain creates an imbalance of suppression that favors the dominant eye. SIGNIFICANCE STATEMENT In amblyopia, vision in one eye is impaired as a result of abnormal early visual experience. Behavioral observations in humans with amblyopia suggest that much of their visual loss is due to active suppression of their amblyopic eye. Here we describe experiments in which we studied binocular interactions in macaques with experimentally induced amblyopia. In normal monkeys, the gain of neuronal response to stimulation of one eye is modulated by contrast in the other eye, but in monkeys with amblyopia the balance of gain modulation is altered so that the weaker, amblyopic eye has little effect while the stronger fellow eye has a strong effect. This asymmetric suppression may be a key component of the perceptual losses in amblyopia. Copyright © 2017 the authors 0270-6474/17/378734-08$15.00/0.
Modulation of Temporal Precision in Thalamic Population Responses to Natural Visual Stimuli
Desbordes, Gaëlle; Jin, Jianzhong; Alonso, Jose-Manuel; Stanley, Garrett B.
2010-01-01
Natural visual stimuli have highly structured spatial and temporal properties which influence the way visual information is encoded in the visual pathway. In response to natural scene stimuli, neurons in the lateral geniculate nucleus (LGN) are temporally precise – on a time scale of 10–25 ms – both within single cells and across cells within a population. This time scale, established by non stimulus-driven elements of neuronal firing, is significantly shorter than that of natural scenes, yet is critical for the neural representation of the spatial and temporal structure of the scene. Here, a generalized linear model (GLM) that combines stimulus-driven elements with spike-history dependence associated with intrinsic cellular dynamics is shown to predict the fine timing precision of LGN responses to natural scene stimuli, the corresponding correlation structure across nearby neurons in the population, and the continuous modulation of spike timing precision and latency across neurons. A single model captured the experimentally observed neural response, across different levels of contrasts and different classes of visual stimuli, through interactions between the stimulus correlation structure and the nonlinearity in spike generation and spike history dependence. Given the sensitivity of the thalamocortical synapse to closely timed spikes and the importance of fine timing precision for the faithful representation of natural scenes, the modulation of thalamic population timing over these time scales is likely important for cortical representations of the dynamic natural visual environment. PMID:21151356
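The abstract above describes a generalized linear model combining a stimulus filter with spike-history dependence. The sketch below simulates a Poisson-style GLM of that general class; the filter shapes, baseline rate, and time resolution are arbitrary placeholders rather than the fitted LGN model.

```python
# Sketch of the class of model described above: a GLM whose conditional
# intensity combines a stimulus filter with a spike-history filter. Filter
# shapes and parameters are placeholders, not the fitted LGN model.
import numpy as np

rng = np.random.default_rng(1)
T, dt = 5000, 0.001                        # 5 s of 1 ms bins
stim = rng.standard_normal(T)              # stand-in for the natural-scene drive
k = 0.5 * np.exp(-np.arange(30) / 10.0)    # stimulus filter (30 ms, arbitrary)
h = -4.0 * np.exp(-np.arange(20) / 5.0)    # spike-history filter (refractoriness)
b = np.log(20.0)                           # baseline log rate (~20 spikes/s)

spikes = np.zeros(T)
for t in range(T):
    s_hist = stim[max(0, t - len(k) + 1):t + 1][::-1]    # recent stimulus, newest first
    drive = b + k[:len(s_hist)] @ s_hist
    sp_hist = spikes[max(0, t - len(h)):t][::-1]         # recent spikes, newest first
    drive += h[:len(sp_hist)] @ sp_hist
    rate = np.exp(drive)                                 # conditional intensity (spikes/s)
    spikes[t] = rng.random() < rate * dt                 # Bernoulli approximation per bin

print(int(spikes.sum()), "spikes in 5 s")
```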
Price, Erika Leemann; Mackenzie, Thomas D; Metlay, Joshua P; Camargo, Carlos A; Gonzales, Ralph
2011-12-01
Over-use of antibiotics for acute respiratory infections (ARIs) increases antimicrobial resistance, treatment costs, and side effects. Patient desire for antibiotics contributes to over-use. To explore whether a point-of-care interactive computerized education module increases patient knowledge and decreases desire for antibiotics. Bilingual (English/Spanish) interactive kiosks were available in 8 emergency departments as part of a multidimensional intervention to reduce antibiotic prescribing for ARIs. The symptom-tailored module included assessment of symptoms, knowledge about ARIs (3 items), and desire for antibiotics on a 10-point visual analog scale. Multivariable analysis assessed predictors of change in desire for antibiotics. Of 686 adults with ARI symptoms, 63% initially thought antibiotics might help. The proportion of patients with low (1-3 on the scale) desire for antibiotics increased from 22% pre-module to 49% post-module (p<.001). Self-report of "learning something new" was associated with decreased desire for antibiotics, after adjusting for baseline characteristics (p=.001). An interactive educational kiosk improved knowledge about antibiotics and ARIs. Learning correlated with changes in personal desire for antibiotics. By reducing desire for antibiotics, point-of-care interactive educational computer technology may help decrease inappropriate use of antibiotics for ARIs. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Bastos, Andre M; Briggs, Farran; Alitto, Henry J; Mangun, George R; Usrey, W Martin
2014-05-28
Oscillatory synchronization of neuronal activity has been proposed as a mechanism to modulate effective connectivity between interacting neuronal populations. In the visual system, oscillations in the gamma-frequency range (30-100 Hz) are thought to subserve corticocortical communication. To test whether a similar mechanism might influence subcortical-cortical communication, we recorded local field potential activity from retinotopically aligned regions in the lateral geniculate nucleus (LGN) and primary visual cortex (V1) of alert macaque monkeys viewing stimuli known to produce strong cortical gamma-band oscillations. As predicted, we found robust gamma-band power in V1. In contrast, visual stimulation did not evoke gamma-band activity in the LGN. Interestingly, an analysis of oscillatory phase synchronization of LGN and V1 activity identified synchronization in the alpha (8-14 Hz) and beta (15-30 Hz) frequency bands. Further analysis of directed connectivity revealed that alpha-band interactions mediated corticogeniculate feedback processing, whereas beta-band interactions mediated geniculocortical feedforward processing. These results demonstrate that although the LGN and V1 display functional interactions in the lower frequency bands, gamma-band activity in the alert monkey is largely an emergent property of cortex. Copyright © 2014 the authors 0270-6474/14/347639-06$15.00/0.
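One ingredient of the analysis above is oscillatory phase synchronization between retinotopically aligned LGN and V1 field potentials. The sketch below computes a band-limited phase-locking value between two synthetic signals using SciPy; the band edges, sampling rate, and signals are placeholders, and the directed (Granger-style) connectivity analysis is not reproduced here.

```python
# Sketch of one ingredient of the analysis described above: band-limited
# phase synchronization (phase-locking value) between two LFP-like signals.
# Band edges and sampling rate are placeholders.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def phase_locking_value(x, y, fs, band):
    """PLV between x and y within a frequency band (e.g. alpha: 8-14 Hz)."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phx = np.angle(hilbert(filtfilt(b, a, x)))
    phy = np.angle(hilbert(filtfilt(b, a, y)))
    return np.abs(np.mean(np.exp(1j * (phx - phy))))

fs = 1000.0
t = np.arange(0, 5, 1 / fs)
rng = np.random.default_rng(2)
common = np.sin(2 * np.pi * 10 * t)                            # shared 10 Hz rhythm
lgn = common + 0.5 * rng.standard_normal(t.size)               # stand-in "LGN" LFP
v1 = np.roll(common, 20) + 0.5 * rng.standard_normal(t.size)   # delayed "V1" LFP
print(phase_locking_value(lgn, v1, fs, (8, 14)))               # high alpha-band PLV
```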
Interaction Between Spatial and Feature Attention in Posterior Parietal Cortex
Ibos, Guilhem; Freedman, David J.
2016-01-01
Lateral intraparietal (LIP) neurons encode a vast array of sensory and cognitive variables. Recently, we proposed that the flexibility of feature representations in LIP reflects the bottom-up integration of sensory signals, modulated by feature-based attention (FBA), from upstream feature-selective cortical neurons. Moreover, LIP activity is also strongly modulated by the position of space-based attention (SBA). However, the mechanisms by which SBA and FBA interact to facilitate the representation of task-relevant spatial and non-spatial features in LIP remain unclear. We recorded from LIP neurons during performance of a task which required monkeys to detect specific conjunctions of color, motion-direction, and stimulus position. Here we show that FBA and SBA potentiate each other’s effect in a manner consistent with attention gating the flow of visual information along the cortical visual pathway. Our results suggest that linear bottom-up integrative mechanisms allow LIP neurons to emphasize task-relevant spatial and non-spatial features. PMID:27499082
Interaction between Spatial and Feature Attention in Posterior Parietal Cortex.
Ibos, Guilhem; Freedman, David J
2016-08-17
Lateral intraparietal (LIP) neurons encode a vast array of sensory and cognitive variables. Recently, we proposed that the flexibility of feature representations in LIP reflects the bottom-up integration of sensory signals, modulated by feature-based attention (FBA), from upstream feature-selective cortical neurons. Moreover, LIP activity is also strongly modulated by the position of space-based attention (SBA). However, the mechanisms by which SBA and FBA interact to facilitate the representation of task-relevant spatial and non-spatial features in LIP remain unclear. We recorded from LIP neurons during performance of a task that required monkeys to detect specific conjunctions of color, motion direction, and stimulus position. Here we show that FBA and SBA potentiate each other's effect in a manner consistent with attention gating the flow of visual information along the cortical visual pathway. Our results suggest that linear bottom-up integrative mechanisms allow LIP neurons to emphasize task-relevant spatial and non-spatial features. Copyright © 2016 Elsevier Inc. All rights reserved.
Top-down processing of symbolic meanings modulates the visual word form area.
Song, Yiying; Tian, Moqian; Liu, Jia
2012-08-29
Functional magnetic resonance imaging (fMRI) studies on humans have identified a region in the left middle fusiform gyrus consistently activated by written words. This region is called the visual word form area (VWFA). Recently, a hypothesis called the interactive account has been proposed: to effectively analyze the bottom-up visual properties of words, the VWFA receives predictive feedback from higher-order regions engaged in processing sounds, meanings, or actions associated with words. Further, this top-down influence on the VWFA is independent of stimulus format. To test this hypothesis, we used fMRI to examine whether a symbolic nonword object (e.g., the Eiffel Tower) intended to represent something other than itself (i.e., Paris) could activate the VWFA. We found that scenes associated with symbolic meanings elicited a higher VWFA response than those not associated with symbolic meanings, and such top-down modulation of the VWFA can be established through short-term associative learning, even across modalities. In addition, the magnitude of the symbolic effect observed in the VWFA was positively correlated with the subjective experience of the strength of symbol-referent association across individuals. Therefore, the VWFA is likely a neural substrate for the interaction of the top-down processing of symbolic meanings with the analysis of bottom-up visual properties of sensory inputs, making the VWFA the location where the symbolic meaning of both words and nonword objects is represented.
Ding, Zhaofeng; Li, Jinrong; Spiegel, Daniel P.; Chen, Zidong; Chan, Lily; Luo, Guangwei; Yuan, Junpeng; Deng, Daming; Yu, Minbin; Thompson, Benjamin
2016-01-01
Amblyopia is a neurodevelopmental disorder of vision that occurs when the visual cortex receives decorrelated inputs from the two eyes during an early critical period of development. Amblyopic eyes are subject to suppression from the fellow eye, generate weaker visual evoked potentials (VEPs) than fellow eyes and have multiple visual deficits including impairments in visual acuity and contrast sensitivity. Primate models and human psychophysics indicate that stronger suppression is associated with greater deficits in amblyopic eye contrast sensitivity and visual acuity. We tested whether transcranial direct current stimulation (tDCS) of the visual cortex would modulate VEP amplitude and contrast sensitivity in adults with amblyopia. tDCS can transiently alter cortical excitability and may influence suppressive neural interactions. Twenty-one patients with amblyopia and twenty-seven controls completed separate sessions of anodal (a-), cathodal (c-) and sham (s-) visual cortex tDCS. A-tDCS transiently and significantly increased VEP amplitudes for amblyopic, fellow and control eyes and contrast sensitivity for amblyopic and control eyes. C-tDCS decreased VEP amplitude and contrast sensitivity and s-tDCS had no effect. These results suggest that tDCS can modulate visual cortex responses to information from adult amblyopic eyes and provide a foundation for future clinical studies of tDCS in adults with amblyopia. PMID:26763954
Ding, Zhaofeng; Li, Jinrong; Spiegel, Daniel P; Chen, Zidong; Chan, Lily; Luo, Guangwei; Yuan, Junpeng; Deng, Daming; Yu, Minbin; Thompson, Benjamin
2016-01-14
Amblyopia is a neurodevelopmental disorder of vision that occurs when the visual cortex receives decorrelated inputs from the two eyes during an early critical period of development. Amblyopic eyes are subject to suppression from the fellow eye, generate weaker visual evoked potentials (VEPs) than fellow eyes and have multiple visual deficits including impairments in visual acuity and contrast sensitivity. Primate models and human psychophysics indicate that stronger suppression is associated with greater deficits in amblyopic eye contrast sensitivity and visual acuity. We tested whether transcranial direct current stimulation (tDCS) of the visual cortex would modulate VEP amplitude and contrast sensitivity in adults with amblyopia. tDCS can transiently alter cortical excitability and may influence suppressive neural interactions. Twenty-one patients with amblyopia and twenty-seven controls completed separate sessions of anodal (a-), cathodal (c-) and sham (s-) visual cortex tDCS. A-tDCS transiently and significantly increased VEP amplitudes for amblyopic, fellow and control eyes and contrast sensitivity for amblyopic and control eyes. C-tDCS decreased VEP amplitude and contrast sensitivity and s-tDCS had no effect. These results suggest that tDCS can modulate visual cortex responses to information from adult amblyopic eyes and provide a foundation for future clinical studies of tDCS in adults with amblyopia.
Spatial Scaling of the Profile of Selective Attention in the Visual Field.
Gannon, Matthew A; Knapp, Ashley A; Adams, Thomas G; Long, Stephanie M; Parks, Nathan A
2016-01-01
Neural mechanisms of selective attention must be capable of adapting to variation in the absolute size of an attended stimulus in the ever-changing visual environment. To date, little is known regarding how attentional selection interacts with fluctuations in the spatial expanse of an attended object. Here, we use event-related potentials (ERPs) to investigate the scaling of attentional enhancement and suppression across the visual field. We measured ERPs while participants performed a task at fixation that varied in its attentional demands (attentional load) and visual angle (1.0° or 2.5°). Observers were presented with a stream of task-relevant stimuli while foveal, parafoveal, and peripheral visual locations were probed by irrelevant distractor stimuli. We found two important effects in the N1 component of visual ERPs. First, N1 modulations to task-relevant stimuli indexed attentional selection of stimuli during the load task and further correlated with task performance. Second, with increased task size, attentional modulation of the N1 to distractor stimuli showed a differential pattern that was consistent with a scaling of attentional selection. Together, these results demonstrate that the size of an attended stimulus scales the profile of attentional selection across the visual field and provides insights into the attentional mechanisms associated with such spatial scaling.
NASA Astrophysics Data System (ADS)
Chen, ChuXin; Trivedi, Mohan M.
1992-03-01
This research is focused on enhancing the overall productivity of an integrated human-robot system. A simulation, animation, visualization, and interactive control (SAVIC) environment has been developed for the design and operation of an integrated robotic manipulator system. This unique system possesses the abilities for multisensor simulation, kinematics and locomotion animation, dynamic motion and manipulation animation, transformation between real and virtual modes within the same graphics system, ease in exchanging software modules and hardware devices between real and virtual world operations, and interfacing with a real robotic system. This paper describes a working system and illustrates the concepts by presenting the simulation, animation, and control methodologies for a unique mobile robot with articulated tracks, a manipulator, and sensory modules.
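The system above integrates kinematics animation with interactive control of a manipulator. As a purely illustrative sketch of the kind of per-frame kinematics update such an environment evaluates, and not the SAVIC code itself, the following computes the forward kinematics of a two-link planar arm; the link lengths and joint trajectory are arbitrary.

```python
# Illustrative sketch only (not the SAVIC system's code): forward kinematics
# of a 2-link planar manipulator, the kind of kinematics update an animation
# loop would evaluate each frame. Link lengths and joint angles are arbitrary.
import numpy as np

def forward_kinematics(theta1, theta2, l1=1.0, l2=0.7):
    """Joint positions of a 2-link planar arm for given joint angles (rad)."""
    elbow = np.array([l1 * np.cos(theta1), l1 * np.sin(theta1)])
    tip = elbow + np.array([l2 * np.cos(theta1 + theta2),
                            l2 * np.sin(theta1 + theta2)])
    return elbow, tip

# One pass of an "animation" loop: sweep the joints and report the end effector
for step in range(5):
    t1, t2 = 0.1 * step, 0.2 * step
    _, tip = forward_kinematics(t1, t2)
    print(f"frame {step}: end effector at ({tip[0]:.2f}, {tip[1]:.2f})")
```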
Perceptual learning improves contrast sensitivity, visual acuity, and foveal crowding in amblyopia.
Barollo, Michele; Contemori, Giulio; Battaglini, Luca; Pavan, Andrea; Casco, Clara
2017-01-01
Amblyopic observers present abnormal spatial interactions between a low-contrast sinusoidal target and high-contrast collinear flankers. It has been demonstrated that perceptual learning (PL) can modulate these low-level lateral interactions, resulting in improved visual acuity and contrast sensitivity. We measured the extent and duration of generalization effects to various spatial tasks (i.e., visual acuity, Vernier acuity, and foveal crowding) through PL on the target's contrast detection. Amblyopic observers were trained on a contrast-detection task for a central target (i.e., a Gabor patch) flanked above and below by two high-contrast Gabor patches. The pre- and post-learning tasks included lateral interactions at different target-to-flankers separations (i.e., 2, 3, 4, 8λ) and included a range of spatial frequencies and stimulus durations as well as visual acuity, Vernier acuity, contrast-sensitivity function, and foveal crowding. The results showed that perceptual training reduced the target's contrast-detection thresholds more for the longest target-to-flanker separation (i.e., 8λ). We also found generalization of PL to different stimuli and tasks: contrast sensitivity for both trained and untrained spatial frequencies, visual acuity for Sloan letters, and foveal crowding, and partially for Vernier acuity. Follow-ups after 5-7 months showed not only complete maintenance of PL effects on visual acuity and contrast sensitivity function but also further improvement in these tasks. These results suggest that PL improves facilitatory lateral interactions in amblyopic observers, which usually extend over larger separations than in typical foveal vision. The improvement in these basic visual spatial operations leads to a more efficient capability of performing spatial tasks involving high levels of visual processing, possibly due to the refinement of bottom-up and top-down networks of visual areas.
Transforming Clinical Imaging Data for Virtual Reality Learning Objects
ERIC Educational Resources Information Center
Trelease, Robert B.; Rosset, Antoine
2008-01-01
Advances in anatomical informatics, three-dimensional (3D) modeling, and virtual reality (VR) methods have made computer-based structural visualization a practical tool for education. In this article, the authors describe streamlined methods for producing VR "learning objects," standardized interactive software modules for anatomical sciences…
Making Controlled Experimentation More Informative in Inquiry Investigations
ERIC Educational Resources Information Center
McElhaney, Kevin Wei Hong
2010-01-01
This dissertation incorporates three studies that examine how the design of inquiry based science instruction, dynamic visualizations, and guidance for experimentation contribute to physics students' understanding of science. I designed a week-long, technology-enhanced inquiry module on car collisions that logs students' interactions with a…
3D Immersive Visualization with Astrophysical Data
NASA Astrophysics Data System (ADS)
Kent, Brian R.
2017-01-01
We present the refinement of a new 3D immersion technique for astrophysical data visualization. Methodology for creating 360-degree spherical panoramas is reviewed. The 3D software package Blender, coupled with Python and the Google Spatial Media module, is used to create the final data products. Data can be viewed interactively with a mobile phone or tablet or in a web browser. The technique can apply to different kinds of astronomical data including 3D stellar and galaxy catalogs, images, and planetary maps.
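A minimal sketch of the kind of Blender/Python setup the abstract describes is given below: configuring the Cycles renderer for an equirectangular 360-degree still. The property names follow the Blender 2.7x/2.8x-era Cycles API and may differ in newer releases; the output path and resolution are placeholders.

```python
# A minimal sketch of the kind of Blender/Python setup described: configure the
# Cycles renderer for an equirectangular (360-degree) panorama and render a
# still. Property names follow the Blender 2.7x/2.8x-era Cycles API and may
# differ in newer Blender releases; the output path is a placeholder.
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'

cam = bpy.data.objects['Camera']
cam.data.type = 'PANO'
cam.data.cycles.panorama_type = 'EQUIRECTANGULAR'

# Equirectangular spherical panoramas use a 2:1 aspect ratio
scene.render.resolution_x = 4096
scene.render.resolution_y = 2048
scene.render.filepath = '/tmp/panorama.png'

bpy.ops.render.render(write_still=True)
```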
Spherical Panoramas for Astrophysical Data Visualization
NASA Astrophysics Data System (ADS)
Kent, Brian R.
2017-05-01
Data immersion has advantages in astrophysical visualization. Complex multi-dimensional data and phase spaces can be explored in a seamless and interactive viewing environment. Putting the user in the data is a first step toward immersive data analysis. We present a technique for creating 360° spherical panoramas with astrophysical data. The three-dimensional software package Blender and the Google Spatial Media module are used together to immerse users in data exploration. Several examples employing these methods exhibit how the technique works using different types of astronomical data.
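Spherical panoramas are typically tagged with spherical metadata so that 360-degree players interpret them correctly; the Google Spatial Media project provides a metadata injector for this purpose. The call below is a hedged sketch of invoking that injector as a command-line tool; the exact module name, flag, and file names are assumptions that should be checked against the project's documentation.

```python
# Hedged sketch: after rendering an equirectangular movie, 360-degree players
# need spherical metadata. The Google Spatial Media project ships a metadata
# injector commonly run as a command-line tool; the invocation below (module
# name, -i flag, file names) is an assumption about the installed version and
# should be verified against the project's README.
import subprocess

subprocess.run(
    ["python", "spatialmedia", "-i", "panorama.mp4", "panorama_injected.mp4"],
    check=True,
)
```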
Dynamic interactions between visual working memory and saccade target selection
Schneegans, Sebastian; Spencer, John P.; Schöner, Gregor; Hwang, Seongmin; Hollingworth, Andrew
2014-01-01
Recent psychophysical experiments have shown that working memory for visual surface features interacts with saccadic motor planning, even in tasks where the saccade target is unambiguously specified by spatial cues. Specifically, a match between a memorized color and the color of either the designated target or a distractor stimulus influences saccade target selection, saccade amplitudes, and latencies in a systematic fashion. To elucidate these effects, we present a dynamic neural field model in combination with new experimental data. The model captures the neural processes underlying visual perception, working memory, and saccade planning relevant to the psychophysical experiment. It consists of a low-level visual sensory representation that interacts with two separate pathways: a spatial pathway implementing spatial attention and saccade generation, and a surface feature pathway implementing color working memory and feature attention. Due to bidirectional coupling between visual working memory and feature attention in the model, the working memory content can indirectly exert an effect on perceptual processing in the low-level sensory representation. This in turn biases saccadic movement planning in the spatial pathway, allowing the model to quantitatively reproduce the observed interaction effects. The continuous coupling between representations in the model also implies that modulation should be bidirectional, and model simulations provide specific predictions for complementary effects of saccade target selection on visual working memory. These predictions were empirically confirmed in a new experiment: Memory for a sample color was biased toward the color of a task-irrelevant saccade target object, demonstrating the bidirectional coupling between visual working memory and perceptual processing. PMID:25228628
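For readers unfamiliar with dynamic neural fields, the sketch below implements only the generic one-dimensional Amari-type field equation (leaky integration plus lateral interaction through a local-excitation, global-inhibition kernel) that such models build on. It is not the authors' multi-pathway model, and all parameters are illustrative.

```python
import numpy as np

# One-dimensional Amari-style neural field:
#   tau * du/dt = -u + h + S(x, t) + integral w(x - x') f(u(x', t)) dx'
# Generic building block only; parameters are illustrative.

n, dx, dt, tau, h = 181, 1.0, 1.0, 20.0, -5.0   # field size, grid step, time step, time constant, resting level
x = np.arange(n) * dx

def f(u, beta=1.0):                              # sigmoidal output nonlinearity
    return 1.0 / (1.0 + np.exp(-beta * u))

def kernel(d, c_exc=15.0, sigma_exc=5.0, c_inh=5.0):
    return c_exc * np.exp(-d**2 / (2 * sigma_exc**2)) - c_inh   # local excitation, global inhibition

d = x[:, None] - x[None, :]
W = kernel(d) * dx                               # interaction matrix sampled on the grid

u = np.full(n, h)
S = 7.0 * np.exp(-(x - 90.0)**2 / (2 * 5.0**2))  # localized external input (e.g., a visual stimulus)

for _ in range(300):                             # Euler integration
    u += dt / tau * (-u + h + S + W @ f(u))

# After the loop, u shows a localized, self-stabilized activation peak
# around the stimulated location.
```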
FUn: a framework for interactive visualizations of large, high-dimensional datasets on the web.
Probst, Daniel; Reymond, Jean-Louis
2018-04-15
During the past decade, big data have become a major tool in scientific endeavors. Although statistical methods and algorithms are well-suited for analyzing and summarizing enormous amounts of data, the results do not allow for a visual inspection of the entire data set. Current scientific software, including R packages and Python libraries such as ggplot2, matplotlib and plot.ly, does not support interactive visualizations of datasets exceeding 100 000 data points on the web. Other solutions enable the web-based visualization of big data only through data reduction or statistical representations. However, recent hardware developments, especially advancements in graphical processing units, allow for the rendering of millions of data points on a wide range of consumer hardware such as laptops, tablets and mobile phones. Similar to the challenges and opportunities brought to virtually every scientific field by big data, both the visualization of and interaction with copious amounts of data are demanding and hold great promise. Here we present FUn, a framework consisting of a client (Faerun) and server (Underdark) module, facilitating the creation of web-based, interactive 3D visualizations of large datasets, enabling record level visual inspection. We also introduce a reference implementation providing access to SureChEMBL, a database containing patent information on more than 17 million chemical compounds. The source code and the most recent builds of Faerun and Underdark, Lore.js and the data preprocessing toolchain used in the reference implementation, are available on the project website (http://doc.gdb.tools/fun/). daniel.probst@dcb.unibe.ch or jean-louis.reymond@dcb.unibe.ch.
Interactive Multimedia Module with Pedagogical Agents: Formative Evaluation
ERIC Educational Resources Information Center
Lee, Tien Tien; Osman, Kamisah
2012-01-01
Electrochemistry is found to be a difficult topic to learn due to its abstract concepts that involve three representation levels. Research showed that animation and simulation using Information and Communication Technology can help students to visualize and thus enhance students' understanding in learning abstract chemistry topics. As a result, an…
Exploratory Visualization of Graphs Based on Community Structure
ERIC Educational Resources Information Center
Liu, Yujie
2013-01-01
Communities, also called clusters or modules, are groups of nodes which probably share common properties and/or play similar roles within a graph. They widely exist in real networks such as biological, social, and information networks. Allowing users to interactively browse and explore the community structure, which is essential for understanding…
Using Simulation Module, PCLAB, for Steady State Disturbance Sensitivity Analysis in Process Control
ERIC Educational Resources Information Center
Ali, Emad; Idriss, Arimiyawo
2009-01-01
Recently, chemical engineering education has moved towards utilizing simulation software to enhance the learning process, especially in the field of process control. These training simulators provide interactive learning through visualization and practicing, which will bridge the gap between the theoretical abstraction of textbooks and the…
Visualizing and Understanding Probability and Statistics: Graphical Simulations Using Excel
ERIC Educational Resources Information Center
Gordon, Sheldon P.; Gordon, Florence S.
2009-01-01
The authors describe a collection of dynamic interactive simulations for teaching and learning most of the important ideas and techniques of introductory statistics and probability. The modules cover such topics as randomness, simulations of probability experiments such as coin flipping, dice rolling and general binomial experiments, a simulation…
Brightness and transparency in the early visual cortex.
Salmela, Viljami R; Vanni, Simo
2013-06-24
Several psychophysical studies have shown that transparency can have drastic effects on brightness and lightness. However, the neural processes generating these effects have remained unresolved. Several lines of evidence suggest that the early visual cortex is important for brightness perception. While single cell recordings suggest that surface brightness is represented in the primary visual cortex, the results of functional magnetic resonance imaging (fMRI) studies have been discrepant. In addition, the location of the neural representation of transparency is not yet known. We investigated whether the fMRI responses in areas V1, V2, and V3 correlate with brightness and transparency. To dissociate the blood oxygen level-dependent (BOLD) response to brightness from the response to local border contrast and mean luminance, we used variants of White's brightness illusion, both opaque and transparent, in which luminance increments and decrements cancel each other out. The stimuli consisted of a target surface and a surround. The surround luminance was always sinusoidally modulated at 0.5 Hz to induce brightness modulation to the target. The target luminance was constant or modulated in counterphase to null brightness modulation. The mean signal changes were calculated from the voxels in V1, V2, and V3 corresponding to the retinotopic location of the target surface. The BOLD responses were significantly stronger for modulating brightness than for stimuli with constant brightness. In addition, the responses were stronger for transparent than for opaque stimuli, but there was more individual variation. No interaction between brightness and transparency was found. The results show that the early visual areas V1-V3 are sensitive to surface brightness and transparency and suggest that brightness and transparency are represented separately.
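A minimal sketch of the luminance time courses described above, assuming arbitrary mean luminance and modulation depths: the surround is modulated sinusoidally at 0.5 Hz while the target is either physically constant or modulated in counterphase to null the induced brightness change.

```python
import numpy as np

fs, dur = 60.0, 16.0                        # frame rate (Hz) and block duration (s), illustrative
t = np.arange(0, dur, 1.0 / fs)
f_mod = 0.5                                 # surround modulation frequency from the study (0.5 Hz)

L0, A_surround, A_target = 50.0, 20.0, 5.0  # mean luminance and modulation depths (cd/m^2, illustrative)

surround = L0 + A_surround * np.sin(2 * np.pi * f_mod * t)

target_constant = np.full_like(t, L0)                                # physically constant target
target_counterphase = L0 - A_target * np.sin(2 * np.pi * f_mod * t)  # counterphase modulation used
                                                                     # to null induced brightness
```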
Tanaka, Yukari; Fukushima, Hirokata; Okanoya, Kazuo; Myowa-Yamakoshi, Masako
2014-10-17
Social learning in infancy is known to be facilitated by multimodal (e.g., visual, tactile, and verbal) cues provided by caregivers. In parallel with infants' development, recent research has revealed that maternal neural activity is altered through interaction with infants, for instance, to be sensitive to infant-directed speech (IDS). The present study investigated the effect of mother-infant multimodal interaction on maternal neural activity. Event-related potentials (ERPs) of mothers were compared to non-mothers during perception of tactile-related words primed by tactile cues. Only mothers showed ERP modulation when tactile cues were incongruent with the subsequent words, and only when the words were delivered with IDS prosody. Furthermore, the frequency of mothers' use of those words was correlated with the magnitude of ERP differentiation between congruent and incongruent stimuli presentations. These results suggest that mother-infant daily interactions enhance multimodal integration of the maternal brain in parenting contexts.
iCanPlot: Visual Exploration of High-Throughput Omics Data Using Interactive Canvas Plotting
Sinha, Amit U.; Armstrong, Scott A.
2012-01-01
Increasing use of high throughput genomic scale assays requires effective visualization and analysis techniques to facilitate data interpretation. Moreover, existing tools often require programming skills, which discourages bench scientists from examining their own data. We have created iCanPlot, a compelling platform for visual data exploration based on the latest technologies. Using the recently adopted HTML5 Canvas element, we have developed a highly interactive tool to visualize tabular data and identify interesting patterns in an intuitive fashion without the need of any specialized computing skills. A module for geneset overlap analysis has been implemented on the Google App Engine platform: when the user selects a region of interest in the plot, the genes in the region are analyzed on the fly. The visualization and analysis are amalgamated for a seamless experience. Further, users can easily upload their data for analysis—which also makes it simple to share the analysis with collaborators. We illustrate the power of iCanPlot by showing an example of how it can be used to interpret histone modifications in the context of gene expression. PMID:22393367
MINE: Module Identification in Networks
2011-01-01
Background: Graphical models of network associations are useful for both visualizing and integrating multiple types of association data. Identifying modules, or groups of functionally related gene products, is an important challenge in analyzing biological networks. However, existing tools to identify modules are insufficient when applied to dense networks of experimentally derived interaction data. To address this problem, we have developed an agglomerative clustering method that is able to identify highly modular sets of gene products within highly interconnected molecular interaction networks. Results: MINE outperforms MCODE, CFinder, NEMO, SPICi, and MCL in identifying non-exclusive, high modularity clusters when applied to the C. elegans protein-protein interaction network. The algorithm generally achieves superior geometric accuracy and modularity for annotated functional categories. In comparison with the most closely related algorithm, MCODE, the top clusters identified by MINE are consistently of higher density and MINE is less likely to designate overlapping modules as a single unit. MINE offers a high level of granularity with a small number of adjustable parameters, enabling users to fine-tune cluster results for input networks with differing topological properties. Conclusions: MINE was created in response to the challenge of discovering high quality modules of gene products within highly interconnected biological networks. The algorithm allows a high degree of flexibility and user-customisation of results with few adjustable parameters. MINE outperforms several popular clustering algorithms in identifying modules with high modularity and obtains good overall recall and precision of functional annotations in protein-protein interaction networks from both S. cerevisiae and C. elegans. PMID:21605434
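MINE itself is not reproduced here, but the general task it addresses (finding high-modularity modules in an interaction network) can be sketched with networkx's greedy modularity clustering as a stand-in; the toy edge list is a placeholder for an experimentally derived PPI network.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

# Toy protein-protein interaction network; edges would normally come from
# experimentally derived interaction data (e.g., a BioGRID export).
edges = [("A", "B"), ("A", "C"), ("B", "C"),   # module 1
         ("D", "E"), ("D", "F"), ("E", "F"),   # module 2
         ("C", "D")]                           # bridge between modules
G = nx.Graph(edges)

communities = greedy_modularity_communities(G)  # agglomerative, modularity-maximizing clustering
Q = modularity(G, communities)

for i, module in enumerate(communities, start=1):
    print(f"module {i}: {sorted(module)}")
print(f"modularity Q = {Q:.2f}")
```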
Lateral interactions in the outer retina
Thoreson, Wallace B.; Mangel, Stuart C.
2012-01-01
Lateral interactions in the outer retina, particularly negative feedback from horizontal cells to cones and direct feed-forward input from horizontal cells to bipolar cells, play a number of important roles in early visual processing, such as generating center-surround receptive fields that enhance spatial discrimination. These circuits may also contribute to post-receptoral light adaptation and the generation of color opponency. In this review, we examine the contributions of horizontal cell feedback and feed-forward pathways to early visual processing. We begin by reviewing the properties of bipolar cell receptive fields, especially with respect to modulation of the bipolar receptive field surround by the ambient light level and to the contribution of horizontal cells to the surround. We then review evidence for and against three proposed mechanisms for negative feedback from horizontal cells to cones: 1) GABA release by horizontal cells, 2) ephaptic modulation of the cone pedicle membrane potential generated by currents flowing through hemigap junctions in horizontal cell dendrites, and 3) modulation of cone calcium currents (ICa) by changes in synaptic cleft proton levels. We also consider evidence for the presence of direct horizontal cell feed-forward input to bipolar cells and discuss a possible role for GABA at this synapse. We summarize proposed functions of horizontal cell feedback and feed-forward pathways. Finally, we examine the mechanisms and functions of two other forms of lateral interaction in the outer retina: negative feedback from horizontal cells to rods and positive feedback from horizontal cells to cones. PMID:22580106
Green, Jessica J; Boehler, Carsten N; Roberts, Kenneth C; Chen, Ling-Chia; Krebs, Ruth M; Song, Allen W; Woldorff, Marty G
2017-08-16
Visual spatial attention has been studied in humans with both electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) individually. However, due to the intrinsic limitations of each of these methods used alone, our understanding of the systems-level mechanisms underlying attentional control remains limited. Here, we examined trial-to-trial covariations of concurrently recorded EEG and fMRI in a cued visual spatial attention task in humans, which allowed delineation of both the generators and modulators of the cue-triggered event-related oscillatory brain activity underlying attentional control function. The fMRI activity in visual cortical regions contralateral to the cued direction of attention covaried positively with occipital gamma-band EEG, consistent with activation of cortical regions representing attended locations in space. In contrast, fMRI activity in ipsilateral visual cortical regions covaried inversely with occipital alpha-band oscillations, consistent with attention-related suppression of the irrelevant hemispace. Moreover, the pulvinar nucleus of the thalamus covaried with both of these spatially specific, attention-related, oscillatory EEG modulations. Because the pulvinar's neuroanatomical geometry makes it unlikely to be a direct generator of the scalp-recorded EEG, these covariational patterns appear to reflect the pulvinar's role as a regulatory control structure, sending spatially specific signals to modulate visual cortex excitability proactively. Together, these combined EEG/fMRI results illuminate the dynamically interacting cortical and subcortical processes underlying spatial attention, providing important insight not realizable using either method alone. SIGNIFICANCE STATEMENT Noninvasive recordings of changes in the brain's blood flow using functional magnetic resonance imaging and electrical activity using electroencephalography in humans have individually shown that shifting attention to a location in space produces spatially specific changes in visual cortex activity in anticipation of a stimulus. The mechanisms controlling these attention-related modulations of sensory cortex, however, are poorly understood. Here, we recorded these two complementary measures of brain activity simultaneously and examined their trial-to-trial covariations to gain insight into these attentional control mechanisms. This multi-methodological approach revealed the attention-related coordination of visual cortex modulation by the subcortical pulvinar nucleus of the thalamus while also disentangling the mechanisms underlying the attentional enhancement of relevant stimulus input and those underlying the concurrent suppression of irrelevant input. Copyright © 2017 the authors 0270-6474/17/377803-08$15.00/0.
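At its core, the trial-to-trial covariation analysis described above amounts to correlating single-trial EEG band power with single-trial BOLD amplitude in a region of interest. The schematic below uses random placeholder data; the variable names and the built-in negative coupling are assumptions for illustration, not the study's actual pipeline.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_trials = 200

# Placeholder single-trial measures; in practice these come from the concurrently
# recorded EEG (band-limited power in the cue-target interval) and from fMRI
# (single-trial BOLD amplitude in a visual-cortex or pulvinar ROI).
alpha_power = rng.normal(size=n_trials)
bold_roi = -0.4 * alpha_power + rng.normal(scale=0.9, size=n_trials)  # built-in inverse coupling

r, p = pearsonr(alpha_power, bold_roi)
print(f"trial-to-trial covariation: r = {r:.2f}, p = {p:.3g}")
# A negative r here mirrors the reported inverse covariation between ipsilateral
# BOLD and occipital alpha power; gamma-band power would be tested the same way.
```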
PBEQ-Solver for online visualization of electrostatic potential of biomolecules.
Jo, Sunhwan; Vargyas, Miklos; Vasko-Szedlar, Judit; Roux, Benoît; Im, Wonpil
2008-07-01
PBEQ-Solver provides a web-based graphical user interface to read biomolecular structures, solve the Poisson-Boltzmann (PB) equations and interactively visualize the electrostatic potential. PBEQ-Solver calculates (i) electrostatic potential and solvation free energy, (ii) protein-protein (DNA or RNA) electrostatic interaction energy and (iii) pKa of a selected titratable residue. All the calculations can be performed in both aqueous solvent and membrane environments (with a cylindrical pore in the case of membrane). PBEQ-Solver uses the PBEQ module in the biomolecular simulation program CHARMM to solve the finite-difference PB equation of molecules specified by users. Users can interactively inspect the calculated electrostatic potential on the solvent-accessible surface as well as iso-electrostatic potential contours using a novel online visualization tool based on MarvinSpace molecular visualization software, a Java applet integrated within CHARMM-GUI (http://www.charmm-gui.org). To reduce the computational time on the server, and to increase the efficiency in visualization, all the PB calculations are performed with coarse grid spacing (1.5 Å before and 1 Å after focusing). PBEQ-Solver suggests various physical parameters for PB calculations and users can modify them if necessary. PBEQ-Solver is available at http://www.charmm-gui.org/input/pbeqsolver.
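The sketch below is not the CHARMM PBEQ module; it is a toy Jacobi relaxation of the linearized Poisson-Boltzmann equation on a uniform grid with a constant dielectric, included only to illustrate what a finite-difference PB solve involves. Grid size, dielectric constant, and inverse Debye length are arbitrary.

```python
import numpy as np

# Toy Jacobi relaxation of the *linearized* Poisson-Boltzmann equation
#   laplacian(phi) - kappa^2 * phi = -4*pi*rho / eps
# with a constant dielectric. Real solvers such as PBEQ handle spatially
# varying dielectric, membranes, boundary conditions, and focusing.

n, h = 33, 1.0                     # grid points per axis, grid spacing (arbitrary units)
eps, kappa = 80.0, 0.1             # dielectric constant, inverse Debye length
rho = np.zeros((n, n, n))
rho[n // 2, n // 2, n // 2] = 1.0  # single point charge at the grid center

phi = np.zeros_like(rho)
for _ in range(500):               # fixed iteration count; real solvers test convergence
    nb = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
          np.roll(phi, 1, 1) + np.roll(phi, -1, 1) +
          np.roll(phi, 1, 2) + np.roll(phi, -1, 2))     # np.roll gives periodic boundaries,
                                                        # which keeps the sketch short
    phi = (nb + 4.0 * np.pi * h**2 * rho / eps) / (6.0 + (kappa * h)**2)

print("potential at the charge:", phi[n // 2, n // 2, n // 2])
```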
Reduced response cluster size in early visual areas explains the acuity deficit in amblyopia.
Huang, Yufeng; Feng, Lixia; Zhou, Yifeng
2017-05-03
Focal visual stimulation typically results in the activation of a large portion of the early visual cortex. This spread of activity is attributed to long-range lateral interactions. Such long-range interactions may serve to stabilize a visual representation or to simply modulate incoming signals, and any associated dysfunction in long-range activation may reduce sensitivity to visual information in conditions such as amblyopia. We sought to measure the dispersion of cortical activity following local visual stimulation in a group of patients with amblyopia and matched normal controls. Twenty adult anisometropic amblyopes and 10 normal controls participated in this study. Using a multifocal stimulation, we simultaneously measured cluster sizes to multiple stimulation points in the visual field. We found that the functional MRI (fMRI) response cluster size that corresponded to the fellow eye was significantly larger than that corresponding to the amblyopic eye and that the fMRI response cluster size at the two more central retinotopic locations correlated with amblyopia acuity deficit. Our results suggest that the amblyopic visual cortex has diminished long-range communication, as evidenced by a significantly smaller cluster of activity as measured with fMRI. These results have important implications for models of amblyopia and approaches to treatment.
Balconi, Michela; Vanutelli, Maria Elide
2016-01-01
The brain activity, considered in its hemodynamic (optical imaging: functional Near-Infrared Spectroscopy, fNIRS) and electrophysiological components (event-related potentials, ERPs, N200) was monitored when subjects observed (visual stimulation, V) or observed and heard (visual + auditory stimulation, VU) situations which represented inter-species (human-animal) interactions, with an emotional positive (cooperative) or negative (uncooperative) content. In addition, the cortical lateralization effect (more left or right dorsolateral prefrontal cortex, DLPFC) was explored. Both ERP and fNIRS showed significant effects due to emotional interactions which were discussed at light of cross-modal integration effects. The significance of inter-species effect for the emotional behavior was considered. In addition, hemodynamic and EEG consonant results and their value as integrated measures were discussed at light of valence effect. PMID:26976052
Altvater-Mackensen, Nicole; Grossmann, Tobias
2015-01-01
Infants' language exposure largely involves face-to-face interactions providing acoustic and visual speech cues but also social cues that might foster language learning. Yet, both audiovisual speech information and social information have so far received little attention in research on infants' early language development. Using a preferential looking paradigm, 44 German 6-month olds' ability to detect mismatches between concurrently presented auditory and visual native vowels was tested. Outcomes were related to mothers' speech style and interactive behavior assessed during free play with their infant, and to infant-specific factors assessed through a questionnaire. Results show that mothers' and infants' social behavior modulated infants' preference for matching audiovisual speech. Moreover, infants' audiovisual speech perception correlated with later vocabulary size, suggesting a lasting effect on language development. © 2014 The Authors. Child Development © 2014 Society for Research in Child Development, Inc.
Devlin, Joseph C; Battaglia, Thomas; Blaser, Martin J; Ruggles, Kelly V
2018-06-25
Exploration of large data sets, such as shotgun metagenomic sequence or expression data, by biomedical experts and medical professionals remains a major bottleneck in the scientific discovery process. Although tools for this purpose exist for 16S ribosomal RNA sequencing analysis, there is a growing but still insufficient number of user-friendly interactive visualization workflows for easy data exploration and figure generation. The development of such platforms for this purpose is necessary to accelerate and streamline microbiome laboratory research. We developed the Workflow Hub for Automated Metagenomic Exploration (WHAM!) as a web-based interactive tool capable of user-directed data visualization and statistical analysis of annotated shotgun metagenomic and metatranscriptomic data sets. WHAM! includes exploratory and hypothesis-based gene and taxa search modules for visualizing differences in microbial taxa and gene family expression across experimental groups, and for creating publication quality figures without the need for a command line interface or in-house bioinformatics. WHAM! is an interactive and customizable tool for downstream metagenomic and metatranscriptomic analysis providing a user-friendly interface allowing for easy data exploration by microbiome and ecological experts to facilitate discovery in multi-dimensional and large-scale data sets.
Lee, S W; Jeong, B S; Choi, J; Kim, J-W
2015-01-01
Men tend to have greater positive responses than women to explicit visual erotic stimuli (EVES). However, it remains unclear, which brain network makes men more sensitive to EVES and which factors contribute to the brain network activity. In this study, we aimed to assess the effect of sex difference on brain connectivity patterns by EVES. We also investigated the association of testosterone with brain connection that showed the effects of sex difference. During functional magnetic resonance imaging scans, 14 males and 14 females were asked to see alternating blocks of pictures that were either erotic or non-erotic. Psychophysiological interaction analysis was performed to investigate the functional connectivity of the nucleus accumbens (NA) as it related to EVES. Men showed significantly greater EVES-specific functional connection between the right NA and the right lateral occipital cortex (LOC). In addition, the right NA and the right LOC network activity was positively correlated with the plasma testosterone level in men. Our results suggest that the reason men are sensitive to EVES is the increased interaction in the visual reward networks, which is modulated by their plasma testosterone level.
Paneri, Sofia; Gregoriou, Georgia G.
2017-01-01
The ability to select information that is relevant to current behavioral goals is the hallmark of voluntary attention and an essential part of our cognition. Attention tasks are a prime example to study at the neuronal level, how task related information can be selectively processed in the brain while irrelevant information is filtered out. Whereas, numerous studies have focused on elucidating the mechanisms of visual attention at the single neuron and population level in the visual cortices, considerably less work has been devoted to deciphering the distinct contribution of higher-order brain areas, which are known to be critical for the employment of attention. Among these areas, the prefrontal cortex (PFC) has long been considered a source of top-down signals that bias selection in early visual areas in favor of the attended features. Here, we review recent experimental data that support the role of PFC in attention. We examine the existing evidence for functional specialization within PFC and we discuss how long-range interactions between PFC subregions and posterior visual areas may be implemented in the brain and contribute to the attentional modulation of different measures of neural activity in visual cortices. PMID:29033784
Touch influences perceived gloss
Adams, Wendy J.; Kerrigan, Iona S.; Graf, Erich W.
2016-01-01
Identifying an object’s material properties supports recognition and action planning: we grasp objects according to how heavy, hard or slippery we expect them to be. Visual cues to material qualities such as gloss have recently received attention, but how they interact with haptic (touch) information has been largely overlooked. Here, we show that touch modulates gloss perception: objects that feel slippery are perceived as glossier (more shiny). Participants explored virtual objects that varied in look and feel. A discrimination paradigm (Experiment 1) revealed that observers integrate visual gloss with haptic information. Observers could easily detect an increase in glossiness when it was paired with a decrease in friction. In contrast, increased glossiness coupled with decreased slipperiness produced a small perceptual change: the visual and haptic changes counteracted each other. Subjective ratings (Experiment 2) reflected a similar interaction – slippery objects were rated as glossier and vice versa. The sensory system treats visual gloss and haptic friction as correlated cues to surface material. Although friction is not a perfect predictor of gloss, the visual system appears to know and use a probabilistic relationship between these variables to bias perception – a sensible strategy given the ambiguity of visual cues to gloss. PMID:26915492
Cortical interactions in vision and awareness: hierarchies in reverse.
Juan, Chi-Hung; Campana, Gianluca; Walsh, Vincent
2004-01-01
The anatomical connections between visual areas can be organized in 'feedforward', 'feedback' or 'horizontal' laminar patterns. We report here four experiments that test the function of some of the feedback projections in visual cortex. Projections from V5 to V1 have been suggested to be important in visual awareness, and in the first experiment we show this to be the case in the blindsight patient GY. This demonstration is replicated, in principle, in the second experiment and we also show the timing of the V5-V1 interaction to correspond to findings from single unit physiology. In the third experiment we show that V1 is important for stimulus detection in visual search arrays and that the timing of V1 interference with TMS is late (up to 240 ms after the onset of the visual array). Finally we report an experiment showing that the parietal cortex is not involved in visual motion priming, whereas V5 is, suggesting that the parietal cortex does not modulate V5 in this task. We interpret the data in terms of Bullier's recent physiological recordings and Ahissar and Hochstein's reverse hierarchy theory of vision.
Lieberman, Gillian; Abramson, Richard; Volkan, Kevin; McArdle, Patricia J
2002-01-01
This study compared the educational effectiveness of an interactive tutorial with that of interactive computer-assisted instruction (CAI) and determined the effects of personal preference, learning style, and level of training. Fifty-four medical students and four radiology residents were prospectively, randomly assigned to receive instruction from different sections of an interactive tutorial and an interactive CAI module. Participants took tests of factual knowledge at the beginning and end of the instruction and a test of visual diagnosis at the end. They completed questionnaires to evaluate their preferred learning styles objectively and to elicit their subjective attitudes toward the two formats. Mean test scores of the tutorial and CAI groups were compared by means of analysis of covariance and two-tailed repeated-measures F test. Both the tutorial and CAI groups demonstrated significant improvement in posttest scores (P < .01 and P < .01, respectively) with the tutorial group's mean posttest score marginally but significantly higher (32.84 vs 28.13, P < .001). There were no significant interaction effects with participants' year of training (P = .845), objectively evaluated preferred learning style (P = .312), subjectively elicited attitude toward learning with CAI (P = .703), or visual diagnosis score (tutorial, 7.61; CD-ROM, 7.75; P = .79). Interactive tutorial and optimal CAI are both effective instructional formats. The tutorial was marginally but significantly more effective at teaching factual knowledge, an effect unrelated to students' year of training, learning style, or stated enjoyment of CAI. The superiority of the tutorial is expected to increase when it is compared with commercially expedient CAI modules.
Accelerating the use of molecular modeling in the high school classroom with VMD Lite.
Lundquist, Karl; Herndon, Conner; Harty, Tyson H; Gumbart, James C
2016-01-01
It is often difficult for students to develop an intuition about molecular processes, which occur in a realm far different from day-to-day life. For example, thermal fluctuations take on hurricane-like proportions at the molecular scale. Students need a way to visualize realistic depictions of molecular processes to appreciate them. To this end, we have developed a simplified graphical interface to the widely used molecular visualization and analysis tool Visual Molecular Dynamics (VMD) called VMD lite. We demonstrate the use of VMD lite through a module on diffusion and the hydrophobic effect as they relate to membrane formation. Trajectories from molecular dynamics simulations, which students can interact with freely, illustrate the dynamical behavior of lipid molecules and water. VMD lite was tested by ∼70 students with overall positive reception. Remaining deficiencies in conceptual understanding were noted, however, and the module has been revised in response. © 2016 The International Union of Biochemistry and Molecular Biology.
Technology-Enhanced Learning in Science (TELS)
NASA Astrophysics Data System (ADS)
Linn, Marcia
2006-12-01
The overall research question addressed by the NSF-funded Technology-Enhanced Learning in Science (TELS) Center is whether interactive scientific visualizations embedded in high quality instructional units can be used to increase pre-college student learning in science. The research draws on the knowledge integration framework to guide the design of instructional modules, professional development activities, and assessment activities. This talk reports on results from the first year where 50 teachers taught one of the 12 TELS modules in over 200 classes in 16 diverse schools. Assessments scored with the knowledge integration rubric showed that students made progress in learning complex physics topics such as electricity, mechanics, and thermodynamics. Teachers encountered primarily technological obstacles that the research team was able to address prior to implementation. Powerful scientific visualizations required extensive instructional supports to communicate to students. Currently, TELS is refining the modules, professional development, and assessments based on evidence from the first year. Preliminary design principles intended to help research teams build on the findings will be presented for audience feedback and discussion.
Task-dependent recurrent dynamics in visual cortex
Tajima, Satohiro; Koida, Kowa; Tajima, Chihiro I; Suzuki, Hideyuki; Aihara, Kazuyuki; Komatsu, Hidehiko
2017-01-01
The capacity for flexible sensory-action association in animals has been related to context-dependent attractor dynamics outside the sensory cortices. Here, we report a line of evidence that flexibly modulated attractor dynamics during task switching are already present in the higher visual cortex in macaque monkeys. With a nonlinear decoding approach, we can extract the particular aspect of the neural population response that reflects the task-induced emergence of bistable attractor dynamics in a neural population, which could be obscured by standard unsupervised dimensionality reductions such as PCA. The dynamical modulation selectively increases the information relevant to task demands, indicating that such modulation is beneficial for perceptual decisions. A computational model that features nonlinear recurrent interaction among neurons with a task-dependent background input replicates the key properties observed in the experimental data. These results suggest that the context-dependent attractor dynamics involving the sensory cortex can underlie flexible perceptual abilities. DOI: http://dx.doi.org/10.7554/eLife.26868.001 PMID:28737487
Enhanced audio-visual interactions in the auditory cortex of elderly cochlear-implant users.
Schierholz, Irina; Finke, Mareike; Schulte, Svenja; Hauthal, Nadine; Kantzke, Christoph; Rach, Stefan; Büchner, Andreas; Dengler, Reinhard; Sandmann, Pascale
2015-10-01
Auditory deprivation and the restoration of hearing via a cochlear implant (CI) can induce functional plasticity in auditory cortical areas. How these plastic changes affect the ability to integrate combined auditory (A) and visual (V) information is not yet well understood. In the present study, we used electroencephalography (EEG) to examine whether age, temporary deafness and altered sensory experience with a CI can affect audio-visual (AV) interactions in post-lingually deafened CI users. Young and elderly CI users and age-matched NH listeners performed a speeded response task on basic auditory, visual and audio-visual stimuli. Regarding the behavioral results, a redundant signals effect, that is, faster response times to cross-modal (AV) than to both of the two modality-specific stimuli (A, V), was revealed for all groups of participants. Moreover, in all four groups, we found evidence for audio-visual integration. Regarding event-related responses (ERPs), we observed a more pronounced visual modulation of the cortical auditory response at N1 latency (approximately 100 ms after stimulus onset) in the elderly CI users when compared with young CI users and elderly NH listeners. Thus, elderly CI users showed enhanced audio-visual binding which may be a consequence of compensatory strategies developed due to temporary deafness and/or degraded sensory input after implantation. These results indicate that the combination of aging, sensory deprivation and CI facilitates the coupling between the auditory and the visual modality. We suggest that this enhancement in multisensory interactions could be used to optimize auditory rehabilitation, especially in elderly CI users, by the application of strong audio-visually based rehabilitation strategies after implant switch-on. Copyright © 2015 Elsevier B.V. All rights reserved.
Task set induces dynamic reallocation of resources in visual short-term memory.
Sheremata, Summer L; Shomstein, Sarah
2017-08-01
Successful interaction with the environment requires the ability to flexibly allocate resources to different locations in the visual field. Recent evidence suggests that visual short-term memory (VSTM) resources are distributed asymmetrically across the visual field based upon task demands. Here, we propose that context, rather than the stimulus itself, determines asymmetrical distribution of VSTM resources. To test whether context modulates the reallocation of resources to the right visual field, task set, defined by memory-load, was manipulated to influence visual short-term memory performance. Performance was measured for single-feature objects embedded within predominantly single- or two-feature memory blocks. Therefore, context was varied to determine whether task set directly predicts changes in visual field biases. In accord with the dynamic reallocation of resources hypothesis, task set, rather than aspects of the physical stimulus, drove improvements in performance in the right- visual field. Our results show, for the first time, that preparation for upcoming memory demands directly determines how resources are allocated across the visual field.
Visual communication stimulates reproduction in Nile tilapia, Oreochromis niloticus (L.).
Castro, A L S; Gonçalves-de-Freitas, E; Volpato, G L; Oliveira, C
2009-04-01
Reproductive fish behavior is affected by male-female interactions that stimulate physiological responses such as hormonal release and gonad development. During male-female interactions, visual and chemical communication can modulate fish reproduction. The aim of the present study was to test the effect of visual and chemical male-female interaction on the gonad development and reproductive behavior of the cichlid fish Nile tilapia, Oreochromis niloticus (L.). Fifty-six pairs were studied after being maintained for 5 days under one of the four conditions (N = 14 for each condition): 1) visual contact (V); 2) chemical contact (Ch); 3) chemical and visual contact (Ch+V); 4) no sensory contact (Iso) - males and females isolated. We compared the reproductive behavior (nesting, courtship and spawning) and gonadosomatic index (GSI) of pairs of fish under all four conditions. Visual communication enhanced the frequency of courtship in males (mean +/- SEM; V: 24.79 +/- 3.30, Ch+V: 20.74 +/- 3.09, Ch: 0.1 +/- 0.07, Iso: 4.68 +/- 1.26 events/30 min; P < 0.05, two-way ANOVA with LSD post hoc test), induced spawning in females (3 spawning in V and also 3 in Ch+V condition), and increased GSI in males (mean +/- SEM; V: 1.39 +/- 0.08, Ch+V: 1.21 +/- 0.08, Ch: 1.04 +/- 0.07, Iso: 0.82 +/- 0.07%; P < 0.05, two-way ANOVA with LSD post hoc test). Chemical communication did not affect the reproductive behavior of pairs nor did it enhance the effects of visual contact. Therefore, male-female visual communication is an effective cue, which stimulates reproduction among pairs of Nile tilapia.
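A hedged sketch of the kind of two-way ANOVA reported above (visual contact by chemical contact on courtship frequency), using statsmodels on randomly generated placeholder data; the effect sizes are invented to mimic the reported pattern, and the LSD post hoc comparisons are not reproduced.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(1)

# Placeholder data: 14 pairs per condition, factors visual (yes/no) and chemical (yes/no).
rows = []
for v in ("no", "yes"):
    for c in ("no", "yes"):
        base = 20.0 if v == "yes" else 3.0   # invented visual-contact effect for illustration
        rows += [{"visual": v, "chemical": c,
                  "courtship": max(0.0, rng.normal(base, 4.0))} for _ in range(14)]
df = pd.DataFrame(rows)

model = ols("courtship ~ C(visual) * C(chemical)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))       # main effects and the interaction term
```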
Network interactions underlying mirror feedback in stroke: A dynamic causal modeling study.
Saleh, Soha; Yarossi, Mathew; Manuweera, Thushini; Adamovich, Sergei; Tunik, Eugene
2017-01-01
Mirror visual feedback (MVF) is potentially a powerful tool to facilitate recovery of disordered movement and stimulate activation of under-active brain areas due to stroke. The neural mechanisms underlying MVF have therefore been a focus of recent inquiry. Although it is known that sensorimotor areas can be activated via mirror feedback, the network interactions driving this effect remain unknown. The aim of the current study was to fill this gap by using dynamic causal modeling to test the interactions between regions in the frontal and parietal lobes that may be important for modulating the activation of the ipsilesional motor cortex during mirror visual feedback of unaffected hand movement in stroke patients. Our intent was to distinguish between two theoretical neural mechanisms that might mediate ipsilateral activation in response to mirror-feedback: transfer of information between bilateral motor cortices versus recruitment of regions comprising an action observation network which in turn modulate the motor cortex. In an event-related fMRI design, fourteen chronic stroke subjects performed goal-directed finger flexion movements with their unaffected hand while observing real-time visual feedback of the corresponding (veridical) or opposite (mirror) hand in virtual reality. Among 30 plausible network models that were tested, the winning model revealed significant mirror feedback-based modulation of the ipsilesional motor cortex arising from the contralesional parietal cortex, in a region along the rostral extent of the intraparietal sulcus. No winning model was identified for the veridical feedback condition. We discuss our findings in the context of supporting the latter hypothesis, that mirror feedback-based activation of motor cortex may be attributed to engagement of a contralateral (contralesional) action observation network. These findings may have important implications for identifying putative cortical areas, which may be targeted with non-invasive brain stimulation as a means of potentiating the effects of mirror training.
Methods and Apparatus for Autonomous Robotic Control
NASA Technical Reports Server (NTRS)
Gorshechnikov, Anatoly (Inventor); Livitz, Gennady (Inventor); Versace, Massimiliano (Inventor); Palma, Jesse (Inventor)
2017-01-01
Sensory processing of visual, auditory, and other sensor information (e.g., visual imagery, LIDAR, RADAR) is conventionally based on "stovepiped," or isolated processing, with little interactions between modules. Biological systems, on the other hand, fuse multi-sensory information to identify nearby objects of interest more quickly, more efficiently, and with higher signal-to-noise ratios. Similarly, examples of the OpenSense technology disclosed herein use neurally inspired processing to identify and locate objects in a robot's environment. This enables the robot to navigate its environment more quickly and with lower computational and power requirements.
ePlant and the 3D data display initiative: integrative systems biology on the world wide web.
Fucile, Geoffrey; Di Biase, David; Nahal, Hardeep; La, Garon; Khodabandeh, Shokoufeh; Chen, Yani; Easley, Kante; Christendat, Dinesh; Kelley, Lawrence; Provart, Nicholas J
2011-01-10
Visualization tools for biological data are often limited in their ability to interactively integrate data at multiple scales. These computational tools are also typically limited by two-dimensional displays and programmatic implementations that require separate configurations for each of the user's computing devices and recompilation for functional expansion. Towards overcoming these limitations we have developed "ePlant" (http://bar.utoronto.ca/eplant) - a suite of open-source world wide web-based tools for the visualization of large-scale data sets from the model organism Arabidopsis thaliana. These tools display data spanning multiple biological scales on interactive three-dimensional models. Currently, ePlant consists of the following modules: a sequence conservation explorer that includes homology relationships and single nucleotide polymorphism data, a protein structure model explorer, a molecular interaction network explorer, a gene product subcellular localization explorer, and a gene expression pattern explorer. The ePlant's protein structure explorer module represents experimentally determined and theoretical structures covering >70% of the Arabidopsis proteome. The ePlant framework is accessed entirely through a web browser, and is therefore platform-independent. It can be applied to any model organism. To facilitate the development of three-dimensional displays of biological data on the world wide web we have established the "3D Data Display Initiative" (http://3ddi.org).
Chen, Chih-Yang; Tian, Xiaoguang; Idrees, Saad; Münch, Thomas A.
2017-01-01
Microsaccades occur during gaze fixation to correct for miniscule foveal motor errors. The mechanisms governing such fine oculomotor control are still not fully understood. In this study, we explored microsaccade control by analyzing the impacts of transient visual stimuli on these movements’ kinematics. We found that such kinematics can be altered in systematic ways depending on the timing and spatial geometry of visual transients relative to the movement goals. In two male rhesus macaques, we presented peripheral or foveal visual transients during an otherwise stable period of fixation. Such transients resulted in well-known reductions in microsaccade frequency, and our goal was to investigate whether microsaccade kinematics would additionally be altered. We found that both microsaccade timing and amplitude were modulated by the visual transients, and in predictable manners by these transients’ timing and geometry. Interestingly, modulations in the peak velocity of the same movements were not proportional to the observed amplitude modulations, suggesting a violation of the well-known “main sequence” relationship between microsaccade amplitude and peak velocity. We hypothesize that visual stimulation during movement preparation affects not only the saccadic “Go” system driving eye movements but also a “Pause” system inhibiting them. If the Pause system happens to be already turned off despite the new visual input, movement kinematics can be altered by the readout of additional visually evoked spikes in the Go system coding for the flash location. Our results demonstrate precise control over individual microscopic saccades and provide testable hypotheses for mechanisms of saccade control in general. NEW & NOTEWORTHY Microsaccadic eye movements play an important role in several aspects of visual perception and cognition. However, the mechanisms for microsaccade control are still not fully understood. We found that microsaccade kinematics can be altered in a systematic manner by visual transients, revealing a previously unappreciated and exquisite level of control by the oculomotor system of even the smallest saccades. Our results suggest precise temporal interaction between visual, motor, and inhibitory signals in microsaccade control. PMID:28202573
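A minimal sketch of how saccade amplitude and peak velocity, the two quantities entering the main-sequence relationship discussed above, can be extracted from an eye-position trace with a simple velocity threshold. The synthetic trace, sampling rate, and threshold are illustrative assumptions, not the study's detection procedure.

```python
import numpy as np

fs = 1000.0                                   # sampling rate (Hz), illustrative
t = np.arange(0, 1.0, 1.0 / fs)

# Synthetic horizontal eye position (deg): fixation, then a ~0.5 deg microsaccade at 0.5 s.
pos = np.zeros_like(t)
pos[t >= 0.5] = 0.5
pos = np.convolve(pos, np.ones(15) / 15, mode="same")        # smooth the step to mimic saccade dynamics
pos += 0.002 * np.random.default_rng(2).normal(size=t.size)  # small measurement noise

vel = np.gradient(pos, 1.0 / fs)              # velocity in deg/s
threshold = 10.0                              # fixed velocity threshold (deg/s), illustrative
moving = np.abs(vel) > threshold
onset = np.argmax(moving)                     # first supra-threshold sample
offset = len(moving) - np.argmax(moving[::-1]) - 1  # last supra-threshold sample

amplitude = abs(pos[offset] - pos[onset])     # deg
peak_velocity = np.abs(vel[onset:offset + 1]).max()

# The "main sequence" is the lawful increase of peak velocity with amplitude;
# a violation is a change in peak velocity not matched by the amplitude change.
print(f"amplitude = {amplitude:.2f} deg, peak velocity = {peak_velocity:.0f} deg/s")
```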
The Multisensory Attentional Consequences of Tool Use: A Functional Magnetic Resonance Imaging Study
Holmes, Nicholas P.; Spence, Charles; Hansen, Peter C.; Mackay, Clare E.; Calvert, Gemma A.
2008-01-01
Background Tool use in humans requires that multisensory information is integrated across different locations, from objects seen to be distant from the hand, but felt indirectly at the hand via the tool. We tested the hypothesis that using a simple tool to perceive vibrotactile stimuli results in the enhanced processing of visual stimuli presented at the distal, functional part of the tool. Such a finding would be consistent with a shift of spatial attention to the location where the tool is used. Methodology/Principal Findings We tested this hypothesis by scanning healthy human participants' brains using functional magnetic resonance imaging, while they used a simple tool to discriminate between target vibrations, accompanied by congruent or incongruent visual distractors, on the same or opposite side to the tool. The attentional hypothesis was supported: BOLD response in occipital cortex, particularly in the right hemisphere lingual gyrus, varied significantly as a function of tool position, increasing contralaterally, and decreasing ipsilaterally to the tool. Furthermore, these modulations occurred despite the fact that participants were repeatedly instructed to ignore the visual stimuli, to respond only to the vibrotactile stimuli, and to maintain visual fixation centrally. In addition, the magnitude of multisensory (visual-vibrotactile) interactions in participants' behavioural responses significantly predicted the BOLD response in occipital cortical areas that were also modulated as a function of both visual stimulus position and tool position. Conclusions/Significance These results show that using a simple tool to locate and to perceive vibrotactile stimuli is accompanied by a shift of spatial attention to the location where the functional part of the tool is used, resulting in enhanced processing of visual stimuli at that location, and decreased processing at other locations. This was most clearly observed in the right hemisphere lingual gyrus. Such modulations of visual processing may reflect the functional importance of visuospatial information during human tool use. PMID:18958150
Space station architectural elements model study. Space station human factors research review
NASA Technical Reports Server (NTRS)
Taylor, Thomas C.; Khan, Eyoub; Spencer, John; Rocha, Carlos; Cliffton, Ethan Wilson
1987-01-01
Presentation visuals and an extended abstract represent a study to explore and analyze the interaction of major utilities distribution, generic workstation, and spatial composition of the SPACEHAB space station module. Issues addressed include packing densities vs. circulation, efficiency of packing vs. standardization, flexibility vs. diversity, and composition of interior volume as space for living vs. residual negative volume. The result of the study is expected to be a series of observations and preliminary evaluation criteria which focus on the productive living environment for a module in orbit.
Pavlidou, Anastasia; Schnitzler, Alfons; Lange, Joachim
2014-05-01
The neural correlates of action recognition have been widely studied in visual and sensorimotor areas of the human brain. However, the role of neuronal oscillations involved during the process of action recognition remains unclear. Here, we were interested in how the plausibility of an action modulates neuronal oscillations in visual and sensorimotor areas. Subjects viewed point-light displays (PLDs) of biomechanically plausible and implausible versions of the same actions. Using magnetoencephalography (MEG), we examined dynamic changes of oscillatory activity during these action recognition processes. While both actions elicited oscillatory activity in visual and sensorimotor areas in several frequency bands, a significant difference was confined to the beta-band (∼20 Hz). An increase of power for plausible actions was observed in left temporal, parieto-occipital and sensorimotor areas of the brain, in the beta-band in successive order between 1650 and 2650 msec. These distinct spatio-temporal beta-band profiles suggest that the action recognition process is modulated by the degree of biomechanical plausibility of the action, and that spectral power in the beta-band may provide a functional interaction between visual and sensorimotor areas in humans. Copyright © 2014 Elsevier Ltd. All rights reserved.
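A minimal sketch of band-limited (beta, roughly 15-25 Hz) power estimation via band-pass filtering and the Hilbert envelope, applied to a synthetic trace with a 20 Hz burst placed in the reported 1650-2650 msec window. This is far simpler than the MEG source-level analysis described above and is included only to show what "beta-band power" means operationally; sampling rate and amplitudes are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 600.0                                   # sampling rate (Hz), illustrative
t = np.arange(0, 3.0, 1.0 / fs)

# Synthetic sensor trace: broadband noise plus a 20 Hz burst between 1.65 s and 2.65 s.
rng = np.random.default_rng(3)
signal = rng.normal(scale=1.0, size=t.size)
burst = (t > 1.65) & (t < 2.65)
signal[burst] += 2.0 * np.sin(2 * np.pi * 20.0 * t[burst])

b, a = butter(4, [15.0, 25.0], btype="bandpass", fs=fs)   # beta band around ~20 Hz
beta = filtfilt(b, a, signal)
beta_power = np.abs(hilbert(beta)) ** 2                   # instantaneous beta-band power

print("mean beta power in the burst window:", beta_power[burst].mean())
print("mean beta power elsewhere:          ", beta_power[~burst].mean())
```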
Functional Interaction Network Construction and Analysis for Disease Discovery.
Wu, Guanming; Haw, Robin
2017-01-01
Network-based approaches project seemingly unrelated genes or proteins onto a large-scale network context, therefore providing a holistic visualization and analysis platform for genomic data generated from high-throughput experiments, reducing the dimensionality of data via using network modules and increasing the statistic analysis power. Based on the Reactome database, the most popular and comprehensive open-source biological pathway knowledgebase, we have developed a highly reliable protein functional interaction network covering around 60 % of total human genes and an app called ReactomeFIViz for Cytoscape, the most popular biological network visualization and analysis platform. In this chapter, we describe the detailed procedures on how this functional interaction network is constructed by integrating multiple external data sources, extracting functional interactions from human curated pathway databases, building a machine learning classifier called a Naïve Bayesian Classifier, predicting interactions based on the trained Naïve Bayesian Classifier, and finally constructing the functional interaction database. We also provide an example on how to use ReactomeFIViz for performing network-based data analysis for a list of genes.
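The classifier-training step described above can be sketched with scikit-learn's BernoulliNB standing in for the custom Naïve Bayesian Classifier; the binary evidence features and labels below are placeholders, not the Reactome training data.

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB

rng = np.random.default_rng(4)

# Placeholder binary evidence features per protein pair, e.g.
# [shares a curated pathway, co-expressed, physical PPI reported, shared GO term].
n_pairs = 1000
X = rng.integers(0, 2, size=(n_pairs, 4))
# Placeholder labels: 1 = functional interaction; generated so that pairs with
# more supporting evidence are more likely to be positive.
y = (X.sum(axis=1) + rng.normal(scale=0.8, size=n_pairs) > 2).astype(int)

clf = BernoulliNB().fit(X, y)

candidate = np.array([[1, 1, 0, 1]])          # a new pair with three types of supporting evidence
print("P(functional interaction):", clf.predict_proba(candidate)[0, 1])
```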
atBioNet--an integrated network analysis tool for genomics and biomarker discovery.
Ding, Yijun; Chen, Minjun; Liu, Zhichao; Ding, Don; Ye, Yanbin; Zhang, Min; Kelly, Reagan; Guo, Li; Su, Zhenqiang; Harris, Stephen C; Qian, Feng; Ge, Weigong; Fang, Hong; Xu, Xiaowei; Tong, Weida
2012-07-20
Large amounts of mammalian protein-protein interaction (PPI) data have been generated and are available for public use. From a systems biology perspective, protein/gene interactions encode the key mechanisms distinguishing disease and health, and such mechanisms can be uncovered through network analysis. An effective network analysis tool should integrate different content-specific PPI databases into a comprehensive network format with a user-friendly platform to identify key functional modules/pathways and the underlying mechanisms of disease and toxicity. atBioNet integrates seven publicly available PPI databases into a network-specific knowledge base. Knowledge expansion is achieved by expanding a user-supplied protein/gene list with interactions from its integrated PPI network. The statistically significant functional modules are determined by applying a fast network-clustering algorithm (SCAN: a Structural Clustering Algorithm for Networks). The functional modules can be visualized either separately or together in the context of the whole network. Integration of pathway information enables enrichment analysis and assessment of the biological function of modules. Three case studies are presented using publicly available disease gene signatures as a basis to discover new biomarkers for acute leukemia, systemic lupus erythematosus, and breast cancer. The results demonstrated that atBioNet can not only identify functional modules and pathways related to the studied diseases, but this information can also be used to hypothesize novel biomarkers for future analysis. atBioNet is a free web-based network analysis tool that provides a systematic insight into protein/gene interactions through examining significant functional modules. The identified functional modules are useful for determining underlying mechanisms of disease and biomarker discovery. It can be accessed at: http://www.fda.gov/ScienceResearch/BioinformaticsTools/ucm285284.htm.
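The pathway enrichment step mentioned above is commonly implemented as a hypergeometric test on the overlap between an identified module and a pathway gene set; a minimal sketch with placeholder counts follows.

```python
from scipy.stats import hypergeom

# Placeholder counts for one pathway:
M = 20000   # genes in the background (e.g., all annotated genes)
n = 150     # background genes annotated to this pathway
N = 40      # genes in the identified functional module
k = 8       # module genes that fall in the pathway

# Probability of observing at least k pathway genes in the module by chance (hypergeometric tail).
p_value = hypergeom.sf(k - 1, M, n, N)
print(f"enrichment p-value: {p_value:.2e}")
# In practice this is repeated for every pathway and corrected for multiple testing.
```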
Raudies, Florian; Hasselmo, Michael E.
2015-01-01
Firing fields of grid cells in medial entorhinal cortex show compression or expansion after manipulations of the location of environmental barriers. This compression or expansion could be selective for individual grid cell modules with particular properties of spatial scaling. We present a model for differences in the response of modules to barrier location that arise from different mechanisms for the influence of visual features on the computation of location that drives grid cell firing patterns. These differences could arise from differences in the position of visual features within the visual field. When location was computed from the movement of visual features on the ground plane (optic flow) in the ventral visual field, this resulted in grid cell spatial firing that was not sensitive to barrier location in modules modeled with small spacing between grid cell firing fields. In contrast, when location was computed from static visual features on walls of barriers, i.e. in the more dorsal visual field, this resulted in grid cell spatial firing that compressed or expanded based on the barrier locations in modules modeled with large spacing between grid cell firing fields. This indicates that different grid cell modules might have differential properties for computing location based on visual cues, or the spatial radius of sensitivity to visual cues might differ between modules. PMID:26584432
ERIC Educational Resources Information Center
Osman, Kamisah; Lee, Tien Tien
2014-01-01
The Electrochemistry topic is found to be difficult to learn due to its abstract concepts involving macroscopic, microscopic, and symbolic representation levels. Studies have shown that animation and simulation using information and communication technology (ICT) can help students to visualize and hence enhance their understanding in learning…
ERIC Educational Resources Information Center
Lee, Tien Tien; Osman, Kamisah
2011-01-01
Electrochemistry is found to be a difficult topic to learn due to its abstract concepts that involve the macroscopic, microscopic and symbolic representation levels. Research showed that animation and simulation using Information and Communication Technology (ICT) can help students to visualize and hence enhance students' understanding in learning…
EMSA Analysis of DNA Binding By Rgg Proteins
LaSarre, Breah; Federle, Michael J.
2016-01-01
In bacteria, interaction of various proteins with DNA is essential for the regulation of specific target gene expression. Electrophoretic mobility shift assay (EMSA) is an in vitro approach allowing for the visualization of these protein-DNA interactions. Rgg proteins comprise a family of transcriptional regulators widespread among the Firmicutes. Some of these proteins function independently to regulate target gene expression, while others have now been demonstrated to function as effectors of cell-to-cell communication, having regulatory activities that are modulated via direct interaction with small signaling peptides. EMSA analysis can be used to assess DNA binding of either type of Rgg protein. EMSA analysis of Rgg protein activity has facilitated in vitro confirmation of regulatory targets, identification of precise DNA binding sites via DNA probe mutagenesis, and characterization of the mechanism by which some cognate signaling peptides modulate Rgg protein function (e.g. interruption of DNA-binding in some cases). PMID:27430004
Visual gene developer: a fully programmable bioinformatics software for synthetic gene optimization.
Jung, Sang-Kyu; McDonald, Karen
2011-08-16
Direct gene synthesis is becoming more popular owing to decreases in gene synthesis pricing. Compared with using natural genes, gene synthesis provides a good opportunity to optimize gene sequence for specific applications. In order to facilitate gene optimization, we have developed a stand-alone software called Visual Gene Developer. The software not only provides general functions for gene analysis and optimization along with an interactive user-friendly interface, but also includes unique features such as programming capability, dedicated mRNA secondary structure prediction, artificial neural network modeling, network & multi-threaded computing, and user-accessible programming modules. The software allows a user to analyze and optimize a sequence using main menu functions or specialized module windows. Alternatively, gene optimization can be initiated by designing a gene construct and configuring an optimization strategy. A user can choose several predefined or user-defined algorithms to design a complicated strategy. The software provides expandable functionality as platform software supporting module development using popular script languages such as VBScript and JScript in the software programming environment. Visual Gene Developer is useful for both researchers who want to quickly analyze and optimize genes, and those who are interested in developing and testing new algorithms in bioinformatics. The software is available for free download at http://www.visualgenedeveloper.net.
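As a toy illustration of one kind of gene-optimization strategy that such software automates (not Visual Gene Developer's own algorithm), the sketch below back-translates a protein sequence by picking the most frequent synonymous codon from an assumed usage table; the table entries are placeholders, not a real organism's codon frequencies.

```python
# Toy "most-frequent codon" back-translation; usage frequencies are placeholders,
# not a real organism's codon table, and this is not Visual Gene Developer's algorithm.
CODON_USAGE = {
    "M": {"ATG": 1.00},
    "K": {"AAA": 0.43, "AAG": 0.57},
    "L": {"CTG": 0.40, "CTC": 0.20, "CTT": 0.13, "TTG": 0.12, "TTA": 0.08, "CTA": 0.07},
    "*": {"TGA": 0.47, "TAA": 0.30, "TAG": 0.23},
}

def optimize(protein: str) -> str:
    """Return a DNA sequence using the most frequent codon for each residue."""
    return "".join(max(CODON_USAGE[aa], key=CODON_USAGE[aa].get) for aa in protein)

print(optimize("MKL*"))  # -> ATGAAGCTGTGA
```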
CytoCluster: A Cytoscape Plugin for Cluster Analysis and Visualization of Biological Networks.
Li, Min; Li, Dongyan; Tang, Yu; Wu, Fangxiang; Wang, Jianxin
2017-08-31
Nowadays, cluster analysis of biological networks has become one of the most important approaches to identifying functional modules as well as predicting protein complexes and network biomarkers. Furthermore, the visualization of clustering results is crucial for displaying the structure of biological networks. Here we present CytoCluster, a Cytoscape plugin integrating six clustering algorithms, namely HC-PIN (Hierarchical Clustering algorithm in Protein Interaction Networks), OH-PIN (identifying Overlapping and Hierarchical modules in Protein Interaction Networks), IPCA (Identifying Protein Complex Algorithm), ClusterONE (Clustering with Overlapping Neighborhood Expansion), DCU (Detecting Complexes based on Uncertain graph model), and IPC-MCE (Identifying Protein Complexes based on Maximal Complex Extension), together with the BinGO (Biological Networks Gene Ontology) function. Users can select different clustering algorithms according to their requirements. The main function of these six clustering algorithms is to detect protein complexes or functional modules. In addition, BinGO is used to determine which Gene Ontology (GO) categories are statistically overrepresented in a set of genes or a subgraph of a biological network. CytoCluster can be easily expanded, so that more clustering algorithms and functions can be added to this plugin. Since it was created in July 2013, CytoCluster has been downloaded more than 9700 times in the Cytoscape App Store and has already been applied to the analysis of different biological networks. CytoCluster is available from http://apps.cytoscape.org/apps/cytocluster.
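For readers unfamiliar with what such clustering plugins compute, here is a hedged, minimal sketch of module detection on a toy interaction graph using networkx's greedy modularity communities; it merely stands in for, and is not equivalent to, the six algorithms bundled in CytoCluster.

```python
# Toy module detection on a small interaction graph (not one of CytoCluster's algorithms).
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.Graph()
G.add_edges_from([
    ("A", "B"), ("B", "C"), ("A", "C"),   # one densely connected group
    ("D", "E"), ("E", "F"), ("D", "F"),   # another group
    ("C", "D"),                           # a single bridging interaction
])

modules = greedy_modularity_communities(G)
for i, module in enumerate(modules, 1):
    print(f"module {i}: {sorted(module)}")
```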
MacLean, Mary H; Giesbrecht, Barry
2015-07-01
Task-relevant and physically salient features influence visual selective attention. In the present study, we investigated the influence of task-irrelevant and physically nonsalient reward-associated features on visual selective attention. Two hypotheses were tested: One predicts that the effects of target-defining task-relevant and task-irrelevant features interact to modulate visual selection; the other predicts that visual selection is determined by the independent combination of relevant and irrelevant feature effects. These alternatives were tested using a visual search task that contained multiple targets, placing a high demand on the need for selectivity, and that was data-limited and required unspeeded responses, emphasizing early perceptual selection processes. One week prior to the visual search task, participants completed a training task in which they learned to associate particular colors with a specific reward value. In the search task, the reward-associated colors were presented surrounding targets and distractors, but were neither physically salient nor task-relevant. In two experiments, the irrelevant reward-associated features influenced performance, but only when they were presented in a task-relevant location. The costs induced by the irrelevant reward-associated features were greater when they oriented attention to a target than to a distractor. In a third experiment, we examined the effects of selection history in the absence of reward history and found that the interaction between task relevance and selection history differed, relative to when the features had previously been associated with reward. The results indicate that under conditions that demand highly efficient perceptual selection, physically nonsalient task-irrelevant and task-relevant factors interact to influence visual selective attention.
BioSIGHT: Interactive Visualization Modules for Science Education
NASA Technical Reports Server (NTRS)
Wong, Wee Ling
1998-01-01
Redefining science education to harness emerging integrated media technologies with innovative pedagogical goals represents a unique challenge. The Integrated Media Systems Center (IMSC) is the only engineering research center in the area of multimedia and creative technologies sponsored by the National Science Foundation. The research program at IMSC is focused on developing advanced technologies that address human-computer interfaces, database management, and high-speed network capabilities. The BioSIGHT project at IMSC is a demonstration technology project in the area of education that seeks to address how such emerging multimedia technologies can make an impact on science education. The scope of this project will help solidify NASA's commitment to the development of innovative educational resources that promote science literacy for our students and the general population as well. These issues must be addressed as NASA marches towards the goal of enabling human space exploration, which requires an understanding of life sciences in space. The IMSC BioSIGHT lab was established with the purpose of developing a novel methodology that will map a high school biology curriculum into a series of interactive visualization modules that can be easily incorporated into a space biology curriculum. Fundamental concepts in general biology must be mastered in order to allow a better understanding and application of space biology. Interactive visualization is a powerful component that can capture the students' imagination, facilitate their assimilation of complex ideas, and help them develop integrated views of biology. These modules will augment the role of the teacher and will establish the value of student-centered interactivity, both in an individual setting and in a collaborative learning environment. Students will be able to interact with the content material, explore new challenges, and perform virtual laboratory simulations. The BioSIGHT effort is truly cross-disciplinary in nature and requires expertise from many areas, including Biology, Computer Science, Electrical Engineering, Education, and the Cognitive Sciences. The BioSIGHT team includes a scientific illustrator, an educational software designer, computer programmers, and IMSC graduate and undergraduate students. Our collaborators include TERC, a research and education organization in Cambridge, MA, with extensive K-12 math and science curriculum development experience; SRI International of Menlo Park, CA; and teachers and students from local area high schools (Newbury Park High School, USC's Family of Five schools, Chadwick School, and Pasadena Polytechnic High School).
Auditory and visual modulation of temporal lobe neurons in voice-sensitive and association cortices.
Perrodin, Catherine; Kayser, Christoph; Logothetis, Nikos K; Petkov, Christopher I
2014-02-12
Effective interactions between conspecific individuals can depend upon the receiver forming a coherent multisensory representation of communication signals, such as merging voice and face content. Neuroimaging studies have identified face- or voice-sensitive areas (Belin et al., 2000; Petkov et al., 2008; Tsao et al., 2008), some of which have been proposed as candidate regions for face and voice integration (von Kriegstein et al., 2005). However, it was unclear how multisensory influences occur at the neuronal level within voice- or face-sensitive regions, especially compared with classically defined multisensory regions in temporal association cortex (Stein and Stanford, 2008). Here, we characterize auditory (voice) and visual (face) influences on neuronal responses in a right-hemisphere voice-sensitive region in the anterior supratemporal plane (STP) of Rhesus macaques. These results were compared with those in the neighboring superior temporal sulcus (STS). Within the STP, our results show auditory sensitivity to several vocal features, which was not evident in STS units. We also newly identify a functionally distinct neuronal subpopulation in the STP that appears to carry the area's sensitivity to voice identity related features. Audiovisual interactions were prominent in both the STP and STS. However, visual influences modulated the responses of STS neurons with greater specificity and were more often associated with congruent voice-face stimulus pairings than STP neurons. Together, the results reveal the neuronal processes subserving voice-sensitive fMRI activity patterns in primates, generate hypotheses for testing in the visual modality, and clarify the position of voice-sensitive areas within the unisensory and multisensory processing hierarchies.
Simulation environment and graphical visualization environment: a COPD use-case.
Huertas-Migueláñez, Mercedes; Mora, Daniel; Cano, Isaac; Maier, Dieter; Gomez-Cabrero, David; Lluch-Ariet, Magí; Miralles, Felip
2014-11-28
Today, many different tools have been developed to execute and visualize physiological models that represent human physiology. Most of these tools run models written in very specific programming languages, which in turn simplifies communication among those models. Nevertheless, not all of these tools are able to run models written in different programming languages. In addition, interoperability between such models remains an unresolved issue. In this paper we present a simulation environment that allows, first, the execution of models developed in different programming languages and, second, the communication of parameters to interconnect these models. This simulation environment, developed within the Synergy-COPD project, aims to help bio-researchers and medical students understand the internal mechanisms of the human body through the use of physiological models. The tool is composed of a graphical visualization environment, which is a web interface through which the user can interact with the models, and a simulation workflow management system composed of a control module and a data warehouse manager. The control module monitors the correct functioning of the whole system. The data warehouse manager is responsible for managing the stored information and supporting its flow among the different modules. The simulation environment presented here has been shown to allow the user to study the internal mechanisms of human physiology through models accessed via the graphical visualization environment. A new tool for bio-researchers is thus ready for deployment in various use-case scenarios.
Bai, Gaobo; Zheng, Wenling; Ma, Wenli
2018-05-01
Hepatitis C virus (HCV)-induced human hepatocellular carcinoma (HCC) progression may be due to a complex, multi-step process. The developmental mechanism of this process is worth investigating for the prevention, diagnosis and therapy of HCC. The aim of the present study was to investigate the molecular mechanism underlying the progression of HCV-induced hepatocarcinogenesis. First, the dynamic gene module, consisting of key genes associated with progression between the normal stage and HCC, was identified using the Weighted Gene Co-expression Network Analysis tool in the R language. By defining the genes in the module as seeds, the change of co-expression in differentially expressed gene sets in two consecutive stages of pathological progression was examined. Finally, interaction pairs of HCV viral proteins and their directly targeted proteins in the identified module were extracted from the literature and from a comprehensive interaction dataset derived from yeast two-hybrid experiments. By combining the interactions between HCV proteins and their targets with protein-protein interactions from the Search Tool for the Retrieval of Interacting Genes (STRING) database, the HCV-key gene interaction network was constructed and visualized using Cytoscape 3.2 software. As a result, a module containing 44 key genes was identified as being associated with HCC progression, based on the dynamic features and functions of the genes in the module. Several important differentially co-expressed gene pairs were identified between non-HCC and HCC stages. Among the key genes, cyclin dependent kinase 1 (CDK1), NDC80, cyclin A2 (CCNA2) and rac GTPase activating protein 1 (RACGAP1) were shown to be targeted by the HCV nonstructural proteins NS5A, NS3 and NS5B. The four genes play an intermediary role between the HCV viral proteins and the dysfunctional module in the HCV-key gene interaction network. These findings provide valuable information for understanding the mechanism of HCV-induced HCC progression and for seeking drug targets for the therapy and prevention of HCC.
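The module-identification step above builds on weighted gene co-expression network analysis; the fragment below is a rough, simplified sketch of that idea (soft-thresholded correlation followed by hierarchical clustering), using random placeholder data and an assumed soft power, and is not the WGCNA R package used in the study.

```python
# Rough sketch of co-expression module detection: soft-thresholded correlation
# followed by hierarchical clustering. Data and the soft power are placeholders;
# this is not the WGCNA R package used in the study.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
expr = rng.normal(size=(30, 50))             # 30 genes x 50 samples (random placeholder)

adjacency = np.abs(np.corrcoef(expr)) ** 6   # soft-thresholding power beta = 6 (assumed)
dissimilarity = 1.0 - adjacency

# Cluster genes on the condensed dissimilarity matrix and cut the tree into modules.
Z = linkage(dissimilarity[np.triu_indices(30, k=1)], method="average")
modules = fcluster(Z, t=4, criterion="maxclust")
print("module assignment per gene:", modules)
```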
Schmidt, Christoph; Piper, Diana; Pester, Britta; Mierau, Andreas; Witte, Herbert
2018-05-01
Identification of module structure in brain functional networks is a promising way to obtain novel insights into neural information processing, as modules correspond to delineated brain regions in which interactions are strongly increased. Tracking of network modules in time-varying brain functional networks is not yet commonly considered in neuroscience, despite its potential for gaining an understanding of the time evolution of functional interaction patterns and the associated changing degrees of functional segregation and integration. We introduce a general computational framework for extracting consensus partitions from defined time windows in sequences of weighted directed edge-complete networks and show how the temporal reorganization of the module structure can be tracked and visualized. Part of the framework is a new approach for computing edge weight thresholds for individual networks based on multiobjective optimization of module structure quality criteria, as well as an approach for matching modules across time steps. By testing our framework using synthetic network sequences and applying it to brain functional networks computed from electroencephalographic recordings of healthy subjects who were exposed to a major balance perturbation, we demonstrate the framework's potential for gaining meaningful insights into dynamic brain function in the form of evolving network modules. The precise chronology of the neural processing inferred with our framework and its interpretation help to improve the currently incomplete understanding of the cortical contribution to the compensation of such balance perturbations.
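One ingredient of the framework described above is matching modules across time steps; the sketch below shows a generic and much simplified greedy matching of modules between two time windows by Jaccard overlap of their node sets, with toy partitions. It is not the authors' consensus-partition code.

```python
# Simplified illustration of matching network modules across two time windows
# by Jaccard overlap of their node sets (toy data; not the authors' framework).
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

modules_t1 = {"M1": {"n1", "n2", "n3"}, "M2": {"n4", "n5"}}
modules_t2 = {"A": {"n2", "n3", "n6"}, "B": {"n4", "n5", "n7"}}

# Greedy best-overlap matching: each module at t1 is linked to its closest
# successor at t2, so its temporal reorganization can be tracked.
for name1, nodes1 in modules_t1.items():
    best = max(modules_t2, key=lambda name2: jaccard(nodes1, modules_t2[name2]))
    print(f"{name1} -> {best}  (Jaccard = {jaccard(nodes1, modules_t2[best]):.2f})")
```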
Task-dependent modulation of the visual sensory thalamus assists visual-speech recognition.
Díaz, Begoña; Blank, Helen; von Kriegstein, Katharina
2018-05-14
The cerebral cortex modulates early sensory processing via feed-back connections to sensory pathway nuclei. The functions of this top-down modulation for human behavior are poorly understood. Here, we show that top-down modulation of the visual sensory thalamus (the lateral geniculate body, LGN) is involved in visual-speech recognition. In two independent functional magnetic resonance imaging (fMRI) studies, LGN response increased when participants processed fast-varying features of articulatory movements required for visual-speech recognition, as compared to temporally more stable features required for face identification with the same stimulus material. The LGN response during the visual-speech task correlated positively with the visual-speech recognition scores across participants. In addition, the task-dependent modulation was present for speech movements and did not occur for control conditions involving non-speech biological movements. In face-to-face communication, visual speech recognition is used to enhance or even enable understanding what is said. Speech recognition is commonly explained in frameworks focusing on cerebral cortex areas. Our findings suggest that task-dependent modulation at subcortical sensory stages has an important role for communication: Together with similar findings in the auditory modality the findings imply that task-dependent modulation of the sensory thalami is a general mechanism to optimize speech recognition. Copyright © 2018. Published by Elsevier Inc.
Bonaccorsi, Joyce; Cenni, Maria Cristina; Sale, Alessandro; Maffei, Lamberto
2012-01-01
Loss of visual acuity caused by abnormal visual experience during development (amblyopia) is an untreatable pathology in adults. On some occasions, amblyopic patients lose vision in their better eye owing to accidents or illnesses. While this condition is relevant both for its clinical importance and because it represents a case in which binocular interactions in the visual cortex are suppressed, it has scarcely been studied in animal models. We investigated whether exposure to environmental enrichment (EE) is effective in triggering recovery of vision in adult amblyopic rats rendered monocular by optic nerve dissection in their normal eye. By employing both electrophysiological and behavioral assessments, we found a full recovery of visual acuity in enriched rats compared to controls reared in standard conditions. Moreover, we report that EE modulates the expression of GAD67 and BDNF. The non-invasive nature of EE renders this paradigm promising for amblyopia therapy in adult monocular people. PMID:22509358
The role of visualization in learning from computer-based images
NASA Astrophysics Data System (ADS)
Piburn, Michael D.; Reynolds, Stephen J.; McAuliffe, Carla; Leedy, Debra E.; Birk, James P.; Johnson, Julia K.
2005-05-01
Among the sciences, the practice of geology is especially visual. To assess the role of spatial ability in learning geology, we designed an experiment using: (1) web-based versions of spatial visualization tests, (2) a geospatial test, and (3) multimedia instructional modules built around QuickTime Virtual Reality movies. Students in control and experimental sections were administered measures of spatial orientation and visualization, as well as a content-based geospatial examination. All subjects improved significantly in their scores on spatial visualization and the geospatial examination. There was no change in their scores on spatial orientation. A three-way analysis of variance, with the geospatial examination as the dependent variable, revealed significant main effects favoring the experimental group and a significant interaction between treatment and gender. These results demonstrate that spatial ability can be improved through instruction, that learning of geological content will improve as a result, and that differences in performance between the genders can be eliminated.
Krajcovicova, Lenka; Barton, Marek; Elfmarkova-Nemcova, Nela; Mikl, Michal; Marecek, Radek; Rektorova, Irena
2017-12-01
Visual processing difficulties are often present in Alzheimer's disease (AD), even in its pre-dementia phase (i.e. in mild cognitive impairment, MCI). The default mode network (DMN) modulates the brain connectivity depending on the specific cognitive demand, including visual processes. The aim of the present study was to analyze specific changes in connectivity of the posterior DMN node (i.e. the posterior cingulate cortex and precuneus, PCC/P) associated with visual processing in 17 MCI patients and 15 AD patients as compared to 18 healthy controls (HC) using functional magnetic resonance imaging. We used psychophysiological interaction (PPI) analysis to detect specific alterations in PCC connectivity associated with visual processing while controlling for brain atrophy. In the HC group, we observed physiological changes in PCC connectivity in ventral visual stream areas and with PCC/P during the visual task, reflecting the successful involvement of these regions in visual processing. In the MCI group, the PCC connectivity changes were disturbed and remained significant only with the anterior precuneus. In between-group comparison, we observed significant PPI effects in the right superior temporal gyrus in both MCI and AD as compared to HC. This change in connectivity may reflect ineffective "compensatory" mechanism present in the early pre-dementia stages of AD or abnormal modulation of brain connectivity due to the disease pathology. With the disease progression, these changes become more evident but less efficient in terms of compensation. This approach can separate the MCI from HC with 77% sensitivity and 89% specificity.
Enhancement of vision by monocular deprivation in adult mice.
Prusky, Glen T; Alam, Nazia M; Douglas, Robert M
2006-11-08
Plasticity of vision mediated through binocular interactions has been reported in mammals only during a "critical" period in juvenile life, wherein monocular deprivation (MD) causes an enduring loss of visual acuity (amblyopia) selectively through the deprived eye. Here, we report a different form of interocular plasticity of vision in adult mice in which MD leads to an enhancement of the optokinetic response (OKR) selectively through the nondeprived eye. Over 5 d of MD, the spatial frequency sensitivity of the OKR increased gradually, reaching a plateau of approximately 36% above pre-deprivation baseline. Eye opening initiated a gradual decline, but sensitivity was maintained above pre-deprivation baseline for 5-6 d. Enhanced function was restricted to the monocular visual field, notwithstanding the dependence of the plasticity on binocular interactions. Activity in visual cortex ipsilateral to the deprived eye was necessary for the characteristic induction of the enhancement, and activity in visual cortex contralateral to the deprived eye was necessary for its maintenance after MD. The plasticity also displayed distinct learning-like properties: Active testing experience was required to attain maximal enhancement and for enhancement to persist after MD, and the duration of enhanced sensitivity after MD was extended by increasing the length of MD, and by repeating MD. These data show that the adult mouse visual system maintains a form of experience-dependent plasticity in which the visual cortex can modulate the normal function of subcortical visual pathways.
Modeling the Effects of Perceptual Load: Saliency, Competitive Interactions, and Top-Down Biases.
Neokleous, Kleanthis; Shimi, Andria; Avraamides, Marios N
2016-01-01
A computational model of visual selective attention has been implemented to account for experimental findings on the Perceptual Load Theory (PLT) of attention. The model was designed based on existing neurophysiological findings on attentional processes with the objective to offer an explicit and biologically plausible formulation of PLT. Simulation results verified that the proposed model is capable of capturing the basic pattern of results that support the PLT as well as findings that are considered contradictory to the theory. Importantly, the model is able to reproduce the behavioral results from a dilution experiment, providing thus a way to reconcile PLT with the competing Dilution account. Overall, the model presents a novel account for explaining PLT effects on the basis of the low-level competitive interactions among neurons that represent visual input and the top-down signals that modulate neural activity. The implications of the model concerning the debate on the locus of selective attention as well as the origins of distractor interference in visual displays of varying load are discussed.
Zhang, Lelin; Chi, Yu Mike; Edelstein, Eve; Schulze, Jurgen; Gramann, Klaus; Velasquez, Alvaro; Cauwenberghs, Gert; Macagno, Eduardo
2010-01-01
Wireless physiological/neurological monitoring in virtual reality (VR) offers a unique opportunity for unobtrusively quantifying human responses to precisely controlled and readily modulated VR representations of health care environments. Here we present such a wireless, light-weight head-mounted system for measuring electrooculogram (EOG) and electroencephalogram (EEG) activity in human subjects interacting with and navigating in the Calit2 StarCAVE, a five-sided immersive 3-D visualization VR environment. The system can be easily expanded to include other measurements, such as cardiac activity and galvanic skin responses. We demonstrate the capacity of the system to track focus of gaze in 3-D and report a novel calibration procedure for estimating eye movements from responses to the presentation of a set of dynamic visual cues in the StarCAVE. We discuss cyber and clinical applications that include a 3-D cursor for visual navigation in VR interactive environments, and the monitoring of neurological and ocular dysfunction in vision/attention disorders.
Shalev, Nir; De Wandel, Linde; Dockree, Paul; Demeyere, Nele; Chechlacz, Magdalena
2017-10-03
The Theory of Visual Attention (TVA) provides a mathematical formalisation of the "biased competition" account of visual attention. Applying this model to individual performance in a free recall task allows the estimation of 5 independent attentional parameters: visual short-term memory (VSTM) capacity, speed of information processing, perceptual threshold of visual detection; attentional weights representing spatial distribution of attention (spatial bias), and the top-down selectivity index. While the TVA focuses on selection in space, complementary accounts of attention describe how attention is maintained over time, and how temporal processes interact with selection. A growing body of evidence indicates that different facets of attention interact and share common neural substrates. The aim of the current study was to modulate a spatial attentional bias via transfer effects, based on a mechanistic understanding of the interplay between spatial, selective and temporal aspects of attention. Specifically, we examined here: (i) whether a single administration of a lateralized sustained attention task could prime spatial orienting and lead to transferable changes in attentional weights (assigned to the left vs right hemi-field) and/or other attentional parameters assessed within the framework of TVA (Experiment 1); (ii) whether the effects of such spatial-priming on TVA parameters could be further enhanced by bi-parietal high frequency transcranial random noise stimulation (tRNS) (Experiment 2). Our results demonstrate that spatial attentional bias, as assessed within the TVA framework, was primed by sustaining attention towards the right hemi-field, but this spatial-priming effect did not occur when sustaining attention towards the left. Furthermore, we show that bi-parietal high-frequency tRNS combined with the rightward spatial-priming resulted in an increased attentional selectivity. To conclude, we present a novel, theory-driven method for attentional modulation providing important insights into how the spatial and temporal processes in attention interact with attentional selection. Copyright © 2017 Elsevier Ltd. All rights reserved.
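For reference, the TVA rate equation behind the parameters listed in the abstract above is commonly written as follows; this is the standard textbook form with generic notation, not an equation reproduced from this particular study.

```latex
% Standard TVA rate equation (generic notation, not reproduced from this study):
% v(x, i) is the rate at which object x is encoded as category i into VSTM.
\[
  v(x, i) \;=\; \eta(x, i)\,\beta_i \,\frac{w_x}{\sum_{z \in S} w_z},
  \qquad
  w_x \;=\; \sum_{j \in R} \eta(x, j)\,\pi_j ,
\]
% \eta(x,i): sensory evidence that x belongs to category i;  \beta_i: decision bias;
% w_x: attentional weight of x;  \pi_j: pertinence of feature j;  S: objects in the display.
```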
Attention to Multiple Objects Facilitates Their Integration in Prefrontal and Parietal Cortex.
Kim, Yee-Joon; Tsai, Jeffrey J; Ojemann, Jeffrey; Verghese, Preeti
2017-05-10
Selective attention is known to interact with perceptual organization. In visual scenes, individual objects that are distinct and discriminable may occur on their own, or in groups such as a stack of books. The main objective of this study is to probe the neural interaction that occurs between individual objects when attention is directed toward one or more objects. Here we record steady-state visual evoked potentials via electrocorticography to directly assess the responses to individual stimuli and to their interaction. When human participants attend to two adjacent stimuli, prefrontal and parietal cortex shows a selective enhancement of only the neural interaction between stimuli, but not the responses to individual stimuli. When only one stimulus is attended, the neural response to that stimulus is selectively enhanced in prefrontal and parietal cortex. In contrast, early visual areas generally manifest responses to individual stimuli and to their interaction regardless of attentional task, although a subset of the responses is modulated similarly to prefrontal and parietal cortex. Thus, the neural representation of the visual scene as one progresses up the cortical hierarchy becomes more highly task-specific and represents either individual stimuli or their interaction, depending on the behavioral goal. Attention to multiple objects facilitates an integration of objects akin to perceptual grouping. SIGNIFICANCE STATEMENT Individual objects in a visual scene are seen as distinct entities or as parts of a whole. Here we examine how attention to multiple objects affects their neural representation. Previous studies measured single-cell or fMRI responses and obtained only aggregate measures that combined the activity to individual stimuli as well as their potential interaction. Here, we directly measure electrocorticographic steady-state responses corresponding to individual objects and to their interaction using a frequency-tagging technique. Attention to two stimuli increases the interaction component that is a hallmark for perceptual integration of stimuli. Furthermore, this stimulus-specific interaction is represented in prefrontal and parietal cortex in a task-dependent manner. Copyright © 2017 the authors 0270-6474/17/374942-12$15.00/0.
Early multisensory interactions affect the competition among multiple visual objects.
Van der Burg, Erik; Talsma, Durk; Olivers, Christian N L; Hickey, Clayton; Theeuwes, Jan
2011-04-01
In dynamic cluttered environments, audition and vision may benefit from each other in determining what deserves further attention and what does not. We investigated the underlying neural mechanisms responsible for attentional guidance by audiovisual stimuli in such an environment. Event-related potentials (ERPs) were measured during visual search through dynamic displays consisting of line elements that randomly changed orientation. Search accuracy improved when a target orientation change was synchronized with an auditory signal as compared to when the auditory signal was absent or synchronized with a distractor orientation change. The ERP data show that behavioral benefits were related to an early multisensory interaction over left parieto-occipital cortex (50-60 ms post-stimulus onset), which was followed by an early positive modulation (80-100 ms) over occipital and temporal areas contralateral to the audiovisual event, an enhanced N2pc (210-250 ms), and a contralateral negative slow wave (CNSW). The early multisensory interaction was correlated with behavioral search benefits, indicating that participants with a strong multisensory interaction benefited the most from the synchronized auditory signal. We suggest that an auditory signal enhances the neural response to a synchronized visual event, which increases the chances of selection in a multiple object environment. Copyright © 2010 Elsevier Inc. All rights reserved.
Griffis, Joseph C.; Elkhetali, Abdurahman S.; Burge, Wesley K.; Chen, Richard H.; Visscher, Kristina M.
2015-01-01
Attention facilitates the processing of task-relevant visual information and suppresses interference from task-irrelevant information. Modulations of neural activity in visual cortex depend on attention, and likely result from signals originating in fronto-parietal and cingulo-opercular regions of cortex. Here, we tested the hypothesis that attentional facilitation of visual processing is accomplished in part by changes in how brain networks involved in attentional control interact with sectors of V1 that represent different retinal eccentricities. We measured the strength of background connectivity between fronto-parietal and cingulo-opercular regions and different eccentricity sectors in V1 using functional MRI data that were collected while participants performed tasks involving attention to either a centrally presented visual stimulus or a simultaneously presented auditory stimulus. We found that when the visual stimulus was attended, background connectivity between V1 and the left frontal eye fields (FEF), left intraparietal sulcus (IPS), and right IPS varied strongly across different eccentricity sectors in V1, so that foveal sectors were more strongly connected than peripheral sectors. This retinotopic gradient was weaker when the visual stimulus was ignored, indicating that it was driven by attentional effects. Greater task-driven differences between foveal and peripheral sectors in background connectivity to these regions were associated with better performance on the visual task and faster response times on correct trials. These findings are consistent with the notion that attention drives the configuration of task-specific functional pathways that enable the prioritized processing of task-relevant visual information, and show that the prioritization of visual information by attentional processes may be encoded in the retinotopic gradient of connectivity between V1 and fronto-parietal regions. PMID:26106320
Touch Precision Modulates Visual Bias.
Misceo, Giovanni F; Jones, Maurice D
2018-01-01
The sensory precision hypothesis holds that different seen and felt cues about the size of an object resolve themselves in favor of the more reliable modality. To examine this precision hypothesis, 60 college students were asked to look at one size while manually exploring another unseen size either with their bare fingers or, to lessen the reliability of touch, with their fingers sleeved in rigid tubes. Afterwards, the participants estimated either the seen size or the felt size by finding a match from a visual display of various sizes. Results showed that the seen size biased the estimates of the felt size when the reliability of touch decreased. This finding supports the interaction between touch reliability and visual bias predicted by statistically optimal models of sensory integration.
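The "statistically optimal" integration that the abstract above alludes to is usually formalized as reliability-weighted averaging of the seen and felt size estimates; the generic form is given below (standard maximum-likelihood cue-combination notation, not parameters fitted to this study's data).

```latex
% Maximum-likelihood (reliability-weighted) combination of a seen size S_V and a
% felt size S_H; generic formulation, not parameters estimated in this study.
\[
  \hat{S} \;=\; w_V S_V + w_H S_H,
  \qquad
  w_V = \frac{1/\sigma_V^{2}}{1/\sigma_V^{2} + 1/\sigma_H^{2}},
  \quad
  w_H = 1 - w_V,
\]
% so that lowering touch reliability (larger \sigma_H) increases the visual weight w_V,
% i.e. increases visual bias, consistent with the result reported above.
```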
Temporally evolving gain mechanisms of attention in macaque area V4.
Sani, Ilaria; Santandrea, Elisa; Morrone, Maria Concetta; Chelazzi, Leonardo
2017-08-01
Cognitive attention and perceptual saliency jointly govern our interaction with the environment. Yet, we still lack a universally accepted account of the interplay between attention and luminance contrast, a fundamental dimension of saliency. We measured the attentional modulation of V4 neurons' contrast response functions (CRFs) in awake, behaving macaque monkeys and applied a new approach that emphasizes the temporal dynamics of cell responses. We found that attention modulates CRFs via different gain mechanisms during subsequent epochs of visually driven activity: an early contrast-gain, strongly dependent on prestimulus activity changes (baseline shift); a time-limited stimulus-dependent multiplicative modulation, reaching its maximal expression around 150 ms after stimulus onset; and a late resurgence of contrast-gain modulation. Attention produced comparable time-dependent attentional gain changes on cells heterogeneously coding contrast, supporting the notion that the same circuits mediate attention mechanisms in V4 regardless of the form of contrast selectivity expressed by the given neuron. Surprisingly, attention was also sometimes capable of inducing radical transformations in the shape of CRFs. These findings offer important insights into the mechanisms that underlie contrast coding and attention in primate visual cortex and a new perspective on their interplay, one in which time becomes a fundamental factor. NEW & NOTEWORTHY We offer an innovative perspective on the interplay between attention and luminance contrast in macaque area V4, one in which time becomes a fundamental factor. We place emphasis on the temporal dynamics of attentional effects, pioneering the notion that attention modulates contrast response functions of V4 neurons via the sequential engagement of distinct gain mechanisms. These findings advance understanding of attentional influences on visual processing and help reconcile divergent results in the literature. Copyright © 2017 the American Physiological Society.
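Contrast response functions of the kind measured in the study above are typically fit with a Naka-Rushton (hyperbolic ratio) function; in that generic parameterization, shown below rather than the authors' exact fits, the gain mechanisms named in the abstract map onto changes in different parameters.

```latex
% Generic Naka-Rushton contrast response function (not the authors' exact fits):
\[
  R(c) \;=\; R_{\max}\,\frac{c^{\,n}}{c^{\,n} + c_{50}^{\,n}} \;+\; R_{0},
\]
% Attention acting as a contrast gain shifts the semisaturation contrast c_{50}
% (a horizontal shift of the curve), a multiplicative response gain scales R_{max},
% and a baseline shift changes the spontaneous rate R_{0}.
```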
NASA Astrophysics Data System (ADS)
Freund, Eckhard; Rossmann, Juergen
2002-02-01
In 2004, the European COLUMBUS Module is to be attached to the International Space Station. On the way to the successful planning, deployment and operation of the module, computer-generated and animated models are being used to optimize performance. Under contract to the German Space Agency DLR, it has become the IRF's task to provide a Projective Virtual Reality system: a virtual world built after the planned layout of the COLUMBUS module that lets astronauts and experimenters practice operational procedures and the handling of experiments. The key features of the system currently being realized comprise the possibility of distributed multi-user access to the virtual lab and the visualization of real-world experiment data. Because the virtual world can be shared, cooperative operations can be practiced easily, and trainers and trainees can work together more effectively in the shared virtual environment. The capability to visualize real-world data will be used to introduce measured experiment data into the virtual world online in order to interact realistically with the science-reference model hardware: the user's actions in the virtual world are translated into corresponding changes of the inputs of the science reference model hardware, and the measured data are then in turn fed back into the virtual world. During the operation of COLUMBUS, the capabilities for distributed access and for visualizing measured data through the use of metaphors and augmentations of the virtual world may be used to provide virtual access to the COLUMBUS module, e.g. via the Internet. Currently, finishing touches are being put to the system. In November 2001 the virtual world is expected to be operational, so that, besides the design and the key ideas, first experimental results can be presented.
Park, Hyojin; Kayser, Christoph; Thut, Gregor; Gross, Joachim
2016-01-01
During continuous speech, lip movements provide visual temporal signals that facilitate speech processing. Here, using MEG we directly investigated how these visual signals interact with rhythmic brain activity in participants listening to and seeing the speaker. First, we investigated coherence between oscillatory brain activity and speaker’s lip movements and demonstrated significant entrainment in visual cortex. We then used partial coherence to remove contributions of the coherent auditory speech signal from the lip-brain coherence. Comparing this synchronization between different attention conditions revealed that attending visual speech enhances the coherence between activity in visual cortex and the speaker’s lips. Further, we identified a significant partial coherence between left motor cortex and lip movements and this partial coherence directly predicted comprehension accuracy. Our results emphasize the importance of visually entrained and attention-modulated rhythmic brain activity for the enhancement of audiovisual speech processing. DOI: http://dx.doi.org/10.7554/eLife.14521.001 PMID:27146891
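Coherence between a lip-movement signal and a cortical signal, the core measure in the study above, can be illustrated generically with scipy; the snippet below uses toy signals, an assumed sampling rate and frequency band, and does not cover the partial-coherence step the authors additionally used.

```python
# Toy illustration of lip-brain coherence (generic scipy example, not the MEG pipeline above).
import numpy as np
from scipy.signal import coherence

fs = 250.0                                   # assumed sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(2)

lips = np.sin(2 * np.pi * 4 * t) + 0.5 * rng.normal(size=t.size)          # ~4 Hz lip aperture
brain = np.sin(2 * np.pi * 4 * t + 0.8) + 1.0 * rng.normal(size=t.size)   # entrained signal

f, coh = coherence(lips, brain, fs=fs, nperseg=1024)
band = (f >= 2) & (f <= 6)                   # assumed syllable-rate band (2-6 Hz)
print("mean 2-6 Hz coherence:", coh[band].mean())
```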
Accurate expectancies diminish perceptual distraction during visual search
Sy, Jocelyn L.; Guerin, Scott A.; Stegman, Anna; Giesbrecht, Barry
2014-01-01
The load theory of visual attention proposes that efficient selective perceptual processing of task-relevant information during search is determined automatically by the perceptual demands of the display. If the perceptual demands required to process task-relevant information are not enough to consume all available capacity, then the remaining capacity automatically and exhaustively “spills-over” to task-irrelevant information. The spill-over of perceptual processing capacity increases the likelihood that task-irrelevant information will impair performance. In two visual search experiments, we tested the automaticity of the allocation of perceptual processing resources by measuring the extent to which the processing of task-irrelevant distracting stimuli was modulated by both perceptual load and top-down expectations using behavior, functional magnetic resonance imaging, and electrophysiology. Expectations were generated using a trial-by-trial cue that provided information about the likely load of the upcoming visual search task. When the cues were valid, behavioral interference was eliminated and the influence of load on frontoparietal and visual cortical responses was attenuated relative to when the cues were invalid. In conditions in which task-irrelevant information interfered with performance and modulated visual activity, individual differences in mean blood oxygenation level dependent responses measured from the left intraparietal sulcus were negatively correlated with individual differences in the severity of distraction. These results are consistent with the interpretation that a top-down biasing mechanism interacts with perceptual load to support filtering of task-irrelevant information. PMID:24904374
Multimodal representation of limb endpoint position in the posterior parietal cortex.
Shi, Ying; Apker, Gregory; Buneo, Christopher A
2013-04-01
Understanding the neural representation of limb position is important for comprehending the control of limb movements and the maintenance of body schema, as well as for the development of neuroprosthetic systems designed to replace lost limb function. Multiple subcortical and cortical areas contribute to this representation, but its multimodal basis has largely been ignored. Regarding the parietal cortex, previous results suggest that visual information about arm position is not strongly represented in area 5, although these results were obtained under conditions in which animals were not using their arms to interact with objects in their environment, which could have affected the relative weighting of relevant sensory signals. Here we examined the multimodal basis of limb position in the superior parietal lobule (SPL) as monkeys reached to and actively maintained their arm position at multiple locations in a frontal plane. On half of the trials both visual and nonvisual feedback of the endpoint of the arm were available, while on the other trials visual feedback was withheld. Many neurons were tuned to arm position, while a smaller number were modulated by the presence/absence of visual feedback. Visual modulation generally took the form of a decrease in both firing rate and variability with limb vision and was associated with more accurate decoding of position at the population level under these conditions. These findings support a multimodal representation of limb endpoint position in the SPL but suggest that visual signals are relatively weakly represented in this area, and only at the population level.
Walter, Sabrina; Quigley, Cliodhna; Mueller, Matthias M
2014-05-01
Performing a task across the left and right visual hemifields results in better performance than in a within-hemifield version of the task, termed the different-hemifield advantage. Although recent studies used transient stimuli that were presented with long ISIs, here we used a continuous objective electrophysiological (EEG) measure of competitive interactions for attentional processing resources in early visual cortex, the steady-state visual evoked potential (SSVEP). We frequency-tagged locations in each visual quadrant and at central fixation by flickering light-emitting diodes (LEDs) at different frequencies to elicit distinguishable SSVEPs. Stimuli were presented for several seconds, and participants were cued to attend to two LEDs either in one (Within) or distributed across left and right visual hemifields (Across). In addition, we introduced two reference measures: one for suppressive interactions between the peripheral LEDs by using a task at fixation where attention was withdrawn from the periphery and another estimating the upper bound of SSVEP amplitude by cueing participants to attend to only one of the peripheral LEDs. We found significantly greater SSVEP amplitude modulations in Across compared with Within hemifield conditions. No differences were found between SSVEP amplitudes elicited by the peripheral LEDs when participants attended to the centrally located LEDs compared with when peripheral LEDs had to be ignored in Across and Within trials. Attending to only one LED elicited the same SSVEP amplitude as Across conditions. Although behavioral data displayed a more complex pattern, SSVEP amplitudes were well in line with the predictions of the different-hemifield advantage account during sustained visuospatial attention.
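The frequency-tagging logic used above can be illustrated with a minimal spectral analysis: each flicker frequency produces a spectral peak whose amplitude indexes processing of the corresponding stimulus. The snippet below uses synthetic data and arbitrary tag frequencies, not the study's EEG recordings.

```python
# Minimal frequency-tagging illustration with synthetic "EEG" (not the study's data).
import numpy as np

fs = 500.0                                   # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(3)

tags = [8.57, 10.0, 12.0, 15.0]              # assumed flicker frequencies of the LEDs
eeg = sum(a * np.sin(2 * np.pi * f * t) for a, f in zip([1.0, 0.6, 0.8, 0.4], tags))
eeg = eeg + rng.normal(scale=2.0, size=t.size)

spectrum = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

for f in tags:
    amp = spectrum[np.argmin(np.abs(freqs - f))]
    print(f"SSVEP amplitude at {f:5.2f} Hz: {amp:.3f}")
```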
Jingling, Li; Tseng, Chia-Huei; Zhaoping, Li
2013-09-10
Salient items usually capture attention and are beneficial to visual search. Jingling and Tseng (2013), nevertheless, have discovered that a salient collinear column can impair local visual search. The display used in that study had 21 rows and 27 columns of bars, all uniformly horizontal (or vertical) except for one column of bars orthogonally oriented to all other bars, making this unique column of collinear (or noncollinear) bars salient in the display. Observers discriminated an oblique target bar superimposed on one of the bars either in the salient column or in the background. Interestingly, responses were slower for a target in a salient collinear column than in the background. This opens a theoretical question of how contour integration interacts with salience computation, which is addressed here by an examination of how salience modulated the search impairment from the collinear column. We show that the collinear column needs to have a high orientation contrast with its neighbors to exert search interference. A collinear column of high contrast in color or luminance did not produce the same impairment. Our results show that orientation-defined salience interacted with collinear contour differently from other feature dimensions, which is consistent with the neuronal properties in V1.
Metacontrast masking and attention do not interact.
Agaoglu, Sevda; Breitmeyer, Bruno; Ogmen, Haluk
2016-07-01
Visual masking and attention have been known to control the transfer of information from sensory memory to visual short-term memory. A natural question is whether these processes operate independently or interact. Recent evidence suggests that studies that reported interactions between masking and attention suffered from ceiling and/or floor effects. The objective of the present study was to investigate whether metacontrast masking and attention interact by using an experimental design in which saturation effects are avoided. We asked observers to report the orientation of a target bar randomly selected from a display containing either two or six bars. The mask was a ring that surrounded the target bar. Attentional load was controlled by set-size and masking strength by the stimulus onset asynchrony between the target bar and the mask ring. We investigated interactions between masking and attention by analyzing two different aspects of performance: (i) the mean absolute response errors and (ii) the distribution of signed response errors. Our results show that attention affects observers' performance without interacting with masking. Statistical modeling of response errors suggests that attention and metacontrast masking exert their effects by independently modulating the probability of "guessing" behavior. Implications of our findings for models of attention are discussed.
Modulation of V1 Spike Response by Temporal Interval of Spatiotemporal Stimulus Sequence
Kim, Taekjun; Kim, HyungGoo R.; Kim, Kayeon; Lee, Choongkil
2012-01-01
The spike activity of single neurons of the primary visual cortex (V1) becomes more selective and reliable in response to wide-field natural scenes compared to smaller stimuli confined to the classical receptive field (RF). However, it is largely unknown what aspects of natural scenes increase the selectivity of V1 neurons. One hypothesis is that modulation by surround interaction is highly sensitive to small changes in spatiotemporal aspects of RF surround. Such a fine-tuned modulation would enable single neurons to hold information about spatiotemporal sequences of oriented stimuli, which extends the role of V1 neurons as a simple spatiotemporal filter confined to the RF. In the current study, we examined the hypothesis in the V1 of awake behaving monkeys, by testing whether the spike response of single V1 neurons is modulated by temporal interval of spatiotemporal stimulus sequence encompassing inside and outside the RF. We used two identical Gabor stimuli that were sequentially presented with a variable stimulus onset asynchrony (SOA): the preceding one (S1) outside the RF and the following one (S2) in the RF. This stimulus configuration enabled us to examine the spatiotemporal selectivity of response modulation from a focal surround region. Although S1 alone did not evoke spike responses, visual response to S2 was modulated for SOA in the range of tens of milliseconds. These results suggest that V1 neurons participate in processing spatiotemporal sequences of oriented stimuli extending outside the RF. PMID:23091631
Temporal kinetics of prefrontal modulation of the extrastriate cortex during visual attention.
Yago, Elena; Duarte, Audrey; Wong, Ting; Barceló, Francisco; Knight, Robert T
2004-12-01
Single-unit, event-related potential (ERP), and neuroimaging studies have implicated the prefrontal cortex (PFC) in top-down control of attention and working memory. We conducted an experiment in patients with unilateral PFC damage (n = 8) to assess the temporal kinetics of PFC-extrastriate interactions during visual attention. Subjects alternated attention between the left and the right hemifields in successive runs while they detected target stimuli embedded in streams of repetitive task-irrelevant stimuli (standards). The design enabled us to examine tonic (spatial selection) and phasic (feature selection) PFC-extrastriate interactions. PFC damage impaired performance in the visual field contralateral to lesions, as manifested by both larger reaction times and error rates. Assessment of the extrastriate P1 ERP revealed that the PFC exerts a tonic (spatial selection) excitatory input to the ipsilateral extrastriate cortex as early as 100 msec post stimulus delivery. The PFC exerts a second phasic (feature selection) excitatory extrastriate modulation from 180 to 300 msec, as evidenced by reductions in selection negativity after damage. Finally, reductions of the N2 ERP to target stimuli supports the notion that the PFC exerts a third phasic (target selection) signal necessary for successful template matching during postselection analysis of target features. The results provide electrophysiological evidence of three distinct tonic and phasic PFC inputs to the extrastriate cortex in the initial few hundred milliseconds of stimulus processing. Damage to this network appears to underlie the pervasive deficits in attention observed in patients with prefrontal lesions.
Thalamocortical interactions underlying visual fear conditioning in humans.
Lithari, Chrysa; Moratti, Stephan; Weisz, Nathan
2015-11-01
Despite a strong focus on the role of the amygdala in fear conditioning, recent works point to a more distributed network supporting fear conditioning. We aimed to elucidate interactions between subcortical and cortical regions in fear conditioning in humans. To do this, we used two fearful faces as conditioned stimuli (CS) and an electrical stimulation at the left hand, paired with one of the CS, as unconditioned stimulus (US). The luminance of the CS was rhythmically modulated leading to "entrainment" of brain oscillations at a predefined modulation frequency. Steady-state responses (SSR) were recorded by MEG. In addition to occipital regions, spectral analysis of SSR revealed increased power during fear conditioning particularly for thalamus and cerebellum contralateral to the upcoming US. Using thalamus and amygdala as seed-regions, directed functional connectivity was calculated to capture the modulation of interactions that underlie fear conditioning. Importantly, this analysis showed that the thalamus drives the fusiform area during fear conditioning, while amygdala captures the more general effect of fearful faces perception. This study confirms ideas from the animal literature, and demonstrates for the first time the central role of the thalamus in fear conditioning in humans. © 2015 Wiley Periodicals, Inc.
Designing Interactive Electronic Module in Chemistry Lessons
NASA Astrophysics Data System (ADS)
Irwansyah, F. S.; Lubab, I.; Farida, I.; Ramdhani, M. A.
2017-09-01
This research aims to design an electronic module (e-module) oriented to the development of students' chemical literacy on the solution colligative properties material. The work proceeded through several stages, including concept analysis, discourse analysis, storyboard design, design development, product packaging, validation, and a feasibility test. Overall, the research comprised three main stages: Define (preliminary studies), Design (designing the e-module), and Develop (validation and model trial). The concept presentation and visualization used in this e-module are oriented to chemical literacy skills, and the presentation order covers the aspects of scientific context, process, content, and attitude. Chemists and multimedia experts validated the product to assess its initial quality and to provide feedback for improvement. The feasibility test results indicate that the content presentation and the display are valid and feasible for use, with scores of 85.77% and 87.94%, respectively. These values show that this e-module, oriented to students' chemical literacy skills for the solution colligative properties material, is feasible for classroom use.
Structure of Zebrafish IRBP Reveals Fatty Acid Binding
Ghosh, Debashis; Haswell, Karen M.; Sprada, Molly; Gonzalez-Fernandez, Federico
2015-01-01
Interphotoreceptor retinoid-binding protein (IRBP) has a remarkable role in targeting and protecting all-trans and 11-cis retinol, and 11-cis retinal during the rod and cone visual cycles. Little is known about how the correct retinoid is efficiently delivered to and removed from the correct cell at the required time. It has been proposed that the different fatty acid compositions at the outer segments and the retinal pigmented epithelium could have an important role in regulating the delivery and uptake of the visual cycle retinoids at the cell-interphotoreceptor-matrix interface. Although this suggests intriguing mechanisms for the role of local fatty acids in visual-cycle retinoid trafficking, nothing is known about the structural basis of IRBP-fatty acid interactions. Such regulation may be mediated through IRBP’s unusual repeating homologous modules, each containing about 300 amino acids. We have been investigating structure-function relationships of zebrafish IRBP (zIRBP), which has only two tandem modules (z1 and z2), as a model for the more complex four-module mammalian IRBPs. Here we report the first X-ray crystal structure of a teleost IRBP, and the only structure with a bound ligand. The X-ray structure of z1, determined at 1.90 Å resolution, reveals a two-domain organization of the module (domains A and B). A deep hydrophobic pocket was identified within the N-terminal domain A. In fluorescence titration assays, oleic acid displaced all-trans retinol from zIRBP. Our study, which provides the first structure of an IRBP with bound ligand, supports a potential role for fatty acids in regulating retinoid binding. PMID:26344741
Preparing Teachers to Support the Development of Climate Literate Students
NASA Astrophysics Data System (ADS)
Haddad, N.; Ledley, T. S.; Ellins, K. K.; Bardar, E. W.; Youngman, E.; Dunlap, C.; Lockwood, J.; Mote, A. S.; McNeal, K.; Libarkin, J. C.; Lynds, S. E.; Gold, A. U.
2014-12-01
The EarthLabs climate project includes curriculum development, teacher professional development, teacher leadership development, and research on student learning, all directed at increasing high school teachers' and students' understanding of the factors that shape our planet's climate. The project has developed four new modules which focus on climate literacy and which are part of the larger Web based EarthLabs collection of Earth science modules. Climate related themes highlighted in the new modules include the Earth system with its positive and negative feedback loops; the range of temporal and spatial scales at which climate, weather, and other Earth system processes occur; and the recurring question, "How do we know what we know about Earth's past and present climate?" which addresses proxy data and scientific instrumentation. EarthLabs climate modules use two central strategies to help students navigate the multiple challenges inherent in understanding climate science. The first is to actively engage students with the content by using a variety of learning modes, and by allowing students to pace themselves through interactive visualizations that address particularly challenging content. The second strategy, which is the focus of this presentation, is to support teachers in a subject area where few have substantive content knowledge or technical skills. Teachers who grasp the processes and interactions that give Earth its climate and the technical skills to engage with relevant data and visualizations are more likely to be successful in supporting students' understanding of climate's complexities. This presentation will briefly introduce the EarthLabs project and will describe the steps the project takes to prepare climate literate teachers, including Web based resources, teacher workshops, and the development of a cadre of teacher leaders who are prepared to continue leading the workshops after project funding ends.
An introduction to Space Weather Integrated Modeling
NASA Astrophysics Data System (ADS)
Zhong, D.; Feng, X.
2012-12-01
The need for a software toolkit that integrates space weather models and data is one of many challenges we face when applying the models to space weather forecasting. To meet this challenge, we have developed Space Weather Integrated Modeling (SWIM), which is capable of analyzing and visualizing the results from a diverse set of space weather models. SWIM has a modular design and is written in Python, using NumPy, matplotlib, and the Visualization ToolKit (VTK). SWIM provides a data management module to read a variety of spacecraft data products and the specific data format of the Solar-Interplanetary Conservation Element/Solution Element MHD model (SIP-CESE MHD model) for the study of solar-terrestrial phenomena. Data analysis, visualization and graphical user interface modules are also provided in a user-friendly way to run the integrated models and to visualize 2-D and 3-D data sets interactively. With these tools we can rapidly analyze model results, locally or remotely, for example by extracting data at specific locations in time-sequence data sets, plotting interplanetary magnetic field lines, multi-slicing of solar wind speed, volume rendering of solar wind density, animating time-sequence data sets, and comparing model results with observational data. To speed up the analysis, an in-situ visualization interface is used to support visualizing the data 'on the fly'. We also accelerated some critical, time-consuming analysis and visualization methods with the aid of GPUs and multi-core CPUs. We have used this tool to visualize the data of the SIP-CESE MHD model in real time, and have integrated the database model of shock arrival, the shock propagation model, the Dst forecasting model and the SIP-CESE MHD model developed by the SIGMA Weather Group at the State Key Laboratory of Space Weather/CAS.
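As a rough illustration of one analysis the abstract mentions, extracting data at a specific location from a time-sequence data set, the following Python sketch uses NumPy and matplotlib (two of the packages SWIM is built on). The file names, array shapes and grid indices are hypothetical; this is not SWIM code.

```python
# Minimal sketch (not SWIM's actual API): pull a time series at one grid point
# out of a sequence of 3-D model snapshots and plot it with matplotlib.
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical stack of solar-wind speed snapshots, shape (nt, nx, ny, nz), in km/s
speed = np.load("solar_wind_speed_sequence.npy")
times = np.load("snapshot_times_hours.npy")        # assumed shape (nt,)

ix, iy, iz = 10, 20, 5          # grid indices of the location of interest
series = speed[:, ix, iy, iz]   # time series at that single grid point

plt.plot(times, series)
plt.xlabel("Time (hours)")
plt.ylabel("Solar wind speed (km/s)")
plt.title("Speed at grid point (10, 20, 5)")
plt.show()
```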
VisANT 3.0: new modules for pathway visualization, editing, prediction and construction.
Hu, Zhenjun; Ng, David M; Yamada, Takuji; Chen, Chunnuan; Kawashima, Shuichi; Mellor, Joe; Linghu, Bolan; Kanehisa, Minoru; Stuart, Joshua M; DeLisi, Charles
2007-07-01
With the integration of the KEGG and Predictome databases as well as two search engines for coexpressed genes/proteins using data sets obtained from the Stanford Microarray Database (SMD) and Gene Expression Omnibus (GEO) database, VisANT 3.0 supports exploratory pathway analysis, which includes multi-scale visualization of multiple pathways, editing and annotating pathways using a KEGG compatible visual notation and visualization of expression data in the context of pathways. Expression levels are represented either by color intensity or by nodes with an embedded expression profile. Multiple experiments can be navigated or animated. Known KEGG pathways can be enriched by querying either coexpressed components of known pathway members or proteins with known physical interactions. Predicted pathways for genes/proteins with unknown functions can be inferred from coexpression or physical interaction data. Pathways produced in VisANT can be saved as computer-readable XML format (VisML), graphic images or high-resolution Scalable Vector Graphics (SVG). Pathways in the format of VisML can be securely shared within an interested group or published online using a simple Web link. VisANT is freely available at http://visant.bu.edu.
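The pathway-enrichment idea described above, pulling in coexpressed or physically interacting partners of known pathway members, can be illustrated with a small sketch. It uses networkx rather than VisANT's own code, and the gene names and evidence edges are invented placeholders.

```python
# Conceptual sketch of pathway enrichment: start from known pathway members and
# add partners connected by coexpression or physical-interaction evidence.
import networkx as nx

pathway_members = {"geneA", "geneB", "geneC"}

# Hypothetical evidence edges: (gene1, gene2, evidence type)
evidence = [
    ("geneA", "geneX", "coexpression"),
    ("geneB", "geneY", "physical_interaction"),
    ("geneZ", "geneQ", "coexpression"),   # unrelated to the pathway
]

g = nx.Graph()
g.add_nodes_from(pathway_members, in_pathway=True)
for u, v, kind in evidence:
    g.add_edge(u, v, evidence=kind)

# Enrich: keep only partners directly connected to a known pathway member
enriched = set(pathway_members)
for member in pathway_members:
    enriched.update(g.neighbors(member))

print(sorted(enriched))   # geneA, geneB, geneC, geneX, geneY
```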
Effect of attentional load on audiovisual speech perception: evidence from ERPs.
Alsius, Agnès; Möttönen, Riikka; Sams, Mikko E; Soto-Faraco, Salvador; Tiippana, Kaisa
2014-01-01
Seeing articulatory movements influences perception of auditory speech. This is often reflected in a shortened latency of auditory event-related potentials (ERPs) generated in the auditory cortex. The present study addressed whether this early neural correlate of audiovisual interaction is modulated by attention. We recorded ERPs in 15 subjects while they were presented with auditory, visual, and audiovisual spoken syllables. Audiovisual stimuli consisted of incongruent auditory and visual components known to elicit a McGurk effect, i.e., a visually driven alteration in the auditory speech percept. In a Dual task condition, participants were asked to identify spoken syllables whilst monitoring a rapid visual stream of pictures for targets, i.e., they had to divide their attention. In a Single task condition, participants identified the syllables without any other tasks, i.e., they were asked to ignore the pictures and focus their attention fully on the spoken syllables. The McGurk effect was weaker in the Dual task than in the Single task condition, indicating an effect of attentional load on audiovisual speech perception. Early auditory ERP components, N1 and P2, peaked earlier to audiovisual stimuli than to auditory stimuli when attention was fully focused on syllables, indicating neurophysiological audiovisual interaction. This latency decrement was reduced when attention was loaded, suggesting that attention influences early neural processing of audiovisual speech. We conclude that reduced attention weakens the interaction between vision and audition in speech.
Perrone-Bertolotti, Marcela; Lemonnier, Sophie; Baciu, Monica
2013-01-01
HIGHLIGHTS: The redundant bilateral visual presentation of verbal stimuli decreases asymmetry and increases the cooperation between the two hemispheres. The increased cooperation between the hemispheres is related to semantic information during lexical processing. The inter-hemispheric interaction is represented by both inhibition and cooperation. This study explores inter-hemispheric interaction (IHI) during a lexical decision task by using a behavioral approach, the bilateral presentation of stimuli within a divided visual field experiment. Previous studies have shown that compared to unilateral presentation, the bilateral redundant (BR) presentation decreases the inter-hemispheric asymmetry and facilitates the cooperation between hemispheres. However, it is still poorly understood which type of information facilitates this cooperation. In the present study, verbal stimuli were presented unilaterally (left or right visual hemi-field successively) and bilaterally (left and right visual hemi-field simultaneously). Moreover, during the bilateral presentation of stimuli, we manipulated the relationship between target and distractors in order to specify the type of information which modulates the IHI. Thus, three types of information were manipulated: perceptual, semantic, and decisional, respectively named pre-lexical, lexical and post-lexical processing. Our results revealed left hemisphere (LH) lateralization during the lexical decision task. In terms of inter-hemispheric interaction, the perceptual and decision-making information increased the inter-hemispheric asymmetry, suggesting the inhibition of one hemisphere upon the other. In contrast, semantic information decreased the inter-hemispheric asymmetry, suggesting cooperation between the hemispheres. We discussed our results according to current models of IHI and concluded that cerebral hemispheres interact and communicate according to various excitatory and inhibitory mechanisms, all of which depend on specific processes and various levels of word processing. PMID:23818879
Audio–visual interactions for motion perception in depth modulate activity in visual area V3A
Ogawa, Akitoshi; Macaluso, Emiliano
2013-01-01
Multisensory signals can enhance the spatial perception of objects and events in the environment. Changes of visual size and auditory intensity provide us with the main cues about motion direction in depth. However, frequency changes in audition and binocular disparity in vision also contribute to the perception of motion in depth. Here, we presented subjects with several combinations of auditory and visual depth-cues to investigate multisensory interactions during processing of motion in depth. The task was to discriminate the direction of auditory motion in depth according to increasing or decreasing intensity. Rising or falling auditory frequency provided an additional within-audition cue that matched or did not match the intensity change (i.e. intensity-frequency (IF) “matched vs. unmatched” conditions). In two-thirds of the trials, a task-irrelevant visual stimulus moved either in the same or opposite direction of the auditory target, leading to audio–visual “congruent vs. incongruent” between-modalities depth-cues. Furthermore, these conditions were presented either with or without binocular disparity. Behavioral data showed that the best performance was observed in the audio–visual congruent condition with IF matched. Brain imaging results revealed maximal response in visual area V3A when all cues provided congruent and reliable depth information (i.e. audio–visual congruent, IF-matched condition including disparity cues). Analyses of effective connectivity revealed increased coupling from auditory cortex to V3A specifically in audio–visual congruent trials. We conclude that within- and between-modalities cues jointly contribute to the processing of motion direction in depth, and that they do so via dynamic changes of connectivity between visual and auditory cortices. PMID:23333414
Meyer, Georg F.; Shao, Fei; White, Mark D.; Hopkins, Carl; Robotham, Antony J.
2013-01-01
Externally generated visual motion signals can cause the illusion of self-motion in space (vection) and corresponding visually evoked postural responses (VEPR). These VEPRs are not simple responses to optokinetic stimulation, but are modulated by the configuration of the environment. The aim of this paper is to explore what factors modulate VEPRs in a high quality virtual reality (VR) environment where real and virtual foreground objects served as static visual, auditory and haptic reference points. Data from four experiments on visually evoked postural responses show that: 1) visually evoked postural sway in the lateral direction is modulated by the presence of static anchor points that can be haptic, visual and auditory reference signals; 2) real objects and their matching virtual reality representations as visual anchors have different effects on postural sway; 3) visual motion in the anterior-posterior plane induces robust postural responses that are not modulated by the presence of reference signals or the reality of objects that can serve as visual anchors in the scene. We conclude that automatic postural responses for laterally moving visual stimuli are strongly influenced by the configuration and interpretation of the environment and draw on multisensory representations. Different postural responses were observed for real and virtual visual reference objects. On the basis that automatic visually evoked postural responses in high fidelity virtual environments should mimic those seen in real situations we propose to use the observed effect as a robust objective test for presence and fidelity in VR. PMID:23840760
NASA Astrophysics Data System (ADS)
Kilb, D.; Reif, C.; Peach, C.; Keen, C. S.; Smith, B.; Mellors, R. J.
2003-12-01
Within the last year scientists and educators at the Scripps Institution of Oceanography (SIO), the Birch Aquarium at Scripps and San Diego State University have collaborated with education specialists to develop 3D interactive graphic teaching modules for use in the classroom and in teacher workshops at the SIO Visualization center (http://siovizcenter.ucsd.edu). The unique aspect of the SIO Visualization center is that the center is designed around a 120 degree curved Panoram floor-to-ceiling screen (8'6" by 28'4") that immerses viewers in a virtual environment. The center is powered by an SGI 3400 Onyx computer that is more powerful, by an order of magnitude in both speed and memory, than typical base systems currently used for education and outreach presentations. This technology allows us to display multiple 3D data layers (e.g., seismicity, high resolution topography, seismic reflectivity, draped interferometric synthetic aperture radar (InSAR) images, etc.) simultaneously, render them in 3D stereo, and take a virtual flight through the data as dictated on the spot by the user. This system can also render snapshots, images and movies that are too big for other systems, and then export smaller size end-products to more commonly used computer systems. Since early 2002, we have explored various ways to provide informal education and outreach focusing on current research presented directly by the researchers doing the work. The Center currently provides a centerpiece for instruction on southern California seismology for K-12 students and teachers for various Scripps education endeavors. Future plans are in place to use the Visualization Center at Scripps for extended K-12 and college educational programs. In particular, we will be identifying K-12 curriculum needs, assisting with teacher education, developing assessments of our programs and products, producing web-accessible teaching modules and facilitating the development of appropriate teaching tools to be used directly by classroom teachers.
Gueguen, Marc; Vuillerme, Nicolas; Isableu, Brice
2012-01-01
Background: The selection of appropriate frames of reference (FOR) is a key factor in the elaboration of spatial perception and the production of robust interaction with our environment. The extent to which we perceive the head axis orientation (subjective head orientation, SHO) with both accuracy and precision likely contributes to the efficiency of these spatial interactions. A first goal of this study was to investigate the relative contribution of both the visual and the egocentric FOR (centre of mass) to SHO processing. A second goal was to investigate humans' ability to report SHO in various sensory response modalities (visual, haptic and visuo-haptic), and the way these modalities modify reliance on either the visual or the egocentric FOR. A third goal was to question whether subjects combined visual and haptic cues optimally to increase SHO certainty and to decrease the FOR disruption effect. Methodology/Principal Findings: Thirteen subjects were asked to indicate their SHO while the visual and/or egocentric FORs were deviated. Four results emerged from our study. First, visual rod settings to SHO were altered by the tilted visual frame but not by the egocentric FOR alteration, whereas haptic settings were not altered by either the egocentric FOR alteration or the tilted visual frame; these results were modulated by the individual analyses. Second, visual and egocentric FOR dependency appear to be negatively correlated. Third, enriching the response modality appears to improve SHO. Fourth, several combination rules for the visuo-haptic cues, such as the Maximum Likelihood Estimation (MLE), Winner-Take-All (WTA) or Unweighted Mean (UWM) rule, seem to account for SHO improvements. However, the UWM rule seems to best account for the improvement of visuo-haptic estimates, especially in situations with high FOR incongruence. Finally, the data also indicated that FOR reliance resulted from the application of the UWM rule; this was observed particularly in visually dependent subjects. Conclusions: Taken together, these findings emphasize the importance of identifying individual spatial FOR preferences to assess the efficiency of our interaction with the environment whilst performing spatial tasks. PMID:22509295
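The three visuo-haptic combination rules compared in the abstract (MLE, WTA and UWM) can be written down compactly. The sketch below is a generic illustration with invented estimates and variances, not the authors' analysis code.

```python
# Three cue-combination rules applied to a head-orientation estimate:
# reliability-weighted maximum likelihood, winner-take-all, and unweighted mean.
import numpy as np

def mle(estimates, variances):
    w = 1.0 / np.asarray(variances)            # weights proportional to reliability
    return float(np.sum(w * estimates) / np.sum(w))

def winner_take_all(estimates, variances):
    return float(estimates[int(np.argmin(variances))])   # keep the most reliable cue

def unweighted_mean(estimates, variances):
    return float(np.mean(estimates))

visual_deg, haptic_deg = 4.0, 1.0          # hypothetical SHO settings (degrees)
visual_var, haptic_var = 2.0, 1.0          # hypothetical cue variances

est = np.array([visual_deg, haptic_deg])
var = np.array([visual_var, haptic_var])
for rule in (mle, winner_take_all, unweighted_mean):
    print(rule.__name__, round(rule(est, var), 2))
```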
Design and implementation of visualization methods for the CHANGES Spatial Decision Support System
NASA Astrophysics Data System (ADS)
Cristal, Irina; van Westen, Cees; Bakker, Wim; Greiving, Stefan
2014-05-01
The CHANGES Spatial Decision Support System (SDSS) is a web-based system for risk assessment and the evaluation of optimal risk reduction alternatives at the local level, intended as a decision support tool for long-term natural risk management. The SDSS uses multidimensional information, integrating thematic, spatial, temporal and documentary data. The role of visualization in this context becomes of vital importance for efficiently representing each dimension. This multidimensional character of the risk information required by the system, combined with the diversity of the end users, calls for sophisticated visualization methods and tools. The key goal of the present work is to exploit the large amount of data efficiently, in relation to the needs of the end user, using proper visualization techniques. Three main tasks have been accomplished for this purpose: categorization of the end users, definition of the system's modules, and definition of the data. The graphical representation of the data and the visualization tools were designed to be relevant to the data type and the purpose of the analysis. Depending on the end-user category, each user should have access to different modules of the system and thus to the proper visualization environment. The technologies used for the development of the visualization component combine the latest and most innovative open source JavaScript frameworks, such as OpenLayers 2.13.1, ExtJS 4 and GeoExt 2. Moreover, the model-view-controller (MVC) pattern is used in order to ensure flexibility of the system at the implementation level. Using the above technologies, the visualization techniques implemented so far offer interactive map navigation, querying and comparison tools. The map comparison tools are of great importance within the SDSS and include the following: a swiping tool for comparison of different data at the same location; raster subtraction for comparison of the same phenomenon at different times; linked views for comparison of data from different locations; and a time slider tool for monitoring changes in spatio-temporal data. All these techniques are part of the interactive interface of the system and make use of spatial and spatio-temporal data. Further significant aspects of the visualization component include conventional cartographic techniques and the visualization of non-spatial data. The main expectation from the present work is to offer efficient visualization of risk-related data in order to facilitate the decision-making process, which is the final purpose of the CHANGES SDSS. This work is part of the "CHANGES" project, funded by the European Community's 7th Framework Programme.
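One of the map comparison tools listed above, raster subtraction, amounts to differencing two co-registered grids. The sketch below shows only that underlying operation in NumPy; the CHANGES SDSS itself performs the comparison client-side with the JavaScript stack described above, and the file names here are hypothetical.

```python
# Back-end illustration of the "raster subtraction" comparison: difference two
# rasters of the same risk indicator at two points in time.
import numpy as np

risk_2010 = np.load("risk_index_2010.npy")   # assumed 2-D grid
risk_2020 = np.load("risk_index_2020.npy")   # same extent and resolution as above

change = risk_2020 - risk_2010               # positive values = increased risk

print("cells with increased risk:", int((change > 0).sum()))
print("largest increase:", float(change.max()))
```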
OIPAV: an integrated software system for ophthalmic image processing, analysis and visualization
NASA Astrophysics Data System (ADS)
Zhang, Lichun; Xiang, Dehui; Jin, Chao; Shi, Fei; Yu, Kai; Chen, Xinjian
2018-03-01
OIPAV (Ophthalmic Image Processing, Analysis and Visualization) is a cross-platform software system specially oriented to ophthalmic images. It provides a wide range of functionalities, including data I/O, image processing, interaction, ophthalmic disease detection, data analysis and visualization, to help researchers and clinicians deal with various ophthalmic images such as optical coherence tomography (OCT) images and color fundus photographs. It enables users to easily access ophthalmic image data produced by different imaging devices, facilitates the workflow of processing ophthalmic images, and improves quantitative evaluation. In this paper, we present the system design and functional modules of the platform and demonstrate various applications. Given its functional scalability and expandability, we believe that the software can be widely applied in the ophthalmology field.
Stock, Ann-Kathrin; Wascher, Edmund; Beste, Christian
2013-01-01
It is well-known that sensory information influences the way we execute motor responses. However, less is known about whether and how sensory and motor information are integrated in the subsequent process of response evaluation. We used a modified Simon Task to investigate how these streams of information are integrated in response evaluation processes, applying an in-depth neurophysiological analysis of event-related potentials (ERPs), time-frequency decomposition and sLORETA. The results show that response evaluation processes are differentially modulated by afferent proprioceptive information and efference copies. While the influence of proprioceptive information is mediated via oscillations in different frequency bands, efference-copy-based information about the motor execution is specifically mediated via oscillations in the theta frequency band. Stages of visual perception and attention were not modulated by the interaction of proprioception and motor efference copies. Brain areas modulated by the interactive effects of proprioceptive and efference-copy-based information included the middle frontal gyrus and the supplementary motor area (SMA), suggesting that these areas integrate sensory information for the purpose of response evaluation. The results show how motor response evaluation processes are modulated by information about both the execution and the location of a response. PMID:23658624
Ogulmus, Cansu; Karacaoglu, Merve; Kafaligonul, Hulusi
2018-03-01
The coordination of intramodal perceptual grouping and crossmodal interactions plays a critical role in constructing coherent multisensory percepts. However, the basic principles underlying such coordinating mechanisms still remain unclear. By taking advantage of an illusion called temporal ventriloquism and its influences on perceived speed, we investigated how audiovisual interactions in time are modulated by the spatial grouping principles of vision. In our experiments, we manipulated the spatial grouping principles of proximity, uniform connectedness, and similarity/common fate in apparent motion displays. Observers compared the speed of apparent motions across different sound timing conditions. Our results revealed that the effects of sound timing (i.e., temporal ventriloquism effects) on perceived speed also existed in visual displays containing more than one object and were modulated by different spatial grouping principles. In particular, uniform connectedness was found to modulate these audiovisual interactions in time. The effect of sound timing on perceived speed was smaller when horizontal connecting bars were introduced along the path of apparent motion. When the objects in each apparent motion frame were not connected or connected with vertical bars, the sound timing was more influential compared to the horizontal bar conditions. Overall, our findings here suggest that the effects of sound timing on perceived speed exist in different spatial configurations and can be modulated by certain intramodal spatial grouping principles such as uniform connectedness.
Direct visualization of critical hydrogen atoms in a pyridoxal 5'-phosphate enzyme.
Dajnowicz, Steven; Johnston, Ryne C; Parks, Jerry M; Blakeley, Matthew P; Keen, David A; Weiss, Kevin L; Gerlits, Oksana; Kovalevsky, Andrey; Mueser, Timothy C
2017-10-16
Enzymes dependent on pyridoxal 5'-phosphate (PLP, the active form of vitamin B6) perform a myriad of diverse chemical transformations. They promote various reactions by modulating the electronic states of PLP through weak interactions in the active site. Neutron crystallography has the unique ability of visualizing the nuclear positions of hydrogen atoms in macromolecules. Here we present a room-temperature neutron structure of a homodimeric PLP-dependent enzyme, aspartate aminotransferase, which was reacted in situ with α-methylaspartate. In one monomer, the PLP remained as an internal aldimine with a deprotonated Schiff base. In the second monomer, the external aldimine formed with the substrate analog. We observe a deuterium equidistant between the Schiff base and the C-terminal carboxylate of the substrate, a position indicative of a low-barrier hydrogen bond. Quantum chemical calculations and a low-pH room-temperature X-ray structure provide insight into the physical phenomena that control the electronic modulation in aspartate aminotransferase. Pyridoxal 5'-phosphate (PLP) is a ubiquitous cofactor for diverse enzymes, among them aspartate aminotransferase. Here the authors use neutron crystallography, which allows the visualization of the positions of hydrogen atoms, and computation to characterize the catalytic mechanism of the enzyme.
Interactive Maps on War and Peace: A WebGIS Application for Civic Education
NASA Astrophysics Data System (ADS)
Wirkus, Lars; Strunck, Alexander
2013-04-01
War and violent conflict are omnipresent, be it war in the Middle East, violent conflicts in failed states or increasing military expenditures and exports/imports of military goods. To understand certain conflicts or peace processes and their possible interrelation, to conduct a well-founded political discussion and to support or influence decision-making, one matter is of special importance: easily accessible and, in particular, reliable data and information. Against this background, the Bonn International Center for Conversion (BICC), in close cooperation with the German Federal Agency for Civic Education (bpb), has been developing a map-based information portal on war and peace with various thematic modules for the latter's online service (http://sicherheitspolitik.bpb.de). The portal will eventually offer nine such modules that are intended to give various target groups, such as interested members of the public, teachers and learners, policymakers and representatives of the media, access to the required information in the form of an interactive and country-based global overview or a comparison of different issues. Five thematic modules have been completed so far: war and conflict, peace and demobilization, military capacities, resources and conflict, and conventional weapons. The portal offers a broad spectrum of different data processing and visualization tools. Its central feature is an interactive mapping component based on WebGIS and a relational database. Content and data provided through thematic maps in the form of WebGIS layers are generally supplemented by infographics, data tables and short articles providing deeper knowledge on the respective issue. All modules and their sub-chapters are introduced by background texts, which put all interactive maps of a module into an appropriate context and help users understand the interrelations between the various layers. If a layer is selected, all corresponding texts and graphics are shown automatically below the map. Data tables are offered if the copyright of the datasets allows such use. The data of all thematic modules are presented in country profiles in a consolidated manner. The portal has been created with open source software: PostgreSQL and PostGIS, MapServer, OpenLayers, MapProxy and cmsmadesimple are combined to manipulate and transform global data sets into interactive thematic maps. A purpose-programmed layer selection menu enables users to select single layers or to combine up to three matching layers from all possible pre-set layer combinations. This applies both to fields of topics within a module and across various modules. Due to the complexity of the structure and visualization constraints, no more than three layers can be combined. The WebGIS-based information portal on war and peace is an excellent example of how GIS technologies can be used for education and outreach. Not only can they play a crucial role in supporting the educational mandate and mission of certain institutions; they can also directly support various target groups in obtaining the knowledge they need by providing a collection of straightforwardly designed, ready-to-use data, infographics and maps.
WEB-GIS Decision Support System for CO2 storage
NASA Astrophysics Data System (ADS)
Gaitanaru, Dragos; Leonard, Anghel; Radu Gogu, Constantin; Le Guen, Yvi; Scradeanu, Daniel; Pagnejer, Mihaela
2013-04-01
The environmental decision support system (DSS) paradigm evolves and changes as more knowledge and technology become available to the environmental community. Geographic Information Systems (GIS) can be used to extract, assess and disseminate some types of information which are otherwise difficult to access by traditional methods. At the same time, with the help of the Internet and accompanying tools, creating and publishing online interactive maps has become easier and rich with options. The Decision Support System (MDSS) developed for the MUSTANG (A MUltiple Space and Time scale Approach for the quaNtification of deep saline formations for CO2 storaGe) project is a user-friendly, web-based application that uses GIS capabilities. The MDSS can be exploited by experts for CO2 injection and storage in deep saline aquifers. The main objective of the MDSS is to help experts make decisions based on large, structured sets of data and information. In order to achieve this objective, the MDSS has a geospatial, object-oriented database structure for a wide variety of data and information. The entire application is based on several principles leading to a series of capabilities and specific characteristics: (i) Open source - the entire platform (MDSS) is based on open-source technologies: (1) database engine, (2) application server, (3) geospatial server, (4) user interfaces, (5) add-ons, etc. (ii) Multiple database connections - the MDSS is capable of connecting to different databases located on different server machines. (iii) Desktop user experience - the MDSS architecture and design follow the structure of desktop software. (iv) Communication - the server side and the desktop are bound together by a series of functions that allow the user to upload, use, modify and download data within the application. The architecture of the system involves one database and a modular application composed of (1) a visualization module, (2) an analysis module, (3) a guidelines module, and (4) a risk assessment module. The Database component is built using the PostgreSQL and PostGIS open source technologies. The visualization module allows the user to view data from CO2 injection sites in different ways: (1) geospatial visualization, (2) table view, (3) 3D visualization. The analysis module allows the user to perform analyses such as injectivity, containment and capacity analysis. The Risk Assessment module focuses on the site risk matrix approach. The Guidelines module contains guidelines on the methodologies for CO2 injection and storage in deep saline aquifers.
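The following is a minimal sketch of how a server-side module might pull injection-site geometries from the PostgreSQL/PostGIS database described above. psycopg2 and the PostGIS function ST_AsGeoJSON are standard, but the table, column names and connection settings are assumptions; this is not the MDSS implementation.

```python
# Query a PostGIS-enabled database for site geometries and return them as
# GeoJSON features, ready for a web mapping client to display.
import json
import psycopg2

conn = psycopg2.connect(dbname="mdss", user="mdss_user",
                        password="secret", host="localhost")   # assumed settings
cur = conn.cursor()
cur.execute("""
    SELECT site_name, ST_AsGeoJSON(geom)
    FROM injection_sites          -- hypothetical table
    WHERE status = %s;
""", ("active",))

features = [
    {"type": "Feature",
     "properties": {"name": name},
     "geometry": json.loads(geojson)}
    for name, geojson in cur.fetchall()
]
print(json.dumps({"type": "FeatureCollection", "features": features}, indent=2))
cur.close()
conn.close()
```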
Mood Modulates Auditory Laterality of Hemodynamic Mismatch Responses during Dichotic Listening
Schock, Lisa; Dyck, Miriam; Demenescu, Liliana R.; Edgar, J. Christopher; Hertrich, Ingo; Sturm, Walter; Mathiak, Klaus
2012-01-01
Hemodynamic mismatch responses can be elicited by deviant stimuli in a sequence of standard stimuli even during cognitive demanding tasks. Emotional context is known to modulate lateralized processing. Right-hemispheric negative emotion processing may bias attention to the right and enhance processing of right-ear stimuli. The present study examined the influence of induced mood on lateralized pre-attentive auditory processing of dichotic stimuli using functional magnetic resonance imaging (fMRI). Faces expressing emotions (sad/happy/neutral) were presented in a blocked design while a dichotic oddball sequence with consonant-vowel (CV) syllables in an event-related design was simultaneously administered. Twenty healthy participants were instructed to feel the emotion perceived on the images and to ignore the syllables. Deviant sounds reliably activated bilateral auditory cortices and confirmed attention effects by modulation of visual activity. Sad mood induction activated visual, limbic and right prefrontal areas. A lateralization effect of emotion-attention interaction was reflected in a stronger response to right-ear deviants in the right auditory cortex during sad mood. This imbalance of resources may be a neurophysiological correlate of laterality in sad mood and depression. Conceivably, the compensatory right-hemispheric enhancement of resources elicits increased ipsilateral processing. PMID:22384105
Modeling the Effects of Perceptual Load: Saliency, Competitive Interactions, and Top-Down Biases
Neokleous, Kleanthis; Shimi, Andria; Avraamides, Marios N.
2016-01-01
A computational model of visual selective attention has been implemented to account for experimental findings on the Perceptual Load Theory (PLT) of attention. The model was designed based on existing neurophysiological findings on attentional processes with the objective to offer an explicit and biologically plausible formulation of PLT. Simulation results verified that the proposed model is capable of capturing the basic pattern of results that support the PLT as well as findings that are considered contradictory to the theory. Importantly, the model is able to reproduce the behavioral results from a dilution experiment, providing thus a way to reconcile PLT with the competing Dilution account. Overall, the model presents a novel account for explaining PLT effects on the basis of the low-level competitive interactions among neurons that represent visual input and the top-down signals that modulate neural activity. The implications of the model concerning the debate on the locus of selective attention as well as the origins of distractor interference in visual displays of varying load are discussed. PMID:26858668
A Hypermedia Training Module for the Navy’s P-3C Armament System
1993-07-01
procedures. Both aural and visual cues are used throughout the program as necessary to alert the learner to specific items requiring his attention... learner the opportunity for a great deal of interactivity and feedback. The project is divided into five chapters including an introduction, review of the... literature, methodology, program description, and summary and conclusions. The literature review concentrates on the following topics: adult learners
Advanced Visualization and Interactive Displays (AVID)
2009-04-01
decision maker. The ACESViewer architecture allows the users to pull data from databases, flat files, or user-generated sources via scripting. The... of the equation and is of critical concern as it scales the needs of the polygon fill operations. Numerous users are now using two 30” cinema ...6 module configuration. Based on the architecture of the lab there was only one location that would be suitable without any viewing obstructions
Generic Space Science Visualization in 2D/3D using SDDAS
NASA Astrophysics Data System (ADS)
Mukherjee, J.; Murphy, Z. B.; Gonzalez, C. A.; Muller, M.; Ybarra, S.
2017-12-01
The Southwest Data Display and Analysis System (SDDAS) is a flexible, multi-mission/multi-instrument software system intended to support space physics data analysis, and it has been in active development for over 20 years. For the Magnetospheric Multi-Scale (MMS), Juno, Cluster, and Mars Express missions, we have modified these generic tools for visualizing data in two and three dimensions. The SDDAS software is open source and makes use of various other open source packages, including VTK and Qwt. The software offers interactive plotting as well as Python and Lua modules to modify the data before plotting. In theory, by writing a Lua or Python module to read the data, any data could be used. Currently, the software can natively read data in IDFS, CEF, CDF, FITS, SEG-Y, ASCII, and XLS formats. We have integrated the software with other Python packages such as SPICE and SpacePy. Included with the visualization software is a database application and other utilities for managing data that can retrieve data from the Cluster Active Archive and the Space Physics Data Facility at Goddard, as well as from other local archives. Line plots, spectrograms, geographic plots, volume plots, strip charts, etc. are just some of the types of plots one can generate with SDDAS. Furthermore, due to its design, output is not limited strictly to visualization, as SDDAS can also be used to generate stand-alone IDL or Python visualization code. Lastly, SDDAS has also been successfully used as a backend for several web-based analysis systems.
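The abstract mentions Python modules that modify data before plotting and integration with SpacePy. The sketch below illustrates that kind of pre-plot step on a CDF file using SpacePy's pycdf and matplotlib; the file and variable names are hypothetical and this is not SDDAS code.

```python
# Read a variable from a CDF file, mask fill values as a simple pre-plot
# modification, then plot the result.
import matplotlib.pyplot as plt
import numpy as np
from spacepy import pycdf

cdf = pycdf.CDF("mms_example_data.cdf")          # hypothetical CDF file
epochs = cdf["Epoch"][...]                       # assumed time variable
density = np.asarray(cdf["ion_density"][...])    # assumed data variable
cdf.close()

density = np.where(density < 0, np.nan, density)  # mask fill values before display

plt.plot(epochs, density)
plt.xlabel("Time")
plt.ylabel("Ion density (cm^-3)")
plt.show()
```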
You prime what you code: The fAIM model of priming of pop-out
Meeter, Martijn
2017-01-01
Our visual brain makes use of recent experience to interact with the visual world and to efficiently select relevant information. This is exemplified by speeded search when target and distractor features repeat across trials versus when they switch, a phenomenon referred to as intertrial priming. Here, we present fAIM, a computational model that demonstrates how priming can be explained by a simple feature-weighting mechanism integrated into an established model of bottom-up vision. In fAIM, such modulations in feature gains are widespread and not just restricted to one or a few features. Consequently, priming effects result from the overall tuning of visual features to the task at hand. Such tuning allows the model to reproduce priming for different types of stimuli, including for typical stimulus dimensions such as ‘color’ and for less obvious dimensions such as ‘spikiness’ of shapes. Moreover, the model explains some puzzling findings from the literature: it shows how priming can be found for target-distractor stimulus relations rather than for their absolute stimulus values per se, without an explicit representation of relations. Similarly, it simulates effects that have been taken to reflect a modulation of priming by an observer's goals, without any representation of goals in the model. We conclude that priming is best considered as a consequence of a general adaptation of the brain to visual input, and not as a peculiarity of visual search. PMID:29166386
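A toy sketch of the feature-gain idea summarized above: gains on the previous target's feature value rise and gains on the previous distractor's value fall, so search is faster when the target feature repeats than when it switches. The update rule, parameters and reaction-time scaling are invented for illustration and are not the published fAIM equations.

```python
# Feature-gain toy model: repeated targets inherit a boosted gain and are found
# faster; switched targets inherit a suppressed gain and are found more slowly.
import numpy as np

n_features = 2        # e.g., feature 0 = red target, feature 1 = green target
learning_rate = 0.2

def simulate(target_sequence):
    gains = np.ones(n_features)
    rts = []
    for target in target_sequence:
        rts.append(600.0 / gains[target])       # arbitrary 600 ms baseline
        distractor = 1 - target
        # Tune gains toward the current target and away from the distractor
        gains[target] += learning_rate * (2.0 - gains[target])
        gains[distractor] += learning_rate * (0.5 - gains[distractor])
    return [round(rt) for rt in rts]

print("repeat trials:", simulate([0, 0, 0, 0]))   # RTs shrink across trials
print("switch trials:", simulate([0, 1, 0, 1]))   # switching keeps RTs high
```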
Great expectations: top-down attention modulates the costs of clutter and eccentricity.
Steelman, Kelly S; McCarley, Jason S; Wickens, Christopher D
2013-12-01
An experiment and modeling effort examined interactions between bottom-up and top-down attentional control in visual alert detection. Participants performed a manual tracking task while monitoring peripheral display channels for alerts of varying salience, eccentricity, and spatial expectancy. Spatial expectancy modulated the influence of salience and eccentricity; alerts in low-probability locations engendered higher miss rates, longer detection times, and larger costs of visual clutter and eccentricity, indicating that top-down attentional control offset the costs of poor bottom-up stimulus quality. Data were compared to the predictions of a computational model of scanning and noticing that incorporates bottom-up and top-down sources of attentional control. The model accounted well for the overall pattern of miss rates and response times, predicting each of the observed main effects and interactions. Empirical results suggest that designers should expect the costs of poor bottom-up visibility to be greater for low expectancy signals, and that the placement of alerts within a display should be determined based on the combination of alert expectancy and response priority. Model fits suggest that the current model can serve as a useful tool for exploring a design space as a precursor to empirical data collection and for generating hypotheses for future experiments. PsycINFO Database Record (c) 2013 APA, all rights reserved.
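The following is a hedged toy sketch of how bottom-up salience, eccentricity and top-down expectancy might combine into a probability of noticing an alert, in the spirit of the scanning-and-noticing model mentioned above. The logistic form, weights and parameter values are assumptions for illustration, not the authors' model.

```python
# Toy noticing model: low expectancy amplifies the costs of low salience and
# large eccentricity, mirroring the interaction reported in the abstract.
import math

def p_notice(salience, eccentricity_deg, expectancy,
             w_sal=2.5, w_ecc=0.12, bias=1.5):
    """salience and expectancy in [0, 1]; eccentricity in visual degrees."""
    cost = w_sal * (1.0 - salience) + w_ecc * eccentricity_deg
    drive = bias - (1.0 - expectancy) * cost   # high expectancy offsets the cost
    return 1.0 / (1.0 + math.exp(-drive))

# Same weak, peripheral alert at a high- vs. low-expectancy location
print(p_notice(salience=0.2, eccentricity_deg=20, expectancy=0.9))
print(p_notice(salience=0.2, eccentricity_deg=20, expectancy=0.1))
```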
NASA Astrophysics Data System (ADS)
Isik-Ercan, Zeynep; Zeynep Inan, Hatice; Nowak, Jeffrey A.; Kim, Beomjin
2014-01-01
This qualitative case study describes (a) the ways 3D visualization, coupled with other science and literacy experiences, supported young children's first exploration of the Earth-Sun-Moon system and (b) the perspectives of classroom teachers and children on using 3D visualization. We created three interactive 3D software modules that simulate day and night, Moon phases and seasons. These modules were used in a science and literacy unit for 35 second graders at an urban elementary school in Midwestern USA. Data included pre- and post-interviews, audio-taped lessons and classroom observations. Post-interviews demonstrated that children's knowledge of the shapes and the movements of the Earth and Moon, alternation of day and night, the occurrence of the seasons, and Moon's changing appearance increased. Second graders reported that they enjoyed expanding their knowledge through hands-on experiences; through its reality effect, 3D visualization enabled them to observe the space objects that move in the virtual space. The teachers noted that 3D visualization stimulated children's interest in space and that using 3D visualization in combination with other teaching methods (literacy experiences, videos and photos, simulations, discussions, and presentations) supported student learning. The teachers and the students still experienced challenges using 3D visualization due to technical problems with 3D vision and time constraints. We conclude that 3D visualization offers hands-on experiences for challenging science concepts and may support young children's ability to view phenomena that would typically be observed through direct, long-term observations in outer space. Results imply a reconsideration of assumed capabilities of young children to understand astronomical phenomena.
Meijer, Guido T; Montijn, Jorrit S; Pennartz, Cyriel M A; Lansink, Carien S
2017-09-06
The sensory neocortex is a highly connected associative network that integrates information from multiple senses, even at the level of the primary sensory areas. Although a growing body of empirical evidence supports this view, the neural mechanisms of cross-modal integration in primary sensory areas, such as the primary visual cortex (V1), are still largely unknown. Using two-photon calcium imaging in awake mice, we show that the encoding of audiovisual stimuli in V1 neuronal populations is highly dependent on the features of the stimulus constituents. When the visual and auditory stimulus features were modulated at the same rate (i.e., temporally congruent), neurons responded with either an enhancement or suppression compared with unisensory visual stimuli, and their prevalence was balanced. Temporally incongruent tones or white-noise bursts included in audiovisual stimulus pairs resulted in predominant response suppression across the neuronal population. Visual contrast did not influence multisensory processing when the audiovisual stimulus pairs were congruent; however, when white-noise bursts were used, neurons generally showed response suppression when the visual stimulus contrast was high whereas this effect was absent when the visual contrast was low. Furthermore, a small fraction of V1 neurons, predominantly those located near the lateral border of V1, responded to sound alone. These results show that V1 is involved in the encoding of cross-modal interactions in a more versatile way than previously thought. SIGNIFICANCE STATEMENT The neural substrate of cross-modal integration is not limited to specialized cortical association areas but extends to primary sensory areas. Using two-photon imaging of large groups of neurons, we show that multisensory modulation of V1 populations is strongly determined by the individual and shared features of cross-modal stimulus constituents, such as contrast, frequency, congruency, and temporal structure. Congruent audiovisual stimulation resulted in a balanced pattern of response enhancement and suppression compared with unisensory visual stimuli, whereas incongruent or dissimilar stimuli at full contrast gave rise to a population dominated by response-suppressing neurons. Our results indicate that V1 dynamically integrates nonvisual sources of information while still attributing most of its resources to coding visual information. Copyright © 2017 the authors 0270-6474/17/378783-14$15.00/0.
Contextual modulation and stimulus selectivity in extrastriate cortex.
Krause, Matthew R; Pack, Christopher C
2014-11-01
Contextual modulation is observed throughout the visual system, using techniques ranging from single-neuron recordings to behavioral experiments. Its role in generating feature selectivity within the retina and primary visual cortex has been extensively described in the literature. Here, we describe how similar computations can also elaborate feature selectivity in the extrastriate areas of both the dorsal and ventral streams of the primate visual system. We discuss recent work that makes use of normalization models to test specific roles for contextual modulation in visual cortex function. We suggest that contextual modulation renders neuronal populations more selective for naturalistic stimuli. Specifically, we discuss contextual modulation's role in processing optic flow in areas MT and MST and for representing naturally occurring curvature and contours in areas V4 and IT. We also describe how the circuitry that supports contextual modulation is robust to variations in overall input levels. Finally, we describe how this theory relates to other hypothesized roles for contextual modulation. Copyright © 2014 Elsevier Ltd. All rights reserved.
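The normalization models discussed here generally share the canonical divisive form, in which each unit's driven response is divided by a weighted pool of its neighbors' activity. The sketch below states that form generically; the weights, exponent, and semisaturation constant are illustrative values, not parameters from the work reviewed.

```python
import numpy as np

def divisive_normalization(drive, weights, sigma=1.0, n=2.0, gain=1.0):
    """Canonical divisive normalization (hedged sketch).

    drive   : array of stimulus drives to each unit in a population
    weights : matrix w[i, j] giving how strongly unit j's drive
              contributes to unit i's suppressive pool
    """
    drive = np.asarray(drive, dtype=float) ** n
    pool = weights @ drive                      # contextual suppressive pool
    return gain * drive / (sigma ** n + pool)   # normalized responses

# Toy example: a center unit suppressed by two flankers (illustrative numbers).
w = np.array([[0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])
print(divisive_normalization([2.0, 1.0, 1.0], w))
```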
Direct visualization of critical hydrogen atoms in a pyridoxal 5'-phosphate enzyme
Dajnowicz, Steven; Johnston, Ryne C.; Parks, Jerry M.; ...
2017-10-16
Enzymes dependent on pyridoxal 5'-phosphate (PLP, the active form of vitamin B6) perform a myriad of diverse chemical transformations. They promote various reactions by modulating the electronic states of PLP through weak interactions in the active site. Neutron crystallography has the unique ability of visualizing the nuclear positions of hydrogen atoms in macromolecules. Here we present a room-temperature neutron structure of a homodimeric PLP-dependent enzyme, aspartate aminotransferase, which was reacted in situ with α-methylaspartate. In one monomer, the PLP remained as an internal aldimine with a deprotonated Schiff base. In the second monomer, the external aldimine formed with the substrate analog. We observe a deuterium equidistant between the Schiff base and the C-terminal carboxylate of the substrate, a position indicative of a low-barrier hydrogen bond. Quantum chemical calculations and a low-pH room-temperature X-ray structure provide further insight into the physical phenomena that control the electronic modulation in aspartate aminotransferase.
Sleep inertia, sleep homeostatic, and circadian influences on higher-order cognitive functions
Ronda, Joseph M.; Czeisler, Charles A.; Wright, Kenneth P.
2016-01-01
Sleep inertia, sleep homeostatic, and circadian processes modulate cognition, including reaction time, memory, mood, and alertness. How these processes influence higher-order cognitive functions is not well known. Six participants completed a 73-day-long study that included two 14-day-long, 28-h forced desynchrony protocols to examine separate and interacting influences of sleep inertia, sleep homeostasis, and circadian phase on higher-order cognitive functions of inhibitory control and selective visual attention. Cognitive performance for most measures was impaired immediately after scheduled awakening and improved over the first ~2-4 h of wakefulness (sleep inertia); worsened thereafter until scheduled bedtime (sleep homeostasis); and was worst at ~60° and best at ~240° (circadian modulation, with worst and best phases corresponding to ~9 AM and ~9 PM, respectively, in individuals with a habitual waketime of 7 AM). The relative influences of sleep inertia, sleep homeostasis, and circadian phase depended on the specific higher-order cognitive function task examined. Inhibitory control appeared to be modulated most strongly by circadian phase, whereas selective visual attention for a spatial-configuration search task was modulated most strongly by sleep inertia. These findings demonstrate that some higher-order cognitive processes are differentially sensitive to different sleep-wake regulatory processes. Differential modulation of cognitive functions by different sleep-wake regulatory processes has important implications for understanding mechanisms contributing to performance impairments during adverse circadian phases, sleep deprivation, and/or upon awakening from sleep. PMID:25773686
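The phase-to-clock-time correspondence quoted in this abstract (~60° ≈ 9 AM and ~240° ≈ 9 PM for a 7 AM habitual waketime) follows from mapping 360° of circadian phase onto 24 h (15° per hour) with 0° placed roughly 2 h before habitual waketime. The function below is a minimal sketch of that arithmetic under those stated assumptions, not code from the study.

```python
def circadian_phase_to_clock_time(phase_deg, habitual_waketime_h=7.0,
                                  phase0_offset_h=2.0):
    """Map circadian phase (degrees) to clock time (hours), assuming 0 deg falls
    phase0_offset_h hours before habitual waketime and 360 deg spans 24 h
    (i.e., 15 deg per hour). All defaults are illustrative assumptions."""
    reference_h = habitual_waketime_h - phase0_offset_h   # ~5 AM with these defaults
    return (reference_h + phase_deg / 15.0) % 24.0

for deg in (60, 240):
    print(deg, "deg ->", circadian_phase_to_clock_time(deg), "h")  # ~9.0 and ~21.0
```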
Simulation environment and graphical visualization environment: a COPD use-case
2014-01-01
Background: Today, many different tools are developed to execute and visualize physiological models that represent human physiology. Most of these tools run models written in very specific programming languages, which in turn simplifies communication among models. Nevertheless, not all of these tools are able to run models written in different programming languages. In addition, interoperability between such models remains an unresolved issue. Results: In this paper we present a simulation environment that allows, first, the execution of models developed in different programming languages and, second, the communication of parameters to interconnect these models. This simulation environment, developed within the Synergy-COPD project, aims to help bio-researchers and medical students understand the internal mechanisms of the human body through the use of physiological models. The tool is composed of a graphical visualization environment, a web interface through which the user can interact with the models, and a simulation workflow management system comprising a control module and a data warehouse manager. The control module monitors the correct functioning of the whole system. The data warehouse manager is responsible for managing the stored information and supporting its flow among the different modules. The simulation environment has been validated with the integration of three models: two deterministic, i.e., based on linear and differential equations, and one probabilistic, i.e., based on probability theory. These models were selected based on the disease under study in this project, chronic obstructive pulmonary disease. Conclusion: The simulation environment presented here allows the user to research and study the internal mechanisms of human physiology through models accessed via a graphical visualization environment. A new tool for bio-researchers is ready for deployment in various use-case scenarios. PMID:25471327
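As a rough illustration of the architecture described above (a control module stepping heterogeneous models that exchange parameters only through a data warehouse manager), the following toy sketch may help. All class names, parameter names, and the oxygen-transport relation are hypothetical and are not taken from the Synergy-COPD software.

```python
class DataWarehouseManager:
    """Stores shared parameters and routes them between models (toy sketch)."""
    def __init__(self):
        self._store = {}

    def write(self, name, value):
        self._store[name] = value

    def read(self, name, default=None):
        return self._store.get(name, default)


class ControlModule:
    """Steps each registered model in turn and checks it ran correctly."""
    def __init__(self, warehouse):
        self.warehouse = warehouse
        self.models = []

    def register(self, model):
        self.models.append(model)

    def run(self, n_steps):
        for _ in range(n_steps):
            for model in self.models:
                # Models exchange data only via the warehouse, never directly.
                ok = model.step(self.warehouse)
                if not ok:
                    raise RuntimeError(f"{model.__class__.__name__} failed")


class ToyOxygenTransportModel:
    """Hypothetical deterministic model: reads ventilation, writes arterial O2."""
    def step(self, wh):
        ventilation = wh.read("ventilation_l_per_min", 6.0)
        wh.write("arterial_o2_kpa", 8.0 + 0.5 * ventilation)  # illustrative relation only
        return True


wh = DataWarehouseManager()
ctrl = ControlModule(wh)
ctrl.register(ToyOxygenTransportModel())
ctrl.run(n_steps=3)
print(wh.read("arterial_o2_kpa"))
```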
Banca, Paula; Sousa, Teresa; Duarte, Isabel Catarina; Castelo-Branco, Miguel
2015-12-01
Current approaches in neurofeedback/brain-computer interface research often focus on identifying, on a subject-by-subject basis, the neural regions that are best suited for self-driven modulation. It is known that the hMT+/V5 complex, an early visual cortical region, is recruited during explicit and implicit motion imagery, in addition to real motion perception. This study tests the feasibility of training healthy volunteers to regulate the level of activation in their hMT+/V5 complex using real-time fMRI neurofeedback and visual motion imagery strategies. We functionally localized the hMT+/V5 complex to further use as a target region for neurofeedback. A uniform strategy based on motion imagery was used to guide subjects to neuromodulate hMT+/V5. We found that 15/20 participants achieved successful neurofeedback. This modulation led to the recruitment of a specific network as further assessed by psychophysiological interaction analysis. This specific circuit, including hMT+/V5, putative V6, and medial cerebellum, was activated for successful neurofeedback runs. The putamen and anterior insula were recruited for both successful and non-successful runs. Our findings indicate that hMT+/V5 is a region that can be modulated by focused imagery and that a specific cortico-cerebellar circuit is recruited during visual motion imagery leading to successful neurofeedback. These findings contribute to the debate on the relative potential of extrinsic (sensory) versus intrinsic (default-mode) brain regions in the clinical application of neurofeedback paradigms. This novel circuit might be a good target for future neurofeedback approaches that aim, for example, at training focused attention in disorders such as ADHD.
NASA Astrophysics Data System (ADS)
Banca, Paula; Sousa, Teresa; Catarina Duarte, Isabel; Castelo-Branco, Miguel
2015-12-01
Objective. Current approaches in neurofeedback/brain-computer interface research often focus on identifying, on a subject-by-subject basis, the neural regions that are best suited for self-driven modulation. It is known that the hMT+/V5 complex, an early visual cortical region, is recruited during explicit and implicit motion imagery, in addition to real motion perception. This study tests the feasibility of training healthy volunteers to regulate the level of activation in their hMT+/V5 complex using real-time fMRI neurofeedback and visual motion imagery strategies. Approach. We functionally localized the hMT+/V5 complex to further use as a target region for neurofeedback. A uniform strategy based on motion imagery was used to guide subjects to neuromodulate hMT+/V5. Main results. We found that 15/20 participants achieved successful neurofeedback. This modulation led to the recruitment of a specific network as further assessed by psychophysiological interaction analysis. This specific circuit, including hMT+/V5, putative V6, and medial cerebellum, was activated for successful neurofeedback runs. The putamen and anterior insula were recruited for both successful and non-successful runs. Significance. Our findings indicate that hMT+/V5 is a region that can be modulated by focused imagery and that a specific cortico-cerebellar circuit is recruited during visual motion imagery leading to successful neurofeedback. These findings contribute to the debate on the relative potential of extrinsic (sensory) versus intrinsic (default-mode) brain regions in the clinical application of neurofeedback paradigms. This novel circuit might be a good target for future neurofeedback approaches that aim, for example, at training focused attention in disorders such as ADHD.
NASA Astrophysics Data System (ADS)
Lee, Sangho; Suh, Jangwon; Park, Hyeong-Dong
2015-03-01
Boring logs are widely used in geological field studies since they describe various attributes of underground and surface environments. However, it is difficult to manage multiple boring logs in the field, as conventional management and visualization methods are not suitable for integrating and combining large data sets. We developed an iPad application that enables users to search boring logs rapidly and visualize them using the augmented reality (AR) technique. For the development of the application, a standard borehole database appropriate for a mobile borehole database management system was designed. The application consists of three modules: an AR module, a map module, and a database module. The AR module superimposes borehole data on camera imagery as viewed by the user and provides intuitive visualization of borehole locations. The map module shows the locations of the corresponding borehole data on a 2D map with additional map layers. The database module provides data management functions for large borehole databases used by the other modules. A field survey was also carried out using more than 100,000 borehole records.
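A core operation implied by the AR and map modules is retrieving the borehole records nearest to the user's current position. The sketch below illustrates one simple way to do this with a haversine radius search; the record schema and field names are hypothetical and are not the standard borehole database designed for the application.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def boreholes_in_range(user_lat, user_lon, boreholes, radius_m=500.0):
    """Return borehole records within radius_m of the user, nearest first.
    Each record is a dict with 'id', 'lat', 'lon' (illustrative schema)."""
    hits = [(haversine_m(user_lat, user_lon, b["lat"], b["lon"]), b) for b in boreholes]
    return [b for d, b in sorted(hits, key=lambda x: x[0]) if d <= radius_m]

demo = [{"id": "BH-001", "lat": 37.565, "lon": 126.978},
        {"id": "BH-002", "lat": 37.570, "lon": 126.982}]
print([b["id"] for b in boreholes_in_range(37.566, 126.979, demo)])
```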
SPICE Module for the Satellite Orbit Analysis Program (SOAP)
NASA Technical Reports Server (NTRS)
Coggi, John; Carnright, Robert; Hildebrand, Claude
2008-01-01
A SPICE module for the Satellite Orbit Analysis Program (SOAP) precisely represents complex motion and maneuvers in an interactive, 3D animated environment with support for user-defined quantitative outputs. (SPICE stands for Spacecraft, Planet, Instrument, Camera-matrix, and Events.) This module enables the SOAP software to exploit NASA mission ephemerides represented in the JPL Navigation and Ancillary Information Facility (NAIF) SPICE formats. Ephemeris types supported include position, velocity, and orientation for spacecraft and planetary bodies including the Sun, planets, natural satellites, comets, and asteroids. Entire missions can now be imported into SOAP for 3D visualization, playback, and analysis. The SOAP analysis and display features can now leverage detailed mission files to offer the analyst both a numerically correct and aesthetically pleasing combination of results that can be varied to study many hypothetical scenarios. The software provides a modeling and simulation environment that can encompass a broad variety of problems using orbital prediction, including ground coverage analysis, communications analysis, power and thermal analysis, and 3D visualizations that give the user insight into complex geometric relations. The SOAP SPICE module allows distributed science and engineering teams to share common mission models of known pedigree, which greatly reduces duplication of effort and the potential for error. The use of the software spans all phases of the space system lifecycle, from the study of future concepts to operations and anomaly analysis. It allows SOAP software to correctly position and orient all of the principal bodies of the Solar System within a single simulation session, along with multiple spacecraft trajectories and the orientation of mission payloads. In addition to the 3D visualization, the user can define numeric variables and x-y plots to quantitatively assess metrics of interest.
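The SPICE kernel access that such a module builds on can be illustrated independently of SOAP with the community spiceypy wrapper for NAIF's SPICE toolkit. This is a generic sketch with placeholder kernel filenames, not the SOAP SPICE module itself.

```python
import spiceypy as spice  # pip install spiceypy; Python wrapper for NAIF's SPICE toolkit

# Placeholder kernel names: a real session would load a leapseconds kernel (LSK),
# a planetary ephemeris (SPK), and any mission-specific SPK/CK files.
spice.furnsh("naif0012.tls")      # leapseconds kernel (assumed present locally)
spice.furnsh("de440.bsp")         # planetary ephemeris kernel (assumed present locally)

et = spice.str2et("2008-01-01T00:00:00")          # UTC string -> ephemeris time
pos, light_time = spice.spkpos("MOON", et, "J2000", "LT+S", "EARTH")
print("Earth->Moon vector (km):", pos, "one-way light time (s):", light_time)

spice.kclear()                    # unload kernels when done
```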
Hollingworth, Andrew; Matsukura, Michi; Luck, Steven J.
2013-01-01
In three experiments, we examined the influence of visual working memory (VWM) on the metrics of saccade landing position in a global effect paradigm. Participants executed a saccade to the more eccentric object in an object pair appearing on the horizontal midline, to the left or right of central fixation. While completing the saccade task, participants maintained a color in VWM for an unrelated memory task. Either the color of the saccade target matched the memory color (target match), the color of the distractor matched the memory color (distractor match), or the colors of neither object matched the memory color (no match). In the no-match condition, saccades tended to land at the midpoint between the two objects: the global, or averaging, effect. However, when one of the two objects matched VWM, the distribution of landing position shifted toward the matching object, both for target match and for distractor match. VWM modulation of landing position was observed even for the fastest quartile of saccades, with a mean latency as low as 112 ms. Effects of VWM on such rapidly generated saccades, with latencies in the express-saccade range, indicate that VWM interacts with the initial sweep of visual sensory processing, modulating perceptual input to oculomotor systems and thereby biasing oculomotor selection. As a result, differences in memory match produce effects on landing position similar to the effects generated by differences in physical salience. PMID:24190909
Age-equivalent top-down modulation during cross-modal selective attention.
Guerreiro, Maria J S; Anguera, Joaquin A; Mishra, Jyoti; Van Gerven, Pascal W M; Gazzaley, Adam
2014-12-01
Selective attention involves top-down modulation of sensory cortical areas, such that responses to relevant information are enhanced whereas responses to irrelevant information are suppressed. Suppression of irrelevant information, unlike enhancement of relevant information, has been shown to be deficient in aging. Although these attentional mechanisms have been well characterized within the visual modality, little is known about these mechanisms when attention is selectively allocated across sensory modalities. The present EEG study addressed this issue by testing younger and older participants in three different tasks: Participants attended to the visual modality and ignored the auditory modality, attended to the auditory modality and ignored the visual modality, or passively perceived information presented through either modality. We found overall modulation of visual and auditory processing during cross-modal selective attention in both age groups. Top-down modulation of visual processing was observed as a trend toward enhancement of visual information in the setting of auditory distraction, but no significant suppression of visual distraction when auditory information was relevant. Top-down modulation of auditory processing, on the other hand, was observed as suppression of auditory distraction when visual stimuli were relevant, but no significant enhancement of auditory information in the setting of visual distraction. In addition, greater visual enhancement was associated with better recognition of relevant visual information, and greater auditory distractor suppression was associated with a better ability to ignore auditory distraction. There were no age differences in these effects, suggesting that when relevant and irrelevant information are presented through different sensory modalities, selective attention remains intact in older age.
Emotion modulates activity in the 'what' but not 'where' auditory processing pathway.
Kryklywy, James H; Macpherson, Ewan A; Greening, Steven G; Mitchell, Derek G V
2013-11-15
Auditory cortices can be separated into dissociable processing pathways similar to those observed in the visual domain. Emotional stimuli elicit enhanced neural activation within sensory cortices when compared to neutral stimuli. This effect is particularly notable in the ventral visual stream. Little is known, however, about how emotion interacts with dorsal processing streams, and essentially nothing is known about the impact of emotion on auditory stimulus localization. In the current study, we used fMRI in concert with individualized auditory virtual environments to investigate the effect of emotion during an auditory stimulus localization task. Surprisingly, participants were significantly slower to localize emotional relative to neutral sounds. A separate localizer scan was performed to isolate neural regions sensitive to stimulus location independent of emotion. When applied to the main experimental task, a significant main effect of location, but not emotion, was found in this ROI. A whole-brain analysis of the data revealed that posterior-medial regions of auditory cortex were modulated by sound location; however, additional anterior-lateral areas of auditory cortex demonstrated enhanced neural activity to emotional compared to neutral stimuli. The latter region resembled areas described in dual pathway models of auditory processing as the 'what' processing stream, prompting a follow-up task to generate an identity-sensitive ROI (the 'what' pathway) independent of location and emotion. Within this region, significant main effects of location and emotion were identified, as well as a significant interaction. These results suggest that emotion modulates activity in the 'what,' but not the 'where,' auditory processing pathway. Copyright © 2013 Elsevier Inc. All rights reserved.
2014-01-01
Background: DNA repeats, such as transposable elements, minisatellites and palindromic sequences, are abundant in sequences and have been shown to have significant and functional roles in the evolution of the host genomes. In a previous study, we introduced the concept of a repeat DNA module, a flexible motif present in at least two occurrences in the sequences. This concept was embedded into ModuleOrganizer, a tool allowing the detection of repeat modules in a set of sequences. However, its application to larger sequences remains difficult. Results: Here we present Visual ModuleOrganizer, a Java graphical interface that enables a new and optimized version of the ModuleOrganizer tool. For this version, the tool was recoded in C++ with compressed suffix tree data structures, which leads to lower memory usage (at least a 120-fold decrease on average) and reduces the computation time of module detection in large sequences by at least a factor of four. The Visual ModuleOrganizer interface allows users to easily choose ModuleOrganizer parameters and to display the results graphically. Moreover, Visual ModuleOrganizer dynamically handles graphical results through four main parameters: gene annotations, overlapping modules with known annotations, location of a module in a minimal number of sequences, and the minimal length of modules. As a case study, the analysis of FoldBack4 sequences clearly demonstrated that our tools can be extended to comparative and evolutionary analyses of any repeated sequence elements in a set of genomic sequences. With the increasing number of sequences available in public databases, it is now possible to perform comparative analyses of repeated DNA modules in a graphical and user-friendly manner within a reasonable time. Availability: The Visual ModuleOrganizer interface and the new version of the ModuleOrganizer tool are freely available at: http://lcb.cnrs-mrs.fr/spip.php?rubrique313. PMID:24678954
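As a much-simplified illustration of the underlying notion of a repeated DNA module (a motif present in at least two of the input sequences), the sketch below reports exact k-mers shared across sequences. ModuleOrganizer itself detects flexible motifs using compressed suffix trees, so this conveys only the concept, not the algorithm.

```python
from collections import defaultdict

def shared_kmers(sequences, k=8, min_sequences=2):
    """Toy stand-in for repeat-module detection: report every k-mer that occurs
    in at least `min_sequences` of the input sequences. (ModuleOrganizer uses
    flexible motifs and compressed suffix trees; this shows only the idea.)"""
    occurrences = defaultdict(set)
    for idx, seq in enumerate(sequences):
        for i in range(len(seq) - k + 1):
            occurrences[seq[i:i + k]].add(idx)
    return {kmer: seqs for kmer, seqs in occurrences.items()
            if len(seqs) >= min_sequences}

demo = ["ACGTACGTTTGACCA", "GGGACGTACGTTTAA", "TTTTTGACGTACGTT"]
for kmer, seqs in shared_kmers(demo, k=8).items():
    print(kmer, "found in sequences", sorted(seqs))
```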
Multiple Transient Signals in Human Visual Cortex Associated with an Elementary Decision
Nolte, Guido
2017-01-01
The cerebral cortex continuously undergoes changes in its state, which are manifested in transient modulations of the cortical power spectrum. Cortical state changes also occur at full wakefulness and during rapid cognitive acts, such as perceptual decisions. Previous studies found a global modulation of beta-band (12–30 Hz) activity in human and monkey visual cortex during an elementary visual decision: reporting the appearance or disappearance of salient visual targets surrounded by a distractor. The previous studies disentangled neither the motor action associated with behavioral report nor other secondary processes, such as arousal, from perceptual decision processing per se. Here, we used magnetoencephalography in humans to pinpoint the factors underlying the beta-band modulation. We found that disappearances of a salient target were associated with beta-band suppression, and target reappearances with beta-band enhancement. This was true for both overt behavioral reports (immediate button presses) and silent counting of the perceptual events. This finding indicates that the beta-band modulation was unrelated to the execution of the motor act associated with a behavioral report of the perceptual decision. Further, changes in pupil-linked arousal, fixational eye movements, or gamma-band responses were not necessary for the beta-band modulation. Together, our results suggest that the beta-band modulation was a top-down signal associated with the process of converting graded perceptual signals into a categorical format underlying flexible behavior. This signal may have been fed back from brain regions involved in decision processing to visual cortex, thus enforcing a “decision-consistent” cortical state. SIGNIFICANCE STATEMENT Elementary visual decisions are associated with a rapid state change in visual cortex, indexed by a modulation of neural activity in the beta-frequency range. Such decisions are also followed by other events that might affect the state of visual cortex, including the motor command associated with the report of the decision, an increase in pupil-linked arousal, fixational eye movements, and fluctuations in bottom-up sensory processing. Here, we ruled out the necessity of these events for the beta-band modulation of visual cortex. We propose that the modulation reflects a decision-related state change, which is induced by the conversion of graded perceptual signals into a categorical format underlying behavior. The resulting decision signal may be fed back to visual cortex. PMID:28495972
Effect of attentional load on audiovisual speech perception: evidence from ERPs
Alsius, Agnès; Möttönen, Riikka; Sams, Mikko E.; Soto-Faraco, Salvador; Tiippana, Kaisa
2014-01-01
Seeing articulatory movements influences perception of auditory speech. This is often reflected in a shortened latency of auditory event-related potentials (ERPs) generated in the auditory cortex. The present study addressed whether this early neural correlate of audiovisual interaction is modulated by attention. We recorded ERPs in 15 subjects while they were presented with auditory, visual, and audiovisual spoken syllables. Audiovisual stimuli consisted of incongruent auditory and visual components known to elicit a McGurk effect, i.e., a visually driven alteration in the auditory speech percept. In a Dual task condition, participants were asked to identify spoken syllables whilst monitoring a rapid visual stream of pictures for targets, i.e., they had to divide their attention. In a Single task condition, participants identified the syllables without any other tasks, i.e., they were asked to ignore the pictures and focus their attention fully on the spoken syllables. The McGurk effect was weaker in the Dual task than in the Single task condition, indicating an effect of attentional load on audiovisual speech perception. Early auditory ERP components, N1 and P2, peaked earlier to audiovisual stimuli than to auditory stimuli when attention was fully focused on syllables, indicating neurophysiological audiovisual interaction. This latency decrement was reduced when attention was loaded, suggesting that attention influences early neural processing of audiovisual speech. We conclude that reduced attention weakens the interaction between vision and audition in speech. PMID:25076922
Audio-visual speech perception: a developmental ERP investigation
Knowland, Victoria CP; Mercure, Evelyne; Karmiloff-Smith, Annette; Dick, Fred; Thomas, Michael SC
2014-01-01
Being able to see a talking face confers a considerable advantage for speech perception in adulthood. However, behavioural data currently suggest that children fail to make full use of these available visual speech cues until age 8 or 9. This is particularly surprising given the potential utility of multiple informational cues during language learning. We therefore explored this at the neural level. The event-related potential (ERP) technique has been used to assess the mechanisms of audio-visual speech perception in adults, with visual cues reliably modulating auditory ERP responses to speech. Previous work has shown congruence-dependent shortening of auditory N1/P2 latency and congruence-independent attenuation of amplitude in the presence of auditory and visual speech signals, compared to auditory alone. The aim of this study was to chart the development of these well-established modulatory effects over mid-to-late childhood. Experiment 1 employed an adult sample to validate a child-friendly stimulus set and paradigm by replicating previously observed effects of N1/P2 amplitude and latency modulation by visual speech cues; it also revealed greater attenuation of component amplitude given incongruent audio-visual stimuli, pointing to a new interpretation of the amplitude modulation effect. Experiment 2 used the same paradigm to map cross-sectional developmental change in these ERP responses between 6 and 11 years of age. The effect of amplitude modulation by visual cues emerged over development, while the effect of latency modulation was stable over the child sample. These data suggest that auditory ERP modulation by visual speech represents separable underlying cognitive processes, some of which show earlier maturation than others over the course of development. PMID:24176002
Guzman-Lopez, Jessica; Arshad, Qadeer; Schultz, Simon R; Walsh, Vincent; Yousif, Nada
2013-01-01
Head movement imposes additional burdens on the visual system: maintaining visual acuity and determining the origin of retinal image motion (i.e., self-motion vs. object-motion). Although maintaining visual acuity during self-motion is effected by minimizing retinal slip via the brainstem vestibular-ocular reflex, higher order visuovestibular mechanisms also contribute. Disambiguating self-motion versus object-motion also invokes higher order mechanisms, and a cortical visuovestibular reciprocal antagonism is propounded. Hence, one prediction is of a vestibular modulation of visual cortical excitability, and indirect measures have variously suggested none, focal, or global effects of activation or suppression in human visual cortex. Using transcranial magnetic stimulation-induced phosphenes to probe cortical excitability, we observed decreased V5/MT excitability versus increased early visual cortex (EVC) excitability during vestibular activation. In order to exclude nonspecific effects (e.g., arousal) on cortical excitability, response specificity was assessed using information theory, specifically response entropy. Vestibular activation significantly modulated phosphene response entropy for V5/MT but not EVC, implying a specific vestibular effect on V5/MT responses. This is the first demonstration that vestibular activation modulates human visual cortex excitability. Furthermore, using information theory, not previously used in phosphene response analysis, we could distinguish a specific vestibular modulation of V5/MT excitability from a nonspecific effect at EVC. PMID:22291031
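The response-entropy measure used here to separate specific from nonspecific effects is the Shannon entropy of the distribution of phosphene responses. A minimal sketch of that computation follows, with purely illustrative response counts.

```python
import numpy as np

def response_entropy(counts):
    """Shannon entropy (bits) of a discrete response distribution,
    e.g. counts of phosphene reports per response category."""
    counts = np.asarray(counts, dtype=float)
    p = counts / counts.sum()
    p = p[p > 0]                      # 0 * log(0) treated as 0
    return float(-(p * np.log2(p)).sum())

# Illustrative comparison: a sharply peaked response distribution carries
# less entropy than a nearly flat one.
print(response_entropy([18, 1, 1]))   # low entropy
print(response_entropy([7, 7, 6]))    # close to log2(3) ~ 1.58 bits
```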
It's how you get there: walking down a virtual alley activates premotor and parietal areas.
Wagner, Johanna; Solis-Escalante, Teodoro; Scherer, Reinhold; Neuper, Christa; Müller-Putz, Gernot
2014-01-01
Voluntary drive is crucial for motor learning; we are therefore interested in the role that motor planning plays in gait movements. In this study we examined the impact of an interactive Virtual Environment (VE) feedback task on the EEG patterns during robot-assisted walking. We compared walking in the VE modality to two control conditions: walking with a visual attention paradigm, in which visual stimuli were unrelated to the motor task; and walking with mirror feedback, in which participants observed their own movements. Eleven healthy participants took part. Application of independent component analysis to the EEG revealed three independent component clusters in premotor and parietal areas showing increased activity during walking with the adaptive VE training paradigm compared to the control conditions. During the interactive VE walking task, spectral power in the frequency ranges 8-12, 15-20, and 23-40 Hz was significantly (p ≤ 0.05) decreased. This power decrease is interpreted as a correlate of an active cortical area. Furthermore, activity in the premotor cortex revealed gait-cycle-related modulations significantly different (p ≤ 0.05) from baseline in the frequency range 23-40 Hz during walking. These modulations were significantly (p ≤ 0.05) reduced depending on gait cycle phases in the interactive VE walking task compared to the control conditions. We demonstrate that premotor and parietal areas show increased activity during walking with the adaptive VE training paradigm, when compared to walking with mirror feedback and movement-unrelated feedback. Previous research has related a premotor-parietal network to motor planning and motor intention. We argue that movement-related interactive feedback enhances motor planning and motor intention. We hypothesize that this might improve gait recovery during rehabilitation.
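The band-limited spectral power quantities reported above (8-12, 15-20, and 23-40 Hz) can be estimated from a single EEG or independent-component time course with a standard Welch periodogram. The sketch below uses synthetic data and illustrative parameters, not the study's processing pipeline.

```python
import numpy as np
from scipy.signal import welch

def band_power(x, fs, bands=((8, 12), (15, 20), (23, 40))):
    """Integrated Welch power in each frequency band (illustrative parameters)."""
    f, pxx = welch(x, fs=fs, nperseg=int(2 * fs))
    out = {}
    for lo, hi in bands:
        mask = (f >= lo) & (f <= hi)
        out[(lo, hi)] = float(np.trapz(pxx[mask], f[mask]))
    return out

fs = 250.0                                    # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)  # synthetic "EEG"
print(band_power(x, fs))                      # the 8-12 Hz band should dominate here
```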
Ernst, Udo A.; Schiffer, Alina; Persike, Malte; Meinhardt, Günter
2016-01-01
Processing natural scenes requires the visual system to integrate local features into global object descriptions. To achieve coherent representations, the human brain uses statistical dependencies to guide weighting of local feature conjunctions. Pairwise interactions among feature detectors in early visual areas may form the early substrate of these local feature bindings. To investigate local interaction structures in visual cortex, we combined psychophysical experiments with computational modeling and natural scene analysis. We first measured contrast thresholds for 2 × 2 grating patch arrangements (plaids), which differed in spatial frequency composition (low, high, or mixed), number of grating patch co-alignments (0, 1, or 2), and inter-patch distances (1° and 2° of visual angle). Contrast thresholds for the different configurations were compared to the prediction of probability summation (PS) among detector families tuned to the four retinal positions. For 1° distance, the thresholds for all configurations were larger than predicted by PS, indicating inhibitory interactions. For 2° distance, thresholds were significantly lower compared to PS when the plaids were homogeneous in spatial frequency and orientation, but not when spatial frequencies were mixed or there was at least one misalignment. Next, we constructed a neural population model with horizontal laminar structure, which reproduced the detection thresholds after adaptation of connection weights. Consistent with prior work, contextual interactions were medium-range inhibition and long-range, orientation-specific excitation. However, the inclusion of orientation-specific inhibitory interactions between populations with different spatial frequency preferences was crucial for explaining detection thresholds. Finally, for all plaid configurations we computed their likelihood of occurrence in natural images. The likelihoods turned out to be inversely related to the detection thresholds obtained at larger inter-patch distances. However, likelihoods were almost independent of inter-patch distance, implying that natural image statistics could not explain the crowding-like results at short distances. This failure of natural image statistics to resolve the patch distance modulation of plaid visibility remains a challenge to the approach. PMID:27757076
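Probability summation among independent detectors is commonly formalized with Weibull psychometric functions and Quick pooling. The sketch below states that standard prediction in generic form; the exponent β and single-patch threshold are illustrative, not values fitted in the study.

```python
def ps_detection_probability(contrast, thresholds, beta=3.5):
    """Probability summation over independent detectors with Weibull
    psychometric functions: P = 1 - prod_i (1 - p_i)."""
    p_miss = 1.0
    for th in thresholds:
        p_i = 1.0 - 2.0 ** (-(contrast / th) ** beta)   # Weibull, P = 0.5 at threshold
        p_miss *= (1.0 - p_i)
    return 1.0 - p_miss

def ps_threshold(n_detectors, single_threshold, beta=3.5):
    """Quick-pooling prediction: joint threshold improves as n**(-1/beta)."""
    return single_threshold * n_detectors ** (-1.0 / beta)

# Four equally sensitive patches, as in a 2 x 2 plaid arrangement (toy numbers):
print(ps_threshold(4, single_threshold=0.10))          # predicted joint threshold
print(ps_detection_probability(0.10, [0.10] * 4))      # detection prob. at the single-patch threshold
```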
Behold the voice of wrath: cross-modal modulation of visual attention by anger prosody.
Brosch, Tobias; Grandjean, Didier; Sander, David; Scherer, Klaus R
2008-03-01
Emotionally relevant stimuli are prioritized in human information processing. It has repeatedly been shown that selective spatial attention is modulated by the emotional content of a stimulus. Until now, studies investigating this phenomenon have only examined within-modality effects, most frequently using pictures of emotional stimuli to modulate visual attention. In this study, we used simultaneously presented utterances with emotional and neutral prosody as cues for a visually presented target in a cross-modal dot probe task. Response times towards targets were faster when they appeared at the location of the source of the emotional prosody. Our results show for the first time a cross-modal attentional modulation of visual attention by auditory affective prosody.
Kamel Boulos, Maged N; Viangteeravat, Teeradache; Anyanwu, Matthew N; Ra Nagisetty, Venkateswara; Kuscu, Emin
2011-03-16
The goal of visual analytics is to facilitate the discourse between the user and the data by providing dynamic displays and versatile visual interaction opportunities with the data that can support analytical reasoning and the exploration of data from multiple user-customisable aspects. This paper introduces geospatial visual analytics, a specialised subtype of visual analytics, and provides pointers to a number of learning resources about the subject, as well as some examples of human health, surveillance, emergency management and epidemiology-related geospatial visual analytics applications and examples of free software tools that readers can experiment with, such as Google Public Data Explorer. The authors also present a practical demonstration of geospatial visual analytics using partial data for 35 countries from a publicly available World Health Organization (WHO) mortality dataset and Microsoft Live Labs Pivot technology, a free, general purpose visual analytics tool that offers a fresh way to visually browse and arrange massive amounts of data and images online and also supports geographic and temporal classifications of datasets featuring geospatial and temporal components. Interested readers can download a Zip archive (included with the manuscript as an additional file) containing all files, modules and library functions used to deploy the WHO mortality data Pivot collection described in this paper.
Schwartz, Sophie; Vuilleumier, Patrik; Hutton, Chloe; Maravita, Angelo; Dolan, Raymond J; Driver, Jon
2005-06-01
Perceptual suppression of distractors may depend on both endogenous and exogenous factors, such as attentional load of the current task and sensory competition among simultaneous stimuli, respectively. We used functional magnetic resonance imaging (fMRI) to compare these two types of attentional effects and examine how they may interact in the human brain. We varied the attentional load of a visual monitoring task performed on a rapid stream at central fixation without altering the central stimuli themselves, while measuring the impact on fMRI responses to task-irrelevant peripheral checkerboards presented either unilaterally or bilaterally. Activations in visual cortex for irrelevant peripheral stimulation decreased with increasing attentional load at fixation. This relative decrease was present even in V1, but became larger for successive visual areas through to V4. Decreases in activation for contralateral peripheral checkerboards due to higher central load were more pronounced within retinotopic cortex corresponding to 'inner' peripheral locations relatively near the central targets than for more eccentric 'outer' locations, demonstrating a predominant suppression of nearby surround rather than strict 'tunnel vision' during higher task load at central fixation. Contralateral activations for peripheral stimulation in one hemifield were reduced by competition with concurrent stimulation in the other hemifield only in inferior parietal cortex, not in retinotopic areas of occipital visual cortex. In addition, central attentional load interacted with competition due to bilateral versus unilateral peripheral stimuli specifically in posterior parietal and fusiform regions. These results reveal that task-dependent attentional load, and interhemifield stimulus-competition, can produce distinct influences on the neural responses to peripheral visual stimuli within the human visual system. These distinct mechanisms in selective visual processing may be integrated within posterior parietal areas, rather than earlier occipital cortex.
NASA Astrophysics Data System (ADS)
Henning, G. Bruce
2004-04-01
A modification and extension of Kortum and Geisler's model [Vision Res. 35, 1595 (1995)] of early visual nonlinearities that incorporates an expansive nonlinearity (consistent with neurophysiological findings [Vision Res. 35, 2725 (1995)]), a normalization based on a local average retinal illumination (similar to Mach's proposal [F. Ratliff, Mach Bands: Quantitative Studies on Neural Networks in the Retina (Holden-Day, San Francisco, Calif., 1965)]), and a subsequent compression suggested by Henning et al. [J. Opt. Soc. Am. A 17, 1147 (2000)] captures a range of hitherto unexplained interactions between a sinusoidal grating of low spatial frequency and a contrast-modulated grating 2 octaves higher in spatial frequency.
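The three stages named here (an expansive nonlinearity, normalization by a local average illumination, and a subsequent compression) can be caricatured as a simple processing chain. The sketch below is a generic illustration with made-up exponents and constants, not the fitted model of the cited work.

```python
import numpy as np

def early_visual_nonlinearity(image, p=2.0, sigma=0.1, q=0.5, kernel_size=9):
    """Toy chain: expansive nonlinearity -> divide by a local mean luminance
    -> compressive output nonlinearity. All parameters are illustrative."""
    image = np.asarray(image, dtype=float)
    expanded = image ** p                                   # expansive stage

    # Local mean luminance via a simple box filter (stand-in for a local average).
    pad = kernel_size // 2
    padded = np.pad(image, pad, mode="edge")
    local_mean = np.empty_like(image)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            local_mean[i, j] = padded[i:i + kernel_size, j:j + kernel_size].mean()

    normalized = expanded / (sigma + local_mean ** p)       # luminance normalization
    return normalized ** q                                  # compressive stage

demo = np.random.rand(32, 32)
print(early_visual_nonlinearity(demo).shape)
```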
Novel graphical environment for virtual and real-world operations of tracked mobile manipulators
NASA Astrophysics Data System (ADS)
Chen, ChuXin; Trivedi, Mohan M.; Azam, Mir; Lassiter, Nils T.
1993-08-01
A simulation, animation, visualization and interactive control (SAVIC) environment has been developed for the design and operation of an integrated mobile manipulator system. This unique system possesses the abilities for (1) multi-sensor simulation, (2) kinematics and locomotion animation, (3) dynamic motion and manipulation animation, (4) transformation between real and virtual modes within the same graphics system, (5) ease in exchanging software modules and hardware devices between real and virtual world operations, and (6) interfacing with a real robotic system. This paper describes a working system and illustrates the concepts by presenting the simulation, animation and control methodologies for a unique mobile robot with articulated tracks, a manipulator, and sensory modules.
Modulation of visual physiology by behavioral state in monkeys, mice, and flies.
Maimon, Gaby
2011-08-01
When a monkey attends to a visual stimulus, neurons in visual cortex respond differently to that stimulus than when the monkey attends elsewhere. In the 25 years since the initial discovery, the study of attention in primates has been central to understanding flexible visual processing. Recent experiments demonstrate that visual neurons in mice and fruit flies are modulated by locomotor behaviors, like running and flying, in a manner that resembles attention-based modulations in primates. The similar findings across species argue for a more generalized view of state-dependent sensory processing and for a renewed dialogue among vertebrate and invertebrate research communities. Copyright © 2011 Elsevier Ltd. All rights reserved.
Emotional modulation of body-selective visual areas.
Peelen, Marius V; Atkinson, Anthony P; Andersson, Frederic; Vuilleumier, Patrik
2007-12-01
Emotionally expressive faces have been shown to modulate activation in visual cortex, including face-selective regions in ventral temporal lobe. Here, we tested whether emotionally expressive bodies similarly modulate activation in body-selective regions. We show that dynamic displays of bodies with various emotional expressions vs neutral bodies, produce significant activation in two distinct body-selective visual areas, the extrastriate body area and the fusiform body area. Multi-voxel pattern analysis showed that the strength of this emotional modulation was related, on a voxel-by-voxel basis, to the degree of body selectivity, while there was no relation with the degree of selectivity for faces. Across subjects, amygdala responses to emotional bodies positively correlated with the modulation of body-selective areas. Together, these results suggest that emotional cues from body movements produce topographically selective influences on category-specific populations of neurons in visual cortex, and these increases may implicate discrete modulatory projections from the amygdala.
Audio-Visual Integration in a Redundant Target Paradigm: A Comparison between Rhesus Macaque and Man
Bremen, Peter; Massoudi, Rooholla; Van Wanrooij, Marc M.; Van Opstal, A. J.
2017-01-01
The mechanisms underlying multi-sensory interactions are still poorly understood despite considerable progress made since the first neurophysiological recordings of multi-sensory neurons. While the majority of single-cell neurophysiology has been performed in anesthetized or passive-awake laboratory animals, the vast majority of behavioral data stems from studies with human subjects. Interpretation of neurophysiological data implicitly assumes that laboratory animals exhibit perceptual phenomena comparable or identical to those observed in human subjects. To explicitly test this underlying assumption, we here characterized how two rhesus macaques and four humans detect changes in intensity of auditory, visual, and audio-visual stimuli. These intensity changes consisted of a gradual envelope modulation for the sound, and a luminance step for the LED. Subjects had to detect any perceived intensity change as fast as possible. By comparing the monkeys' results with those obtained from the human subjects we found that (1) unimodal reaction times differed across modality, acoustic modulation frequency, and species, (2) the largest facilitation of reaction times with the audio-visual stimuli was observed when stimulus onset asynchronies were such that the unimodal reactions would occur at the same time (response, rather than physical synchrony), and (3) the largest audio-visual reaction-time facilitation was observed when unimodal auditory stimuli were difficult to detect, i.e., at slow unimodal reaction times. We conclude that despite marked unimodal heterogeneity, similar multisensory rules applied to both species. Single-cell neurophysiology in the rhesus macaque may therefore yield valuable insights into the mechanisms governing audio-visual integration that may be informative of the processes taking place in the human brain. PMID:29238295
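The observation that facilitation peaks when the unimodal reactions would co-occur (response synchrony rather than physical synchrony) reduces to simple arithmetic on the unimodal reaction times. A sketch with purely illustrative values:

```python
def optimal_soa(rt_visual_ms, rt_auditory_ms):
    """SOA (auditory onset minus visual onset, ms) at which the two unimodal
    responses would coincide: the visual response occurs at rt_visual after its
    onset, the auditory response at soa + rt_auditory."""
    return rt_visual_ms - rt_auditory_ms

# Illustrative values only: a slower visual RT and a faster auditory RT predict
# that the sound should lag the light for maximal redundancy gain.
print(optimal_soa(rt_visual_ms=350.0, rt_auditory_ms=280.0))   # +70 ms (sound presented later)
```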
Evaluation of an Interactive Undergraduate Cosmology Curriculum
NASA Astrophysics Data System (ADS)
White, Aaron; Coble, Kimberly A.; Martin, Dominique; Hayes, Patrycia; Targett, Tom; Cominsky, Lynn R.
2018-06-01
The Big Ideas in Cosmology is an immersive set of web-based learning modules that integrates text, figures, and visualizations with short and long interactive tasks as well as labs that allow students to manipulate and analyze real cosmological data. This enables the transformation of general education astronomy and cosmology classes from primarily lecture and book-based courses to a format that builds important STEM skills, while engaging those outside the field with modern discoveries and a more realistic sense of practices and tools used by professional astronomers. Over two semesters, we field-tested the curriculum in general education cosmology classes at a state university in California [N ~ 80]. We administered pre- and post-instruction multiple-choice and open-ended content surveys as well as the CLASS, to gauge the effectiveness of the course and modules. Questions addressed included the structure, composition, and evolution of the universe, including students’ reasoning and “how we know.” Module development and evaluation were supported by NASA ROSES E/PO Grant #NNX10AC89G, the Illinois Space Grant Consortium, the Fermi E/PO program, Sonoma State University’s Space Science Education and Public Outreach Group, and San Francisco State University. The modules are published by Great River Learning/Kendall-Hunt.
Shooner, Christopher; Kelly, Jenna G.; García-Marín, Virginia; Movshon, J. Anthony; Kiorpes, Lynne
2017-01-01
In amblyopia, a visual disorder caused by abnormal visual experience during development, the amblyopic eye (AE) loses visual sensitivity whereas the fellow eye (FE) is largely unaffected. Binocular vision in amblyopes is often disrupted by interocular suppression. We used 96-electrode arrays to record neurons and neuronal groups in areas V1 and V2 of six female macaque monkeys (Macaca nemestrina) made amblyopic by artificial strabismus or anisometropia in early life, as well as two visually normal female controls. To measure suppressive binocular interactions directly, we recorded neuronal responses to dichoptic stimulation. We stimulated both eyes simultaneously with large sinusoidal gratings, controlling their contrast independently with raised-cosine modulators of different orientations and spatial frequencies. We modeled each eye's receptive field at each cortical site using a difference of Gaussian envelopes and derived estimates of the strength of central excitation and surround suppression. We used these estimates to calculate ocular dominance separately for excitation and suppression. Excitatory drive from the FE dominated amblyopic visual cortex, especially in more severe amblyopes, but suppression from both the FE and AEs was prevalent in all animals. This imbalance created strong interocular suppression in deep amblyopes: increasing contrast in the AE decreased responses at binocular cortical sites. These response patterns reveal mechanisms that likely contribute to the interocular suppression that disrupts vision in amblyopes. SIGNIFICANCE STATEMENT Amblyopia is a developmental visual disorder that alters both monocular vision and binocular interaction. Using microelectrode arrays, we examined binocular interaction in primary visual cortex and V2 of six amblyopic macaque monkeys (Macaca nemestrina) and two visually normal controls. By stimulating the eyes dichoptically, we showed that, in amblyopic cortex, the binocular combination of signals is altered. The excitatory influence of the two eyes is imbalanced to a degree that can be predicted from the severity of amblyopia, whereas suppression from both eyes is prevalent in all animals. This altered balance of excitation and suppression reflects mechanisms that may contribute to the interocular perceptual suppression that disrupts vision in amblyopes. PMID:28743725
Hallum, Luke E; Shooner, Christopher; Kumbhani, Romesh D; Kelly, Jenna G; García-Marín, Virginia; Majaj, Najib J; Movshon, J Anthony; Kiorpes, Lynne
2017-08-23
In amblyopia, a visual disorder caused by abnormal visual experience during development, the amblyopic eye (AE) loses visual sensitivity whereas the fellow eye (FE) is largely unaffected. Binocular vision in amblyopes is often disrupted by interocular suppression. We used 96-electrode arrays to record neurons and neuronal groups in areas V1 and V2 of six female macaque monkeys (Macaca nemestrina) made amblyopic by artificial strabismus or anisometropia in early life, as well as two visually normal female controls. To measure suppressive binocular interactions directly, we recorded neuronal responses to dichoptic stimulation. We stimulated both eyes simultaneously with large sinusoidal gratings, controlling their contrast independently with raised-cosine modulators of different orientations and spatial frequencies. We modeled each eye's receptive field at each cortical site using a difference of Gaussian envelopes and derived estimates of the strength of central excitation and surround suppression. We used these estimates to calculate ocular dominance separately for excitation and suppression. Excitatory drive from the FE dominated amblyopic visual cortex, especially in more severe amblyopes, but suppression from both the FE and AEs was prevalent in all animals. This imbalance created strong interocular suppression in deep amblyopes: increasing contrast in the AE decreased responses at binocular cortical sites. These response patterns reveal mechanisms that likely contribute to the interocular suppression that disrupts vision in amblyopes. SIGNIFICANCE STATEMENT Amblyopia is a developmental visual disorder that alters both monocular vision and binocular interaction. Using microelectrode arrays, we examined binocular interaction in primary visual cortex and V2 of six amblyopic macaque monkeys (Macaca nemestrina) and two visually normal controls. By stimulating the eyes dichoptically, we showed that, in amblyopic cortex, the binocular combination of signals is altered. The excitatory influence of the two eyes is imbalanced to a degree that can be predicted from the severity of amblyopia, whereas suppression from both eyes is prevalent in all animals. This altered balance of excitation and suppression reflects mechanisms that may contribute to the interocular perceptual suppression that disrupts vision in amblyopes. Copyright © 2017 the authors 0270-6474/17/378216-11$15.00/0.
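The ocular dominance computed separately for excitation and suppression can be summarized with a generic contrast-style index over the fitted strengths for the two eyes. The sketch below uses that common formulation with illustrative numbers; it is not the fitting procedure used in the study.

```python
def ocular_dominance_index(fe_strength, ae_strength):
    """Generic ocular dominance index in [-1, 1]: +1 = driven only by the
    fellow eye (FE), -1 = driven only by the amblyopic eye (AE). Applied here
    to fitted excitation (or suppression) strengths; values are illustrative."""
    return (fe_strength - ae_strength) / (fe_strength + ae_strength)

# Example: a site whose excitation is dominated by the fellow eye but whose
# suppression is roughly balanced between the eyes (toy numbers).
print(ocular_dominance_index(fe_strength=0.9, ae_strength=0.3))    # excitation ODI ~ 0.5
print(ocular_dominance_index(fe_strength=0.5, ae_strength=0.45))   # suppression ODI ~ 0.05
```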
Brown, Nicholas G.; VanderLinden, Ryan; Watson, Edmond R.; ...
2015-03-30
For many E3 ligases, a mobile RING (Really Interesting New Gene) domain stimulates ubiquitin (Ub) transfer from a thioester-linked E2~Ub intermediate to a lysine on a remotely bound disordered substrate. One such E3 is the gigantic, multisubunit 1.2-MDa anaphase-promoting complex/cyclosome (APC), which controls cell division by ubiquitinating cell cycle regulators to drive their timely degradation. Intrinsically disordered substrates are typically recruited via their KEN-box, D-box, and/or other motifs binding to APC and a coactivator such as CDH1. On the opposite side of the APC, the dynamic catalytic core contains the cullin-like subunit APC2 and its RING partner APC11, which collaborates with the E2 UBCH10 (UBE2C) to ubiquitinate substrates. However, how dynamic RING–E2~Ub catalytic modules such as APC11–UBCH10~Ub collide with distally tethered disordered substrates remains poorly understood. In this paper, we report structural mechanisms of UBCH10 recruitment to APC CDH1 and substrate ubiquitination. Unexpectedly, in addition to binding APC11’s RING, UBCH10 is corecruited via interactions with APC2, which we visualized in a trapped complex representing an APC CDH1–UBCH10~Ub–substrate intermediate by cryo-electron microscopy, and in isolation by X-ray crystallography. To our knowledge, this is the first structural view of APC, or any cullin–RING E3, with E2 and substrate juxtaposed, and it reveals how tripartite cullin–RING–E2 interactions establish APC’s specificity for UBCH10 and harness a flexible catalytic module to drive ubiquitination of lysines within an accessible zone. Finally, we propose that multisite interactions reduce the degrees of freedom available to dynamic RING E3–E2~Ub catalytic modules, condense the search radius for target lysines, increase the chance of active-site collision with conformationally fluctuating substrates, and enable regulation.
Top-Down Beta Enhances Bottom-Up Gamma
Thompson, William H.
2017-01-01
Several recent studies have demonstrated that the bottom-up signaling of a visual stimulus is subserved by interareal gamma-band synchronization, whereas top-down influences are mediated by alpha-beta band synchronization. These processes may implement top-down control of stimulus processing if top-down and bottom-up mediating rhythms are coupled via cross-frequency interaction. To test this possibility, we investigated Granger-causal influences among awake macaque primary visual area V1, higher visual area V4, and parietal control area 7a during attentional task performance. Top-down 7a-to-V1 beta-band influences enhanced visually driven V1-to-V4 gamma-band influences. This enhancement was spatially specific and largest when beta-band activity preceded gamma-band activity by ∼0.1 s, suggesting a causal effect of top-down processes on bottom-up processes. We propose that this cross-frequency interaction mechanistically subserves the attentional control of stimulus selection. SIGNIFICANCE STATEMENT Contemporary research indicates that the alpha-beta frequency band underlies top-down control, whereas the gamma-band mediates bottom-up stimulus processing. This arrangement inspires an attractive hypothesis, which posits that top-down beta-band influences directly modulate bottom-up gamma-band influences via cross-frequency interaction. We evaluate this hypothesis by determining that beta-band top-down influences from parietal area 7a to visual area V1 are correlated with bottom-up gamma frequency influences from V1 to area V4, in a spatially specific manner, and that this correlation is maximal when top-down activity precedes bottom-up activity. These results show that for top-down processes such as spatial attention, elevated top-down beta-band influences directly enhance feedforward stimulus-induced gamma-band processing, leading to enhancement of the selected stimulus. PMID:28592697
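The reported lead-lag relation (beta preceding gamma by roughly 0.1 s) can be illustrated with a lagged correlation between two influence time series; the sketch below uses synthetic data and is not the Granger-causality analysis used in the study.

```python
# Illustrative sketch on synthetic series (not the recorded data): quantify the
# lead-lag relation between a top-down beta-band influence time series and a
# bottom-up gamma-band influence series with a lagged correlation; a peak at a
# positive lag means beta leads gamma, as reported (~0.1 s) in the study above.
import numpy as np

rng = np.random.default_rng(0)
dt = 0.01                                  # 10 ms resolution
beta = rng.normal(size=6000)               # 60 s of simulated beta-band influence estimates
gamma = np.roll(beta, 10) + 0.5 * rng.normal(size=beta.size)   # gamma follows beta by 0.1 s

def lagged_corr(x, y, lag):
    """Correlation of x[t] with y[t + lag]; positive lag means x leads y."""
    if lag > 0:
        return np.corrcoef(x[:-lag], y[lag:])[0, 1]
    if lag < 0:
        return np.corrcoef(x[-lag:], y[:lag])[0, 1]
    return np.corrcoef(x, y)[0, 1]

lags = np.arange(-50, 51)                  # -0.5 s ... +0.5 s in 10 ms steps
corr = [lagged_corr(beta, gamma, int(l)) for l in lags]
best_lag = lags[int(np.argmax(corr))] * dt
print(f"correlation peaks at a lag of {best_lag * 1000:.0f} ms (beta leading gamma)")
```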
Giesbrecht, Barry; Sy, Jocelyn L.; Guerin, Scott A.
2012-01-01
Environmental context learned without awareness can facilitate visual processing of goal-relevant information. According to one view, the benefit of implicitly learned context relies on the neural systems involved in spatial attention and hippocampus-mediated memory. While this view has received empirical support, it contradicts traditional models of hippocampal function. The purpose of the present work was to clarify the influence of spatial context on visual search performance and on brain structures involved in memory and attention. Event-related functional magnetic resonance imaging revealed that activity in the hippocampus as well as in visual and parietal cortex was modulated by learned visual context even though participants’ subjective reports and performance on a post-experiment recognition task indicated no explicit knowledge of the learned context. Moreover, the magnitude of the initial selective hippocampus response predicted the magnitude of the behavioral benefit due to context observed at the end of the experiment. The results suggest that implicit contextual learning is mediated by attention and memory and that these systems interact to support search of our environment. PMID:23099047
Detection of a Novel Mechanism of Acousto-Optic Modulation of Incoherent Light
Jarrett, Christopher W.; Caskey, Charles F.; Gore, John C.
2014-01-01
A novel form of acoustic modulation of light from an incoherent source has been detected in water as well as in turbid media. We demonstrate that patterns of modulated light intensity appear to propagate as the optical shadow of the density variations caused by ultrasound within an illuminated ultrasonic focal zone. This pattern differs from previous reports of acousto-optical interactions that produce diffraction effects that rely on phase shifts and changes in light directions caused by the acoustic modulation. Moreover, previous studies of acousto-optic interactions have mainly reported the effects of sound on coherent light sources via photon tagging, and/or the production of diffraction phenomena from phase effects that give rise to discrete sidebands. We aimed to assess whether the effects of ultrasound modulation of the intensity of light from an incoherent light source could be detected directly, and how the acoustically modulated (AOM) light signal depended on experimental parameters. Our observations suggest that ultrasound at moderate intensities can induce sufficiently large density variations within a uniform medium to cause measurable modulation of the intensity of an incoherent light source by absorption. Light passing through a region of high intensity ultrasound then produces a pattern that is the projection of the density variations within the region of their interaction. The patterns exhibit distinct maxima and minima that are observed at locations much different from those predicted by Raman-Nath, Bragg, or other diffraction theory. The observed patterns scaled appropriately with the geometrical magnification and sound wavelength. We conclude that these observed patterns are simple projections of the ultrasound induced density changes which cause spatial and temporal variations of the optical absorption within the illuminated sound field. These effects potentially provide a novel method for visualizing sound fields and may assist the interpretation of other hybrid imaging methods. PMID:25105880
NASA Astrophysics Data System (ADS)
Kilb, D. L.; Fundis, A. T.; Risien, C. M.
2012-12-01
The focus of the Education and Public Engagement (EPE) component of the NSF's Ocean Observatories Initiative (OOI) is to provide a new layer of cyber-interactivity for undergraduate educators to bring near real-time data from the global ocean into learning environments. To accomplish this, we are designing six online services including: 1) visualization tools, 2) a lesson builder, 3) a concept map builder, 4) educational web services (middleware), 5) collaboration tools and 6) an educational resource database. Here, we report on our Fall 2012 release that includes the first four of these services: 1) Interactive visualization tools allow users to interactively select data of interest, display the data in various views (e.g., maps, time-series and scatter plots) and obtain statistical measures such as mean, standard deviation and a regression line fit to select data. Specific visualization tools include a tool to compare different months of data, a time series explorer tool to investigate the temporal evolution of select data parameters (e.g., sea water temperature or salinity), a glider profile tool that displays ocean glider tracks and associated transects, and a data comparison tool that allows users to view the data either in scatter plot view comparing one parameter with another, or in time series view. 2) Our interactive lesson builder tool allows users to develop a library of online lesson units, which are collaboratively editable and sharable and provides starter templates designed from learning theory knowledge. 3) Our interactive concept map tool allows the user to build and use concept maps, a graphical interface to map the connection between concepts and ideas. This tool also provides semantic-based recommendations, and allows for embedding of associated resources such as movies, images and blogs. 4) Education web services (middleware) will provide an educational resource database API.
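As an illustration of the summary statistics the visualization tools expose (mean, standard deviation, and a regression-line fit over a user-selected data window), here is a minimal numpy sketch on a hypothetical sea-water temperature series.

```python
# Minimal sketch of the summary statistics the visualization tools expose
# (mean, standard deviation, linear regression fit) for a user-selected slice
# of a time series; the data here are hypothetical sea-water temperatures.
import numpy as np

days = np.arange(0, 90)                                  # selected 90-day window
temperature = 12.0 + 0.02 * days + np.random.default_rng(1).normal(0, 0.3, days.size)

mean = temperature.mean()
std = temperature.std(ddof=1)
slope, intercept = np.polyfit(days, temperature, deg=1)  # least-squares line fit

print(f"mean = {mean:.2f} degC, std = {std:.2f} degC")
print(f"trend: {slope:.3f} degC/day (regression line: T = {slope:.3f}*d + {intercept:.2f})")
```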
Spiegel, Daniel P.; Hansen, Bruce C.; Byblow, Winston D.; Thompson, Benjamin
2012-01-01
Transcranial direct current stimulation (tDCS) is a safe, non-invasive technique for transiently modulating the balance of excitation and inhibition within the human brain. It has been reported that anodal tDCS can reduce both GABA mediated inhibition and GABA concentration within the human motor cortex. As GABA mediated inhibition is thought to be a key modulator of plasticity within the adult brain, these findings have broad implications for the future use of tDCS. It is important, therefore, to establish whether tDCS can exert similar effects within non-motor brain areas. The aim of this study was to assess whether anodal tDCS could reduce inhibitory interactions within the human visual cortex. Psychophysical measures of surround suppression were used as an index of inhibition within V1. Overlay suppression, which is thought to originate within the lateral geniculate nucleus (LGN), was also measured as a control. Anodal stimulation of the occipital poles significantly reduced psychophysical surround suppression, but had no effect on overlay suppression. This effect was specific to anodal stimulation as cathodal stimulation had no effect on either measure. These psychophysical results provide the first evidence for tDCS-induced reductions of intracortical inhibition within the human visual cortex. PMID:22563485
Doi, Hirokazu; Shinohara, Kazuyuki
2015-03-01
Cross-modal integration of visual and auditory emotional cues is supposed to be advantageous in the accurate recognition of emotional signals. However, the neural locus of cross-modal integration between affective prosody and unconsciously presented facial expression in the neurologically intact population is still elusive at this point. The present study examined the influences of unconsciously presented facial expressions on the event-related potentials (ERPs) in emotional prosody recognition. In the experiment, fearful, happy, and neutral faces were presented without awareness by continuous flash suppression simultaneously with voices containing laughter and a fearful shout. The conventional peak analysis revealed that the ERPs were modulated interactively by emotional prosody and facial expression at multiple latency ranges, indicating that audio-visual integration of emotional signals takes place automatically without conscious awareness. In addition, the global field power during the late-latency range was larger for shout than for laughter only when a fearful face was presented unconsciously. The neural locus of this effect was localized to the left posterior fusiform gyrus, giving support to the view that the cortical region, traditionally considered to be unisensory region for visual processing, functions as the locus of audiovisual integration of emotional signals. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Mohon, N.
A 'simulator' is defined as a machine which imitates the behavior of a real system in a very precise manner. The major components of a simulator and their interaction are outlined in brief form, taking into account the major components of an aircraft flight simulator. Particular attention is given to the visual display portion of the simulator, the basic components of the display, their interactions, and their characteristics. Real image displays are considered along with virtual image displays, and image generators. Attention is given to an advanced simulator for pilot training, a holographic pancake window, a scan laser image generator, the construction of an infrared target simulator, and the Apollo Command Module Simulator.
Stimulus competition mediates the joint effects of spatial and feature-based attention
White, Alex L.; Rolfs, Martin; Carrasco, Marisa
2015-01-01
Distinct attentional mechanisms enhance the sensory processing of visual stimuli that appear at task-relevant locations and have task-relevant features. We used a combination of psychophysics and computational modeling to investigate how these two types of attention—spatial and feature based—interact to modulate sensitivity when combined in one task. Observers monitored overlapping groups of dots for a target change in color saturation, which they had to localize as being in the upper or lower visual hemifield. Pre-cues indicated the target's most likely location (left/right), color (red/green), or both location and color. We measured sensitivity (d′) for every combination of the location cue and the color cue, each of which could be valid, neutral, or invalid. When three competing saturation changes occurred simultaneously with the target change, there was a clear interaction: The spatial cueing effect was strongest for the cued color, and the color cueing effect was strongest at the cued location. In a second experiment, only the target dot group changed saturation, such that stimulus competition was low. The resulting cueing effects were statistically independent and additive: The color cueing effect was equally strong at attended and unattended locations. We account for these data with a computational model in which spatial and feature-based attention independently modulate the gain of sensory responses, consistent with measurements of cortical activity. Multiple responses then compete via divisive normalization. Sufficient competition creates interactions between the two cueing effects, although the attentional systems are themselves independent. This model helps reconcile seemingly disparate behavioral and physiological findings. PMID:26473316
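The modeling idea, independent spatial and feature-based gains followed by divisive normalization, can be sketched in a few lines; the toy example below is illustrative only, with made-up gain values, and is not the authors' fitted model.

```python
# Toy sketch (illustrative, not the authors' fitted model): spatial and feature-based
# attention act as independent multiplicative gains on stimulus drive, and responses
# then compete through divisive normalization. More competing stimuli -> the two
# cueing effects become interdependent; with little competition they stay ~additive.
import numpy as np

def target_response(spatial_valid, feature_valid, n_competitors, sigma=1.0):
    gs = 1.5 if spatial_valid else 1.0        # spatial attention gain on the target
    gf = 1.5 if feature_valid else 1.0        # feature-based attention gain on the target
    drive = np.ones(1 + n_competitors)        # target plus competing stimuli
    gain = np.array([gs * gf] + [1.0] * n_competitors)
    excitation = drive * gain
    return (excitation / (sigma + excitation.sum()))[0]

def interaction(n_competitors):
    """Spatial-cue benefit at the cued color minus the same benefit at the uncued color."""
    r = lambda s, f: target_response(s, f, n_competitors)
    return (r(True, True) - r(False, True)) - (r(True, False) - r(False, False))

print("interaction with 3 competing changes:", round(interaction(3), 3))   # clearly positive
print("interaction with no competitors:     ", round(interaction(0), 3))   # near zero (~additive)
```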
Nelissen, Natalie; Stokes, Mark; Nobre, Anna C; Rushworth, Matthew F S
2013-10-16
Using multivoxel pattern analysis (MVPA), we studied how distributed visual representations in human occipitotemporal cortex are modulated by attention and link their modulation to concurrent activity in frontal and parietal cortex. We detected similar occipitotemporal patterns during a simple visuoperceptual task and an attention-to-working-memory task in which one or two stimuli were cued before being presented among other pictures. Pattern strength varied from highest to lowest when the stimulus was the exclusive focus of attention, a conjoint focus, and when it was potentially distracting. Although qualitatively similar effects were seen inside regions relatively specialized for the stimulus category and outside, the former were quantitatively stronger. By regressing occipitotemporal pattern strength against activity elsewhere in the brain, we identified frontal and parietal areas exerting top-down control over, or reading information out from, distributed patterns in occipitotemporal cortex. Their interactions with patterns inside regions relatively specialized for that stimulus category were higher than those with patterns outside those regions and varied in strength as a function of the attentional condition. One area, the frontal operculum, was distinguished by selectively interacting with occipitotemporal patterns only when they were the focus of attention. There was no evidence that any frontal or parietal area actively inhibited occipitotemporal representations even when they should be ignored and were suppressed. Using MVPA to decode information within these frontal and parietal areas showed that they contained information about attentional context and/or readout information from occipitotemporal cortex to guide behavior but that frontal regions lacked information about category identity.
Sleep inertia, sleep homeostatic and circadian influences on higher-order cognitive functions.
Burke, Tina M; Scheer, Frank A J L; Ronda, Joseph M; Czeisler, Charles A; Wright, Kenneth P
2015-08-01
Sleep inertia, sleep homeostatic and circadian processes modulate cognition, including reaction time, memory, mood and alertness. How these processes influence higher-order cognitive functions is not well known. Six participants completed a 73-day-long study that included two 14-day-long 28-h forced desynchrony protocols to examine separate and interacting influences of sleep inertia, sleep homeostasis and circadian phase on higher-order cognitive functions of inhibitory control and selective visual attention. Cognitive performance for most measures was impaired immediately after scheduled awakening and improved during the first ~2-4 h of wakefulness (decreasing sleep inertia); worsened thereafter until scheduled bedtime (increasing sleep homeostasis); and was worst at ~60° and best at ~240° (circadian modulation, with worst and best phases corresponding to ~09:00 and ~21:00 hours, respectively, in individuals with a habitual wake time of 07:00 hours). The relative influences of sleep inertia, sleep homeostasis and circadian phase depended on the specific higher-order cognitive function task examined. Inhibitory control appeared to be modulated most strongly by circadian phase, whereas selective visual attention for a spatial-configuration search task was modulated most strongly by sleep inertia. These findings demonstrate that some higher-order cognitive processes are differentially sensitive to different sleep-wake regulatory processes. Differential modulation of cognitive functions by different sleep-wake regulatory processes has important implications for understanding mechanisms contributing to performance impairments during adverse circadian phases, sleep deprivation and/or upon awakening from sleep. © 2015 European Sleep Research Society.
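The phase-to-clock-time mapping quoted above can be reproduced with a small helper, assuming the common convention that 0° marks the core body temperature minimum, placed here about 2 h before the habitual 07:00 wake time; that offset is an assumption for illustration.

```python
# Small worked example of the circadian-phase convention used in the abstract:
# assuming 0 degrees marks the core body temperature minimum, placed ~2 h before
# the habitual 07:00 wake time, 60 deg and 240 deg map to ~09:00 and ~21:00.
def phase_to_clock(phase_deg, wake_time_h=7.0, cbt_min_offset_h=2.0):
    cbt_min = (wake_time_h - cbt_min_offset_h) % 24   # ~05:00 reference (0 degrees)
    return (cbt_min + phase_deg / 360.0 * 24.0) % 24

for deg in (60, 240):
    h = phase_to_clock(deg)
    print(f"{deg:3d} deg -> ~{int(h):02d}:{int(round((h % 1) * 60)):02d}")
# 60 deg -> ~09:00 (worst performance), 240 deg -> ~21:00 (best performance)
```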
Wei, Shi-Tong; Sun, Yong-Hua; Zong, Shi-Hua
2017-09-01
The aim of the current study was to identify hub pathways of rheumatoid arthritis (RA) using a novel method based on differential pathway network (DPN) analysis. The present study proposed a DPN where protein-protein interaction (PPI) network was integrated with pathway-pathway interactions. Pathway data was obtained from background PPI network and the Reactome pathway database. Subsequently, pathway interactions were extracted from the pathway data by building randomized gene-gene interactions and a weight value was assigned to each pathway interaction using Spearman correlation coefficient (SCC) to identify differential pathway interactions. Differential pathway interactions were visualized using Cytoscape to construct a DPN. Topological analysis was conducted to identify hub pathways that fell within the top 5% of the degree distribution of the DPN. Modules of DPN were mined according to ClusterONE. A total of 855 pathways were selected to build pathway interactions. By filtering pathway interactions with weight values >0.7, a DPN with 312 nodes and 791 edges was obtained. Topological degree analysis revealed 15 hub pathways, such as heparan sulfate/heparin-glycosaminoglycan (HS-GAG) degradation, HS-GAG metabolism and keratan sulfate degradation for RA based on DPN. Furthermore, hub pathways were also important in modules, which validated the significance of hub pathways. In conclusion, the proposed method is a computationally efficient way to identify hub pathways of RA, which identified 15 hub pathways that may be potential biomarkers and provide insight into future investigation and treatment of RA.
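The graph-construction step of this pipeline (Spearman-weighted pathway interactions, a |rho| > 0.7 threshold, and hubs defined as the top 5% of the degree distribution) can be sketched as follows; the pathway scores are simulated and the code is not the authors' implementation.

```python
# Sketch of the pipeline's graph step (hypothetical data, not the paper's code):
# weight pathway-pathway interactions by Spearman correlation, keep |rho| > 0.7,
# and report the top 5% of pathways by degree as "hub pathways".
import numpy as np
import networkx as nx
from scipy.stats import spearmanr

rng = np.random.default_rng(42)
n_pathways, n_samples = 60, 30
latent = rng.normal(size=(5, n_samples))             # 5 hypothetical co-regulated groups
membership = rng.integers(0, 5, size=n_pathways)
scores = latent[membership] + 0.5 * rng.normal(size=(n_pathways, n_samples))

rho, _ = spearmanr(scores, axis=1)                   # n_pathways x n_pathways correlation matrix

G = nx.Graph()
G.add_nodes_from(range(n_pathways))
for i in range(n_pathways):
    for j in range(i + 1, n_pathways):
        if abs(rho[i, j]) > 0.7:
            G.add_edge(i, j, weight=rho[i, j])

degrees = dict(G.degree())
cutoff = np.percentile(list(degrees.values()), 95)   # top 5% of the degree distribution
hubs = [n for n, d in degrees.items() if d >= cutoff]
print(f"{G.number_of_edges()} edges above threshold; hub pathways: {hubs}")
```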
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Chao; Santhanagopalan, Shriram; Stock, Mark J.
Lithium-ion batteries are currently the state-of-the-art power sources for electric vehicles, and their safety behavior when subjected to abuse, such as a mechanical impact, is of critical concern. A coupled mechanical-electrical-thermal model for simulating the behavior of a lithium-ion battery under a mechanical crush has been developed. We present a series of production-quality visualizations to illustrate the complex mechanical and electrical interactions in this model.
Simultaneous chromatic and luminance human electroretinogram responses.
Parry, Neil R A; Murray, Ian J; Panorgias, Athanasios; McKeefry, Declan J; Lee, Barry B; Kremers, Jan
2012-07-01
The parallel processing of information forms an important organisational principle of the primate visual system. Here we describe experiments which use a novel chromatic–achromatic temporal compound stimulus to simultaneously identify colour and luminance specific signals in the human electroretinogram (ERG). Luminance and chromatic components are separated in the stimulus; the luminance modulation has twice the temporal frequency of the chromatic modulation. ERGs were recorded from four trichromatic and two dichromatic subjects (1 deuteranope and 1 protanope). At isoluminance, the fundamental (first harmonic) response was elicited by the chromatic component in the stimulus. The trichromatic ERGs possessed low-pass temporal tuning characteristics, reflecting the activity of parvocellular post-receptoral mechanisms. There was very little first harmonic response in the dichromats' ERGs. The second harmonic response was elicited by the luminance modulation in the compound stimulus and showed, in all subjects, band-pass temporal tuning characteristic of magnocellular activity. Thus it is possible to concurrently elicit ERG responses from the human retina which reflect processing in both chromatic and luminance pathways. As well as providing a clear demonstration of the parallel nature of chromatic and luminance processing in the human retina, the differences that exist between ERGs from trichromatic and dichromatic subjects point to the existence of interactions between afferent post-receptoral pathways that are in operation from the earliest stages of visual processing.
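Because the luminance component runs at twice the temporal frequency of the chromatic component, the two pathways separate into the first and second harmonics of the response; the sketch below demonstrates this on a simulated signal, not on ERG recordings.

```python
# Sketch (simulated signal, not ERG data): a compound stimulus with chromatic
# modulation at f and luminance modulation at 2f lets the two pathways be read out
# from the first and second harmonics of the response spectrum.
import numpy as np

fs, dur, f = 1000.0, 2.0, 4.0                       # sample rate (Hz), duration (s), base freq (Hz)
t = np.arange(0, dur, 1 / fs)
chromatic = 1.0 * np.sin(2 * np.pi * f * t)         # "parvocellular-like" component at f
luminance = 0.6 * np.sin(2 * np.pi * 2 * f * t)     # "magnocellular-like" component at 2f
response = chromatic + luminance + 0.2 * np.random.default_rng(3).normal(size=t.size)

spectrum = np.abs(np.fft.rfft(response)) / t.size * 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)
h1 = spectrum[np.argmin(np.abs(freqs - f))]         # first harmonic ~ chromatic amplitude
h2 = spectrum[np.argmin(np.abs(freqs - 2 * f))]     # second harmonic ~ luminance amplitude
print(f"1st harmonic (chromatic): {h1:.2f}, 2nd harmonic (luminance): {h2:.2f}")
```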
How music alters a kiss: superior temporal gyrus controls fusiform-amygdalar effective connectivity.
Pehrs, Corinna; Deserno, Lorenz; Bakels, Jan-Hendrik; Schlochtermeier, Lorna H; Kappelhoff, Hermann; Jacobs, Arthur M; Fritz, Thomas Hans; Koelsch, Stefan; Kuchinke, Lars
2014-11-01
While watching movies, the brain integrates the visual information and the musical soundtrack into a coherent percept. Multisensory integration can lead to emotion elicitation on which soundtrack valences may have a modulatory impact. Here, dynamic kissing scenes from romantic comedies were presented to 22 participants (13 females) during functional magnetic resonance imaging scanning. The kissing scenes were either accompanied by happy music, sad music or no music. Evidence from cross-modal studies motivated a predefined three-region network for multisensory integration of emotion, consisting of fusiform gyrus (FG), amygdala (AMY) and anterior superior temporal gyrus (aSTG). The interactions in this network were investigated using dynamic causal models of effective connectivity. This revealed bilinear modulations by happy and sad music with suppression effects on the connectivity from FG and AMY to aSTG. Non-linear dynamic causal modeling showed a suppressive gating effect of aSTG on fusiform-amygdalar connectivity. In conclusion, fusiform to amygdala coupling strength is modulated via feedback through aSTG as region for multisensory integration of emotional material. This mechanism was emotion-specific and more pronounced for sad music. Therefore, soundtrack valences may modulate emotion elicitation in movies by differentially changing preprocessed visual information to the amygdala. © The Author (2013). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
Blakemore, Rebekah L; Rieger, Sebastian W; Vuilleumier, Patrik
2016-01-01
Emotions are considered to modulate action readiness. Previous studies have demonstrated increased force production following exposure to emotionally arousing visual stimuli; however the neural mechanisms underlying how precise force output is controlled within varying emotional contexts remain poorly understood. To identify the neural correlates of emotion-modulated motor behaviour, twenty-two participants produced a submaximal isometric precision-grip contraction while viewing pleasant, unpleasant, neutral or blank images (without visual feedback of force output). Force magnitude was continuously recorded together with change in brain activity using functional magnetic resonance imaging. Viewing unpleasant images resulted in reduced force decay during force maintenance as compared with pleasant, neutral and blank images. Subjective valence and arousal ratings significantly predicted force production during maintenance. Neuroimaging revealed that negative valence and its interaction with force output correlated with increased activity in right inferior frontal gyrus (rIFG), while arousal was associated with amygdala and periaqueductal gray (PAG) activation. Force maintenance alone was correlated with cerebellar activity. These data demonstrate a valence-driven modulation of force output, mediated by a cortico-subcortical network involving rIFG and PAG. These findings are consistent with engagement of motor pathways associated with aversive motivation, eliciting defensive behaviour and action preparedness in response to negative emotional signals. Copyright © 2015 Elsevier Inc. All rights reserved.
Attention biases visual activity in visual short-term memory.
Kuo, Bo-Cheng; Stokes, Mark G; Murray, Alexandra M; Nobre, Anna Christina
2014-07-01
In the current study, we tested whether representations in visual STM (VSTM) can be biased via top-down attentional modulation of visual activity in retinotopically specific locations. We manipulated attention using retrospective cues presented during the retention interval of a VSTM task. Retrospective cues triggered activity in a large-scale network implicated in attentional control and led to retinotopically specific modulation of activity in early visual areas V1-V4. Importantly, shifts of attention during VSTM maintenance were associated with changes in functional connectivity between pFC and retinotopic regions within V4. Our findings provide new insights into top-down control mechanisms that modulate VSTM representations for flexible and goal-directed maintenance of the most relevant memoranda.
Belkaid, Marwen; Cuperlier, Nicolas; Gaussier, Philippe
2017-01-01
Emotions play a significant role in internal regulatory processes. In this paper, we advocate four key ideas. First, novelty detection can be grounded in the sensorimotor experience and allow higher order appraisal. Second, cognitive processes, such as those involved in self-assessment, influence emotional states by eliciting affects like boredom and frustration. Third, emotional processes such as those triggered by self-assessment influence attentional processes. Last, close emotion-cognition interactions implement an efficient feedback loop for the purpose of top-down behavior regulation. The latter is what we call 'Emotional Metacontrol'. We introduce a model based on artificial neural networks. This architecture is used to control a robotic system in a visual search task. The emotional metacontrol intervenes to bias the robot visual attention during active object recognition. Through a behavioral and statistical analysis, we show that this mechanism increases the robot performance and fosters the exploratory behavior to avoid deadlocks.
Binocular adaptive optics visual simulator.
Fernández, Enrique J; Prieto, Pedro M; Artal, Pablo
2009-09-01
A binocular adaptive optics visual simulator is presented. The instrument allows for measuring and manipulating ocular aberrations of the two eyes simultaneously, while the subject performs visual testing under binocular vision. An important feature of the apparatus is the use of a single correcting device and wavefront sensor. Aberrations are controlled by means of a liquid-crystal-on-silicon spatial light modulator, where the two pupils of the subject are projected. Aberrations from the two eyes are measured with a single Hartmann-Shack sensor. As an example of the potential of the apparatus for the study of the impact of the eye's aberrations on binocular vision, results of contrast sensitivity after addition of spherical aberration are presented for one subject. Different binocular combinations of spherical aberration were explored. Results suggest complex binocular interactions in the presence of monochromatic aberrations. The technique and the instrument might contribute to a better understanding of binocular vision and to the search for optimized ophthalmic corrections.
Efficient in-situ visualization of unsteady flows in climate simulation
NASA Astrophysics Data System (ADS)
Vetter, Michael; Olbrich, Stephan
2017-04-01
The simulation of climate data tends to produce very large data sets, which can hardly be processed in classical post-processing visualization applications. Typically, the visualization pipeline, consisting of the processes data generation, visualization mapping and rendering, is distributed into two parts over the network or separated via file transfer. Within most traditional post-processing scenarios the simulation is done on a supercomputer whereas the data analysis and visualization is done on a graphics workstation. That way temporary data sets with huge volume have to be transferred over the network, which leads to bandwidth bottlenecks and volume limitations. The solution to this issue is the avoidance of temporary storage, or at least significant reduction of data complexity. Within the Climate Visualization Lab - as part of the Cluster of Excellence "Integrated Climate System Analysis and Prediction" (CliSAP) at the University of Hamburg, in cooperation with the German Climate Computing Center (DKRZ) - we develop and integrate an in-situ approach. Our software framework DSVR is based on the separation of the process chain between the mapping and the rendering processes. It couples the mapping process directly to the simulation by calling methods of a parallelized data extraction library, which create a time-based sequence of geometric 3D scenes. This sequence is stored on a special streaming server with an interactive post-filtering option and then played out asynchronously in a separate 3D viewer application. Since the rendering is part of this viewer application, the scenes can be navigated interactively. In contrast to other in-situ approaches where 2D images are created as part of the simulation or synchronous co-visualization takes place, our method supports interaction in 3D space and in time, as well as fixed frame rates. To integrate in-situ processing based on our DSVR framework and methods in the ICON climate model, we are continuously evolving the data structures and mapping algorithms of the framework to support the ICON model's native grid structures, since DSVR originally was designed for rectilinear grids only. We now have implemented a new output module to ICON to take advantage of the DSVR visualization. The visualization can be configured, like most output modules, by using a specific namelist and is exemplarily integrated within the non-hydrostatic atmospheric model time loop. With the integration of a DSVR-based in-situ pathline extraction within ICON, a further milestone is reached. The pathline algorithm as well as the grid data structures have been optimized for the domain decomposition used for the parallelization of ICON based on MPI and OpenMP. The software implementation and evaluation is done on the supercomputers at DKRZ. In principle, the data complexity is reduced from O(n³) to O(m), where n is the grid resolution and m is the number of supporting points of all pathlines. The stability and scalability evaluation is done using Atmospheric Model Intercomparison Project (AMIP) runs. We will give a short introduction to our software framework, as well as a short overview of the implementation and usage of DSVR within ICON. Furthermore, we will present visualization and evaluation results of sample applications.
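The in-situ idea, keeping only O(m) pathline supporting points instead of the full O(n³) field, can be illustrated with a toy advection loop; the sketch below uses an analytic velocity field and is not the DSVR or ICON code.

```python
# Minimal illustration of the in-situ idea (not the DSVR/ICON implementation):
# instead of writing the full 3-D velocity field at every step (O(n^3) data),
# advect a few seed particles during the simulation loop and keep only the
# pathline supporting points (O(m) data) for later interactive rendering.
import numpy as np

def velocity(p, t):
    """Hypothetical analytic flow standing in for the model's wind field."""
    x, y, z = p
    return np.array([-y, x, 0.1 * np.sin(t)])          # simple rotating flow with vertical wobble

seeds = np.array([[1.0, 0.0, 0.0], [0.0, 1.5, 0.0]])   # two pathline seed points
dt, n_steps = 0.05, 200
pathlines = [[p.copy()] for p in seeds]

for step in range(n_steps):                            # would run inside the model time loop
    t = step * dt
    for p, line in zip(seeds, pathlines):
        p += dt * velocity(p, t)                       # forward-Euler advection step
        line.append(p.copy())

n_support_points = sum(len(line) for line in pathlines)
print(f"stored {n_support_points} supporting points instead of a full 3-D field per step")
```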
Simulation and visualization of energy-related occupant behavior in office buildings
Chen, Yixing; Liang, Xin; Hong, Tianzhen; ...
2017-03-15
In current building performance simulation programs, occupant presence and interactions with building systems are over-simplified and less indicative of real world scenarios, contributing to the discrepancies between simulated and actual energy use in buildings. Simulation results are normally presented using various types of charts. However, using those charts, it is difficult to visualize and communicate the importance of occupants’ behavior to building energy performance. This study introduced a new approach to simulating and visualizing energy-related occupant behavior in office buildings. First, the Occupancy Simulator was used to simulate the occupant presence and movement and generate occupant schedules for each space as well as for each occupant. Then an occupant behavior functional mockup unit (obFMU) was used to model occupant behavior and analyze their impact on building energy use through co-simulation with EnergyPlus. Finally, an agent-based model built upon AnyLogic was applied to visualize the simulation results of the occupant movement and interactions with building systems, as well as the related energy performance. A case study using a small office building in Miami, FL was presented to demonstrate the process and application of the Occupancy Simulator, the obFMU and EnergyPlus, and the AnyLogic module in simulation and visualization of energy-related occupant behaviors in office buildings. Furthermore, the presented approach provides a new detailed and visual way for policy makers, architects, engineers and building operators to better understand occupant energy behavior and their impact on energy use in buildings, which can improve the design and operation of low energy buildings.
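A toy version of the first step, stochastic generation of occupant-presence schedules, is sketched below; the two-state transition probabilities are hypothetical, and the real Occupancy Simulator/obFMU/EnergyPlus co-simulation is far more detailed.

```python
# Toy sketch of stochastic occupant-presence schedule generation (hypothetical
# transition probabilities; the actual Occupancy Simulator / obFMU workflow is
# far more detailed and couples to EnergyPlus via co-simulation).
import numpy as np

rng = np.random.default_rng(7)
p_arrive = 0.30   # probability of switching absent -> present in a 15-min step
p_leave = 0.05    # probability of switching present -> absent in a 15-min step

steps_per_day = 24 * 4            # 15-minute resolution
present = np.zeros(steps_per_day, dtype=int)
state = 0
for i in range(steps_per_day):
    hour = i / 4
    if 8 <= hour < 18:            # working hours: Markov transitions
        if state == 0 and rng.random() < p_arrive:
            state = 1
        elif state == 1 and rng.random() < p_leave:
            state = 0
    else:
        state = 0                 # outside working hours the office is empty
    present[i] = state

print("occupied fraction of the day:", present.mean().round(2))
```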
Beyond perceptual expertise: revisiting the neural substrates of expert object recognition
Harel, Assaf; Kravitz, Dwight; Baker, Chris I.
2013-01-01
Real-world expertise provides a valuable opportunity to understand how experience shapes human behavior and neural function. In the visual domain, the study of expert object recognition, such as in car enthusiasts or bird watchers, has produced a large, growing, and often-controversial literature. Here, we synthesize this literature, focusing primarily on results from functional brain imaging, and propose an interactive framework that incorporates the impact of high-level factors, such as attention and conceptual knowledge, in supporting expertise. This framework contrasts with the perceptual view of object expertise that has concentrated largely on stimulus-driven processing in visual cortex. One prominent version of this perceptual account has almost exclusively focused on the relation of expertise to face processing and, in terms of the neural substrates, has centered on face-selective cortical regions such as the Fusiform Face Area (FFA). We discuss the limitations of this face-centric approach as well as the more general perceptual view, and highlight that expert related activity is: (i) found throughout visual cortex, not just FFA, with a strong relationship between neural response and behavioral expertise even in the earliest stages of visual processing, (ii) found outside visual cortex in areas such as parietal and prefrontal cortices, and (iii) modulated by the attentional engagement of the observer suggesting that it is neither automatic nor driven solely by stimulus properties. These findings strongly support a framework in which object expertise emerges from extensive interactions within and between the visual system and other cognitive systems, resulting in widespread, distributed patterns of expertise-related activity across the entire cortex. PMID:24409134
On lamps, walls, and eyes: The spectral radiance field and the evaluation of light pollution indoors
NASA Astrophysics Data System (ADS)
Bará, Salvador; Escofet, Jaume
2018-01-01
Light plays a key role in the regulation of different physiological processes, through several visual and non-visual retinal phototransduction channels whose basic features are being unveiled by recent research. The growing body of evidence on the significance of these effects has sparked a renewed interest in the determination of the light field at the entrance pupil of the eye in indoor spaces. Since photic interactions are strongly wavelength-dependent, a significant effort is being devoted to assess the relative merits of the spectra of the different types of light sources available for use at home and in the workplace. The spectral content of the light reaching the observer's eyes in indoor spaces, however, does not depend exclusively on the sources: it is partially modulated by the spectral reflectance of the walls and surrounding surfaces, through the multiple reflections of the light beams along all possible paths from the source to the observer. This modulation can modify significantly the non-visual photic inputs that would be produced by the lamps alone, and opens the way for controlling, to a certain extent, the subject's exposure to different regions of the optical spectrum. In this work we evaluate the expected magnitude of this effect and we show that, for factorizable sources, the spectral modulation can be conveniently described in terms of a set of effective filter-like functions that provide useful insights for lighting design and light pollution assessment. The radiance field also provides a suitable bridge between indoor and outdoor light pollution studies.
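One simple way to picture such an effective filter-like function for a factorizable source is an integrating-sphere-style multiple-reflection series; the sketch below is a toy model of that idea (with an assumed average wall reflectance and coupling factor), not the paper's formulation.

```python
# Sketch of the "effective filter" idea for a factorizable source (a toy model,
# not the paper's formulation): with an average wall reflectance rho(lambda) and a
# geometric coupling factor f, summing the multiple-reflection series gives a
# wavelength-dependent multiplier on the lamp spectrum S(lambda).
import numpy as np

wavelength = np.linspace(400, 700, 7)                    # nm
S = np.ones_like(wavelength)                             # flat lamp spectrum, for clarity
rho = 0.2 + 0.6 * (wavelength - 400) / 300               # reddish wall: reflectance rises with wavelength
f = 0.5                                                  # fraction of reflected light reaching the eye region

indirect_filter = f * rho / (1 - f * rho)                # geometric series sum_{k>=1} (f*rho)^k
spectrum_at_eye = S * (1 + indirect_filter)              # direct + wall-modulated contribution

for w, m in zip(wavelength, 1 + indirect_filter):
    print(f"{w:5.0f} nm: effective multiplier {m:.2f}")
```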
Postdictive modulation of visual orientation.
Kawabe, Takahiro
2012-01-01
The present study investigated how visual orientation is modulated by subsequent orientation inputs. Observers were presented a near-vertical Gabor patch as a target, followed by a left- or right-tilted second Gabor patch as a distracter in the spatial vicinity of the target. The task of the observers was to judge whether the target was right- or left-tilted (Experiment 1) or whether the target was vertical or not (Supplementary experiment). The judgment was biased toward the orientation of the distracter (the postdictive modulation of visual orientation). The judgment bias peaked when the target and distracter were temporally separated by 100 ms, indicating a specific temporal mechanism for this phenomenon. However, when the visibility of the distracter was reduced via backward masking, the judgment bias disappeared. On the other hand, the low-visibility distracter could still cause a simultaneous orientation contrast, indicating that the distracter orientation is still processed in the visual system (Experiment 2). Our results suggest that the postdictive modulation of visual orientation stems from spatiotemporal integration of visual orientation on the basis of a slow feature matching process.
Jao Keehn, R Joanne; Sanchez, Sandra S; Stewart, Claire R; Zhao, Weiqi; Grenesko-Stevens, Emily L; Keehn, Brandon; Müller, Ralph-Axel
2017-01-01
Autism spectrum disorders (ASD) are pervasive developmental disorders characterized by impairments in language development and social interaction, along with restricted and stereotyped behaviors. These behaviors often include atypical responses to sensory stimuli; some children with ASD are easily overwhelmed by sensory stimuli, while others may seem unaware of their environment. Vision and audition are two sensory modalities important for social interactions and language, and are differentially affected in ASD. In the present study, 16 children and adolescents with ASD and 16 typically developing (TD) participants matched for age, gender, nonverbal IQ, and handedness were tested using a mixed event-related/blocked functional magnetic resonance imaging paradigm to examine basic perceptual processes that may form the foundation for later-developing cognitive abilities. Auditory (high or low pitch) and visual conditions (dot located high or low in the display) were presented, and participants indicated whether the stimuli were "high" or "low." Results for the auditory condition showed downregulated activity of the visual cortex in the TD group, but upregulation in the ASD group. This atypical activity in visual cortex was associated with autism symptomatology. These findings suggest atypical crossmodal (auditory-visual) modulation linked to sociocommunicative deficits in ASD, in agreement with the general hypothesis of low-level sensorimotor impairments affecting core symptomatology. Autism Res 2017, 10: 130-143. © 2016 International Society for Autism Research, Wiley Periodicals, Inc.
Kellermann, Tanja S; Bonilha, Leonardo; Eskandari, Ramin; Garcia-Ramos, Camille; Lin, Jack J; Hermann, Bruce P
2016-10-01
Normal cognitive function is defined by harmonious interaction among multiple neuropsychological domains. Epilepsy has a disruptive effect on cognition, but how diverse cognitive abilities differentially interact with one another compared with healthy controls (HC) is unclear. This study used graph theory to analyze the community structure of cognitive networks in adults with temporal lobe epilepsy (TLE) compared with that in HC. Neuropsychological assessment was performed in 100 patients with TLE and 82 HC. For each group, an adjacency matrix was constructed representing pair-wise correlation coefficients between raw scores obtained in each possible test combination. For each cognitive network, each node corresponded to a cognitive test; each link corresponded to the correlation coefficient between tests. Global network structure, community structure, and node-wise graph theory properties were qualitatively assessed. The community structure in patients with TLE was composed of fewer, larger, more mixed modules, characterizing three main modules representing close relationships between the following: 1) aspects of executive function (EF), verbal and visual memory, 2) speed and fluency, and 3) speed, EF, perception, language, intelligence, and nonverbal memory. Conversely, controls exhibited a relative division between cognitive functions, segregating into more numerous, smaller modules consisting of the following: 1) verbal memory, 2) language, perception, and intelligence, 3) speed and fluency, and 4) visual memory and EF. Overall node-wise clustering coefficient and efficiency were increased in TLE. Adults with TLE demonstrate a less clear and poorly structured segregation between multiple cognitive domains. This panorama suggests a higher degree of interdependency across multiple cognitive domains in TLE, possibly indicating compensatory mechanisms to overcome functional impairments. Copyright © 2016 Elsevier Inc. All rights reserved.
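The network construction underlying this analysis, an adjacency matrix of pairwise correlations between test scores thresholded into a graph, can be sketched with networkx; the scores below are simulated and ClusterONE module detection is not reproduced.

```python
# Sketch of the correlation-network construction (hypothetical scores; the study used
# ClusterONE for module detection, which is not reproduced here).
import numpy as np
import networkx as nx

rng = np.random.default_rng(5)
n_subjects, n_tests = 100, 12
domains = rng.integers(0, 4, size=n_tests)             # hypothetical cognitive domains
latent = rng.normal(size=(n_subjects, 4))
scores = latent[:, domains] + 0.8 * rng.normal(size=(n_subjects, n_tests))

corr = np.corrcoef(scores, rowvar=False)               # test-by-test correlation matrix

G = nx.Graph()
G.add_nodes_from(range(n_tests))
for i in range(n_tests):
    for j in range(i + 1, n_tests):
        if abs(corr[i, j]) > 0.2:                      # illustrative threshold
            G.add_edge(i, j, weight=abs(corr[i, j]))

print("node-wise clustering coefficients:", {n: round(c, 2) for n, c in nx.clustering(G).items()})
print("global efficiency:", round(nx.global_efficiency(G), 2))
```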
Functional modular architecture underlying attentional control in aging.
Monge, Zachary A; Geib, Benjamin R; Siciliano, Rachel E; Packard, Lauren E; Tallman, Catherine W; Madden, David J
2017-07-15
Previous research suggests that age-related differences in attention reflect the interaction of top-down and bottom-up processes, but the cognitive and neural mechanisms underlying this interaction remain an active area of research. Here, within a sample of community-dwelling adults 19-78 years of age, we used diffusion reaction time (RT) modeling and multivariate functional connectivity to investigate the behavioral components and whole-brain functional networks, respectively, underlying bottom-up and top-down attentional processes during conjunction visual search. During functional MRI scanning, participants completed a conjunction visual search task in which each display contained one item that was larger than the other items (i.e., a size singleton) but was not informative regarding target identity. This design allowed us to examine in the RT components and functional network measures the influence of (a) additional bottom-up guidance when the target served as the size singleton, relative to when the distractor served as the size singleton (i.e., size singleton effect) and (b) top-down processes during target detection (i.e., target detection effect; target present vs. absent trials). We found that the size singleton effect (i.e., increased bottom-up guidance) was associated with RT components related to decision and nondecision processes, but these effects did not vary with age. Also, a modularity analysis revealed that frontoparietal module connectivity was important for both the size singleton and target detection effects, but this module became central to the networks through different mechanisms for each effect. Lastly, participants 42 years of age and older, in service of the target detection effect, relied more on between-frontoparietal module connections. Our results further elucidate mechanisms through which frontoparietal regions support attentional control and how these mechanisms vary in relation to adult age. Copyright © 2017 Elsevier Inc. All rights reserved.
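A minimal drift-diffusion sketch illustrates the kind of RT decomposition referred to above (decision versus non-decision components); parameter values are hypothetical and this is not the fitted model from the study.

```python
# Toy drift-diffusion sketch (not the fitted model): evidence accumulates with drift v
# and noise until it reaches one of two bounds; decision and non-decision components
# together give the simulated reaction time. Parameter values are hypothetical.
import numpy as np

def simulate_rt(v=0.25, a=1.0, ter=0.3, dt=0.001, sigma=1.0, rng=np.random.default_rng(9)):
    """Return (reaction time in s, correct?) for one trial of a two-bound diffusion."""
    x, t = a / 2, 0.0                     # start halfway between bounds 0 and a
    while 0.0 < x < a:
        x += v * dt + sigma * np.sqrt(dt) * rng.normal()
        t += dt
    return t + ter, x >= a                # add non-decision time (encoding + motor)

trials = [simulate_rt() for _ in range(200)]
mean_rt = np.mean([rt for rt, _ in trials])
accuracy = np.mean([ok for _, ok in trials])
print(f"mean RT = {mean_rt:.2f} s, accuracy = {accuracy:.2f}")
```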
Pannebakker, Merel M; Jolicœur, Pierre; van Dam, Wessel O; Band, Guido P H; Ridderinkhof, K Richard; Hommel, Bernhard
2011-09-01
Dual tasks and their associated delays have often been used to examine the boundaries of processing in the brain. We used the dual-task procedure and recorded event-related potentials (ERPs) to investigate how mental rotation of a first stimulus (S1) influences the shifting of visual-spatial attention to a second stimulus (S2). Visual-spatial attention was monitored by using the N2pc component of the ERP. In addition, we examined the sustained posterior contralateral negativity (SPCN) believed to index the retention of information in visual short-term memory. We found modulations of both the N2pc and the SPCN, suggesting that engaging mechanisms of mental rotation impairs the deployment of visual-spatial attention and delays the passage of a representation of S2 into visual short-term memory. Both results suggest interactions between mental rotation and visual-spatial attention in capacity-limited processing mechanisms indicating that response selection is not pivotal in dual-task delays and all three processes are likely to share a common resource like executive control. Copyright © 2011 Elsevier Ltd. All rights reserved.
Attention Priority Map of Face Images in Human Early Visual Cortex.
Mo, Ce; He, Dongjun; Fang, Fang
2018-01-03
Attention priority maps are topographic representations that are used for attention selection and guidance of task-related behavior during visual processing. Previous studies have identified attention priority maps of simple artificial stimuli in multiple cortical and subcortical areas, but investigating neural correlates of priority maps of natural stimuli is complicated by the complexity of their spatial structure and the difficulty of behaviorally characterizing their priority map. To overcome these challenges, we reconstructed the topographic representations of upright/inverted face images from fMRI BOLD signals in human early visual areas primary visual cortex (V1) and the extrastriate cortex (V2 and V3) based on a voxelwise population receptive field model. We characterized the priority map behaviorally as the first saccadic eye movement pattern when subjects performed a face-matching task relative to the condition in which subjects performed a phase-scrambled face-matching task. We found that the differential first saccadic eye movement pattern between upright/inverted and scrambled faces could be predicted from the reconstructed topographic representations in V1-V3 in humans of either sex. The coupling between the reconstructed representation and the eye movement pattern increased from V1 to V2/3 for the upright faces, whereas no such effect was found for the inverted faces. Moreover, face inversion modulated the coupling in V2/3, but not in V1. Our findings provide new evidence for priority maps of natural stimuli in early visual areas and extend traditional attention priority map theories by revealing another critical factor that affects priority maps in extrastriate cortex in addition to physical salience and task goal relevance: image configuration. SIGNIFICANCE STATEMENT Prominent theories of attention posit that attention sampling of visual information is mediated by a series of interacting topographic representations of visual space known as attention priority maps. Until now, neural evidence of attention priority maps has been limited to studies involving simple artificial stimuli and much remains unknown about the neural correlates of priority maps of natural stimuli. Here, we show that attention priority maps of face stimuli could be found in primary visual cortex (V1) and the extrastriate cortex (V2 and V3). Moreover, representations in extrastriate visual areas are strongly modulated by image configuration. These findings extend our understanding of attention priority maps significantly by showing that they are modulated, not only by physical salience and task-goal relevance, but also by the configuration of stimuli images. Copyright © 2018 the authors 0270-6474/18/380149-09$15.00/0.
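The voxelwise pRF forward model can be sketched compactly: each voxel is summarized by a 2-D Gaussian whose predicted response is its overlap with the stimulus aperture; the aperture, the parameters, and the omission of HRF convolution below are simplifications for illustration.

```python
# Minimal forward sketch of a voxelwise population receptive field (pRF) model:
# each voxel is summarized by a 2-D Gaussian (x0, y0, sigma), and its predicted
# response to a binary stimulus aperture is the overlap of aperture and Gaussian.
# Parameters and apertures here are hypothetical; HRF convolution is omitted.
import numpy as np

grid = np.linspace(-10, 10, 101)                  # degrees of visual angle
X, Y = np.meshgrid(grid, grid)

def prf_response(aperture, x0, y0, sigma):
    rf = np.exp(-((X - x0)**2 + (Y - y0)**2) / (2 * sigma**2))
    return float((aperture * rf).sum() / rf.sum())

# Hypothetical face-shaped aperture approximated by an oval in the upper visual field
aperture = (((X / 4)**2 + ((Y - 3) / 5)**2) < 1).astype(float)

for (x0, y0) in [(0, 3), (0, -3), (6, 0)]:        # three example voxel pRF centers
    print(f"pRF at ({x0:+d},{y0:+d}) deg -> predicted response {prf_response(aperture, x0, y0, 2.0):.2f}")
```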
Coherent modulation of stimulus colour can affect visually induced self-motion perception.
Nakamura, Shinji; Seno, Takeharu; Ito, Hiroyuki; Sunaga, Shoji
2010-01-01
The effects of dynamic colour modulation on vection were investigated to examine whether perceived variation of illumination affects self-motion perception. Participants observed expanding optic flow which simulated their forward self-motion. Onset latency, accumulated duration, and estimated magnitude of the self-motion were measured as indices of vection strength. Colour of the dots in the visual stimulus was modulated between white and red (experiment 1), white and grey (experiment 2), and grey and red (experiment 3). The results indicated that coherent colour oscillation in the visual stimulus significantly suppressed the strength of vection, whereas incoherent or static colour modulation did not affect vection. There was no effect of the types of the colour modulation; both achromatic and chromatic modulations turned out to be effective in inhibiting self-motion perception. Moreover, in a situation where the simulated direction of a spotlight was manipulated dynamically, vection strength was also suppressed (experiment 4). These results suggest that observer's perception of illumination is critical for self-motion perception, and rapid variation of perceived illumination would impair the reliabilities of visual information in determining self-motion.
The rhodopsin-arrestin-1 interaction in bicelles.
Chen, Qiuyan; Vishnivetskiy, Sergey A; Zhuang, Tiandi; Cho, Min-Kyu; Thaker, Tarjani M; Sanders, Charles R; Gurevich, Vsevolod V; Iverson, T M
2015-01-01
G-protein-coupled receptors (GPCRs) are essential mediators of information transfer in eukaryotic cells. Interactions between GPCRs and their binding partners modulate the signaling process. For example, the interaction between GPCR and cognate G protein initiates the signal, while the interaction with cognate arrestin terminates G-protein-mediated signaling. In visual signal transduction, arrestin-1 selectively binds to the phosphorylated light-activated GPCR rhodopsin to terminate rhodopsin signaling. Under physiological conditions, the rhodopsin-arrestin-1 interaction occurs in highly specialized disk membrane in which rhodopsin resides. This membrane is replaced with mimetics when working with purified proteins. While detergents are commonly used as membrane mimetics, most detergents denature arrestin-1, preventing biochemical studies of this interaction. In contrast, bicelles provide a suitable alternative medium. An advantage of bicelles is that they contain lipids, which have been shown to be necessary for normal rhodopsin-arrestin-1 interaction. Here we describe how to reconstitute rhodopsin into bicelles, and how bicelle properties affect the rhodopsin-arrestin-1 interaction.
Investigation of candidate genes for osteoarthritis based on gene expression profiles.
Dong, Shuanghai; Xia, Tian; Wang, Lei; Zhao, Qinghua; Tian, Jiwei
2016-12-01
The aim was to explore the mechanism of osteoarthritis (OA) and provide valid biological information for further investigation. The gene expression profile GSE46750 was downloaded from the Gene Expression Omnibus database. The Linear Models for Microarray Data (limma) package (Bioconductor project, http://www.bioconductor.org/packages/release/bioc/html/limma.html) was used to identify differentially expressed genes (DEGs) in inflamed OA samples. Gene Ontology function enrichment analysis and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment analysis of DEGs were performed based on Database for Annotation, Visualization and Integrated Discovery data, and a protein-protein interaction (PPI) network was constructed based on the Search Tool for the Retrieval of Interacting Genes/Proteins database. A regulatory network was screened based on the Encyclopedia of DNA Elements. Molecular Complex Detection was used for sub-network screening. The two sub-networks with the highest node degrees were integrated with the transcriptional regulatory network, and KEGG functional enrichment analysis was performed for the two modules. In total, 401 up- and 196 down-regulated DEGs were obtained. Up-regulated DEGs were involved in the inflammatory response, while down-regulated DEGs were involved in the cell cycle. A PPI network with 2392 protein interactions was constructed. Moreover, 10 genes, including Interleukin 6 (IL6) and Aurora B kinase (AURKB), were found to be prominent hub nodes in the PPI network. There were 214 up- and 8 down-regulated transcription factor (TF)-target pairs in the TF regulatory network. Module 1 had TFs including SPI1, PRDM1, and FOS, while module 2 contained FOSL1. The nodes in module 1 were enriched in the chemokine signaling pathway, while the nodes in module 2 were mainly enriched in the cell cycle. The screened DEGs, including IL6, AGT, and AURKB, might be potential biomarkers for gene therapy of OA, being regulated by TFs such as FOS and SPI1 and participating in the cell cycle and cytokine-cytokine receptor interaction pathways. Copyright © 2016 Turkish Association of Orthopaedics and Traumatology. Production and hosting by Elsevier B.V. All rights reserved.
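The pipeline above relies on R/Bioconductor and web tools (limma, DAVID, STRING, MCODE); as a rough illustration of its shape only, the Python sketch below screens DEGs with a simple Welch t-test plus Benjamini-Hochberg correction and ranks hub genes by degree in a toy PPI graph. The function names and data are hypothetical, and the statistics are simplified stand-ins for the moderated limma model.

```python
import numpy as np
import networkx as nx
from scipy import stats

def find_degs(expr_case, expr_ctrl, alpha=0.05, lfc=1.0):
    """Simple DEG screen: Welch t-test per gene + Benjamini-Hochberg correction.
    expr_case, expr_ctrl: (n_genes, n_samples) log2 expression matrices."""
    t, p = stats.ttest_ind(expr_case, expr_ctrl, axis=1, equal_var=False)
    logfc = expr_case.mean(axis=1) - expr_ctrl.mean(axis=1)
    order = np.argsort(p)
    ranks = np.empty_like(order)
    ranks[order] = np.arange(1, len(p) + 1)
    p_bh = np.minimum(1.0, p * len(p) / ranks)   # BH-adjusted p-values (simplified)
    return (p_bh < alpha) & (np.abs(logfc) > lfc)

def hub_genes(ppi_edges, top_n=10):
    """Rank genes by degree in a PPI network (a stand-in for 'hub' nodes)."""
    g = nx.Graph(ppi_edges)
    return sorted(g.degree, key=lambda kv: kv[1], reverse=True)[:top_n]

# Hypothetical toy data
rng = np.random.default_rng(1)
degs = find_degs(rng.normal(1, 1, (500, 12)), rng.normal(0, 1, (500, 12)))
print(int(degs.sum()), hub_genes([("IL6", "AURKB"), ("IL6", "AGT"), ("AGT", "FOS")]))
```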
Applications of systems approaches in the study of rheumatic diseases.
Kim, Ki-Jo; Lee, Saseong; Kim, Wan-Uk
2015-03-01
The complex interaction of molecules within a biological system constitutes a functional module. These modules are then acted upon by both internal and external factors, such as genetic and environmental stresses, which under certain conditions can manifest as complex disease phenotypes. Recent advances in high-throughput biological analyses, in combination with improved computational methods for data enrichment, functional annotation, and network visualization, have enabled a much deeper understanding of the mechanisms underlying important biological processes by identifying functional modules that are temporally and spatially perturbed in the context of disease development. Systems biology approaches such as these have produced compelling observations that would be impossible to replicate using classical methodologies, with greater insights expected as both the technology and methods improve in the coming years. Here, we examine the use of systems biology and network analysis in the study of a wide range of rheumatic diseases to better understand the underlying molecular and clinical features.
NASA Astrophysics Data System (ADS)
Pani, R.; Gonzalez, A. J.; Bettiol, M.; Fabbri, A.; Cinti, M. N.; Preziosi, E.; Borrazzo, C.; Conde, P.; Pellegrini, R.; Di Castro, E.; Majewski, S.
2015-06-01
The Mindview European Project concerns the development of a very high-resolution, high-efficiency, brain-dedicated PET scanner that operates simultaneously with a magnetic resonance scanner and aims to visualize neurotransmitter pathways and their disruptions in the quest to better diagnose schizophrenia. Within this project, we propose a low-cost PET module for the first prototype, based on monolithic crystals and suitable for integration with a head radio frequency (RF) coil. The aim of the proposed module is to achieve high performance in terms of efficiency, planar spatial resolution (expected to be about 1 mm), and discrimination of gamma depth of interaction (DOI) in order to reduce the parallax error. Our preliminary results are very promising: a DOI resolution of about 3 mm, a spatial resolution ranging from about 1 to 1.5 mm, and good positional linearity.
Age-related audiovisual interactions in the superior colliculus of the rat.
Costa, M; Piché, M; Lepore, F; Guillemot, J-P
2016-04-21
It is well established that multisensory integration is a functional characteristic of the superior colliculus that disambiguates external stimuli and therefore reduces the reaction times toward simple audiovisual targets in space. However, in a condition where a complex audiovisual stimulus is used, such as optical flow in the presence of modulated audio signals, little is known about how multisensory integration is processed in the superior colliculus. Furthermore, since visual and auditory deficits constitute hallmark signs of aging, we sought to gain some insight into whether audiovisual processes in the superior colliculus are altered with age. Extracellular single-unit recordings were conducted in the superior colliculus of anesthetized Sprague-Dawley adult (10-12 months) and aged (21-22 months) rats. Looming circular concentric sinusoidal (CCS) gratings were presented alone and in the presence of sinusoidally amplitude-modulated white noise. In both groups of rats, two different audiovisual response interactions were encountered in the spatial domain: superadditive and suppressive. In contrast, additive audiovisual interactions were found only in adult rats. Hence, superior colliculus audiovisual interactions were more numerous in adult rats (38%) than in aged rats (8%). These results suggest that intersensory interactions in the superior colliculus play an essential role in space processing toward audiovisual moving objects during self-motion. Moreover, aging has a deleterious effect on complex audiovisual interactions. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.
Interactive multimedia for prenatal ultrasound training.
Lee, W; Ault, H; Kirk, J S; Comstock, C H
1995-01-01
This demonstration project examines the utility of interactive multimedia for prenatal ultrasound training. A laser-disc library was linked to a three-dimensional (3-D) heart model and other computer-based training materials through interactive multimedia. A testing module presented ultrasound anomalies and related questions to house-staff physicians through the image library. Users were asked to evaluate these training materials on the basis of perceived instructional value, question content, subjects covered, graphics interface, and ease of use; users were also asked for their comments. House-staff physicians indicated that they consider interactive multimedia to be a helpful adjunct to their core fetal imaging rotation. During a 9-month period, 16 house-staff physicians correctly diagnosed 78 +/- 4% of unknown cases presented through the testing module. The 3-D heart model was also perceived to be a useful teaching aid for spatial orientation skills. Our findings suggest that interactive multimedia and volume visualization models can be used to supplement traditional prenatal ultrasound training. The system provides a broad exposure to ultrasound anomalies, increases opportunities for postnatal correlation, emphasizes motion video for ultrasound training, encourages development of independent diagnostic ability, and helps physicians understand anatomic orientation. We hypothesize that interactive multimedia-based tutorials provide a better overall training experience for house-staff physicians. However, these supplementary methods will require formal evaluation of effectiveness to better understand their potential educational impact.
Interaction between dorsal and ventral processing streams: where, when and how?
Cloutman, Lauren L
2013-11-01
The execution of complex visual, auditory, and linguistic behaviors requires a dynamic interplay between spatial ('where/how') and non-spatial ('what') information processed along the dorsal and ventral processing streams. However, while it is acknowledged that there must be some degree of interaction between the two processing networks, how they interact, both anatomically and functionally, is a question which remains little explored. The current review examines the anatomical, temporal, and behavioral evidence regarding three potential models of dual stream interaction: (1) computations along the two pathways proceed independently and in parallel, reintegrating within shared target brain regions; (2) processing along the separate pathways is modulated by the existence of recurrent feedback loops; and (3) information is transferred directly between the two pathways at multiple stages and locations along their trajectories. Copyright © 2012 Elsevier Inc. All rights reserved.
Emotional modulation of visual remapping of touch.
Cardini, Flavia; Bertini, Caterina; Serino, Andrea; Ladavas, Elisabetta
2012-10-01
The perception of tactile stimuli on the face is modulated if subjects concurrently observe a face being touched; this effect is termed "visual remapping of touch" or the VRT effect. Given the high social value of this mechanism, we investigated whether it might be modulated by specific key information processed in face-to-face interactions: facial emotional expression. In two separate experiments, participants received tactile stimuli, near the perceptual threshold, either on their right, left, or both cheeks. Concurrently, they watched several blocks of movies depicting a face with a neutral, happy, or fearful expression that was touched or just approached by human fingers (Experiment 1). Participants were asked to distinguish between unilateral and bilateral felt tactile stimulation. Tactile perception was enhanced when viewing touch toward a fearful face compared with viewing touch toward the other two expressions. In order to test whether this result can be generalized to other negative emotions or whether it is a fear-specific effect, we ran a second experiment, in which participants watched movies of faces (touched or approached by fingers) with either a fearful or an angry expression (Experiment 2). In line with the first experiment, tactile perception was enhanced when subjects viewed touch toward a fearful face and not toward an angry face. The results of the present experiments are interpreted in light of the different mechanisms underlying the recognition of different emotions, with a specific involvement of the somatosensory system when viewing a fearful expression and a resulting fear-specific modulation of the VRT effect.
The mere exposure effect is modulated by selective attention but not visual awareness.
Huang, Yu-Feng; Hsieh, Po-Jang
2013-10-18
Repeated exposures to an object will lead to an enhancement of evaluation toward that object. Although this mere exposure effect may occur when the objects are presented subliminally, the role of conscious perception per se on evaluation has never been examined. Here we use a binocular rivalry paradigm to investigate whether a variance in conscious perceptual duration of faces has an effect on their subsequent evaluation, and how selective attention and memory interact with this effect. Our results show that face evaluation is positively biased by selective attention but not affected by visual awareness. Furthermore, this effect is not due to participants recalling which face had been attended to. Copyright © 2013 Elsevier Ltd. All rights reserved.
Harjunen, Ville J; Ahmed, Imtiaj; Jacucci, Giulio; Ravaja, Niklas; Spapé, Michiel M
2017-01-01
Earlier studies have revealed cross-modal visuo-tactile interactions in endogenous spatial attention. The current research used event-related potentials (ERPs) and virtual reality (VR) to identify how the visual cues of the perceiver's body affect visuo-tactile interaction in endogenous spatial attention and at what point in time the effect takes place. A bimodal oddball task with lateralized tactile and visual stimuli was presented in two VR conditions, one with and one without visible hands, and one VR-free control with hands in view. Participants were required to silently count one type of stimulus and ignore all other stimuli presented in an irrelevant modality or location. The presence of hands was found to modulate early and late components of somatosensory and visual evoked potentials. For sensory-perceptual stages, the presence of virtual or real hands was found to amplify attention-related negativity on the somatosensory N140 and cross-modal interaction in the somatosensory and visual P200. For postperceptual stages, an amplified N200 component was obtained in somatosensory and visual evoked potentials, indicating increased response inhibition in response to non-target stimuli. The somatosensory, but not the visual, N200 effect was enhanced when the virtual hands were present. The findings suggest that bodily presence affects sustained cross-modal spatial attention between vision and touch and that this effect is specifically present in ERPs related to early- and late-sensory processing, as well as response inhibition, but does not affect later attention- and memory-related P3 activity. Finally, the experiments provide commensurable scenarios for the estimation of the signal-to-noise ratio to quantify effects related to the use of a head mounted display (HMD). However, despite valid a priori reasons for fearing signal interference due to an HMD, we observed no significant drop in the robustness of our ERP measurements.
Rinne, Teemu; Muers, Ross S; Salo, Emma; Slater, Heather; Petkov, Christopher I
2017-06-01
The cross-species correspondences and differences in how attention modulates brain responses in humans and animal models are poorly understood. We trained 2 monkeys to perform an audio-visual selective attention task during functional magnetic resonance imaging (fMRI), rewarding them to attend to stimuli in one modality while ignoring those in the other. Monkey fMRI identified regions strongly modulated by auditory or visual attention. Surprisingly, auditory attention-related modulations were much more restricted in monkeys than humans performing the same tasks during fMRI. Further analyses ruled out trivial explanations, suggesting that labile selective-attention performance was associated with inhomogeneous modulations in wide cortical regions in the monkeys. The findings provide initial insights into how audio-visual selective attention modulates the primate brain, identify sources for "lost" attention effects in monkeys, and carry implications for modeling the neurobiology of human cognition with nonhuman animals. © The Author 2017. Published by Oxford University Press.
Urooj, Uzma; Cornelissen, Piers L; Simpson, Michael I G; Wheat, Katherine L; Woods, Will; Barca, Laura; Ellis, Andrew W
2014-02-15
The age of acquisition (AoA) of objects and their names is a powerful determinant of processing speed in adulthood, with early-acquired objects being recognized and named faster than late-acquired objects. Previous research using fMRI (Ellis et al., 2006. Traces of vocabulary acquisition in the brain: evidence from covert object naming. NeuroImage 33, 958-968) found that AoA modulated the strength of BOLD responses in both occipital and left anterior temporal cortex during object naming. We used magnetoencephalography (MEG) to explore in more detail the nature of the influence of AoA on activity in those two regions. Covert object naming recruited a network within the left hemisphere that is familiar from previous research, including visual, left occipito-temporal, anterior temporal and inferior frontal regions. Region of interest (ROI) analyses found that occipital cortex generated a rapid evoked response (~75-200 ms at 0-40 Hz) that peaked at 95 ms but was not modulated by AoA. That response was followed by a complex of later occipital responses that extended from ~300 to 850 ms and were stronger to early- than late-acquired items from ~325 to 675 ms at 10-20 Hz in the induced rather than the evoked component. Left anterior temporal cortex showed an evoked response that occurred significantly later than the first occipital response (~100-400 ms at 0-10 Hz with a peak at 191 ms) and was stronger to early- than late-acquired items from ~100 to 300 ms at 2-12 Hz. A later anterior temporal response from ~550 to 1050 ms at 5-20 Hz was not modulated by AoA. The results indicate that the initial analysis of object forms in visual cortex is not influenced by AoA. A fast forward sweep of activation from occipital to left anterior temporal cortex then results in stronger activation of semantic representations for early- than late-acquired objects. Top-down re-activation of occipital cortex by semantic representations is then greater for early- than late-acquired objects, resulting in delayed modulation of the visual response. Copyright © 2013 Elsevier Inc. All rights reserved.
FamNet: A Framework to Identify Multiplied Modules Driving Pathway Expansion in Plants
Tohge, Takayuki; Klie, Sebastian; Fernie, Alisdair R.
2016-01-01
Gene duplications generate new genes that can acquire similar but often diversified functions. Recent studies of gene coexpression networks have indicated that, not only genes, but also pathways can be multiplied and diversified to perform related functions in different parts of an organism. Identification of such diversified pathways, or modules, is needed to expand our knowledge of biological processes in plants and to understand how biological functions evolve. However, systematic explorations of modules remain scarce, and no user-friendly platform to identify them exists. We have established a statistical framework to identify modules and show that approximately one-third of the genes of a plant’s genome participate in hundreds of multiplied modules. Using this framework as a basis, we implemented a platform that can explore and visualize multiplied modules in coexpression networks of eight plant species. To validate the usefulness of the platform, we identified and functionally characterized pollen- and root-specific cell wall modules that multiplied to confer tip growth in pollen tubes and root hairs, respectively. Furthermore, we identified multiplied modules involved in secondary metabolite synthesis and corroborated them by metabolite profiling of tobacco (Nicotiana tabacum) tissues. The interactive platform, referred to as FamNet, is available at http://www.gene2function.de/famnet.html. PMID:26754669
Neocortical Rebound Depolarization Enhances Visual Perception
Funayama, Kenta; Ban, Hiroshi; Chan, Allen W.; Matsuki, Norio; Murphy, Timothy H.; Ikegaya, Yuji
2015-01-01
Animals are constantly exposed to the time-varying visual world. Because visual perception is modulated by immediately prior visual experience, visual cortical neurons may register recent visual history into a specific form of offline activity and link it to later visual input. To examine how preceding visual inputs interact with upcoming information at the single neuron level, we designed a simple stimulation protocol in which a brief, orientated flashing stimulus was subsequently coupled to visual stimuli with identical or different features. Using in vivo whole-cell patch-clamp recording and functional two-photon calcium imaging from the primary visual cortex (V1) of awake mice, we discovered that a flash of sinusoidal grating per se induces an early, transient activation as well as a long-delayed reactivation in V1 neurons. This late response, which started hundreds of milliseconds after the flash and persisted for approximately 2 s, was also observed in human V1 electroencephalogram. When another drifting grating stimulus arrived during the late response, the V1 neurons exhibited a sublinear, but apparently increased response, especially to the same grating orientation. In behavioral tests of mice and humans, the flashing stimulation enhanced the detection power of the identically orientated visual stimulation only when the second stimulation was presented during the time window of the late response. Therefore, V1 late responses likely provide a neural basis for admixing temporally separated stimuli and extracting identical features in time-varying visual environments. PMID:26274866
A Hierarchical Visualization Analysis Model of Power Big Data
NASA Astrophysics Data System (ADS)
Li, Yongjie; Wang, Zheng; Hao, Yang
2018-01-01
Based on the concept of integrating VR scenes with power big data analysis, a hierarchical visualization analysis model of power big data is proposed, in which levels are designed to target different abstraction modules such as transaction, engine, computation, control, and storage. The normally separate modules for power data storage, data mining and analysis, and data visualization are integrated into one platform by this model. It provides a visual analysis solution for power big data.
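As a loose illustration of the layered design described above, the Python sketch below composes placeholder store, computation, and visualization layers into a single analysis pipeline. The class and layer names are hypothetical; the abstract does not specify interfaces, so this is only one plausible reading of the architecture.

```python
# A minimal, hypothetical sketch of the layered design described above:
# each layer exposes a narrow interface and hands its result to the next layer.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Layer:
    name: str
    run: Callable[[object], object]

class HierarchicalModel:
    """Compose store -> computation/mining -> visualization layers into one pipeline."""
    def __init__(self, layers: List[Layer]):
        self.layers = layers

    def analyze(self, raw_data):
        result = raw_data
        for layer in self.layers:          # each layer refines the previous layer's output
            result = layer.run(result)
        return result

# Hypothetical usage with placeholder stages
model = HierarchicalModel([
    Layer("store", lambda d: d),                            # persist / fetch raw measurements
    Layer("computation", lambda d: [x * 1.0 for x in d]),   # mining & analysis
    Layer("visualization", lambda d: f"{len(d)} points ready for the VR scene"),
])
print(model.analyze([220.1, 219.8, 221.0]))
```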
Performance degradation of grid-tied photovoltaic modules in a hot-dry climatic condition
NASA Astrophysics Data System (ADS)
Suleske, Adam; Singh, Jaspreet; Kuitche, Joseph; Tamizh-Mani, Govindasamy
2011-09-01
Crystalline silicon photovoltaic (PV) modules under open-circuit conditions typically degrade at a rate of about 0.5% per year. However, it is suspected that modules at the array level may degrade at higher rates, depending on equipment/frame grounding and array grounding, because of higher string voltage and increased module mismatch over the years of operation in the field. This paper compares and analyzes the degradation rates of grid-tied photovoltaic modules operating for 10-17 years in the desert climatic condition of Arizona. The nameplate open-circuit voltages of the arrays ranged between 400 and 450 V. Six different types/models of crystalline silicon modules with glass/glass and glass/polymer constructions were evaluated. About 1865 modules were inspected using an extended visual inspection checklist and infrared (IR) scanning. The visual inspection checklist included encapsulant discoloration, cell/interconnect cracks, delamination, and corrosion. Based on the visual inspection and IR studies, a large fraction of these modules were identified as presumably healthy or unhealthy and were electrically isolated from the system for current-voltage (I-V) measurements of individual modules. The annual degradation rate for each module type was determined based on the I-V measurements.
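The degradation-rate calculation implied above can be illustrated with a short sketch: compare each module's measured maximum power against its nameplate rating and divide the total loss by the years of field exposure. The numbers below are hypothetical, and the metric (nameplate-referenced Pmax loss per year) is one common convention, not necessarily the exact method of the study.

```python
import numpy as np

def annual_degradation_rate(p_nameplate, p_measured, years_fielded):
    """Percent power loss per year for each module, relative to nameplate rating."""
    total_loss_pct = 100.0 * (p_nameplate - p_measured) / p_nameplate
    return total_loss_pct / years_fielded

# Hypothetical module measurements (W) after 15 years in the field
nameplate = np.array([75.0, 75.0, 120.0])
measured  = np.array([68.2, 70.1, 109.5])
rates = annual_degradation_rate(nameplate, measured, 15.0)
print(np.round(rates, 2))   # roughly 0.4-0.6 %/year, in the ballpark quoted above
```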
Zavaglia, Melissa; Hilgetag, Claus C
2016-06-01
Spatial attention is a prime example for the distributed network functions of the brain. Lesion studies in animal models have been used to investigate intact attentional mechanisms as well as perspectives for rehabilitation in the injured brain. Here, we systematically analyzed behavioral data from cooling deactivation and permanent lesion experiments in the cat, where unilateral deactivation of the posterior parietal cortex (in the vicinity of the posterior middle suprasylvian cortex, pMS) or the superior colliculus (SC) cause a severe neglect in the contralateral hemifield. Counterintuitively, additional deactivation of structures in the opposite hemisphere reverses the deficit. Using such lesion data, we employed a game-theoretical approach, multi-perturbation Shapley value analysis (MSA), for inferring functional contributions and network interactions of bilateral pMS and SC from behavioral performance in visual attention studies. The approach provides an objective theoretical strategy for lesion inferences and allows a unique quantitative characterization of regional functional contributions and interactions on the basis of multi-perturbations. The quantitative analysis demonstrated that right posterior parietal cortex and superior colliculus made the strongest positive contributions to left-field orienting, while left brain regions had negative contributions, implying that their perturbation may reverse the effects of contralateral lesions or improve normal function. An analysis of functional modulations and interactions among the regions revealed redundant interactions (implying functional overlap) between regions within each hemisphere, and synergistic interactions between bilateral regions. To assess the reliability of the MSA method in the face of variable and incomplete input data, we performed a sensitivity analysis, investigating how much the contribution values of the four regions depended on the performance of specific configurations and on the prediction of unknown performances. The results suggest that the MSA approach is sensitive to categorical, but insensitive to gradual changes in the input data. Finally, we created a basic network model that was based on the known anatomical interactions among cortical-tectal regions and reproduced the experimentally observed behavior in visual orienting. We discuss the structural organization of the network model relative to the causal modulations identified by MSA, to aid a mechanistic understanding of the attention network of the brain.
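Multi-perturbation Shapley value analysis treats each region as a player and behavioral performance under each perturbation configuration as the value of the corresponding coalition. The sketch below computes exact Shapley values from such a lookup table for a toy two-region case; the region names and performance scores are hypothetical, and this generic calculation is not the authors' MSA implementation.

```python
from itertools import combinations
from math import factorial

def shapley_values(regions, perf):
    """Exact Shapley values: perf maps a frozenset of INTACT regions -> behavioral score."""
    n = len(regions)
    phi = {r: 0.0 for r in regions}
    for r in regions:
        others = [x for x in regions if x != r]
        for k in range(n):
            for subset in combinations(others, k):
                s = frozenset(subset)
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[r] += weight * (perf[s | {r}] - perf[s])   # marginal contribution of r
    return phi

# Hypothetical left-field orienting scores (1 = intact performance) for two regions
regions = ["pMS_R", "SC_R"]
perf = {
    frozenset(): 0.1,                      # both deactivated
    frozenset({"pMS_R"}): 0.6,
    frozenset({"SC_R"}): 0.5,
    frozenset({"pMS_R", "SC_R"}): 1.0,     # fully intact
}
print(shapley_values(regions, perf))
```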
Successes and Failures Teaching Visual Ethics: A Class Study
ERIC Educational Resources Information Center
Roundtree, Aimee Kendall
2010-01-01
This article discusses and evaluates the inclusion of ethics learning modules in a graduate- level visual design theory course. Modules were designed as a part of an NEH grant. Students grappled with case studies that probed the ethics of visuals at the crux of the BP oil refinery accident, NASA space shuttle disasters, the Enron collapse, and…
Odors Bias Time Perception in Visual and Auditory Modalities
Yue, Zhenzhu; Gao, Tianyu; Chen, Lihan; Wu, Jiashuang
2016-01-01
Previous studies have shown that emotional states alter our perception of time. However, attention, which is modulated by a number of factors, such as emotional events, also influences time perception. To exclude potential attentional effects associated with emotional events, various types of odors (inducing different levels of emotional arousal) were used to explore whether olfactory events modulated time perception differently in visual and auditory modalities. Participants either saw a visual dot or heard a continuous tone for 1000 or 4000 ms while they were exposed to odors of jasmine, lavender, or garlic. Participants then reproduced the temporal durations of the preceding visual or auditory stimuli by pressing the spacebar twice. Their reproduced durations were compared to those in the control condition (without odor). The results showed that participants produced significantly longer time intervals in the lavender condition than in the jasmine or garlic conditions. The overall influence of odor on time perception was equivalent for both visual and auditory modalities. The analysis of the interaction effect showed that participants produced longer durations than the actual duration in the short interval condition, but they produced shorter durations in the long interval condition. The effect sizes were larger for the auditory modality than those for the visual modality. Moreover, by comparing performance across the initial and the final blocks of the experiment, we found odor adaptation effects were mainly manifested as longer reproductions for the short time interval later in the adaptation phase, and there was a larger effect size in the auditory modality. In summary, the present results indicate that odors imposed differential impacts on reproduced time durations, and they were constrained by different sensory modalities, valence of the emotional events, and target durations. Biases in time perception could be accounted for by a framework of attentional deployment between the inducers (odors) and emotionally neutral stimuli (visual dots and sound beeps). PMID:27148143
Visualizing Mobility of Public Transportation System.
Zeng, Wei; Fu, Chi-Wing; Arisona, Stefan Müller; Erath, Alexander; Qu, Huamin
2014-12-01
Public transportation systems (PTSs) play an important role in modern cities, providing shared/massive transportation services that are essential for the general public. However, due to their increasing complexity, designing effective methods to visualize and explore PTS is highly challenging. Most existing techniques employ network visualization methods and focus on showing the network topology across stops while ignoring various mobility-related factors such as riding time, transfer time, waiting time, and round-the-clock patterns. This work aims to visualize and explore passenger mobility in a PTS with a family of analytical tasks based on inputs from transportation researchers. After exploring different design alternatives, we come up with an integrated solution with three visualization modules: isochrone map view for geographical information, isotime flow map view for effective temporal information comparison and manipulation, and OD-pair journey view for detailed visual analysis of mobility factors along routes between specific origin-destination pairs. The isotime flow map linearizes a flow map into a parallel isoline representation, maximizing the visualization of mobility information along the horizontal time axis while presenting clear and smooth pathways from origin to destinations. Moreover, we devise several interactive visual query methods for users to easily explore the dynamics of PTS mobility over space and time. Lastly, we also construct a PTS mobility model from millions of real passenger trajectories, and evaluate our visualization techniques with assorted case studies with the transportation researchers.
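Underlying an isochrone map is a travel-time computation from an origin stop over the transit network. The sketch below is a small networkx illustration (not the paper's system): stop-to-stop edges carry riding-plus-waiting times, single-source Dijkstra gives total travel times, and stops are grouped into time bands. The stop names, weights, and band cutoffs are hypothetical.

```python
import networkx as nx

# Hypothetical stop-to-stop travel times (minutes): riding time + expected waiting time
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("A", "B", 4), ("B", "C", 6), ("A", "D", 9),
    ("D", "C", 3), ("C", "E", 5), ("B", "E", 12),
])

def isochrone_bands(graph, origin, bands=(10, 20, 30)):
    """Group reachable stops by total travel time from the origin (Dijkstra)."""
    times = nx.single_source_dijkstra_path_length(graph, origin, weight="weight")
    grouped = {b: [] for b in bands}
    for stop, t in times.items():
        for b in bands:
            if t <= b:
                grouped[b].append(stop)
                break
    return grouped

print(isochrone_bands(G, "A"))   # e.g., {10: [...], 20: [...], 30: [...]}
```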
ERIC Educational Resources Information Center
Melaku, Samuel; Schreck, James O.; Griffin, Kameron; Dabke, Rajeev B.
2016-01-01
Interlocking toy building blocks (e.g., Lego) as chemistry learning modules for blind and visually impaired (BVI) students in high school and undergraduate introductory or general chemistry courses are presented. Building blocks were assembled on a baseplate to depict the relative changes in the periodic properties of elements. Modules depicting…
Dagnino, Bruno; Gariel-Mathis, Marie-Alice; Roelfsema, Pieter R
2015-02-01
Previous transcranial magnetic stimulation (TMS) studies suggested that feedback from higher to lower areas of the visual cortex is important for the access of visual information to awareness. However, the influence of cortico-cortical feedback on awareness and the nature of the feedback effects are not yet completely understood. In the present study, we used electrical microstimulation in the visual cortex of monkeys to test the hypothesis that cortico-cortical feedback plays a role in visual awareness. We investigated the interactions between the primary visual cortex (V1) and area V4 by applying microstimulation in both cortical areas at various delays. We report that the monkeys detected the phosphenes produced by V1 microstimulation but subthreshold V4 microstimulation did not influence V1 phosphene detection thresholds. A second experiment examined the influence of V4 microstimulation on the monkeys' ability to detect the dimming of one of three peripheral visual stimuli. Again, microstimulation of a group of V4 neurons failed to modulate the monkeys' perception of a stimulus in their receptive field. We conclude that conditions exist where microstimulation of area V4 has only a limited influence on visual perception. Copyright © 2015 the American Physiological Society.
The Role of Temporal Disparity on Audiovisual Integration in Low-Vision Individuals.
Targher, Stefano; Micciolo, Rocco; Occelli, Valeria; Zampini, Massimiliano
2017-12-01
Recent findings have shown that sounds improve visual detection in low vision individuals when pairs of audiovisual stimuli are presented simultaneously and from the same spatial position. The present study aims to investigate the temporal aspects of the audiovisual enhancement effect previously reported. Low vision participants were asked to detect the presence of a visual stimulus (yes/no task) presented either alone or together with an auditory stimulus at different stimulus onset asynchronies (SOAs). In the first experiment, the sound was presented either simultaneously or before the visual stimulus (i.e., SOAs 0, 100, 250, 400 ms). The results show that the presence of a task-irrelevant auditory stimulus produced a significant visual detection enhancement in all the conditions. In the second experiment, the sound was either synchronized with, or randomly preceded/lagged behind the visual stimulus (i.e., SOAs 0, ± 250, ± 400 ms). The visual detection enhancement was reduced in magnitude and limited only to the synchronous condition and to the condition in which the sound stimulus was presented 250 ms before the visual stimulus. Taken together, the evidence of the present study seems to suggest that audiovisual interaction in low vision individuals is highly modulated by top-down mechanisms.
Schneider, Werner X.
2013-01-01
The goal of this review is to introduce a theory of task-driven visual attention and working memory (TRAM). Based on a specific biased competition model, the ‘theory of visual attention’ (TVA) and its neural interpretation (NTVA), TRAM introduces the following assumptions. First, selective visual processing over time is structured in competition episodes. Within an episode, that is, during its first two phases, a limited number of proto-objects are competitively encoded—modulated by the current task—in activation-based visual working memory (VWM). In processing phase 3, relevant VWM objects are transferred via a short-term consolidation into passive VWM. Second, each time attentional priorities change (e.g. after an eye movement), a new competition episode is initiated. Third, if a phase 3 VWM process (e.g. short-term consolidation) is not finished when a new episode is called, a protective maintenance process allows its completion. After a VWM object change, its protective maintenance process is followed by an encapsulation of the VWM object, causing attentional resource costs in trailing competition episodes. Viewed from this perspective, a new explanation of key findings of the attentional blink will be offered. Finally, a new suggestion will be made as to how VWM items might interact with visual search processes. PMID:24018722
Interactions of attention, emotion and motivation.
Raymond, Jane
2009-01-01
Although successful visually guided action begins with sensory processes and ends with motor control, the intervening processes related to the appropriate selection of information for processing are especially critical because of the brain's limited capacity to handle information. Three important mechanisms--attention, emotion and motivation--contribute to the prioritization and selection of information. In this chapter, the interplay between these systems is discussed with emphasis placed on interactions between attention (or immediate task relevance of stimuli) and emotion (or affective evaluation of stimuli), and between attention and motivation (or the predicted value of stimuli). Although numerous studies have shown that emotional stimuli modulate mechanisms of selective attention in humans, little work has been directed at exploring whether such interactions can be reciprocal, that is, whether attention can influence emotional response. Recent work on this question (showing that distracting information is typically devalued upon later encounters) is reviewed in the first half of the chapter. The second half reviews some recent experiments exploring how prior value-prediction learning (i.e., learning to associate potential outcomes, good or bad, with specific stimuli) plays a role in visual selection and conscious perception. The results indicate that some aspects of motivation act on selection independently of traditionally defined attention and other aspects interact with it.
Rational Tuning of Visual Cycle Modulator Pharmacodynamics
Kiser, Philip D.; Zhang, Jianye; Badiee, Mohsen; Kinoshita, Junzo; Peachey, Neal S.; Tochtrop, Gregory P.
2017-01-01
Modulators of the visual cycle have been developed for treatment of various retinal disorders. These agents were designed to inhibit retinoid isomerase [retinal pigment epithelium-specific 65 kDa protein (RPE65)], the rate-limiting enzyme of the visual cycle, based on the idea that attenuation of visual pigment regeneration could reduce formation of toxic retinal conjugates. Of these agents, certain ones that contain primary amine groups can also reversibly form retinaldehyde Schiff base adducts, which contributes to their retinal protective activity. Direct inhibition of RPE65 as a therapeutic strategy is complicated by adverse effects resulting from slowed chromophore regeneration, whereas effective retinal sequestration can require high drug doses with potential off-target effects. We hypothesized that the RPE65-emixustat crystal structure could help guide the design of retinaldehyde-sequestering agents with varying degrees of RPE65 inhibitory activity. We found that addition of an isopropyl group to the central phenyl ring of emixustat and related compounds resulted in agents effectively lacking in vitro retinoid isomerase inhibitory activity, whereas substitution of the terminal 6-membered ring with branched moieties capable of stronger RPE65 interaction potentiated inhibition. The isopropyl derivative series produced discernible visual cycle suppression in vivo, albeit much less potently than compounds with a high affinity for the RPE65 active site. These agents were distributed into the retina and formed Schiff base adducts with retinaldehyde. Except for one compound [3-amino-1-(3-isopropyl-5-((2,6,6-trimethylcyclohex-1-en-1-yl)methoxy)phenyl)propan-1-ol (MB-007)], these agents conferred protection against retinal phototoxicity, suggesting that both direct RPE65 inhibition and retinal sequestration are mechanisms of potential therapeutic relevance. PMID:28476927
Brain signal complexity rises with repetition suppression in visual learning.
Lafontaine, Marc Philippe; Lacourse, Karine; Lina, Jean-Marc; McIntosh, Anthony R; Gosselin, Frédéric; Théoret, Hugo; Lippé, Sarah
2016-06-21
Neuronal activity associated with visual processing of an unfamiliar face gradually diminishes when it is viewed repeatedly. This process, known as repetition suppression (RS), is involved in the acquisition of familiarity. Current models suggest that RS results from interactions between visual information processing areas located in the occipito-temporal cortex and higher order areas, such as the dorsolateral prefrontal cortex (DLPFC). Brain signal complexity, which reflects information dynamics of cortical networks, has been shown to increase as unfamiliar faces become familiar. However, the complementarity of RS and increases in brain signal complexity have yet to be demonstrated within the same measurements. We hypothesized that RS and brain signal complexity increase occur simultaneously during learning of unfamiliar faces. Further, we expected alteration of DLPFC function by transcranial direct current stimulation (tDCS) to modulate RS and brain signal complexity over the occipito-temporal cortex. Participants underwent three tDCS conditions in random order: right anodal/left cathodal, right cathodal/left anodal and sham. Following tDCS, participants learned unfamiliar faces, while an electroencephalogram (EEG) was recorded. Results revealed RS over occipito-temporal electrode sites during learning, reflected by a decrease in signal energy, a measure of amplitude. Simultaneously, as signal energy decreased, brain signal complexity, as estimated with multiscale entropy (MSE), increased. In addition, prefrontal tDCS modulated brain signal complexity over the right occipito-temporal cortex during the first presentation of faces. These results suggest that although RS may reflect a brain mechanism essential to learning, complementary processes reflected by increases in brain signal complexity, may be instrumental in the acquisition of novel visual information. Such processes likely involve long-range coordinated activity between prefrontal and lower order visual areas. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.
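Multiscale entropy (MSE) is computed by coarse-graining the signal at successive scales and taking the sample entropy of each coarse-grained series. The sketch below is a compact, generic implementation with default parameters (m = 2, r = 0.2·SD) on a random toy signal; it is meant to illustrate the measure, not to reproduce the authors' EEG pipeline.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """SampEn(m, r): negative log of the conditional probability that sequences
    matching for m points also match for m+1 points (tolerance r = r_factor * std)."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        return np.sum(d <= r) - len(templates)      # Chebyshev distance, excluding self-matches
    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def multiscale_entropy(x, max_scale=5, m=2):
    """Coarse-grain the signal at each scale, then compute sample entropy."""
    x = np.asarray(x, dtype=float)
    mse = []
    for tau in range(1, max_scale + 1):
        n = len(x) // tau
        coarse = x[:n * tau].reshape(n, tau).mean(axis=1)   # non-overlapping averages
        mse.append(sample_entropy(coarse, m=m))
    return np.array(mse)

# Hypothetical single-channel EEG segment
rng = np.random.default_rng(2)
signal = rng.standard_normal(600)
print(np.round(multiscale_entropy(signal), 3))
```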
Data Fusion and Visualization with the OpenEarth Framework (OEF)
NASA Astrophysics Data System (ADS)
Nadeau, D. R.; Baru, C.; Fouch, M. J.; Crosby, C. J.
2010-12-01
Data fusion is an increasingly important problem to solve as we strive to integrate data from multiple sources and build better models of the complex processes operating at the Earth’s surface and its interior. These data are often large, multi-dimensional, and subject to differing conventions for file formats, data structures, coordinate spaces, units of measure, and metadata organization. When visualized, these data require differing, and often conflicting, conventions for visual representations, dimensionality, icons, color schemes, labeling, and interaction. These issues make the visualization of fused Earth science data particularly difficult. The OpenEarth Framework (OEF) is an open-source data fusion and visualization suite of software being developed at the Supercomputer Center at the University of California, San Diego. Funded by the NSF, the project is leveraging virtual globe technology from NASA’s WorldWind to create interactive 3D visualization tools that combine layered data from a variety of sources to create a holistic view of features at, above, and beneath the Earth’s surface. The OEF architecture is cross-platform, multi-threaded, modular, and based upon Java. The OEF’s modular approach yields a collection of compatible mix-and-match components for assembling custom applications. Available modules support file format handling, web service communications, data management, data filtering, user interaction, and 3D visualization. File parsers handle a variety of formal and de facto standard file formats. Each one imports data into a general-purpose data representation that supports multidimensional grids, topography, points, lines, polygons, images, and more. From there these data then may be manipulated, merged, filtered, reprojected, and visualized. Visualization features support conventional and new visualization techniques for looking at topography, tomography, maps, and feature geometry. 3D grid data such as seismic tomography may be sliced by multiple oriented cutting planes and isosurfaced to create 3D skins that trace feature boundaries within the data. Topography may be overlaid with satellite imagery along with data such as gravity and magnetics measurements. Multiple data sets may be visualized simultaneously using overlapping layers and a common 3D+time coordinate space. Data management within the OEF handles and hides the quirks of differing file formats, web protocols, storage structures, coordinate spaces, and metadata representations. Derived data are computed automatically to support interaction and visualization while the original data is left unchanged in its original form. Data is cached for better memory and network efficiency, and all visualization is accelerated by 3D graphics hardware found on today’s computers. The OpenEarth Framework project is currently prototyping the software for use in the visualization, and integration of continental scale geophysical data being produced by EarthScope-related research in the Western US. The OEF is providing researchers with new ways to display and interrogate their data and is anticipated to be a valuable tool for future EarthScope-related research.
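One of the operations described above, slicing a 3-D tomography grid with an arbitrarily oriented cutting plane, can be sketched compactly in Python (the OEF itself is Java-based, so this is an illustration of the idea rather than OEF code). The plane is defined by an origin and two in-plane vectors, and the grid is sampled along it by trilinear interpolation; the volume and plane parameters below are hypothetical.

```python
import numpy as np
from scipy import ndimage

def oriented_slice(volume, origin, u, v, size=(64, 64), step=1.0):
    """Sample a 3-D grid along an arbitrarily oriented cutting plane.
    origin: point on the plane (in voxel coordinates); u, v: in-plane direction vectors."""
    u = np.asarray(u, float); u /= np.linalg.norm(u)
    v = np.asarray(v, float); v /= np.linalg.norm(v)
    su = (np.arange(size[0]) - size[0] / 2) * step
    sv = (np.arange(size[1]) - size[1] / 2) * step
    A, B = np.meshgrid(su, sv, indexing="ij")
    pts = (np.asarray(origin, float)[:, None, None]
           + u[:, None, None] * A + v[:, None, None] * B)      # (3, H, W) voxel coordinates
    return ndimage.map_coordinates(volume, pts, order=1, mode="nearest")

# Hypothetical tomography volume: a smoothed random field standing in for velocity anomalies
rng = np.random.default_rng(4)
vol = ndimage.gaussian_filter(rng.standard_normal((80, 80, 80)), sigma=4)
sl = oriented_slice(vol, origin=(40, 40, 40), u=(1, 0, 0), v=(0, 1, 1))
print(sl.shape)   # (64, 64) image ready to be textured onto the cutting plane
```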
Top-down modulation of visual and auditory cortical processing in aging.
Guerreiro, Maria J S; Eck, Judith; Moerel, Michelle; Evers, Elisabeth A T; Van Gerven, Pascal W M
2015-02-01
Age-related cognitive decline has been accounted for by an age-related deficit in top-down attentional modulation of sensory cortical processing. In light of recent behavioral findings showing that age-related differences in selective attention are modality dependent, our goal was to investigate the role of sensory modality in age-related differences in top-down modulation of sensory cortical processing. This question was addressed by testing younger and older individuals in several memory tasks while undergoing fMRI. Throughout these tasks, perceptual features were kept constant while attentional instructions were varied, allowing us to devise all combinations of relevant and irrelevant, visual and auditory information. We found no top-down modulation of auditory sensory cortical processing in either age group. In contrast, we found top-down modulation of visual cortical processing in both age groups, and this effect did not differ between age groups. That is, older adults enhanced cortical processing of relevant visual information and suppressed cortical processing of visual distractors during auditory attention to the same extent as younger adults. The present results indicate that older adults are capable of suppressing irrelevant visual information in the context of cross-modal auditory attention, and thereby challenge the view that age-related attentional and cognitive decline is due to a general deficit in the ability to suppress irrelevant information. Copyright © 2014 Elsevier B.V. All rights reserved.
Degraded attentional modulation of cortical neural populations in strabismic amblyopia
Hou, Chuan; Kim, Yee-Joon; Lai, Xin Jie; Verghese, Preeti
2016-01-01
Behavioral studies have reported reduced spatial attention in amblyopia, a developmental disorder of spatial vision. However, the neural populations in the visual cortex linked with these behavioral spatial attention deficits have not been identified. Here, we use functional MRI–informed electroencephalography source imaging to measure the effect of attention on neural population activity in the visual cortex of human adult strabismic amblyopes who were stereoblind. We show that compared with controls, the modulatory effects of selective visual attention on the input from the amblyopic eye are substantially reduced in the primary visual cortex (V1) as well as in extrastriate visual areas hV4 and hMT+. Degraded attentional modulation is also found in the normal-acuity fellow eye in areas hV4 and hMT+ but not in V1. These results provide electrophysiological evidence that abnormal binocular input during a developmental critical period may impact cortical connections between the visual cortex and higher level cortices beyond the known amblyopic losses in V1 and V2, suggesting that a deficit of attentional modulation in the visual cortex is an important component of the functional impairment in amblyopia. Furthermore, we find that degraded attentional modulation in V1 is correlated with the magnitude of interocular suppression and the depth of amblyopia. These results support the view that the visual suppression often seen in strabismic amblyopia might be a form of attentional neglect of the visual input to the amblyopic eye. PMID:26885628
Relation of visual creative imagery manipulation to resting-state brain oscillations.
Cai, Yuxuan; Zhang, Delong; Liang, Bishan; Wang, Zengjian; Li, Junchao; Gao, Zhenni; Gao, Mengxia; Chang, Song; Jiao, Bingqing; Huang, Ruiwang; Liu, Ming
2018-02-01
Visual creative imagery (VCI) manipulation is the key component of visual creativity; however, it remains largely unclear how it occurs in the brain. The present study investigated the brain neural response to VCI manipulation and its relation to intrinsic brain activity. We collected functional magnetic resonance imaging (fMRI) datasets related to a VCI task and a control task as well as pre- and post-task resting states in sequential sessions. A general linear model (GLM) was subsequently used to assess the specific activation of the VCI task compared with the control task. The changes in brain oscillation amplitudes across the pre-, on-, and post-task states were measured to investigate the modulation of the VCI task. Furthermore, we applied a Granger causal analysis (GCA) to demonstrate the dynamic neural interactions that underlie the modulation effect. We determined that the VCI task specifically activated the left inferior frontal gyrus pars triangularis (IFGtriang) and the right superior frontal gyrus (SFG), as well as the temporoparietal areas, including the left inferior temporal gyrus, right precuneus, and bilateral superior parietal gyrus. Furthermore, the VCI task modulated the intrinsic brain activity of the right IFGtriang (0.01-0.08 Hz) and the left caudate nucleus (0.2-0.25 Hz). Importantly, an inhibitory effect (negative) may exist from the left SFG to the right IFGtriang in the on-VCI task state, in the frequency of 0.01-0.08 Hz, whereas this effect shifted to an excitatory effect (positive) in the subsequent post-task resting state. Taken together, the present findings provide experimental evidence for the existence of a common mechanism that governs the brain activity of many regions at resting state and whose neural activity may engage during the VCI manipulation task, which may facilitate an understanding of the neural substrate of visual creativity.
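Granger causal analysis asks whether the past of one region's time series improves prediction of another's beyond that region's own past. The sketch below implements a minimal pairwise Granger F-test with ordinary least squares on two toy ROI time series; the variable names (sfg, ifg), the model order, and the simulated lagged coupling are hypothetical and do not reproduce the authors' fMRI pipeline.

```python
import numpy as np
from scipy import stats

def granger_f_test(x, y, order=2):
    """Does the history of y improve an AR prediction of x? Returns (F, p)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    # Build lagged design matrices (lags 1..order), aligned with the prediction target
    X_past = np.column_stack([x[order - k - 1:n - k - 1] for k in range(order)])
    Y_past = np.column_stack([y[order - k - 1:n - k - 1] for k in range(order)])
    target = x[order:]
    ones = np.ones((len(target), 1))
    restricted = np.column_stack([ones, X_past])             # x's own past only
    full = np.column_stack([ones, X_past, Y_past])           # plus y's past
    rss_r = np.sum((target - restricted @ np.linalg.lstsq(restricted, target, rcond=None)[0]) ** 2)
    rss_f = np.sum((target - full @ np.linalg.lstsq(full, target, rcond=None)[0]) ** 2)
    df1, df2 = order, len(target) - full.shape[1]
    F = ((rss_r - rss_f) / df1) / (rss_f / df2)
    return F, 1.0 - stats.f.cdf(F, df1, df2)

# Hypothetical ROI time series: sfg drives ifg with a one-sample lag
rng = np.random.default_rng(3)
sfg = rng.standard_normal(300)
ifg = np.roll(sfg, 1) * -0.6 + 0.4 * rng.standard_normal(300)  # negative (inhibitory-like) influence
print(granger_f_test(ifg, sfg, order=2))
```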
How visual short-term memory maintenance modulates subsequent visual aftereffects.
Saad, Elyana; Silvanto, Juha
2013-05-01
Prolonged viewing of a visual stimulus can result in sensory adaptation, giving rise to perceptual phenomena such as the tilt aftereffect (TAE). However, it is not known if short-term memory maintenance induces such effects. We examined how visual short-term memory (VSTM) maintenance modulates the strength of the TAE induced by subsequent visual adaptation. We reasoned that if VSTM maintenance induces aftereffects on subsequent encoding of visual information, then it should either enhance or reduce the TAE induced by a subsequent visual adapter, depending on the congruency of the memory cue and the adapter. Our results were consistent with this hypothesis and thus indicate that the effects of VSTM maintenance can outlast the maintenance period.
Bublatzky, Florian; Gerdes, Antje B. M.; White, Andrew J.; Riemer, Martin; Alpers, Georg W.
2014-01-01
Human face perception is modulated by both emotional valence and social relevance, but their interaction has rarely been examined. Event-related brain potentials (ERP) to happy, neutral, and angry facial expressions with different degrees of social relevance were recorded. To implement a social anticipation task, relevance was manipulated by presenting faces of two specific actors as future interaction partners (socially relevant), whereas two other face actors remained non-relevant. In a further control task all stimuli were presented without specific relevance instructions (passive viewing). Face stimuli of four actors (2 women, from the KDEF) were randomly presented for 1s to 26 participants (16 female). Results showed an augmented N170, early posterior negativity (EPN), and late positive potential (LPP) for emotional in contrast to neutral facial expressions. Of particular interest, face processing varied as a function of experimental tasks. Whereas task effects were observed for P1 and EPN regardless of instructed relevance, LPP amplitudes were modulated by emotional facial expression and relevance manipulation. The LPP was specifically enhanced for happy facial expressions of the anticipated future interaction partners. This underscores that social relevance can impact face processing already at an early stage of visual processing. These findings are discussed within the framework of motivated attention and face processing theories. PMID:25076881
Integration of biological networks and gene expression data using Cytoscape
Cline, Melissa S; Smoot, Michael; Cerami, Ethan; Kuchinsky, Allan; Landys, Nerius; Workman, Chris; Christmas, Rowan; Avila-Campilo, Iliana; Creech, Michael; Gross, Benjamin; Hanspers, Kristina; Isserlin, Ruth; Kelley, Ryan; Killcoyne, Sarah; Lotia, Samad; Maere, Steven; Morris, John; Ono, Keiichiro; Pavlovic, Vuk; Pico, Alexander R; Vailaya, Aditya; Wang, Peng-Liang; Adler, Annette; Conklin, Bruce R; Hood, Leroy; Kuiper, Martin; Sander, Chris; Schmulevich, Ilya; Schwikowski, Benno; Warner, Guy J; Ideker, Trey; Bader, Gary D
2013-01-01
Cytoscape is a free software package for visualizing, modeling and analyzing molecular and genetic interaction networks. This protocol explains how to use Cytoscape to analyze the results of mRNA expression profiling, and other functional genomics and proteomics experiments, in the context of an interaction network obtained for genes of interest. Five major steps are described: (i) obtaining a gene or protein network, (ii) displaying the network using layout algorithms, (iii) integrating with gene expression and other functional attributes, (iv) identifying putative complexes and functional modules and (v) identifying enriched Gene Ontology annotations in the network. These steps provide a broad sample of the types of analyses performed by Cytoscape. PMID:17947979
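The protocol above is carried out interactively in Cytoscape itself; purely as an illustration of steps (i) to (iv), the following minimal Python/networkx sketch builds a toy interaction network, attaches placeholder expression values, and extracts candidate modules by community detection (the gene names and fold-changes are made up, and the GO-enrichment step (v) is omitted).

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# (i) Toy protein-protein interaction network (in Cytoscape this would be imported
# from a database such as BioGRID or from a SIF/XGMML file).
edges = [("GeneA", "GeneB"), ("GeneB", "GeneC"), ("GeneA", "GeneC"),
         ("GeneD", "GeneE"), ("GeneE", "GeneF"), ("GeneD", "GeneF"),
         ("GeneC", "GeneD")]
g = nx.Graph(edges)

# (iii) Integrate expression data as node attributes (placeholder log2 fold-changes).
fold_change = {"GeneA": 2.1, "GeneB": 1.8, "GeneC": 2.4,
               "GeneD": -1.5, "GeneE": -2.0, "GeneF": -1.7}
nx.set_node_attributes(g, fold_change, name="log2_fold_change")

# (iv) Identify putative modules/complexes by community detection
# (Cytoscape plugins such as MCODE serve this role in the actual protocol).
for i, module in enumerate(greedy_modularity_communities(g), start=1):
    mean_fc = sum(fold_change[n] for n in module) / len(module)
    print(f"module {i}: {sorted(module)}, mean log2FC = {mean_fc:.2f}")
```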
Air-coupled laser vibrometry: analysis and applications.
Solodov, Igor; Döring, Daniel; Busse, Gerd
2009-03-01
Acousto-optic interaction between a narrow laser beam and acoustic waves in air is analyzed theoretically. The photoelastic relation in air is used to derive the phase modulation of laser light in air-coupled reflection vibrometry induced by the angular spatial spectral components comprising the acoustic beam. Maximum interaction was found for the zero spatial acoustic component propagating normal to the laser beam. The angular dependence of the imaging efficiency is determined for the axial and nonaxial acoustic components with regard to the laser beam steering in the scanning mode. The sensitivity of air-coupled vibrometry is compared with conventional "Doppler" reflection vibrometry. Applications of the methodology for visualization of linear and nonlinear air-coupled fields are demonstrated.
Attention modulates perception of visual space
Zhou, Liu; Deng, Chenglong; Ooi, Teng Leng; He, Zijiang J.
2017-01-01
Attention readily facilitates the detection and discrimination of objects, but it is not known whether it helps to form the vast volume of visual space that contains the objects and where actions are implemented. Conventional wisdom suggests not, given the effortless ease with which we perceive three-dimensional (3D) scenes on opening our eyes. Here, we show evidence to the contrary. In Experiment 1, the observer judged the location of a briefly presented target, placed either on the textured ground or ceiling surface. Judged location was more accurate for a target on the ground, provided that the ground was visible and that the observer directed attention to the lower visual field, not the upper field. This reveals that attention facilitates space perception with reference to the ground. Experiment 2 showed that judged location of a target in mid-air, with both ground and ceiling surfaces present, was more accurate when the observer directed their attention to the lower visual field; this indicates that the attention effect extends to visual space above the ground. These findings underscore the role of attention in anchoring visual orientation in space, which is arguably a primal event that enhances one’s ability to interact with objects and surface layouts within the visual space. The fact that the effect of attention was contingent on the ground being visible suggests that our terrestrial visual system is best served by its ecological niche. PMID:29177198
Innes-Brown, Hamish; Barutchu, Ayla; Crewther, David P.
2013-01-01
The effect of multi-modal vs uni-modal prior stimuli on the subsequent processing of a simple flash stimulus was studied in the context of the audio-visual ‘flash-beep’ illusion, in which the number of flashes a person sees is influenced by accompanying beep stimuli. EEG recordings were made while combinations of simple visual and audio-visual stimuli were presented. The experiments found that the electric field strength related to a flash stimulus was stronger when it was preceded by a multi-modal flash/beep stimulus, compared to when it was preceded by another uni-modal flash stimulus. This difference was found to be significant in two distinct timeframes – an early timeframe, from 130–160 ms, and a late timeframe, from 300–320 ms. Source localisation analysis found that the increased activity in the early interval was localised to an area centred on the inferior and superior parietal lobes, whereas the later increase was associated with stronger activity in an area centred on primary and secondary visual cortex, in the occipital lobe. The results suggest that processing of a visual stimulus can be affected by the presence of an immediately prior multisensory event. Relatively long-lasting interactions generated by the initial auditory and visual stimuli altered the processing of a subsequent visual stimulus. PMID:24391939
Odours reduce the magnitude of object substitution masking for matching visual targets in females.
Robinson, Amanda K; Laning, Julia; Reinhard, Judith; Mattingley, Jason B
2016-08-01
Recent evidence suggests that olfactory stimuli can influence early stages of visual processing, but there has been little focus on whether such olfactory-visual interactions convey an advantage in visual object identification. Moreover, despite evidence that some aspects of olfactory perception are superior in females than males, no study to date has examined whether olfactory influences on vision are gender-dependent. We asked whether inhalation of familiar odorants can modulate participants' ability to identify briefly flashed images of matching visual objects under conditions of object substitution masking (OSM). Across two experiments, we had male and female participants (N = 36 in each group) identify masked visual images of odour-related objects (e.g., orange, rose, mint) amongst nonodour-related distracters (e.g., box, watch). In each trial, participants inhaled a single odour that either matched or mismatched the masked, odour-related target. Target detection performance was analysed using a signal detection (d') approach. In females, but not males, matching odours significantly reduced OSM relative to mismatching odours, suggesting that familiar odours can enhance the salience of briefly presented visual objects. We conclude that olfactory cues exert a subtle influence on visual processes by transiently enhancing the salience of matching object representations. The results add to a growing body of literature that points towards consistent gender differences in olfactory perception.
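For reference, the signal-detection measure used here, d', is the difference between the z-transformed hit and false-alarm rates; a minimal sketch with made-up rates:

```python
from scipy.stats import norm

def d_prime(hit_rate, fa_rate, eps=1e-3):
    """d' = z(hit rate) - z(false-alarm rate); rates are clipped away from 0 and 1."""
    hit_rate = min(max(hit_rate, eps), 1 - eps)
    fa_rate = min(max(fa_rate, eps), 1 - eps)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical example: matching-odour vs mismatching-odour trials.
print(d_prime(0.82, 0.25))  # higher d' -> weaker masking (better target detection)
print(d_prime(0.70, 0.25))
```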
Lee, Taein; Cheng, Chun-Huai; Ficklin, Stephen; Yu, Jing; Humann, Jodi; Main, Dorrie
2017-01-01
Tripal is an open-source database platform primarily used for the development of genomic, genetic and breeding databases. We report here on the release of the Chado Loader, Chado Data Display and Chado Search modules to extend the functionality of the core Tripal modules. These new extension modules provide additional tools for (1) data loading, (2) customized visualization and (3) advanced search functions for supported data types such as organism, marker, QTL/Mendelian Trait Loci, germplasm, map, project, phenotype, genotype and their respective metadata. The Chado Loader module provides data collection templates in Excel with defined metadata and data loaders with front-end forms. The Chado Data Display module contains tools to visualize each data type and its metadata, which can be used as is or customized as desired. The Chado Search module provides search and download functionality for the supported data types. Also included are tools to visualize map and species summaries. The use of materialized views in the Chado Search module enables better performance as well as flexibility of data modeling in Chado, allowing existing Tripal databases with different metadata types to utilize the module. These Tripal Extension modules are implemented in the Genome Database for Rosaceae (rosaceae.org), CottonGen (cottongen.org), Citrus Genome Database (citrusgenomedb.org), Genome Database for Vaccinium (vaccinium.org) and the Cool Season Food Legume Database (coolseasonfoodlegume.org). Database URL: https://www.citrusgenomedb.org/, https://www.coolseasonfoodlegume.org/, https://www.cottongen.org/, https://www.rosaceae.org/, https://www.vaccinium.org/
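The performance gain from materialized views comes from querying a single flattened table instead of joining many normalized Chado tables. The sketch below illustrates the idea with psycopg2 against a hypothetical view named chado.tripal_mview_marker; the connection details, view name, and columns are illustrative assumptions, not the module's actual schema.

```python
import psycopg2

# Connection parameters are placeholders for a local Chado/PostgreSQL instance.
conn = psycopg2.connect(dbname="chado", user="tripal", password="secret", host="localhost")

# Hypothetical materialized view flattening marker + organism + map metadata;
# querying it avoids repeated multi-table joins over the normalized Chado schema.
query = """
    SELECT marker_name, organism, linkage_group, position
    FROM chado.tripal_mview_marker
    WHERE organism = %s AND linkage_group = %s
    ORDER BY position
"""
with conn, conn.cursor() as cur:
    cur.execute(query, ("Fragaria x ananassa", "LG3"))
    for marker_name, organism, linkage_group, position in cur.fetchall():
        print(marker_name, linkage_group, position)
conn.close()
```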
Prefrontal cortex modulates posterior alpha oscillations during top-down guided visual perception
Helfrich, Randolph F.; Huang, Melody; Wilson, Guy; Knight, Robert T.
2017-01-01
Conscious visual perception is proposed to arise from the selective synchronization of functionally specialized but widely distributed cortical areas. It has been suggested that different frequency bands index distinct canonical computations. Here, we probed visual perception on a fine-grained temporal scale to study the oscillatory dynamics supporting prefrontal-dependent sensory processing. We tested whether a predictive context that was embedded in a rapid visual stream modulated the perception of a subsequent near-threshold target. The rapid stream was presented either rhythmically at 10 Hz, to entrain parietooccipital alpha oscillations, or arrhythmically. We identified a 2- to 4-Hz delta signature that modulated posterior alpha activity and behavior during predictive trials. Importantly, delta-mediated top-down control diminished the behavioral effects of bottom-up alpha entrainment. Simultaneous source-reconstructed EEG and cross-frequency directionality analyses revealed that this delta activity originated from prefrontal areas and modulated posterior alpha power. Taken together, this study presents converging behavioral and electrophysiological evidence for frontal delta-mediated top-down control of posterior alpha activity, selectively facilitating visual perception. PMID:28808023
Liscum, E; Stowe-Evans, E L
2000-09-01
Phototropism is the process by which plants reorient growth of various organs, most notably stems, in response to lateral differences in light quantity and/or quality. The ubiquitous nature of the phototropic response in the plant kingdom implies that it provides some adaptive evolutionary advantage. Upon visual inspection it is tempting to surmise that phototropic curvatures result from a relatively simple growth response to a directional stimulus. However, detailed photophysiological, and more recently genetic and molecular, studies have demonstrated that phototropism is in fact regulated by complex interactions among several photosensory systems. At least two receptors, phototropin and a presently unidentified receptor, appear to mediate the primary photoreception of directional blue light cues in dark-grown plants. PhyB may also function as a primary receptor to detect lateral increases in far-red light in neighbor-avoidance responses of light-grown plants. Phytochromes (phyA and phyB at a minimum) also appear to function as secondary receptors to regulate adaptation processes that ultimately modulate the magnitude of curvature induced by primary photoreception. As a result of the interactions of these multiple photosensory systems, plants are able to maximize the adaptive advantage of the phototropic response in ever-changing light environments.
Rouger, Vincent; Bordet, Guillaume; Couillault, Carole; Monneret, Serge; Mailfert, Sébastien; Ewbank, Jonathan J; Pujol, Nathalie; Marguet, Didier
2014-05-20
To investigate the early stages of cell-cell interactions occurring between living biological samples, imaging methods with appropriate spatiotemporal resolution are required. Among the techniques currently available, those based on optical trapping are promising. Methods to image trapped objects, however, in general suffer from a lack of three-dimensional resolution, due to technical constraints. Here, we have developed an original setup comprising two independent modules: holographic optical tweezers, which offer a versatile and precise way to move multiple objects simultaneously but independently, and a confocal microscope that provides fast three-dimensional image acquisition. The optical decoupling of these two modules through the same objective gives users the possibility to easily investigate very early steps in biological interactions. We illustrate the potential of this setup with an analysis of infection by the fungus Drechmeria coniospora of different developmental stages of Caenorhabditis elegans. This has allowed us to identify specific areas on the nematode's surface where fungal spores adhere preferentially. We also quantified this adhesion process for different mutant nematode strains, and thereby derive insights into the host factors that mediate fungal spore adhesion. Copyright © 2014 Biophysical Society. Published by Elsevier Inc. All rights reserved.
Bender, Stephan; Rellum, Thomas; Freitag, Christine; Resch, Franz; Rietschel, Marcella; Treutlein, Jens; Jennen-Steinmetz, Christine; Brandeis, Daniel; Banaschewski, Tobias; Laucht, Manfred
2012-01-01
Background Dopamine plays an important role in orienting and the regulation of selective attention to relevant stimulus characteristics. Thus, we examined the influences of functional variants related to dopamine inactivation in the dopamine transporter (DAT1) and catechol-O-methyltransferase genes (COMT) on the time-course of visual processing in a contingent negative variation (CNV) task. Methods 64-channel EEG recordings were obtained from 195 healthy adolescents of a community-based sample during a continuous performance task (A-X version). Early and late CNV as well as preceding visual evoked potential components were assessed. Results Significant additive main effects of DAT1 and COMT on the occipito-temporal early CNV were observed. In addition, there was a trend towards an interaction between the two polymorphisms. Source analysis showed early CNV generators in the ventral visual stream and in frontal regions. There was a strong negative correlation between occipito-temporal visual post-processing and the frontal early CNV component. The early CNV time interval 500–1000 ms after the visual cue was specifically affected while the preceding visual perception stages were not influenced. Conclusions Late visual potentials allow the genomic imaging of dopamine inactivation effects on visual post-processing. The same specific time-interval has been found to be affected by DAT1 and COMT during motor post-processing but not motor preparation. We propose the hypothesis that similar dopaminergic mechanisms modulate working memory encoding in both the visual and motor and perhaps other systems. PMID:22844499
Kraehenmann, Rainer; Schmidt, André; Friston, Karl; Preller, Katrin H; Seifritz, Erich; Vollenweider, Franz X
2016-01-01
Stimulation of serotonergic neurotransmission by psilocybin has been shown to shift emotional biases away from negative towards positive stimuli. We have recently shown that reduced amygdala activity during threat processing might underlie psilocybin's effect on emotional processing. However, it is still not known whether psilocybin modulates bottom-up or top-down connectivity within the visual-limbic-prefrontal network underlying threat processing. We therefore analyzed our previous fMRI data using dynamic causal modeling and used Bayesian model selection to infer how psilocybin modulated effective connectivity within the visual-limbic-prefrontal network during threat processing. First, both placebo and psilocybin data were best explained by a model in which threat affect modulated bidirectional connections between the primary visual cortex, amygdala, and lateral prefrontal cortex. Second, psilocybin decreased the threat-induced modulation of top-down connectivity from the amygdala to primary visual cortex, speaking to a neural mechanism that might underlie putative shifts towards positive affect states after psilocybin administration. These findings may have important implications for the treatment of mood and anxiety disorders.
Asymmetric top-down modulation of ascending visual pathways in pigeons.
Freund, Nadja; Valencia-Alfonso, Carlos E; Kirsch, Janina; Brodmann, Katja; Manns, Martina; Güntürkün, Onur
2016-03-01
Cerebral asymmetries are a ubiquitous phenomenon evident in many species, incl. humans, and they display some similarities in their organization across vertebrates. In many species the left hemisphere is associated with the ability to categorize objects based on abstract or experience-based behaviors. Using the asymmetrically organized visual system of pigeons as an animal model, we show that descending forebrain pathways asymmetrically modulate visually evoked responses of single thalamic units. Activity patterns of neurons within the nucleus rotundus, the largest thalamic visual relay structure in birds, were differently modulated by left and right hemispheric descending systems. Thus, visual information ascending towards the left hemisphere was modulated by forebrain top-down systems at thalamic level, while right thalamic units were strikingly less modulated. This asymmetry of top-down control could promote experience-based processes within the left hemisphere, while biasing the right side towards stimulus-bound response patterns. In a subsequent behavioral task we tested the possible functional impact of this asymmetry. Under monocular conditions, pigeons learned to discriminate color pairs, so that each hemisphere was trained on one specific discrimination. Afterwards the animals were presented with stimuli that put the hemispheres in conflict. Response patterns on the conflicting stimuli revealed a clear dominance of the left hemisphere. Transient inactivation of left hemispheric top-down control reduced this dominance while inactivation of right hemispheric top-down control had no effect on response patterns. Functional asymmetries of descending systems that modify visual ascending pathways seem to play an important role in the superiority of the left hemisphere in experience-based visual tasks. Copyright © 2015. Published by Elsevier Ltd.
The Use of Uas for Rapid 3d Mapping in Geomatics Education
NASA Astrophysics Data System (ADS)
Teo, Tee-Ann; Tian-Yuan Shih, Peter; Yu, Sz-Cheng; Tsai, Fuan
2016-06-01
UAS is an advanced technology that supports rapid mapping for disaster response. The aim of this study is to develop educational modules for UAS data processing in rapid 3D mapping. The modules designed for this study focus on UAV data processing using freely available or trial software for educational purposes. The key modules include orientation modelling, 3D point cloud generation, image georeferencing and visualization. The orientation modelling module adopts VisualSFM to determine the projection matrix for each image station. In addition, approximate ground control points are measured from OpenStreetMap for absolute orientation. The second module uses SURE and the orientation files from the previous module for 3D point cloud generation. Then, ground point selection and digital terrain model generation can be achieved with LAStools. The third module stitches individual rectified images into a mosaic image using Microsoft ICE (Image Composite Editor). The last module visualizes and measures the generated dense point clouds in CloudCompare. These comprehensive UAS processing modules allow students to gain the skills to process and deliver UAS photogrammetric products in rapid 3D mapping. Moreover, they can also apply the photogrammetric products for analysis in practice.
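The module chain can also be scripted end to end. The Python sketch below only illustrates how the stages might be orchestrated with subprocess; the executable names, command-line flags, and file paths are placeholder assumptions, since the actual options of VisualSFM, SURE, LAStools, and CloudCompare are version-dependent and not specified above.

```python
import subprocess
from pathlib import Path

# Placeholder paths; real projects would point at the UAS image folder and tool installs.
images = Path("uas_flight_01/images")
work = Path("uas_flight_01/processing")
work.mkdir(parents=True, exist_ok=True)

def run(stage, cmd):
    """Run one pipeline stage and fail loudly if it returns a non-zero exit code."""
    print(f"[{stage}] {' '.join(cmd)}")
    subprocess.run(cmd, check=True)

# 1) Orientation modelling (structure-from-motion) -- hypothetical VisualSFM invocation.
run("orientation", ["VisualSFM", "sfm+pmvs", str(images), str(work / "sparse.nvm")])
# 2) Dense 3D point-cloud generation -- hypothetical SURE invocation.
run("dense matching", ["SURE", "-prj", str(work / "sure_project.prj")])
# 3) Ground filtering / DTM generation with LAStools -- flags shown for illustration only.
run("dtm", ["lasground", "-i", str(work / "dense.las"), "-o", str(work / "ground.las")])
# 4) Visual inspection and measurement of the result in CloudCompare.
run("visualization", ["CloudCompare", str(work / "ground.las")])
```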
de la Rosa, Stephan; Ekramnia, Mina; Bülthoff, Heinrich H.
2016-01-01
The ability to discriminate between different actions is essential for action recognition and social interactions. Surprisingly previous research has often probed action recognition mechanisms with tasks that did not require participants to discriminate between actions, e.g., left-right direction discrimination tasks. It is not known to what degree visual processes in direction discrimination tasks are also involved in the discrimination of actions, e.g., when telling apart a handshake from a high-five. Here, we examined whether action discrimination is influenced by movement direction and whether direction discrimination depends on the type of action. We used an action adaptation paradigm to target action and direction discrimination specific visual processes. In separate conditions participants visually adapted to forward and backward moving handshake and high-five actions. Participants subsequently categorized either the action or the movement direction of an ambiguous action. The results showed that direction discrimination adaptation effects were modulated by the type of action but action discrimination adaptation effects were unaffected by movement direction. These results suggest that action discrimination and direction categorization rely on partly different visual information. We propose that action discrimination tasks should be considered for the exploration of visual action recognition mechanisms. PMID:26941633
Tools and procedures for visualization of proteins and other biomolecules.
Pan, Lurong; Aller, Stephen G
2015-04-01
Protein, peptides, and nucleic acids are biomolecules that drive biological processes in living organisms. An enormous amount of structural data for a large number of these biomolecules has been described with atomic precision in the form of structural "snapshots" that are freely available in public repositories. These snapshots can help explain how the biomolecules function, the nature of interactions between multi-molecular complexes, and even how small-molecule drugs can modulate the biomolecules for clinical benefits. Furthermore, these structural snapshots serve as inputs for sophisticated computer simulations to turn the biomolecules into moving, "breathing" molecular machines for understanding their dynamic properties in real-time computer simulations. In order for the researcher to take advantage of such a wealth of structural data, it is necessary to gain competency in the use of computer molecular visualization tools for exploring the structures and visualizing three-dimensional spatial representations. Here, we present protocols for using two common visualization tools--the Web-based Jmol and the stand-alone PyMOL package--as well as a few examples of other popular tools. Copyright © 2015 John Wiley & Sons, Inc.
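As a concrete taste of the scripting these tools allow, here is a minimal sketch using PyMOL's Python API (assuming a local PyMOL installation; PDB entry 1UBQ is used purely as an example structure):

```python
import pymol
from pymol import cmd

# Launch PyMOL quietly without a GUI so the script can run headless.
pymol.finish_launching(["pymol", "-qc"])

cmd.fetch("1ubq")             # download an example structure from the PDB
cmd.hide("everything")
cmd.show("cartoon")           # three-dimensional fold representation
cmd.color("skyblue", "ss H")  # colour helices
cmd.color("salmon", "ss S")   # colour beta strands
cmd.png("1ubq_cartoon.png", width=1200, height=900, dpi=300, ray=1)
```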
Humphreys, Glyn W
2016-10-01
The Treisman Bartlett lecture, reported in the Quarterly Journal of Experimental Psychology in 1988, provided a major overview of the feature integration theory of attention. This has continued to be a dominant account of human visual attention to this day. The current paper provides a summary of the work reported in the lecture and an update on critical aspects of the theory as applied to visual object perception. The paper highlights the emergence of findings that pose significant challenges to the theory and which suggest that revisions are required that allow for (a) several rather than a single form of feature integration, (b) some forms of feature integration to operate preattentively, (c) stored knowledge about single objects and interactions between objects to modulate perceptual integration, (d) the application of feature-based inhibition to object files where visual features are specified, which generates feature-based spreading suppression and scene segmentation, and (e) a role for attention in feature confirmation rather than feature integration in visual selection. A feature confirmation account of attention in object perception is outlined.
Running VisIt Software on the Peregrine System | High-Performance Computing
VisIt features a robust remote visualization capability. VisIt can be started on a local machine and used to visualize data on a remote compute cluster. The remote machine must be able to send [...]; the VisIt module must be loaded as part of this process. To enable remote visualization the 'module load
Stajdohar, Miha; Rosengarten, Rafael D; Kokosar, Janez; Jeran, Luka; Blenkus, Domen; Shaulsky, Gad; Zupan, Blaz
2017-06-02
Dictyostelium discoideum, a soil-dwelling social amoeba, is a model for the study of numerous biological processes. Research in the field has benefited mightily from the adoption of next-generation sequencing for genomics and transcriptomics. Dictyostelium biologists now face the widespread challenges of analyzing and exploring high-dimensional data sets to generate hypotheses and discover novel insights. We present dictyExpress (2.0), a web application designed for exploratory analysis of gene expression data, as well as data from related experiments such as chromatin immunoprecipitation sequencing (ChIP-Seq). The application features visualization modules that include time course expression profiles, clustering, gene ontology enrichment analysis, differential expression analysis and comparison of experiments. All visualizations are interactive and interconnected, such that the selection of genes in one module propagates instantly to visualizations in other modules. dictyExpress currently stores the data from over 800 Dictyostelium experiments and is embedded within a general-purpose software framework for the management of next-generation sequencing data. dictyExpress allows users to explore their data in a broader context through reciprocal linking with dictyBase, a repository of Dictyostelium genomic data. In addition, we introduce a companion application called GenBoard, an intuitive graphical user interface for data management and bioinformatics analysis. dictyExpress and GenBoard enable broad adoption of next-generation sequencing-based inquiries by the Dictyostelium research community. Labs without the means to undertake deep sequencing projects can mine the data available to the public. The entire information flow, from raw sequence data to hypothesis testing, can be accomplished in an efficient workspace. The software framework is generalizable and represents a useful approach for any research community. To encourage wider usage, the backend is open source and available for extension and further development by bioinformaticians and data scientists.
On-line applications of numerical models in the Black Sea GIS
NASA Astrophysics Data System (ADS)
Zhuk, E.; Khaliulin, A.; Zodiatis, G.; Nikolaidis, A.; Nikolaidis, M.; Stylianou, Stavros
2017-09-01
The Black Sea Geographical Information System (GIS) is developed on the basis of cutting-edge information technologies and provides automated data processing and visualization on-line. Mapserver is used as the mapping service; the data are stored in a MySQL DBMS; PHP and Python modules are utilized for data access, processing, and exchange. New numerical models can be incorporated into the GIS environment as individual software modules, compiled for the server operating system and providing interaction with the GIS. A common interface allows setting the input parameters; the model then calculates the output data in predefined files and formats. The calculation results are then passed to the GIS for visualization. Initially, a test scenario of integrating a numerical model into the GIS was performed, using software developed to describe two-dimensional tsunami propagation over variable basin depth, based on a linear long surface wave model that is valid for depths greater than 5 m. Furthermore, the well-established 3-D oil spill and trajectory model MEDSLIK (http://www.oceanography.ucy.ac.cy/medslik/) was integrated into the GIS with more advanced GIS functionality and capabilities. MEDSLIK is able to forecast and hindcast the trajectories of oil pollution and floating objects using meteo-ocean data and the state of the oil spill. The MEDSLIK module interface allows a user to enter all the necessary oil spill parameters, i.e. date and time, rate of spill or spill volume, forecasting time, coordinates, oil spill type, currents, wind, and waves, as well as the specification of the output parameters. The entered data are passed on to MEDSLIK; the oil pollution characteristics are then calculated for pre-defined time steps. The results of the forecast or hindcast are then visualized on a map.
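In essence, the described module interface maps user-entered spill parameters onto model input files and hands the output back to the GIS for rendering. The Python sketch below illustrates such a wrapper; the function, script, and file names are entirely hypothetical and do not reflect the actual MEDSLIK coupling.

```python
import json
import subprocess
from pathlib import Path

def run_oil_spill_forecast(params: dict, workdir: Path) -> Path:
    """Write spill parameters, run a (hypothetical) model executable,
    and return the path of the output file handed to the GIS for mapping."""
    workdir.mkdir(parents=True, exist_ok=True)
    # 1) Serialize the parameters collected by the web form (PHP/Python layer).
    (workdir / "spill_params.json").write_text(json.dumps(params, indent=2))
    # 2) Invoke the model as an external module compiled for the server OS.
    subprocess.run(["./medslik_run.sh", str(workdir / "spill_params.json")],
                   cwd=workdir, check=True)
    # 3) The model writes trajectories at predefined time steps; the GIS reads this file.
    return workdir / "oil_trajectory.nc"

params = {
    "start_time": "2017-09-01T06:00:00Z",
    "lat": 44.5, "lon": 34.2,          # spill location in the Black Sea (example)
    "spill_rate_t_per_h": 12.0,
    "oil_type": "medium crude",
    "forecast_hours": 48,
}
output = run_oil_spill_forecast(params, Path("runs/demo_spill"))
print("trajectory file for GIS visualization:", output)
```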
Simard, Isabelle; Luck, David; Mottron, Laurent; Zeffiro, Thomas A.; Soulières, Isabelle
2015-01-01
Different test types lead to different intelligence estimates in autism, as illustrated by the fact that autistic individuals obtain higher scores on the Raven's Progressive Matrices (RSPM) test than they do on the Wechsler IQ, in contrast to relatively similar performance on both tests in non-autistic individuals. However, the cerebral processes underlying these differences are not well understood. This study investigated whether activity in the fluid “reasoning” network, which includes frontal, parietal, temporal and occipital regions, is differently modulated by task complexity in autistic and non-autistic individuals during the RSPM. For this purpose, we used fMRI to study autistic and non-autistic participants solving the 60 RSPM problems, focussing on regions and networks involved in reasoning complexity. As complexity increased, activity in the left superior occipital gyrus and the left middle occipital gyrus increased for autistic participants, whereas non-autistic participants showed increased activity in the left middle frontal gyrus and bilateral precuneus. Using psychophysiological interaction analyses (PPI), we then determined in which regions functional connectivity increased as a function of reasoning complexity. PPI analyses revealed greater connectivity in autistic, compared to non-autistic participants, between the left inferior occipital gyrus and areas in the left superior frontal gyrus, right superior parietal lobe, right middle occipital gyrus and right inferior temporal gyrus. We also observed generally less modulation of the reasoning network as complexity increased in autistic participants. These results suggest that autistic individuals, when confronted with increasing task complexity, rely mainly on visuospatial processes when solving more complex matrices. In addition to the now well-established enhanced activity observed in visual areas in a range of tasks, these results suggest that the enhanced reliance on visual perception has a central role in autistic cognition. PMID:26594629
Nurminen, Lauri; Angelucci, Alessandra
2014-01-01
The responses of neurons in primary visual cortex (V1) to stimulation of their receptive field (RF) are modulated by stimuli in the RF surround. This modulation is suppressive when the stimuli in the RF and surround are of similar orientation, but less suppressive or facilitatory when they are cross-oriented. Similarly, in human vision surround stimuli selectively suppress the perceived contrast of a central stimulus. Although the properties of surround modulation have been thoroughly characterized in many species, cortical areas and sensory modalities, its role in perception remains unknown. Here we argue that surround modulation in V1 consists of multiple components having different spatio-temporal and tuning properties, generated by different neural circuits and serving different visual functions. One component arises from LGN afferents, is fast, untuned for orientation, and spatially restricted to the surround region nearest to the RF (the near-surround); its function is to normalize V1 cell responses to local contrast. Intra-V1 horizontal connections contribute a slower, narrowly orientation-tuned component to near-surround modulation, whose function is to increase the coding efficiency of natural images in a manner that leads to the extraction of object boundaries. The third component is generated by top-down feedback connections to V1, is fast, broadly orientation-tuned, and extends into the far-surround; its function is to enhance the salience of behaviorally relevant visual features. Far- and near-surround modulation, thus, act as parallel mechanisms: the former quickly detects and guides saccades/attention to salient visual scene locations, the latter segments object boundaries in the scene. PMID:25204770
Internal curvature signal and noise in low- and high-level vision
Grabowecky, Marcia; Kim, Yee Joon; Suzuki, Satoru
2011-01-01
How does internal processing contribute to visual pattern perception? By modeling visual search performance, we estimated internal signal and noise relevant to perception of curvature, a basic feature important for encoding of three-dimensional surfaces and objects. We used isolated, sparse, crowded, and face contexts to determine how internal curvature signal and noise depended on image crowding, lateral feature interactions, and level of pattern processing. Observers reported the curvature of a briefly flashed segment, which was presented alone (without lateral interaction) or among multiple straight segments (with lateral interaction). Each segment was presented with no context (engaging low-to-intermediate-level curvature processing), embedded within a face context as the mouth (engaging high-level face processing), or embedded within an inverted-scrambled-face context as a control for crowding. Using a simple, biologically plausible model of curvature perception, we estimated internal curvature signal and noise as the mean and standard deviation, respectively, of the Gaussian-distributed population activity of local curvature-tuned channels that best simulated behavioral curvature responses. Internal noise was increased by crowding but not by face context (irrespective of lateral interactions), suggesting prevention of noise accumulation in high-level pattern processing. In contrast, internal curvature signal was unaffected by crowding but modulated by lateral interactions. Lateral interactions (with straight segments) increased curvature signal when no contextual elements were added, but equivalent interactions reduced curvature signal when each segment was presented within a face. These opposing effects of lateral interactions are consistent with the phenomena of local-feature contrast in low-level processing and global-feature averaging in high-level processing. PMID:21209356
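As a loose numerical illustration of the modelling logic (not the fitting procedure used in the study), the sketch below simulates Gaussian noise in a bank of curvature-tuned channels, decodes a perceived curvature on each trial, and reads out the mean as internal signal and the standard deviation as internal noise; all channel parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

# Preferred curvatures of a bank of local curvature-tuned channels (arbitrary units).
preferred = np.linspace(-1.0, 1.0, 15)
tuning_width = 0.4

def population_response(stimulus_curvature, internal_noise_sd, n_trials=5000):
    """Simulate noisy channel responses and decode perceived curvature per trial."""
    gain = np.exp(-0.5 * ((preferred - stimulus_curvature) / tuning_width) ** 2)
    # Gaussian internal noise added independently to each channel on each trial.
    responses = gain + rng.normal(0.0, internal_noise_sd, size=(n_trials, gain.size))
    responses = np.clip(responses, 0, None)
    # Decode curvature as the response-weighted mean of channel preferences.
    return responses @ preferred / responses.sum(axis=1)

decoded = population_response(stimulus_curvature=0.3, internal_noise_sd=0.2)
print("internal curvature signal (mean):", decoded.mean().round(3))
print("internal curvature noise (SD):  ", decoded.std().round(3))
```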
Muscarinic and nicotinic receptors synergistically modulate working memory and attention in humans.
Ellis, Julia R; Ellis, Kathryn A; Bartholomeusz, Cali F; Harrison, Ben J; Wesnes, Keith A; Erskine, Fiona F; Vitetta, Luis; Nathan, Pradeep J
2006-04-01
Functional abnormalities in muscarinic and nicotinic receptors are associated with a number of disorders including Alzheimer's disease and schizophrenia. While the contribution of muscarinic receptors in modulating cognition is well established in humans, the effects of nicotinic receptors and the interactions and possible synergistic effects between muscarinic and nicotinic receptors have not been well characterized in humans. The current study examined the effects of selective and simultaneous muscarinic and nicotinic receptor antagonism on a range of cognitive processes. The study was a double-blind, placebo-controlled, repeated measures design in which 12 healthy, young volunteers completed cognitive testing under four acute treatment conditions: placebo (P); mecamylamine (15 mg) (M); scopolamine (0.4 mg i.m.) (S); mecamylamine (15 mg)/scopolamine (0.4 mg i.m.) (MS). Muscarinic receptor antagonism with scopolamine resulted in deficits in working memory, declarative memory, sustained visual attention and psychomotor speed. Nicotinic antagonism with mecamylamine had no effect on any of the cognitive processes examined. Simultaneous antagonism of both muscarinic and nicotinic receptors with mecamylamine and scopolamine impaired all cognitive processes impaired by scopolamine and produced greater deficits than either muscarinic or nicotinic blockade alone, particularly on working memory, visual attention and psychomotor speed. These findings suggest that muscarinic and nicotinic receptors may interact functionally to have synergistic effects particularly on working memory and attention and suggests that therapeutic strategies targeting both receptor systems may be useful in improving selective cognitive processes in a number of disorders.
The effects of tDCS upon sustained visual attention are dependent on cognitive load.
Roe, James M; Nesheim, Mathias; Mathiesen, Nina C; Moberget, Torgeir; Alnæs, Dag; Sneve, Markus H
2016-01-08
Transcranial Direct Current Stimulation (tDCS) modulates the excitability of neuronal responses and consequently can affect performance on a variety of cognitive tasks. However, the interaction between cognitive load and the effects of tDCS is currently not well-understood. We recorded the performance accuracy of participants on a bilateral multiple object tracking task while undergoing bilateral stimulation assumed to enhance (anodal) and decrease (cathodal) neuronal excitability. Stimulation was applied to the posterior parietal cortex (PPC), a region inferred to be at the centre of an attentional tracking network that shows load-dependent activation. 34 participants underwent three separate stimulation conditions across three days. Each subject received (1) left cathodal / right anodal PPC tDCS, (2) left anodal / right cathodal PPC tDCS, and (3) sham tDCS. The number of targets-to-be-tracked was also manipulated, giving a low (one target per visual field), medium (two targets per visual field) or high (three targets per visual field) tracking load condition. It was found that tracking performance at high attentional loads was significantly reduced in both stimulation conditions relative to sham, and this was apparent in both visual fields, regardless of the direction of polarity upon the brain's hemispheres. We interpret this as an interaction between cognitive load and tDCS, and suggest that tDCS may degrade attentional performance when cognitive networks become overtaxed and unable to compensate as a result. Systematically varying cognitive load may therefore be a fruitful direction to elucidate the effects of tDCS upon cognitive functions. Copyright © 2015 The Authors. Published by Elsevier Ltd.. All rights reserved.
Extraretinal induced visual sensations during IMRT of the brain.
Wilhelm-Buchstab, Timo; Buchstab, Barbara Myrthe; Leitzen, Christina; Garbe, Stephan; Müdder, Thomas; Oberste-Beulmann, Susanne; Sprinkart, Alois Martin; Simon, Birgit; Nelles, Michael; Block, Wolfgang; Schoroth, Felix; Schild, Hans Heinz; Schüller, Heinrich
2015-01-01
We observed visual sensations (VSs) in patients undergoing intensity modulated radiotherapy (IMRT) of the brain without the beam passing through ocular structures. We analyzed this phenomenon especially with regard to reproducibility and origin. We analyzed ten consecutive patients (aged 41-71 years) with glioblastoma multiforme who received pulsed IMRT (total dose 60 Gy) with helical tomotherapy (TT). A megavolt CT (MVCT) was performed daily before treatment. VSs were reported and recorded using a triggered event recorder. The frequency of VSs was calculated and VSs were correlated with beam direction and couch position. Subjective patient perception was plotted on an 8 x 8 visual field (VF) matrix. The distance from the first beam causing a VS to the orbital roof (OR) was calculated from the DICOM radiation therapy data and MVCT data. During 175 treatment sessions (average 17.5 per patient), 5959 VSs were recorded and analyzed. VSs occurred only during the treatment sessions, not during the MVCTs. Plotting events over time revealed patient-specific patterns. The average cranio-caudal extension of the VS-inducing area was 63.4 mm (range 43.24-92.1 mm). The maximum distance between the first VS and the OR was 56.1 mm, so that direct interaction with the retina is unlikely. Data on subjective visual perception showed that VSs occurred mainly in the upper right and left quadrants of the VF. Within the visual pathways, the highest probability for the origin of VSs was seen in the optic chiasm and the optic tract (22%). There is clear evidence that interaction of photon irradiation with neuronal structures distant from the eye can lead to VSs.
Simultaneous chromatic and luminance human electroretinogram responses
Parry, Neil R A; Murray, Ian J; Panorgias, Athanasios; McKeefry, Declan J; Lee, Barry B; Kremers, Jan
2012-01-01
The parallel processing of information forms an important organisational principle of the primate visual system. Here we describe experiments which use a novel chromatic–achromatic temporal compound stimulus to simultaneously identify colour and luminance specific signals in the human electroretinogram (ERG). Luminance and chromatic components are separated in the stimulus; the luminance modulation has twice the temporal frequency of the chromatic modulation. ERGs were recorded from four trichromatic and two dichromatic subjects (1 deuteranope and 1 protanope). At isoluminance, the fundamental (first harmonic) response was elicited by the chromatic component in the stimulus. The trichromatic ERGs possessed low-pass temporal tuning characteristics, reflecting the activity of parvocellular post-receptoral mechanisms. There was very little first harmonic response in the dichromats’ ERGs. The second harmonic response was elicited by the luminance modulation in the compound stimulus and showed, in all subjects, band-pass temporal tuning characteristic of magnocellular activity. Thus it is possible to concurrently elicit ERG responses from the human retina which reflect processing in both chromatic and luminance pathways. As well as providing a clear demonstration of the parallel nature of chromatic and luminance processing in the human retina, the differences that exist between ERGs from trichromatic and dichromatic subjects point to the existence of interactions between afferent post-receptoral pathways that are in operation from the earliest stages of visual processing. PMID:22586211
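The logic of the compound stimulus, with chromatic modulation at f and luminance modulation at 2f so that the first and second harmonics of the response separate the two pathways, can be illustrated with a short numpy sketch (frequencies, amplitudes, and noise level are arbitrary):

```python
import numpy as np

fs = 1000.0                      # sampling rate, Hz
t = np.arange(0, 2.0, 1 / fs)    # 2 s of signal
f_chrom = 4.0                    # chromatic modulation frequency (Hz)
f_lum = 2 * f_chrom              # luminance modulation at twice the chromatic frequency

# Toy "response": a chromatic component at f and a luminance component at 2f plus noise.
response = (0.8 * np.sin(2 * np.pi * f_chrom * t)
            + 0.5 * np.sin(2 * np.pi * f_lum * t)
            + 0.1 * np.random.default_rng(2).standard_normal(t.size))

spectrum = np.abs(np.fft.rfft(response)) / t.size * 2
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

first_harmonic = spectrum[np.argmin(np.abs(freqs - f_chrom))]   # chromatic pathway
second_harmonic = spectrum[np.argmin(np.abs(freqs - f_lum))]    # luminance pathway
print(f"1st harmonic ({f_chrom:.0f} Hz): {first_harmonic:.2f}")
print(f"2nd harmonic ({f_lum:.0f} Hz): {second_harmonic:.2f}")
```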
Yu, Chunxiu; Sellers, Kristin K; Radtke-Schuller, Susanne; Lu, Jinghao; Xing, Lei; Ghukasyan, Vladimir; Li, Yuhui; Shih, Yen-Yu I; Murrow, Richard; Fröhlich, Flavio
2016-01-01
The role of higher-order thalamic structures in sensory processing remains poorly understood. Here, we used the ferret (Mustela putorius furo) as a novel model species for the study of the lateral posterior (LP)-pulvinar complex and its structural and functional connectivity with area 17 [primary visual cortex (V1)]. We found reciprocal anatomical connections between the lateral part of the LP nucleus of the LP-pulvinar complex (LPl) and V1. In order to investigate the role of this feedback loop between LPl and V1 in shaping network activity, we determined the functional interactions between LPl and the supragranular, granular and infragranular layers of V1 by recording multiunit activity and local field potentials. Coherence was strongest between LPl and the supragranular V1, with the most distinct peaks in the delta and alpha frequency bands. Inter-area interaction measured by spike-phase coupling identified the delta frequency band being dominated by the infragranular V1 and multiple frequency bands that were most pronounced in the supragranular V1. This inter-area coupling was differentially modulated by full-field synthetic and naturalistic visual stimulation. We also found that visual responses in LPl were distinct from those in V1 in terms of their reliability. Together, our data support a model of multiple communication channels between LPl and the layers of V1 that are enabled by oscillations in different frequency bands. This demonstration of anatomical and functional connectivity between LPl and V1 in ferrets provides a roadmap for studying the interaction dynamics during behaviour, and a template for identifying the activity dynamics of other thalamo-cortical feedback loops. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
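A minimal sketch of this kind of LFP coherence analysis, using scipy.signal.coherence on two synthetic signals standing in for LPl and supragranular V1 recordings (sampling rate, band edges, and the shared 10 Hz drive are placeholders):

```python
import numpy as np
from scipy.signal import coherence

fs = 1000.0                     # sampling rate in Hz (placeholder)
t = np.arange(0, 30.0, 1 / fs)  # 30 s of synthetic LFP
rng = np.random.default_rng(3)

# Shared alpha-band (10 Hz) drive plus independent noise in the two "areas".
shared_alpha = np.sin(2 * np.pi * 10 * t)
lfp_lpl = shared_alpha + rng.standard_normal(t.size)
lfp_v1_supra = 0.8 * shared_alpha + rng.standard_normal(t.size)

freqs, cxy = coherence(lfp_lpl, lfp_v1_supra, fs=fs, nperseg=2048)

for lo, hi, label in [(1, 4, "delta"), (8, 12, "alpha")]:
    band = (freqs >= lo) & (freqs <= hi)
    print(f"mean {label} coherence ({lo}-{hi} Hz): {cxy[band].mean():.2f}")
```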
Improving the discrimination of hand motor imagery via virtual reality based visual guidance.
Liang, Shuang; Choi, Kup-Sze; Qin, Jing; Pang, Wai-Man; Wang, Qiong; Heng, Pheng-Ann
2016-08-01
While research on the brain-computer interface (BCI) has been active in recent years, how to get high-quality electrical brain signals to accurately recognize human intentions for reliable communication and interaction is still a challenging task. The evidence has shown that visually guided motor imagery (MI) can modulate sensorimotor electroencephalographic (EEG) rhythms in humans, but how to design and implement efficient visual guidance during MI in order to produce better event-related desynchronization (ERD) patterns is still unclear. The aim of this paper is to investigate the effect of using object-oriented movements in a virtual environment as visual guidance on the modulation of sensorimotor EEG rhythms generated by hand MI. To improve the classification accuracy on MI, we further propose an algorithm to automatically extract subject-specific optimal frequency and time bands for the discrimination of ERD patterns produced by left and right hand MI. The experimental results show that the average classification accuracy of object-directed scenarios is much better than that of non-object-directed scenarios (76.87% vs. 69.66%). The result of the t-test measuring the difference between them is statistically significant (p = 0.0207). When compared to algorithms based on fixed frequency and time bands, contralateral dominant ERD patterns can be enhanced by using the subject-specific optimal frequency and the time bands obtained by our proposed algorithm. These findings have the potential to improve the efficacy and robustness of MI-based BCI applications. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
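The abstract does not spell out the band-selection algorithm, so the sketch below is only a generic illustration of the idea: candidate frequency bands are scored by cross-validated classification of log band power from synthetic left- vs right-hand MI epochs, and the best-scoring band is kept. It uses scipy and scikit-learn and is not the authors' method.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

fs = 250                                   # EEG sampling rate (Hz), placeholder
rng = np.random.default_rng(4)
n_epochs, n_samples = 80, 2 * fs           # 2-s epochs
labels = np.repeat([0, 1], n_epochs // 2)  # 0 = left-hand MI, 1 = right-hand MI

# Synthetic single-channel epochs with a class-dependent 10-12 Hz power difference.
t = np.arange(n_samples) / fs
epochs = rng.standard_normal((n_epochs, n_samples))
epochs[labels == 1] += 0.8 * np.sin(2 * np.pi * 11 * t)

def band_power(x, lo, hi):
    """Log variance of the band-pass filtered signal as a simple ERD-related feature."""
    b, a = butter(4, [lo, hi], btype="band", fs=fs)
    return np.log(np.var(filtfilt(b, a, x, axis=-1), axis=-1, keepdims=True))

candidate_bands = [(4, 8), (8, 12), (12, 16), (16, 24), (24, 32)]
scores = []
for lo, hi in candidate_bands:
    features = band_power(epochs, lo, hi)
    acc = cross_val_score(LinearDiscriminantAnalysis(), features, labels, cv=5).mean()
    scores.append(acc)
    print(f"{lo:>2}-{hi:<2} Hz: CV accuracy = {acc:.2f}")
print("selected subject-specific band:", candidate_bands[int(np.argmax(scores))])
```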
Schettino, Antonio; Keil, Andreas; Porcu, Emanuele; Müller, Matthias M
2016-06-01
The rapid extraction of affective cues from the visual environment is crucial for flexible behavior. Previous studies have reported emotion-dependent amplitude modulations of two event-related potential (ERP) components - the N1 and EPN - reflecting sensory gain control mechanisms in extrastriate visual areas. However, it is unclear whether both components are selective electrophysiological markers of attentional orienting toward emotional material or are also influenced by physical features of the visual stimuli. To address this question, electrical brain activity was recorded from seventeen male participants while viewing original and bright versions of neutral and erotic pictures. Bright neutral scenes were rated as more pleasant compared to their original counterpart, whereas erotic scenes were judged more positively when presented in their original version. Classical and mass univariate ERP analysis showed larger N1 amplitude for original relative to bright erotic pictures, with no differences for original and bright neutral scenes. Conversely, the EPN was only modulated by picture content and not by brightness, substantiating the idea that this component is a unique electrophysiological marker of attention allocation toward emotional material. Complementary topographic analysis revealed the early selective expression of a centro-parietal positivity following the presentation of original erotic scenes only, reflecting the recruitment of neural networks associated with sustained attention and facilitated memory encoding for motivationally relevant material. Overall, these results indicate that neural networks subtending the extraction of emotional information are differentially recruited depending on low-level perceptual features, which ultimately influence affective evaluations. Copyright © 2016 Elsevier Inc. All rights reserved.
Modulation of high-frequency vestibuloocular reflex during visual tracking in humans
NASA Technical Reports Server (NTRS)
Das, V. E.; Leigh, R. J.; Thomas, C. W.; Averbuch-Heller, L.; Zivotofsky, A. Z.; Discenna, A. O.; Dell'Osso, L. F.
1995-01-01
1. Humans may visually track a moving object either when they are stationary or in motion. To investigate visual-vestibular interaction during both conditions, we compared horizontal smooth pursuit (SP) and active combined eye-head tracking (CEHT) of a target moving sinusoidally at 0.4 Hz in four normal subjects while the subjects were either stationary or vibrated in yaw at 2.8 Hz. We also measured the visually enhanced vestibuloocular reflex (VVOR) during vibration in yaw at 2.8 Hz over a peak head velocity range of 5-40 degrees/s. 2. We found that the gain of the VVOR at 2.8 Hz increased in all four subjects as peak head velocity increased (P < 0.001), with minimal phase changes, such that mean retinal image slip was held below 5 degrees/s. However, no corresponding modulation in vestibuloocular reflex gain occurred with increasing peak head velocity during a control condition when subjects were rotated in darkness. 3. During both horizontal SP and CEHT, tracking gains were similar, and the mean slip speed of the target's image on the retina was held below 5.5 degrees/s whether subjects were stationary or being vibrated at 2.8 Hz. During both horizontal SP and CEHT of target motion at 0.4 Hz, while subjects were vibrated in yaw, VVOR gain for the 2.8-Hz head rotations was similar to or higher than that achieved during fixation of a stationary target. This is in contrast to the decrease of VVOR gain that is reported while stationary subjects perform CEHT.(ABSTRACT TRUNCATED AT 250 WORDS).
Visual search, visual streams, and visual architectures.
Green, M
1991-10-01
Most psychological, physiological, and computational models of early vision suggest that retinal information is divided into a parallel set of feature modules. The dominant theories of visual search assume that these modules form a "blackboard" architecture: a set of independent representations that communicate only through a central processor. A review of research shows that blackboard-based theories, such as feature-integration theory, cannot easily explain the existing data. The experimental evidence is more consistent with a "network" architecture, which stresses that: (1) feature modules are directly connected to one another, (2) features and their locations are represented together, (3) feature detection and integration are not distinct processing stages, and (4) no executive control process, such as focal attention, is needed to integrate features. Attention is not a spotlight that synthesizes objects from raw features. Instead, it is better to conceptualize attention as an aperture which masks irrelevant visual information.
Ciaramitaro, Vivian M; Chow, Hiu Mei; Eglington, Luke G
2017-03-01
We used a cross-modal dual task to examine how changing visual-task demands influenced auditory processing, namely auditory thresholds for amplitude- and frequency-modulated sounds. Observers had to attend to two consecutive intervals of sounds and report which interval contained the auditory stimulus that was modulated in amplitude (Experiment 1) or frequency (Experiment 2). During auditory-stimulus presentation, observers simultaneously attended to a rapid sequential visual presentation (two consecutive intervals of streams of visual letters) and had to report which interval contained a particular color (low load, demanding fewer attentional resources) or, in separate blocks of trials, which interval contained more of a target letter (high load, demanding more attentional resources). We hypothesized that if attention is a shared resource across vision and audition, an easier visual task should free up more attentional resources for auditory processing on an unrelated task, hence improving auditory thresholds. Auditory detection thresholds were lower, that is, auditory sensitivity was improved, for both amplitude- and frequency-modulated sounds when observers engaged in a less demanding (compared to a more demanding) visual task. In accord with previous work, our findings suggest that visual-task demands can influence the processing of auditory information on an unrelated concurrent task, providing support for shared attentional resources. More importantly, our results suggest that attending to information in a different modality, cross-modal attention, can influence basic auditory contrast sensitivity functions, highlighting potential similarities between basic mechanisms for visual and auditory attention.
Szécsi, László; Kacsó, Ágota; Zeck, Günther; Hantz, Péter
2017-01-01
Light stimulation with precise and complex spatial and temporal modulation is demanded by a range of research fields such as visual neuroscience, optogenetics, ophthalmology, and visual psychophysics. We developed a user-friendly and flexible stimulus generating framework (GEARS GPU-based Eye And Retina Stimulation Software), which offers access to GPU computing power and allows interactive modification of stimulus parameters during experiments. Furthermore, it has built-in support for driving external equipment, as well as for synchronization tasks, via USB ports. The use of GEARS does not require elaborate programming skills. The necessary scripting is visually aided by an intuitive interface, while the details of the underlying software and hardware components remain hidden. Internally, the software is a C++/Python hybrid using OpenGL graphics. Computations are performed on the GPU, and are defined in the GLSL shading language. However, all GPU settings, including the GPU shader programs, are automatically generated by GEARS. This is configured through a method encountered in game programming, which allows high flexibility: stimuli are straightforwardly composed using a broad library of basic components. Stimulus rendering is implemented solely in C++; therefore, intermediary libraries for interfacing could be omitted. This enables the program to perform computationally demanding tasks like en masse random number generation or real-time image processing by local and global operations.
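To make the component-composition idea concrete, a minimal Python sketch is given below. The class and method names (Component, Grating, GaussianAperture, shade) are purely illustrative assumptions and are not the actual GEARS scripting API, which generates GLSL shader code executed on the GPU rather than evaluating pixels in Python.

```python
# Hypothetical illustration of component-based stimulus composition; these
# class and method names are NOT the actual GEARS scripting API, which
# generates GLSL shader code rather than evaluating pixels in Python.
import math

class Component:
    """A stimulus building block evaluated per position and time."""
    def shade(self, x, y, t):
        raise NotImplementedError

class Grating(Component):
    """Drifting sinusoidal grating with given spatial/temporal frequency."""
    def __init__(self, spatial_freq, temporal_freq, contrast=1.0):
        self.sf, self.tf, self.contrast = spatial_freq, temporal_freq, contrast
    def shade(self, x, y, t):
        return 0.5 + 0.5 * self.contrast * math.cos(
            2 * math.pi * (self.sf * x - self.tf * t))

class GaussianAperture(Component):
    """Blends an inner component toward mid-grey with a Gaussian window."""
    def __init__(self, inner, sigma):
        self.inner, self.sigma = inner, sigma
    def shade(self, x, y, t):
        w = math.exp(-(x * x + y * y) / (2 * self.sigma ** 2))
        return w * self.inner.shade(x, y, t) + (1 - w) * 0.5

stimulus = GaussianAperture(Grating(spatial_freq=2.0, temporal_freq=1.5), sigma=0.2)
print(stimulus.shade(0.1, 0.0, 0.25))  # intensity at one location, t = 0.25 s
```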
Vitality Forms Expressed by Others Modulate Our Own Motor Response: A Kinematic Study
Di Cesare, Giuseppe; De Stefani, Elisa; Gentilucci, Maurizio; De Marco, Doriana
2017-01-01
During social interaction, actions and words may be expressed in different ways, for example, gently or rudely. A handshake can be gentle or vigorous and, similarly, tone of voice can be pleasant or rude. These aspects of social communication have been named vitality forms by Daniel Stern. Vitality forms represent how an action is performed and characterize all human interactions. In spite of their importance in social life, to date it is not clear whether the vitality forms expressed by the agent can influence the execution of a subsequent action performed by the receiver. To shed light on this matter, in the present study we carried out a kinematic experiment aiming to assess whether and how visual and auditory properties of vitality forms expressed by others influenced the motor response of participants. In particular, participants were presented with video-clips showing a male and a female actor performing a “giving request” (give me) or a “taking request” (take it) in visual, auditory, and mixed modalities (visual and auditory). Most importantly, requests were expressed with rude or gentle vitality forms. After the actor's request, participants performed a subsequent action. Results showed that vitality forms expressed by the actors influenced the kinematic parameters of the participants' actions regardless of the modality by which they were conveyed. PMID:29204114
Stronger Neural Modulation by Visual Motion Intensity in Autism Spectrum Disorders
Peiker, Ina; Schneider, Till R.; Milne, Elizabeth; Schöttle, Daniel; Vogeley, Kai; Münchau, Alexander; Schunke, Odette; Siegel, Markus; Engel, Andreas K.; David, Nicole
2015-01-01
Theories of autism spectrum disorders (ASD) have focused on altered perceptual integration of sensory features as a possible core deficit. Yet, there is little understanding of the neuronal processing of elementary sensory features in ASD. For typically developed individuals, we previously established a direct link between frequency-specific neural activity and the intensity of a specific sensory feature: Gamma-band activity in the visual cortex increased approximately linearly with the strength of visual motion. Using magnetoencephalography (MEG), we investigated whether in individuals with ASD neural activity reflects the coherence, and thus intensity, of visual motion in a similar fashion. Thirteen adult participants with ASD and 14 control participants performed a motion direction discrimination task with increasing levels of motion coherence. A polynomial regression analysis revealed that gamma-band power increased significantly more steeply with motion coherence in ASD compared to controls, suggesting excessive visual activation with increasing stimulus intensity originating from motion-responsive visual areas V3, V6 and hMT/V5. Enhanced neural responses with increasing stimulus intensity suggest an enhanced response gain in ASD. Response gain is controlled by excitatory-inhibitory interactions, which also drive high-frequency oscillations in the gamma-band. Thus, our data suggest that a disturbed excitatory-inhibitory balance underlies enhanced neural responses to coherent motion in ASD. PMID:26147342
Alvarez, George A.; Cavanagh, Patrick
2014-01-01
It is much easier to divide attention across the left and right visual hemifields than within the same visual hemifield. Here we investigate whether this benefit of dividing attention across separate visual fields is evident at early cortical processing stages. We measured the steady-state visual evoked potential, an oscillatory response of the visual cortex elicited by flickering stimuli, of moving targets and distractors while human observers performed a tracking task. The amplitude of responses at the target frequencies was larger than that of the distractor frequencies when participants tracked two targets in separate hemifields, indicating that attention can modulate early visual processing when it is divided across hemifields. However, these attentional modulations disappeared when both targets were tracked within the same hemifield. These effects were not due to differences in task performance, because accuracy was matched across the tracking conditions by adjusting target speed (with control conditions ruling out effects due to speed alone). To investigate later processing stages, we examined the P3 component over central-parietal scalp sites that was elicited by the test probe at the end of the trial. The P3 amplitude was larger for probes on targets than on distractors, regardless of whether attention was divided across or within a hemifield, indicating that these higher-level processes were not constrained by visual hemifield. These results suggest that modulating early processing stages enables more efficient target tracking, and that within-hemifield competition limits the ability to modulate multiple target representations within the hemifield maps of the early visual cortex. PMID:25164651
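As a rough illustration of the frequency-tagging logic behind the steady-state visual evoked potential measure, the sketch below reads out the amplitude spectrum of a simulated EEG trace at two assumed flicker frequencies. The signal, frequencies, and sampling rate are placeholders, not the study's data or analysis code.

```python
# Minimal sketch of SSVEP frequency tagging: amplitude at each flicker
# frequency is read off the FFT spectrum. All values are simulated placeholders.
import numpy as np

fs, dur = 500.0, 4.0                       # Hz, seconds (assumed)
t = np.arange(0, dur, 1 / fs)
f_target, f_distractor = 12.0, 15.0        # assumed flicker ("tag") frequencies
eeg = (1.0 * np.sin(2 * np.pi * f_target * t)        # attended -> larger response
       + 0.4 * np.sin(2 * np.pi * f_distractor * t)
       + np.random.default_rng(3).normal(scale=2.0, size=t.size))

spectrum = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def amp_at(f):
    """Spectral amplitude at the bin closest to frequency f."""
    return spectrum[np.argmin(np.abs(freqs - f))]

print("target SSVEP amplitude:    ", amp_at(f_target))
print("distractor SSVEP amplitude:", amp_at(f_distractor))
```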
Davidesco, Ido; Harel, Michal; Ramot, Michal; Kramer, Uri; Kipervasser, Svetlana; Andelman, Fani; Neufeld, Miri Y; Goelman, Gadi; Fried, Itzhak; Malach, Rafael
2013-01-16
One of the puzzling aspects in the visual attention literature is the discrepancy between electrophysiological and fMRI findings: whereas fMRI studies reveal strong attentional modulation in the earliest visual areas, single-unit and local field potential studies yielded mixed results. In addition, it is not clear to what extent spatial attention effects extend from early to high-order visual areas. Here we addressed these issues using electrocorticography recordings in epileptic patients. The patients performed a task that allowed simultaneous manipulation of both spatial and object-based attention. They were presented with composite stimuli, consisting of a small object (face or house) superimposed on a large one, and in separate blocks, were instructed to attend one of the objects. We found a consistent increase in broadband high-frequency (30-90 Hz) power, but not in visual evoked potentials, associated with spatial attention starting with V1/V2 and continuing throughout the visual hierarchy. The magnitude of the attentional modulation was correlated with the spatial selectivity of each electrode and its distance from the occipital pole. Interestingly, the latency of the attentional modulation showed a significant decrease along the visual hierarchy. In addition, electrodes placed over high-order visual areas (e.g., fusiform gyrus) showed both effects of spatial and object-based attention. Overall, our results help to reconcile previous observations of discrepancy between fMRI and electrophysiology. They also imply that spatial attention effects can be found both in early and high-order visual cortical areas, in parallel with their stimulus tuning properties.
Visual motion modulates pattern sensitivity ahead, behind, and beside motion
Arnold, Derek H.; Marinovic, Welber; Whitney, David
2014-01-01
Retinal motion can modulate visual sensitivity. For instance, low contrast drifting waveforms (targets) can be easier to detect when abutting the leading edges of movement in adjacent high contrast waveforms (inducers), rather than the trailing edges. This target-inducer interaction is contingent on the adjacent waveforms being consistent with one another – in-phase as opposed to out-of-phase. It has been suggested that this happens because there is a perceptually explicit predictive signal at leading edges of motion that summates with low contrast physical input – a ‘predictive summation’. Another possible explanation is a phase sensitive ‘spatial summation’, a summation of physical inputs spread across the retina (not predictive signals). This should be non-selective in terms of position – it should be evident at leading, adjacent, and at trailing edges of motion. To tease these possibilities apart, we examined target sensitivity at leading, adjacent, and trailing edges of motion. We also examined target sensitivity adjacent to flicker, and for a stimulus that is less susceptible to spatial summation, as it sums to grey across a small retinal expanse. We found evidence for spatial summation in all but the last condition. Finally, we examined sensitivity to an absence of signal at leading and trailing edges of motion, finding greater sensitivity at leading edges. These results are inconsistent with the existence of a perceptually explicit predictive signal in advance of drifting waveforms. Instead, we suggest that phase-contingent target-inducer modulations of sensitivity are explicable in terms of a directionally modulated spatial summation. PMID:24699250
NASA Astrophysics Data System (ADS)
Varma, Keisha; Linn, Marcia C.
2012-08-01
In this work, we examine middle school students' understanding of the greenhouse effect and global warming. We designed and refined a technology-enhanced curriculum module called Global Warming: Virtual Earth. In the module activities, students conduct virtual experiments with a visualization of the greenhouse effect. They analyze data and draw conclusions about how individual variables effect changes in the Earth's temperature. They also carry out inquiry activities to make connections between scientific processes, the socio-scientific issues, and ideas presented in the media. Results show that participating in the unit increases students' understanding of the science. We discuss how students integrate their ideas about global climate change as a result of using virtual experiments that allow them to explore meaningful complexities of the climate system.
The modulation of delta responses in the interaction of brightness and emotion.
Kurt, Pınar; Eroğlu, Kübra; Bayram Kuzgun, Tubanur; Güntekin, Bahar
2017-02-01
The modulation of delta oscillations (0.5-3.5 Hz) by emotional stimuli is reported. Physical attributes such as color, brightness and spatial frequency of emotional visual stimuli have a crucial effect on the perception of complex scenes. Brightness is intimately related to emotional valence. Here we explored the effect of brightness on delta oscillatory responses upon presentation of pleasant, unpleasant and neutral pictures. We found that bright unpleasant pictures elicited a lower amplitude of delta response than original unpleasant pictures. The electrophysiological findings of the study were in accordance with the behavioral data. These results underscore the importance of delta responses when examining the association between perceptual and conceptual processes in the context of brightness and emotion. Copyright © 2016 Elsevier B.V. All rights reserved.
When apperceptive agnosia is explained by a deficit of primary visual processing.
Serino, Andrea; Cecere, Roberto; Dundon, Neil; Bertini, Caterina; Sanchez-Castaneda, Cristina; Làdavas, Elisabetta
2014-03-01
Visual agnosia is a deficit in shape perception, affecting figure, object, face and letter recognition. Agnosia is usually attributed to lesions to high-order modules of the visual system, which combine visual cues to represent the shape of objects. However, most previously reported agnosia cases presented with visual field (VF) defects and poor primary visual processing. The present case study aims to verify whether form agnosia could be explained by a deficit in basic visual functions, rather than by a deficit in high-order shape recognition. Patient SDV suffered a bilateral lesion of the occipital cortex due to anoxia. When tested, he could navigate, interact with others, and was autonomous in daily life activities. However, he could not recognize objects from drawings and figures, read or recognize familiar faces. He was able to recognize objects by touch and people from their voice. Assessments of visual functions showed blindness at the centre of the VF, up to almost 5°, bilaterally, with better stimulus detection in the periphery. Colour and motion perception was preserved. Psychophysical experiments showed that SDV's visual recognition deficits were not explained by poor spatial acuity or by the crowding effect. Rather, a severe deficit in line orientation processing might be a key mechanism explaining SDV's agnosia. Line orientation processing is a basic function of primary visual cortex neurons, necessary for detecting "edges" of visual stimuli to build up a "primal sketch" for object recognition. We propose, therefore, that some forms of visual agnosia may be explained by deficits in basic visual functions due to widespread lesions of the primary visual areas, affecting primary levels of visual processing. Copyright © 2013 Elsevier Ltd. All rights reserved.
Digital fabrication of multi-material biomedical objects.
Cheung, H H; Choi, S H
2009-12-01
This paper describes a multi-material virtual prototyping (MMVP) system for modelling and digital fabrication of discrete and functionally graded multi-material objects for biomedical applications. The MMVP system consists of a DMMVP module, an FGMVP module and a virtual reality (VR) simulation module. The DMMVP module is used to model discrete multi-material (DMM) objects, while the FGMVP module is for functionally graded multi-material (FGM) objects. The VR simulation module integrates these two modules to perform digital fabrication of multi-material objects, which can be subsequently visualized and analysed in a virtual environment to optimize MMLM processes for fabrication of product prototypes. Using the MMVP system, two biomedical objects, a DMM human spine and an FGM intervertebral disc spacer, are modelled and digitally fabricated for visualization and analysis in a VR environment. These studies show that the MMVP system is a practical tool for modelling, visualization, and subsequent fabrication of discrete and functionally graded multi-material objects for biomedical applications. The system may be adapted to control MMLM machines with appropriate hardware for physical fabrication of biomedical objects.
Perceived state of self during motion can differentially modulate numerical magnitude allocation.
Arshad, Q; Nigmatullina, Y; Roberts, R E; Goga, U; Pikovsky, M; Khan, S; Lobo, R; Flury, A-S; Pettorossi, V E; Cohen-Kadosh, R; Malhotra, P A; Bronstein, A M
2016-09-01
Although a direct relationship between numerical allocation and spatial attention has been proposed, recent research suggests that these processes are not directly coupled. In keeping with this, spatial attention shifts induced either via visual or vestibular motion can modulate numerical allocation in some circumstances but not in others. In addition to shifting spatial attention, visual or vestibular motion paradigms also (i) elicit compensatory eye movements which themselves can influence numerical processing and (ii) alter the perceptual state of 'self', inducing changes in bodily self-consciousness impacting upon cognitive mechanisms. Thus, the precise mechanism by which motion modulates numerical allocation remains unknown. We sought to investigate the influence that different perceptual experiences of motion have upon numerical magnitude allocation while controlling for both eye movements and task-related effects. We first used optokinetic visual motion stimulation (OKS) to elicit the perceptual experience of either 'visual world' or 'self'-motion during which eye movements were identical. In a second experiment, we used a vestibular protocol examining the effects of perceived and subliminal angular rotations in darkness, which also provoked identical eye movements. We observed that during the perceptual experience of 'visual world' motion, rightward OKS biased judgments towards smaller numbers, whereas leftward OKS biased judgments towards larger numbers. During the perceptual experience of 'self-motion', judgments were biased towards larger numbers irrespective of the OKS direction. Contrastingly, vestibular motion perception was found not to modulate numerical magnitude allocation, nor was there any differential modulation when comparing 'perceived' vs. 'subliminal' rotations. We provide a novel demonstration that numerical magnitude allocation can be differentially modulated by the perceptual state of self during visual but not vestibular mediated motion. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Job, Xavier E; de Fockert, Jan W; van Velzen, José
2016-08-01
Behavioural and electrophysiological evidence has demonstrated that preparation of goal-directed actions modulates sensory perception at the goal location before the action is executed. However, previous studies have focused on sensory perception in areas of peripersonal space. The present study investigated visual and tactile sensory processing at the goal location of upcoming movements towards the body, much of which is not visible, as well as visible peripersonal space. A motor task cued participants to prepare a reaching movement towards goals either in peripersonal space in front of them or personal space on the upper chest. In order to assess modulations of sensory perception during movement preparation, event-related potentials (ERPs) were recorded in response to task-irrelevant visual and tactile probe stimuli delivered randomly at one of the goal locations of the movements. In line with previous neurophysiological findings, movement preparation modulated visual processing at the goal of a movement in peripersonal space. Movement preparation also modulated somatosensory processing at the movement goal in personal space. The findings demonstrate that tactile perception in personal space is subject to similar top-down sensory modulation by motor preparation as observed for visual stimuli presented in peripersonal space. These findings show for the first time that the principles and mechanisms underlying adaptive modulation of sensory processing in the context of action extend to tactile perception in unseen personal space. Copyright © 2016 Elsevier Ltd. All rights reserved.
Top-down knowledge modulates onset capture in a feedforward manner.
Becker, Stefanie I; Lewis, Amanda J; Axtens, Jenna E
2017-04-01
How do we select behaviourally important information from cluttered visual environments? Previous research has shown that both top-down, goal-driven factors and bottom-up, stimulus-driven factors determine which stimuli are selected. However, it is still debated when top-down processes modulate visual selection. According to a feedforward account, top-down processes modulate visual processing even before the appearance of any stimuli, whereas others claim that top-down processes modulate visual selection only at a late stage, via feedback processing. In line with such a dual stage account, some studies found that eye movements to an irrelevant onset distractor are not modulated by its similarity to the target stimulus, especially when eye movements are launched early (within 150 ms post stimulus onset). However, in these studies the target transiently changed colour due to a colour after-effect that occurred during premasking, and the time course analyses were incomplete. The present study tested the feedforward account against the dual stage account in two eye tracking experiments, with and without colour after-effects (Exp. 1), as well as when the target colour varied randomly and observers were informed of the target colour with a word cue (Exp. 2). The results showed that top-down processes modulated the earliest eye movements to the onset distractors (<150 ms latencies), without incurring any costs for selection of target matching distractors. These results unambiguously support a feedforward account of top-down modulation.
The OpenEarth Framework (OEF) for the 3D Visualization of Integrated Earth Science Data
NASA Astrophysics Data System (ADS)
Nadeau, David; Moreland, John; Baru, Chaitan; Crosby, Chris
2010-05-01
Data integration is increasingly important as we strive to combine data from disparate sources and assemble better models of the complex processes operating at the Earth's surface and within its interior. These data are often large, multi-dimensional, and subject to differing conventions for data structures, file formats, coordinate spaces, and units of measure. When visualized, these data require differing, and sometimes conflicting, conventions for visual representations, dimensionality, symbology, and interaction. All of this makes the visualization of integrated Earth science data particularly difficult. The OpenEarth Framework (OEF) is an open-source data integration and visualization suite of applications and libraries being developed by the GEON project at the University of California, San Diego, USA. Funded by the NSF, the project is leveraging virtual globe technology from NASA's WorldWind to create interactive 3D visualization tools that combine and layer data from a wide variety of sources to create a holistic view of features at, above, and beneath the Earth's surface. The OEF architecture is open, cross-platform, modular, and based upon Java. The OEF's modular approach to software architecture yields an array of mix-and-match software components for assembling custom applications. Available modules support file format handling, web service communications, data management, user interaction, and 3D visualization. File parsers handle a variety of formal and de facto standard file formats used in the field. Each one imports data into a general-purpose common data model supporting multidimensional regular and irregular grids, topography, feature geometry, and more. Data within these data models may be manipulated, combined, reprojected, and visualized. The OEF's visualization features support a variety of conventional and new visualization techniques for looking at topography, tomography, point clouds, imagery, maps, and feature geometry. 3D data such as seismic tomography may be sliced by multiple oriented cutting planes and isosurfaced to create 3D skins that trace feature boundaries within the data. Topography may be overlaid with satellite imagery, maps, and data such as gravity and magnetics measurements. Multiple data sets may be visualized simultaneously using overlapping layers within a common 3D coordinate space. Data management within the OEF handles and hides the inevitable quirks of differing file formats, web protocols, storage structures, coordinate spaces, and metadata representations. Heuristics are used to extract the metadata needed to guide data and visual operations. Derived data representations are computed to better support fluid interaction and visualization while the original data is left unchanged in its original form. Data is cached for better memory and network efficiency, and all visualization makes use of 3D graphics hardware support found on today's computers. The OpenEarth Framework project is currently prototyping the software for use in the visualization and integration of continental-scale geophysical data being produced by EarthScope-related research in the Western US. The OEF is providing researchers with new ways to display and interrogate their data and is anticipated to be a valuable tool for future EarthScope-related research.
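A minimal sketch of the common-data-model idea, assuming simple regular lat/lon grids: two layers on different grids are resampled onto a shared grid so they can be overlaid. It is written in plain Python/NumPy for illustration and is not the OEF's Java/WorldWind API; the class and method names are hypothetical.

```python
# Hypothetical sketch of "common data model + layered visualization": layers on
# different grids are resampled to one shared grid for overlay. Not OEF code.
import numpy as np

class GridLayer:
    """A regular lat/lon grid of values (e.g. topography, gravity)."""
    def __init__(self, name, lats, lons, values):
        self.name, self.lats, self.lons, self.values = name, lats, lons, values

    def resample_to(self, lats, lons):
        # Nearest-neighbour resampling onto a shared grid, standing in for the
        # reprojection / derived-representation step described in the abstract.
        ilat = np.abs(self.lats[:, None] - lats[None, :]).argmin(axis=0)
        ilon = np.abs(self.lons[:, None] - lons[None, :]).argmin(axis=0)
        return self.values[np.ix_(ilat, ilon)]

# Two layers defined on different grids, combined on a common one for display.
topo = GridLayer("topography", np.linspace(30, 50, 41), np.linspace(-125, -100, 51),
                 np.random.rand(41, 51))
grav = GridLayer("gravity", np.linspace(30, 50, 21), np.linspace(-125, -100, 26),
                 np.random.rand(21, 26))
lats, lons = np.linspace(30, 50, 81), np.linspace(-125, -100, 101)
stack = {layer.name: layer.resample_to(lats, lons) for layer in (topo, grav)}
print({name: grid.shape for name, grid in stack.items()})  # both (81, 101)
```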
NCC: A Multidisciplinary Design/Analysis Tool for Combustion Systems
NASA Technical Reports Server (NTRS)
Liu, Nan-Suey; Quealy, Angela
1999-01-01
A multi-disciplinary design/analysis tool for combustion systems is critical for optimizing the low-emission, high-performance combustor design process. Based on discussions between NASA Lewis Research Center and the jet engine companies, an industry-government team was formed in early 1995 to develop the National Combustion Code (NCC), which is an integrated system of computer codes for the design and analysis of combustion systems. NCC has advanced features that address the need to meet designer's requirements such as "assured accuracy", "fast turnaround", and "acceptable cost". The NCC development team is comprised of Allison Engine Company (Allison), CFD Research Corporation (CFDRC), GE Aircraft Engines (GEAE), NASA Lewis Research Center (LeRC), and Pratt & Whitney (P&W). This development team operates under the guidance of the NCC steering committee. The "unstructured mesh" capability and "parallel computing" are fundamental features of NCC from its inception. The NCC system is composed of a set of "elements" which includes grid generator, main flow solver, turbulence module, turbulence and chemistry interaction module, chemistry module, spray module, radiation heat transfer module, data visualization module, and a post-processor for evaluating engine performance parameters. Each element may have contributions from several team members. Such a multi-source multi-element system needs to be integrated in a way that facilitates inter-module data communication, flexibility in module selection, and ease of integration.
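The multi-element integration described above can be pictured, very roughly, as modules exchanging named solution fields through a shared store. The sketch below is a hypothetical Python illustration of that coupling pattern, not NCC source code; the module names, fields, and update rules are placeholders.

```python
# Hypothetical illustration of multi-element coupling: modules declare the
# fields they read and write and exchange them through a shared store.
# This is NOT NCC code; names and updates are placeholders.

class FieldStore(dict):
    """Shared container for solution fields exchanged between modules."""

class Module:
    reads, writes = (), ()
    def step(self, fields: FieldStore) -> None:
        raise NotImplementedError

class FlowSolver(Module):
    reads, writes = ("mesh",), ("velocity",)
    def step(self, fields):
        fields["velocity"] = [0.0] * len(fields["mesh"])  # placeholder update

class ChemistryModule(Module):
    reads, writes = ("velocity",), ("temperature",)
    def step(self, fields):
        fields["temperature"] = [300.0 + abs(v) for v in fields["velocity"]]

def run(modules, fields, n_iter=3):
    """Iterate the coupled modules, checking that each has its inputs."""
    for _ in range(n_iter):
        for m in modules:
            missing = [f for f in m.reads if f not in fields]
            if missing:
                raise KeyError(f"{type(m).__name__} missing inputs: {missing}")
            m.step(fields)

fields = FieldStore(mesh=list(range(8)))
run([FlowSolver(), ChemistryModule()], fields)
print(sorted(fields))  # ['mesh', 'temperature', 'velocity']
```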
Linking pain and the body: neural correlates of visually induced analgesia.
Longo, Matthew R; Iannetti, Gian Domenico; Mancini, Flavia; Driver, Jon; Haggard, Patrick
2012-02-22
The visual context of seeing the body can reduce the experience of acute pain, producing a multisensory analgesia. Here we investigated the neural correlates of this "visually induced analgesia" using fMRI. We induced acute pain with an infrared laser while human participants looked either at their stimulated right hand or at another object. Behavioral results confirmed the expected analgesic effect of seeing the body, while fMRI results revealed an associated reduction of laser-induced activity in ipsilateral primary somatosensory cortex (SI) and contralateral operculoinsular cortex during the visual context of seeing the body. We further identified two known cortical networks activated by sensory stimulation: (1) a set of brain areas consistently activated by painful stimuli (the so-called "pain matrix"), and (2) an extensive set of posterior brain areas activated by the visual perception of the body ("visual body network"). Connectivity analyses via psychophysiological interactions revealed that the visual context of seeing the body increased effective connectivity (i.e., functional coupling) between posterior parietal nodes of the visual body network and the purported pain matrix. Increased connectivity with these posterior parietal nodes was seen for several pain-related regions, including somatosensory area SII, anterior and posterior insula, and anterior cingulate cortex. These findings suggest that visually induced analgesia does not involve an overall reduction of the cortical response elicited by laser stimulation, but is consequent to the interplay between the brain's pain network and a posterior network for body perception, resulting in modulation of the experience of pain.
Stimulus-dependent modulation of spontaneous low-frequency oscillations in the rat visual cortex.
Huang, Liangming; Liu, Yadong; Gui, Jianjun; Li, Ming; Hu, Dewen
2014-08-06
Research on spontaneous low-frequency oscillations is important to reveal underlying regulatory mechanisms in the brain. The mechanism for the stimulus modulation of low-frequency oscillations is not known. Here, we used the intrinsic optical imaging technique to examine stimulus-modulated low-frequency oscillation signals in the rat visual cortex. The stimulation was presented monocularly as a flashing light with different frequencies and intensities. The phases of low-frequency oscillations in different regions tended to be synchronized and the rhythms typically accelerated within a 30-s period after stimulation. These phenomena were confined to visual stimuli with specific flashing frequencies (12.5-17.5 Hz) and intensities (5-10 mA). The acceleration and synchronization induced by the flashing frequency were more marked than those induced by the intensity. These results show that spontaneous low-frequency oscillations can be modulated by parameter-dependent flashing lights and indicate the potential utility of the visual stimulus paradigm in exploring the origin and function of low-frequency oscillations.
Mannion, Damien J; Donkin, Chris; Whitford, Thomas J
2017-01-01
We investigated the relationship between psychometrically-defined schizotypy and the ability to detect a visual target pattern. Target detection is typically impaired by a surrounding pattern (context) with an orientation that is parallel to the target, relative to a surrounding pattern with an orientation that is orthogonal to the target (orientation-dependent contextual modulation). Based on reports that this effect is reduced in those with schizophrenia, we hypothesised that there would be a negative relationship between the relative score on psychometrically-defined schizotypy and the relative effect of orientation-dependent contextual modulation. We measured visual contrast detection thresholds and scores on the Oxford-Liverpool Inventory of Feelings and Experiences (O-LIFE) from a non-clinical sample (N = 100). Contrary to our hypothesis, we find an absence of a monotonic relationship between the relative magnitude of orientation-dependent contextual modulation of visual contrast detection and the relative score on any of the subscales of the O-LIFE. The apparent discrepancy between this result and previous reports on those with schizophrenia suggests that orientation-dependent contextual modulation may be an informative condition in which schizophrenia and psychometrically-defined schizotypy are dissociated. However, further research is also required to clarify the strength of orientation-dependent contextual modulation in those with schizophrenia.
Oh, Min; Ahn, Jaegyoon; Yoon, Youngmi
2014-01-01
The growing number and variety of genetic network datasets increases the feasibility of understanding how drugs and diseases are associated at the molecular level. Properly selected features of the network representations of existing drug-disease associations can be used to infer novel indications of existing drugs. To find new drug-disease associations, we generated an integrative genetic network using combinations of interactions, including protein-protein interactions and gene regulatory network datasets. Within this network, drug-drug and disease-disease adjacencies were quantified using scored paths between their target sets. Furthermore, common topological modules of drugs or diseases were extracted, and the distance between each topological drug module and a disease (or disease module and a drug) was quantified. These quantified scores were used as features for the prediction of novel drug-disease associations. Our classifiers using Random Forest, Multilayer Perceptron and C4.5 showed a high specificity and sensitivity (AUC scores of 0.855, 0.828 and 0.797, respectively) in predicting novel drug indications, and displayed a better performance than other methods with limited drug and disease properties. Our predictions and current clinical trials overlap significantly across the different phases of drug development. We also identified and visualized the topological modules of predicted drug indications for certain types of cancers, and for Alzheimer’s disease. Within the network, those modules show potential pathways that illustrate the mechanisms of new drug indications, including propranolol as a potential anticancer agent and telmisartan as treatment for Alzheimer’s disease. PMID:25356910
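As an illustration of the classification step, the sketch below trains a Random Forest on placeholder per-pair features standing in for the network-derived scores described above (scored-path adjacency and drug/disease module distances) and evaluates it with cross-validated AUC. The feature values and labels are randomly generated, not the authors' data.

```python
# Minimal sketch, assuming the network-derived scores are already computed as
# numeric features per drug-disease pair; all values here are random placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_pairs = 500
X = np.column_stack([
    rng.random(n_pairs),   # e.g. scored-path adjacency of drug vs disease targets
    rng.random(n_pairs),   # e.g. distance from drug module to disease targets
    rng.random(n_pairs),   # e.g. distance from disease module to drug targets
])
y = rng.integers(0, 2, n_pairs)  # 1 = known drug-disease association

clf = RandomForestClassifier(n_estimators=200, random_state=0)
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print("mean cross-validated AUC:", auc.mean())   # ~0.5 on random placeholders

clf.fit(X, y)
print("novel-indication score for first pair:", clf.predict_proba(X[:1])[0, 1])
```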
Rationalizing Tight Ligand Binding through Cooperative Interaction Networks
2011-01-01
Small modifications of the molecular structure of a ligand sometimes cause strong gains in binding affinity to a protein target, rendering a weakly active chemical series suddenly attractive for further optimization. Our goal in this study is to better rationalize and predict the occurrence of such interaction hot-spots in receptor binding sites. To this end, we introduce two new concepts into the computational description of molecular recognition. First, we take a broader view of noncovalent interactions and describe protein–ligand binding with a comprehensive set of favorable and unfavorable contact types, including for example halogen bonding and orthogonal multipolar interactions. Second, we go beyond the commonly used pairwise additive treatment of atomic interactions and use a small world network approach to describe how interactions are modulated by their environment. This approach allows us to capture local cooperativity effects and considerably improves the performance of a newly derived empirical scoring function, ScorpionScore. More importantly, however, we demonstrate how an intuitive visualization of key intermolecular interactions, interaction networks, and binding hot-spots supports the identification and rationalization of tight ligand binding. PMID:22087588
Interactive, Automated Management of Icing Data
NASA Technical Reports Server (NTRS)
Levinson, Laurie H.
2009-01-01
IceVal DatAssistant is software that provides an automated, interactive solution for the management of data from research on aircraft icing. This software consists primarily of (1) a relational database component used to store ice shape and airfoil coordinates and associated data on operational and environmental test conditions and (2) a graphically oriented database access utility, used to upload, download, process, and/or display data selected by the user. The relational database component consists of a Microsoft Access 2003 database file with nine tables containing data of different types. Included in the database are the data for all publicly releasable ice tracings with complete and verifiable test conditions from experiments conducted to date in the Glenn Research Center Icing Research Tunnel. Ice shapes from computational simulations with the corresponding conditions performed utilizing the latest version of the LEWICE ice shape prediction code are likewise included, and are linked to the equivalent experimental runs. The database access component includes ten Microsoft Visual Basic 6.0 (VB) form modules and three VB support modules. Together, these modules enable uploading, downloading, processing, and display of all data contained in the database. This component also affords the capability to perform various database maintenance functions, for example compacting the database or creating a new, fully initialized but empty database file.
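For readers who want programmatic rather than form-based access to such a Microsoft Access database, a minimal pyodbc sketch is shown below; the file path, table, and column names are hypothetical placeholders and do not reproduce the actual IceVal schema.

```python
# Minimal sketch of programmatic access to a Microsoft Access (.mdb) database
# of this kind via pyodbc; the path, table, and column names below are
# hypothetical placeholders, not the actual IceVal DatAssistant schema.
import pyodbc

conn_str = (
    r"DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};"
    r"DBQ=C:\data\iceval.mdb;"  # assumed location of the database file
)
conn = pyodbc.connect(conn_str)
cur = conn.cursor()
# Hypothetical query: cold-temperature test conditions and their run IDs.
cur.execute(
    "SELECT RunID, Temperature, LWC FROM TestConditions WHERE Temperature < ?",
    (-10.0,),
)
for run_id, temperature, lwc in cur.fetchall():
    print(run_id, temperature, lwc)
conn.close()
```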
Figure-ground modulation in awake primate thalamus.
Jones, Helen E; Andolina, Ian M; Shipp, Stewart D; Adams, Daniel L; Cudeiro, Javier; Salt, Thomas E; Sillito, Adam M
2015-06-02
Figure-ground discrimination refers to the perception of an object, the figure, against a nondescript background. Neural mechanisms of figure-ground detection have been associated with feedback interactions between higher centers and primary visual cortex and have been held to index the effect of global analysis on local feature encoding. Here, in recordings from visual thalamus of alert primates, we demonstrate a robust enhancement of neuronal firing when the figure, as opposed to the ground, component of a motion-defined figure-ground stimulus is located over the receptive field. In this paradigm, visual stimulation of the receptive field and its near environs is identical across both conditions, suggesting the response enhancement reflects higher integrative mechanisms. It thus appears that cortical activity generating the higher-order percept of the figure is simultaneously reentered into the lowest level that is anatomically possible (the thalamus), so that the signature of the evolving representation of the figure is imprinted on the input driving it in an iterative process.
Figure-ground modulation in awake primate thalamus
Jones, Helen E.; Andolina, Ian M.; Shipp, Stewart D.; Adams, Daniel L.; Cudeiro, Javier; Salt, Thomas E.; Sillito, Adam M.
2015-01-01
Figure-ground discrimination refers to the perception of an object, the figure, against a nondescript background. Neural mechanisms of figure-ground detection have been associated with feedback interactions between higher centers and primary visual cortex and have been held to index the effect of global analysis on local feature encoding. Here, in recordings from visual thalamus of alert primates, we demonstrate a robust enhancement of neuronal firing when the figure, as opposed to the ground, component of a motion-defined figure-ground stimulus is located over the receptive field. In this paradigm, visual stimulation of the receptive field and its near environs is identical across both conditions, suggesting the response enhancement reflects higher integrative mechanisms. It thus appears that cortical activity generating the higher-order percept of the figure is simultaneously reentered into the lowest level that is anatomically possible (the thalamus), so that the signature of the evolving representation of the figure is imprinted on the input driving it in an iterative process. PMID:25901330
Cognitive/emotional models for human behavior representation in 3D avatar simulations
NASA Astrophysics Data System (ADS)
Peterson, James K.
2004-08-01
Simplified models of human cognition and emotional response are presented which are based on models of auditory/visual polymodal fusion. At the core of these models is a computational model of Area 37 of the temporal cortex which is based on new isocortex models presented recently by Grossberg. These models are trained using carefully chosen auditory (musical sequences), visual (paintings) and higher level abstract (meta level) data obtained from studies of how optimization strategies are chosen in response to outside managerial inputs. The software modules developed are then used as inputs to character generation codes in standard 3D virtual world simulations. The auditory and visual training data also enable the development of simple music and painting composition generators which significantly enhance one's ability to validate the cognitive model. The cognitive models are handled as interacting software agents implemented as CORBA objects to allow the use of multiple language coding choices (C++, Java, Python, etc.) and efficient use of legacy code.
A unified account of tilt illusions, association fields, and contour detection based on elastica.
Keemink, Sander W; van Rossum, Mark C W
2016-09-01
As expressed in the Gestalt law of good continuation, human perception tends to associate stimuli that form smooth continuations. Contextual modulation in primary visual cortex, in the form of association fields, is believed to play an important role in this process. Yet a unified and principled account of the good continuation law at the neural level is lacking. In this study we introduce a population model of primary visual cortex. Its contextual interactions depend on the elastica curvature energy of the smoothest contour connecting oriented bars. As expected, this model leads to association fields consistent with data. However, in addition the model displays tilt illusions for stimulus configurations with gratings and single bars that closely match psychophysics. Furthermore, the model explains not only pop-out of contours amid a variety of backgrounds, but also pop-out of single targets amid a uniform background. We thus propose that elastica is a unifying principle of the visual cortical network. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
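For reference, the elastica energy referred to above is conventionally written as the integral of squared curvature along the connecting contour; the exact weighting and normalization used in the model may differ from this generic form.

```latex
% Generic elastica (bending) energy of a contour \gamma of total arc length L,
% with curvature \kappa(s) at arc-length position s; the smoothest connecting
% contour minimizes this integral. The model's exact weighting may differ.
E[\gamma] = \int_{0}^{L} \kappa(s)^{2} \, \mathrm{d}s
```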
Cuperlier, Nicolas; Gaussier, Philippe
2017-01-01
Emotions play a significant role in internal regulatory processes. In this paper, we advocate four key ideas. First, novelty detection can be grounded in the sensorimotor experience and allow higher order appraisal. Second, cognitive processes, such as those involved in self-assessment, influence emotional states by eliciting affects like boredom and frustration. Third, emotional processes such as those triggered by self-assessment influence attentional processes. Last, close emotion-cognition interactions implement an efficient feedback loop for the purpose of top-down behavior regulation. The latter is what we call ‘Emotional Metacontrol’. We introduce a model based on artificial neural networks. This architecture is used to control a robotic system in a visual search task. The emotional metacontrol intervenes to bias the robot visual attention during active object recognition. Through a behavioral and statistical analysis, we show that this mechanism increases the robot performance and fosters the exploratory behavior to avoid deadlocks. PMID:28934291
Real-time software-based end-to-end wireless visual communications simulation platform
NASA Astrophysics Data System (ADS)
Chen, Ting-Chung; Chang, Li-Fung; Wong, Andria H.; Sun, Ming-Ting; Hsing, T. Russell
1995-04-01
Wireless channel impairments pose many challenges to real-time visual communications. In this paper, we describe a real-time software-based wireless visual communications simulation platform which can be used for performance evaluation in real-time. This simulation platform consists of two personal computers serving as hosts. Major components of each PC host include a real-time programmable video codec, a wireless channel simulator, and a network interface for data transport between the two hosts. The three major components are interfaced in real-time to show the interaction of various wireless channels and video coding algorithms. The programmable features in the above components allow users to do performance evaluation of user-controlled wireless channel effects without physically carrying out these experiments, which are limited in scope, time-consuming, and costly. Using this simulation platform as a testbed, we have experimented with several wireless channel effects including Rayleigh fading, antenna diversity, channel filtering, symbol timing, modulation, and packet loss.
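One of the channel effects listed above, flat Rayleigh fading with additive noise, can be sketched in a few lines. The modulation (BPSK), SNR, and ideal channel knowledge assumed here are illustrative simplifications, not the platform's actual channel or codec models.

```python
# Minimal sketch of flat Rayleigh fading plus additive noise applied to a
# stream of BPSK symbols; parameters are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
bits = rng.integers(0, 2, n)
symbols = 2 * bits - 1.0                      # BPSK mapping: 0 -> -1, 1 -> +1

# Rayleigh fading: zero-mean complex Gaussian gain per symbol (unit mean power).
h = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)
snr_db = 10.0
noise_std = np.sqrt(0.5 / 10 ** (snr_db / 10))
noise = noise_std * (rng.normal(size=n) + 1j * rng.normal(size=n))

received = h * symbols + noise
equalized = received / h                      # ideal channel knowledge assumed
decoded = (equalized.real > 0).astype(int)
print("bit error rate:", np.mean(decoded != bits))
```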
Sellers, Kristin K; Bennett, Davis V; Fröhlich, Flavio
2015-02-19
Neuronal firing responses in visual cortex reflect the statistics of visual input and emerge from the interaction with endogenous network dynamics. Artificial visual stimuli presented to animals in which the network dynamics were constrained by anesthetic agents or trained behavioral tasks have provided fundamental understanding of how individual neurons in primary visual cortex respond to input. In contrast, very little is known about the mesoscale network dynamics and their relationship to microscopic spiking activity in the awake animal during free viewing of naturalistic visual input. To address this gap in knowledge, we recorded local field potential (LFP) and multiunit activity (MUA) simultaneously in all layers of primary visual cortex (V1) of awake, freely viewing ferrets presented with naturalistic visual input (nature movie clips). We found that naturalistic visual stimuli modulated the entire oscillation spectrum; low-frequency oscillations were mostly suppressed whereas higher-frequency oscillations were enhanced. On average across all cortical layers, stimulus-induced change in delta and alpha power negatively correlated with the MUA responses, whereas sensory-evoked increases in gamma power positively correlated with MUA responses. The time-course of the band-limited power in these frequency bands provided evidence for a model in which naturalistic visual input switched V1 between two distinct, endogenously present activity states defined by the power of low (delta, alpha) and high (gamma) frequency oscillatory activity. Therefore, the two mesoscale activity states delineated in this study may define the degree of engagement of the circuit with the processing of sensory input. Copyright © 2014 Elsevier B.V. All rights reserved.
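A minimal sketch of the band-limited power measure discussed above, using Welch's method on a placeholder signal; the sampling rate, band edges, and window length are assumed values rather than the study's exact analysis parameters.

```python
# Minimal sketch of band-limited LFP power (delta, alpha, gamma) via Welch's
# method; the signal and all parameters are illustrative placeholders.
import numpy as np
from scipy.signal import welch

fs = 1000.0                                         # Hz, assumed sampling rate
t = np.arange(0, 10, 1 / fs)
lfp = np.random.default_rng(2).normal(size=t.size)  # placeholder for recorded LFP

freqs, psd = welch(lfp, fs=fs, nperseg=2048)

def band_power(lo, hi):
    """Integrate the power spectral density between lo and hi (Hz)."""
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].sum() * (freqs[1] - freqs[0])

for name, (lo, hi) in {"delta": (1, 4), "alpha": (8, 12), "gamma": (30, 90)}.items():
    print(name, band_power(lo, hi))
```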
An error-tuned model for sensorimotor learning
Sadeghi, Mohsen; Wolpert, Daniel M.
2017-01-01
Current models of sensorimotor control posit that motor commands are generated by combining multiple modules which may consist of internal models, motor primitives or motor synergies. The mechanisms which select modules based on task requirements and modify their output during learning are therefore critical to our understanding of sensorimotor control. Here we develop a novel modular architecture for multi-dimensional tasks in which a set of fixed primitives are each able to compensate for errors in a single direction in the task space. The contribution of the primitives to the motor output is determined by both top-down contextual information and bottom-up error information. We implement this model for a task in which subjects learn to manipulate a dynamic object whose orientation can vary. In the model, visual information regarding the context (the orientation of the object) allows the appropriate primitives to be engaged. This top-down module selection is implemented by a Gaussian function tuned for the visual orientation of the object. Second, each module's contribution adapts across trials in proportion to its ability to decrease the current kinematic error. Specifically, adaptation is implemented by cosine tuning of primitives to the current direction of the error, which we show to be theoretically optimal for reducing error. This error-tuned model makes two novel predictions. First, interference should occur between alternating dynamics only when the kinematic errors associated with each oppose one another. In contrast, dynamics which lead to orthogonal errors should not interfere. Second, kinematic errors alone should be sufficient to engage the appropriate modules, even in the absence of contextual information normally provided by vision. We confirm both these predictions experimentally and show that the model can also account for data from previous experiments. Our results suggest that two interacting processes account for module selection during sensorimotor control and learning. PMID:29253869
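A minimal sketch of the two tuning rules described above, under assumed functional forms: a Gaussian of the visual context (object orientation) gates which primitives are engaged, and each primitive's weight adapts in proportion to the cosine of the angle between its preferred direction and the current kinematic error. The number of primitives, tuning widths, and learning rate are illustrative choices, not the paper's fitted values.

```python
# Sketch of context-gated, error-tuned primitive adaptation; all constants
# are illustrative assumptions, not the published model parameters.
import numpy as np

n_primitives = 8
pref_dirs = np.linspace(0, 2 * np.pi, n_primitives, endpoint=False)   # error directions
pref_contexts = np.linspace(-np.pi / 2, np.pi / 2, n_primitives)      # object orientations
sigma_context, eta = 0.4, 0.1
weights = np.zeros(n_primitives)

def motor_output(context):
    """Gaussian context tuning gates each primitive's contribution."""
    gate = np.exp(-(context - pref_contexts) ** 2 / (2 * sigma_context ** 2))
    force = (gate * weights) @ np.column_stack([np.cos(pref_dirs), np.sin(pref_dirs)])
    return force, gate

def adapt(gate, error_vec):
    """Cosine tuning to the error direction, scaled by error size and gating."""
    global weights
    err_angle = np.arctan2(error_vec[1], error_vec[0])
    err_mag = np.linalg.norm(error_vec)
    weights += eta * err_mag * gate * np.cos(err_angle - pref_dirs)

# One trial: object orientation 0.3 rad, kinematic error pointing along +x.
force, gate = motor_output(context=0.3)
adapt(gate, error_vec=np.array([1.0, 0.0]))
print(np.round(weights, 3))
```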
Comparison of vision through surface modulated and spatial light modulated multifocal optics.
Vinas, Maria; Dorronsoro, Carlos; Radhakrishnan, Aiswaryah; Benedi-Garcia, Clara; LaVilla, Edward Anthony; Schwiegerling, Jim; Marcos, Susana
2017-04-01
Spatial-light-modulators (SLM) are increasingly used as active elements in adaptive optics (AO) systems to simulate optical corrections, in particular multifocal presbyopic corrections. In this study, we compared vision through lathe-manufactured multi-zone (2-4) multifocal surfaces, segmented angularly and radially, and through the same corrections simulated with an SLM in a custom-developed two-active-element AO visual simulator. We found that perceived visual quality measured through the real manufactured surfaces and through the SLM-simulated phase maps agreed closely. Optical simulations predicted differences in perceived visual quality across different designs at far distance, but showed some discrepancies at intermediate and near distances.
Comparison of vision through surface modulated and spatial light modulated multifocal optics
Vinas, Maria; Dorronsoro, Carlos; Radhakrishnan, Aiswaryah; Benedi-Garcia, Clara; LaVilla, Edward Anthony; Schwiegerling, Jim; Marcos, Susana
2017-01-01
Spatial-light-modulators (SLM) are increasingly used as active elements in adaptive optics (AO) systems to simulate optical corrections, in particular multifocal presbyopic corrections. In this study, we compared vision through lathe-manufactured multi-zone (2-4) multifocal surfaces, segmented angularly and radially, and through the same corrections simulated with an SLM in a custom-developed two-active-element AO visual simulator. We found that perceived visual quality measured through the real manufactured surfaces and through the SLM-simulated phase maps agreed closely. Optical simulations predicted differences in perceived visual quality across different designs at far distance, but showed some discrepancies at intermediate and near distances. PMID:28736655
Frequency of gamma oscillations in humans is modulated by velocity of visual motion
Butorina, Anna V.; Sysoeva, Olga V.; Prokofyev, Andrey O.; Nikolaeva, Anastasia Yu.; Stroganova, Tatiana A.
2015-01-01
Gamma oscillations are generated in networks of inhibitory fast-spiking (FS) parvalbumin-positive (PV) interneurons and pyramidal cells. In animals, gamma frequency is modulated by the velocity of visual motion; the effect of velocity has not been evaluated in humans. In this work, we have studied velocity-related modulations of gamma frequency in children using MEG/EEG. We also investigated whether such modulations predict the prominence of the “spatial suppression” effect (Tadin D, Lappin JS, Gilroy LA, Blake R. Nature 424: 312-315, 2003) that is thought to depend on cortical center-surround inhibitory mechanisms. MEG/EEG was recorded in 27 normal boys aged 8–15 yr while they watched high-contrast black-and-white annular gratings drifting with velocities of 1.2, 3.6, and 6.0°/s and performed a simple detection task. The spatial suppression effect was assessed in a separate psychophysical experiment. MEG gamma oscillation frequency increased while power decreased with increasing velocity of visual motion. In EEG, the effects were less reliable. The frequencies of the velocity-specific gamma peaks were 64.9, 74.8, and 87.1 Hz for the slow, medium, and fast motions, respectively. The frequency of the gamma response elicited during slow and medium velocity of visual motion decreased with subject age, whereas the range of gamma frequency modulation by velocity increased with age. The frequency modulation range predicted spatial suppression even after controlling for the effect of age. We suggest that the modulation of the MEG gamma frequency by velocity of visual motion reflects excitability of cortical inhibitory circuits and can be used to investigate their normal and pathological development in the human brain. PMID:25925324
Research on Intelligent Synthesis Environments
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.; Lobeck, William E.
2002-01-01
Four research activities related to Intelligent Synthesis Environment (ISE) have been performed under this grant. The four activities are: 1) non-deterministic approaches that incorporate technologies such as intelligent software agents, visual simulations and other ISE technologies; 2) virtual labs that leverage modeling, simulation and information technologies to create an immersive, highly interactive virtual environment tailored to the needs of researchers and learners; 3) advanced learning modules that incorporate advanced instructional, user interface and intelligent agent technologies; and 4) assessment and continuous improvement of engineering team effectiveness in distributed collaborative environments.
Research on Intelligent Synthesis Environments
NASA Astrophysics Data System (ADS)
Noor, Ahmed K.; Loftin, R. Bowen
2002-12-01
Four research activities related to Intelligent Synthesis Environment (ISE) have been performed under this grant. The four activities are: 1) non-deterministic approaches that incorporate technologies such as intelligent software agents, visual simulations and other ISE technologies; 2) virtual labs that leverage modeling, simulation and information technologies to create an immersive, highly interactive virtual environment tailored to the needs of researchers and learners; 3) advanced learning modules that incorporate advanced instructional, user interface and intelligent agent technologies; and 4) assessment and continuous improvement of engineering team effectiveness in distributed collaborative environments.
Granatum: a graphical single-cell RNA-Seq analysis pipeline for genomics scientists.
Zhu, Xun; Wolfgruber, Thomas K; Tasato, Austin; Arisdakessian, Cédric; Garmire, David G; Garmire, Lana X
2017-12-05
Single-cell RNA sequencing (scRNA-Seq) is an increasingly popular platform to study heterogeneity at the single-cell level. Computational methods to process scRNA-Seq data are not very accessible to bench scientists as they require a significant amount of bioinformatic skills. We have developed Granatum, a web-based scRNA-Seq analysis pipeline to make analysis more broadly accessible to researchers. Without a single line of programming code, users can click through the pipeline, setting parameters and visualizing results via the interactive graphical interface. Granatum conveniently walks users through various steps of scRNA-Seq analysis. It has a comprehensive list of modules, including plate merging and batch-effect removal, outlier-sample removal, gene-expression normalization, imputation, gene filtering, cell clustering, differential gene expression analysis, pathway/ontology enrichment analysis, protein network interaction visualization, and pseudo-time cell series construction. Granatum enables broad adoption of scRNA-Seq technology by empowering bench scientists with an easy-to-use graphical interface for scRNA-Seq data analysis. The package is freely available for research use at http://garmiregroup.org/granatum/app.
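For comparison, a few of the pipeline steps listed above (filtering, normalization, clustering, differential expression) can be scripted with the scanpy library; this is only an illustrative code equivalent, not Granatum's implementation, and the input file name is a placeholder.

```python
# Illustrative scanpy equivalent of several listed pipeline steps; not
# Granatum code. Clustering with sc.tl.leiden also requires leidenalg.
import scanpy as sc

adata = sc.read_csv("expression_matrix.csv")   # cells x genes, placeholder file
sc.pp.filter_cells(adata, min_genes=200)       # drop low-quality/outlier cells
sc.pp.normalize_total(adata, target_sum=1e4)   # gene-expression normalization
sc.pp.log1p(adata)
sc.pp.highly_variable_genes(adata, n_top_genes=2000)
sc.pp.pca(adata)
sc.pp.neighbors(adata)
sc.tl.leiden(adata)                            # cell clustering
sc.tl.rank_genes_groups(adata, "leiden")       # per-cluster differential expression
sc.tl.umap(adata)
sc.pl.umap(adata, color="leiden")              # 2-D visualization of clusters
```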
New software for 3D fracture network analysis and visualization
NASA Astrophysics Data System (ADS)
Song, J.; Noh, Y.; Choi, Y.; Um, J.; Hwang, S.
2013-12-01
This study presents new software to perform analysis and visualization of fracture network systems in 3D. The software modules for analysis and visualization, such as BOUNDARY, DISK3D, FNTWK3D, CSECT and BDM, were developed using Microsoft Visual Basic.NET and the Visualization Toolkit (VTK) open-source library. Two case studies revealed that these modules play roles in, respectively, construction of the analysis domain, visualization of fracture geometry in 3D, calculation of equivalent pipes, production of cross-section maps, and management of borehole data. The developed software for analysis and visualization of 3D fractured rock masses can be used to tackle geomechanical problems related to the strength, deformability, and hydraulic behavior of fractured rock masses.
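A minimal sketch of rendering disk-shaped fractures with VTK's Python bindings, in the spirit of the disk-based fracture visualization described above; the radii, positions, and resolution are arbitrary illustrative values, and the snippet is not taken from the software itself.

```python
# Render two disk-shaped "fractures" with VTK; values are illustrative only.
import vtk

renderer = vtk.vtkRenderer()
for radius, center in [(2.0, (0.0, 0.0, 0.0)), (1.5, (1.0, 0.5, 0.8))]:
    disk = vtk.vtkDiskSource()
    disk.SetInnerRadius(0.0)
    disk.SetOuterRadius(radius)
    disk.SetCircumferentialResolution(60)
    mapper = vtk.vtkPolyDataMapper()
    mapper.SetInputConnection(disk.GetOutputPort())
    actor = vtk.vtkActor()
    actor.SetMapper(mapper)
    actor.SetPosition(*center)
    renderer.AddActor(actor)

window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)
interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(window)
window.Render()
interactor.Start()  # opens an interactive 3D view
```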
Spapé, M M; Harjunen, Ville; Ravaja, N
2017-03-01
Being touched is known to affect emotion, and even a casual touch can elicit positive feelings and affinity. Psychophysiological studies have recently shown that tactile primes affect visual evoked potentials to emotional stimuli, suggesting altered affective stimulus processing. As, however, these studies approached emotion from a purely unidimensional perspective, it remains unclear whether touch biases emotional evaluation or a more general feature such as salience. Here, we investigated how simple tactile primes modulate event related potentials (ERPs), facial EMG and cardiac response to pictures of facial expressions of emotion. All measures replicated known effects of emotional face processing: Disgust and fear modulated early ERPs, anger increased the cardiac orienting response, and expressions elicited emotion-congruent facial EMG activity. Tactile primes also affected these measures, but priming never interacted with the type of emotional expression. Thus, touch may additively affect general stimulus processing, but it does not bias or modulate immediate affective evaluation. Copyright © 2017. Published by Elsevier B.V.
Störmer, Viola S; Alvarez, George A; Cavanagh, Patrick
2014-08-27
It is much easier to divide attention across the left and right visual hemifields than within the same visual hemifield. Here we investigate whether this benefit of dividing attention across separate visual fields is evident at early cortical processing stages. We measured the steady-state visual evoked potential, an oscillatory response of the visual cortex elicited by flickering stimuli, of moving targets and distractors while human observers performed a tracking task. The amplitude of responses at the target frequencies was larger than that of the distractor frequencies when participants tracked two targets in separate hemifields, indicating that attention can modulate early visual processing when it is divided across hemifields. However, these attentional modulations disappeared when both targets were tracked within the same hemifield. These effects were not due to differences in task performance, because accuracy was matched across the tracking conditions by adjusting target speed (with control conditions ruling out effects due to speed alone). To investigate later processing stages, we examined the P3 component over central-parietal scalp sites that was elicited by the test probe at the end of the trial. The P3 amplitude was larger for probes on targets than on distractors, regardless of whether attention was divided across or within a hemifield, indicating that these higher-level processes were not constrained by visual hemifield. These results suggest that modulating early processing stages enables more efficient target tracking, and that within-hemifield competition limits the ability to modulate multiple target representations within the hemifield maps of the early visual cortex. Copyright © 2014 the authors 0270-6474/14/3311526-08$15.00/0.
Edmiston, E. Kale; McHugo, Maureen; Dukic, Mildred S.; Smith, Stephen D.; Abou-Khalil, Bassel; Eggers, Erica
2013-01-01
Emotionally arousing pictures induce increased activation of visual pathways relative to emotionally neutral images. A predominant model for the preferential processing and attention to emotional stimuli posits that the amygdala modulates sensory pathways through its projections to visual cortices. However, recent behavioral studies have found intact perceptual facilitation of emotional stimuli in individuals with amygdala damage. To determine the importance of the amygdala to modulations in visual processing, we used functional magnetic resonance imaging to examine visual cortical blood oxygenation level-dependent (BOLD) signal in response to emotionally salient and neutral images in a sample of human patients with unilateral medial temporal lobe resection that included the amygdala. Adults with right (n = 13) or left (n = 5) medial temporal lobe resections were compared with demographically matched healthy control participants (n = 16). In the control participants, both aversive and erotic images produced robust BOLD signal increases in bilateral primary and secondary visual cortices relative to neutral images. Similarly, all patients with amygdala resections showed enhanced visual cortical activations to erotic images both ipsilateral and contralateral to the lesion site. All but one of the amygdala resection patients showed similar enhancements to aversive stimuli and there were no significant group differences in visual cortex BOLD responses in patients compared with controls for either aversive or erotic images. Our results indicate that neither the right nor left amygdala is necessary for the heightened visual cortex BOLD responses observed during emotional stimulus presentation. These data challenge an amygdalo-centric model of emotional modulation and suggest that non-amygdalar processes contribute to the emotional modulation of sensory pathways. PMID:23825407
Social context and perceived agency affects empathy for pain: an event-related fMRI investigation.
Akitsuki, Yuko; Decety, Jean
2009-08-15
Studying the impact of social context on the perception of pain in others is important for understanding the role of intentionality in interpersonal sensitivity, empathy, and implicit moral reasoning. Here we used event-related fMRI with pain and social context (i.e., the number of individuals in the stimuli) as the two factors to investigate how different social contexts and the resulting perceived agency modulate the neural response to the perception of pain in others. Twenty-six healthy participants were scanned while presented with short dynamic visual stimuli depicting painful situations caused either accidentally or intentionally by another individual. The main effect of perception of pain was associated with signal increase in the aMCC, insula, somatosensory cortex, SMA and PAG. Importantly, perceiving the presence of another individual led to specific hemodynamic increases in regions involved in representing social interaction and emotion regulation, including the temporoparietal junction, medial prefrontal cortex, inferior frontal gyrus, and orbitofrontal cortex. Furthermore, the functional connectivity pattern between the left amygdala and other brain areas was modulated by the perceived agency. Our study demonstrates that the social context in which pain occurs modulates the brain response to others' pain. This modulation may reflect successful adaptation to potential danger present in a social interaction. Our results contribute to a better understanding of the neural mechanisms underpinning implicit moral reasoning concerning actions that can harm other people.
Interactions between target location and reward size modulate the rate of microsaccades in monkeys
Tokiyama, Stefanie; Lisberger, Stephen G.
2015-01-01
We have studied how rewards modulate the occurrence of microsaccades by manipulating the size of an expected reward and the location of the cue that sets the expectations for future reward. We found an interaction between the size of the reward and the location of the cue. When monkeys fixated on a cue that signaled the size of future reward, the frequency of microsaccades was higher if the monkey expected a large vs. a small reward. When the cue was presented at a site in the visual field that was remote from the position of fixation, reward size had the opposite effect: the frequency of microsaccades was lower when the monkey was expecting a large reward. The strength of pursuit initiation also was affected by reward size and by the presence of microsaccades just before the onset of target motion. The gain of pursuit initiation increased with reward size and decreased when microsaccades occurred just before or after the onset of target motion. The effect of the reward size on pursuit initiation was much larger than any indirect effects reward might cause through modulation of the rate of microsaccades. We found only a weak relationship between microsaccade direction and the location of the exogenous cue relative to fixation position, even in experiments where the location of the cue indicated the direction of target motion. Our results indicate that the expectation of reward is a powerful modulator of the occurrence of microsaccades, perhaps through attentional mechanisms. PMID:26311180
Multiple Learning Strategies Project. Medical Assistant. Visually Impaired. [Vol. 1].
ERIC Educational Resources Information Center
Varney, Beverly; And Others
This instructional package, one of two designed for visually impaired students, focuses on the vocational area of medical assistant. Contained in this document are twelve learning modules organized into five units: language; receptioning; asepsis; supplies and equipment maintenance; and diagnostic tests. Each module, printed in block type,…
Using a Self-Administered Visual Basic Software Tool To Teach Psychological Concepts.
ERIC Educational Resources Information Center
Strang, Harold R.; Sullivan, Amie K.; Schoeny, Zahrl G.
2002-01-01
Introduces LearningLinks, a Visual Basic software tool that allows teachers to create individualized learning modules that use constructivist and behavioral learning principles. Describes field testing of undergraduates at the University of Virginia that tested a module designed to improve understanding of the psychological concepts of…
Machiela, Mitchell J; Chanock, Stephen J
2015-11-01
Assessing linkage disequilibrium (LD) across ancestral populations is a powerful approach for investigating population-specific genetic structure as well as functionally mapping regions of disease susceptibility. Here, we present LDlink, a web-based collection of bioinformatic modules that query single nucleotide polymorphisms (SNPs) in population groups of interest to generate haplotype tables and interactive plots. Modules are designed with an emphasis on ease of use, query flexibility, and interactive visualization of results. Phase 3 haplotype data from the 1000 Genomes Project are referenced for calculating pairwise metrics of LD, searching for proxies in high LD, and enumerating all observed haplotypes. LDlink is tailored for investigators interested in mapping common and uncommon disease susceptibility loci by focusing on output linking correlated alleles and highlighting putative functional variants. LDlink is a free and publicly available web tool which can be accessed at http://analysistools.nci.nih.gov/LDlink/. mitchell.machiela@nih.gov. Published by Oxford University Press 2015. This work is written by US Government employees and is in the public domain in the US.
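The pairwise LD metrics that LDlink reports can be computed directly from phased haplotypes. The following is a small plain-Python sketch of those calculations, not LDlink's own code; the example haplotypes are made up.

```python
# A minimal sketch of the pairwise LD statistics (D, D', r^2) that LDlink reports,
# computed from phased haplotypes; the toy haplotypes at the bottom are made up.
from collections import Counter

def pairwise_ld(haplotypes):
    """haplotypes: list of (allele_at_snp1, allele_at_snp2) pairs, coded 0/1."""
    n = len(haplotypes)
    counts = Counter(haplotypes)
    p_ab = counts[(1, 1)] / n                      # frequency of the 1-1 haplotype
    p_a = sum(h[0] for h in haplotypes) / n
    p_b = sum(h[1] for h in haplotypes) / n

    d = p_ab - p_a * p_b
    if d >= 0:
        d_max = min(p_a * (1 - p_b), (1 - p_a) * p_b)
    else:
        d_max = min(p_a * p_b, (1 - p_a) * (1 - p_b))
    d_prime = d / d_max if d_max > 0 else 0.0
    r2 = d ** 2 / (p_a * (1 - p_a) * p_b * (1 - p_b))
    return d, d_prime, r2

# Toy phased data: each tuple is one chromosome's alleles at two SNPs.
haps = [(1, 1), (1, 1), (0, 0), (0, 0), (1, 0), (0, 1), (1, 1), (0, 0)]
print(pairwise_ld(haps))
```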
Frequency-dependent tACS modulation of BOLD signal during rhythmic visual stimulation.
Chai, Yuhui; Sheng, Jingwei; Bandettini, Peter A; Gao, Jia-Hong
2018-05-01
Transcranial alternating current stimulation (tACS) has emerged as a promising tool for modulating cortical oscillations. In previous electroencephalogram (EEG) studies, tACS has been found to modulate brain oscillatory activity in a frequency-specific manner. However, the spatial distribution and hemodynamic response for this modulation remains poorly understood. Functional magnetic resonance imaging (fMRI) has the advantage of measuring neuronal activity in regions not only below the tACS electrodes but also across the whole brain with high spatial resolution. Here, we measured fMRI signal while applying tACS to modulate rhythmic visual activity. During fMRI acquisition, tACS at different frequencies (4, 8, 16, and 32 Hz) was applied along with visual flicker stimulation at 8 and 16 Hz. We analyzed the blood-oxygen-level-dependent (BOLD) signal difference between tACS-ON vs tACS-OFF, and different frequency combinations (e.g., 4 Hz tACS, 8 Hz flicker vs 8 Hz tACS, 8 Hz flicker). We observed significant tACS modulation effects on BOLD responses when the tACS frequency matched the visual flicker frequency or the second harmonic frequency. The main effects were predominantly seen in regions that were activated by the visual task and targeted by the tACS current distribution. These findings bridge different scientific domains of tACS research and demonstrate that fMRI could localize the tACS effect on stimulus-induced brain rhythms, which could lead to a new approach for understanding the high-level cognitive process shaped by the ongoing oscillatory signal. © 2018 Wiley Periodicals, Inc.
Prestimulus oscillatory activity in the alpha band predicts visual discrimination ability.
van Dijk, Hanneke; Schoffelen, Jan-Mathijs; Oostenveld, Robert; Jensen, Ole
2008-02-20
Although the resting and baseline states of the human electroencephalogram and magnetoencephalogram (MEG) are dominated by oscillations in the alpha band (approximately 10 Hz), the functional role of these oscillations remains unclear. In this study we used MEG to investigate how spontaneous oscillations in humans, present before visual stimuli, modulate visual perception. Subjects had to report if there was a subtle difference in gray levels between two superimposed discs. We then compared the prestimulus brain activity for correctly (hits) versus incorrectly (misses) identified stimuli. We found that visual discrimination ability decreased with an increase in prestimulus alpha power. Given that reaction times did not vary systematically with prestimulus alpha power, changes in vigilance are not likely to explain the change in discrimination ability. Source reconstruction using spatial filters allowed us to identify the brain areas accounting for this effect. The dominant sources modulating visual perception were localized around the parieto-occipital sulcus. We suggest that the parieto-occipital alpha power reflects functional inhibition imposed by higher level areas, which serves to modulate the gain of the visual stream.
Visual sensitivity to spatially sampled modulation in human observers
NASA Technical Reports Server (NTRS)
Mulligan, Jeffrey B.; Macleod, Donald I. A.
1991-01-01
Thresholds were measured for detecting spatial luminance modulation in regular lattices of visually discrete dots. Thresholds for modulation of a lattice are generally higher than the corresponding threshold for modulation of a continuous field, and the size of the threshold elevation, which depends on the spacing of the lattice elements, can be as large as one log unit. The largest threshold elevations are seen when the sample spacing is 12 min arc or greater. Theories based on response compression cannot explain the further observation that the threshold elevations due to spatial sampling are also dependent on modulation frequency: the greatest elevations occur with higher modulation frequencies. The idea that this is due to masking of the modulation frequency by the spatial frequencies in the sampling lattice is considered.
Pavan, Andrea; Marotti, Rosilari Bellacosa; Mather, George
2013-05-31
Motion and form encoding are closely coupled in the visual system. A number of physiological studies have shown that neurons in the striate and extrastriate cortex (e.g., V1 and MT) are selective for motion direction parallel to their preferred orientation, but some neurons also respond to motion orthogonal to their preferred spatial orientation. Recent psychophysical research (Mather, Pavan, Bellacosa, & Casco, 2012) has demonstrated that the strength of adaptation to two fields of transparently moving dots is modulated by simultaneously presented orientation signals, suggesting that the interaction occurs at the level of motion integrating receptive fields in the extrastriate cortex. In the present psychophysical study, we investigated whether motion-form interactions take place at a higher level of neural processing where optic flow components are extracted. In Experiment 1, we measured the duration of the motion aftereffect (MAE) generated by contracting or expanding dot fields in the presence of either radial (parallel) or concentric (orthogonal) counterphase pedestal gratings. To tap the stage at which optic flow is extracted, we measured the duration of the phantom MAE (Weisstein, Maguire, & Berbaum, 1977) in which we adapted and tested different parts of the visual field, with orientation signals presented either in the adapting (Experiment 2) or nonadapting (Experiments 3 and 4) sectors. Overall, the results showed that motion adaptation is suppressed most by orientation signals orthogonal to optic flow direction, suggesting that motion-form interactions also take place at the global motion level where optic flow is extracted.
Fujisawa, Junya; Touyama, Hideaki; Hirose, Michitaka
2008-01-01
In this paper, we focus on alpha band modulation during visual spatial attention in the absence of visual stimuli. Visual spatial attention has been expected to provide a new channel for non-invasive independent brain computer interfaces (BCI), but little work has been done on this new interfacing method. The flickering stimuli used in previous work reduce independence and are difficult to use in practice. We therefore investigated whether visual spatial attention could be detected without such stimuli. Further, common spatial patterns (CSP) were applied for the first time to brain states during visual spatial attention. The performance evaluation was based on three brain states: attention to the left, right and center directions. Thirty-channel scalp electroencephalographic (EEG) signals over occipital cortex were recorded for five subjects. Without CSP, the analyses achieved an average classification accuracy of 66.44% (range 55.42 to 72.27%) in discriminating the left and right attention classes. With CSP, the average classification accuracy was 75.39% (range 63.75 to 86.13%). These results suggest that CSP is useful in the context of visual spatial attention, and that alpha band modulation during visual spatial attention without flickering stimuli could provide a new channel for independent BCI, alongside motor imagery.
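For readers unfamiliar with CSP, the following is a compact numpy/scipy sketch of the standard computation (generalized eigendecomposition of class covariance matrices followed by log-variance features), not the authors' exact pipeline; the data shapes and random signals are assumptions.

```python
# A compact sketch of the common spatial patterns (CSP) computation used to
# discriminate two EEG classes (e.g., attend-left vs attend-right), via a
# generalized eigendecomposition of the class-average covariance matrices.
# Array shapes (trials x channels x samples) and the random data are assumed.
import numpy as np
from scipy.linalg import eigh

def csp_filters(X1, X2, n_components=4):
    """X1, X2: arrays of shape (n_trials, n_channels, n_samples) for each class."""
    def avg_cov(X):
        covs = [np.cov(trial) for trial in X]          # channel-by-channel covariance
        return np.mean(covs, axis=0)

    C1, C2 = avg_cov(X1), avg_cov(X2)
    # Solve C1 w = lambda (C1 + C2) w; eigenvectors sorted by eigenvalue.
    vals, vecs = eigh(C1, C1 + C2)
    order = np.argsort(vals)
    # Keep filters from both ends of the spectrum (most discriminative).
    picks = np.r_[order[:n_components // 2], order[-n_components // 2:]]
    return vecs[:, picks].T                            # (n_components, n_channels)

def log_var_features(X, W):
    """Project trials through CSP filters and take normalized log-variance as features."""
    Z = np.einsum("fc,tcs->tfs", W, X)
    var = Z.var(axis=2)
    return np.log(var / var.sum(axis=1, keepdims=True))

rng = np.random.default_rng(0)
X_left = rng.standard_normal((40, 30, 256))            # 40 trials, 30 channels, 256 samples
X_right = rng.standard_normal((40, 30, 256))
W = csp_filters(X_left, X_right)
features = log_var_features(np.concatenate([X_left, X_right]), W)
```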
Jacob, Jane; Jacobs, Christianne; Silvanto, Juha
2015-01-01
What is the role of top-down attentional modulation in consciously accessing working memory (WM) content? In influential WM models, information can exist in different states, determined by allocation of attention; placing the original memory representation in the center of focused attention gives rise to conscious access. Here we discuss various lines of evidence indicating that such attentional modulation is not sufficient for memory content to be phenomenally experienced. We propose that, in addition to attentional modulation of the memory representation, another type of top-down modulation is required: suppression of all incoming visual information, via inhibition of early visual cortex. In this view, there are three distinct memory levels, as a function of the top-down control associated with them: (1) Nonattended, nonconscious associated with no attentional modulation; (2) attended, phenomenally nonconscious memory, associated with attentional enhancement of the actual memory trace; (3) attended, phenomenally conscious memory content, associated with enhancement of the memory trace and top-down suppression of all incoming visual input.
The role of visual representations during the lexical access of spoken words
Lewis, Gwyneth; Poeppel, David
2015-01-01
Do visual representations contribute to spoken word recognition? We examine, using MEG, the effects of sublexical and lexical variables at superior temporal (ST) areas and the posterior middle temporal gyrus (pMTG) compared with that of word imageability at visual cortices. Embodied accounts predict early modulation of visual areas by imageability - concurrently with or prior to modulation of pMTG by lexical variables. Participants responded to speech stimuli varying continuously in imageability during lexical decision with simultaneous MEG recording. We employed the linguistic variables in a new type of correlational time course analysis to assess trial-by-trial activation in occipital, ST, and pMTG regions of interest (ROIs). The linguistic variables modulated the ROIs during different time windows. Critically, visual regions reflected an imageability effect prior to effects of lexicality on pMTG. This surprising effect supports a view on which sensory aspects of a lexical item are not a consequence of lexical activation. PMID:24814579
Tang, Xiaoyu; Li, Chunlin; Li, Qi; Gao, Yulin; Yang, Weiping; Yang, Jingjing; Ishikawa, Soushirou; Wu, Jinglong
2013-10-11
Utilizing the high temporal resolution of event-related potentials (ERPs), we examined how visual spatial or temporal cues modulated the auditory stimulus processing. The visual spatial cue (VSC) induces orienting of attention to spatial locations; the visual temporal cue (VTC) induces orienting of attention to temporal intervals. Participants were instructed to respond to auditory targets. Behavioral responses to auditory stimuli following VSC were faster and more accurate than those following VTC. VSC and VTC had the same effect on the auditory N1 (150-170 ms after stimulus onset). The mean amplitude of the auditory P1 (90-110 ms) in VSC condition was larger than that in VTC condition, and the mean amplitude of late positivity (300-420 ms) in VTC condition was larger than that in VSC condition. These findings suggest that modulation of auditory stimulus processing by visually induced spatial or temporal orienting of attention were different, but partially overlapping. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
On-chip visual perception of motion: a bio-inspired connectionist model on FPGA.
Torres-Huitzil, César; Girau, Bernard; Castellanos-Sánchez, Claudio
2005-01-01
Visual motion provides useful information for understanding the dynamics of a scene and allows intelligent systems to interact with their environment. Motion computation is usually constrained by real-time requirements that demand the design and implementation of specific hardware architectures. In this paper, the design of a hardware architecture for a bio-inspired neural model for motion estimation is presented. The motion estimation is based on a strongly localized bio-inspired connectionist model with a particular adaptation of spatio-temporal Gabor-like filtering. The architecture consists of three main modules that perform spatial, temporal, and excitatory-inhibitory connectionist processing. The biomimetic architecture is modeled, simulated and validated in VHDL. The synthesis results on a Field Programmable Gate Array (FPGA) device show that real-time performance can be achieved with an affordable silicon area.
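The architecture is implemented in VHDL, but the spatio-temporal Gabor-like filtering stage it realizes in hardware can be sketched in software. The Python example below only illustrates that filtering principle, not the FPGA design itself; filter parameters and the synthetic image sequence are invented.

```python
# A software sketch (not the VHDL architecture itself) of the spatio-temporal
# Gabor-like filtering stage that the spatial and temporal modules implement in
# hardware. Filter parameters and the synthetic drifting-bar sequence are illustrative.
import numpy as np
from scipy.signal import convolve2d, lfilter

def spatial_gabor(size=15, wavelength=6.0, sigma=3.0, theta=0.0):
    """2D Gabor kernel tuned to orientation theta (radians)."""
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    return envelope * carrier

def temporal_kernel(length=7, tau=2.0):
    """Causal exponential temporal filter."""
    t = np.arange(length)
    k = np.exp(-t / tau)
    return k / k.sum()

def spatiotemporal_response(frames, theta=0.0):
    """Apply a spatial Gabor per frame, then filter each pixel's time course."""
    g = spatial_gabor(theta=theta)
    spatial = np.stack([convolve2d(f, g, mode="same") for f in frames])
    k = temporal_kernel()
    return lfilter(k, [1.0], spatial, axis=0)

# Synthetic sequence: a bright bar drifting rightward across 16 frames.
frames = np.zeros((16, 64, 64))
for t in range(16):
    frames[t, :, 10 + 2 * t : 14 + 2 * t] = 1.0
resp = spatiotemporal_response(frames, theta=np.pi / 2)
```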
Chlorophyll derivatives enhance invertebrate red-light and ultraviolet phototaxis.
Degl'Innocenti, Andrea; Rossi, Leonardo; Salvetti, Alessandra; Marino, Attilio; Meloni, Gabriella; Mazzolai, Barbara; Ciofani, Gianni
2017-06-13
Chlorophyll derivatives are known to enhance vision in vertebrates. They are thought to bind visual pigments (i.e., opsin apoproteins bound to retinal chromophores) directly within the retina. Consistent with previous findings in vertebrates, here we show that chlorin e6, a chlorophyll derivative, enhances photophobicity in a flatworm (Dugesia japonica), specifically when exposed to UV radiation (λ = 405 nm) or red light (λ = 660 nm). This is the first report of chlorophyll derivatives acting as modulators of invertebrate phototaxis, and in general the first account demonstrating that they can artificially alter animal response to light at a behavioral level. Our findings show that the interaction between chlorophyll derivatives and opsins concerns the vast majority of bilaterian animals, and also occurs in visual systems based on rhabdomeric (rather than ciliary) opsins.
Normalization regulates competition for visual awareness
Ling, Sam; Blake, Randolph
2012-01-01
Signals in our brain are in a constant state of competition, including those that vie for motor control, sensory dominance and awareness. To shed light on the mechanisms underlying neural competition, we exploit binocular rivalry, a phenomenon that allows us to probe the competitive process that ordinarily transpires outside of our awareness. By measuring psychometric functions under different states of rivalry, we discovered a pattern of gain changes that are consistent with a model of competition in which attention interacts with normalization processes, thereby driving the ebb and flow between states of awareness. Moreover, we reveal that attention plays a crucial role in modulating competition; without attention, rivalry suppression for high-contrast stimuli is negligible. We propose a framework whereby our visual awareness of competing sensory representations is governed by a common neural computation: normalization. PMID:22884335
Estrogen-Cholinergic Interactions: Implications for Cognitive Aging
Newhouse, Paul; Dumas, Julie
2015-01-01
While many studies in humans have investigated the effects of estrogen and hormone therapy on cognition, potential neurobiological correlates of these effects have been less well studied. An important site of action for estrogen in the brain is the cholinergic system. Several decades of research support the critical role of CNS cholinergic systems in cognition in humans, particularly in learning and memory formation and attention. In humans, the cholinergic system has been implicated in many aspects of cognition including the partitioning of attentional resources, working memory, inhibition of irrelevant information, and improved performance on effort-demanding tasks. Studies support the hypothesis that estradiol helps to maintain aspects of attention and verbal and visual memory. Such cognitive domains are exactly those modulated by cholinergic systems and extensive basic and preclinical work over the past several decades has clearly shown that basal forebrain cholinergic systems are dependent on estradiol support for adequate functioning. This paper will review recent human studies from our laboratories and others that have extended preclinical research examining estrogen-cholinergic interactions to humans. Studies examined include estradiol and cholinergic antagonist reversal studies in normal older women, examinations of the neural representations of estrogen-cholinergic interactions using functional brain imaging, and studies of the ability of selective estrogen receptor modulators such as tamoxifen to interact with cholinergic-mediated cognitive performance. We also discuss the implications of these studies for the underlying hypotheses of cholinergic-estrogen interactions and cognitive aging, and indications for prophylactic and therapeutic potential that may exploit these effects. PMID:26187712
Marini, Francesco; Tagliabue, Chiara F; Sposito, Ambra V; Hernandez-Arieta, Alejandro; Brugger, Peter; Estévez, Natalia; Maravita, Angelo
2014-01-01
The way in which humans represent their own bodies is critical in guiding their interactions with the environment. To achieve successful body-space interactions, the body representation is strictly connected with that of the space immediately surrounding it through efficient visuo-tactile crossmodal integration. Such a body-space integrated representation is not fixed, but can be dynamically modulated by the use of external tools. Our study aims to explore the effect of using a complex tool, namely a functional prosthesis, on crossmodal visuo-tactile spatial interactions in healthy participants. By using the crossmodal visuo-tactile congruency paradigm, we found that prolonged training with a mechanical hand capable of distal hand movements and providing sensory feedback induces a pattern of interference, which is not observed after a brief training, between visual stimuli close to the prosthesis and touches on the body. These results suggest that after extensive, but not short, training the functional prosthesis acquires a visuo-tactile crossmodal representation akin to real limbs. This finding adds to previous evidence for the embodiment of functional prostheses in amputees, and shows that their use may also improve the crossmodal combination of somatosensory feedback delivered by the prosthesis with visual stimuli in the space around it, thus effectively augmenting the patients' visuomotor abilities. © 2013 Published by Elsevier Ltd.
Ross, M; Lanyon, L J; Viswanathan, J; Manoach, D S; Barton, J J S
2011-11-24
Monkey studies report greater activity in the lateral intraparietal area and more efficient saccades when targets coincide with the location of prior reward cues, even when cue location does not indicate which responses will be rewarded. This suggests that reward can modulate spatial attention and visual selection independent of the "action value" of the motor response. Our goal was first to determine whether reward modulated visual selection similarly in humans, and next, to discover whether reward and penalty differed in effect, if cue effects were greater for cognitively demanding antisaccades, and if financial consequences that were contingent on stimulus location had spatially selective effects. We found that motivational cues reduced all latencies, more for reward than penalty. There was an "inhibition-of-return"-like effect at the location of the cue, but unlike the results in monkeys, cue valence did not modify this effect in prosaccades, and the inhibition-of-return effect was slightly increased rather than decreased in antisaccades. When financial consequences were contingent on target location, locations without reward or penalty consequences lost the benefits seen in noncontingent trials, whereas locations with consequences maintained their gains. We conclude that unlike monkeys, humans show reward effects not on visual selection but on the value of actions. The human saccadic system has both the capacity to enhance responses to multiple locations simultaneously, and the flexibility to focus motivational enhancement only on locations with financial consequences. Reward is more effective than penalty, and both interact with the additional attentional demands of the antisaccade task. Copyright © 2011 IBRO. Published by Elsevier Ltd. All rights reserved.
Perceptual grouping determines haptic contextual modulation.
Overvliet, K E; Sayim, B
2016-09-01
Since the early phenomenological demonstrations of Gestalt principles, one of the major challenges of Gestalt psychology has been to quantify these principles. Here, we show that contextual modulation, i.e. the influence of context on target perception, can be used as a tool to quantify perceptual grouping in the haptic domain, similar to the visual domain. We investigated the influence of target-flanker grouping on performance in haptic vernier offset discrimination. We hypothesized that when, despite the apparent differences between vision and haptics, similar grouping principles are operational, a similar pattern of flanker interference would be observed in the haptic as in the visual domain. Participants discriminated the offset of a haptic vernier. The vernier was flanked by different flanker configurations: no flankers, single flanking lines, 10 flanking lines, rectangles and single perpendicular lines, varying the degree to which the vernier grouped with the flankers. Additionally, we used two different flanker widths (same width as and narrower than the target), again to vary target-flanker grouping. Our results show a clear effect of flankers: performance was much better when the vernier was presented alone compared to when it was presented with flankers. In the majority of flanker configurations, grouping between the target and the flankers determined the strength of interference, similar to the visual domain. However, in the same width rectangular flanker condition we found aberrant results. We discuss the results of our study in light of similarities and differences between vision and haptics and the interaction between different grouping principles. We conclude that in haptics, similar organization principles apply as in visual perception and argue that grouping and Gestalt are key organization principles not only of vision, but of the perceptual system in general. Copyright © 2015 Elsevier Ltd. All rights reserved.
Electrophysiological evidence for attentional guidance by the contents of working memory.
Kumar, Sanjay; Soto, David; Humphreys, Glyn W
2009-07-01
The deployment of visual attention can be strongly modulated by stimuli matching the contents of working memory (WM), even when WM contents are detrimental to performance and salient bottom-up cues define the critical target [D. Soto et al. (2006)Vision Research, 46, 1010-1018]. Here we investigated the electrophysiological correlates of this early guidance of attention by WM in humans. Observers were presented with a prime to either identify or hold in memory. Subsequently, they had to search for a target line amongst different distractor lines. Each line was embedded within one of four objects and one of the distractor objects could match the stimulus held in WM. Behavioural data showed that performance was more strongly affected by the prime when it was held in memory than when it was merely identified. An electrophysiological measure of the efficiency of target selection (the N2pc) was also affected by the match between the item in WM and the location of the target in the search task. The N2pc was enhanced when the target fell in the same visual field as the re-presented (invalid) prime, compared with when the prime did not reappear in the search display (on neutral trials) and when the prime was contralateral to the target. Merely identifying the prime produced no effect on the N2pc component. The evidence suggests that WM modulates competitive interactions between the items in the visual field to determine the efficiency of target selection.
NASA Astrophysics Data System (ADS)
Kassin, A.; Cody, R. P.; Barba, M.; Gaylord, A. G.; Manley, W. F.; Score, R.; Escarzaga, S. M.; Tweedie, C. E.
2016-12-01
The Arctic Research Mapping Application (ARMAP; http://armap.org/) is a suite of online applications and data services that support Arctic science by providing project tracking information (who's doing what, when and where in the region) for United States Government funded projects. In collaboration with 17 research agencies, project locations are displayed in a visually enhanced web mapping application. Key information about each project is presented along with links to web pages that provide additional information, including links to data where possible. The latest ARMAP iteration has i) reworked the search user interface (UI) to enable multiple filters to be applied in user-driven queries and ii) implemented the ArcGIS JavaScript API 4.0 to allow deployment of 3D maps directly in a user's web browser and enhanced customization of popups. Module additions include i) a dashboard UI powered by a back-end Apache SOLR engine to visualize data in intuitive and interactive charts; and ii) a printing module that allows users to customize maps and export them to different formats (PDF, PPT, GIF and JPG). New reference layers and an updated ship tracks layer have also been added. These improvements have been made to improve discoverability, enhance logistics coordination, identify geographic gaps in research/observation effort, and foster enhanced collaboration among the research community. Additionally, ARMAP can be used to demonstrate past, present, and future research effort supported by the U.S. Government.
The effects of visual search efficiency on object-based attention
Rosen, Maya; Cutrone, Elizabeth; Behrmann, Marlene
2017-01-01
The attentional prioritization hypothesis of object-based attention (Shomstein & Yantis in Perception & Psychophysics, 64, 41–51, 2002) suggests a two-stage selection process comprising an automatic spatial gradient and flexible strategic (prioritization) selection. The combined attentional priorities of these two stages of object-based selection determine the order in which participants will search the display for the presence of a target. The strategic process has often been likened to a prioritized visual search. By modifying the double-rectangle cueing paradigm (Egly, Driver, & Rafal in Journal of Experimental Psychology: General, 123, 161–177, 1994) and placing it in the context of a larger-scale visual search, we examined how the prioritization search is affected by search efficiency. By probing both targets located on the cued object and targets external to the cued object, we found that the attentional priority surrounding a selected object is strongly modulated by search mode. However, the ordering of the prioritization search is unaffected by search mode. The data also provide evidence that standard spatial visual search and object-based prioritization search may rely on distinct mechanisms. These results provide insight into the interactions between the mode of visual search and object-based selection, and help define the modulatory consequences of search efficiency for object-based attention. PMID:25832192
Walking modulates speed sensitivity in Drosophila motion vision.
Chiappe, M Eugenia; Seelig, Johannes D; Reiser, Michael B; Jayaraman, Vivek
2010-08-24
Changes in behavioral state modify neural activity in many systems. In some vertebrates such modulation has been observed and interpreted in the context of attention and sensorimotor coordinate transformations. Here we report state-dependent activity modulations during walking in a visual-motor pathway of Drosophila. We used two-photon imaging to monitor intracellular calcium activity in motion-sensitive lobula plate tangential cells (LPTCs) in head-fixed Drosophila walking on an air-supported ball. Cells of the horizontal system (HS)--a subgroup of LPTCs--showed stronger calcium transients in response to visual motion when flies were walking rather than resting. The amplified responses were also correlated with walking speed. Moreover, HS neurons showed a relatively higher gain in response strength at higher temporal frequencies, and their optimum temporal frequency was shifted toward higher motion speeds. Walking-dependent modulation of HS neurons in the Drosophila visual system may constitute a mechanism to facilitate processing of higher image speeds in behavioral contexts where these speeds of visual motion are relevant for course stabilization. Copyright 2010 Elsevier Ltd. All rights reserved.
Bellucci, Arianna; Navarria, Laura; Falarti, Elisa; Zaltieri, Michela; Bono, Federica; Collo, Ginetta; Grazia, Maria; Missale, Cristina; Spano, PierFranco
2011-01-01
Alpha-synuclein, the major component of Lewy bodies, is thought to play a central role in the onset of synaptic dysfunctions in Parkinson's disease (PD). In particular, α-synuclein may affect dopaminergic neuron function as it interacts with a key protein modulating dopamine (DA) content at the synapse: the DA transporter (DAT). Indeed, recent evidence from our “in vitro” studies showed that α-synuclein aggregation decreases the expression and membrane trafficking of the DAT as the DAT is retained into α-synuclein-immunopositive inclusions. This notwithstanding, “in vivo” studies on PD animal models investigating whether DAT distribution is altered by the pathological overexpression and aggregation of α-synuclein are missing. By using the proximity ligation assay, a technique which allows the “in situ” visualization of protein-protein interactions, we studied the occurrence of alterations in the distribution of DAT/α-synuclein complexes in the SYN120 transgenic mouse model, showing insoluble α-synuclein aggregates into dopaminergic neurons of the nigrostriatal system, reduced striatal DA levels and an altered distribution of synaptic proteins in the striatum. We found that DAT/α-synuclein complexes were markedly redistributed in the striatum and substantia nigra of SYN120 mice. These alterations were accompanied by a significant increase of DAT striatal levels in transgenic animals when compared to wild type littermates. Our data indicate that, in the early pathogenesis of PD, α-synuclein acts as a fine modulator of the dopaminergic synapse by regulating the subcellular distribution of key proteins such as the DAT. PMID:22163275
Huang, Shun-Ping; Brown, Bruce M.; Craft, Cheryl M.
2010-01-01
In the G-protein coupled receptor (GPCR) phototransduction cascade, visual Arrestin1 (Arr1) binds to and deactivates phosphorylated light-activated opsins, a process that is critical for effective recovery and normal vision. In this report, we discovered a novel synaptic interaction between Arr1 and N-ethylmaleimide sensitive factor (NSF) that is enhanced in a dark environment when mouse photoreceptors are depolarized and the rate of exocytosis is elevated. In the photoreceptor synapse, NSF functions to sustain a higher rate of exocytosis, in addition to the compensatory endocytosis to retrieve and to recycle vesicle membrane and synaptic proteins. Not only does Arr1 bind to the junction of NSF N-terminal and its first ATPase domains in an ATP-dependent manner in vitro, but Arr1 also enhances both NSF ATPase and NSF disassembly activities. In vivo experiments in mouse retinas with the Arr1 gene knocked out, the expression levels of NSF and other synapse-enriched components, including vesicular glutamate transporter 1 (vGLUT1), excitatory amino acid transporter 5 (EAAT5), and vesicle associated membrane protein 2 (VAMP2), are markedly reduced, which lead to a substantial decrease in the exocytosis rate with FM1-43. Thus, we propose that the Arr1 and NSF interaction is important for modulating normal synaptic function in mouse photoreceptors. This study demonstrates a vital alternative function for Arr1 in the photoreceptor synapse and provides key insights into the potential molecular mechanisms of inherited retinal diseases, such as Oguchi disease and Arr1-associated retinitis pigmentosa. PMID:20631167
Wegrzyn, Martin; Herbert, Cornelia; Ethofer, Thomas; Flaisch, Tobias; Kissler, Johanna
2017-11-01
Visually presented emotional words are processed preferentially and effects of emotional content are similar to those of explicit attention deployment in that both amplify visual processing. However, auditory processing of emotional words is less well characterized and interactions between emotional content and task-induced attention have not been fully understood. Here, we investigate auditory processing of emotional words, focussing on how auditory attention to positive and negative words impacts their cerebral processing. A Functional magnetic resonance imaging (fMRI) study manipulating word valence and attention allocation was performed. Participants heard negative, positive and neutral words to which they either listened passively or attended by counting negative or positive words, respectively. Regardless of valence, active processing compared to passive listening increased activity in primary auditory cortex, left intraparietal sulcus, and right superior frontal gyrus (SFG). The attended valence elicited stronger activity in left inferior frontal gyrus (IFG) and left SFG, in line with these regions' role in semantic retrieval and evaluative processing. No evidence for valence-specific attentional modulation in auditory regions or distinct valence-specific regional activations (i.e., negative > positive or positive > negative) was obtained. Thus, allocation of auditory attention to positive and negative words can substantially increase their processing in higher-order language and evaluative brain areas without modulating early stages of auditory processing. Inferior and superior frontal brain structures mediate interactions between emotional content, attention, and working memory when prosodically neutral speech is processed. Copyright © 2017 Elsevier Ltd. All rights reserved.
Pediatric trainees' engagement in the online nutrition curriculum: preliminary results.
Lewis, Kadriye O; Frank, Graeme R; Nagel, Rollin; Turner, Teri L; Ferrell, Cynthia L; Sangvai, Shilpa G; Donthi, Rajesh; Mahan, John D
2014-09-16
The Pediatric Nutrition Series (PNS) consists of ten online, interactive modules and supplementary educational materials that have utilized web-based multimedia technologies to offer nutrition education for pediatric trainees and practicing physicians. The purpose of the study was to evaluate pediatric trainees' engagement, knowledge acquisition, and satisfaction with nutrition modules delivered online in interactive and non-interactive formats. From December 2010 through August 2011, pediatric trainees from seventy-three (73) different U.S. programs completed online nutrition modules designed to develop residents' knowledge of counseling around and management of nutritional issues in children. Data were analyzed using SPSS version 19. Both descriptive and inferential statistics were used in comparing interactive versus non-interactive modules. Pretest/posttest and module evaluations measured knowledge acquisition and satisfaction. Three hundred and twenty-two (322) pediatric trainees completed one or more of six modules for a total of four hundred and forty-two (442) accessions. All trainees who completed at least one module were included in the study. Two-way analyses of variance (ANOVA) with repeated measures (pre/posttest by interactive/non-interactive format) indicated significant knowledge gains from pretest to posttest (p < 0.002 for all six modules). Comparisons between interactive and non-interactive formats for Module 1 (N = 85 interactive, N = 95 non-interactive) and Module 5 (N = 5 interactive, N = 16 non-interactive) indicated a parallel improvement from the pretest to posttest, with the interactive format significantly higher than the non-interactive modules (p < .05). Both qualitative and quantitative data from module evaluations demonstrated that satisfaction with modules was high. However, there were lower ratings for whether learning objectives were met with Module 6 (p < 0.03) and lecturer rating (p < 0.004) compared to Module 1. Qualitative data also showed that completion of the interactive modules resulted in higher resident satisfaction. This initial assessment of the PNS modules shows that technology-mediated delivery of a nutrition curriculum in residency programs has great potential for providing rich learning environments for trainees while maintaining a high level of participant satisfaction.
Cognitive load effects on early visual perceptual processing.
Liu, Ping; Forte, Jason; Sewell, David; Carter, Olivia
2018-05-01
Contrast-based early visual processing has largely been considered to involve autonomous processes that do not need the support of cognitive resources. However, as spatial attention is known to modulate early visual perceptual processing, we explored whether cognitive load could similarly impact contrast-based perception. We used a dual-task paradigm to assess the impact of a concurrent working memory task on the performance of three different early visual tasks. The results from Experiment 1 suggest that cognitive load can modulate early visual processing. No effects of cognitive load were seen in Experiments 2 or 3. Together, the findings provide evidence that under some circumstances cognitive load effects can penetrate the early stages of visual processing and that higher cognitive function and early perceptual processing may not be as independent as was once thought.
An evaluation-guided approach for effective data visualization on tablets
NASA Astrophysics Data System (ADS)
Games, Peter S.; Joshi, Alark
2015-01-01
There is a rising trend of data analysis and visualization tasks being performed on a tablet device. Apps with interactive data visualization capabilities are available for a wide variety of domains. We investigate whether users grasp how to effectively interpret and interact with visualizations. We conducted a detailed user evaluation to study the abilities of individuals with respect to analyzing data on a tablet through an interactive visualization app. Based upon the results of the user evaluation, we find that most subjects performed well at understanding and interacting with simple visualizations, specifically tables and line charts. A majority of the subjects struggled with identifying interactive widgets, recognizing interactive widgets with overloaded functionality, and understanding visualizations which do not display data for sorted attributes. Based on our study, we identify guidelines for designers and developers of mobile data visualization apps that include recommendations for effective data representation and interaction.
Integrating visualization and interaction research to improve scientific workflows.
Keefe, Daniel F
2010-01-01
Scientific-visualization research is, nearly by necessity, interdisciplinary. In addition to their collaborators in application domains (for example, cell biology), researchers regularly build on close ties with disciplines related to visualization, such as graphics, human-computer interaction, and cognitive science. One of these ties is the connection between visualization and interaction research. This isn't a new direction for scientific visualization (see the "Early Connections" sidebar). However, momentum recently seems to be increasing toward integrating visualization research (for example, effective visual presentation of data) with interaction research (for example, innovative interactive techniques that facilitate manipulating and exploring data). We see evidence of this trend in several places, including the visualization literature and conferences.
Interactive QR code beautification with full background image embedding
NASA Astrophysics Data System (ADS)
Lin, Lijian; Wu, Song; Liu, Sijiang; Jiang, Bo
2017-06-01
QR (Quick Response) code is a kind of two-dimensional barcode that was first developed in the automotive industry. Nowadays, QR codes are widely used in commercial applications such as product promotion, mobile payment, and product information management. Traditional QR codes that comply with the international standard are reliable and fast to decode, but lack the aesthetic appearance needed to convey visual information to customers. In this work, we present a novel interactive method to generate aesthetic QR codes. Given the information to be encoded and an image to be used as the full QR code background, our method accepts interactive user strokes as hints to remove undesired parts of the QR code modules, based on the support of the QR code error correction mechanism and background color thresholds. Compared to previous approaches, our method follows the intention of the QR code designer and thus can achieve more visually pleasing results while keeping high machine readability.
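As a rough illustration of the underlying embedding idea (not the authors' interactive, stroke-guided method), the sketch below renders a QR code at the highest error-correction level and blends a background image into it, darkening pixels under dark modules and lightening them under light ones; the data string, file names, and blending gains are hypothetical.

```python
# A simplified sketch of full-background QR embedding: render the code at high
# error correction, then blend the background image in, darkening it under dark
# modules and lightening it under light ones. This is not the interactive,
# stroke-guided method described above; data string and file names are hypothetical.
import numpy as np
import qrcode
from PIL import Image

qr = qrcode.QRCode(error_correction=qrcode.constants.ERROR_CORRECT_H, border=4)
qr.add_data("https://example.org/product")
qr.make(fit=True)
modules = np.array(qr.get_matrix(), dtype=bool)     # True = dark module (incl. border)

scale = 10                                          # pixels per module
code = np.kron(modules, np.ones((scale, scale), dtype=bool))

bg = Image.open("background.jpg").convert("RGB").resize(code.shape[::-1])
bg = np.array(bg, dtype=float) / 255.0

out = np.where(code[..., None],
               bg * 0.35,                           # darken background under dark modules
               1.0 - (1.0 - bg) * 0.65)             # lighten it under light modules
Image.fromarray((out * 255).astype(np.uint8)).save("aesthetic_qr.png")
```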
Prefrontal cortical regulation of brainwide circuit dynamics and reward-related behavior
Grosenick, Logan; Warden, Melissa R.; Amatya, Debha; Katovich, Kiefer; Mehta, Hershel; Patenaude, Brian; Ramakrishnan, Charu; Kalanithi, Paul; Etkin, Amit; Knutson, Brian; Glover, Gary H.; Deisseroth, Karl
2016-01-01
Motivation for reward drives adaptive behaviors, whereas impairment of reward perception and experience (anhedonia) can contribute to psychiatric diseases, including depression and schizophrenia. We sought to test the hypothesis that the medial prefrontal cortex (mPFC) controls interactions among specific subcortical regions that govern hedonic responses. By using optogenetic functional magnetic resonance imaging to locally manipulate but globally visualize neural activity in rats, we found that dopamine neuron stimulation drives striatal activity, whereas locally increased mPFC excitability reduces this striatal response and inhibits the behavioral drive for dopaminergic stimulation. This chronic mPFC overactivity also stably suppresses natural reward-motivated behaviors and induces specific new brainwide functional interactions, which predict the degree of anhedonia in individuals. These findings describe a mechanism by which mPFC modulates expression of reward-seeking behavior, by regulating the dynamical interactions between specific distant subcortical regions. PMID:26722001
P-MartCancer-Interactive Online Software to Enable Analysis of Shotgun Cancer Proteomic Datasets.
Webb-Robertson, Bobbie-Jo M; Bramer, Lisa M; Jensen, Jeffrey L; Kobold, Markus A; Stratton, Kelly G; White, Amanda M; Rodland, Karin D
2017-11-01
P-MartCancer is an interactive web-based software environment that enables statistical analyses of peptide or protein data, quantitated from mass spectrometry-based global proteomics experiments, without requiring in-depth knowledge of statistical programming. P-MartCancer offers a series of statistical modules associated with quality assessment, peptide and protein statistics, protein quantification, and exploratory data analyses driven by the user via customized workflows and interactive visualization. Currently, P-MartCancer offers access and the capability to analyze multiple cancer proteomic datasets generated through the Clinical Proteomics Tumor Analysis Consortium at the peptide, gene, and protein levels. P-MartCancer is deployed as a web service (https://pmart.labworks.org/cptac.html), alternatively available via Docker Hub (https://hub.docker.com/r/pnnl/pmart-web/). Cancer Res; 77(21); e47-50. ©2017 AACR.
A Bayesian Account of Visual-Vestibular Interactions in the Rod-and-Frame Task.
Alberts, Bart B G T; de Brouwer, Anouk J; Selen, Luc P J; Medendorp, W Pieter
2016-01-01
Panoramic visual cues, as generated by the objects in the environment, provide the brain with important information about gravity direction. To derive an optimal, i.e., Bayesian, estimate of gravity direction, the brain must combine panoramic information with gravity information detected by the vestibular system. Here, we examined the individual sensory contributions to this estimate psychometrically. We asked human subjects to judge the orientation (clockwise or counterclockwise relative to gravity) of a briefly flashed luminous rod, presented within an oriented square frame (rod-in-frame). Vestibular contributions were manipulated by tilting the subject's head, whereas visual contributions were manipulated by changing the viewing distance of the rod and frame. Results show a cyclical modulation of the frame-induced bias in perceived verticality across a 90° range of frame orientations. The magnitude of this bias decreased significantly with larger viewing distance, as if visual reliability was reduced. Biases increased significantly when the head was tilted, as if vestibular reliability was reduced. A Bayesian optimal integration model, with distinct vertical and horizontal panoramic weights, a gain factor to allow for visual reliability changes, and ocular counterroll in response to head tilt, provided a good fit to the data. We conclude that subjects flexibly weigh visual panoramic and vestibular information based on their orientation-dependent reliability, resulting in the observed verticality biases and the associated response variabilities.
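The weighting principle at the core of such a model can be illustrated with a generic reliability-weighted cue-combination sketch. The Python example below omits the paper's frame-orientation tuning, gain factor, and ocular counterroll terms, and all numbers are illustrative.

```python
# A generic sketch of reliability-weighted (Bayesian) cue combination for the
# perceived direction of gravity, illustrating the weighting principle the study
# tests. It omits the paper's frame-orientation tuning and ocular counterroll;
# all numbers are illustrative.
import numpy as np

def combine_cues(mu_visual, sigma_visual, mu_vestibular, sigma_vestibular):
    """Precision-weighted fusion of two Gaussian estimates of head-in-space tilt (deg)."""
    w_vis = sigma_vestibular**2 / (sigma_visual**2 + sigma_vestibular**2)
    mu = w_vis * mu_visual + (1 - w_vis) * mu_vestibular
    sigma = np.sqrt((sigma_visual**2 * sigma_vestibular**2) /
                    (sigma_visual**2 + sigma_vestibular**2))
    return mu, sigma

# Upright head, nearby frame: the visual cue is reliable and biases the estimate strongly.
print(combine_cues(mu_visual=8.0, sigma_visual=3.0, mu_vestibular=0.0, sigma_vestibular=6.0))
# Tilted head (noisier vestibular signal) and distant frame (noisier visual signal):
# both reliabilities drop, and the frame-induced bias and variability change accordingly.
print(combine_cues(mu_visual=8.0, sigma_visual=6.0, mu_vestibular=0.0, sigma_vestibular=9.0))
```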
Gain Modulation as a Mechanism for Coding Depth from Motion Parallax in Macaque Area MT
Kim, HyungGoo R.; Angelaki, Dora E.
2017-01-01
Observer translation produces differential image motion between objects that are located at different distances from the observer's point of fixation [motion parallax (MP)]. However, MP can be ambiguous with respect to depth sign (near vs far), and this ambiguity can be resolved by combining retinal image motion with signals regarding eye movement relative to the scene. We have previously demonstrated that both extra-retinal and visual signals related to smooth eye movements can modulate the responses of neurons in area MT of macaque monkeys, and that these modulations generate neural selectivity for depth sign. However, the neural mechanisms that govern this selectivity have remained unclear. In this study, we analyze responses of MT neurons as a function of both retinal velocity and direction of eye movement, and we show that smooth eye movements modulate MT responses in a systematic, temporally precise, and directionally specific manner to generate depth-sign selectivity. We demonstrate that depth-sign selectivity is primarily generated by multiplicative modulations of the response gain of MT neurons. Through simulations, we further demonstrate that depth can be estimated reasonably well by a linear decoding of a population of MT neurons with response gains that depend on eye velocity. Together, our findings provide the first mechanistic description of how visual cortical neurons signal depth from MP. SIGNIFICANCE STATEMENT Motion parallax is a monocular cue to depth that commonly arises during observer translation. To compute from motion parallax whether an object appears nearer or farther than the point of fixation requires combining retinal image motion with signals related to eye rotation, but the neurobiological mechanisms have remained unclear. This study provides the first mechanistic account of how this interaction takes place in the responses of cortical neurons. Specifically, we show that smooth eye movements modulate the gain of responses of neurons in area MT in a directionally specific manner to generate selectivity for depth sign from motion parallax. We also show, through simulations, that depth could be estimated from a population of such gain-modulated neurons. PMID:28739582
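The proposed mechanism can be caricatured with a toy simulation: a population of velocity-tuned units whose gain depends on eye velocity, read out by a linear decoder. The sketch below is not the paper's model; tuning curves, gain slopes, and noise levels are all invented.

```python
# A toy simulation of the idea described above: model units whose response gain
# depends on eye velocity, and a linear readout of depth from the population.
# This is not the paper's model; tuning curves and parameters are made up.
import numpy as np

rng = np.random.default_rng(1)
n_units, n_trials = 60, 2000

pref_vel = rng.uniform(-10, 10, n_units)            # preferred retinal velocity (deg/s)
gain_slope = rng.uniform(-0.05, 0.05, n_units)      # how eye velocity scales each unit's gain

depth = rng.uniform(-2, 2, n_trials)                # signed depth relative to fixation
eye_vel = rng.choice([-8.0, 8.0], n_trials)         # simulated pursuit eye velocity (deg/s)
retinal_vel = depth * eye_vel                       # parallax: image speed scales with depth

def population_response(retinal_vel, eye_vel):
    drive = np.exp(-(retinal_vel[:, None] - pref_vel[None, :])**2 / (2 * 4.0**2))
    gain = 1.0 + gain_slope[None, :] * eye_vel[:, None]   # multiplicative, eye-velocity-dependent
    rate = 10.0 * gain * drive
    return rate + rng.normal(0, 0.5, rate.shape)           # additive response noise

R = population_response(retinal_vel, eye_vel)

# Linear decoder (least squares) trained to recover signed depth from firing rates.
A = np.c_[R, np.ones(n_trials)]
W, *_ = np.linalg.lstsq(A, depth, rcond=None)
depth_hat = A @ W
sign_accuracy = np.mean(np.sign(depth_hat) == np.sign(depth))
print(f"depth-sign decoding accuracy: {sign_accuracy:.2f}")
```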
Schierholz, Irina; Finke, Mareike; Kral, Andrej; Büchner, Andreas; Rach, Stefan; Lenarz, Thomas; Dengler, Reinhard; Sandmann, Pascale
2017-04-01
There is substantial variability in speech recognition ability across patients with cochlear implants (CIs), auditory brainstem implants (ABIs), and auditory midbrain implants (AMIs). To better understand how this variability is related to central processing differences, the current electroencephalography (EEG) study compared hearing abilities and auditory-cortex activation in patients with electrical stimulation at different sites of the auditory pathway. Three groups of patients with auditory implants (Hannover Medical School; ABI: n = 6; CI: n = 6; AMI: n = 2) performed a speeded response task and a speech recognition test with auditory, visual, and audio-visual stimuli. Behavioral performance and cortical processing of auditory and audio-visual stimuli were compared between groups. ABI and AMI patients showed prolonged response times to auditory and audio-visual stimuli compared with normal-hearing (NH) listeners and CI patients. This was confirmed by prolonged N1 latencies and reduced N1 amplitudes in ABI and AMI patients. However, patients with central auditory implants showed a remarkable gain in performance when visual and auditory input was combined, in both speech and non-speech conditions, which was reflected by a strong visual modulation of auditory-cortex activation in these individuals. In sum, the results suggest that the behavioral improvement for audio-visual conditions in central auditory implant patients is based on enhanced audio-visual interactions in the auditory cortex. These findings may have important implications for the optimization of electrical stimulation and rehabilitation strategies in patients with central auditory prostheses. Hum Brain Mapp 38:2206-2225, 2017. © 2017 Wiley Periodicals, Inc.
Cognitive workload modulation through degraded visual stimuli: a single-trial EEG study
NASA Astrophysics Data System (ADS)
Yu, K.; Prasad, I.; Mir, H.; Thakor, N.; Al-Nashash, H.
2015-08-01
Objective. Our experiments explored the effect of visual stimuli degradation on cognitive workload. Approach. We investigated the subjective assessment, event-related potentials (ERPs) as well as electroencephalogram (EEG) as measures of cognitive workload. Main results. These experiments confirm that degradation of visual stimuli increases cognitive workload as assessed by subjective NASA task load index and confirmed by the observed P300 amplitude attenuation. Furthermore, the single-trial multi-level classification using features extracted from ERPs and EEG is found to be promising. Specifically, the adopted single-trial oscillatory EEG/ERP detection method achieved an average accuracy of 85% for discriminating 4 workload levels. Additionally, we found from the spatial patterns obtained from EEG signals that the frontal parts carry information that can be used for differentiating workload levels. Significance. Our results show that visual stimuli can modulate cognitive workload, and the modulation can be measured by the single trial EEG/ERP detection method.
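A generic sketch of single-trial, multi-level workload classification on synthetic feature vectors standing in for ERP/EEG features; it uses a cross-validated linear discriminant and is not the authors' exact oscillatory EEG/ERP detection method, and the feature counts and effect sizes are made up.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Synthetic stand-in for single-trial features (e.g., ERP amplitudes and
# band-power values per channel); 4 workload levels, 80 trials each.
n_levels, n_trials, n_features = 4, 80, 32
X = np.vstack([rng.normal(loc=level * 0.3, scale=1.0, size=(n_trials, n_features))
               for level in range(n_levels)])
y = np.repeat(np.arange(n_levels), n_trials)

clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, X, y, cv=5)   # 5-fold cross-validated accuracy
print(f"mean 4-class accuracy: {scores.mean():.2f}")
```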
Features and functions of nonlinear spatial integration by retinal ganglion cells.
Gollisch, Tim
2013-11-01
Ganglion cells in the vertebrate retina integrate visual information over their receptive fields. They do so by pooling presynaptic excitatory inputs from typically many bipolar cells, which themselves collect inputs from several photoreceptors. In addition, inhibitory interactions mediated by horizontal cells and amacrine cells modulate the structure of the receptive field. In many models, this spatial integration is assumed to occur in a linear fashion. Yet, it has long been known that spatial integration by retinal ganglion cells also incurs nonlinear phenomena. Moreover, several recent examples have shown that nonlinear spatial integration is tightly connected to specific visual functions performed by different types of retinal ganglion cells. This work discusses these advances in understanding the role of nonlinear spatial integration and reviews recent efforts to quantitatively study the nature and mechanisms underlying spatial nonlinearities. These new insights point towards a critical role of nonlinearities within ganglion cell receptive fields for capturing responses of the cells to natural and behaviorally relevant visual stimuli. In the long run, nonlinear phenomena of spatial integration may also prove important for implementing the actual neural code of retinal neurons when designing visual prostheses for the eye. Copyright © 2012 Elsevier Ltd. All rights reserved.
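A minimal toy model of the linear-versus-nonlinear distinction discussed above: a fine contrast-reversing grating drives rectified subunits (producing a frequency-doubled response at both phases) but cancels completely under purely linear pooling. The subunit layout and half-wave rectification are illustrative assumptions.

```python
import numpy as np

# Fine contrast-reversing grating sampled at 8 "bipolar cell" subunit positions:
# half the subunits see +contrast while the other half see -contrast.
def subunit_drive(phase):
    return phase * np.array([+1, -1] * 4)

def linear_rf(phase):
    return np.sum(subunit_drive(phase))              # linear pooling cancels out

def subunit_rf(phase):
    return np.sum(np.maximum(subunit_drive(phase), 0))   # rectify each subunit first

for phase in (+1.0, -1.0):
    print(f"phase {phase:+.0f}: linear={linear_rf(phase):.1f}, "
          f"nonlinear subunits={subunit_rf(phase):.1f}")
```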
The impact of visual gaze direction on auditory object tracking.
Pomper, Ulrich; Chait, Maria
2017-07-05
Subjective experience suggests that we are able to direct our auditory attention independent of our visual gaze, e.g. when shadowing a nearby conversation at a cocktail party. But what are the consequences at the behavioural and neural level? While numerous studies have investigated both auditory attention and visual gaze independently, little is known about their interaction during selective listening. In the present EEG study, we manipulated visual gaze independently of auditory attention while participants detected targets presented from one of three loudspeakers. We observed increased response times when gaze was directed away from the locus of auditory attention. Further, we found an increase in occipital alpha-band power contralateral to the direction of gaze, indicative of a suppression of distracting input. Finally, this condition also led to stronger central theta-band power, which correlated with the observed effect in response times, indicative of differences in top-down processing. Our data suggest that a misalignment between gaze and auditory attention both reduces behavioural performance and modulates the underlying neural processes. The involvement of central theta-band and occipital alpha-band effects is in line with compensatory neural mechanisms such as increased cognitive control and the suppression of task-irrelevant inputs.
Depth reversals in stereoscopic displays driven by apparent size
NASA Astrophysics Data System (ADS)
Sacher, Gunnar; Hayes, Amy; Thornton, Ian M.; Sereno, Margaret E.; Malony, Allen D.
1998-04-01
In visual scenes, depth information is derived from a variety of monocular and binocular cues. When in conflict, a monocular cue is sometimes able to override the binocular information. We examined the accuracy of relative depth judgments in orthographic, stereoscopic displays and found that perceived relative size can override binocular disparity as a depth cue in a situation where the relative size information is itself generated from disparity information, not from retinal size difference. A size discrimination task confirmed the assumption that disparity information was perceived and used to generate apparent size differences. The tendency for the apparent size cue to override disparity information can be modulated by varying the strength of the apparent size cue. In addition, an analysis of reaction times provides supporting evidence for this novel depth reversal effect. We believe that human perception must be regarded as an important component of stereoscopic applications. Hence, if applications are to be effective and accurate, it is necessary to take into account the richness and complexity of the human visual perceptual system that interacts with them. We discuss implications of this and similar research for human performance in virtual environments, the design of visual presentations for virtual worlds, and the design of visualization tools.
Visual contribution to the multistable perception of speech.
Sato, Marc; Basirat, Anahita; Schwartz, Jean-Luc
2007-11-01
The multistable perception of speech, or verbal transformation effect, refers to perceptual changes experienced while listening to a speech form that is repeated rapidly and continuously. In order to test whether visual information from the speaker's articulatory gestures may modify the emergence and stability of verbal auditory percepts, subjects were instructed to report any perceptual changes during unimodal, audiovisual, and incongruent audiovisual presentations of distinct repeated syllables. In a first experiment, the perceptual stability of reported auditory percepts was significantly modulated by the modality of presentation. In a second experiment, when audiovisual stimuli consisting of a stable audio track dubbed with a video track that alternated between congruent and incongruent stimuli were presented, a strong correlation between the timing of perceptual transitions and the timing of video switches was found. Finally, a third experiment showed that the vocal tract opening onset event provided by the visual input could play the role of a bootstrap mechanism in the search for transformations. Altogether, these results demonstrate the capacity of visual information to control the multistable perception of speech in its phonetic content and temporal course. The verbal transformation effect thus provides a useful experimental paradigm to explore audiovisual interactions in speech perception.
Chen, Yi-Chuan; Spence, Charles
2013-01-01
The time-course of cross-modal semantic interactions between pictures and either naturalistic sounds or spoken words was compared. Participants performed a speeded picture categorization task while hearing a task-irrelevant auditory stimulus presented at various stimulus onset asynchronies (SOAs) with respect to the visual picture. Both naturalistic sounds and spoken words gave rise to cross-modal semantic congruency effects (i.e., facilitation by semantically congruent sounds and inhibition by semantically incongruent sounds, as compared to a baseline noise condition) when the onset of the sound led that of the picture by 240 ms or more. Both naturalistic sounds and spoken words also gave rise to inhibition irrespective of their semantic congruency when presented within 106 ms of the onset of the picture. The peak of this cross-modal inhibitory effect occurred earlier for spoken words than for naturalistic sounds. These results therefore demonstrate that the semantic priming of visual picture categorization by auditory stimuli only occurs when the onset of the sound precedes that of the visual stimulus. The different time-courses observed for naturalistic sounds and spoken words likely reflect the different processing pathways to access the relevant semantic representations.
Prestimulus neural oscillations inhibit visual perception via modulation of response gain.
Chaumon, Maximilien; Busch, Niko A
2014-11-01
The ongoing state of the brain radically affects how it processes sensory information. How does this ongoing brain activity interact with the processing of external stimuli? Spontaneous oscillations in the alpha range are thought to inhibit sensory processing, but little is known about the psychophysical mechanisms of this inhibition. We recorded ongoing brain activity with EEG while human observers performed a visual detection task with stimuli of different contrast intensities. To move beyond qualitative description, we formally compared psychometric functions obtained under different levels of ongoing alpha power and evaluated the inhibitory effect of ongoing alpha oscillations in terms of contrast or response gain models. This procedure opens the way to understanding the actual functional mechanisms by which ongoing brain activity affects visual performance. We found that strong prestimulus occipital alpha oscillations-but not more anterior mu oscillations-reduce performance most strongly for stimuli of the highest intensities tested. This inhibitory effect is best explained by a divisive reduction of response gain. Ongoing occipital alpha oscillations thus reflect changes in the visual system's input/output transformation that are independent of the sensory input to the system. They selectively scale the system's response, rather than change its sensitivity to sensory information.
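One common way to formalize the contrast-gain versus response-gain distinction is the Naka-Rushton contrast-response function; the parameterization below is a standard textbook form, not necessarily the exact model fitted in the study.

```latex
% Baseline contrast-response function:
R(c) = R_{\max}\, \frac{c^{n}}{c^{n} + c_{50}^{n}}
% Response-gain suppression divides R_max (largest effect at high contrast):
R_{\mathrm{rg}}(c) = \frac{R_{\max}}{g}\, \frac{c^{n}}{c^{n} + c_{50}^{n}}, \quad g > 1
% Contrast-gain suppression rescales c50 (largest effect at intermediate contrast):
R_{\mathrm{cg}}(c) = R_{\max}\, \frac{c^{n}}{c^{n} + (g\, c_{50})^{n}}
```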
Interactions between visual working memory representations.
Bae, Gi-Yeul; Luck, Steven J
2017-11-01
We investigated whether the representations of different objects are maintained independently in working memory or interact with each other. Observers were shown two sequentially presented orientations and required to reproduce each orientation after a delay. The sequential presentation minimized perceptual interactions so that we could isolate interactions between memory representations per se. We found that similar orientations were repelled from each other whereas dissimilar orientations were attracted to each other. In addition, when one of the items was given greater attentional priority by means of a cue, the representation of the high-priority item was not influenced very much by the orientation of the low-priority item, but the representation of the low-priority item was strongly influenced by the orientation of the high-priority item. This indicates that attention modulates the interactions between working memory representations. In addition, errors in the reported orientations of the two objects were positively correlated under some conditions, suggesting that representations of distinct objects may become grouped together in memory. Together, these results demonstrate that working-memory representations are not independent but instead interact with each other in a manner that depends on attentional priority.
Development of the updated system of city underground pipelines based on Visual Studio
NASA Astrophysics Data System (ADS)
Zhang, Jianxiong; Zhu, Yun; Li, Xiangdong
2009-10-01
Our city operates an integrated pipeline network management system built on ArcGIS Engine 9.1 as the underlying development platform, with Oracle9i as the base database for data storage. In this system, ArcGIS SDE 9.1 serves as the spatial data engine, and the system itself is a comprehensive management application developed with the Visual Studio visual development tools. Because the system's original pipeline update function suffered from slow updates and occasional data loss, and to ensure that the underground pipeline data can be updated conveniently and frequently in real time while preserving its currency and integrity, we developed and added a new update module to the system. The module provides powerful data update functions, including data input, data output, and rapid bulk updates. The new module is likewise built with the Visual Studio visual development tools and uses Microsoft Access as its base database. Graphics can be edited in AutoCAD, and the database is updated through a link between the graphics and the system. Practice shows that the update module is well compatible with the original system and provides reliable, efficient database updates.
Frequency modulation of neural oscillations according to visual task demands.
Wutz, Andreas; Melcher, David; Samaha, Jason
2018-02-06
Temporal integration in visual perception is thought to occur within cycles of occipital alpha-band (8-12 Hz) oscillations. Successive stimuli may be integrated when they fall within the same alpha cycle and segregated for different alpha cycles. Consequently, the speed of alpha oscillations correlates with the temporal resolution of perception, such that lower alpha frequencies provide longer time windows for perceptual integration and higher alpha frequencies correspond to faster sampling and segregation. Can the brain's rhythmic activity be dynamically controlled to adjust its processing speed according to different visual task demands? We recorded magnetoencephalography (MEG) while participants switched between task instructions for temporal integration and segregation, holding stimuli and task difficulty constant. We found that the peak frequency of alpha oscillations decreased when visual task demands required temporal integration compared with segregation. Alpha frequency was strategically modulated immediately before and during stimulus processing, suggesting a preparatory top-down source of modulation. Its neural generators were located in occipital and inferotemporal cortex. The frequency modulation was specific to alpha oscillations and did not occur in the delta (1-3 Hz), theta (3-7 Hz), beta (15-30 Hz), or gamma (30-50 Hz) frequency range. These results show that alpha frequency is under top-down control to increase or decrease the temporal resolution of visual perception.
Action Intentions Modulate Allocation of Visual Attention: Electrophysiological Evidence
Wykowska, Agnieszka; Schubö, Anna
2012-01-01
In line with the Theory of Event Coding (Hommel et al., 2001), action planning has been shown to affect perceptual processing – an effect that has been attributed to a so-called intentional weighting mechanism (Wykowska et al., 2009; Hommel, 2010). This paper investigates the electrophysiological correlates of action-related modulations of selection mechanisms in visual perception. A paradigm combining a visual search task for size and luminance targets with a movement task (grasping or pointing) was introduced, and the EEG was recorded while participants were performing the tasks. The results showed that the behavioral congruency effects, i.e., better performance in congruent (relative to incongruent) action-perception trials, were reflected by a modulation of the P1 component as well as the N2pc (an ERP marker of spatial attention). These results support the argument that action planning already modulates early perceptual processing and attention mechanisms. PMID:23060841
Szécsi, László; Kacsó, Ágota; Zeck, Günther; Hantz, Péter
2017-01-01
Light stimulation with precise and complex spatial and temporal modulation is demanded by a series of research fields like visual neuroscience, optogenetics, ophthalmology, and visual psychophysics. We developed a user-friendly and flexible stimulus generating framework (GEARS GPU-based Eye And Retina Stimulation Software), which offers access to GPU computing power, and allows interactive modification of stimulus parameters during experiments. Furthermore, it has built-in support for driving external equipment, as well as for synchronization tasks, via USB ports. The use of GEARS does not require elaborate programming skills. The necessary scripting is visually aided by an intuitive interface, while the details of the underlying software and hardware components remain hidden. Internally, the software is a C++/Python hybrid using OpenGL graphics. Computations are performed on the GPU, and are defined in the GLSL shading language. However, all GPU settings, including the GPU shader programs, are automatically generated by GEARS. This is configured through a method encountered in game programming, which allows high flexibility: stimuli are straightforwardly composed using a broad library of basic components. Stimulus rendering is implemented solely in C++, therefore intermediary libraries for interfacing could be omitted. This enables the program to perform computationally demanding tasks like en-masse random number generation or real-time image processing by local and global operations. PMID:29326579
Neural Dynamics of Audiovisual Synchrony and Asynchrony Perception in 6-Month-Old Infants
Kopp, Franziska; Dietrich, Claudia
2013-01-01
Young infants are sensitive to multisensory temporal synchrony relations, but the neural dynamics of temporal interactions between vision and audition in infancy are not well understood. We investigated audiovisual synchrony and asynchrony perception in 6-month-old infants using event-related brain potentials (ERP). In a prior behavioral experiment (n = 45), infants were habituated to an audiovisual synchronous stimulus and tested for recovery of interest by presenting an asynchronous test stimulus in which the visual stream was delayed with respect to the auditory stream by 400 ms. Infants who behaviorally discriminated the change in temporal alignment were included in further analyses. In the EEG experiment (final sample: n = 15), synchronous and asynchronous stimuli (visual delay of 400 ms) were presented in random order. Results show latency shifts in the auditory ERP components N1 and P2 as well as the infant ERP component Nc. Latencies in the asynchronous condition were significantly longer than in the synchronous condition. After video onset but preceding the auditory onset, amplitude modulations propagating from posterior to anterior sites and related to the Pb component of infants’ ERP were observed. Results suggest temporal interactions between the two modalities. Specifically, they point to the significance of anticipatory visual motion for auditory processing, and indicate young infants’ predictive capacities for audiovisual temporal synchrony relations. PMID:23346071
A theta rhythm in macaque visual cortex and its attentional modulation
Spyropoulos, Georgios; Fries, Pascal
2018-01-01
Theta rhythms govern rodent sniffing and whisking, and human language processing. Human psychophysics suggests a role for theta also in visual attention. However, little is known about theta in visual areas and its attentional modulation. We used electrocorticography (ECoG) to record local field potentials (LFPs) simultaneously from areas V1, V2, V4, and TEO of two macaque monkeys performing a selective visual attention task. We found a ≈4-Hz theta rhythm within both the V1–V2 and the V4–TEO region, and theta synchronization between them, with a predominantly feedforward directed influence. ECoG coverage of large parts of these regions revealed a surprising spatial correspondence between theta and visually induced gamma. Furthermore, gamma power was modulated with theta phase. Selective attention to the respective visual stimulus strongly reduced these theta-rhythmic processes, leading to an unusually strong attention effect for V1. Microsaccades (MSs) were partly locked to theta. However, neuronal theta rhythms tended to be even more pronounced for epochs devoid of MSs. Thus, we find an MS-independent theta rhythm specific to visually driven parts of V1–V2, which rhythmically modulates local gamma and entrains V4–TEO, and which is strongly reduced by attention. We propose that the less theta-rhythmic and thereby more continuous processing of the attended stimulus serves the exploitation of this behaviorally most relevant information. The theta-rhythmic and thereby intermittent processing of the unattended stimulus likely reflects the ecologically important exploration of less relevant sources of information. PMID:29848632
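A generic illustration of the kind of phase-amplitude coupling reported above (gamma power modulated by theta phase), computed on a synthetic LFP with the mean-vector-length modulation index (Canolty-style); the frequencies, filter settings, and coupling strength are made up for the demo and do not reproduce the recorded data.

```python
import numpy as np
from scipy.signal import hilbert, butter, sosfiltfilt

fs = 1000
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(2)

# Synthetic LFP: 4-Hz theta whose phase modulates the amplitude of 60-Hz gamma.
theta = np.sin(2 * np.pi * 4 * t)
gamma = (1 + 0.6 * theta) * np.sin(2 * np.pi * 60 * t)
lfp = theta + 0.3 * gamma + 0.2 * rng.standard_normal(t.size)

def bandpass(x, lo, hi):
    sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

theta_phase = np.angle(hilbert(bandpass(lfp, 3, 5)))
gamma_amp = np.abs(hilbert(bandpass(lfp, 50, 70)))

# Mean-vector-length modulation index: larger values indicate a stronger
# coupling of gamma amplitude to theta phase.
mi = np.abs(np.mean(gamma_amp * np.exp(1j * theta_phase))) / np.mean(gamma_amp)
print(f"phase-amplitude modulation index: {mi:.3f}")
```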
Difference in Visual Processing Assessed by Eye Vergence Movements
Solé Puig, Maria; Puigcerver, Laura; Aznar-Casanova, J. Antonio; Supèr, Hans
2013-01-01
Orienting visual attention is closely linked to the oculomotor system. For example, a shift of attention is usually followed by a saccadic eye movement and can be revealed by micro saccades. Recently we reported a novel role of another type of eye movement, namely eye vergence, in orienting visual attention. Shifts in visuospatial attention are characterized by the response modulation to a selected target. However, unlike (micro-) saccades, eye vergence movements do not carry spatial information (except for depth) and are thus not specific to a particular visual location. To further understand the role of eye vergence in visual attention, we tested subjects with different perceptual styles. Perceptual style refers to the characteristic way individuals perceive environmental stimuli, and is characterized by a spatial difference (local vs. global) in perceptual processing. We tested field independent (local; FI) and field dependent (global; FD) observers in a cue/no-cue task and a matching task. We found that FI observers responded faster and had stronger modulation in eye vergence in both tasks than FD subjects. The results may suggest that eye vergence modulation may relate to the trade-off between the size of spatial region covered by attention and the processing efficiency of sensory information. Alternatively, vergence modulation may have a role in the switch in cortical state to prepare the visual system for new incoming sensory information. In conclusion, vergence eye movements may be added to the growing list of functions of fixational eye movements in visual perception. However, further studies are needed to elucidate its role. PMID:24069140
Cornil, C A; Dalla, C; Papadopoulou-Daifoti, Z; Baillien, M; Dejace, C; Ball, G F; Balthazart, J
2005-09-01
In Japanese quail, as in rats, the expression of male sexual behavior over relatively long time periods (days to weeks) is dependent on the local production of estradiol in the preoptic area via the aromatization of testosterone. On a short-term basis (minutes to hours), central actions of dopamine as well as locally produced estrogens modulate behavioral expression. In rats, a view of and sexual interaction with a female increase dopamine release in the preoptic area. In quail, in vitro brain aromatase activity (AA) is rapidly modulated by calcium-dependent phosphorylations that are likely to occur in vivo as a result of changes in neurotransmitter activity. Furthermore, an acute estradiol injection rapidly stimulates copulation in quail, whereas a single injection of the aromatase inhibitor vorozole rapidly inhibits this behavior. We hypothesized that brain aromatase and dopaminergic activities are regulated in quail in association with the expression of male sexual behavior. Visual access as well as sexual interactions with a female produced a significant decrease in brain AA, which was maximal after 5 min. This expression of sexual behavior also resulted in a significant decrease in dopaminergic as well as serotonergic activity after 1 min, which returned to basal levels after 5 min. These results demonstrate for the first time that AA is rapidly modulated in vivo in parallel with changes in dopamine activity. Sexual interactions with the female decreased aromatase and dopamine activities. These data challenge established views about the causal relationships among dopamine, estrogen action, and male sexual behavior.
Cornil, C. A.; Dalla, C.; Papadopoulou-Daifoti, Z.; Baillien, M.; Dejace, C.; Ball, G.F.; Balthazart, J.
2014-01-01
In Japanese quail as in rats, the expression of male sexual behavior over relatively long time periods (days to weeks) is dependent on the local production of estradiol in the preoptic area via the aromatization of testosterone. On a short-term basis (minutes to hours), central actions of dopamine as well as locally produced estrogens modulate behavioral expression. In rats, a view of and sexual interaction with a female increase dopamine release in the preoptic area. In quail, in vitro brain aromatase activity is rapidly modulated by calcium-dependent phosphorylations that are likely to occur in vivo as a result of changes in neurotransmitter activity. Furthermore, an acute estradiol injection rapidly stimulates copulation in quail, while a single injection of the aromatase inhibitor Vorozole™ rapidly inhibits this behavior. We hypothesized that brain aromatase and dopaminergic activities are regulated in quail in association with the expression of male sexual behavior. Visual access as well as sexual interactions with a female produced a significant decrease in brain aromatase activity that was maximal after 5 min. This expression of sexual behavior also resulted in a significant decrease in dopaminergic as well as serotonergic activity after 1 min, which returned to basal levels after 5 min. These results demonstrate for the first time that aromatase activity is rapidly modulated in vivo in parallel with changes in dopamine activity. Sexual interactions with the female decreased aromatase and dopamine activities. These data challenge established views about the causal relationships among dopamine, estrogen action and male sexual behavior. PMID:15932925
Figure-ground processing during fixational saccades in V1: indication for higher-order stability.
Gilad, Ariel; Pesoa, Yair; Ayzenshtat, Inbal; Slovin, Hamutal
2014-02-26
In a typical visual scene we continuously perceive a "figure" that is segregated from the surrounding "background" despite ongoing microsaccades and small saccades that are performed when attempting fixation (fixational saccades [FSs]). Previously reported neuronal correlates of figure-ground (FG) segregation in the primary visual cortex (V1) showed enhanced activity in the "figure" along with suppressed activity in the noisy "background." However, it is unknown how this FG modulation in V1 is affected by FSs. To investigate this question, we trained two monkeys to detect a contour embedded in a noisy background while simultaneously imaging V1 using voltage-sensitive dyes. During stimulus presentation, the monkeys typically performed 1-3 FSs, which displaced the contour over the retina. Using eye position and a 2D analytical model to map the stimulus onto V1, we were able to compute FG modulation before and after each FS. On the spatial cortical scale, we found that, after each FS, FG modulation follows the stimulus retinal displacement and "hops" within the V1 retinotopic map, suggesting visual instability. On the temporal scale, FG modulation is initiated in the new retinotopic position before it disappeared from the old retinotopic position. Moreover, the FG modulation developed faster after an FS, compared with after stimulus onset, which may contribute to visual stability of FG segregation, along the timeline of stimulus presentation. Therefore, despite spatial discontinuity of FG modulation in V1, the higher-order stability of FG modulation along time may enable our stable and continuous perception.
P-MartCancer–Interactive Online Software to Enable Analysis of Shotgun Cancer Proteomic Datasets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Webb-Robertson, Bobbie-Jo M.; Bramer, Lisa M.; Jensen, Jeffrey L.
P-MartCancer is a new interactive web-based software environment that enables biomedical and biological scientists to perform in-depth analyses of global proteomics data without requiring direct interaction with the data or with statistical software. P-MartCancer offers a series of statistical modules associated with quality assessment, peptide and protein statistics, protein quantification and exploratory data analyses driven by the user via customized workflows and interactive visualization. Currently, P-MartCancer offers access to multiple cancer proteomic datasets generated through the Clinical Proteomics Tumor Analysis Consortium (CPTAC) at the peptide, gene and protein levels. P-MartCancer is deployed using Azure technologies (http://pmart.labworks.org/cptac.html); the web service is alternatively available via Docker Hub (https://hub.docker.com/r/pnnl/pmart-web/), and many statistical functions can be used directly from an R package available on GitHub (https://github.com/pmartR).
Architecture of human translation initiation factor 3
Querol-Audi, Jordi; Sun, Chaomin; Vogan, Jacob M.; Smith, Duane; Gu, Yu; Cate, Jamie; Nogales, Eva
2013-01-01
Eukaryotic translation initiation factor 3 (eIF3) plays a central role in protein synthesis by organizing the formation of the 43S preinitiation complex. Using genetic tag visualization by electron microscopy, we reveal the molecular organization of ten human eIF3 subunits, including an octameric core. The structure of eIF3 bears a close resemblance to that of the proteasome lid, with a conserved spatial organization of eight core subunits containing PCI and MPN domains that coordinate functional interactions in both complexes. We further show that eIF3 subunits a and c interact with initiation factors eIF1 and eIF1A, which control the stringency of start codon selection. Finally, we find that subunit j, which modulates messenger RNA interactions with the small ribosomal subunit, makes multiple independent interactions with the eIF3 octameric core. These results highlight the conserved architecture of eIF3 and how it scaffolds key factors that control translation initiation in higher eukaryotes, including humans. PMID:23623729
Attention Modulates Visual-Tactile Interaction in Spatial Pattern Matching
Göschl, Florian; Engel, Andreas K.; Friese, Uwe
2014-01-01
Factors influencing crossmodal interactions are manifold and operate in a stimulus-driven, bottom-up fashion, as well as via top-down control. Here, we evaluate the interplay of stimulus congruence and attention in a visual-tactile task. To this end, we used a matching paradigm requiring the identification of spatial patterns that were concurrently presented visually on a computer screen and haptically to the fingertips by means of a Braille stimulator. Stimulation in our paradigm was always bimodal with only the allocation of attention being manipulated between conditions. In separate blocks of the experiment, participants were instructed to (a) focus on a single modality to detect a specific target pattern, (b) pay attention to both modalities to detect a specific target pattern, or (c) to explicitly evaluate if the patterns in both modalities were congruent or not. For visual as well as tactile targets, congruent stimulus pairs led to quicker and more accurate detection compared to incongruent stimulation. This congruence facilitation effect was more prominent under divided attention. Incongruent stimulation led to behavioral decrements under divided attention as compared to selectively attending a single sensory channel. Additionally, when participants were asked to evaluate congruence explicitly, congruent stimulation was associated with better performance than incongruent stimulation. Our results extend previous findings from audiovisual studies, showing that stimulus congruence also resulted in behavioral improvements in visuotactile pattern matching. The interplay of stimulus processing and attentional control seems to be organized in a highly flexible fashion, with the integration of signals depending on both bottom-up and top-down factors, rather than occurring in an ‘all-or-nothing’ manner. PMID:25203102
The Effect of Non-Visual Working Memory Load on Top-Down Modulation of Visual Processing
ERIC Educational Resources Information Center
Rissman, Jesse; Gazzaley, Adam; D'Esposito, Mark
2009-01-01
While a core function of the working memory (WM) system is the active maintenance of behaviorally relevant sensory representations, it is also critical that distracting stimuli are appropriately ignored. We used functional magnetic resonance imaging to examine the role of domain-general WM resources in the top-down attentional modulation of…
Wilaiprasitporn, Theerawit; Yagi, Tohru
2015-01-01
This research demonstrates the orientation-modulated attention effect on visual evoked potential. We combined this finding with our previous findings about the motion-modulated attention effect and used the result to develop novel visual stimuli for a personal identification number (PIN) application based on a brain-computer interface (BCI) framework. An electroencephalography amplifier with a single electrode channel was sufficient for our application. A computationally inexpensive algorithm and small datasets were used in processing. Seven healthy volunteers participated in experiments to measure offline performance. Mean accuracy was 83.3% at 13.9 bits/min. Encouraged by these results, we plan to continue developing the BCI-based personal identification application toward real-time systems.
New Educational Modules Using a Cyber-Distribution System Testbed
Xie, Jing; Bedoya, Juan Carlos; Liu, Chen-Ching; ...
2018-03-30
At Washington State University (WSU), a modern cyber-physical system testbed has been implemented based on an industry grade distribution management system (DMS) that is integrated with remote terminal units (RTUs), smart meters, and a solar photovoltaic (PV). In addition, the real model from the Avista Utilities distribution system in Pullman, WA, is modeled in DMS. The proposed testbed environment allows students and instructors to utilize these facilities for innovations in learning and teaching. For power engineering education, this testbed helps students understand the interaction between a cyber system and a physical distribution system through industrial level visualization. The testbed provides a distribution system monitoring and control environment for students. Compared with a simulation based approach, the testbed brings the students' learning environment a step closer to the real world. The educational modules allow students to learn the concepts of a cyber-physical system and an electricity market through an integrated testbed. Furthermore, the testbed provides a platform in the study mode for students to practice working on a real distribution system model. Here, this paper describes the new educational modules based on the testbed environment. Three modules are described together with the underlying educational principles and associated projects.
The Fluids Integrated Rack and Light Microscopy Module Integrated Capabilities
NASA Technical Reports Server (NTRS)
Motil, Susan M.; Gati, Frank; Snead, John H.; Hill, Myron E.; Griffin, DeVon W.
2003-01-01
The Fluids Integrated Rack (FIR), a facility class payload, and the Light Microscopy Module (LMM), a subrack payload, are scheduled to be launched in 2005. The LMM integrated into the FIR will provide a unique platform for conducting fluids and biological experiments on ISS. The FIR is a modular, multi-user scientific research facility that will fly in the U.S. laboratory module, Destiny, of the International Space Station (ISS). The first payload in the FIR will be the Light Microscopy Module (LMM). The LMM is planned as a remotely controllable, automated, on-orbit microscope subrack facility, allowing flexible scheduling and control of fluids and biology experiments within the FIR. Key diagnostic capabilities for meeting science requirements include video microscopy to observe microscopic phenomena and dynamic interactions, interferometry to make thin film measurements with nanometer resolution, laser tweezers for particle manipulation, confocal microscopy to provide enhanced three-dimensional visualization of structures, and spectrophotometry to measure photonic properties of materials. The LMM also provides experiment sample containment for frangibles and fluids. This paper will provide a description of the current FIR and LMM designs, planned capabilities and key features. In addition a brief description of the initial five experiments planned for LMM/FIR will be provided.
NASA Astrophysics Data System (ADS)
Myers, Robert Gardner
1997-12-01
The purpose of this study was to determine whether there is a correlation between the cognitive style of field dependence and the type of visual presentation format used in a computer-based tutorial (color, black and white, or line drawings) when subjects are asked to identify human tissue samples. Two hundred four college students enrolled in human anatomy and physiology classes at Westmoreland County Community College participated. They were first administered the Group Embedded Figures Test (GEFT) and then were divided into three groups: field-independent (score, 15-18), field-neutral (score, 11-14), and field-dependent (score, 0-10). Subjects were randomly assigned to one of the three treatment groups. Instruction was delivered by means of a computer-aided tutorial consisting of text and visuals of human tissue samples. The pretest and posttest consisted of 15 tissue samples, five from each treatment, that were imported into the HyperCard™ stack and were played using QuickTime™ movie extensions. A two-way analysis of covariance (ANCOVA) using pretest and posttest scores was used to investigate whether there is a relationship between field dependence and each of the three visual presentation formats. No significant interaction was found between individual subjects' relative degree of field dependence and any of the different visual presentation formats used in the computer-aided tutorial module, F(4,194) = 1.78, p = .1335. There was a significant difference between the students' levels of field dependence in terms of their ability to identify human tissue samples, F(2,194) = 5.83, p = .0035. Field-independent subjects scored significantly higher (M = 10.59) on the posttest than subjects who were field-dependent (M = 9.04). There was also a significant difference among the various visual presentation formats, F(2,194) = 3.78, p = .0245. Subjects assigned to the group that received the color visual presentation format scored significantly higher (M = 10.38) on the posttest measure than did those assigned to the group that received the line drawing visual presentation format (M = 8.99).
Hotspot Endurance Of Solar-Cell Modules
NASA Technical Reports Server (NTRS)
Gonzalez, C. C.; Sugimura, R. S.; Ross, R. G., Jr.
1989-01-01
Procedure for evaluating modules for use with concentrators now available. Solar simulator illuminates photovoltaic cells through Fresnel lens of concentrator module. Module and test cells inspected visually at 24-h intervals during test and again when test completed. After test, electrical characteristics of module measured for comparison with pretest characteristics.
Spatial attention increases high-frequency gamma synchronisation in human medial visual cortex.
Koelewijn, Loes; Rich, Anina N; Muthukumaraswamy, Suresh D; Singh, Krish D
2013-10-01
Visual information processing involves the integration of stimulus and goal-driven information, requiring neuronal communication. Gamma synchronisation is linked to neuronal communication, and is known to be modulated in visual cortex both by stimulus properties and voluntarily-directed attention. Stimulus-driven modulations of gamma activity are particularly associated with early visual areas such as V1, whereas attentional effects are generally localised to higher visual areas such as V4. The absence of a gamma increase in early visual cortex is at odds with robust attentional enhancements found with other measures of neuronal activity in this area. Here we used magnetoencephalography (MEG) to explore the effect of spatial attention on gamma activity in human early visual cortex using a highly effective gamma-inducing stimulus and strong attentional manipulation. In separate blocks, subjects tracked either a parafoveal grating patch that induced gamma activity in contralateral medial visual cortex, or a small line at fixation, effectively attending away from the gamma-inducing grating. Both items were always present, but rotated unpredictably and independently of each other. The rotating grating induced gamma synchronisation in medial visual cortex at 30-70 Hz, and in lateral visual cortex at 60-90 Hz, regardless of whether it was attended. Directing spatial attention to the grating increased gamma synchronisation in medial visual cortex, but only at 60-90 Hz. These results suggest that the generally found increase in gamma activity by spatial attention can be localised to early visual cortex in humans, and that stimulus and goal-driven modulations may be mediated at different frequencies within the gamma range. Copyright © 2013 Elsevier Inc. All rights reserved.
Observers' cognitive states modulate how visual inputs relate to gaze control.
Kardan, Omid; Henderson, John M; Yourganov, Grigori; Berman, Marc G
2016-09-01
Previous research has shown that eye-movements change depending on both the visual features of our environment, and the viewer's top-down knowledge. One important question that is unclear is the degree to which the visual goals of the viewer modulate how visual features of scenes guide eye-movements. Here, we propose a systematic framework to investigate this question. In our study, participants performed 3 different visual tasks on 135 scenes: search, memorization, and aesthetic judgment, while their eye-movements were tracked. Canonical correlation analyses showed that eye-movements were reliably more related to low-level visual features at fixations during the visual search task compared to the aesthetic judgment and scene memorization tasks. Different visual features also had different relevance to eye-movements between tasks. This modulation of the relationship between visual features and eye-movements by task was also demonstrated with classification analyses, where classifiers were trained to predict the viewing task based on eye movements and visual features at fixations. Feature loadings showed that the visual features at fixations could signal task differences independent of temporal and spatial properties of eye-movements. When classifying across participants, edge density and saliency at fixations were as important as eye-movements in the successful prediction of task, with entropy and hue also being significant, but with smaller effect sizes. When classifying within participants, brightness and saturation were also significant contributors. Canonical correlation and classification results, together with a test of moderation versus mediation, suggest that the cognitive state of the observer moderates the relationship between stimulus-driven visual features and eye-movements. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
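A small illustration of the canonical correlation approach mentioned above, relating fixation-level visual features to eye-movement measures; the data here are synthetic with an imposed shared signal, and the feature names in the comments are only examples drawn from the abstract.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(3)
n_fixations = 500

# Illustrative fixation-level data: low-level features at fixated locations
# (e.g., edge density, saliency, entropy, hue, brightness, saturation) and
# eye-movement measures (e.g., fixation duration, saccade amplitude).
shared = rng.standard_normal((n_fixations, 2))
features = shared @ rng.standard_normal((2, 6)) + 0.5 * rng.standard_normal((n_fixations, 6))
eye_moves = shared @ rng.standard_normal((2, 3)) + 0.5 * rng.standard_normal((n_fixations, 3))

cca = CCA(n_components=2).fit(features, eye_moves)
U, V = cca.transform(features, eye_moves)
for k in range(2):
    r = np.corrcoef(U[:, k], V[:, k])[0, 1]
    print(f"canonical correlation {k + 1}: {r:.2f}")
```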
FROMS3D: New Software for 3-D Visualization of Fracture Network System in Fractured Rock Masses
NASA Astrophysics Data System (ADS)
Noh, Y. H.; Um, J. G.; Choi, Y.
2014-12-01
A new software package (FROMS3D) is presented to visualize fracture network systems in 3-D. The software consists of several modules that manage borehole and field fracture data, model fracture networks, visualize fracture geometry in 3-D, and calculate and visualize intersections and equivalent pipes between fractures. Intel Parallel Studio XE 2013, Visual Studio.NET 2010 and the open source VTK library were utilized as development tools to efficiently implement the modules and the graphical user interface of the software. The results suggest that the developed software is effective in visualizing 3-D fracture network systems and can provide useful information for tackling engineering geological problems related to the strength, deformability and hydraulic behavior of fractured rock masses.
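The software itself is a C++/VTK application; the following is a minimal sketch of the same visualization idea using VTK's Python bindings, rendering a few disc-shaped fractures whose centers, normals, and radii are made up rather than taken from the described modules.

```python
import vtk

# Hypothetical fracture discs: (center, normal, radius) triples for illustration.
fractures = [((0, 0, 0), (0, 0, 1), 2.0),
             ((1, 1, 0.5), (1, 1, 0), 1.5),
             ((-1, 0.5, 1), (0, 1, 1), 1.0)]

renderer = vtk.vtkRenderer()
for center, normal, radius in fractures:
    disc = vtk.vtkRegularPolygonSource()
    disc.SetNumberOfSides(40)          # many-sided polygon approximates a circular disc
    disc.SetCenter(*center)
    disc.SetNormal(*normal)
    disc.SetRadius(radius)
    mapper = vtk.vtkPolyDataMapper()
    mapper.SetInputConnection(disc.GetOutputPort())
    actor = vtk.vtkActor()
    actor.SetMapper(mapper)
    actor.GetProperty().SetOpacity(0.6)   # semi-transparent so intersections are visible
    renderer.AddActor(actor)

window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)
interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(window)
window.Render()
interactor.Start()
```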
[Development and application of emergency medical information management system].
Wang, Fang; Zhu, Baofeng; Chen, Jianrong; Wang, Jian; Gu, Chaoli; Liu, Buyun
2011-03-01
To meet the needs of clinical practice in rescuing critically ill patients, an information management system for emergency medicine was developed. Microsoft Visual FoxPro, one of Microsoft's visual programming tools, was used to build the computer-aided system, which includes the emergency medicine information management system. The system mainly consists of modules for statistical analysis, quality control of emergency rescue, emergency rescue workflow, emergency nursing care, and rescue training. It supports the systematic management of emergency medicine and the processing and analysis of emergency statistical data. The system is practical; it can optimize the emergency clinical pathway and meet the needs of clinical rescue.
Modulation of visualized electrical field
NASA Astrophysics Data System (ADS)
Chuang, Chin-Jung; Wu, Chi-Chung; Wang, Yi-Ting; Huang, Shiuan-Hau
2015-10-01
Polarization is an important concept in electromagnetism, and polarizers have traditionally been used to demonstrate it in the laboratory. We set up an optical system with an "axis finder" component that visualizes the polarization direction immediately. Light phenomena such as birefringence, circular polarization, and Brewster's angle can thus be examined visually. In addition, the principles of different waveplates and of the optical axis can be presented in a straightforward way. By means of image analysis, the polarization direction can be measured with a precision of up to 0.01 degree. Modulated polarized light is applied to several optical devices, such as liquid-crystal displays, and the polarization of light can be traced from the backlight module through the polarizer to the panel. As seeing is believing, the visualized electrical field allows educators to teach polarization in a smooth and strikingly clear manner. Without any polarizer or analyzer, we examine the rotary power of syrup at different concentrations and present its relationship with the polarization change. We also demonstrate the wide application of polarized light in modern life and examine its principles through this visualized electrical field system.
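The syrup demonstration rests on the standard optical-rotation relation, in which the rotation angle grows linearly with path length and concentration (standard textbook form; the symbols are ours, not the authors'):

```latex
\alpha = [\alpha]_{\lambda}^{T}\; l\; c
% \alpha: observed rotation angle, [\alpha]_{\lambda}^{T}: specific rotation at
% wavelength \lambda and temperature T, l: optical path length, c: concentration.
```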
Integrating conflict detection and attentional control mechanisms.
Walsh, Bong J; Buonocore, Michael H; Carter, Cameron S; Mangun, George R
2011-09-01
Human behavior involves monitoring and adjusting performance to meet established goals. Performance-monitoring systems that act by detecting conflict in stimulus and response processing have been hypothesized to influence cortical control systems to adjust and improve performance. Here we used fMRI to investigate the neural mechanisms of conflict monitoring and resolution during voluntary spatial attention. We tested the hypothesis that the ACC would be sensitive to conflict during attentional orienting and influence activity in the frontoparietal attentional control network that selectively modulates visual information processing. We found that activity in ACC increased monotonically with increasing attentional conflict. This increased conflict detection activity was correlated with both increased activity in the attentional control network and improved speed and accuracy from one trial to the next. These results establish a long hypothesized interaction between conflict detection systems and neural systems supporting voluntary control of visual attention.
Kraehenmann, Rainer; Schmidt, André; Friston, Karl; Preller, Katrin H.; Seifritz, Erich; Vollenweider, Franz X.
2015-01-01
Stimulation of serotonergic neurotransmission by psilocybin has been shown to shift emotional biases away from negative towards positive stimuli. We have recently shown that reduced amygdala activity during threat processing might underlie psilocybin's effect on emotional processing. However, it is still not known whether psilocybin modulates bottom-up or top-down connectivity within the visual-limbic-prefrontal network underlying threat processing. We therefore analyzed our previous fMRI data using dynamic causal modeling and used Bayesian model selection to infer how psilocybin modulated effective connectivity within the visual–limbic–prefrontal network during threat processing. First, both placebo and psilocybin data were best explained by a model in which threat affect modulated bidirectional connections between the primary visual cortex, amygdala, and lateral prefrontal cortex. Second, psilocybin decreased the threat-induced modulation of top-down connectivity from the amygdala to primary visual cortex, speaking to a neural mechanism that might underlie putative shifts towards positive affect states after psilocybin administration. These findings may have important implications for the treatment of mood and anxiety disorders. PMID:26909323
Functional significance of the emotion-related late positive potential
Brown, Stephen B. R. E.; van Steenbergen, Henk; Band, Guido P. H.; de Rover, Mischa; Nieuwenhuis, Sander
2012-01-01
The late positive potential (LPP) is an event-related potential (ERP) component over visual cortical areas that is modulated by the emotional intensity of a stimulus. However, the functional significance of this neural modulation remains elusive. We conducted two experiments in which we studied the relation between LPP amplitude, subsequent perceptual sensitivity to a non-emotional stimulus (Experiment 1) and visual cortical excitability, as reflected by P1/N1 components evoked by this stimulus (Experiment 2). During the LPP modulation elicited by unpleasant stimuli, perceptual sensitivity was not affected. In contrast, we found some evidence for a decreased N1 amplitude during the LPP modulation, a decreased P1 amplitude on trials with a relatively large LPP, and consistent negative (but non-significant) across-subject correlations between the magnitudes of the LPP modulation and corresponding changes in d-prime or P1/N1 amplitude. The results provide preliminary evidence that the LPP reflects a global inhibition of activity in visual cortex, resulting in the selective survival of activity associated with the processing of the emotional stimulus. PMID:22375117
Observations of solar-cell metallization corrosion
NASA Technical Reports Server (NTRS)
Mon, G. R.
1983-01-01
The Engineering Sciences Area of the Jet Propulsion Laboratory (JPL) Flat-Plate Solar Array Project is performing long term environmental tests on photovoltaic modules at Wyle Laboratories in Huntsville, Alabama. Some modules have been exposed to 85 C/85% RH and 40 C/93% RH for up to 280 days. Other modules undergoing temperature-only exposures ( 3% RH) at 85 C and 100 C have been tested for more than 180 days. At least two modules of each design type are exposed to each environment - one with, and the other without a 100-mA forward bias. Degradation is both visually observed and electrically monitored. Visual observations of changes in appearance are recorded at each inspection time. Significant visual observations relating to metallization corrosion (and/or metallization-induced corrosion) include discoloration (yellowing and browning) of grid lines, migration of grid line material into the encapsulation (blossoming), the appearance of rainbow-like diffraction patterns on the grid lines, and brown spots on collectors and grid lines. All of these observations were recorded for electrically biased modules in the 280-day tests with humidity.
Beyond Control Panels: Direct Manipulation for Visual Analytics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Endert, Alexander; Bradel, Lauren; North, Chris
2013-07-19
Information Visualization strives to provide visual representations through which users can think about and gain insight into information. By leveraging the visual and cognitive systems of humans, complex relationships and phenomena occurring within datasets can be uncovered by exploring information visually. Interaction metaphors for such visualizations are designed to give users direct control over the filters, queries, and other parameters controlling how the data is visually represented. Through the evolution of information visualization, more complex mathematical and data analytic models are being used to visualize relationships and patterns in data – creating the field of Visual Analytics. However, the expectations for how users interact with these visualizations have remained largely unchanged – focused primarily on the direct manipulation of parameters of the underlying mathematical models. In this article we present an opportunity to evolve the methodology for user interaction from the direct manipulation of parameters through visual control panels, to interactions designed specifically for visual analytic systems. Instead of focusing on traditional direct manipulation of mathematical parameters, the evolution of the field can be realized through direct manipulation within the visual representation – where users can not only gain insight, but also interact. This article describes future directions and research challenges that fundamentally change the meaning of direct manipulation with regards to visual analytics, advancing the Science of Interaction.
Cognitive regulation of saccadic velocity by reward prospect.
Chen, Lewis L; Hung, Leroy Y; Quinet, Julie; Kosek, Kevin
2013-08-01
It is known that expectation of reward speeds up saccades. Past studies have also shown the presence of a saccadic velocity bias in the orbit, resulting from a biomechanical regulation over varying eccentricities. Nevertheless, whether and how reward expectation interacts with the biomechanical regulation of saccadic velocities over varying eccentricities remains unknown. We addressed this question by conducting a visually guided double-step saccade task. The role of reward expectation was tested in monkeys performing two consecutive horizontal saccades, one associated with reward prospect and the other not. To adequately assess saccadic velocity and avoid adaptation, we systematically varied initial eye positions, saccadic directions and amplitudes. Our results confirmed the existence of a velocity bias in the orbit, i.e., saccadic peak velocity decreased linearly as the initial eye position deviated in the direction of the saccade. The slope of this bias increased as saccadic amplitudes increased. Nevertheless, reward prospect facilitated velocity to a greater extent for saccades away from than for saccades toward the orbital centre, rendering an overall reduction in the velocity bias. The rate (slope) and magnitude (intercept) of reward modulation over this velocity bias were linearly correlated with amplitudes, similar to the amplitude-modulated velocity bias without reward prospect, which presumably resulted from a biomechanical regulation. Small-amplitude (≤ 5°) saccades received little modulation. These findings together suggest that reward expectation modulated saccadic velocity not as an additive signal but as a facilitating mechanism that interacted with the biomechanical regulation. © 2013 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
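As a rough illustration of the velocity-bias analysis described above (peak velocity regressed on initial eye position, with slope and intercept compared across reward conditions), the sketch below uses entirely synthetic numbers; none of the values or parameters come from the monkey data.

```python
# Illustrative sketch: regress peak saccadic velocity on initial eye position
# (deviation along the saccade direction), separately for reward and no-reward
# trials, and compare the fitted slopes. All numbers are synthetic assumptions.
import numpy as np

rng = np.random.default_rng(4)
initial_pos = rng.uniform(-15, 15, 200)            # deg, along the saccade direction

def peak_velocity(pos, baseline, slope, noise=15.0):
    return baseline + slope * pos + rng.normal(0, noise, pos.size)

v_no_reward = peak_velocity(initial_pos, baseline=450.0, slope=-6.0)  # steeper bias
v_reward = peak_velocity(initial_pos, baseline=480.0, slope=-3.5)     # flatter bias

for label, v in (("no reward", v_no_reward), ("reward", v_reward)):
    slope, intercept = np.polyfit(initial_pos, v, 1)
    print(f"{label}: slope ≈ {slope:.1f} (deg/s)/deg, intercept ≈ {intercept:.0f} deg/s")
```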
Scientific Assistant Virtual Laboratory (SAVL)
NASA Astrophysics Data System (ADS)
Alaghband, Gita; Fardi, Hamid; Gnabasik, David
2007-03-01
The Scientific Assistant Virtual Laboratory (SAVL) is a scientific discovery environment, an interactive simulated virtual laboratory, for learning physics and mathematics. The purpose of this computer-assisted intervention is to improve middle and high school student interest, insight and scores in physics and mathematics. SAVL develops scientific and mathematical imagination in a visual, symbolic, and experimental simulation environment. It directly addresses the issues of scientific and technological competency by providing critical thinking training through integrated modules. This on-going research provides a virtual laboratory environment in which the student directs the building of the experiment rather than observing a packaged simulation. SAVL: (1) engages the persistent interest of young minds in physics and math by visually linking simulation objects and events with mathematical relations; (2) teaches integrated concepts by the hands-on exploration and focused visualization of classic physics experiments within software; (3) systematically and uniformly assesses and scores students by their ability to answer their own questions within the context of a Master Question Network. We will demonstrate how the Master Question Network uses polymorphic interfaces and C# lambda expressions to manage simulation objects.
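The closing sentence names a specific implementation pattern: question nodes that evaluate lambda expressions polymorphically against simulation objects. The sketch below illustrates that general pattern in Python with hypothetical class and attribute names; it is not SAVL's actual C# code.

```python
# Illustrative sketch of the pattern above: a question node holds a lambda
# (predicate) that is evaluated against simulation objects through a common
# interface. Names are hypothetical, not SAVL's API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class SimObject:
    name: str
    mass: float      # kg
    velocity: float  # m/s

    @property
    def kinetic_energy(self) -> float:
        return 0.5 * self.mass * self.velocity ** 2

@dataclass
class QuestionNode:
    prompt: str
    check: Callable[[SimObject], bool]  # lambda deciding whether the answer holds

    def score(self, obj: SimObject) -> int:
        return 1 if self.check(obj) else 0

cart = SimObject("cart", mass=2.0, velocity=3.0)
question = QuestionNode(
    prompt="Does the cart's kinetic energy exceed 5 J?",
    check=lambda obj: obj.kinetic_energy > 5.0,
)
print(question.score(cart))  # 1, since 0.5 * 2 * 3^2 = 9 J
```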
Design, Implementation and Evaluation of an Indoor Navigation System for Visually Impaired People
Martinez-Sala, Alejandro Santos; Losilla, Fernando; Sánchez-Aarnoutse, Juan Carlos; García-Haro, Joan
2015-01-01
Indoor navigation is a challenging task for visually impaired people. Although there are guidance systems available for such purposes, they have some drawbacks that hamper their direct application in real-life situations. These systems are either too complex, inaccurate, or require very special conditions (i.e., rare in everyday life) to operate. In this regard, Ultra-Wideband (UWB) technology has been shown to be effective for indoor positioning, providing a high level of accuracy and low installation complexity. This paper presents SUGAR, an indoor navigation system for visually impaired people which uses UWB for positioning, a spatial database of the environment for pathfinding through the application of the A* algorithm, and a guidance module. The interaction with the user takes place using acoustic signals and voice commands played through headphones. The suitability of the system for indoor navigation has been verified by means of a functional and usable prototype through a field test with a blind person. In addition, other tests have been conducted in order to show the accuracy of different relevant parts of the system. PMID:26703610
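The pathfinding step relies on the A* algorithm over a spatial model of the building. A minimal grid-based A* sketch follows; the occupancy grid, unit step costs, and Manhattan heuristic are illustrative assumptions and do not reflect SUGAR's spatial database or map format.

```python
# Minimal A* sketch over a 2D occupancy grid, illustrating the pathfinding
# step described above. Grid, costs, and heuristic are illustrative assumptions.
import heapq

def a_star(grid, start, goal):
    """grid: list of lists, 0 = free, 1 = blocked; start/goal: (row, col)."""
    def h(cell):  # Manhattan-distance heuristic (admissible on a 4-connected grid)
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_set = [(h(start), 0, start, [start])]
    visited = set()
    while open_set:
        f, g, cell, path = heapq.heappop(open_set)
        if cell == goal:
            return path
        if cell in visited:
            continue
        visited.add(cell)
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                heapq.heappush(open_set, (g + 1 + h((nr, nc)), g + 1, (nr, nc), path + [(nr, nc)]))
    return None  # no route found

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(a_star(grid, (0, 0), (2, 0)))  # [(0,0), (0,1), (0,2), (1,2), (2,2), (2,1), (2,0)]
```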
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rawle, Rachel A.; Hamerly, Timothy; Tripet, Brian P.
Studies of interspecies interactions are inherently difficult due to the complex mechanisms which enable these relationships. A model system for studying interspecies interactions is the marine hyperthermophiles Ignicoccus hospitalis and Nanoarchaeum equitans. Recent independently-conducted 'omics' analyses have generated insights into the molecular factors modulating this association. However, significant questions remain about the nature of the interactions between these archaea. We jointly analyzed multiple levels of omics datasets obtained from published, independent transcriptomics, proteomics, and metabolomics analyses. DAVID identified functionally-related groups enriched when I. hospitalis is grown alone or in co-culture with N. equitans. Enriched molecular pathways were subsequently visualized using interaction maps generated with STRING. Key findings of our multi-level omics analysis indicated that I. hospitalis provides precursors to N. equitans for energy metabolism. Analysis indicated an overall reduction in the diversity of metabolic precursors in the I. hospitalis–N. equitans co-culture, which has been connected to the differential use of ribosomal subunits and had previously gone unnoticed. We also identified differences in precursors linked to amino acid metabolism, NADH metabolism, and carbon fixation, providing new insights into the metabolic adaptations of I. hospitalis that enable the growth of N. equitans. In conclusion, this multi-omics analysis builds upon previously identified cellular patterns while offering new insights into mechanisms that enable the I. hospitalis–N. equitans association. This study applies statistical and visualization techniques to a mixed-source omics dataset to yield a more global insight into a complex system that was not readily discernible from the separate omics studies.
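The enrichment step (here performed with DAVID) amounts to an over-representation test of functional categories against a background set. The sketch below shows a generic hypergeometric version of such a test with hypothetical counts; it is not DAVID's implementation or API.

```python
# Generic over-representation (enrichment) test of the kind underlying tools
# like DAVID: is a functional category over-represented in a selected protein
# list relative to the background? Counts below are hypothetical.
from scipy.stats import hypergeom

def enrichment_p(n_background, n_category, n_selected, n_overlap):
    """P(overlap >= observed) under the hypergeometric null."""
    return hypergeom.sf(n_overlap - 1, n_background, n_category, n_selected)

# Hypothetical numbers: 2000 proteins in the background, 120 annotated to
# "energy metabolism", 150 differentially abundant, 25 in both sets.
print(f"{enrichment_p(2000, 120, 150, 25):.2e}")
```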
Pavan, Andrea; Marotti, Rosilari Bellacosa; Mather, George
2013-01-01
Motion and form encoding are closely coupled in the visual system. A number of physiological studies have shown that neurons in the striate and extrastriate cortex (e.g., V1 and MT) are selective for motion direction parallel to their preferred orientation, but some neurons also respond to motion orthogonal to their preferred spatial orientation. Recent psychophysical research (Mather, Pavan, Bellacosa, & Casco, 2012) has demonstrated that the strength of adaptation to two fields of transparently moving dots is modulated by simultaneously presented orientation signals, suggesting that the interaction occurs at the level of motion integrating receptive fields in the extrastriate cortex. In the present psychophysical study, we investigated whether motion-form interactions take place at a higher level of neural processing where optic flow components are extracted. In Experiment 1, we measured the duration of the motion aftereffect (MAE) generated by contracting or expanding dot fields in the presence of either radial (parallel) or concentric (orthogonal) counterphase pedestal gratings. To tap the stage at which optic flow is extracted, we measured the duration of the phantom MAE (Weisstein, Maguire, & Berbaum, 1977) in which we adapted and tested different parts of the visual field, with orientation signals presented either in the adapting (Experiment 2) or nonadapting (Experiments 3 and 4) sectors. Overall, the results showed that motion adaptation is suppressed most by orientation signals orthogonal to optic flow direction, suggesting that motion-form interactions also take place at the global motion level where optic flow is extracted. PMID:23729767
Sex hormonal modulation of interhemispheric transfer time.
Hausmann, M; Hamm, J P; Waldie, K E; Kirk, I J
2013-08-01
It is still a matter of debate whether functional cerebral asymmetries (FCA) of many cognitive processes are more pronounced in men than in women. Some evidence suggests that the apparent reduction in women's FCA is a result of the fluctuating levels of gonadal steroid hormones over the course of the menstrual cycle, making their FCA less static than for men. The degree of lateralization has been suggested to depend on interhemispheric communication that may be modulated by gonadal steroid hormones. Here, we employed visual-evoked EEG potentials to obtain a direct measure of interhemispheric communication during different phases of the menstrual cycle. The interhemispheric transfer time (IHTT) was estimated from the interhemispheric latency difference of the N170 component of the visual-evoked potential from either left or right visual field presentation. Nineteen right-handed women with regular menstrual cycles were tested twice, once during the menstrual phase, when progesterone and estradiol levels are low, and once during the luteal phase when progesterone and estradiol levels are high. Plasma steroid levels were determined by blood-based immunoassay at each session. It was found that IHTT, in particular from right-to-left, was generally longer during the luteal phase relative to the menstrual phase. This effect occurred as a consequence of a slowed absolute N170 latency of the indirect pathway (i.e. left hemispheric response after LVF stimulation) and, in particular, a shortened latency of the direct pathway (i.e. right hemispheric response after LVF stimulation) during the luteal phase. These results show that cycle-related effects are not restricted to modulation of processes between hemispheres but also apply to cortical interactions, especially within the right hemisphere. The findings support the view that plastic changes in the female brain occur during relatively short-term periods across the menstrual cycle. Copyright © 2013 Elsevier Ltd. All rights reserved.
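The IHTT estimate comes down to differencing N170 peak latencies between the hemisphere contralateral to the stimulated visual field (direct pathway) and the ipsilateral hemisphere (indirect pathway). A simplified sketch with synthetic waveforms follows; the window, sampling rate, and simulated ERPs are assumptions, not the study's pipeline.

```python
# Illustrative sketch: estimate interhemispheric transfer time (IHTT) as the
# difference in N170 peak latency between a "direct" and an "indirect"
# occipito-temporal response. Window, sampling rate, and waveforms are assumed.
import numpy as np

def n170_peak_latency(erp, times, window=(0.13, 0.21)):
    """Latency (s) of the most negative deflection within the N170 window."""
    mask = (times >= window[0]) & (times <= window[1])
    return times[mask][np.argmin(erp[mask])]

fs = 500.0                               # Hz
times = np.arange(-0.1, 0.5, 1.0 / fs)   # s, relative to stimulus onset
# Synthetic ERPs: negative peaks at 160 ms (direct) and 178 ms (indirect).
direct = -np.exp(-((times - 0.160) ** 2) / (2 * 0.01 ** 2))
indirect = -np.exp(-((times - 0.178) ** 2) / (2 * 0.01 ** 2))

ihtt = n170_peak_latency(indirect, times) - n170_peak_latency(direct, times)
print(f"IHTT ≈ {ihtt * 1000:.0f} ms")    # ≈ 18 ms
```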
Schaal, Nora K; Pfeifer, Jasmin; Krause, Vanessa; Pollok, Bettina
2015-11-01
Brain imaging studies have highlighted structural differences in congenital amusia, a life-long perceptual disorder that is associated with pitch perception and pitch memory deficits. A functional anomaly characterized by decreased low gamma oscillations (30-40 Hz range) in the right dorsolateral prefrontal cortex (DLPFC) during pitch memory has been revealed recently. Thus, the present study investigates whether applying transcranial alternating current stimulation (tACS) at 35 Hz to the right DLPFC would improve pitch memory. Nine amusics took part in two tACS sessions (either 35 Hz or 90 Hz) and completed a pitch and visual memory task before and during stimulation. 35 Hz stimulation facilitated pitch memory significantly. No modulation effects were found with 90 Hz stimulation or on the visual task. While amusics showed a selective impairment of pitch memory before stimulation, their performance during 35 Hz stimulation no longer differed significantly from that of healthy controls. Taken together, the study shows that modulating the right DLPFC with 35 Hz tACS in congenital amusia selectively improves pitch memory performance, supporting the hypothesis that decreased gamma oscillations within the DLPFC are causally involved in disturbed pitch memory, and highlights the potential of tACS to interact with cognitive processes. Copyright © 2015 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Sanspree, M. J.; And Others
1991-01-01
This article describes the Vision Outreach Project--a pilot project of the University of Alabama at Birmingham for training teachers of visually impaired students. The project produced video modules to provide distance education in rural and urban areas. The modules can be used to complete degree requirements or in-service training and continuing…
NASA Astrophysics Data System (ADS)
Reed, S. E.; Kreylos, O.; Hsi, S.; Kellogg, L. H.; Schladow, G.; Yikilmaz, M. B.; Segale, H.; Silverman, J.; Yalowitz, S.; Sato, E.
2014-12-01
One of the challenges involved in learning earth science is the visualization of processes which occur over large spatial and temporal scales. Shaping Watersheds is an interactive 3D exhibit developed with support from the National Science Foundation by a team of scientists, science educators, exhibit designers, and evaluation professionals, in an effort to improve public understanding and stewardship of freshwater ecosystems. The hands-on augmented reality sandbox allows users to create topographic models by shaping real "kinetic" sand. The exhibit is augmented in real time by the projection of a color elevation map and contour lines which exactly match the sand topography, using a closed loop of a Microsoft Kinect 3D camera, simulation and visualization software, and a data projector. When an object (such as a hand) is sensed at a particular height above the sand surface, virtual rain appears as a blue visualization on the surface and a flow simulation (based on a depth-integrated version of the Navier-Stokes equations) moves the water across the landscape. The blueprints and software to build the sandbox are freely available online (http://3dh2o.org/71/) under the GNU General Public License, together with a facilitator's guide and a public forum (with how-to documents and FAQs). Using these resources, many institutions (20 and counting) have built their own exhibits to teach a wide variety of topics (ranging from watershed stewardship, hydrology, geology, topographic map reading, and planetary science) in a variety of venues (such as traveling science exhibits, K-12 schools, university earth science departments, and museums). Additional exhibit extensions and learning modules are planned such as tsunami modeling and prediction. Moreover, a study is underway at the Lawrence Hall of Science to assess how various aspects of the sandbox (such as visualization color scheme and level of interactivity) affect understanding of earth science concepts.
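The core sensing loop of such a sandbox can be summarized as depth-to-elevation conversion, binning of elevations into color bands, and a rain trigger for objects hovering above the sand. The sketch below illustrates that mapping with toy numbers; the thresholds, units, and array shapes are assumptions, and the real exhibit uses the Kinect driver and the GPU-based simulation and projection pipeline described at the project site.

```python
# Minimal sketch of the sandbox's core mapping: depth image -> elevation map,
# elevation -> color band, and a "rain" mask for objects held above the sand.
# Camera height, thresholds, and the toy frames are illustrative assumptions.
import numpy as np

CAMERA_HEIGHT_MM = 1000.0     # distance from camera to sandbox base (assumed)
RAIN_THRESHOLD_MM = 150.0     # objects this far above the sand trigger rain

def elevation_from_depth(depth_mm):
    """Elevation above the sandbox base, in mm (larger depth = lower surface)."""
    return CAMERA_HEIGHT_MM - depth_mm

def color_bands(elevation_mm, levels=(0, 40, 80, 120)):
    """Index of the elevation band for each pixel (e.g. water/shore/land/peak)."""
    return np.digitize(elevation_mm, levels)

def rain_mask(depth_mm, sand_elevation_mm):
    """True where something (e.g. a hand) hovers well above the sand surface."""
    return elevation_from_depth(depth_mm) > sand_elevation_mm + RAIN_THRESHOLD_MM

# Toy 2x3 depth frame (mm): one pixel holds a hand hovering over low ground.
depth = np.array([[980.0, 920.0, 700.0],
                  [960.0, 900.0, 860.0]])
sand = elevation_from_depth(np.array([[980.0, 920.0, 880.0],
                                      [960.0, 900.0, 860.0]]))
print(color_bands(elevation_from_depth(depth)))
print(rain_mask(depth, sand))   # True only where the hand is detected
```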
Visual Distractors Disrupt Audiovisual Integration Regardless of Stimulus Complexity
Gibney, Kyla D.; Aligbe, Enimielen; Eggleston, Brady A.; Nunes, Sarah R.; Kerkhoff, Willa G.; Dean, Cassandra L.; Kwakye, Leslie D.
2017-01-01
The intricate relationship between multisensory integration and attention has been extensively researched in the multisensory field; however, the necessity of attention for the binding of multisensory stimuli remains contested. In the current study, we investigated whether diverting attention from well-known multisensory tasks would disrupt integration and whether the complexity of the stimulus and task modulated this interaction. A secondary objective of this study was to investigate individual differences in the interaction of attention and multisensory integration. Participants completed a simple audiovisual speeded detection task and McGurk task under various perceptual load conditions: no load (multisensory task while visual distractors present), low load (multisensory task while detecting the presence of a yellow letter in the visual distractors), and high load (multisensory task while detecting the presence of a number in the visual distractors). Consistent with prior studies, we found that increased perceptual load led to decreased reports of the McGurk illusion, thus confirming the necessity of attention for the integration of speech stimuli. Although increased perceptual load led to longer response times for all stimuli in the speeded detection task, participants responded faster on multisensory trials than unisensory trials. However, the increase in multisensory response times violated the race model for no and low perceptual load conditions only. Additionally, a geometric measure of Miller’s inequality showed a decrease in multisensory integration for the speeded detection task with increasing perceptual load. Surprisingly, we found diverging changes in multisensory integration with increasing load for participants who did not show integration for the no load condition: no changes in integration for the McGurk task with increasing load but increases in integration for the detection task. The results of this study indicate that attention plays a crucial role in multisensory integration for both highly complex and simple multisensory tasks and that attention may interact differently with multisensory processing in individuals who do not strongly integrate multisensory information. PMID:28163675
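The race-model (Miller's inequality) analysis mentioned above compares the multisensory response-time CDF with the bound formed by summing the two unisensory CDFs, and summarizes any violation as the positive area between the curves. The sketch below illustrates this with synthetic response times; the distributions and time grid are assumptions, not the study's data.

```python
# Sketch of a race-model (Miller's inequality) test: compare the audiovisual
# RT CDF with the sum of the unisensory CDFs (capped at 1) and integrate the
# positive difference. Synthetic RTs and the grid are illustrative assumptions.
import numpy as np

def empirical_cdf(rts, grid):
    rts = np.sort(np.asarray(rts, dtype=float))
    return np.searchsorted(rts, grid, side="right") / rts.size

rng = np.random.default_rng(0)
audio_rt = rng.normal(420, 40, 200)     # ms, unisensory auditory
visual_rt = rng.normal(440, 40, 200)    # ms, unisensory visual
av_rt = rng.normal(370, 35, 200)        # ms, audiovisual (faster than either)

grid = np.linspace(250, 600, 351)       # 1-ms steps
bound = np.clip(empirical_cdf(audio_rt, grid) + empirical_cdf(visual_rt, grid), 0, 1)
violation = np.trapz(np.clip(empirical_cdf(av_rt, grid) - bound, 0, None), grid)
print(f"race-model violation area ≈ {violation:.1f} (ms · probability)")
```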
Influence of early attentional modulation on working memory
Gazzaley, Adam
2011-01-01
It is now established that attention influences working memory (WM) at multiple processing stages. This liaison between attention and WM poses several interesting empirical questions. Notably, does attention impact WM via its influences on early perceptual processing? If so, what are the critical factors at play in this attention-perception-WM interaction? I review recent data from our laboratory utilizing a variety of techniques (electroencephalography (EEG), functional MRI (fMRI) and transcranial magnetic stimulation (TMS)), stimuli (features and complex objects), novel experimental paradigms, and research populations (younger and older adults), which converge to support the conclusion that top-down modulation of visual cortical activity at early perceptual processing stages (100–200 ms after stimulus onset) impacts subsequent WM performance. Factors that affect attentional control at this stage include cognitive load, task practice, perceptual training, and aging. These developments highlight the complex and dynamic relationships among perception, attention, and memory. PMID:21184764
PuMA: the Porous Microstructure Analysis software
NASA Astrophysics Data System (ADS)
Ferguson, Joseph C.; Panerai, Francesco; Borner, Arnaud; Mansour, Nagi N.
2018-01-01
The Porous Microstructure Analysis (PuMA) software has been developed in order to compute effective material properties and perform material response simulations on digitized microstructures of porous media. PuMA is able to import digital three-dimensional images obtained from X-ray microtomography or to generate artificial microstructures. PuMA also provides a module for interactive 3D visualizations. Version 2.1 includes modules to compute porosity, volume fractions, and surface area. Two finite difference Laplace solvers have been implemented to compute the continuum tortuosity factor, effective thermal conductivity, and effective electrical conductivity. A random walk method has been developed to compute tortuosity factors from the continuum to rarefied regimes. Representative elementary volume analysis can be performed on each property. The software also includes a time-dependent, particle-based model for the oxidation of fibrous materials. PuMA was developed for Linux operating systems and is available as NASA software under a US & Foreign release.
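As an illustration of the simplest class of computation such a tool performs, the sketch below estimates porosity and phase volume fractions from a segmented voxel array; the array, labels, and function names are hypothetical and do not reflect PuMA's API.

```python
# Minimal sketch of a voxel-based property computation: porosity and phase
# volume fractions from a segmented 3D image. Labels and sizes are hypothetical.
import numpy as np

def volume_fractions(voxels):
    """Fraction of the domain occupied by each integer phase label."""
    labels, counts = np.unique(voxels, return_counts=True)
    return dict(zip(labels.tolist(), (counts / voxels.size).tolist()))

# Hypothetical 50^3 segmented microstructure: 0 = void, 1 = fiber, 2 = matrix.
rng = np.random.default_rng(1)
micro = rng.choice([0, 1, 2], size=(50, 50, 50), p=[0.6, 0.3, 0.1])

fractions = volume_fractions(micro)
print(f"porosity ≈ {fractions[0]:.3f}")               # void fraction
print(f"fiber volume fraction ≈ {fractions[1]:.3f}")
```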
Kindermann, Nicole K; Werner, Natalie S
2014-12-01
Mental stress evokes several physiological responses such as the acceleration of heart rate, increase of electrodermal activity and the release of adrenaline. Moreover, physiological stress responses interact with emotional and behavioral stress responses. In the present study we provide evidence that viscero-sensory feedback from the heart (cardiac perception) is an important factor modulating emotional and cognitive stress responses. In our study, we compared participants with high versus low cardiac perception using a computerized mental stress task, in which they had to respond to rapidly presented visual and acoustic stimuli. Additionally, we assessed physiological responses (heart rate, skin conductance). Participants high in cardiac perception reported more negative emotions and showed worse task performance under the stressor than participants low in cardiac perception. These results were not moderated by physiological responses. We conclude that cardiac perception modulates stress responses by intensifying negative emotions and by impairing cognitive performance.
Apparatus and method for interaction phenomena with world modules in data-flow-based simulation
Xavier, Patrick G. [Albuquerque, NM]; Gottlieb, Eric J. [Corrales, NM]; McDonald, Michael J. [Albuquerque, NM]; Oppel, Fred J., III
2006-08-01
A method and apparatus accommodate interaction phenomena in a data-flow-based simulation of a system of elements, by establishing meta-modules to simulate system elements and by establishing world modules associated with interaction phenomena. World modules are associated with proxy modules from a group of meta-modules associated with one of the interaction phenomena. The world modules include a communication world, a sensor world, a mobility world, and a contact world. World modules can be further associated with other world modules if necessary. Interaction phenomena are simulated in corresponding world modules by accessing member functions in the associated group of proxy modules. Proxy modules can be dynamically allocated at a desired point in the simulation to accommodate the addition of elements to the system of elements being simulated, such as a system of robots, a system of communication terminals, or a system of vehicles.
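The arrangement described above (elements expose proxy modules that register with the world module simulating a given interaction phenomenon) can be sketched conceptually as follows; the class names, the single communication world, and the distance check are hypothetical illustrations, not the patented implementation.

```python
# Conceptual sketch: simulated elements register proxies with a world module,
# which resolves an interaction phenomenon (here, radio communication) by
# calling proxy member functions. Names and the range check are hypothetical.
from dataclasses import dataclass, field

@dataclass
class RadioProxy:                 # proxy exposed by one simulated element (e.g., a robot)
    name: str
    x: float
    y: float

    def receive(self, sender: str, message: str) -> None:
        print(f"{self.name} received '{message}' from {sender}")

@dataclass
class CommunicationWorld:         # world module for the communication phenomenon
    range_m: float
    proxies: list = field(default_factory=list)

    def register(self, proxy: RadioProxy) -> None:
        # proxies can be added dynamically as elements join the simulation
        self.proxies.append(proxy)

    def broadcast(self, sender: RadioProxy, message: str) -> None:
        for proxy in self.proxies:
            distance = ((proxy.x - sender.x) ** 2 + (proxy.y - sender.y) ** 2) ** 0.5
            if proxy is not sender and distance <= self.range_m:
                proxy.receive(sender.name, message)

world = CommunicationWorld(range_m=10.0)
robots = [RadioProxy("robot_a", 0.0, 0.0), RadioProxy("robot_b", 5.0, 0.0), RadioProxy("robot_c", 50.0, 0.0)]
for robot in robots:
    world.register(robot)
world.broadcast(robots[0], "waypoint reached")   # only robot_b is within range
```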
Szabo, Miruna; Deco, Gustavo; Fusi, Stefano; Del Giudice, Paolo; Mattia, Maurizio; Stetter, Martin
2006-05-01
Recent experiments on behaving monkeys have shown that learning a visual categorization task makes the neurons in infero-temporal cortex (ITC) more selective to the task-relevant features of the stimuli (Sigala and Logothetis, Nature 415: 318-320, 2002). We hypothesize that such a selectivity modulation emerges from the interaction between ITC and another cortical area, presumably the prefrontal cortex (PFC), where the previously learned stimulus categories are encoded. We propose a biologically inspired model of excitatory and inhibitory spiking neurons with plastic synapses, modified according to a reward-based Hebbian learning rule, to explain the experimental results and test the validity of our hypothesis. We assume that the ITC neurons, receiving feature selective inputs, form stronger connections with the category specific neurons to which they are consistently associated in rewarded trials. After learning, the top-down influence of PFC neurons enhances the selectivity of the ITC neurons encoding the behaviorally relevant features of the stimuli, as observed in the experiments. We conclude that the perceptual representation in visual areas like ITC can be strongly affected by the interaction with other areas which are devoted to higher cognitive functions.
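The plasticity rule described (synapses strengthened by a reward-gated Hebbian update) can be illustrated with a minimal rate-based sketch; the learning rate, bounds, and firing-rate values are assumptions, and the sketch omits the spiking and inhibitory dynamics of the full model.

```python
# Sketch of a reward-modulated Hebbian update: connections between feature-
# selective (ITC-like) units and category (PFC-like) units grow when their
# joint activity coincides with reward. Parameters are illustrative assumptions.
import numpy as np

def reward_hebbian_update(w, pre, post, reward, lr=0.05, w_max=1.0):
    """w: (n_post, n_pre) weights; pre/post: firing rates in [0, 1]; reward: 0 or 1."""
    dw = lr * reward * np.outer(post, pre)   # potentiate co-active pairs on rewarded trials
    return np.clip(w + dw, 0.0, w_max)

rng = np.random.default_rng(2)
w = rng.uniform(0.0, 0.2, size=(2, 4))       # 2 category units, 4 feature units
pre = np.array([1.0, 0.8, 0.1, 0.0])         # task-relevant features strongly active
post = np.array([1.0, 0.0])                  # the rewarded category unit fires

for _ in range(20):                          # repeated rewarded pairings
    w = reward_hebbian_update(w, pre, post, reward=1)
print(np.round(w, 2))                        # row 0 grows for the relevant features
```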
Differential contribution of early visual areas to the perceptual process of contour processing.
Schira, Mark M; Fahle, Manfred; Donner, Tobias H; Kraft, Antje; Brandt, Stephan A
2004-04-01
We investigated contour processing and figure-ground detection within human retinotopic areas using event-related functional magnetic resonance imaging (fMRI) in 6 healthy and naïve subjects. A figure (6 degrees side length) was created by a 2nd-order texture contour. An independent and demanding foveal letter-discrimination task prevented subjects from noticing this more peripheral contour stimulus. The contour subdivided our stimulus into a figure and a ground. Using localizers and retinotopic mapping stimuli we were able to subdivide each early visual area into 3 eccentricity regions corresponding to 1) the central figure, 2) the area along the contour, and 3) the background. In these subregions we investigated the hemodynamic responses to our stimuli and compared responses with or without the contour defining the figure. No contour-related blood oxygenation level-dependent modulation in early visual areas V1, V3, VP, and MT+ was found. Significant signal modulation in the contour subregions of V2v, V2d, V3a, and LO occurred. This activation pattern was different from comparable studies, which might be attributable to the letter-discrimination task reducing confounding attentional modulation. In V3a, but not in any other retinotopic area, signal modulation corresponding to the central figure could be detected. Such contextual modulation will be discussed in light of the recurrent processing hypothesis and the role of visual awareness.
Ben-Simon, Eti; Podlipsky, Ilana; Okon-Singer, Hadas; Gruberger, Michal; Cvetkovic, Dean; Intrator, Nathan; Hendler, Talma
2013-03-01
The unique role of the EEG alpha rhythm in different states of cortical activity is still debated. The main theories regarding alpha function posit either sensory processing or attention allocation as the main processes governing its modulation. Closing and opening the eyes, a well-known manipulation of the alpha rhythm, could be regarded as a shift of attention allocation from inward to outward focus, although under light conditions it is also accompanied by a change in visual input. To disentangle the effects of attention allocation and sensory visual input on alpha modulation, 14 healthy subjects were asked to open and close their eyes during conditions of light and of complete darkness while simultaneous recordings of EEG and fMRI were acquired. Thus, during complete darkness the eyes-open condition is not related to visual input but only to attention allocation, allowing direct examination of its role in alpha modulation. A data-driven ridge regression classifier was applied to the EEG data in order to ascertain the contribution of the alpha rhythm to eyes-open/eyes-closed inference in both lighting conditions. Classifier results revealed significant alpha contribution during both light and dark conditions, suggesting that alpha rhythm modulation is closely linked to the change in the direction of attention regardless of the presence of visual sensory input. Furthermore, fMRI activation maps derived from an alpha modulation time-course during the complete darkness condition exhibited a right frontal cortical network associated with attention allocation. These findings support the importance of top-down processes such as attention allocation to alpha rhythm modulation, possibly as a prerequisite to its known bottom-up processing of sensory input. © 2012 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.
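The classification step (a ridge classifier applied to EEG features to infer eyes-open versus eyes-closed) can be illustrated with a simplified sketch using simulated alpha-band power as the only feature; the signals, band edges, and cross-validation scheme are assumptions, not the study's data-driven pipeline.

```python
# Simplified sketch: single-trial alpha-band power feeds a ridge classifier
# separating eyes-open from eyes-closed epochs. Simulated signals and
# parameters are illustrative assumptions only.
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import RidgeClassifier
from sklearn.model_selection import cross_val_score

fs, n_trials, n_samples = 250, 80, 500            # 2-s epochs at 250 Hz
rng = np.random.default_rng(3)
t = np.arange(n_samples) / fs

def simulate_epoch(alpha_amp):
    return alpha_amp * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1.0, n_samples)

# Eyes-closed epochs (label 1) have stronger 10-Hz activity than eyes-open (label 0).
epochs = np.array([simulate_epoch(2.0 if i % 2 else 0.5) for i in range(n_trials)])
labels = np.array([i % 2 for i in range(n_trials)])

def alpha_power(epoch):
    freqs, psd = welch(epoch, fs=fs, nperseg=256)
    band = (freqs >= 8) & (freqs <= 12)
    return np.log(psd[band].mean())

features = np.array([[alpha_power(e)] for e in epochs])
scores = cross_val_score(RidgeClassifier(), features, labels, cv=5)
print(f"cross-validated accuracy ≈ {scores.mean():.2f}")
```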
Estrogen-cholinergic interactions: Implications for cognitive aging.
Newhouse, Paul; Dumas, Julie
2015-08-01
This article is part of a Special Issue "Estradiol and Cognition". While many studies in humans have investigated the effects of estrogen and hormone therapy on cognition, potential neurobiological correlates of these effects have been less well studied. An important site of action for estrogen in the brain is the cholinergic system. Several decades of research support the critical role of CNS cholinergic systems in cognition in humans, particularly in learning and memory formation and attention. In humans, the cholinergic system has been implicated in many aspects of cognition including the partitioning of attentional resources, working memory, inhibition of irrelevant information, and improved performance on effort-demanding tasks. Studies support the hypothesis that estradiol helps to maintain aspects of attention and verbal and visual memory. Such cognitive domains are exactly those modulated by cholinergic systems and extensive basic and preclinical work over the past several decades has clearly shown that basal forebrain cholinergic systems are dependent on estradiol support for adequate functioning. This paper will review recent human studies from our laboratories and others that have extended preclinical research examining estrogen-cholinergic interactions to humans. Studies examined include estradiol and cholinergic antagonist reversal studies in normal older women, examinations of the neural representations of estrogen-cholinergic interactions using functional brain imaging, and studies of the ability of selective estrogen receptor modulators such as tamoxifen to interact with cholinergic-mediated cognitive performance. We also discuss the implications of these studies for the underlying hypotheses of cholinergic-estrogen interactions and cognitive aging, and indications for prophylactic and therapeutic potential that may exploit these effects. Published by Elsevier Inc.