Explaining seeing? Disentangling qualia from perceptual organization.
Ibáñez, Agustin; Bekinschtein, Tristan
2010-09-01
Abstract Visual perception and integration seem to play an essential role in our conscious phenomenology. Relatively local neural processing of a reentrant nature may explain several visual integration processes (feature binding, figure-ground segregation, object recognition, inference, competition), even without attention or cognitive control. Given these statements, should the neural signatures of visual integration (via reentrant processing) be considered non-reportable phenomenological qualia? We argue that qualia are not required to understand this perceptual organization.
Vision Problems and Reduced Reading Outcomes in Queensland Schoolchildren.
Hopkins, Shelley; Sampson, Geoff P; Hendicott, Peter L; Wood, Joanne M
2017-03-01
To assess the relationship between vision and reading outcomes in Indigenous and non-Indigenous schoolchildren to determine whether vision problems are associated with lower reading outcomes in these populations. Vision testing and reading assessments were performed on 508 Indigenous and non-Indigenous schoolchildren in Queensland, Australia, divided into two age groups: Grades 1 and 2 (6-7 years of age) and Grades 6 and 7 (12-13 years of age). Vision parameters measured included cycloplegic refraction, near point of convergence, heterophoria, fusional vergence range, rapid automatized naming, and visual motor integration. The following vision conditions were then classified based on the vision findings: uncorrected hyperopia, convergence insufficiency, reduced rapid automatized naming, and delayed visual motor integration. Reading accuracy and reading comprehension were measured with the Neale reading test. The effects of uncorrected hyperopia, convergence insufficiency, reduced rapid automatized naming, and delayed visual motor integration on reading accuracy and reading comprehension were investigated with ANCOVAs. The ANCOVAs explained a significant proportion of variance in both reading accuracy and reading comprehension scores in both age groups, with 40% of the variation in reading accuracy and 33% of the variation in reading comprehension explained in the younger age group, and 27% and 10% of the variation in reading accuracy and reading comprehension, respectively, in the older age group. The vision parameters of visual motor integration and rapid automatized naming were significant predictors in all ANCOVAs (P < .01). The direction of the relationship was such that reduced reading results were explained by reduced visual motor integration and rapid automatized naming results. Both reduced rapid automatized naming and visual motor integration were associated with poorer reading outcomes in Indigenous and non-Indigenous children.
This is an important finding given the recent emphasis placed on Indigenous children's reading skills and the fact that reduced rapid automatized naming and visual motor integration skills are more common in this group.
Memisevic, Haris; Sinanovic, Osman
2013-12-01
The goal of this study was to assess the relationship between visual-motor integration and executive functions, and in particular, the extent to which executive functions can predict visual-motor integration skills in children with intellectual disability. The sample consisted of 90 children (54 boys, 36 girls; M age = 11.3 yr., SD = 2.7, range 7-15) with intellectual disabilities of various etiologies. The measures of executive function were the 8 subscales of the Behavior Rating Inventory of Executive Function (BRIEF): Inhibition, Shifting, Emotional Control, Initiating, Working memory, Planning, Organization of material, and Monitoring. Visual-motor integration was measured with the Acadia test of visual-motor integration (VMI). Regression analysis revealed that the BRIEF subscales explained 38% of the variance in VMI scores. Of all the BRIEF subscales, only two were statistically significant predictors of visual-motor integration: Working memory and Monitoring. Possible implications of this finding are further elaborated.
Carlson, Abby G; Rowe, Ellen; Curby, Timothy W
2013-01-01
Recent research has established a connection between children's fine motor skills and their academic performance. Previous research has focused on fine motor skills measured prior to elementary school, while the present sample included children ages 5-18 years old, making it possible to examine whether this link remains relevant throughout childhood and adolescence. Furthermore, the majority of research linking fine motor skills and academic achievement has not determined which specific components of fine motor skill are driving this relation. The few studies that have looked at associations of separate fine motor tasks with achievement suggest that copying tasks that tap visual-spatial integration skills are most closely related to achievement. The present study examined two separate elements of fine motor skills--visual-motor coordination and visual-spatial integration--and their associations with various measures of academic achievement. Visual-motor coordination was measured using tracing tasks, while visual-spatial integration was measured using copy-a-figure tasks. After controlling for gender, socioeconomic status, IQ, and visual-motor coordination, visual-spatial integration explained significant variance in children's math and written expression achievement. Knowing that visual-spatial integration skills are associated with these two achievement domains suggests potential avenues for targeted math and writing interventions for children of all ages.
Behind Mathematical Learning Disabilities: What about Visual Perception and Motor Skills?
ERIC Educational Resources Information Center
Pieters, Stefanie; Desoete, Annemie; Roeyers, Herbert; Vanderswalmen, Ruth; Van Waelvelde, Hilde
2012-01-01
In a sample of 39 children with mathematical learning disabilities (MLD) and 106 typically developing controls belonging to three control groups of three different ages, we found that visual perception, motor skills and visual-motor integration explained a substantial proportion of the variance in either number fact retrieval or procedural…
Chen, Chen; Schneps, Matthew H; Masyn, Katherine E; Thomson, Jennifer M
2016-11-01
Increasing evidence has shown visual attention span to be a factor, distinct from phonological skills, that explains single-word identification (pseudo-word/word reading) performance in dyslexia. Yet, little is known about how well visual attention span explains text comprehension. Observing reading comprehension in a sample of 105 high school students with dyslexia, we used a pathway analysis to examine the direct and indirect paths between visual attention span and reading comprehension while controlling for other factors such as phonological awareness, letter identification, short-term memory, IQ and age. Integrating phonemic decoding efficiency skills in the analytic model, this study aimed to disentangle how visual attention span and phonological skills work together in reading comprehension for readers with dyslexia. We found visual attention span to have a significant direct effect on more difficult reading comprehension but not on an easier level. It also had a significant direct effect on pseudo-word identification but not on word identification. In addition, we found that visual attention span indirectly explains reading comprehension through pseudo-word reading and word reading skills. This study supports the hypothesis that at least part of the dyslexic profile can be explained by visual attention abilities. Copyright © 2016 John Wiley & Sons, Ltd.
Visual search, visual streams, and visual architectures.
Green, M
1991-10-01
Most psychological, physiological, and computational models of early vision suggest that retinal information is divided into a parallel set of feature modules. The dominant theories of visual search assume that these modules form a "blackboard" architecture: a set of independent representations that communicate only through a central processor. A review of research shows that blackboard-based theories, such as feature-integration theory, cannot easily explain the existing data. The experimental evidence is more consistent with a "network" architecture, which stresses that: (1) feature modules are directly connected to one another, (2) features and their locations are represented together, (3) feature detection and integration are not distinct processing stages, and (4) no executive control process, such as focal attention, is needed to integrate features. Attention is not a spotlight that synthesizes objects from raw features. Instead, it is better to conceptualize attention as an aperture which masks irrelevant visual information.
A Dynamic Bayesian Observer Model Reveals Origins of Bias in Visual Path Integration.
Lakshminarasimhan, Kaushik J; Petsalis, Marina; Park, Hyeshin; DeAngelis, Gregory C; Pitkow, Xaq; Angelaki, Dora E
2018-06-20
Path integration is a strategy by which animals track their position by integrating their self-motion velocity. To identify the computational origins of bias in visual path integration, we asked human subjects to navigate in a virtual environment using optic flow and found that they generally traveled beyond the goal location. Such a behavior could stem from leaky integration of unbiased self-motion velocity estimates or from a prior expectation favoring slower speeds that causes velocity underestimation. Testing both alternatives using a probabilistic framework that maximizes expected reward, we found that subjects' biases were better explained by a slow-speed prior than imperfect integration. When subjects integrate paths over long periods, this framework intriguingly predicts a distance-dependent bias reversal due to buildup of uncertainty, which we also confirmed experimentally. These results suggest that visual path integration in noisy environments is limited largely by biases in processing optic flow rather than by leaky integration. Copyright © 2018 Elsevier Inc. All rights reserved.
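The two candidate explanations the authors compare can be sketched numerically. In this toy simulation (all parameter values are hypothetical, not the paper's fits), both a leaky integrator of unbiased velocity and a perfect integrator of a prior-shrunk velocity estimate predict an underestimated travel distance, which would lead a subject to travel beyond the goal:

```python
import numpy as np

# Hypothetical parameters -- for illustration only, not from the paper.
dt = 0.01            # integration step (s)
T = 5.0              # travel duration (s)
v_true = 2.0         # constant self-motion speed (m/s)
t = np.arange(0, T, dt)

# Hypothesis 1: leaky integration of unbiased velocity estimates.
tau = 4.0            # leak time constant (s)
x = 0.0
for _ in t:
    x += dt * (v_true - x / tau)   # dx/dt = v - x/tau
leaky_distance = x

# Hypothesis 2: perfect integration of prior-biased velocity.
# A zero-centered (slow-speed) prior shrinks the velocity estimate.
shrinkage = 0.8      # posterior mean = shrinkage * true speed
prior_distance = shrinkage * v_true * T

true_distance = v_true * T
# Both hypotheses predict perceived distance < true distance,
# i.e. the subject overshoots the goal location.
print(true_distance, leaky_distance, prior_distance)
```

Because both mechanisms produce the same qualitative undershoot, distinguishing them requires the probabilistic reward-maximization framework the abstract describes, which makes diverging predictions as uncertainty builds up over longer paths.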
Visual Literacy: Implications for the Production of Children's Television Programs.
ERIC Educational Resources Information Center
Amey, L. J.
Visual literacy, the integration of seeing with other cognitive processes, is an essential tool of learning. To explain the relationship between the perceiver and the perceived, three types of theories can be brought to bear: introverted; extroverted; and transactional. Franklin Fearing, George Herbert Mead, Martin Buber, and other theorists have…
Modeling the role of parallel processing in visual search.
Cave, K R; Wolfe, J M
1990-04-01
Treisman's Feature Integration Theory and Julesz's Texton Theory explain many aspects of visual search. However, these theories require that parallel processing mechanisms not be used in many visual searches for which they would be useful, and they imply that visual processing should be much slower than it is. Most importantly, they cannot account for recent data showing that some subjects can perform some conjunction searches very efficiently. Feature Integration Theory can be modified so that it accounts for these data and helps to answer these questions. In this new theory, which we call Guided Search, the parallel stage guides the serial stage as it chooses display elements to process. A computer simulation of Guided Search produces the same general patterns as human subjects in a number of different types of visual search.
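The core idea of Guided Search, a parallel stage ranking display items so that a serial stage examines the most promising first, can be sketched in a few lines. This is a toy illustration with invented display and noise parameters, not Wolfe's actual simulation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy conjunction search: the target is the unique red, vertical item.
n_items = 24
colors = rng.integers(0, 2, n_items)   # 1 = red, 0 = green
orients = rng.integers(0, 2, n_items)  # 1 = vertical, 0 = horizontal
colors[0], orients[0] = 1, 1           # plant the target at index 0
for i in range(1, n_items):            # ensure distractors are non-targets
    if colors[i] == 1 and orients[i] == 1:
        orients[i] = 0

# Parallel stage: each feature map votes for items matching the target's
# known features; additive noise makes the guidance imperfect.
activation = (colors == 1).astype(float) + (orients == 1).astype(float)
activation += rng.normal(0, 0.3, n_items)

# Serial stage: attend items in order of decreasing activation.
order = np.argsort(-activation)
visits_to_target = int(np.where(order == 0)[0][0]) + 1
print(visits_to_target)  # typically far fewer visits than a random scan
```

Because the conjunction target receives votes from both feature maps, it usually tops the activation ranking, which is how the model reproduces the efficient conjunction searches that pure serial accounts cannot explain.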
Visual feature integration theory: past, present, and future.
Quinlan, Philip T
2003-09-01
Visual feature integration theory was one of the most influential theories of visual information processing in the last quarter of the 20th century. This article provides an exposition of the theory and a review of the associated data. In the past much emphasis has been placed on how the theory explains performance in various visual search tasks. The relevant literature is discussed and alternative accounts are described. Amendments to the theory are also set out. Many other issues concerning internal processes and representations implicated by the theory are reviewed. The article closes with a synopsis of what has been learned from consideration of the theory, and it is concluded that some of the issues may remain intractable unless appropriate neuroscientific investigations are carried out.
van den Berg, Ronald; Roerdink, Jos B T M; Cornelissen, Frans W
2010-01-22
An object in the peripheral visual field is more difficult to recognize when surrounded by other objects. This phenomenon is called "crowding". Crowding places a fundamental constraint on human vision that limits performance on numerous tasks. It has been suggested that crowding results from spatial feature integration necessary for object recognition. However, in the absence of convincing models, this theory has remained controversial. Here, we present a quantitative and physiologically plausible model for spatial integration of orientation signals, based on the principles of population coding. Using simulations, we demonstrate that this model coherently accounts for fundamental properties of crowding, including critical spacing, "compulsory averaging", and a foveal-peripheral anisotropy. Moreover, we show that the model predicts increased responses to correlated visual stimuli. Altogether, these results suggest that crowding has little immediate bearing on object recognition but is a by-product of a general, elementary integration mechanism in early vision aimed at improving signal quality.
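The "compulsory averaging" property of such a population-coding account can be illustrated with a minimal sketch (tuning width, grid, and decoder are invented for illustration): pooling the population responses to a target and a flanker inside one integration field and decoding with a population vector yields an orientation between the two.

```python
import numpy as np

pref = np.deg2rad(np.arange(0, 180, 2))          # preferred orientations

def population_response(theta, kappa=4.0):
    # Orientation is circular with period pi, hence the factor 2.
    return np.exp(kappa * np.cos(2 * (pref - theta)))

target, flanker = np.deg2rad(20), np.deg2rad(60)

# Crowding: the integration field pools target and flanker signals.
pooled = population_response(target) + population_response(flanker)

# Population-vector decoding in double-angle space.
decoded = 0.5 * np.angle(np.sum(pooled * np.exp(2j * pref)))
print(np.rad2deg(decoded))  # ~40.0: halfway between target and flanker
```

The decoded orientation is the average of the two stimuli, a signature of crowding as an early integration mechanism rather than a failure of object recognition per se.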
Magnotti, John F; Beauchamp, Michael S
2017-02-01
Audiovisual speech integration combines information from auditory speech (talker's voice) and visual speech (talker's mouth movements) to improve perceptual accuracy. However, if the auditory and visual speech emanate from different talkers, integration decreases accuracy. Therefore, a key step in audiovisual speech perception is deciding whether auditory and visual speech have the same source, a process known as causal inference. A well-known illusion, the McGurk Effect, consists of incongruent audiovisual syllables, such as auditory "ba" + visual "ga" (AbaVga), that are integrated to produce a fused percept ("da"). This illusion raises two fundamental questions: first, given the incongruence between the auditory and visual syllables in the McGurk stimulus, why are they integrated; and second, why does the McGurk effect not occur for other, very similar syllables (e.g., AgaVba). We describe a simplified model of causal inference in multisensory speech perception (CIMS) that predicts the perception of arbitrary combinations of auditory and visual speech. We applied this model to behavioral data collected from 60 subjects perceiving both McGurk and non-McGurk incongruent speech stimuli. The CIMS model successfully predicted both the audiovisual integration observed for McGurk stimuli and the lack of integration observed for non-McGurk stimuli. An identical model without causal inference failed to accurately predict perception for either form of incongruent speech. The CIMS model uses causal inference to provide a computational framework for studying how the brain performs one of its most important tasks, integrating auditory and visual speech cues to allow us to communicate with others.
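The causal-inference computation at the heart of models like CIMS can be sketched with a generic Gaussian cue-combination model (in the spirit of Körding et al.'s causal inference; all parameter values are illustrative, not the paper's fits): the observer weighs the probability that the two cues share one cause against the probability of separate causes, then averages the fused and segregated estimates accordingly.

```python
import math

def causal_inference(x_a, x_v, sig_a=1.0, sig_v=1.0, sig_p=10.0, p_c=0.5):
    """Posterior probability that auditory and visual cues share a cause,
    plus the model-averaged auditory estimate. Stimulus prior is N(0, sig_p^2)."""
    va, vv, vp = sig_a**2, sig_v**2, sig_p**2
    # Likelihood of the cue pair under one common cause:
    denom1 = va * vv + va * vp + vv * vp
    like1 = math.exp(-0.5 * ((x_a - x_v)**2 * vp + x_a**2 * vv + x_v**2 * va)
                     / denom1) / (2 * math.pi * math.sqrt(denom1))
    # Likelihood under two independent causes:
    like2 = (math.exp(-0.5 * x_a**2 / (va + vp))
             / math.sqrt(2 * math.pi * (va + vp))
             * math.exp(-0.5 * x_v**2 / (vv + vp))
             / math.sqrt(2 * math.pi * (vv + vp)))
    post_c1 = p_c * like1 / (p_c * like1 + (1 - p_c) * like2)
    # Fused (integrated) vs. segregated auditory estimates:
    fused = (x_a / va + x_v / vv) / (1 / va + 1 / vv + 1 / vp)
    segregated = x_a * vp / (va + vp)
    return post_c1, post_c1 * fused + (1 - post_c1) * segregated

# Nearby cues -> high probability of a common cause (integration);
# discrepant cues -> low probability (segregation).
print(causal_inference(1.0, 1.2)[0], causal_inference(1.0, 8.0)[0])
```

This captures both questions the abstract raises: mildly incongruent syllables (like McGurk stimuli) can still yield a high common-cause probability and thus a fused percept, while more discrepant combinations fall below the integration threshold.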
Superior haptic-to-visual shape matching in autism spectrum disorders.
Nakano, Tamami; Kato, Nobumasa; Kitazawa, Shigeru
2012-04-01
A weak central coherence theory in autism spectrum disorder (ASD) proposes that a cognitive bias toward local processing in ASD derives from a weakness in integrating local elements into a coherent whole. Using this theory, we hypothesized that shape perception through active touch, which requires sequential integration of sensorimotor traces of exploratory finger movements into a shape representation, would be impaired in ASD. Contrary to our expectation, adults with ASD showed superior performance in a haptic-to-visual delayed shape-matching task compared to adults without ASD. Accuracy in discriminating haptic lengths or haptic orientations, which lies within the somatosensory modality, did not differ between adults with ASD and adults without ASD. Moreover, this superior ability in inter-modal haptic-to-visual shape matching was not explained by the score in a unimodal visuospatial rotation task. These results suggest that individuals with ASD are not impaired in integrating sensorimotor traces into a global visual shape and that their multimodal shape representations and haptic-to-visual information transfer are more accurate than those of individuals without ASD. Copyright © 2012 Elsevier Ltd. All rights reserved.
Bachmann, Talis
2015-01-01
Perceptual phenomena such as spatio-temporal illusions and masking are typically explained by psychological (cognitive) processing theories or large-scale neural theories involving inter-areal connectivity and neural circuits comprising hundreds or more interconnected single cells. Subcellular mechanisms are hardly ever invoked for this purpose. Here, a mechanistic theoretical view is presented on how a subcellular brain mechanism, the integration of presynaptic signals arriving at different compartments of layer-5 pyramidal neurons, could explain several spatiotemporal visual-phenomenal effects that unfold over very brief, sub-second time intervals.
Chan, Louis K H; Hayward, William G
2009-02-01
In feature integration theory (FIT; A. Treisman & S. Sato, 1990), feature detection is driven by independent dimensional modules, and other searches are driven by a master map of locations that integrates dimensional information into salience signals. Although recent theoretical models have largely abandoned this distinction, some observed results are difficult to explain in its absence. The present study measured dimension-specific performance during detection and localization, tasks that require operation of dimensional modules and the master map, respectively. Results showed a dissociation between tasks in terms of both dimension-switching costs and cross-dimension attentional capture, reflecting a dimension-specific nature for detection tasks and a dimension-general nature for localization tasks. In a feature-discrimination task, results precluded an explanation based on response mode. These results are interpreted to support FIT's postulation that different mechanisms are involved in parallel and focal attention searches. This indicates that the FIT architecture should be adopted to explain the current results and that a variety of visual attention findings can be addressed within this framework. Copyright 2009 APA, all rights reserved.
Jellema, Tjeerd; Maassen, Gerard; Perrett, David I
2004-07-01
This study investigated the cellular mechanisms in the anterior part of the superior temporal sulcus (STSa) that underlie the integration of different features of the same visually perceived animate object. Three visual features were systematically manipulated: form, motion and location. In 58% of a population of cells selectively responsive to the sight of a walking agent, the location of the agent significantly influenced the cell's response. The influence of position was often evident in intricate two- and three-way interactions with the factors form and/or motion. For only one of the 31 cells tested, the response could be explained by just a single factor. For all other cells at least two factors, and for half of the cells (52%) all three factors, played a significant role in controlling responses. Our findings support a reformulation of the Ungerleider and Mishkin model, which envisages a subdivision of the visual processing into a ventral 'what' and a dorsal 'where' stream. We demonstrated that at least part of the temporal cortex ('what' stream) makes ample use of visual spatial information. Our findings open up the prospect of a much more elaborate integration of visual properties of animate objects at the single cell level. Such integration may support the comprehension of animals and their actions.
Chen, Fu-Chen; Chen, Hsin-Lin; Tu, Jui-Hung; Tsai, Chia-Liang
2015-09-01
People often multi-task in their daily life. However, the mechanisms underlying the interaction between simultaneous postural and non-postural tasks have been controversial over the years. The present study investigated the effects of light digital touch on both postural sway and visual search accuracy in order to assess two hypotheses (functional integration and resource competition) that may explain the interaction between postural sway and the performance of a non-postural task. Participants (n=42, 20 male and 22 female) were asked to inspect a blank sheet of paper or visually search for target letters in a text block while a fingertip was in light contact with a stable surface (light touch, LT), or with both arms hanging at the sides of the body (no touch, NT). The results showed significant main effects of LT on reducing the magnitude of postural sway as well as enhancing visual search accuracy compared with the NT condition. The findings support the functional integration hypothesis, demonstrating that postural sway can be modulated to improve the performance of a visual search task. Copyright © 2015 Elsevier B.V. All rights reserved.
The impact of representation format and task instruction on student understanding in science
NASA Astrophysics Data System (ADS)
Stephenson, Susan Raatz
The purpose of this study is to examine how representation format and task instructions impact student learning in a science domain. Learning outcomes were assessed via measures of mental model, declarative knowledge, and knowledge inference. Students were asked to use one of two forms of representation, either drawing or writing, during study of a science text. Further, instructions (summarize vs. explain) were varied to determine if students' intended use of the representation influenced learning. Thus, this study used a 2 (drawing vs. writing) X 2 (summarize vs. explain) between-subjects design. Drawing was hypothesized to require integration across learning materials regardless of task instructions, because drawings (by definition) require learners to integrate new information into a visual representation. Learning outcomes associated with writing were hypothesized to depend upon task instructions: when asked to summarize, writing should result in reproduction of text; when asked to explain, writing should emphasize integration processes. Because integration processes require connecting and analyzing new and prior information, it was also predicted that drawing (across both conditions of task instructions) and writing (when combined with the explain task instructions only) would result in increased metacognitive monitoring. Metacognitive monitoring was assessed indirectly via responses to metacognitive prompts interspersed throughout the study.
Modeling the Round Earth through Diagrams
NASA Astrophysics Data System (ADS)
Padalkar, Shamin; Ramadas, Jayashree
Earlier studies have found that students, including adults, have problems understanding the scientifically accepted model of the Sun-Earth-Moon system and explaining day-to-day astronomical phenomena based on it. We have been examining such problems in the context of recent research on visual-spatial reasoning. Working with middle school students in India, we have developed a pedagogical sequence to build the mental model of the Earth and tried it in three schools for socially and educationally disadvantaged students. This pedagogy was developed on the basis of (1) a reading of current research in imagery and visual-spatial reasoning and (2) students' difficulties identified during the course of pretests and interviews. Visual-spatial tools such as concrete (physical) models, gestures, and diagrams are used extensively in the teaching sequence. The building of a mental model is continually integrated with drawing inferences to understand and explain everyday phenomena. The focus of this article is inferences drawn with diagrams.
On the rules of integration of crowded orientation signals
Põder, Endel
2012-01-01
Crowding is related to an integration of feature signals over an inappropriately large area in the visual periphery. The rules of this integration are still not well understood. This study attempts to understand how the orientation signals from the target and flankers are combined. A target Gabor, together with 2, 4, or 6 flanking Gabors, was briefly presented in a peripheral location (4° eccentricity). The observer's task was to identify the orientation of the target (eight-alternative forced-choice). Performance was found to be nonmonotonically dependent on the target–flanker orientation difference (a drop at intermediate differences). For small target–flanker differences, a strong assimilation bias was observed. An effect of the number of flankers was found for heterogeneous flankers only. It appears that different rules of integration are used, dependent on some salient aspects (target pop-out, homogeneity–heterogeneity) of the stimulus pattern. The strategy of combining simple rules may be explained by the goal of the visual system to encode potentially important aspects of a stimulus with limited processing resources and using statistical regularities of the natural visual environment. PMID:23145295
An Integrative Framework of Stress, Attention, and Visuomotor Performance
Vine, Samuel J.; Moore, Lee J.; Wilson, Mark R.
2016-01-01
The aim of this article is to present an integrative conceptual framework that depicts the effect of acute stress on the performance of visually guided motor skills. We draw upon seminal theories highlighting the importance of subjective interpretations of stress on subsequent performance and outline how models of disrupted attentional control might explain this effect through impairments in visuomotor control. We first synthesize and critically discuss empirical support for theories examining these relationships in isolation. We then outline our integrative framework that seeks to provide a more complete picture of the interacting influences of stress responses (challenge and threat) and attention in explaining how elevated stress may lead to different visuomotor performance outcomes. We propose a number of mechanisms that explain why evaluations of stress are related to attentional control, and highlight the emotion of anxiety as the most likely candidate to explain why negative reactions to stress lead to disrupted attention and poor visuomotor skill performance. Finally, we propose a number of feedback loops that explain why stress responses are often self-perpetuating, as well as a number of proposed interventions that are designed to help improve or maintain performance in real world performance environments (e.g., sport, surgery, military, and aviation). PMID:27847484
Using web-based animations to teach histology.
Brisbourne, Marc A S; Chin, Susan S-L; Melnyk, Erica; Begg, David A
2002-02-15
We have been experimenting with the use of animations to teach histology as part of an interactive multimedia program we are developing to replace the traditional lecture/laboratory-based histology course in our medical and dental curricula. This program, called HistoQuest, uses animations to illustrate basic histologic principles, explain dynamic processes, integrate histologic structure with physiological function, and assist students in forming mental models with which to organize and integrate new information into their learning. With this article, we first briefly discuss the theory of mental modeling, principles of visual presentation, and how mental modeling and visual presentation can be integrated to create effective animations. We then discuss the major Web-based animation technologies that are currently available and their suitability for different visual styles and navigational structures. Finally, we describe the process we use to produce animations for our program. The approach described in this study can be used by other developers to create animations for delivery over the Internet for the teaching of histology.
Haldin, Charlotte; Nymark, Soile; Aho, Ann-Christine; Koskelainen, Ari; Donner, Kristian
2009-05-06
Human vision is approximately 10 times less sensitive than toad vision on a cool night. Here, we investigate (1) how far differences in the capacity for temporal integration underlie such differences in sensitivity and (2) whether the response kinetics of the rod photoreceptors can explain temporal integration at the behavioral level. The toad was studied as a model that allows experimentation at different body temperatures. Sensitivity, integration time, and temporal accuracy of vision were measured psychophysically by recording snapping at worm dummies moving at different velocities. Rod photoresponses were studied by ERG recording across the isolated retina. In both types of experiments, the general timescale of vision was varied by using two temperatures, 15 and 25 degrees C. Behavioral integration times were 4.3 s at 15 degrees C and 0.9 s at 25 degrees C, and rod integration times were 4.2-4.3 s at 15 degrees C and 1.0-1.3 s at 25 degrees C. Maximal behavioral sensitivity was fivefold lower at 25 degrees C than at 15 degrees C, which can be accounted for by inability of the "warm" toads to integrate light over longer times than the rods. However, the long integration time at 15 degrees C, allowing high sensitivity, degraded the accuracy of snapping toward quickly moving worms. We conclude that temporal integration explains a considerable part of all variation in absolute visual sensitivity. The strong correlation between rods and behavior suggests that the integration time of dark-adapted vision is set by rod phototransduction at the input to the visual system. This implies that there is an inexorable trade-off between temporal integration and resolution.
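The abstract's central claim, that the sensitivity difference between temperatures is accounted for by the shorter integration time, can be checked directly from the reported values: if absolute sensitivity scales with the time over which photons are summed, the ratio of behavioral integration times should match the observed sensitivity ratio.

```python
# Back-of-envelope check using the integration times reported above.
t_cold = 4.3   # behavioral integration time at 15 deg C (s)
t_warm = 0.9   # behavioral integration time at 25 deg C (s)
predicted_sensitivity_ratio = t_cold / t_warm
print(round(predicted_sensitivity_ratio, 1))  # ~4.8
```

The predicted ratio of roughly 4.8 is close to the fivefold sensitivity difference the authors observed, consistent with their conclusion that rod integration time limits absolute sensitivity.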
Brito, Pedro N; Gomes, Nuno L; Vieira, Marco P; Faria, Pedro A; Fernandes, Augusto V; Rocha-Sousa, Amândio; Falcão-Reis, Fernando
2014-02-01
To study the potential association between fundus autofluorescence, spectral-domain optical coherence tomography, and visual acuity in patients undergoing surgery because of epiretinal membranes. Prospective, interventional case series including 26 patients submitted to vitrectomy because of symptomatic epiretinal membranes. Preoperative evaluation consisted of a complete ophthalmologic examination, autofluorescence, and spectral-domain optical coherence tomography. Studied variables included foveal autofluorescence (fov.AF), photoreceptor inner segment/outer segment (IS/OS) junction line integrity, external limiting membrane integrity, central foveal thickness, and foveal morphology. All examinations were repeated at the first, third, and sixth postoperative months. The main outcome measures were logarithm of minimal angle resolution visual acuity, fov.AF integrity, and IS/OS integrity. All cases showing a continuous IS/OS line had an intact fov.AF, whereas patients with IS/OS disruption could have either an increased area of foveal hypoautofluorescence or an intact fov.AF, with the latter being associated with IS/OS integrity recovery in follow-up spectral-domain optical coherence tomography imaging. The only preoperative variables presenting a significant correlation with final visual acuity were baseline visual acuity (P = 0.047) and fov.AF grade (P = 0.023). Recovery of IS/OS line integrity after surgery, in patients with preoperative IS/OS disruption and normal fov.AF, can be explained by the presence of a functional retinal pigment epithelium-photoreceptor complex, supporting normal photoreceptor activity. Autofluorescence imaging provides a functional component to the study of epiretinal membranes, complementing the structural information obtained with optical coherence tomography.
Modelling audiovisual integration of affect from videos and music.
Gao, Chuanji; Wedell, Douglas H; Kim, Jongwan; Weber, Christine E; Shinkareva, Svetlana V
2018-05-01
Two experiments examined how affective values from visual and auditory modalities are integrated. Experiment 1 paired music and videos drawn from three levels of valence while holding arousal constant. Experiment 2 included a parallel combination of three levels of arousal while holding valence constant. In each experiment, participants rated their affective states after unimodal and multimodal presentations. Experiment 1 revealed a congruency effect in which stimulus combinations of the same extreme valence resulted in more extreme state ratings than component stimuli presented in isolation. An interaction between music and video valence reflected the greater influence of negative affect. Video valence was found to have a significantly greater effect on combined ratings than music valence. The pattern of data was explained by a five-parameter differential-weight averaging model that attributed greater weight to the visual modality and increased weight with decreasing values of valence. Experiment 2 revealed a congruency effect only for high-arousal combinations and no interaction effects. This pattern was explained by a three-parameter constant-weight averaging model with greater weight for the auditory modality and a very low arousal value for the initial state. These results demonstrate key differences in audiovisual integration between valence and arousal.
Model-based analysis of pattern motion processing in mouse primary visual cortex
Muir, Dylan R.; Roth, Morgane M.; Helmchen, Fritjof; Kampa, Björn M.
2015-01-01
Neurons in sensory areas of neocortex exhibit responses tuned to specific features of the environment. In visual cortex, information about features such as edges or textures with particular orientations must be integrated to recognize a visual scene or object. Connectivity studies in rodent cortex have revealed that neurons make specific connections within sub-networks sharing common input tuning. In principle, this sub-network architecture enables local cortical circuits to integrate sensory information. However, whether feature integration indeed occurs locally in rodent primary sensory areas has not been examined directly. We studied local integration of sensory features in primary visual cortex (V1) of the mouse by presenting drifting grating and plaid stimuli, while recording the activity of neuronal populations with two-photon calcium imaging. Using a Bayesian model-based analysis framework, we classified single-cell responses as being selective for either individual grating components or for moving plaid patterns. Rather than relying on trial-averaged responses, our model-based framework takes into account single-trial responses and can easily be extended to consider any number of arbitrary predictive models. Our analysis method was able to successfully classify significantly more responses than traditional partial correlation (PC) analysis, and provides a rigorous statistical framework to rank any number of models and reject poorly performing models. We also found a large proportion of cells that respond strongly to only one stimulus class. In addition, a quarter of selectively responding neurons had more complex responses that could not be explained by any simple integration model. Our results show that a broad range of pattern integration processes already take place at the level of V1. This diversity of integration is consistent with processing of visual inputs by local sub-networks within V1 that are tuned to combinations of sensory features. 
PMID:26300738
Duration estimates within a modality are integrated sub-optimally
Cai, Ming Bo; Eagleman, David M.
2015-01-01
Perceived duration can be influenced by various properties of sensory stimuli. For example, visual stimuli of higher temporal frequency are perceived to last longer than those of lower temporal frequency. How does the brain form a representation of duration when each of two simultaneously presented stimuli influences perceived duration in a different way? To answer this question, we investigated the perceived duration of a pair of dynamic visual stimuli of different temporal frequencies in comparison to that of a single visual stimulus of either low or high temporal frequency. We found that the duration representation of simultaneously occurring visual stimuli is best described by weighting the estimates of duration based on each individual stimulus. However, the weighting performance deviates from the prediction of statistically optimal integration. In addition, we provided a Bayesian account to explain a difference in the apparent sensitivity of the psychometric curves introduced by the order in which the two stimuli are displayed in a two-alternative forced-choice task. PMID:26321965
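The statistically optimal benchmark against which such weighting is compared is standard inverse-variance cue combination; the sketch below shows that benchmark in its textbook form, with illustrative numbers rather than data from the study.

```python
# Statistically optimal (inverse-variance) combination of two estimates:
# each cue is weighted by its reliability, and the combined estimate's
# variance is lower than either cue's alone. Sub-optimal integration, as
# reported above, deviates from these predictions.

def optimal_combination(est_a, var_a, est_b, var_b):
    """Return the reliability-weighted combined estimate and the variance
    it should attain under optimal integration."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_b)
    combined = w_a * est_a + (1 - w_a) * est_b
    combined_var = 1 / (1 / var_a + 1 / var_b)
    return combined, combined_var

# Two equally reliable duration estimates (illustrative values, in seconds):
est, var = optimal_combination(1.2, 0.04, 0.8, 0.04)
# Equal reliabilities -> simple average, and the variance halves if optimal.
```

An observed combined variance higher than `combined_var` is the usual signature of sub-optimal integration.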
Molloy, Carly S; Wilson-Ching, Michelle; Doyle, Lex W; Anderson, Vicki A; Anderson, Peter J
2014-04-01
Contemporary data on visual memory and learning in survivors born extremely preterm (EP; <28 weeks gestation) or with extremely low birth weight (ELBW; <1,000 g) are lacking. Geographically determined cohort study of 298 consecutive EP/ELBW survivors born in 1991 and 1992, and 262 randomly selected normal-birth-weight controls. Visual learning and memory data were available for 221 (74.2%) EP/ELBW subjects and 159 (60.7%) controls. EP/ELBW adolescents exhibited significantly poorer performance across visual memory and learning variables compared with controls. Visual learning and delayed visual memory were particularly problematic and remained so after controlling for visual-motor integration and visual perception and excluding adolescents with neurosensory disability, and/or IQ <70. Male EP/ELBW adolescents or those treated with corticosteroids had poorer outcomes. EP/ELBW adolescents have poorer visual memory and learning outcomes compared with controls, which cannot be entirely explained by poor visual perceptual or visual constructional skills or intellectual impairment.
Late development of cue integration is linked to sensory fusion in cortex.
Dekker, Tessa M; Ban, Hiroshi; van der Velde, Bauke; Sereno, Martin I; Welchman, Andrew E; Nardini, Marko
2015-11-02
Adults optimize perceptual judgements by integrating different types of sensory information [1, 2]. This engages specialized neural circuits that fuse signals from the same [3-5] or different [6] modalities. Whereas young children can use sensory cues independently, adult-like precision gains from cue combination only emerge around ages 10 to 11 years [7-9]. Why does it take so long to make best use of sensory information? Existing data cannot distinguish whether this (1) reflects surprisingly late changes in sensory processing (sensory integration mechanisms in the brain are still developing) or (2) depends on post-perceptual changes (integration in sensory cortex is adult-like, but higher-level decision processes do not access the information) [10]. We tested visual depth cue integration in the developing brain to distinguish these possibilities. We presented children aged 6-12 years with displays depicting depth from binocular disparity and relative motion and made measurements using psychophysics, retinotopic mapping, and pattern classification fMRI. Older children (>10.5 years) showed clear evidence for sensory fusion in V3B, a visual area thought to integrate depth cues in the adult brain [3-5]. By contrast, in younger children (<10.5 years), there was no evidence for sensory fusion in any visual area. This significant age difference was paired with a shift in perceptual performance around ages 10 to 11 years and could not be explained by motion artifacts, visual attention, or signal quality differences. Thus, whereas many basic visual processes mature early in childhood [11, 12], the brain circuits that fuse cues take a very long time to develop. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
Family Literacy and Global Literature: Parents as Partners in the Common Core
ERIC Educational Resources Information Center
Truesdell, Kim S.; del Prado Hill, Pixita
2015-01-01
This "Teaching Tip" will explain an initiative, Global Book Hour, that engages families in a global literature read aloud. GBH integrates social studies, the visual and performing arts, healthy eating, and vocabulary development as children travel the world through high quality global children's literature. The program is a…
Reward modulates the effect of visual cortical microstimulation on perceptual decisions
Cicmil, Nela; Cumming, Bruce G; Parker, Andrew J; Krug, Kristine
2015-01-01
Effective perceptual decisions rely upon combining sensory information with knowledge of the rewards available for different choices. However, it is not known where reward signals interact with the multiple stages of the perceptual decision-making pathway and by what mechanisms this may occur. We combined electrical microstimulation of functionally specific groups of neurons in visual area V5/MT with performance-contingent reward manipulation, while monkeys performed a visual discrimination task. Microstimulation was less effective in shifting perceptual choices towards the stimulus preferences of the stimulated neurons when available reward was larger. Psychophysical control experiments showed this result was not explained by a selective change in response strategy on microstimulated trials. A bounded accumulation decision model, applied to analyse behavioural performance, revealed that the interaction of expected reward with microstimulation can be explained if expected reward modulates a sensory representation stage of perceptual decision-making, in addition to the better-known effects at the integration stage. DOI: http://dx.doi.org/10.7554/eLife.07832.001 PMID:26402458
Enhanced Monocular Visual Odometry Integrated with Laser Distance Meter for Astronaut Navigation
Wu, Kai; Di, Kaichang; Sun, Xun; Wan, Wenhui; Liu, Zhaoqin
2014-01-01
Visual odometry provides astronauts with accurate knowledge of their position and orientation. Wearable astronaut navigation systems should be simple and compact. Therefore, monocular vision methods are preferred over stereo vision systems, commonly used in mobile robots. However, the projective nature of monocular visual odometry causes a scale ambiguity problem. In this paper, we focus on the integration of a monocular camera with a laser distance meter to solve this problem. The most remarkable advantage of the system is its ability to recover a global trajectory for monocular image sequences by incorporating direct distance measurements. First, we propose a robust and easy-to-use extrinsic calibration method between camera and laser distance meter. Second, we present a navigation scheme that fuses distance measurements with monocular sequences to correct the scale drift. In particular, we explain in detail how to match the projection of the invisible laser pointer on other frames. Our proposed integration architecture is examined using a live dataset collected in a simulated lunar surface environment. The experimental results demonstrate the feasibility and effectiveness of the proposed method. PMID:24618780
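The core scale-recovery idea described in this abstract can be sketched in a few lines; the function and numbers below are illustrative assumptions, not the authors' implementation.

```python
# Monocular visual odometry recovers translation only up to an unknown
# global scale. A single absolute distance measurement (here, from a laser
# distance meter) to a point whose VO-frame distance is known fixes that
# scale, which can then be applied to the whole trajectory segment.

def corrected_translation(t_unscaled, vo_distance, laser_distance):
    """Rescale a monocular-VO translation vector using the ratio between a
    directly measured distance and the VO estimate of the same distance."""
    s = laser_distance / vo_distance
    return [s * c for c in t_unscaled]

# Illustrative values: VO says the laser-spot point is 2.0 units away,
# the laser meter measures 5.0 m, so the true scale factor is 2.5.
t = corrected_translation([0.1, 0.0, 0.4], vo_distance=2.0, laser_distance=5.0)
```

In practice the correction is applied repeatedly along the sequence to counteract scale drift, which is the navigation scheme the abstract outlines.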
Bogousslavsky, J; Miklossy, J; Deruaz, J P; Assal, G; Regli, F
1987-01-01
A macular-sparing superior altitudinal hemianopia with no visuo-psychic disturbance, except impaired visual learning, was associated with bilateral ischaemic necrosis of the lingual gyrus and only partial involvement of the fusiform gyrus on the left side. It is suggested that bilateral destruction of the lingual gyrus alone is not sufficient to affect complex visual processing. The fusiform gyrus probably has a critical role in colour integration, visuo-spatial processing, facial recognition and corresponding visual imagery. Involvement of the occipitotemporal projection system deep to the lingual gyri probably explained visual memory dysfunction, by a visuo-limbic disconnection. Impaired verbal memory may have been due to posterior involvement of the parahippocampal gyrus and underlying white matter, which may have disconnected the intact speech areas from the left medial temporal structures. PMID:3585386
Maestro: Work Together as a Team and Think Like the Reader.
ERIC Educational Resources Information Center
Nelson, Terry
2003-01-01
Explains that the maestro concept is an approach to integrating writing, editing, and visual communication. Notes that the goal of the maestro concept is to find agreement on the best way to package a story, photos, art, and graphics for the reader. Concludes that the maestro team helps beginning journalism students become better verbal and visual…
A recurrent neural model for proto-object based contour integration and figure-ground segregation.
Hu, Brian; Niebur, Ernst
2017-12-01
Visual processing of objects makes use of both feedforward and feedback streams of information. However, the nature of feedback signals is largely unknown, as is the identity of the neuronal populations in lower visual areas that receive them. Here, we develop a recurrent neural model to address these questions in the context of contour integration and figure-ground segregation. A key feature of our model is the use of grouping neurons whose activity represents tentative objects ("proto-objects") based on the integration of local feature information. Grouping neurons receive input from an organized set of local feature neurons, and project modulatory feedback to those same neurons. Additionally, inhibition at both the local feature level and the object representation level biases the interpretation of the visual scene in agreement with principles from Gestalt psychology. Our model explains several sets of neurophysiological results (Zhou et al., Journal of Neuroscience, 20(17), 6594-6611, 2000; Qiu et al., Nature Neuroscience, 10(11), 1492-1499, 2007; Chen et al., Neuron, 82(3), 682-694, 2014), and makes testable predictions about the influence of neuronal feedback and attentional selection on neural responses across different visual areas. Our model also provides a framework for understanding how object-based attention is able to select both objects and the features associated with them.
Neural Mechanisms Underlying Cross-Modal Phonetic Encoding.
Shahin, Antoine J; Backer, Kristina C; Rosenblum, Lawrence D; Kerlin, Jess R
2018-02-14
Audiovisual (AV) integration is essential for speech comprehension, especially in adverse listening situations. Divergent, but not mutually exclusive, theories have been proposed to explain the neural mechanisms underlying AV integration. One theory advocates that this process occurs via interactions between the auditory and visual cortices, as opposed to fusion of AV percepts in a multisensory integrator. Building upon this idea, we proposed that AV integration in spoken language reflects visually induced weighting of phonetic representations at the auditory cortex. EEG was recorded while male and female human subjects watched and listened to videos of a speaker uttering consonant-vowel (CV) syllables /ba/ and /fa/, presented in Auditory-only, AV congruent or incongruent contexts. Subjects reported whether they heard /ba/ or /fa/. We hypothesized that vision alters phonetic encoding by dynamically weighting which phonetic representation in the auditory cortex is strengthened or weakened. That is, when subjects are presented with visual /fa/ and acoustic /ba/ and hear /fa/ (illusion-fa), the visual input strengthens the weighting of the phone /f/ representation. When subjects are presented with visual /ba/ and acoustic /fa/ and hear /ba/ (illusion-ba), the visual input weakens the weighting of the phone /f/ representation. Indeed, we found an enlarged N1 auditory evoked potential when subjects perceived illusion-ba, and a reduced N1 when they perceived illusion-fa, mirroring the N1 behavior for /ba/ and /fa/ in Auditory-only settings. These effects were especially pronounced in individuals with more robust illusory perception. These findings provide evidence that visual speech modifies phonetic encoding at the auditory cortex. SIGNIFICANCE STATEMENT The current study presents evidence that audiovisual integration in spoken language occurs when one modality (vision) acts on representations of a second modality (audition).
Using the McGurk illusion, we show that visual context primes phonetic representations at the auditory cortex, altering the auditory percept, evidenced by changes in the N1 auditory evoked potential. This finding reinforces the theory that audiovisual integration occurs via visual networks influencing phonetic representations in the auditory cortex. We believe that this will lead to the generation of new hypotheses regarding cross-modal mapping, particularly whether it occurs via direct or indirect routes (e.g., via a multisensory mediator). Copyright © 2018 the authors.
Molloy, Carly S; Di Battista, Ashley M; Anderson, Vicki A; Burnett, Alice; Lee, Katherine J; Roberts, Gehan; Cheong, Jeanie Ly; Anderson, Peter J; Doyle, Lex W
2017-04-01
Children born extremely preterm (EP, <28 weeks) and/or extremely low birth weight (ELBW, <1000 g) have more academic deficiencies than their term-born peers, which may be due to problems with visual processing. The aim of this study was to determine (1) if visual processing is related to poor academic outcomes in EP/ELBW adolescents, and (2) how much of the variance in academic achievement in EP/ELBW adolescents is explained by visual processing ability after controlling for perinatal risk factors and other known contributors to academic performance, particularly attention and working memory. A geographically determined cohort of 228 surviving EP/ELBW adolescents (mean age 17 years) was studied. The relationships between measures of visual processing (visual acuity, binocular stereopsis, eye convergence, and visual perception) and academic achievement were explored within the EP/ELBW group. Analyses were repeated controlling for perinatal and social risk, and measures of attention and working memory. It was found that visual acuity, convergence and visual perception were related to scores for academic achievement on univariable regression analyses. After controlling for potential confounds (perinatal and social risk, working memory and attention), visual acuity, convergence and visual perception remained associated with reading and math computation, but only convergence and visual perception remained related to spelling. The additional variance explained by visual processing was up to 6.6% for reading, 2.7% for spelling, and 2.2% for math computation. None of the visual processing variables or visual motor integration were associated with handwriting on multivariable analysis. Working memory was generally a stronger predictor of reading, spelling, and math computation than visual processing.
It was concluded that visual processing difficulties are significantly related to academic outcomes in EP/ELBW adolescents; therefore, specific attention should be paid to academic remediation strategies incorporating the management of working memory and visual processing in EP/ELBW children.
From Visual Exploration to Storytelling and Back Again.
Gratzl, S; Lex, A; Gehlenborg, N; Cosgrove, N; Streit, M
2016-06-01
The primary goal of visual data exploration tools is to enable the discovery of new insights. To justify and reproduce insights, the discovery process needs to be documented and communicated. A common approach to documenting and presenting findings is to capture visualizations as images or videos. Images, however, are insufficient for telling the story of a visual discovery, as they lack full provenance information and context. Videos are difficult to produce and edit, particularly due to the non-linear nature of the exploratory process. Most importantly, however, neither approach provides the opportunity to return to any point in the exploration in order to review the state of the visualization in detail or to conduct additional analyses. In this paper we present CLUE (Capture, Label, Understand, Explain), a model that tightly integrates data exploration and presentation of discoveries. Based on provenance data captured during the exploration process, users can extract key steps, add annotations, and author "Vistories", visual stories based on the history of the exploration. These Vistories can be shared for others to view, but also to retrace and extend the original analysis. We discuss how the CLUE approach can be integrated into visualization tools and provide a prototype implementation. Finally, we demonstrate the general applicability of the model in two usage scenarios: a Gapminder-inspired visualization to explore public health data and an example from molecular biology that illustrates how Vistories could be used in scientific journals. (see Figure 1 for visual abstract).
Acquired prior knowledge modulates audiovisual integration.
Van Wanrooij, Marc M; Bremen, Peter; John Van Opstal, A
2010-05-01
Orienting responses to audiovisual events in the environment can benefit markedly from the integration of visual and auditory spatial information. However, logically, audiovisual integration would only be considered successful for stimuli that are spatially and temporally aligned, as these would be emitted by a single object in space-time. As humans do not have prior knowledge about whether novel auditory and visual events do indeed emanate from the same object, such information needs to be extracted from a variety of sources. For example, expectation about alignment or misalignment could modulate the strength of multisensory integration. If evidence from previous trials repeatedly favoured aligned audiovisual inputs, the internal state might also assume alignment for the next trial, and hence react to a new audiovisual event as if it were aligned. To test for such a strategy, subjects oriented a head-fixed pointer as fast as possible to a visual flash that was consistently paired, though not always spatially aligned, with a co-occurring broadband sound. We varied the probability of audiovisual alignment between experiments. Reaction times were consistently lower in blocks containing only aligned audiovisual stimuli than in blocks also containing pseudorandomly presented spatially disparate stimuli. Results demonstrate dynamic updating of the subject's prior expectation of audiovisual congruency. We discuss a model of prior probability estimation to explain the results.
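One standard way to formalize trial-by-trial updating of an alignment expectation is a Beta-Bernoulli estimate of P(aligned); the sketch below illustrates that general idea and is an assumption for exposition, not the authors' fitted model.

```python
# Hedged sketch of dynamic prior updating: maintain a Beta(a, b) belief over
# the probability that the auditory and visual events are spatially aligned,
# and update it after every trial's observed alignment.

def update_alignment_prior(trials, a=1.0, b=1.0):
    """Update a Beta(a, b) prior over P(aligned) from a sequence of trials
    (True = aligned, False = spatially disparate); return the posterior mean."""
    for aligned in trials:
        if aligned:
            a += 1.0
        else:
            b += 1.0
    return a / (a + b)

# A block of consistently aligned trials drives the expected alignment
# toward 1, which would predict faster audiovisual orienting on the next trial;
# interleaved disparate trials keep the expectation lower.
p_aligned_block = update_alignment_prior([True] * 8)
p_mixed_block = update_alignment_prior([True, False] * 4)
```

The qualitative prediction matches the abstract: reaction times should be lower in blocks where the accumulated evidence supports alignment.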
[Ventriloquism and audio-visual integration of voice and face].
Yokosawa, Kazuhiko; Kanaya, Shoko
2012-07-01
Presenting synchronous auditory and visual stimuli in separate locations creates the illusion that the sound originates from the direction of the visual stimulus. Participants' auditory localization bias, called the ventriloquism effect, has revealed factors affecting the perceptual integration of audio-visual stimuli. However, many studies on audio-visual processes have focused on performance in simplified experimental situations, with a single stimulus in each sensory modality. These results cannot necessarily explain our perceptual behavior in natural scenes, where various signals exist within a single sensory modality. In the present study we report the contribution of a cognitive factor, the audio-visual congruency of speech, which has often been underestimated in previous ventriloquism research. We investigated the contribution of speech congruency to the ventriloquism effect using a spoken utterance and two videos of a talking face. The salience of facial movements was also manipulated. When bilateral visual stimuli were presented in synchrony with a single voice, cross-modal speech congruency was found to have a significant impact on the ventriloquism effect. This result also indicated that more salient visual utterances attracted participants' auditory localization. The congruent pairing of audio-visual utterances elicited greater localization bias than did incongruent pairing, whereas previous studies have reported little dependency on the reality of stimuli in ventriloquism. Moreover, audio-visual illusory congruency, owing to the McGurk effect, caused substantial visual interference to auditory localization. This suggests that greater flexibility in responding to multi-sensory environments exists than has previously been considered.
Towards the quantitative evaluation of visual attention models.
Bylinskii, Z; DeGennaro, E M; Rajalingham, R; Ruda, H; Zhang, J; Tsotsos, J K
2015-11-01
Scores of visual attention models have been developed over the past several decades of research. Differences in implementation, assumptions, and evaluations have made comparison of these models very difficult. Taxonomies have been constructed in an attempt at the organization and classification of models, but are not sufficient at quantifying which classes of models are most capable of explaining available data. At the same time, a multitude of physiological and behavioral findings have been published, measuring various aspects of human and non-human primate visual attention. All of these elements highlight the need to integrate the computational models with the data by (1) operationalizing the definitions of visual attention tasks and (2) designing benchmark datasets to measure success on specific tasks, under these definitions. In this paper, we provide some examples of operationalizing and benchmarking different visual attention tasks, along with the relevant design considerations. Copyright © 2015 Elsevier Ltd. All rights reserved.
Keane, Brian P; Paterno, Danielle; Kastner, Sabine; Silverstein, Steven M
2016-05-01
Visual integration dysfunction characterizes schizophrenia, but prior studies have not yet established whether the problem arises by the first psychotic episode or worsens with illness duration. To investigate the issue, we compared chronic schizophrenia patients (SZs), first episode psychosis patients (FEs), and well-matched healthy controls on a brief but sensitive psychophysical task in which subjects attempted to locate an integrated shape embedded in noise. Task difficulty depended on the number of noise elements co-presented with the shape. For half of the experiment, the entire display was scaled down in size to produce a high spatial frequency (HSF) condition, which has been shown to worsen patient integration deficits. Catch trials, in which the circular target appeared without noise, were also added so as to confirm that subjects were paying adequate attention. We found that controls integrated contours under noisier conditions than FEs, who, in turn, integrated better than SZs. These differences, which were at times large in magnitude (d = 1.7), clearly emerged only for HSF displays. Catch trial accuracy was above 95% for each group and could not explain the foregoing differences. Prolonged illness duration predicted poorer HSF integration across patients, but age had little effect on controls, indicating that the former factor was driving the effect in patients. Taken together, a brief psychophysical task efficiently demonstrates large visual integration impairments in schizophrenia. The deficit arises by the first psychotic episode, worsens with illness duration, and may serve as a biomarker of illness progression. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Temporal Target Integration Underlies Performance at Lag 1 in the Attentional Blink
ERIC Educational Resources Information Center
Akyurek, Elkan G.; Eshuis, Sander A. H.; Nieuwenstein, Mark R.; Saija, Jefta D.; Baskent, Deniz; Hommel, Bernhard
2012-01-01
When two targets follow each other directly in rapid serial visual presentation (RSVP), they are often identified correctly but reported in the wrong order. These order reversals are commonly explained in terms of the rate at which the two targets are processed, the idea being that the second target can sometimes overtake the first in the race…
ten Oever, Sanne; Sack, Alexander T.; Wheat, Katherine L.; Bien, Nina; van Atteveldt, Nienke
2013-01-01
Content and temporal cues have been shown to interact during audio-visual (AV) speech identification. Typically, the most reliable unimodal cue is used more strongly to identify specific speech features; however, visual cues are only used if the AV stimuli are presented within a certain temporal window of integration (TWI). This suggests that temporal cues denote whether unimodal stimuli belong together, that is, whether they should be integrated. It is not known whether temporal cues also provide information about the identity of a syllable. Since spoken syllables have naturally varying AV onset asynchronies, we hypothesize that for suboptimal AV cues presented within the TWI, information about the natural AV onset differences can aid in speech identification. To test this, we presented low-intensity auditory syllables concurrently with visual speech signals, and varied the stimulus onset asynchronies (SOA) of the AV pair, while participants were instructed to identify the auditory syllables. We revealed that specific speech features (e.g., voicing) were identified by relying primarily on one modality (e.g., auditory). Additionally, we showed a wide window in which visual information influenced auditory perception, that seemed even wider for congruent stimulus pairs. Finally, we found a specific response pattern across the SOA range for syllables that were not reliably identified by the unimodal cues, which we explained as the result of the use of natural onset differences between AV speech signals. This indicates that temporal cues not only provide information about the temporal integration of AV stimuli, but additionally convey information about the identity of AV pairs. These results provide a detailed behavioral basis for further neuro-imaging and stimulation studies to unravel the neurofunctional mechanisms of the audio-visual-temporal interplay within speech perception. PMID:23805110
Computer-Based Tools for Inquiry in Undergraduate Classrooms: Results from the VGEE
NASA Astrophysics Data System (ADS)
Pandya, R. E.; Bramer, D. J.; Elliott, D.; Hay, K. E.; Mallaiahgari, L.; Marlino, M. R.; Middleton, D.; Ramamurhty, M. K.; Scheitlin, T.; Weingroff, M.; Wilhelmson, R.; Yoder, J.
2002-05-01
The Visual Geophysical Exploration Environment (VGEE) is a suite of computer-based tools designed to help learners connect observable, large-scale geophysical phenomena to underlying physical principles. Technologically, this connection is mediated by Java-based interactive tools: a multi-dimensional visualization environment, authentic scientific data-sets, concept models that illustrate fundamental physical principles, and an interactive web-based work management system for archiving and evaluating learners' progress. Our preliminary investigations showed, however, that the tools alone are not sufficient to empower undergraduate learners; learners have trouble in organizing inquiry and using the visualization tools effectively. To address these issues, the VGEE includes an inquiry strategy and scaffolding activities that are similar to strategies used successfully in K-12 classrooms. The strategy is organized around the steps: identify, relate, explain, and integrate. In the first step, students construct visualizations from data to try to identify salient features of a particular phenomenon. They compare their previous conceptions of a phenomenon to the data in order to examine their current knowledge and motivate investigation. Next, students use the multivariable functionality of the visualization environment to relate the different features they identified. Explain moves the learner temporarily outside the visualization to the concept models, where they explore fundamental physical principles. Finally, in integrate, learners use these fundamental principles within the visualization environment by literally placing the concept model within the visualization environment as a probe and watching it respond to larger-scale patterns. This capability, unique to the VGEE, addresses the disconnect that novice learners often experience between fundamental physics and observable phenomena.
It also allows learners the opportunity to reflect on and refine their knowledge as well as anchor it within a context for long-term retention. We are implementing the VGEE in one of two otherwise identical entry-level atmospheric courses. In addition to comparing student learning and attitudes in the two courses, we are analyzing student participation with the VGEE to evaluate the effectiveness and usability of the VGEE. In particular, we seek to identify the scaffolding students need to construct physically meaningful multi-dimensional visualizations, and evaluate the effectiveness of the visualization-embedded concept-models in addressing inert knowledge. We will also examine the utility of the inquiry strategy in developing content knowledge, process-of-science knowledge, and discipline-specific investigatory skills. Our presentation will include video examples of student use to illustrate our findings.
A neural network model of ventriloquism effect and aftereffect.
Magosso, Elisa; Cuppini, Cristiano; Ursino, Mauro
2012-01-01
Presenting simultaneous but spatially discrepant visual and auditory stimuli induces a perceptual translocation of the sound towards the visual input, the ventriloquism effect. The general explanation is that vision tends to dominate over audition because of its higher spatial reliability. The underlying neural mechanisms remain unclear. We address this question via a biologically inspired neural network. The model contains two layers of unimodal visual and auditory neurons, with visual neurons having higher spatial resolution than auditory ones. Neurons within each layer communicate via lateral intra-layer synapses; neurons across layers are connected via inter-layer connections. The network accounts for the ventriloquism effect, ascribing it to a positive feedback between the visual and auditory neurons, triggered by residual auditory activity at the position of the visual stimulus. The main results are: i) the less localized stimulus is strongly biased toward the most localized stimulus and not vice versa; ii) the amount of the ventriloquism effect changes with visual-auditory spatial disparity; iii) ventriloquism is a robust behavior of the network with respect to parameter value changes. Moreover, the model implements Hebbian rules for potentiation and depression of lateral synapses, to explain the ventriloquism aftereffect (that is, the enduring sound shift after exposure to spatially disparate audio-visual stimuli). By adaptively changing the weights of lateral synapses during cross-modal stimulation, the model produces post-adaptive shifts of auditory localization that agree with in-vivo observations. The model demonstrates that two unimodal layers reciprocally interconnected may explain the ventriloquism effect and aftereffect, even without the presence of any convergent multimodal area. The proposed study may provide advancement in understanding the neural architecture and mechanisms at the basis of visual-auditory integration in the spatial realm.
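The two-layer architecture described above can be reduced to a few lines. The following is a toy sketch, not the authors' full model: it assumes Gaussian tuning curves on a 1-D spatial axis, a clipped linear activation, and a single cross-modal excitatory weight, and it omits the lateral synapses and Hebbian learning.

```python
import numpy as np

pos = np.arange(180, dtype=float)  # spatial axis in degrees

def gauss(center, sigma):
    # population input: one Gaussian bump of activity per stimulus
    return np.exp(-0.5 * ((pos - center) / sigma) ** 2)

def perceived_sound_position(aud_pos, vis_pos, w_cross=0.4, steps=30):
    """Toy two-layer network: visual neurons are sharply tuned,
    auditory neurons broadly tuned; reciprocal excitation lets the
    sharp visual activity pull the auditory population response."""
    inp_v = gauss(vis_pos, sigma=4.0)    # high spatial resolution
    inp_a = gauss(aud_pos, sigma=20.0)   # low spatial resolution
    a, v = inp_a.copy(), inp_v.copy()
    for _ in range(steps):
        a = np.clip(inp_a + w_cross * v, 0.0, 1.0)
        v = np.clip(inp_v + w_cross * a, 0.0, 1.0)
    # population decode of the perceived auditory position
    return float((pos * a).sum() / a.sum())
```

With the sound at 90° and a flash at 110°, the decoded sound position shifts toward the flash, while a congruent pair stays put; the asymmetry (the less-localized stimulus is biased toward the more-localized one) follows directly from the difference in tuning widths.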
Modeling the Development of Audiovisual Cue Integration in Speech Perception
Getz, Laura M.; Nordeen, Elke R.; Vrabic, Sarah C.; Toscano, Joseph C.
2017-01-01
Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition. How then do listeners learn how to process auditory and visual information as part of a unified signal? In the auditory domain, statistical learning processes provide an excellent mechanism for acquiring phonological categories. Is this also true for the more complex problem of acquiring audiovisual correspondences, which require the learner to integrate information from multiple modalities? In this paper, we present simulations using Gaussian mixture models (GMMs) that learn cue weights and combine cues on the basis of their distributional statistics. First, we simulate the developmental process of acquiring phonological categories from auditory and visual cues, asking whether simple statistical learning approaches are sufficient for learning multi-modal representations. Second, we use this time course information to explain audiovisual speech perception in adult perceivers, including cases where auditory and visual input are mismatched. Overall, we find that domain-general statistical learning techniques allow us to model the developmental trajectory of audiovisual cue integration in speech, and in turn, allow us to better understand the mechanisms that give rise to unified percepts based on multiple cues. PMID:28335558
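The statistical-learning step can be made concrete with a hand-rolled EM loop. This is a sketch, not the authors' implementation: it assumes two phonological categories, one auditory and one visual cue dimension, spherical Gaussians, and synthetic data with illustrative parameter values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical cue space: auditory cue (e.g. VOT) on axis 0,
# visual cue (e.g. lip aperture) on axis 1, two categories.
n = 400
labels = rng.integers(0, 2, n)
true_means = np.array([[0.0, 0.0], [3.0, 3.0]])
X = true_means[labels] + rng.normal(0.0, 0.7, (n, 2))

def fit_gmm(X, k=2, iters=50):
    """Minimal EM for a spherical Gaussian mixture: learns category
    means, variances, and mixing weights from unlabeled AV cues."""
    n, d = X.shape
    # deterministic init: points at spread-out quantiles of cue 0
    order = np.argsort(X[:, 0])
    mu = X[order[np.linspace(0, n - 1, k).astype(int)]].copy()
    var = np.full(k, X.var())
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        d2 = ((X[:, None, :] - mu[None]) ** 2).sum(-1)
        logp = np.log(pi) - 0.5 * d * np.log(var) - d2 / (2 * var)
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from responsibilities
        nk = r.sum(axis=0)
        mu = (r.T @ X) / nk[:, None]
        var = np.array([(r[:, j] * ((X - mu[j]) ** 2).sum(1)).sum()
                        / (d * nk[j]) for j in range(k)])
        pi = nk / n
    return mu, var, pi
```

Once fitted, the posterior over components for a new (auditory, visual) pair plays the role of the integrated percept, and the relative weighting of the two cues falls out of the learned variances rather than being stipulated.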
Mechanisms for Rapid Adaptive Control of Motion Processing in Macaque Visual Cortex.
McLelland, Douglas; Baker, Pamela M; Ahmed, Bashir; Kohn, Adam; Bair, Wyeth
2015-07-15
A key feature of neural networks is their ability to rapidly adjust their function, including signal gain and temporal dynamics, in response to changes in sensory inputs. These adjustments are thought to be important for optimizing the sensitivity of the system, yet their mechanisms remain poorly understood. We studied adaptive changes in temporal integration in direction-selective cells in macaque primary visual cortex, where specific hypotheses have been proposed to account for rapid adaptation. By independently stimulating direction-specific channels, we found that the control of temporal integration of motion at one direction was independent of motion signals driven at the orthogonal direction. We also found that individual neurons can simultaneously support two different profiles of temporal integration for motion in orthogonal directions. These findings rule out a broad range of adaptive mechanisms as being key to the control of temporal integration, including untuned normalization and nonlinearities of spike generation and somatic adaptation in the recorded direction-selective cells. Such mechanisms are too broadly tuned, or occur too far downstream, to explain the channel-specific and multiplexed temporal integration that we observe in single neurons. Instead, we are compelled to conclude that parallel processing pathways are involved, and we demonstrate one such circuit using a computer model. This solution allows processing in different direction/orientation channels to be separately optimized and is sensible given that, under typical motion conditions (e.g., translation or looming), speed on the retina is a function of the orientation of image components. Many neurons in visual cortex are understood in terms of their spatial and temporal receptive fields. It is now known that the spatiotemporal integration underlying visual responses is not fixed but depends on the visual input. 
For example, neurons that respond selectively to motion direction integrate signals over a shorter time window when visual motion is fast and a longer window when motion is slow. We investigated the mechanisms underlying this useful adaptation by recording from neurons as they responded to stimuli moving in two different directions at different speeds. Computer simulations of our results enabled us to rule out several candidate theories in favor of a model that integrates across multiple parallel channels that operate at different time scales.
The role of differential delays in integrating transient visual and proprioceptive information
Cameron, Brendan D.; de la Malla, Cristina; López-Moliner, Joan
2014-01-01
Many actions involve limb movements toward a target. Visual and proprioceptive estimates are available online, and by optimally combining both modalities during the movement (Ernst and Banks, 2002), the system can increase the precision of the hand estimate. The notion that both sensory modalities are integrated is also motivated by the intuition that we do not consciously perceive any discrepancy between the felt and seen hand's positions. This coherence as a result of integration does not necessarily imply realignment between the two modalities (Smeets et al., 2006). For example, the two estimates (visual and proprioceptive) might be different without either of them (e.g., proprioception) ever being adjusted after recovering the other (e.g., vision). The implication that the felt and seen positions might be different has a temporal analog. Because the actual feedback from the hand at a given instantaneous position reaches brain areas at different times for proprioception and vision (shorter for proprioception), the corresponding instantaneous unisensory position estimates will be different, with the proprioceptive one being ahead of the visual one. Based on the assumption that the system integrates optimally and online the available evidence from both senses, we introduce a temporal mechanism that explains the reported overestimation of hand positions when vision is occluded for active and passive movements (Gritsenko et al., 2007) without the need to resort to initial feedforward estimates (Wolpert et al., 1995). We set up hypotheses to test the validity of the model, and we contrast simulation-based predictions with empirical data. PMID:24550870
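The optimal-combination step the paper builds on (Ernst and Banks, 2002) is just inverse-variance weighting; a minimal sketch, with the numbers purely illustrative:

```python
def fuse(x_vis, var_vis, x_prop, var_prop):
    """Maximum-likelihood fusion of two position estimates:
    each cue is weighted by its reliability (inverse variance)."""
    w_v, w_p = 1.0 / var_vis, 1.0 / var_prop
    x_hat = (w_v * x_vis + w_p * x_prop) / (w_v + w_p)
    var_hat = 1.0 / (w_v + w_p)  # never worse than the best cue
    return x_hat, var_hat
```

The temporal mechanism can be phrased in these terms: because proprioceptive feedback arrives earlier than visual feedback, the two instantaneous estimates fed into `fuse` refer to different true hand positions during a movement, so the fused estimate is systematically pulled ahead of the lagging visual estimate.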
Surface reconstruction, figure-ground modulation, and border-ownership.
Jeurissen, Danique; Self, Matthew W; Roelfsema, Pieter R
2013-01-01
The Differentiation-Integration for Surface Completion (DISC) model aims to explain the reconstruction of visual surfaces. We find the model a valuable contribution to our understanding of figure-ground organization. We point out that, next to border-ownership, neurons in visual cortex code whether surface elements belong to a figure or the background and that this is influenced by attention. We furthermore suggest that there must be strong links between object recognition and figure-ground assignment in order to resolve the status of interior contours. Incorporation of these factors in neurocomputational models will further improve our understanding of surface reconstruction, figure-ground organization, and border-ownership.
An egalitarian network model for the emergence of simple and complex cells in visual cortex
Tao, Louis; Shelley, Michael; McLaughlin, David; Shapley, Robert
2004-01-01
We explain how simple and complex cells arise in a large-scale neuronal network model of the primary visual cortex of the macaque. Our model consists of ≈4,000 integrate-and-fire, conductance-based point neurons, representing the cells in a small, 1-mm2 patch of an input layer of the primary visual cortex. In the model the local connections are isotropic and nonspecific, and convergent input from the lateral geniculate nucleus confers cortical cells with orientation and spatial phase preference. The balance between lateral connections and lateral geniculate nucleus drive determines whether individual neurons in this recurrent circuit are simple or complex. The model reproduces qualitatively the experimentally observed distributions of both extracellular and intracellular measures of simple and complex response. PMID:14695891
Spatial integration in mouse primary visual cortex.
Vaiceliunaite, Agne; Erisken, Sinem; Franzen, Florian; Katzner, Steffen; Busse, Laura
2013-08-01
Responses of many neurons in primary visual cortex (V1) are suppressed by stimuli exceeding the classical receptive field (RF), an important property that might underlie the computation of visual saliency. Traditionally, it has proven difficult to disentangle the underlying neural circuits, including feedforward, horizontal intracortical, and feedback connectivity. Since circuit-level analysis is particularly feasible in the mouse, we asked whether neural signatures of spatial integration in mouse V1 are similar to those of higher-order mammals and investigated the role of parvalbumin-expressing (PV+) inhibitory interneurons. Analogous to what is known from primates and carnivores, we demonstrate that, in awake mice, surround suppression is present in the majority of V1 neurons and is strongest in superficial cortical layers. Anesthesia with isoflurane-urethane, however, profoundly affects spatial integration: it reduces the laminar dependency, decreases overall suppression strength, and alters the temporal dynamics of responses. We show that these effects of brain state can be parsimoniously explained by assuming that anesthesia affects contrast normalization. Hence, the full impact of suppressive influences in mouse V1 cannot be studied under anesthesia with isoflurane-urethane. To assess the neural circuits of spatial integration, we targeted PV+ interneurons using optogenetics. Optogenetic depolarization of PV+ interneurons was associated with increased RF size and decreased suppression in the recorded population, similar to effects of lowering stimulus contrast, suggesting that PV+ interneurons contribute to spatial integration by affecting overall stimulus drive. We conclude that the mouse is a promising model for circuit-level mechanisms of spatial integration, which relies on the combined activity of different types of inhibitory interneurons.
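Size tuning of this kind is often summarized with a ratio-of-Gaussians model; that model is a standard description from the surround-suppression literature, not this paper's analysis, and the parameter values below are purely illustrative. Letting stimulus contrast scale both center and surround drive reproduces the two signature effects: suppression beyond the preferred size, and larger apparent RF size at lower contrast (the comparison the abstract draws for the PV+ manipulation).

```python
import math

def rog_response(size, contrast, w_c=1.0, w_s=3.0, k_c=10.0, k_s=5.0):
    """Ratio-of-Gaussians size tuning: summed centre drive divided by
    (1 + summed surround drive), both scaled by stimulus contrast."""
    L_c = math.erf(size / w_c) ** 2  # drive summed over the centre
    L_s = math.erf(size / w_s) ** 2  # drive summed over the surround
    return contrast * k_c * L_c / (1.0 + contrast * k_s * L_s)

def preferred_size(contrast, sizes=None):
    # grid search for the stimulus size giving the peak response
    sizes = sizes or [0.1 * i for i in range(1, 101)]
    return max(sizes, key=lambda s: rog_response(s, contrast))
```

At high contrast the response peaks at a small size and is suppressed for larger stimuli; at low contrast the surround drive is weaker relative to the semisaturation constant, so the preferred size grows.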
Vision and air flow combine to streamline flying honeybees
Taylor, Gavin J.; Luu, Tien; Ball, David; Srinivasan, Mandyam V.
2013-01-01
Insects face the challenge of integrating multi-sensory information to control their flight. Here we study a ‘streamlining’ response in honeybees, whereby honeybees raise their abdomen to reduce drag. We find that this response, which was recently reported to be mediated by optic flow, is also strongly modulated by the presence of air flow simulating a head wind. The Johnston's organs in the antennae were found to play a role in the measurement of the air speed that is used to control the streamlining response. The response to a combination of visual motion and wind is complex and can be explained by a model that incorporates a non-linear combination of the two stimuli. The use of visual and mechanosensory cues increases the strength of the streamlining response when the stimuli are present concurrently. We propose this multisensory integration will make the response more robust to transient disturbances in either modality. PMID:24019053
Chan, Derwin King-Chung; Fung, Ying-Ki; Xing, Suxuan; Hagger, Martin S
2014-06-01
There has been little research examining the psychological antecedents of safety-oriented behavior aimed at reducing myopia risk. This study utilizes self-determination theory (SDT) and the theory of planned behavior (TPB) to understand the role of motivational and social-cognitive factors on individuals' near-work behavior. Adopting a prospective design, undergraduate students (n = 107) completed an initial questionnaire based on SDT in week 1, a second questionnaire containing measures of TPB variables in week 2, and objective measures of reading distance and visual acuity in week 6. The data were analyzed by variance-based structural equation modeling. The results showed that perceived autonomy support and autonomous motivation from SDT significantly predicted attitude, subjective norm, and perceived behavioral control from the TPB. These social-cognitive factors were significantly associated with intention and intention significantly predicted reading distance. The relationships in the model held when controlling for visual acuity. In conclusion, the integrated model of SDT and the TPB may help explain myopia-preventive behaviors.
You prime what you code: The fAIM model of priming of pop-out
Meeter, Martijn
2017-01-01
Our visual brain makes use of recent experience to interact with the visual world, and efficiently select relevant information. This is exemplified by speeded search when target- and distractor features repeat across trials versus when they switch, a phenomenon referred to as intertrial priming. Here, we present fAIM, a computational model that demonstrates how priming can be explained by a simple feature-weighting mechanism integrated into an established model of bottom-up vision. In fAIM, such modulations in feature gains are widespread and not just restricted to one or a few features. Consequently, priming effects result from the overall tuning of visual features to the task at hand. Such tuning allows the model to reproduce priming for different types of stimuli, including for typical stimulus dimensions such as ‘color’ and for less obvious dimensions such as ‘spikiness’ of shapes. Moreover, the model explains some puzzling findings from the literature: it shows how priming can be found for target-distractor stimulus relations rather than for their absolute stimulus values per se, without an explicit representation of relations. Similarly, it simulates effects that have been taken to reflect a modulation of priming by an observer’s goals—without any representation of goals in the model. We conclude that priming is best considered as a consequence of a general adaptation of the brain to visual input, and not as a peculiarity of visual search. PMID:29166386
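The core feature-weighting idea can be illustrated with a toy gain model. This is a sketch, not the published fAIM implementation: gains increase for the current target's feature channel, decrease for the distractors', and search speed tracks the target's share of weighted salience.

```python
import numpy as np

class GainPriming:
    """Toy intertrial-priming model: one gain per feature channel,
    updated after every search trial."""
    def __init__(self, n_features=8, lr=0.3):
        self.gain = np.ones(n_features)
        self.lr = lr

    def trial(self, target, distractor):
        # 'reaction time' falls as the target's share of weighted
        # salience rises
        share = self.gain[target] / (self.gain[target] + self.gain[distractor])
        rt = 1.0 / share
        self.gain[target] *= 1.0 + self.lr      # prime target feature
        self.gain[distractor] *= 1.0 - self.lr  # de-prime distractor
        return rt
```

A repeated target is found faster than on the first trial, and swapping target and distractor features produces the classic switch cost, with no explicit representation of trial history, relations, or goals anywhere in the model.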
Normalization as a canonical neural computation
Carandini, Matteo; Heeger, David J.
2012-01-01
There is increasing evidence that the brain relies on a set of canonical neural computations, repeating them across brain regions and modalities to apply similar operations to different problems. A promising candidate for such a computation is normalization, in which the responses of neurons are divided by a common factor that typically includes the summed activity of a pool of neurons. Normalization was developed to explain responses in the primary visual cortex and is now thought to operate throughout the visual system, and in many other sensory modalities and brain regions. Normalization may underlie operations such as the representation of odours, the modulatory effects of visual attention, the encoding of value and the integration of multisensory information. Its presence in such a diversity of neural systems in multiple species, from invertebrates to mammals, suggests that it serves as a canonical neural computation. PMID:22108672
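The canonical computation is easy to state concretely. The sketch below follows the standard divisive-normalization form (response raised to an exponent and divided by a semisaturation constant plus the summed pool activity); the parameter values are illustrative, not taken from any particular fit.

```python
import numpy as np

def normalize(drives, sigma=1.0, n=2.0, gain=1.0):
    """Divisive normalization: each neuron's driving input is raised
    to an exponent and divided by a constant plus the summed
    (exponentiated) activity of the normalization pool."""
    drives = np.asarray(drives, dtype=float)
    return gain * drives ** n / (sigma ** n + np.sum(drives ** n))
```

Relative preferences within the pool are preserved while absolute responses shrink as total pool activity grows, which is what lets a single operation serve gain control, attentional modulation, and multisensory weighting.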
Clinical value of the VMI supplemental tests: a modified replication study.
Avi-Itzhak, Tamara; Obler, Doris Richard
2008-10-01
To carry out a modified replication of the study performed by Kulp and Sortor evaluating the clinical value of the information provided by Beery's visual-motor supplemental tests of Visual Perception (VP) and Motor Coordination (MC) in normally developed children. The objectives were to (a) estimate the correlations among the three test scores; (b) assess the predictive power of the VP and MC scores in explaining the variance in Visual-Motor Integration (VMI) scores; and (c) examine whether poor performance on the VMI is related to poor performance on VP or MC. Methods: A convenience sample of 71 children ages 4 and 5 years (M = 4.62 +/- 0.43) participated in the study. Results: The supplemental tests significantly (F = 9.59; df = 2; p ≤ .001) explained 22% of the variance in VMI performance. Only VP was significantly related to VMI (beta = 0.39; T = 3.49), accounting for the total amount of explained variance. Using the study population norms, 11 children (16% of the total sample) did poorly on the VMI; of those 11, 73% did poorly on the VP, and none did poorly on the MC. None of these 11 did poorly on both the VP and MC. Nine percent of the total sample who did poorly on the VP performed within the norm on the VMI. Thirteen percent who performed poorly on the MC performed within the norm on the VMI. Using the VMI published norms, 14 children (20% of the total sample) who did poorly on the VP performed within the norm on the VMI. Forty-eight percent who did poorly on MC performed within the norm on the VMI. Findings supported Kulp and Sortor's conclusions that each area should be individually evaluated during visual-perceptual assessment of children regardless of performance on the VMI.
When apperceptive agnosia is explained by a deficit of primary visual processing.
Serino, Andrea; Cecere, Roberto; Dundon, Neil; Bertini, Caterina; Sanchez-Castaneda, Cristina; Làdavas, Elisabetta
2014-03-01
Visual agnosia is a deficit in shape perception, affecting figure, object, face and letter recognition. Agnosia is usually attributed to lesions to high-order modules of the visual system, which combine visual cues to represent the shape of objects. However, most previously reported agnosia cases presented visual field (VF) defects and poor primary visual processing. The present case-study aims to verify whether form agnosia could be explained by a deficit in basic visual functions, rather than by a deficit in high-order shape recognition. Patient SDV suffered a bilateral lesion of the occipital cortex due to anoxia. When tested, he could navigate, interact with others, and was autonomous in daily life activities. However, he could not recognize objects from drawings and figures, read or recognize familiar faces. He was able to recognize objects by touch and people from their voice. Assessments of visual functions showed blindness at the centre of the VF, up to almost 5°, bilaterally, with better stimulus detection in the periphery. Colour and motion perception was preserved. Psychophysical experiments showed that SDV's visual recognition deficits were not explained by poor spatial acuity or by the crowding effect. Rather, a severe deficit in line orientation processing might be a key mechanism explaining SDV's agnosia. Line orientation processing is a basic function of primary visual cortex neurons, necessary for detecting "edges" of visual stimuli to build up a "primal sketch" for object recognition. We propose, therefore, that some forms of visual agnosia may be explained by deficits in basic visual functions due to widespread lesions of the primary visual areas, affecting primary levels of visual processing.
Specific excitatory connectivity for feature integration in mouse primary visual cortex
Molina-Luna, Patricia; Roth, Morgane M.
2017-01-01
Local excitatory connections in mouse primary visual cortex (V1) are stronger and more prevalent between neurons that share similar functional response features. However, the details of how functional rules for local connectivity shape neuronal responses in V1 remain unknown. We hypothesised that complex responses to visual stimuli may arise as a consequence of rules for selective excitatory connectivity within the local network in the superficial layers of mouse V1. In mouse V1 many neurons respond to overlapping grating stimuli (plaid stimuli) with highly selective and facilitatory responses, which are not simply predicted by responses to single gratings presented alone. This complexity is surprising, since excitatory neurons in V1 are considered to be mainly tuned to single preferred orientations. Here we examined the consequences for visual processing of two alternative connectivity schemes: in the first case, local connections are aligned with visual properties inherited from feedforward input (a ‘like-to-like’ scheme specifically connecting neurons that share similar preferred orientations); in the second case, local connections group neurons into excitatory subnetworks that combine and amplify multiple feedforward visual properties (a ‘feature binding’ scheme). By comparing predictions from large scale computational models with in vivo recordings of visual representations in mouse V1, we found that responses to plaid stimuli were best explained by assuming feature binding connectivity. Unlike under the like-to-like scheme, selective amplification within feature-binding excitatory subnetworks replicated experimentally observed facilitatory responses to plaid stimuli; explained selective plaid responses not predicted by grating selectivity; and was consistent with broad anatomical selectivity observed in mouse V1. 
Our results show that visual feature binding can occur through local recurrent mechanisms without requiring feedforward convergence, and that such a mechanism is consistent with visual responses and cortical anatomy in mouse V1. PMID:29240769
Commonalities between Perception and Cognition.
Tacca, Michela C
2011-01-01
Perception and cognition are highly interrelated. Given the influence that these systems exert on one another, it is important to explain how perceptual representations and cognitive representations interact. In this paper, I analyze the similarities between visual perceptual representations and cognitive representations in terms of their structural properties and content. Specifically, I argue that the spatial structure underlying visual object representation displays systematicity - a property that is considered to be characteristic of propositional cognitive representations. To this end, I propose a logical characterization of visual feature binding as described by Treisman's Feature Integration Theory and argue that systematicity is not only a property of language-like representations, but also of spatially organized visual representations. Furthermore, I argue that if systematicity is taken to be a criterion for distinguishing between conceptual and non-conceptual representations, then visual representations that display systematicity might count as an early type of conceptual representation. Showing these analogies between visual perception and cognition is an important step toward understanding the interface between the two systems. The ideas presented here might also set the stage for new empirical studies that directly compare binding (and other relational operations) in visual perception and higher cognition.
Combining Feature Selection and Integration—A Neural Model for MT Motion Selectivity
Beck, Cornelia; Neumann, Heiko
2011-01-01
Background The computation of pattern motion in visual area MT based on motion input from area V1 has been investigated in many experiments and models attempting to replicate the main mechanisms. Two different core conceptual approaches were developed to explain the findings. In integrationist models the key mechanism to achieve pattern selectivity is the nonlinear integration of V1 motion activity. In contrast, selectionist models focus on the motion computation at positions with 2D features. Methodology/Principal Findings Recent experiments revealed that neither of the two concepts alone is sufficient to explain all experimental data and that most of the existing models cannot account for the complex behaviour found. MT pattern selectivity changes over time for stimuli like type II plaids from vector average to the direction computed with an intersection of constraint rule or by feature tracking. Also, the spatial arrangement of the stimulus within the receptive field of a MT cell plays a crucial role. We propose a recurrent neural model showing how feature integration and selection can be combined into one common architecture to explain these findings. The key features of the model are the computation of 1D and 2D motion in model area V1 subpopulations that are integrated in model MT cells using feedforward and feedback processing. Our results are also in line with findings concerning the solution of the aperture problem. Conclusions/Significance We propose a new neural model for MT pattern computation and motion disambiguation that is based on a combination of feature selection and integration. The model can explain a range of recent neurophysiological findings including temporally dynamic behaviour. PMID:21814543
Modeling the Perception of Audiovisual Distance: Bayesian Causal Inference and Other Models
2016-01-01
Studies of audiovisual perception of distance are rare. Here, visual and auditory cue interactions in distance perception are tested against several multisensory models, including a modified causal inference model that adds predictions of the distributions of estimates. In our study, the audiovisual perception of distance was overall better explained by Bayesian causal inference than by other traditional models, such as sensory dominance, mandatory integration, and no interaction. Causal inference resolved with probability matching yielded the best fit to the data. Finally, we propose that sensory weights can also be estimated from causal inference. The analysis of the sensory weights allows us to obtain windows within which there is an interaction between the audiovisual stimuli. We find that the visual stimulus always contributes more than 80% to the perception of visual distance. The visual stimulus also contributes more than 50% to the perception of auditory distance, but only within a mobile window of interaction, which ranges from 1 to 4 m. PMID:27959919
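The Bayesian causal inference computation described in this abstract can be sketched in a few lines. The following is a generic Körding-style model, not the study's fitted model: the parameters `sigma_v`, `sigma_a`, `sigma_p` and `p_common` are illustrative values, and the "probability matching" read-out is a minimal rendering of the strategy the abstract names.

```python
import numpy as np

def posterior_common(x_v, x_a, sigma_v, sigma_a, sigma_p, p_common):
    """Posterior probability that visual cue x_v and auditory cue x_a share a
    common cause, assuming a zero-mean Gaussian prior (s.d. sigma_p) on distance.
    Parameter values are illustrative, not the paper's fits."""
    vv, va, vp = sigma_v**2, sigma_a**2, sigma_p**2
    # Likelihood of both cues under a single common cause
    var_c = vv * va + vv * vp + va * vp
    like_c = np.exp(-((x_v - x_a)**2 * vp + x_v**2 * va + x_a**2 * vv)
                    / (2 * var_c)) / (2 * np.pi * np.sqrt(var_c))
    # Likelihood under two independent causes
    like_i = (np.exp(-x_v**2 / (2 * (vv + vp)))
              * np.exp(-x_a**2 / (2 * (va + vp)))
              / (2 * np.pi * np.sqrt((vv + vp) * (va + vp))))
    return like_c * p_common / (like_c * p_common + like_i * (1 - p_common))

def estimate_auditory(x_v, x_a, sigma_v, sigma_a, sigma_p, p_common, rng):
    """Auditory-distance estimate with probability matching: sample the causal
    structure according to its posterior, then use the corresponding estimate."""
    vv, va, vp = sigma_v**2, sigma_a**2, sigma_p**2
    fused = (x_v / vv + x_a / va) / (1 / vv + 1 / va + 1 / vp)  # common cause
    alone = (x_a / va) / (1 / va + 1 / vp)                      # separate causes
    p_c = posterior_common(x_v, x_a, sigma_v, sigma_a, sigma_p, p_common)
    return fused if rng.random() < p_c else alone
```

With nearby cues the common-cause posterior is high and the visual cue pulls the auditory estimate strongly; with widely separated cues the posterior drops and the estimate reverts toward the auditory cue alone, which is the qualitative "window of interaction" behaviour the abstract reports.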
Thurman, Steven M; Lu, Hongjing
2014-01-01
Visual form analysis is fundamental to shape perception and likely plays a central role in perception of more complex dynamic shapes, such as moving objects or biological motion. Two primary form-based cues serve to represent the overall shape of an object: the spatial position and the orientation of locations along the boundary of the object. However, it is unclear how the visual system integrates these two sources of information in dynamic form analysis, and in particular how the brain resolves ambiguities due to sensory uncertainty and/or cue conflict. In the current study, we created animations of sparsely-sampled dynamic objects (human walkers or rotating squares) comprised of oriented Gabor patches in which orientation could either coincide or conflict with information provided by position cues. When the cues were incongruent, we found a characteristic trade-off between position and orientation information whereby position cues increasingly dominated perception as the relative uncertainty of orientation increased and vice versa. Furthermore, we found no evidence for differences in the visual processing of biological and non-biological objects, casting doubt on the claim that biological motion may be specialized in the human brain, at least in specific terms of form analysis. To explain these behavioral results quantitatively, we adopt a probabilistic template-matching model that uses Bayesian inference within local modules to estimate object shape separately from either spatial position or orientation signals. The outputs of the two modules are integrated with weights that reflect individual estimates of subjective cue reliability, and integrated over time to produce a decision about the perceived dynamics of the input data. Results of this model provided a close fit to the behavioral data, suggesting a mechanism in the human visual system that approximates rational Bayesian inference to integrate position and orientation signals in dynamic form analysis.
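The integration step described in this abstract, combining module outputs with weights reflecting subjective cue reliability, is standard inverse-variance cue combination. A minimal sketch (the estimates and variances below are hypothetical inputs, not the paper's fitted quantities):

```python
def integrate_cues(est_pos, est_ori, var_pos, var_ori):
    """Combine a position-based and an orientation-based shape estimate,
    weighting each by its reliability (inverse variance). As either cue's
    uncertainty grows, the other cue increasingly dominates the percept."""
    w_pos = (1.0 / var_pos) / (1.0 / var_pos + 1.0 / var_ori)
    w_ori = 1.0 - w_pos
    return w_pos * est_pos + w_ori * est_ori
```

For equally reliable cues the result is the simple average; when orientation uncertainty is much larger than position uncertainty, the integrated estimate sits near the position-based estimate, mirroring the trade-off reported in the behavioral data.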
Jacomy, Mathieu; Venturini, Tommaso; Heymann, Sebastien; Bastian, Mathieu
2014-01-01
Gephi is network visualization software used in various disciplines (social network analysis, biology, genomics...). One of its key features is the ability to display the spatialization process, aiming at transforming the network into a map, and ForceAtlas2 is its default layout algorithm. The latter is developed by the Gephi team as an all-around solution for Gephi users' typical networks (scale-free, 10 to 10,000 nodes). We present here for the first time its functioning and settings. ForceAtlas2 is a force-directed layout close to other algorithms used for network spatialization. We do not claim a theoretical advance but an attempt to integrate different techniques such as the Barnes-Hut simulation, degree-dependent repulsive force, and local and global adaptive temperatures. It is designed for the Gephi user experience (it is a continuous algorithm), and we explain which constraints this implies. The algorithm benefits from much user feedback and is developed in order to provide many possibilities through its settings. We lay out its complete functioning for users who need a precise understanding of its behaviour, from the formulas to a graphic illustration of the result. We propose a benchmark for our compromise between performance and quality. We also explain why we integrated its various features and discuss our design choices.
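The two forces this abstract names, degree-dependent repulsion and a linear attraction along edges, can be sketched as one naive O(n²) iteration. This deliberately omits the Barnes-Hut approximation and the adaptive temperatures that ForceAtlas2 integrates, and the constants `k_r` and `step` are arbitrary illustrative values:

```python
import numpy as np

def layout_step(pos, edges, degrees, k_r=10.0, step=0.01):
    """One naive iteration of a ForceAtlas2-style force-directed layout.
    pos: (n, 2) array of node coordinates; edges: list of (u, v) pairs;
    degrees: per-node degree. Barnes-Hut and adaptive speed are omitted."""
    n = len(pos)
    forces = np.zeros_like(pos)
    # Repulsion between every node pair: k_r * (deg_u + 1) * (deg_v + 1) / dist
    for i in range(n):
        for j in range(i + 1, n):
            delta = pos[i] - pos[j]
            dist = max(np.linalg.norm(delta), 1e-9)
            f = k_r * (degrees[i] + 1) * (degrees[j] + 1) / dist
            forces[i] += f * delta / dist
            forces[j] -= f * delta / dist
    # Attraction along edges, proportional to distance
    for u, v in edges:
        delta = pos[u] - pos[v]
        forces[u] -= delta
        forces[v] += delta
    return pos + step * forces
```

Iterating this step spreads disconnected nodes apart while pulling connected ones together, which is the basic mechanism the full algorithm refines with quadtree repulsion and per-node adaptive speeds.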
Long-term musical training may improve different forms of visual attention ability.
Rodrigues, Ana Carolina; Loureiro, Maurício Alves; Caramelli, Paulo
2013-08-01
Many studies have suggested that structural and functional cerebral neuroplastic processes result from long-term musical training, which in turn may produce cognitive differences between musicians and non-musicians. We aimed to investigate whether intensive, long-term musical practice is associated with improvements in three different forms of visual attention ability: selective, divided and sustained attention. Musicians from symphony orchestras (n=38) and non-musicians (n=38), who were comparable in age, gender and education, were submitted to three neuropsychological tests, measuring reaction time and accuracy. Musicians showed better performance relative to non-musicians on four variables of the three visual attention tests, and such an advantage could not solely be explained by better sensorimotor integration. Moreover, in the group of musicians, significant correlations were observed between the age at the commencement of musical studies and reaction time in all visual attention tests. The results suggest that musicians present augmented ability in different forms of visual attention, thus illustrating the possible cognitive benefits of long-term musical training. Copyright © 2013 Elsevier Inc. All rights reserved.
Quétard, Boris; Quinton, Jean-Charles; Colomb, Michèle; Pezzulo, Giovanni; Barca, Laura; Izaute, Marie; Appadoo, Owen Kevin; Mermillod, Martial
2015-09-01
Detecting a pedestrian while driving in the fog is one situation where the prior expectation about the target presence is integrated with the noisy visual input. We focus on how these sources of information influence the oculomotor behavior and are integrated within an underlying decision-making process. The participants had to judge whether high-/low-density fog scenes displayed on a computer screen contained a pedestrian or a deer by executing a mouse movement toward the response button (mouse-tracking). A variable road sign was added on the scene to manipulate expectations about target identity. We then analyzed the timing and amplitude of the deviation of mouse trajectories toward the incorrect response and, using an eye tracker, the detection time (before fixating the target) and the identification time (fixations on the target). Results revealed that expectation of the correct target results in earlier decisions with less deviation toward the alternative response, this effect being partially explained by the facilitation of target identification.
Kinesthetic information disambiguates visual motion signals.
Hu, Bo; Knill, David C
2010-05-25
Numerous studies have shown that extra-retinal signals can disambiguate motion information created by movements of the eye or head. We report a new form of cross-modal sensory integration in which the kinesthetic information generated by active hand movements essentially captures ambiguous visual motion information. Several previous studies have shown that active movement can bias observers' percepts of bi-stable stimuli; however, these effects seem to be best explained by attentional mechanisms. We show that kinesthetic information can change an otherwise stable perception of motion, providing evidence of genuine fusion between visual and kinesthetic information. The experiments take advantage of the aperture problem, in which the motion of a one-dimensional grating pattern behind an aperture, while geometrically ambiguous, appears to move stably in the grating normal direction. When actively moving the pattern, however, the observer sees the motion to be in the hand movement direction. Copyright 2010 Elsevier Ltd. All rights reserved.
Fleming Beattie, Julia; Martin, Roy C; Kana, Rajesh K; Deshpande, Hrishikesh; Lee, Seongtaek; Curé, Joel; Ver Hoef, Lawrence
2017-07-01
While the hippocampus has long been identified as a structure integral to memory, the relationship between morphology and function has yet to be fully explained. We present an analysis of hippocampal dentation, a morphological feature previously unexplored in regard to its relationship with episodic memory. "Hippocampal dentation" in this case refers to surface convolutions, primarily present in the CA1/subiculum on the inferior aspect of the hippocampus. Hippocampal dentation was visualized using ultra-high resolution structural MRI and evaluated using a novel visual rating scale. The degree of hippocampal dentation was found to vary considerably across individuals, and was positively associated with verbal memory recall and visual memory recognition in a sample of 22 healthy adults. This study is the first to characterize the variation in hippocampal dentation in a healthy cohort and to demonstrate its association with aspects of episodic memory. Copyright © 2017 Elsevier Ltd. All rights reserved.
The drawing effect: Evidence for reliable and robust memory benefits in free recall.
Wammes, Jeffrey D; Meade, Melissa E; Fernandes, Myra A
2016-01-01
In 7 free-recall experiments, the benefit of creating drawings of to-be-remembered information relative to writing was examined as a mnemonic strategy. In Experiments 1 and 2, participants were presented with a list of words and were asked to either draw or write out each. Drawn words were better recalled than written. Experiments 3-5 showed that the memory boost provided by drawing could not be explained by elaborative encoding (deep level of processing, LoP), visual imagery, or picture superiority, respectively. In Experiment 6, we explored potential limitations of the drawing effect, by reducing encoding time and increasing list length. Drawing, relative to writing, still benefited memory despite these constraints. In Experiment 7, the drawing effect was significant even when encoding trial types were compared in pure lists between participants, inconsistent with a distinctiveness account. Together these experiments indicate that drawing enhances memory relative to writing, across settings, instructions, and alternate encoding strategies, both within- and between-participants, and that a deep LoP, visual imagery, or picture superiority, alone or collectively, are not sufficient to explain the observed effect. We propose that drawing improves memory by encouraging a seamless integration of semantic, visual, and motor aspects of a memory trace.
Karim, A K M Rezaul; Proulx, Michael J; Likova, Lora T
2016-09-01
Orientation bias and directionality bias are two fundamental functional characteristics of the visual system. Reviewing the relevant literature in visual psychophysics and visual neuroscience, we propose here a three-stage model of directionality bias in visuospatial functioning. We call this model the 'Perception-Action-Laterality' (PAL) hypothesis. We analyzed the research findings for a wide range of visuospatial tasks, showing that there are two major directionality trends in perceptual preference: clockwise versus anticlockwise. It appears these preferences are combinatorial, such that a majority of people fall in the first category, demonstrating a preference for stimuli/objects arranged from left-to-right rather than from right-to-left, while people in the second category show the opposite trend. These perceptual biases can guide sensorimotor integration and action, creating two corresponding turner groups in the population. In support of PAL, we propose another model explaining the origins of the biases - how neurogenetic factors and cultural factors interact in a biased competition framework to determine the direction and extent of biases. This dynamic model can explain not only the two major categories of biases in terms of direction and strength, but also the unbiased, unreliably biased or mildly biased cases in visuospatial functioning. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
Hansen, Adam G.; Beauchamp, David A.
2014-01-01
Most predators eat only a subset of possible prey. However, studies evaluating diet selection rarely measure prey availability in a manner that accounts for temporal–spatial overlap with predators, the sensory mechanisms employed to detect prey, and constraints on prey capture. We evaluated the diet selection of cutthroat trout (Oncorhynchus clarkii) feeding on a diverse planktivore assemblage in Lake Washington to test the hypothesis that the diet selection of piscivores would reflect random (opportunistic) as opposed to non-random (targeted) feeding, after accounting for predator–prey overlap, visual detection and capture constraints. Diets of cutthroat trout were sampled in autumn 2005, when the abundance of transparent, age-0 longfin smelt (Spirinchus thaleichthys) was low, and 2006, when the abundance of smelt was nearly seven times higher. Diet selection was evaluated separately using depth-integrated and depth-specific (accounting for predator–prey overlap) prey abundance. The abundance of different prey was then adjusted for differences in detectability and vulnerability to predation to see whether these factors could explain diet selection. In 2005, cutthroat trout fed non-randomly by selecting against the smaller, transparent age-0 longfin smelt, but for the larger age-1 longfin smelt. After adjusting prey abundance for visual detection and capture, cutthroat trout fed randomly. In 2006, depth-integrated and depth-specific abundance explained the diets of cutthroat trout well, indicating random feeding. Feeding became non-random after adjusting for visual detection and capture. Cutthroat trout selected strongly for age-0 longfin smelt, but against similar-sized threespine stickleback (Gasterosteus aculeatus) and larger age-1 longfin smelt in 2006. Overlap with juvenile sockeye salmon (O. nerka) was minimal in both years, and sockeye salmon were rare in the diets of cutthroat trout. The direction of the shift between random and non-random selection depended on the presence of a weak versus a strong year class of age-0 longfin smelt. These fish were easy to catch, but hard to see. When their density was low, poor detection could explain their rarity in the diet. When their density was high, poor detection was compensated by higher encounter rates with cutthroat trout, sufficient to elicit a targeted feeding response. The nature of the feeding selectivity of a predator can be highly dependent on fluctuations in the abundance and suitability of key prey.
Deep recurrent neural network reveals a hierarchy of process memory during dynamic natural vision.
Shi, Junxing; Wen, Haiguang; Zhang, Yizhen; Han, Kuan; Liu, Zhongming
2018-05-01
The human visual cortex extracts both spatial and temporal visual features to support perception and guide behavior. Deep convolutional neural networks (CNNs) provide a computational framework to model cortical representation and organization for spatial visual processing, but are unable to explain how the brain processes temporal information. To overcome this limitation, we extended a CNN by adding recurrent connections to different layers of the CNN to allow spatial representations to be remembered and accumulated over time. The extended model, or the recurrent neural network (RNN), embodied a hierarchical and distributed model of process memory as an integral part of visual processing. Unlike the CNN, the RNN learned spatiotemporal features from videos to enable action recognition. The RNN better predicted cortical responses to natural movie stimuli than the CNN at all visual areas, especially those along the dorsal stream. As a fully observable model of visual processing, the RNN also revealed a cortical hierarchy of temporal receptive windows, dynamics of process memory, and spatiotemporal representations. These results support the hypothesis of process memory, and demonstrate the potential of using the RNN for in-depth computational understanding of dynamic natural vision. © 2018 Wiley Periodicals, Inc.
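The core architectural idea in this abstract, adding recurrent connections so that spatial representations are remembered and accumulated over time, can be sketched with a toy layer. A fully connected transform stands in for a convolutional layer here, and all weights are random, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def recurrent_layer(frames, W, U):
    """Accumulate per-frame features over time: h_t = relu(W x_t + U h_{t-1}).
    W plays the role of the feedforward (CNN-like) weights; U is the added
    recurrent connection that gives the layer a form of process memory."""
    h = np.zeros(W.shape[0])
    history = []
    for x in frames:
        h = np.maximum(0.0, W @ x + U @ h)  # ReLU nonlinearity
        history.append(h)
    return np.array(history)

# Toy "video": 5 frames, each an 8-dimensional feature vector
frames = rng.normal(size=(5, 8))
W = rng.normal(scale=0.1, size=(4, 8))  # feedforward weights (illustrative)
U = rng.normal(scale=0.1, size=(4, 4))  # recurrent weights (illustrative)
states = recurrent_layer(frames, W, U)
```

Setting `U` to zero reduces the layer to a purely feedforward, frame-by-frame response; a nonzero `U` makes each state depend on the stimulus history, which is the property the RNN exploits to capture temporal receptive windows.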
Alm, Magnus; Behne, Dawn
2015-01-01
Gender and age have been found to affect adults’ audio-visual (AV) speech perception. However, research on adult aging focuses on adults over 60 years, who have an increasing likelihood of cognitive and sensory decline, which may confound positive effects of age-related AV experience and its interaction with gender. Observed age and gender differences in AV speech perception may also depend on measurement sensitivity and AV task difficulty. Consequently, both AV benefit and visual influence were used to measure visual contribution for gender-balanced groups of young (20–30 years) and middle-aged adults (50–60 years), with task difficulty varied using AV syllables from different talkers in alternative auditory backgrounds. Females had better speech-reading performance than males. Whereas no gender differences in AV benefit or visual influence were observed for young adults, visually influenced responses were significantly greater for middle-aged females than middle-aged males. That speech-reading performance did not influence AV benefit may be explained by visual speech extraction and AV integration constituting independent abilities. Contrastingly, the gender difference in visually influenced responses in middle adulthood may reflect an experience-related shift in females’ general AV perceptual strategy. Although young females’ speech-reading proficiency may not readily contribute to greater visual influence, between young and middle adulthood recurrent confirmation of the contribution of visual cues induced by speech-reading proficiency may gradually shift females’ AV perceptual strategy toward more visually dominated responses. PMID:26236274
The spatial range of contour integration deficits in schizophrenia
Silverstein, Steven M.; Barch, Deanna M.; Carter, Cameron S.; Gold, James M.; Kovács, Ilona; MacDonald, Angus W.; Ragland, J. Daniel; Strauss, Milton E.
2012-01-01
Contour integration (CI) refers to the process that represents spatially separated elements as a unified edge or closed shape. Schizophrenia is a psychiatric disorder characterized by symptoms such as hallucinations, delusions, disorganized thinking, inappropriate affect, and social withdrawal. Persons with schizophrenia are impaired at CI, but the specific mechanisms underlying the deficit are still not clear. Here, we explored the hypothesis that poor patient performance owes to reduced feedback or impaired longer-range lateral connectivity within early visual cortex—functionally similar to that found in 5- to 6-year old children. This hypothesis predicts that as target element spacing increases from .7 to 1.4° of visual angle, patient impairments will become more pronounced. As a test of the prediction, 25 healthy controls and 36 clinically stable, asymptomatic persons with schizophrenia completed a CI task that involved determining whether a subset of Gabor elements formed a leftward or rightward pointing shape. Adjacent shape elements were spaced at either .7 or 1.4° of visual angle. Difficulty in each spacing condition depended on the number of noise elements present. Patients performed worse than controls overall, both groups performed worse with the larger spacing, and the magnitude of the between-group difference was not amplified at the larger spacing. These results show that CI deficits in schizophrenia cannot be explained in terms of a reduced spatial range of integration, at least not when the shape elements are spaced within 1.5°. Later-developing, low-level integrative mechanisms of lateral connectivity and feedback appear not to be differentially impaired in the illness. PMID:22710617
Factors influencing self-reported vision-related activity limitation in the visually impaired.
Tabrett, Daryl R; Latham, Keziah
2011-07-15
The use of patient-reported outcome (PRO) measures to assess self-reported difficulty in visual activities is common in patients with impaired vision. This study determines the visual and psychosocial factors influencing patients' responses to self-report measures, to aid in understanding what is being measured. One hundred visually impaired participants completed the Activity Inventory (AI), which assesses self-reported, vision-related activity limitation (VRAL) in the task domains of reading, mobility, visual information, and visual motor tasks. Participants also completed clinical tests of visual function (distance visual acuity and near reading performance both with and without low vision aids [LVAs], contrast sensitivity, visual fields, and depth discrimination), and questionnaires assessing depressive symptoms, social support, adjustment to visual loss, and personality. Multiple regression analyses identified that an acuity measure (distance or near), and, to a lesser extent, near reading performance without LVAs, visual fields, and contrast sensitivity best explained self-reported VRAL (28%-50% variance explained). Significant psychosocial correlates were depression and adjustment, explaining an additional 6% to 19% unique variance. Dependent on task domain, the parameters assessed explained 59% to 71% of the variance in self-reported VRAL. Visual function, most notably acuity without LVAs, is the best predictor of self-reported VRAL assessed by the AI. Depression and adjustment to visual loss also significantly influence self-reported VRAL, largely independent of the severity of visual loss and most notably in the less vision-specific tasks. The results suggest that rehabilitation strategies addressing depression and adjustment could improve perceived visual disability.
Karim, A.K.M. Rezaul; Proulx, Michael J.; Likova, Lora T.
2016-01-01
Reviewing the relevant literature in visual psychophysics and visual neuroscience, we propose a three-stage model of directionality bias in visuospatial functioning. We call this model the ‘Perception-Action-Laterality’ (PAL) hypothesis. We analyzed the research findings for a wide range of visuospatial tasks, showing that there are two major directionality trends: clockwise versus anticlockwise. It appears these preferences are combinatorial, such that a majority of people fall in the first category, demonstrating a preference for stimuli/objects arranged from left-to-right rather than from right-to-left, while people in the second category show the opposite trend. These perceptual biases can guide sensorimotor integration and action, creating two corresponding turner groups in the population. In support of PAL, we propose another model explaining the origins of the biases – how neurogenetic factors and cultural factors interact in a biased competition framework to determine the direction and extent of biases. This dynamic model can explain not only the two major categories of biases, but also the unbiased, unreliably biased or mildly biased cases in visuospatial functioning. PMID:27350096
Crowding, feature integration, and two kinds of "attention".
Põder, Endel
2006-02-21
In a recent article, Pelli, Palomares, and Majaj (2004) suggested that feature binding is mediated by hard-wired integration fields instead of a spotlight of spatial attention (as assumed by Treisman & Gelade, 1980). Consequently, the correct conjoining of visual features can be guaranteed only when there are no other competing features within a circle with a radius of approximately 0.5E (E = eccentricity of the target object). This claim seems contradicted by an observation that we can easily see--for example, the orientation of a single blue bar within a dense array of randomly oriented red bars. In the present study, possible determinants of the extent of crowding (or feature integration) zones were analyzed with feature (color) singletons as targets. It was found that the number of distractors has a dramatic effect on crowding. With a few distractors, a normal crowding effect was observed. However, by increasing the number of distractors, the crowding effect was remarkably reduced. Similar results were observed when the target and distractors were of the same color and when only a differently colored circle indicated the target location. The results can be explained by bottom-up "attention" that facilitates the processing of information from salient locations in the visual field.
Integration of biological networks and gene expression data using Cytoscape
Cline, Melissa S; Smoot, Michael; Cerami, Ethan; Kuchinsky, Allan; Landys, Nerius; Workman, Chris; Christmas, Rowan; Avila-Campilo, Iliana; Creech, Michael; Gross, Benjamin; Hanspers, Kristina; Isserlin, Ruth; Kelley, Ryan; Killcoyne, Sarah; Lotia, Samad; Maere, Steven; Morris, John; Ono, Keiichiro; Pavlovic, Vuk; Pico, Alexander R; Vailaya, Aditya; Wang, Peng-Liang; Adler, Annette; Conklin, Bruce R; Hood, Leroy; Kuiper, Martin; Sander, Chris; Schmulevich, Ilya; Schwikowski, Benno; Warner, Guy J; Ideker, Trey; Bader, Gary D
2013-01-01
Cytoscape is a free software package for visualizing, modeling and analyzing molecular and genetic interaction networks. This protocol explains how to use Cytoscape to analyze the results of mRNA expression profiling, and other functional genomics and proteomics experiments, in the context of an interaction network obtained for genes of interest. Five major steps are described: (i) obtaining a gene or protein network, (ii) displaying the network using layout algorithms, (iii) integrating with gene expression and other functional attributes, (iv) identifying putative complexes and functional modules and (v) identifying enriched Gene Ontology annotations in the network. These steps provide a broad sample of the types of analyses performed by Cytoscape. PMID:17947979
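The integration steps (iii) and (iv) above can be sketched outside Cytoscape with plain Python; the gene names, fold-change values, and module rule below are hypothetical illustrations, not Cytoscape's actual API or algorithms.

```python
# Minimal stand-in for Cytoscape's expression-network integration
# (steps iii-iv). Gene names and log2 fold-changes are hypothetical.
edges = [("TP53", "MDM2"), ("MDM2", "CDKN1A"), ("TP53", "CDKN1A")]
log2fc = {"TP53": 2.1, "MDM2": -0.7, "CDKN1A": 1.4}

# Step (iii): integration = attaching expression values to network nodes.
nodes = {gene: {"log2fc": fc} for gene, fc in log2fc.items()}

# Step (iv) in miniature: a putative "module" = up-regulated genes that
# are connected to at least one other up-regulated gene.
up = {g for g, attrs in nodes.items() if attrs["log2fc"] > 0}
module = sorted(
    g for g in up
    if any((g, h) in edges or (h, g) in edges for h in up if h != g)
)
print(module)  # ['CDKN1A', 'TP53']
```

In Cytoscape itself these steps correspond to importing a node-attribute table and running a module-finding plugin over the annotated network.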
Duiser, Ivonne H F; van der Kamp, John; Ledebt, Annick; Savelsbergh, Geert J P
2014-04-01
We examined whether the three subtests of the Beery Buktenica developmental test of visuomotor integration predicted quality of handwriting across and within groups of boys and girls classified as proficient, at risk or non-proficient writers according to the Concise Assessment Scale for Children's Handwriting. The Beery Buktenica developmental test of visuomotor integration and the Concise Assessment Scale for Children's Handwriting tests were administered to 240 grade 2 children. Proficient writers scored better on the visuomotor integration subtest than non-proficient writers, while proficient and at risk writers scored better than non-proficient writers on the motor coordination subtest. No differences were found on the visual perception subtest. Girls were more often classified as proficient writers than boys, and they scored better on the motor coordination subtest. Across groups, regression indicated that gender and both the visuomotor integration subtest and the motor coordination subtest were significant predictors for the quality of handwriting (i.e., accounted for 17% of the variance). After one year of writing tuition, the visuomotor integration subtest (and to a lesser extent the motor coordination subtest), but not the visual perception subtest, significantly related to the quality of children's handwriting as measured with the Concise Assessment Scale for Children's Handwriting. However, the relatively small amount of variance explained also points to other abilities and/or task constraints that underlie quality of handwriting. © 2013 Occupational Therapy Australia.
Predictors of visual-motor integration in children with intellectual disability.
Memisevic, Haris; Sinanovic, Osman
2012-12-01
The aim of this study was to assess the influence of sex, age, level and etiology of intellectual disability on visual-motor integration in children with intellectual disability. The sample consisted of 90 children with intellectual disability between 7 and 15 years of age. Visual-motor integration was measured using the Acadia test of visual-motor integration. A multiple regression analysis was used for data analysis. The results of this study showed that sex, level of intellectual disability, and age were significant predictors of visual-motor integration. The etiology of intellectual disability did not play a significant role in predicting visual-motor integration. Visual-motor integration skills are very important for a child's overall level of functioning. Individualized programs for the remediation of visual-motor integration skills should be a part of the curriculum for children with intellectual disability.
An integrated reweighting theory of perceptual learning
Dosher, Barbara Anne; Jeter, Pamela; Liu, Jiajuan; Lu, Zhong-Lin
2013-01-01
Improvements in performance on visual tasks due to practice are often specific to a retinal position or stimulus feature. Many researchers suggest that specific perceptual learning alters selective retinotopic representations in early visual analysis. However, transfer is almost always practically advantageous, and it does occur. If perceptual learning alters location-specific representations, how does it transfer to new locations? An integrated reweighting theory explains transfer over retinal locations by incorporating higher level location-independent representations into a multilevel learning system. Location transfer is mediated through location-independent representations, whereas stimulus feature transfer is determined by stimulus similarity at both location-specific and location-independent levels. Transfer to new locations/positions differs fundamentally from transfer to new stimuli. After substantial initial training on an orientation discrimination task, switches to a new location or position are compared with switches to new orientations in the same position, or switches of both. Position switches led to the highest degree of transfer, whereas orientation switches led to the highest levels of specificity. A computational model of integrated reweighting is developed and tested that incorporates the details of the stimuli and the experiment. Transfer to an identical orientation task in a new position is mediated via more broadly tuned location-invariant representations, whereas changing orientation in the same position invokes interference or independent learning of the new orientations at both levels, reflecting stimulus dissimilarity. Consistent with single-cell recording studies, perceptual learning alters the weighting of both early and midlevel representations of the visual system. PMID:23898204
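The reweighting idea described above can be caricatured in a few lines; this is a toy analogy, not the authors' computational model, and all parameters, channel names, and the learning rule are assumptions.

```python
# Toy sketch of integrated reweighting: a decision unit reads out a
# location-specific and a location-invariant channel, and a delta rule
# re-weights both readouts during training.
def train(w_specific, w_invariant, trials=2000, lr=0.05):
    sign = 1.0
    for _ in range(trials):
        sign = -sign                    # alternate stimulus orientations
        x_spec = x_inv = sign           # both channels encode the stimulus
        y = w_specific * x_spec + w_invariant * x_inv
        err = sign - y                  # delta rule on the decision output
        w_specific += lr * err * x_spec
        w_invariant += lr * err * x_inv
    return w_specific, w_invariant

w_spec, w_inv = train(0.0, 0.0)

# A location switch discards the location-specific weight but keeps the
# location-invariant one, yielding partial transfer rather than a reset.
transfer_output = 0.0 * 1.0 + w_inv * 1.0
print(round(w_inv, 3), round(transfer_output, 3))
```

The surviving location-invariant weight is what mediates transfer to a new retinal position in this caricature, while an orientation switch would change the stimulus itself and force relearning at both levels.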
The Role of Pulvinar in the Transmission of Information in the Visual Hierarchy
Cortes, Nelson; van Vreeswijk, Carl
2012-01-01
Visual receptive field (RF) attributes in visual cortex of primates have been explained mainly from cortical connections: visual RFs progress from simple to complex through cortico-cortical pathways from lower to higher levels in the visual hierarchy. This feedforward flow of information is paired with top-down processes through the feedback pathway. Although the hierarchical organization explains the spatial properties of RFs, it is unclear how a non-linear transmission of activity through the visual hierarchy can yield smooth contrast response functions at all levels of the hierarchy. Depending on the gain, non-linear transfer functions create either a bimodal response to contrast, or no contrast dependence of the response in the highest level of the hierarchy. One possible mechanism to regulate this transmission of visual contrast information from low to high levels involves an external component that shortcuts the flow of information through the hierarchy. A candidate for this shortcut is the pulvinar nucleus of the thalamus. To investigate the representation of stimulus contrast, a hierarchical model network of ten cortical areas is examined. In each level of the network, the activity from the previous layer is integrated and then non-linearly transmitted to the next level. The arrangement of interactions creates a gradient from simple to complex RFs of increasing size as one moves from lower to higher cortical levels. The visual input is modeled as a Gaussian random input, whose width codes for the contrast. This input is applied to the first area. The output activity ratio among different contrast values is analyzed for the last level to assess sensitivity to contrast and contrast-invariant tuning. For a purely cortical system, the output of the last area can be approximately contrast invariant, but the sensitivity to contrast is poor. To account for an alternative visual processing pathway, non-reciprocal connections from and to a parallel, pulvinar-like structure of nine areas are coupled to the system. Compared to the pure feedforward model, the cortico-pulvino-cortical output exhibits much greater sensitivity to contrast and a similar level of contrast invariance of the tuning. PMID:22654750
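The core problem the pulvinar shortcut addresses, that repeated non-linear transfer destroys graded contrast information, can be demonstrated with a minimal cascade; the sigmoid, gain, and depth below are assumed toy parameters, not the authors' network.

```python
import math

# Sketch: activity climbs ten hierarchy levels, each applying the same
# high-gain sigmoidal transfer function to the previous level's output.
def cascade(contrast, gain, levels=10):
    r = contrast
    for _ in range(levels):
        r = 1.0 / (1.0 + math.exp(-gain * (r - 0.5)))
    return r

# Two nearby mid-range contrasts are driven to opposite extremes: the
# top-level response is bimodal, and graded contrast coding is lost.
low, high = cascade(0.45, gain=8.0), cascade(0.55, gain=8.0)
print(round(low, 3), round(high, 3))
```

A lower gain avoids the bimodality but flattens the response instead, which is the trade-off that motivates an external pathway bypassing part of the cascade.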
Monkeys and Humans Share a Common Computation for Face/Voice Integration
Chandrasekaran, Chandramouli; Lemus, Luis; Trubanova, Andrea; Gondan, Matthias; Ghazanfar, Asif A.
2011-01-01
Speech production involves the movement of the mouth and other regions of the face resulting in visual motion cues. These visual cues enhance intelligibility and detection of auditory speech. As such, face-to-face speech is fundamentally a multisensory phenomenon. If speech is fundamentally multisensory, it should be reflected in the evolution of vocal communication: similar behavioral effects should be observed in other primates. Old World monkeys share with humans vocal production biomechanics and communicate face-to-face with vocalizations. It is unknown, however, if they, too, combine faces and voices to enhance their perception of vocalizations. We show that they do: monkeys combine faces and voices in noisy environments to enhance their detection of vocalizations. Their behavior parallels that of humans performing an identical task. We explored what common computational mechanism(s) could explain the pattern of results we observed across species. Standard explanations or models such as the principle of inverse effectiveness and a “race” model failed to account for their behavior patterns. Conversely, a “superposition model”, positing the linear summation of activity patterns in response to visual and auditory components of vocalizations, served as a straightforward but powerful explanatory mechanism for the observed behaviors in both species. As such, it represents a putative homologous mechanism for integrating faces and voices across primates. PMID:21998576
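The superposition idea, linear summation of visual and auditory activity driving detection, can be sketched as an evidence-accumulation race to a bound; drift rates, noise level, and the bound are illustrative assumptions, not the fitted model.

```python
import random

random.seed(1)

# Evidence for a vocalization accumulates toward a detection bound; in
# the superposition sketch, the audiovisual condition simply sums the
# auditory and visual drift rates linearly.
def detection_time(drifts, bound=50.0, noise=1.0):
    x, t = 0.0, 0
    while x < bound:
        x += sum(drifts) + random.gauss(0.0, noise)
        t += 1
    return t

aud_drift, vis_drift = 0.4, 0.3
rts = {
    "auditory": detection_time([aud_drift]),
    "visual": detection_time([vis_drift]),
    "audiovisual": detection_time([aud_drift, vis_drift]),  # linear sum
}
print(rts)
```

Because the summed drift reaches the bound sooner than either unisensory drift, the sketch reproduces the faster multisensory detection times observed in both species.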
Ernst, Udo A.; Schiffer, Alina; Persike, Malte; Meinhardt, Günter
2016-01-01
Processing natural scenes requires the visual system to integrate local features into global object descriptions. To achieve coherent representations, the human brain uses statistical dependencies to guide weighting of local feature conjunctions. Pairwise interactions among feature detectors in early visual areas may form the early substrate of these local feature bindings. To investigate local interaction structures in visual cortex, we combined psychophysical experiments with computational modeling and natural scene analysis. We first measured contrast thresholds for 2 × 2 grating patch arrangements (plaids), which differed in spatial frequency composition (low, high, or mixed), number of grating patch co-alignments (0, 1, or 2), and inter-patch distances (1° and 2° of visual angle). Contrast thresholds for the different configurations were compared to the prediction of probability summation (PS) among detector families tuned to the four retinal positions. For 1° distance the thresholds for all configurations were larger than predicted by PS, indicating inhibitory interactions. For 2° distance, thresholds were significantly lower compared to PS when the plaids were homogeneous in spatial frequency and orientation, but not when spatial frequencies were mixed or there was at least one misalignment. Next, we constructed a neural population model with horizontal laminar structure, which reproduced the detection thresholds after adaptation of connection weights. Consistent with prior work, contextual interactions were medium-range inhibition and long-range, orientation-specific excitation. However, the inclusion of orientation-specific inhibitory interactions between populations with different spatial frequency preferences was crucial for explaining detection thresholds. Finally, for all plaid configurations we computed their likelihood of occurrence in natural images. The likelihoods turned out to be inversely related to the detection thresholds obtained at larger inter-patch distances. However, likelihoods were almost independent of inter-patch distance, implying that natural image statistics could not explain the crowding-like results at short distances. This failure of natural image statistics to resolve the patch distance modulation of plaid visibility remains a challenge to the approach. PMID:27757076
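The probability-summation baseline against which the thresholds above were compared can be written down directly; the Weibull psychometric function is a standard choice in psychophysics, but the parameter values here are assumptions, not the authors' fits.

```python
# Probability-summation (PS) sketch: the plaid is detected if any of the
# four independent detector families detects its own patch.
def p_detect_patch(contrast, alpha=0.05, beta=3.0):
    # Weibull psychometric function (assumed parameters)
    return 1.0 - 2.0 ** (-((contrast / alpha) ** beta))

def p_detect_plaid(contrast, n_patches=4):
    p_miss_all = (1.0 - p_detect_patch(contrast)) ** n_patches
    return 1.0 - p_miss_all

# PS predicts better detection for the plaid than for a single patch at
# every contrast; measured thresholds below this prediction indicate
# facilitation among detectors, thresholds above it indicate inhibition.
c = 0.04
print(round(p_detect_patch(c), 3), round(p_detect_plaid(c), 3))
```

This is the logic behind reading the 1° results as inhibitory (thresholds above PS) and the homogeneous 2° results as facilitatory (thresholds below PS).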
Comper, Sandra Mara; Jardim, Anaclara Prada; Corso, Jeana Torres; Gaça, Larissa Botelho; Noffs, Maria Helena Silva; Lancellotti, Carmen Lúcia Penteado; Cavalheiro, Esper Abrão; Centeno, Ricardo Silva; Yacubian, Elza Márcia Targas
2017-10-01
The objective of the study was to analyze preoperative visual and verbal episodic memories in a homogeneous series of patients with mesial temporal lobe epilepsy (MTLE) and unilateral hippocampal sclerosis (HS) submitted to corticoamygdalohippocampectomy and its association with neuronal cell density of each hippocampal subfield. The hippocampi of 72 right-handed patients were collected and prepared for histopathological examination. Hippocampal sclerosis patterns were determined, and neuronal cell density was calculated. Preoperatively, two verbal and two visual memory tests (immediate and delayed recalls) were applied, and patients were divided into two groups, left and right MTLE (36/36). There were no statistical differences between groups regarding demographic and clinical data. Cornu Ammonis 4 (CA4) neuronal density was significantly lower in the right hippocampus compared with the left (p=0.048). The two HS groups showed different memory performance: the group with right HS performed worse on a visual memory test [Complex Rey Figure, immediate (p=0.001) and delayed (p=0.009)] but better on one verbal task [RAVLT delayed (p=0.005)]. Multiple regression analysis suggested that the verbal memory performance of the group with left HS was explained by CA1 neuronal density, since both tasks were significantly influenced by CA1 [Logical Memory immediate recall (p=0.050) and Logical Memory and RAVLT delayed recalls (p=0.004 and p=0.001, respectively)]. For patients with right HS, both CA1 subfield integrity (p=0.006) and epilepsy duration (p=0.012) explained Complex Rey Figure immediate recall performance. Ultimately, epilepsy duration also explained the performance in the Complex Rey Figure delayed recall (p<0.001). The Cornu Ammonis 1 (CA1) hippocampal subfield was related to immediate and delayed recalls of verbal memory tests in left HS, while CA1 and epilepsy duration were associated with visual memory performance in patients with right HS. Copyright © 2017 Elsevier Inc. All rights reserved.
Redundancy reduction explains the expansion of visual direction space around the cardinal axes.
Perrone, John A; Liston, Dorion B
2015-06-01
Motion direction discrimination in humans is worse for oblique directions than for the cardinal directions (the oblique effect). For some unknown reason, the human visual system makes systematic errors in the estimation of particular motion directions; a direction displacement near a cardinal axis appears larger than it really is, whereas the same displacement near an oblique axis appears to be smaller. Although the perceptual effects are robust and are clearly measurable in smooth pursuit eye movements, all attempts to identify the neural underpinnings for the oblique effect have failed. Here we show that a model of image velocity estimation based on the known properties of neurons in primary visual cortex (V1) and the middle temporal (MT) visual area of the primate brain produces the oblique effect. We also provide an explanation for the unusual asymmetric patterns of inhibition that have been found surrounding MT neurons. These patterns are consistent with a mechanism within the visual system that prevents redundant velocity signals from being passed onto the next motion-integration stage (dorsal medial superior temporal area, MSTd). We show that model redundancy-reduction mechanisms within the MT-MSTd pathway produce the oblique effect. Copyright © 2015 Elsevier Ltd. All rights reserved.
Visual communication in the psychoanalytic situation.
Kanzer, M
1980-01-01
The relationship between verbal and visual aspects of the analytic proceedings shows them blended integrally in the experiences of both patient and analyst and contributing to the insights derived during the treatment. Areas in which the admixture of the verbal and visual occurs are delineated. Awareness of the visual aspects gives substance to the operations of empathy, intuition, acting out, working through, etc. Some typical features of visual "language" are noted and related to the analytic situation. As such, they can be translated with the use of logic and consciousness on the analyst's part, not as mere random eruptions of intuition. The original significance of dreams as a royal road to the unconscious is confirmed, but we also find in them insights to be derived with higher mental processes. Finally, dyadic aspects of the formation and aims of dreams during analysis are pointed out, with important implications for the analyst's self-supervision of his techniques and "real personality" and their effects upon the patient. How remarkable that Dora's dreams, all too belatedly teaching Freud about their transference implications, still have so much more to communicate that derives from his capacity to record faithfully observations he was not yet ready to explain.
[Telemedicine in otorhinolaryngology. Basic principles and possible applications].
Plinkert, P K; Plinkert, B; Zenner, H P
2000-09-01
Telemedicine includes all medical activities involved in diagnosis, therapeutics or social medicine undertaken by an electronic transfer medium. This technique requires the transmission of visual and acoustic information over long distances and does not require the specialist to be personally present at the requested consultation. In the last few years, digital data transmission, e.g., ISDN (Integrated Services Digital Network), has become available and has facilitated the use of telecommunication. More recently, real-time transmission of acoustic and visual signals has been improved by the use of asynchronous transfer mode (ATM). Advanced telecommunication applications in minimally invasive ENT surgery are experimental in most cases. We can distinguish three different telesurgical developments: surgical teleconsultation, surgical teleassistance, and surgical telemanipulation. The different applications and transmission media are explained and discussed.
Ohshiro, Tomokazu; Angelaki, Dora E; DeAngelis, Gregory C
2017-07-19
Studies of multisensory integration by single neurons have traditionally emphasized empirical principles that describe nonlinear interactions between inputs from two sensory modalities. We previously proposed that many of these empirical principles could be explained by a divisive normalization mechanism operating in brain regions where multisensory integration occurs. This normalization model makes a critical diagnostic prediction: a non-preferred sensory input from one modality, which activates the neuron on its own, should suppress the response to a preferred input from another modality. We tested this prediction by recording from neurons in macaque area MSTd that integrate visual and vestibular cues regarding self-motion. We show that many MSTd neurons exhibit the diagnostic form of cross-modal suppression, whereas unisensory neurons in area MT do not. The normalization model also fits population responses better than a model based on subtractive inhibition. These findings provide strong support for a divisive normalization mechanism in multisensory integration. Copyright © 2017 Elsevier Inc. All rights reserved.
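The diagnostic prediction of divisive normalization described above follows from simple arithmetic; the numbers below are toy illustrations of the mechanism, not the fitted MSTd model.

```python
# Normalization sketch: a neuron's response is its own driven input
# divided by the summed activity of the local population plus a constant.
def normalized_response(drive, pool_drive, sigma=1.0):
    return drive / (sigma + pool_drive)

visual_pref = 10.0          # preferred visual input: strong drive
vestibular_nonpref = 1.0    # non-preferred vestibular input: weak drive
pool_from_vestibular = 8.0  # other neurons strongly driven by that cue

alone = normalized_response(visual_pref, pool_drive=visual_pref)
paired = normalized_response(
    visual_pref + vestibular_nonpref,
    pool_drive=visual_pref + vestibular_nonpref + pool_from_vestibular,
)
# Adding the weak input raises the numerator a little but inflates the
# divisive pool a lot, so the combined response drops below the
# unisensory response: the diagnostic cross-modal suppression.
print(round(alone, 3), round(paired, 3))  # 0.909 0.55
```

A purely subtractive inhibition scheme would not produce this pattern with the same dependence on pool activity, which is why the cross-modal suppression is diagnostic of division.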
Visually enhanced CCTV digital surveillance utilizing Intranet and Internet.
Ozaki, Nobuyuki
2002-07-01
This paper describes a solution for integrated plant supervision utilizing closed circuit television (CCTV) digital surveillance. Three basic requirements are first addressed as the platform of the system, with discussion of suitable video compression. The system configuration is described in blocks. The system provides surveillance functionality (real-time monitoring) and process-analysis functionality (a troubleshooting tool). This paper describes the formulation of practical performance design for determining various encoder parameters. It also introduces image processing techniques for enhancing the original CCTV digital image to lessen the burden on operators. Screenshots illustrate the surveillance functionality. For process analysis, an image searching filter supported by image processing techniques is explained with screenshots. Multimedia surveillance, the merger of CCTV surveillance with process data surveillance (the SCADA system), is also explained.
Bellocchi, Stéphanie; Muneaux, Mathilde; Huau, Andréa; Lévêque, Yohana; Jover, Marianne; Ducrot, Stéphanie
2017-08-01
Reading is known to be primarily a linguistic task. However, to successfully decode written words, children also need to develop good visual-perception skills. Furthermore, motor skills are implicated in letter recognition and reading acquisition. Three studies were designed to determine the link between reading, visual perception, and visual-motor integration using the Developmental Test of Visual Perception version 2 (DTVP-2). Study 1 tests how visual perception and visual-motor integration in kindergarten predict reading outcomes in Grade 1 in typically developing children. Study 2 is aimed at finding out if these skills can be seen as clinical markers in dyslexic children (DD). Study 3 determines if visual-motor integration and motor-reduced visual perception can distinguish DD children according to whether or not they exhibit developmental coordination disorder (DCD). Results showed that phonological awareness and visual-motor integration predicted reading outcomes one year later. DTVP-2 demonstrated similarities and differences in visual-motor integration and motor-reduced visual perception between children with DD, DCD, and both of these deficits. DTVP-2 is a suitable tool to investigate links between visual perception, visual-motor integration and reading, and to differentiate cognitive profiles of children with developmental disabilities (i.e. DD, DCD, and comorbid children). Copyright © 2017 John Wiley & Sons, Ltd.
Thinking graphically: Connecting vision and cognition during graph comprehension.
Ratwani, Raj M; Trafton, J Gregory; Boehm-Davis, Deborah A
2008-03-01
Task analytic theories of graph comprehension account for the perceptual and conceptual processes required to extract specific information from graphs. Comparatively, the processes underlying information integration have received less attention. We propose a new framework for information integration that highlights visual integration and cognitive integration. During visual integration, pattern recognition processes are used to form visual clusters of information; these visual clusters are then used to reason about the graph during cognitive integration. In 3 experiments, the processes required to extract specific information and to integrate information were examined by collecting verbal protocol and eye movement data. Results supported the task analytic theories for specific information extraction and the processes of visual and cognitive integration for integrative questions. Further, the integrative processes scaled up as graph complexity increased, highlighting the importance of these processes for integration in more complex graphs. Finally, based on this framework, design principles to improve both visual and cognitive integration are described. PsycINFO Database Record (c) 2008 APA, all rights reserved
JOINT AND INDIVIDUAL VARIATION EXPLAINED (JIVE) FOR INTEGRATED ANALYSIS OF MULTIPLE DATA TYPES.
Lock, Eric F; Hoadley, Katherine A; Marron, J S; Nobel, Andrew B
2013-03-01
Research in several fields now requires the analysis of datasets in which multiple high-dimensional types of data are available for a common set of objects. In particular, The Cancer Genome Atlas (TCGA) includes data from several diverse genomic technologies on the same cancerous tumor samples. In this paper we introduce Joint and Individual Variation Explained (JIVE), a general decomposition of variation for the integrated analysis of such datasets. The decomposition consists of three terms: a low-rank approximation capturing joint variation across data types, low-rank approximations for structured variation individual to each data type, and residual noise. JIVE quantifies the amount of joint variation between data types, reduces the dimensionality of the data, and provides new directions for the visual exploration of joint and individual structure. The proposed method represents an extension of Principal Component Analysis and has clear advantages over popular two-block methods such as Canonical Correlation Analysis and Partial Least Squares. A JIVE analysis of gene expression and miRNA data on Glioblastoma Multiforme tumor samples reveals gene-miRNA associations and provides better characterization of tumor types.
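The three-term decomposition X = J + A + E described above can be illustrated with hand-built numbers; this is a constructed example of the identity, not the published JIVE estimation algorithm.

```python
# One data type built from a joint and an individual rank-1 component.
u = [1.0, -1.0, 2.0, 0.0]   # joint scores, shared across data types
a = [1.0, -1.0, -1.0, 0.0]  # individual scores, orthogonal to u
v = [0.5, 1.0]              # joint loadings for this data type
w = [0.3, -0.2]             # individual loadings for this data type

J = [[u[i] * vj for vj in v] for i in range(4)]   # joint variation
A = [[a[i] * wj for wj in w] for i in range(4)]   # individual variation
X = [[J[i][j] + A[i][j] for j in range(2)] for i in range(4)]

ss = lambda M: sum(x * x for row in M for x in row)  # sum of squares
# Because the joint and individual score vectors are orthogonal, the
# total variation splits exactly into joint + individual, which is how
# JIVE quantifies the variation "explained" by each term.
print(ss(X), ss(J), ss(A))
```

In the real method the joint and individual subspaces are estimated from the data under this orthogonality constraint, with the residual E absorbing what neither term captures.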
Data Curation and Visualization for MuSIASEM Analysis of the Nexus
NASA Astrophysics Data System (ADS)
Renner, Ansel
2017-04-01
A novel software-based approach to relational analysis applying recent theoretical advancements of the Multi-Scale Integrated Analysis of Societal and Ecosystem Metabolism (MuSIASEM) accounting framework is presented. This research explores and explains underutilized ways software can assist complex system analysis across the stages of data collection, exploration, analysis and dissemination, in a transparent and collaborative manner. This work is being conducted as part of, and in support of, the four-year European Commission H2020 project: Moving Towards Adaptive Governance in Complexity: Informing Nexus Security (MAGIC). In MAGIC, theoretical advancements to MuSIASEM propose a powerful new approach to spatial-temporal WEFC relational analysis in accordance with a structural-functional scaling mechanism appropriate for biophysically relevant complex system analyses. Software is designed primarily with JavaScript using the Angular2 model-view-controller framework and the Data-Driven Documents (D3) library. These design choices clarify and modularize data flow, simplify practitioners' work, allow for and assist stakeholder involvement, and advance collaboration at all stages. Data requirements and a scalable, robust yet lightweight data structure are first explained. Next, algorithms for processing these data are explored. Lastly, data interfaces and data visualization approaches are presented and described.
Roca, Jose A; Wu, Lihteh; Berrocal, Maria; Rodriguez, Francisco; Alezzandrini, Arturo; Alvira, Gustavo; Velez-Montoya, Raul; Quiroz-Mercado, Hugo; Fernando Arevalo, J; Serrano, Martín; Lima, Luiz H; Figueroa, Marta; Farah, Michel; Chico, Giovanna
2017-01-01
To report the incidence and clinical features of patients that experienced unexplained visual loss following silicone oil (SO) removal. Multicenter retrospective study of patients that underwent SO removal during 2000-2012. Visual loss of ≥2 lines was considered significant. A total of 324 eyes of 324 patients underwent SO removal during the study period. Forty-two (13%) eyes suffered a significant visual loss following SO removal. Twenty-three (7.1%) of these eyes lost vision secondary to known causes. In the remaining 19 (5.9%) eyes, the loss of vision was not explained by any other pathology. Eleven of these 19 patients (57.9%) were male. The mean age of this group was 49.2 ± 16.4 years. Eyes with unexplained visual loss had a mean IOP of 19.6 ± 6.9 mm Hg while filled with SO, and a mean SO tamponade duration of 14.8 ± 4.4 months. In comparison, eyes that did not experience visual loss had a mean IOP of 14 ± 7.3 mm Hg (p < 0.0002) and a mean tamponade duration of 9.3 ± 10.9 months (p < 0.0001). Unexplained visual loss after SO removal was observed in 5.9% of eyes. Factors associated with this phenomenon included a higher IOP and longer SO tamponade duration.
Kiefer, Markus; Ansorge, Ulrich; Haynes, John-Dylan; Hamker, Fred; Mattler, Uwe; Verleger, Rolf; Niedeggen, Michael
2011-01-01
Psychological and neuroscience approaches have promoted much progress in elucidating the cognitive and neural mechanisms that underlie phenomenal visual awareness during the last decades. In this article, we provide an overview of the latest research investigating important phenomena in conscious and unconscious vision. We identify general principles to characterize conscious and unconscious visual perception, which may serve as important building blocks for a unified model to explain the plethora of findings. We argue that in particular the integration of principles from both conscious and unconscious vision is advantageous and provides critical constraints for developing adequate theoretical models. Based on the principles identified in our review, we outline essential components of a unified model of conscious and unconscious visual perception. We propose that awareness refers to consolidated visual representations, which are accessible to the entire brain and therefore globally available. However, visual awareness not only depends on consolidation within the visual system, but is additionally the result of a post-sensory gating process, which is mediated by higher-level cognitive control mechanisms. We further propose that amplification of visual representations by attentional sensitization is not exclusive to the domain of conscious perception, but also applies to visual stimuli, which remain unconscious. Conscious and unconscious processing modes are highly interdependent with influences in both directions. We therefore argue that exactly this interdependence renders a unified model of conscious and unconscious visual perception valuable. Computational modeling jointly with focused experimental research could lead to a better understanding of the plethora of empirical phenomena in consciousness research. PMID:22253669
Gilson, Matthieu; Deco, Gustavo; Friston, Karl J; Hagmann, Patric; Mantini, Dante; Betti, Viviana; Romani, Gian Luca; Corbetta, Maurizio
2017-10-09
Our behavior entails a flexible and context-sensitive interplay between brain areas to integrate information according to goal-directed requirements. However, the neural mechanisms governing the entrainment of functionally specialized brain areas remain poorly understood. In particular, the question arises whether observed changes in regional activity across cognitive conditions are explained by modifications of the inputs to the brain or of its connectivity. We observe that transitions of fMRI activity between areas convey information about the task performed by 19 subjects: watching a movie versus viewing a black screen (rest). We use a model-based framework that explains this spatiotemporal functional connectivity pattern by the local variability for 66 cortical regions and the network effective connectivity between them. We find that, among the estimated model parameters, movie viewing affects the local activity to a larger extent, which we interpret as extrinsic changes related to the increased stimulus load. However, detailed changes in the effective connectivity preserve a balance in the propagating activity and select specific pathways such that high-level brain regions integrate visual and auditory information, in particular boosting the communication between the two brain hemispheres. These findings speak to a dynamic coordination underlying functional integration in the brain. Copyright © 2017. Published by Elsevier Inc.
[Display technologies for augmented reality in medical applications].
Eck, Ulrich; Winkler, Alexander
2018-04-01
One of the main challenges for modern surgery is the effective use of the many available imaging modalities and diagnostic methods. Augmented reality systems could in the future blend patient and planning information into the surgeon's view, which can improve the efficiency and safety of interventions. In this article we present five visualization methods for integrating augmented reality displays into medical procedures and explain their advantages and disadvantages. Based on an extensive literature review, the various existing approaches for integrating augmented reality displays into medical procedures are divided into five categories and the most important research results for each approach are presented. A large number of mixed and augmented reality solutions for medical interventions have been developed as research prototypes; however, only very few systems have been tested on patients. In order to integrate mixed and augmented reality displays into medical practice, highly specialized solutions need to be developed. Such systems must comply with the requirements for accuracy, fidelity, ergonomics and seamless integration into the surgical workflow.
Selecting and perceiving multiple visual objects
Xu, Yaoda; Chun, Marvin M.
2010-01-01
To explain how multiple visual objects are attended and perceived, we propose that our visual system first selects a fixed number of about four objects from a crowded scene based on their spatial information (object individuation) and then encodes their details (object identification). We describe the involvement of the inferior intraparietal sulcus (IPS) in object individuation and of the superior IPS and higher visual areas in object identification. Our neural object-file theory synthesizes and extends existing ideas in visual cognition and is supported by behavioral and neuroimaging results. It provides a better understanding of the role of the different parietal areas in encoding visual objects and can explain various forms of capacity-limited processing in visual cognition, such as working memory. PMID:19269882
Audio-visual integration through the parallel visual pathways.
Kaposvári, Péter; Csete, Gergő; Bognár, Anna; Csibri, Péter; Tóth, Eszter; Szabó, Nikoletta; Vécsei, László; Sáry, Gyula; Tamás Kincses, Zsigmond
2015-10-22
Audio-visual integration has been shown to be present in a wide range of different conditions, some of which are processed through the dorsal, and others through the ventral, visual pathway. Whereas neuroimaging studies have revealed integration-related activity in the brain, there has been no imaging study of the possible role of the segregated visual streams in audio-visual integration. We set out to determine how the different visual pathways participate in this communication. We investigated how audio-visual integration can be supported through the dorsal and ventral visual pathways during the double flash illusion. Low-contrast and chromatic isoluminant stimuli were used to drive preferentially the dorsal and ventral pathways, respectively. In order to identify the anatomical substrates of the audio-visual interaction in the two conditions, the psychophysical results were correlated with white matter integrity as measured by diffusion tensor imaging. The psychophysical data revealed a robust double flash illusion in both conditions. A correlation between the psychophysical results and local fractional anisotropy was found in the occipito-parietal white matter in the low-contrast condition, while a similar correlation was found in the infero-temporal white matter in the chromatic isoluminant condition. Our results indicate that both of the parallel visual pathways may play a role in the audio-visual interaction. Copyright © 2015. Published by Elsevier B.V.
Ma, Wei Ji; Zhou, Xiang; Ross, Lars A; Foxe, John J; Parra, Lucas C
2009-01-01
Watching a speaker's facial movements can dramatically enhance our ability to comprehend words, especially in noisy environments. From a general doctrine of combining information from different sensory modalities (the principle of inverse effectiveness), one would expect that the visual signals would be most effective at the highest levels of auditory noise. In contrast, we find, in accord with a recent paper, that visual information improves performance more at intermediate levels of auditory noise than at the highest levels, and we show that a novel visual stimulus containing only temporal information does the same. We present a Bayesian model of optimal cue integration that can explain these conflicts. In this model, words are regarded as points in a multidimensional space and word recognition is a probabilistic inference process. When the dimensionality of the feature space is low, the Bayesian model predicts inverse effectiveness; when the dimensionality is high, the enhancement is maximal at intermediate auditory noise levels. When the auditory and visual stimuli differ slightly in high noise, the model makes a counterintuitive prediction: as sound quality increases, the proportion of reported words corresponding to the visual stimulus should first increase and then decrease. We confirm this prediction in a behavioral experiment. We conclude that auditory-visual speech perception obeys the same notion of optimality previously observed only for simple multisensory stimuli.
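The optimal cue integration described above rests on the standard maximum-likelihood formulation: under Gaussian noise, auditory and visual estimates are combined by inverse-variance weighting, so the less reliable cue contributes less. A minimal sketch, with purely illustrative numbers (not the paper's stimuli or fitted parameters):

```python
# Minimal sketch of reliability-weighted (maximum-likelihood) cue
# integration under Gaussian assumptions. All numbers are illustrative.

def integrate(mu_a, var_a, mu_v, var_v):
    """Combine auditory and visual estimates by inverse-variance weighting."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)
    w_v = 1 - w_a
    mu_av = w_a * mu_a + w_v * mu_v
    # Combined variance is always <= the smaller single-cue variance.
    var_av = (var_a * var_v) / (var_a + var_v)
    return mu_av, var_av

if __name__ == "__main__":
    # Noisy audition (large variance) pulls the combined estimate
    # toward the more reliable visual cue.
    mu, var = integrate(mu_a=0.0, var_a=4.0, mu_v=1.0, var_v=1.0)
    print(mu, var)  # 0.8 0.8
```

In the paper's high-dimensional word space the predictions are richer than this one-dimensional case, but the same inverse-variance logic underlies the model.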
Anders, Silke; Eippert, Falk; Wiens, Stefan; Birbaumer, Niels; Lotze, Martin; Wildgruber, Dirk
2009-11-01
Affective neuroscience has been strongly influenced by the view that a 'feeling' is the perception of somatic changes and has consequently often neglected the neural mechanisms that underlie the integration of somatic and other information in affective experience. Here, we investigate affective processing by means of functional magnetic resonance imaging in nine cortically blind patients. In these patients, unilateral postgeniculate lesions prevent primary cortical visual processing in part of the visual field which, as a result, becomes subjectively blind. Residual subcortical processing of visual information, however, is assumed to occur in the entire visual field. As we have reported earlier, these patients show significant startle reflex potentiation when a threat-related visual stimulus is shown in their blind visual field. Critically, this was associated with an increase of brain activity in somatosensory-related areas, and an increase in experienced negative affect. Here, we investigated the patients' response when the visual stimulus was shown in the sighted visual field, that is, when it was visible and cortically processed. Despite the fact that startle reflex potentiation was similar in the blind and sighted visual field, patients reported significantly less negative affect during stimulation of the sighted visual field. In other words, when the visual stimulus was visible and received full cortical processing, the patients' phenomenal experience of affect did not closely reflect somatic changes. This decoupling of phenomenal affective experience and somatic changes was associated with an increase of activity in the left ventrolateral prefrontal cortex and a decrease of affect-related somatosensory activity. Moreover, patients who showed stronger left ventrolateral prefrontal cortex activity tended to show a stronger decrease of affect-related somatosensory activity. 
Our findings show that similar affective somatic changes can be associated with different phenomenal experiences of affect, depending on the depth of cortical processing. They are in line with a model in which the left ventrolateral prefrontal cortex is a relay station that integrates information about subcortically triggered somatic responses and information resulting from in-depth cortical stimulus processing. Tentatively, we suggest that the observed decoupling of somatic responses and experienced affect, and the reduction of negative phenomenal experience, can be explained by a left ventrolateral prefrontal cortex-mediated inhibition of affect-related somatosensory activity.
Functional neuroanatomy of visual masking deficits in schizophrenia.
Green, Michael F; Lee, Junghee; Cohen, Mark S; Engel, Steven A; Korb, Alexander S; Nuechterlein, Keith H; Wynn, Jonathan K; Glahn, David C
2009-12-01
Visual masking procedures assess the earliest stages of visual processing. Patients with schizophrenia reliably show deficits on visual masking, and these procedures have been used to explore vulnerability to schizophrenia, probe underlying neural circuits, and help explain functional outcome. To identify and compare regional brain activity associated with one form of visual masking (ie, backward masking) in patients with schizophrenia and healthy controls, subjects received functional magnetic resonance imaging scans. While in the scanner, subjects performed a backward masking task and were given 3 functional localizer activation scans to identify early visual processing regions of interest (ROIs). The study was conducted at the University of California, Los Angeles, and the Department of Veterans Affairs Greater Los Angeles Healthcare System, with 19 patients with schizophrenia and 19 healthy control subjects. The main outcome measure was the magnitude of the functional magnetic resonance imaging signal during backward masking. Two ROIs (the lateral occipital complex [LO] and the human motion-selective cortex [hMT+]) showed sensitivity to the effects of masking, meaning that signal in these areas increased as the target became more visible. Patients had lower activation than controls in LO across all levels of visibility but did not differ in other visual processing ROIs. Using whole-brain analyses, we also identified areas outside the ROIs that were sensitive to masking effects (including bilateral inferior parietal lobe and thalamus), but the groups did not differ in signal magnitude in these areas. The study results support a key role for LO in visual masking, consistent with previous studies in healthy controls. The current results indicate that patients fail to activate LO to the same extent as controls during visual processing regardless of stimulus visibility, suggesting a neural basis for the visual masking deficit, and possibly other visual integration deficits, in schizophrenia.
Effects of aging on audio-visual speech integration.
Huyse, Aurélie; Leybaert, Jacqueline; Berthommier, Frédéric
2014-10-01
This study investigated the impact of aging on audio-visual speech integration. A syllable identification task was presented in auditory-only, visual-only, and audio-visual congruent and incongruent conditions. Visual cues were either degraded or unmodified. Stimuli were embedded in stationary noise alternating with modulated noise. Fifteen young adults and 15 older adults participated in this study. Results showed that older adults had preserved lipreading abilities when the visual input was clear but not when it was degraded. The impact of aging on audio-visual integration also depended on the quality of the visual cues. In the visual clear condition, the audio-visual gain was similar in both groups and analyses in the framework of the fuzzy-logical model of perception confirmed that older adults did not differ from younger adults in their audio-visual integration abilities. In the visual reduction condition, the audio-visual gain was reduced in the older group, but only when the noise was stationary, suggesting that older participants could compensate for the loss of lipreading abilities by using the auditory information available in the valleys of the noise. The fuzzy-logical model of perception confirmed the significant impact of aging on audio-visual integration by showing an increased weight of audition in the older group.
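The fuzzy-logical model of perception used in the analysis above treats each modality as contributing an independent degree of support for each response alternative; supports are multiplied and normalized to yield response probabilities. A sketch of that rule, with made-up support values (not the study's data):

```python
# Illustrative sketch of the fuzzy-logical model of perception (FLMP):
# per-modality supports in [0, 1] are multiplied per alternative and
# normalized across alternatives. Values below are hypothetical.

def flmp(auditory, visual):
    """Response probabilities from per-alternative auditory/visual support."""
    support = [a * v for a, v in zip(auditory, visual)]
    total = sum(support)
    return [s / total for s in support]

if __name__ == "__main__":
    # Two alternatives (e.g. /ba/ vs /da/): audition weakly favors the
    # first, clear vision strongly favors the second, so it dominates.
    probs = flmp(auditory=[0.6, 0.4], visual=[0.1, 0.9])
    print([round(p, 3) for p in probs])  # [0.143, 0.857]
```

Fitting the per-modality supports to the identification data is what lets the model separate integration ability from unimodal (e.g. lipreading) ability.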
KinView: A visual comparative sequence analysis tool for integrated kinome research
McSkimming, Daniel Ian; Dastgheib, Shima; Baffi, Timothy R.; Byrne, Dominic P.; Ferries, Samantha; Scott, Steven Thomas; Newton, Alexandra C.; Eyers, Claire E.; Kochut, Krzysztof J.; Eyers, Patrick A.
2017-01-01
Multiple sequence alignments (MSAs) are a fundamental analysis tool used throughout biology to investigate relationships between protein sequence, structure, function, evolutionary history, and patterns of disease-associated variants. However, their widespread application in systems biology research is currently hindered by the lack of user-friendly tools to simultaneously visualize, manipulate and query the information conceptualized in large sequence alignments, and the challenges in integrating MSAs with multiple orthogonal data such as cancer variants and post-translational modifications, which are often stored in heterogeneous data sources and formats. Here, we present the Multiple Sequence Alignment Ontology (MSAOnt), which represents a profile or consensus alignment in an ontological format. Subsets of the alignment are easily selected through the SPARQL Protocol and RDF Query Language for downstream statistical analysis or visualization. We have also created the Kinome Viewer (KinView), an interactive integrative visualization that places eukaryotic protein kinase cancer variants in the context of natural sequence variation and experimentally determined post-translational modifications, which play central roles in the regulation of cellular signaling pathways. Using KinView, we identified differential phosphorylation patterns between tyrosine and serine/threonine kinases in the activation segment, a major kinase regulatory region that is often mutated in proliferative diseases. We discuss cancer variants that disrupt phosphorylation sites in the activation segment, and show how KinView can be used as a comparative tool to identify differences and similarities in natural variation, cancer variants and post-translational modifications between kinase groups, families and subfamilies. Based on KinView comparisons, we identify and experimentally characterize a regulatory tyrosine (Y177) in the PLK4 C-terminal activation segment region termed the P+1 loop.
To further demonstrate the application of KinView in hypothesis generation and testing, we formulate and validate a hypothesis explaining a novel predicted loss-of-function variant (D523N) in the regulatory spine of PKCβ, a recently identified tumor suppressor kinase. KinView provides a novel, extensible interface for performing comparative analyses between subsets of kinases and for integrating multiple types of residue-specific annotations in user-friendly formats. PMID:27731453
Temporal integration of visual signals in lens compensation (a review)
Zhu, Xiaoying
2013-01-01
Postnatal eye growth is controlled by visual signals. When wearing a positive lens that causes images to be focused in front of the retina (myopic defocus), the eye reduces its rate of ocular elongation and increases choroidal thickness to move the retina forward to meet the focal plane of the eye. When wearing a negative lens that causes images to be focused behind the retina (hyperopic defocus), the opposite happens. This review summarizes how the retina integrates the constantly changing visual signals in a non-linear fashion to guide eye growth in chicks: (1a) When myopic or hyperopic defocus is interrupted by a daily episode of normal vision, normal vision is more effective in reducing myopia caused by hyperopic defocus than in reducing hyperopia caused by myopic defocus; (1b) when the eye experiences alternating myopic and hyperopic defocus, the eye is more sensitive to myopic defocus than to hyperopic defocus and tends to develop hyperopia, even if the duration of hyperopic defocus is much longer than the duration of myopic defocus; (2) when the eye experiences brief, repeated episodes of defocus by wearing either positive or negative lenses, lens compensation depends on the frequency and duration of individual episodes of lens wear, not just the total daily duration of lens wear; and (3) further analysis of the time constants for the hypothesized internal emmetropization signals show that, while it takes approximately the same amount of time for the signals to rise and saturate during lens-wearing episodes, the decline of the signals between episodes depends strongly on the sign of defocus and the ocular component. Although most extensively studied in chicks, the nonlinear temporal integration of visual signals has been found in other animal models. These findings may help explain the complex etiology of myopia in school-aged children and suggest ways to slow down myopia progression. PMID:23470505
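The episode-dependent dynamics summarized in point (3) can be caricatured as a leaky integrator: a hypothesized internal signal rises and saturates during lens-wear episodes and decays between them. The sketch below is a hypothetical illustration of that qualitative behavior only; the update rule, time constants and baseline are assumptions, not values from the review:

```python
import math

# Hypothetical leaky-integrator sketch of the temporal dynamics the
# review describes: an internal emmetropization signal (scaled 0..1)
# rises toward saturation during lens wear and decays between episodes.
# tau_rise and tau_decay are illustrative, not fitted values; the review
# notes the decay depends on the sign of defocus, which a full model
# would capture with sign-specific tau_decay values.

def episode(signal, wear_min, gap_min, tau_rise=10.0, tau_decay=30.0):
    """One lens-wear episode followed by a gap; returns the updated signal."""
    # Exponential rise toward saturation (1.0) during wear.
    signal = 1.0 - (1.0 - signal) * math.exp(-wear_min / tau_rise)
    # Exponential decay toward baseline (0.0) between episodes.
    signal *= math.exp(-gap_min / tau_decay)
    return signal

if __name__ == "__main__":
    # Same total daily wear (60 min) split into six brief episodes:
    # the steady-state signal differs from a single 60-min episode,
    # echoing the finding that frequency and episode duration matter,
    # not just total daily duration.
    s = 0.0
    for _ in range(6):          # 6 x 10-min episodes, 60-min gaps
        s = episode(s, 10, 60)
    single = episode(0.0, 60, 360)
    print(round(s, 3), round(single, 3))
```

The point of the caricature is structural: because rise and decay are nonlinear in time, total daily exposure alone cannot predict the integrated signal.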
Masters, Michael; Bruner, Emiliano; Queer, Sarah; Traynor, Sarah; Senjem, Jess
2015-01-01
Recent research on the visual system has focused on investigating the relationship among eye (ocular), orbital, and visual cortical anatomy in humans. This issue is relevant in evolutionary and medical fields. In terms of evolution, only in modern humans and Neandertals are the orbits positioned beneath the frontal lobes, with consequent structural constraints. In terms of medicine, such constraints can be associated with minor deformation of the eye, vision defects, and patterns of integration among these features, and in association with the frontal lobes, are important to consider in reconstructive surgery. Further study is therefore necessary to establish how these variables are related, and to what extent ocular size is associated with orbital and cerebral cortical volumes. Relationships among these anatomical components were investigated using magnetic resonance images from a large sample of 83 individuals, which also included each subject’s body height, age, sex, and uncorrected visual acuity score. Occipital and frontal gyri volumes were calculated using two different cortical parcellation tools in order to provide a better understanding of how the eye and orbit vary in relation to visual cortical gyri, and frontal cortical gyri which are not directly related to visual processing. Results indicated that ocular and orbital volumes were weakly correlated, and that eye volume explains only a small proportion of the variance in orbital volume. Ocular and orbital volumes were also found to be equally and, in most cases, more highly correlated with five frontal lobe gyri than with occipital lobe gyri associated with V1, V2, and V3 of the visual cortex. Additionally, after accounting for age and sex variation, the relationship between ocular and total visual cortical volume was no longer statistically significant, but remained significantly related to total frontal lobe volume. 
The relationship between orbital and visual cortical volumes remained significant for a number of occipital lobe gyri even after accounting for these cofactors, but was again found to be more highly correlated with the frontal cortex than with the occipital cortex. These results indicate that eye volume explains only a small amount of variation in orbital and visual cortical volume, and that the eye and orbit are generally more structurally associated with the frontal lobes than they are functionally associated with the visual cortex of the occipital lobes. Results also demonstrate that these components of the visual system are highly complex and influenced by a multitude of factors in humans. PMID:26250048
Nelson, D E; Takahashi, J S
1991-01-01
1. Light-induced phase shifts of the circadian rhythm of wheel-running activity were used to measure the photic sensitivity of a circadian pacemaker and the visual pathway that conveys light information to it in the golden hamster (Mesocricetus auratus). The sensitivity to stimulus irradiance and duration was assessed by measuring the magnitude of phase-shift responses to photic stimuli of different irradiance and duration. The visual sensitivity was also measured at three different phases of the circadian rhythm. 2. The stimulus-response curves measured at different circadian phases suggest that the maximum phase-shift is the only aspect of visual responsivity to change as a function of the circadian day. The half-saturation constants (sigma) for the stimulus-response curves are not significantly different over the three circadian phases tested. The photic sensitivity to irradiance (1/sigma) appears to remain constant over the circadian day. 3. The hamster circadian pacemaker and the photoreceptive system that subserves it are more sensitive to the irradiance of longer-duration stimuli than to irradiance of briefer stimuli. The system is maximally sensitive to the irradiance of stimuli of 300 s and longer in duration. A quantitative model is presented to explain the changes that occur in the stimulus-response curves as a function of photic stimulus duration. 4. The threshold for photic stimulation of the hamster circadian pacemaker is also quite high. The threshold irradiance (the minimum irradiance necessary to induce statistically significant responses) is approximately 10(11) photons cm-2 s-1 for optimal stimulus durations. This threshold is equivalent to a luminance at the cornea of 0.1 cd m-2. 5. We also measured the sensitivity of this visual pathway to the total number of photons in a stimulus. This system is maximally sensitive to photons in stimuli between 30 and 3600 s in duration. 
The maximum quantum efficiency of photic integration occurs in 300 s stimuli. 6. These results suggest that the visual pathways that convey light information to the mammalian circadian pacemaker possess several unique characteristics. These pathways are relatively insensitive to light irradiance and also integrate light inputs over relatively long durations. This visual system, therefore, possesses an optimal sensitivity of 'tuning' to total photons delivered in stimuli of several minutes in duration. Together these characteristics may make this visual system unresponsive to environmental 'noise' that would interfere with the entrainment of circadian rhythms to light-dark cycles. PMID:1895235
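The stimulus-response curves with a half-saturation constant σ described in point 2 are typically modeled with a saturating hyperbolic (Michaelis-Menten/Naka-Rushton) form. A sketch of that functional form, with illustrative parameters rather than the fitted hamster values:

```python
# Hedged sketch of a saturating stimulus-response function of the kind
# described: response grows hyperbolically with irradiance and is
# half-maximal at the half-saturation irradiance sigma. r_max and sigma
# below are illustrative placeholders, not the paper's fitted values.

def phase_shift(irradiance, r_max=120.0, sigma=1e12):
    """Phase shift (arbitrary units) vs irradiance (photons cm^-2 s^-1)."""
    return r_max * irradiance / (irradiance + sigma)

if __name__ == "__main__":
    # At the half-saturation irradiance the response is half-maximal.
    print(phase_shift(1e12))        # 60.0
    # Well below sigma the response is approximately linear in irradiance.
    print(round(phase_shift(1e10), 2))
```

The paper's key observation maps onto this form as a duration dependence: in effect, longer stimuli shift the curve's sensitivity (1/σ) until saturation at around 300 s, while the maximum response varies with circadian phase.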
Variability in visual working memory ability limits the efficiency of perceptual decision making.
Ester, Edward F; Ho, Tiffany C; Brown, Scott D; Serences, John T
2014-04-02
The ability to make rapid and accurate decisions based on limited sensory information is a critical component of visual cognition. Available evidence suggests that simple perceptual discriminations are based on the accumulation and integration of sensory evidence over time. However, the memory system(s) mediating this accumulation are unclear. One candidate system is working memory (WM), which enables the temporary maintenance of information in a readily accessible state. Here, we show that individual variability in WM capacity is strongly correlated with the speed of evidence accumulation in speeded two-alternative forced-choice tasks. This relationship generalized across different decision-making tasks and could not be easily explained by variability in general arousal or vigilance. Moreover, we show that performing a difficult discrimination task while maintaining a concurrent memory load has a deleterious effect on memory performance, suggesting that WM storage and decision making are directly linked.
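The accumulation-to-bound idea behind such analyses can be sketched as a noisy random walk: evidence is summed until it crosses a decision bound, and a higher drift rate (here linked, by assumption, to higher WM capacity) yields faster decisions on average. All parameters below are illustrative, not the study's fitted model:

```python
import random

# Hypothetical sketch of sequential evidence accumulation in a speeded
# 2AFC task: noisy evidence is integrated until it crosses +bound or
# -bound. Drift, bound and noise values are illustrative only.

def accumulate(drift, bound=1.0, noise=0.05, dt=0.01, seed=0):
    """Return (choice, decision_time) for one simulated trial."""
    rng = random.Random(seed)
    evidence, t = 0.0, 0.0
    while abs(evidence) < bound:
        evidence += drift * dt + rng.gauss(0.0, noise)
        t += dt
    return (evidence > 0, t)

if __name__ == "__main__":
    # Averaged over matched noise streams, a stronger drift rate
    # reaches the bound sooner.
    fast = [accumulate(0.8, seed=s)[1] for s in range(50)]
    slow = [accumulate(0.2, seed=s)[1] for s in range(50)]
    print(sum(fast) / 50, sum(slow) / 50)
```

The study's correlational claim corresponds, in this caricature, to high-capacity individuals behaving like the high-drift simulation.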
Boomsma, Christine; Pahl, Sabine; Andrade, Jackie
2016-01-01
Climate change and other long-term environmental issues are often perceived as abstract and difficult to imagine. The images a person associates with environmental change, i.e., a person's environmental mental images, can be influenced by the visual information they come across in the public domain. This paper reviews the literature on this topic across social, environmental, and cognitive psychology, and the wider social sciences; thereby responding to a call for more critical investigations into people's responses to visual information. By integrating the literature we come to a better understanding of the lack in vivid and concrete environmental mental imagery reported by the public, the link between environmental mental images and goals, and how affectively charged external images could help in making mental imagery less abstract. Preliminary research reports on the development of a new measure of environmental mental imagery and three tests of the relationship between environmental mental imagery, pro-environmental goals and behavior. Furthermore, the paper provides a program of research, drawing upon approaches from different disciplines, to set out the next steps needed to examine how and why we should encourage the public to imagine environmental change.
Cognitive and Developmental Influences in Visual-Motor Integration Skills in Young Children
ERIC Educational Resources Information Center
Decker, Scott L.; Englund, Julia A.; Carboni, Jessica A.; Brooks, Janell H.
2011-01-01
Measures of visual-motor integration skills continue to be widely used in psychological assessments with children. However, the construct validity of many visual-motor integration measures remains unclear. In this study, we investigated the relative contributions of maturation and cognitive skills to the development of visual-motor integration…
Reproduction of auditory and visual standards in monochannel cochlear implant users.
Kanabus, Magdalena; Szelag, Elzbieta; Kolodziejczyk, Iwona; Szuchnik, Joanna
2004-01-01
The temporal reproduction of standard durations ranging from 1 to 9 seconds was investigated in monochannel cochlear implant (CI) users and in normally hearing subjects for the auditory and visual modality. The results showed that the pattern of performance in patients depended on their level of auditory comprehension. Results for CI users who displayed relatively good auditory comprehension did not differ from those of normally hearing subjects for either modality. Patients with poor auditory comprehension significantly overestimated shorter auditory standards (1, 1.5 and 2.5 s), compared to both patients with good comprehension and controls. For the visual modality the between-group comparisons were not significant. These deficits in the reproduction of auditory standards were explained in accordance with both the attentional-gate model and the role of working memory in prospective time judgment. The impairments described above can influence the functioning of the temporal integration mechanism that is crucial for auditory speech comprehension at the level of words and phrases. We postulate that the deficits in time reproduction of short standards may be one of the possible reasons for poor speech understanding in monochannel CI users.
Hockenberry, Marilyn J; Krull, Kevin R; Insel, Kathleen C; Harris, Lynnette L; Gundy, Patricia M; Adkins, Kristin B; Pasvogel, Alice E; Taylor, Olga A; Koerner, Kari M; Montgomery, David W; Ross, Adam K; Hill, Adam; Moore, Ida M
2015-09-01
To examine associations among oxidative stress, fine and visual-motor abilities, and behavioral adjustment in children receiving chemotherapy for acute lymphoblastic leukemia (ALL). A prospective, repeated-measures design. Two pediatric oncology settings in the southwestern United States. 89 children with ALL were followed from diagnosis to the end of chemotherapy. Serial cerebrospinal fluid samples were collected during scheduled lumbar punctures and analyzed for oxidative stress biomarkers. Children completed fine motor dexterity, visual processing speed, and visual-motor integration measures at three time points. Parents completed child behavior ratings at the same times. Oxidative stress, fine motor dexterity, visual processing, visual-motor integration, and behavioral adjustment. Children with ALL had below-average fine motor dexterity, visual processing speed, and visual-motor integration following the induction phase of ALL therapy. By the end of therapy, visual processing speed normalized, while fine motor dexterity and visual-motor integration remained below average. Oxidative stress measures correlated with fine motor dexterity and visual-motor integration. Decreased motor functioning was associated with increased hyperactivity and anxiety. Oxidative stress occurs following chemotherapy for childhood ALL and is related to impaired fine motor skills and visual symptoms. Early intervention should be considered to prevent fine motor and visual-spatial deficits, as well as behavioral problems.
Buchs, Galit; Maidenbaum, Shachar; Levy-Tzedek, Shelly; Amedi, Amir
2015-01-01
Purpose: To visually perceive our surroundings we constantly move our eyes, focus on particular details, and then integrate them into a combined whole. Current visual rehabilitation methods, both invasive (like bionic eyes) and non-invasive (like Sensory Substitution Devices, SSDs), down-sample visual stimuli into low-resolution images. Zooming in to sub-parts of the scene could potentially improve detail perception. Can congenitally blind individuals integrate a 'visual' scene when offered this information via different sensory modalities, such as audition? Can they integrate visual information, perceived in parts, into larger percepts despite never having had any visual experience? Methods: We explored these questions using a zooming-in functionality embedded in the EyeMusic visual-to-auditory SSD. Eight blind participants were tasked with identifying cartoon faces by integrating their individual components recognized via the EyeMusic's zooming mechanism. Results: After specialized training of just 6–10 hours, blind participants successfully and actively integrated facial features into cartooned identities in 79±18% of the trials, a highly significant result (chance level 10%; rank-sum P < 1.55E-04). Conclusions: These findings show that even users who lacked any previous visual experience whatsoever can indeed integrate this visual information with increased resolution. This potentially has important practical visual rehabilitation implications for both invasive and non-invasive methods. PMID:26518671
Effect of auditory presentation of words on color naming: the intermodal Stroop effect.
Shimada, H
1990-06-01
To test two hypotheses (the automatic parallel-processing model vs. the feature-integration theory) using the Stroop effect, an intermodal presentation method was introduced. The intermodal presentation (auditory presentation of the distractor word and visual presentation of a color patch) completely separates the color and word information. Subjects were required to name the color patch on the CRT and to ignore the auditory color word. A 5 (stimulus onset asynchronies) x 4 (levels of congruency) analysis of variance with repeated measures was performed on the response times. Both main effects and the interaction were significant. The findings indicate that the Stroop effect occurs even when the color and word components are not presented in the same spatial location. These results suggest that the feature-integration theory cannot explain the mechanisms underlying the Stroop effect.
Neural dynamics of grouping and segmentation explain properties of visual crowding.
Francis, Gregory; Manassi, Mauro; Herzog, Michael H
2017-07-01
Investigations of visual crowding, where a target is difficult to identify because of flanking elements, have largely used a theoretical perspective based on local interactions where flanking elements pool with or substitute for properties of the target. This successful theoretical approach has motivated a wide variety of empirical investigations to identify mechanisms that cause crowding, and it has suggested practical applications to mitigate crowding effects. However, this theoretical approach has been unable to account for a parallel set of findings that crowding is influenced by long-range perceptual grouping effects. When the target and flankers are perceived as part of separate visual groups, crowding tends to be quite weak. Here, we describe how theoretical mechanisms for grouping and segmentation in cortical neural circuits can account for a wide variety of these long-range grouping effects. Building on previous work, we explain how crowding occurs in the model and explain how grouping in the model involves connected boundary signals that represent a key aspect of visual information. We then introduce new circuits that allow nonspecific top-down selection signals to flow along connected boundaries or within a surface contained by boundaries and thereby induce a segmentation that can separate the visual information corresponding to the flankers from the visual information corresponding to the target. When such segmentation occurs, crowding is shown to be weak. We compare the model's behavior to 5 sets of experimental findings on visual crowding and show that the model does a good job explaining the key empirical findings. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Phonological skills, visual attention span, and visual stress in developmental dyslexia.
Saksida, Amanda; Iannuzzi, Stéphanie; Bogliotti, Caroline; Chaix, Yves; Démonet, Jean-François; Bricout, Laure; Billard, Catherine; Nguyen-Morel, Marie-Ange; Le Heuzey, Marie-France; Soares-Boucaud, Isabelle; George, Florence; Ziegler, Johannes C; Ramus, Franck
2016-10-01
In this study, we concurrently investigated 3 possible causes of dyslexia: a phonological deficit, visual stress, and a reduced visual attention span, in a large population of 164 dyslexic and 118 control French children aged between 8 and 13 years. We found that most dyslexic children showed a phonological deficit, either in terms of response accuracy (92.1% of the sample), speed (84.8%), or both (79.3%). Deficits in visual attention span, as measured by partial report ability, affected 28.1% of dyslexic participants, all of whom also showed a phonological deficit. Visual stress, as measured by subjective reports of visual discomfort, affected 5.5% of dyslexic participants, no more than controls (8.5%). Although phonological variables explained a large amount of variance in literacy skills, visual variables did not explain any additional variance. Finally, children with comorbid phonological and visual deficits did not show more severe reading disability than children with a pure phonological deficit. These results (a) confirm the importance of phonological deficits in dyslexia; (b) suggest that visual attention span may play a role, but a minor one, at least in this population; and (c) do not support any involvement of visual stress in dyslexia. Among the factors that may explain some differences with previously published studies, the present sample is characterized by very stringent inclusion criteria, both in terms of the severity of reading disability and in terms of exclusion of comorbidities. This may exacerbate the role of phonological deficits to the detriment of other factors playing a role in reading acquisition. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Visual White Matter Integrity in Schizophrenia
Butler, Pamela D.; Hoptman, Matthew J.; Nierenberg, Jay; Foxe, John J.; Javitt, Daniel C.; Lim, Kelvin O.
2007-01-01
Objective Patients with schizophrenia have visual-processing deficits. This study examines visual white matter integrity as a potential mechanism for these deficits. Method Diffusion tensor imaging was used to examine white matter integrity at four levels of the visual system in 17 patients with schizophrenia and 21 comparison subjects. The levels examined were the optic radiations, the striate cortex, the inferior parietal lobule, and the fusiform gyrus. Results Schizophrenia patients showed a significant decrease in fractional anisotropy in the optic radiations but not in any other region. Conclusions This finding indicates that white matter integrity is more impaired at initial input, rather than at higher levels of the visual system, and supports the hypothesis that visual-processing deficits occur at the early stages of processing. PMID:17074957
Garaizar, Pablo; Reips, Ulf-Dietrich
2015-09-01
DMDX is a software package for the experimental control and timing of stimulus display for Microsoft Windows systems. DMDX is reliable, flexible, millisecond accurate, and can be downloaded free of charge; therefore it has become very popular among experimental researchers. However, setting up a DMDX-based experiment is burdensome because of its command-based interface. Further, DMDX relies on RTF files in which parts of the stimuli, design, and procedure of an experiment are defined in a complicated (DMASTR-compatible) syntax. Other experiment software, such as E-Prime, PsychoPy, and WEXTOR, became successful as a result of integrated visual authoring tools. Such an intuitive interface was lacking for DMDX. We therefore created and present here Visual DMDX (http://visualdmdx.com/), an HTML5-based web interface to set up experiments and export them to the DMDX item-file format in RTF. Visual DMDX offers most of the features available from the rich DMDX/DMASTR syntax, and it is a useful tool to support researchers who are new to DMDX. Both old and modern versions of the DMDX syntax are supported. Further, with Visual DMDX, we go beyond DMDX by adding export to JSON (a versatile web format), easy backup, and a preview option for experiments. In two examples, one experiment each on lexical decision making and affective priming, we explain in a step-by-step fashion how to create experiments using Visual DMDX. We release Visual DMDX under an open-source license to foster collaboration in its continuous improvement.
Predicting Visual Disability in Glaucoma With Combinations of Vision Measures.
Lin, Stephanie; Mihailovic, Aleksandra; West, Sheila K; Johnson, Chris A; Friedman, David S; Kong, Xiangrong; Ramulu, Pradeep Y
2018-04-01
We characterized vision in glaucoma using seven visual measures, with the goals of determining the dimensionality of vision, and how many and which visual measures best model activity limitation. We analyzed cross-sectional data from 150 older adults with glaucoma, collecting seven visual measures: integrated visual field (IVF) sensitivity, visual acuity, contrast sensitivity (CS), area under the log CS function, color vision, stereoacuity, and visual acuity with noise. Principal component analysis was used to examine the dimensionality of vision. Multivariable regression models using one, two, or three vision tests (and nonvisual predictors) were compared to determine which was best associated with Rasch-analyzed Glaucoma Quality of Life-15 (GQL-15) person measure scores. The participants had a mean age of 70.2 and IVF sensitivity of 26.6 dB, suggesting mild-to-moderate glaucoma. All seven vision measures loaded similarly onto the first principal component (eigenvectors, 0.220-0.442), which explained 56.9% of the variance in vision scores. In models for GQL scores, the maximum adjusted R² values obtained were 0.263, 0.296, and 0.301 when using one, two, and three vision tests in the models, respectively, though several models in each category had similar adjusted R² values. All three of the best-performing models contained CS. Vision in glaucoma is a multidimensional construct that can be described by several variably correlated vision measures. Measuring more than two vision tests does not substantially improve models for activity limitation. A sufficient description of disability in glaucoma can be obtained using one to two vision tests, especially VF and CS.
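The two analysis steps above, principal components of the seven vision measures and comparison of regression models by adjusted R², can be sketched as follows. This is a minimal illustration on simulated data; the variables and numbers are hypothetical stand-ins, not the study's dataset:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated scores for 150 participants on 7 correlated vision measures
# (hypothetical stand-ins for VF sensitivity, acuity, CS, and so on).
latent = rng.normal(size=(150, 1))
vision = latent + 0.8 * rng.normal(size=(150, 7))

# Dimensionality: eigendecomposition of the correlation matrix; the
# first eigenvalue's share is the variance explained by PC1.
corr = np.corrcoef(vision, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
pc1_share = eigvals[0] / eigvals.sum()

# Model comparison: adjusted R^2 as vision tests are added as predictors.
def adjusted_r2(y, X):
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    r2 = 1.0 - np.var(y - X1 @ beta) / np.var(y)
    n, k = len(y), X.shape[1]
    return 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)

gql = latent[:, 0] + rng.normal(size=150)  # hypothetical activity-limitation score
one_test = adjusted_r2(gql, vision[:, :1])
three_tests = adjusted_r2(gql, vision[:, :3])
```

With real scores, `pc1_share` would correspond to the 56.9% figure reported, and the adjusted R² comparison mirrors the study's one-, two-, and three-test model selection.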
An exploratory study of temporal integration in the peripheral retina of myopes
NASA Astrophysics Data System (ADS)
Macedo, Antonio F.; Encarnação, Tito J.; Vilarinho, Daniel; Baptista, António M. G.
2017-08-01
The visual system takes time to respond to visual stimuli: neurons need to accumulate information over a time span in order to fire. Visual information perceived by the peripheral retina might be impaired by imperfect peripheral optics, leading to myopia development. This study explored the effect of eccentricity, moderate myopia, and peripheral refraction on temporal visual integration. Myopes and emmetropes showed similar performance at detecting briefly flashed stimuli in different retinal locations. Our results show evidence that moderate myopes have normal visual integration when refractive errors are corrected with contact lenses; however, the tendency toward increased temporal integration thresholds observed in myopes deserves further investigation.
Common Data Servers as a Foundation for Specialized Services
NASA Astrophysics Data System (ADS)
Burger, E. F.; Schweitzer, R.; O'Brien, K.; Manke, A. B.; Smith, K. M.
2017-12-01
NOAA's Pacific Marine Environmental Laboratory (PMEL) hosts a broad range of research efforts that span many scientific and environmental research disciplines. Many of these research projects have their own data streams that are as diverse as the research. Data are collected using various platforms, including innovative new platforms such as Saildrones and autonomous profilers. With its requirements for public access to federally funded research results and data, the 2013 White House Office of Science and Technology Policy memo on Public Access to Research Results (PARR) changed the data landscape for Federal agencies. In 2015, with support from the PMEL Director, the PMEL Science Data Integration Group (SDIG) initiated a multi-year effort to formulate and implement an integrated data-management strategy for PMEL research efforts. The PMEL integrated data management strategy will provide data access, visualization, and some archive services for PMEL data and use existing and proven frameworks for this capability. In addition to these foundational data services, the data access and visualization frameworks are also leveraged to provide enhanced services to scientists. One enhanced service developed is a data management "dashboard". This application provides scientists with a snapshot of their data assets, access to these data, a map view of data locations, and information on archival status. Ideally, information on the dashboard continually updates to accurately reflect the project's data asset status. This poster explains how frameworks such as ERDDAP and LAS were used as a foundation for the development of custom services, and describes the functionality of the PMEL data management dashboard. We will also highlight accomplishments of the implementation of the PMEL integrated data management strategy.
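ERDDAP, one of the frameworks named above, exposes tabular data through RESTful URLs of the form `{server}/erddap/tabledap/{datasetID}.{fileType}?{variables}&{constraints}`. A minimal sketch of constructing such a request follows; the server, dataset ID, and variable names are hypothetical placeholders, not PMEL's actual holdings:

```python
from urllib.parse import quote

# Build an ERDDAP "tabledap" request URL. The server, dataset ID, and
# variable names below are hypothetical placeholders for illustration.
def tabledap_url(server, dataset_id, variables, constraints, file_type="csv"):
    query = ",".join(variables)
    for c in constraints:
        # Keep ERDDAP's constraint operators (=, >, <, !) readable.
        query += "&" + quote(c, safe="=><!():")
    return f"{server}/erddap/tabledap/{dataset_id}.{file_type}?{query}"

url = tabledap_url(
    "https://example.noaa.gov",
    "sample_sst_dataset",
    ["time", "latitude", "longitude", "sst"],
    ["time>=2017-01-01T00:00:00Z"],
)
```

Because the whole query lives in the URL, the same request pattern serves both machine access (CSV, JSON, netCDF file types) and the kind of dashboard and visualization services the abstract describes.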
Memorable Audiovisual Narratives Synchronize Sensory and Supramodal Neural Responses
2016-01-01
Abstract Our brains integrate information across sensory modalities to generate perceptual experiences and form memories. However, it is difficult to determine the conditions under which multisensory stimulation will benefit or hinder the retrieval of everyday experiences. We hypothesized that the determining factor is the reliability of information processing during stimulus presentation, which can be measured through intersubject correlation of stimulus-evoked activity. We therefore presented biographical auditory narratives and visual animations to 72 human subjects visually, auditorily, or combined, while neural activity was recorded using electroencephalography. Memory for the narrated information, contained in the auditory stream, was tested 3 weeks later. While the visual stimulus alone led to no meaningful retrieval, this related stimulus improved memory when it was combined with the story, even when it was temporally incongruent with the audio. Further, individuals with better subsequent memory elicited neural responses during encoding that were more correlated with their peers. Surprisingly, portions of this predictive synchronized activity were present regardless of the sensory modality of the stimulus. These data suggest that the strength of sensory and supramodal activity is predictive of memory performance after 3 weeks, and that neural synchrony may explain the mnemonic benefit of the functionally uninformative visual context observed for these real-world stimuli. PMID:27844062
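The intersubject correlation measure hypothesized above, how reliably each subject's stimulus-evoked response tracks their peers', is commonly computed by correlating each subject's time course with the average of everyone else's. A toy sketch on simulated signals (not the study's EEG pipeline):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated stimulus-evoked time courses: a shared stimulus-driven
# component plus subject-specific noise (hypothetical, not real EEG).
n_subjects, n_samples = 12, 500
shared = rng.normal(size=n_samples)
data = shared + 1.5 * rng.normal(size=(n_subjects, n_samples))

def intersubject_correlation(x):
    """Correlate each subject's response with the mean of all others."""
    iscs = []
    for s in range(len(x)):
        others = np.delete(x, s, axis=0).mean(axis=0)
        iscs.append(np.corrcoef(x[s], others)[0, 1])
    return np.array(iscs)

isc = intersubject_correlation(data)  # one value per subject
```

On this logic, a subject whose evoked activity is more correlated with their peers' (a higher `isc` value) processed the stimulus more reliably, which is the quantity the study relates to later memory.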
Web-based visual analysis for high-throughput genomics
2013-01-01
Background Visualization plays an essential role in genomics research by making it possible to observe correlations and trends in large datasets as well as communicate findings to others. Visual analysis, which combines visualization with analysis tools to enable seamless use of both approaches for scientific investigation, offers a powerful method for performing complex genomic analyses. However, there are numerous challenges that arise when creating rich, interactive Web-based visualizations/visual analysis applications for high-throughput genomics. These challenges include managing data flow from Web server to Web browser, integrating analysis tools and visualizations, and sharing visualizations with colleagues. Results We have created a platform that simplifies the creation of Web-based visualization/visual analysis applications for high-throughput genomics. This platform provides components that make it simple to efficiently query very large datasets, draw common representations of genomic data, integrate with analysis tools, and share or publish fully interactive visualizations. Using this platform, we have created a Circos-style genome-wide viewer, a generic scatter plot for correlation analysis, an interactive phylogenetic tree, a scalable genome browser for next-generation sequencing data, and an application for systematically exploring tool parameter spaces to find good parameter values. All visualizations are interactive and fully customizable. The platform is integrated with the Galaxy (http://galaxyproject.org) genomics workbench, making it easy to integrate new visual applications into Galaxy. Conclusions Visualization and visual analysis play an important role in high-throughput genomics experiments, and approaches are needed to make it easier to create applications for these activities. Our framework provides a foundation for creating Web-based visualizations and integrating them into Galaxy.
Finally, the visualizations we have created using the framework are useful tools for high-throughput genomics experiments. PMID:23758618
Poggel, Dorothe A; Treutwein, Bernhard; Calmanti, Claudia; Strasburger, Hans
2012-08-01
Part I described the topography of visual performance over the life span. Performance decline was explained only partly by deterioration of the optical apparatus. Part II therefore examines the influence of higher visual and cognitive functions. Visual field maps of static perimetry, double-pulse resolution (DPR), reaction times, and contrast thresholds for 95 healthy observers were correlated with measures of visual attention (alertness, divided attention, spatial cueing), visual search, and the size of the attention focus. Correlations with the attentional variables were substantial, particularly for variables of temporal processing. DPR thresholds depended on the size of the attention focus. The extraction of cognitive variables from the correlations between topographical variables and participant age substantially reduced those correlations. There is a systematic top-down influence on the aging of visual functions, particularly of temporal variables, that largely explains performance decline and the change of the topography over the life span.
Reinforcement Learning of Linking and Tracing Contours in Recurrent Neural Networks
Brosch, Tobias; Neumann, Heiko; Roelfsema, Pieter R.
2015-01-01
The processing of a visual stimulus can be subdivided into a number of stages. Upon stimulus presentation there is an early phase of feedforward processing where the visual information is propagated from lower to higher visual areas for the extraction of basic and complex stimulus features. This is followed by a later phase where horizontal connections within areas and feedback connections from higher areas back to lower areas come into play. In this later phase, image elements that are behaviorally relevant are grouped by Gestalt grouping rules and are labeled in the cortex with enhanced neuronal activity (object-based attention in psychology). Recent neurophysiological studies revealed that reward-based learning influences these recurrent grouping processes, but it is not well understood how rewards train recurrent circuits for perceptual organization. This paper examines the mechanisms for reward-based learning of new grouping rules. We derive a learning rule that can explain how rewards influence the information flow through feedforward, horizontal and feedback connections. We illustrate the efficiency with two tasks that have been used to study the neuronal correlates of perceptual organization in early visual cortex. The first task is called contour-integration and demands the integration of collinear contour elements into an elongated curve. We show how reward-based learning causes an enhancement of the representation of the to-be-grouped elements at early levels of a recurrent neural network, just as is observed in the visual cortex of monkeys. The second task is curve-tracing where the aim is to determine the endpoint of an elongated curve composed of connected image elements. If trained with the new learning rule, neural networks learn to propagate enhanced activity over the curve, in accordance with neurophysiological data. We close the paper with a number of model predictions that can be tested in future neurophysiological and computational studies. PMID:26496502
VizieR Online Data Catalog: M31 PHAT star clusters ages and masses (Fouesneau+, 2014)
NASA Astrophysics Data System (ADS)
Fouesneau, M.; Johnson, L. C.; Weisz, D. R.; Dalcanton, J. J.; Bell, E. F.; Bianchi, L.; Caldwell, N.; Gouliermis, D. A.; Guhathakurta, P.; Kalirai, J.; Larsen, S. S.; Rix, H.-W.; Seth, A. C.; Skillman, E. D.; Williams, B. F.
2017-07-01
For this paper, we use the list of 601 high-probability cluster candidates from the Johnson et al. (2012, J/ApJ/752/95) Year 1 catalog, which contains integrated photometry through six broadband filters from the UV to the near-infrared: F275W (UV), F336W (U), F475W (g), F814W (I), F110W (J), F160W (H). Clusters were detected by eye, primarily based on the F475W images, and visually classified based on their sizes, shapes, and concentrations as explained in Johnson et al. (2012, J/ApJ/752/95). (1 data file).
Implicit integration in a case of integrative visual agnosia.
Aviezer, Hillel; Landau, Ayelet N; Robertson, Lynn C; Peterson, Mary A; Soroker, Nachum; Sacher, Yaron; Bonneh, Yoram; Bentin, Shlomo
2007-05-15
We present a case (SE) with integrative visual agnosia following ischemic stroke affecting the right dorsal and the left ventral pathways of the visual system. Despite his inability to identify global hierarchical letters [Navon, D. (1977). Forest before trees: The precedence of global features in visual perception. Cognitive Psychology, 9, 353-383], and his dense object agnosia, SE showed normal global-to-local interference when responding to local letters in Navon hierarchical stimuli and significant picture-word identity priming in a semantic decision task for words. Since priming was absent if these features were scrambled, it stands to reason that these effects were not due to priming by distinctive features. The contrast between priming effects induced by coherent and scrambled stimuli is consistent with implicit but not explicit integration of features into a unified whole. We went on to show that possible/impossible object decisions were facilitated by words in a word-picture priming task, suggesting that prompts could activate perceptually integrated images in a backward fashion. We conclude that the absence of SE's ability to identify visual objects except through tedious serial construction reflects a deficit in accessing an integrated visual representation through bottom-up visual processing alone. However, top-down generated images can help activate these visual representations through semantic links.
Visual-auditory integration during speech imitation in autism.
Williams, Justin H G; Massaro, Dominic W; Peel, Natalie J; Bosseler, Alexis; Suddendorf, Thomas
2004-01-01
Children with autistic spectrum disorder (ASD) may have poor audio-visual integration, possibly reflecting dysfunctional 'mirror neuron' systems which have been hypothesised to be at the core of the condition. In the present study, a computer program, utilizing speech synthesizer software and a 'virtual' head (Baldi), delivered speech stimuli for identification in auditory, visual or bimodal conditions. Children with ASD were poorer than controls at recognizing stimuli in the unimodal conditions, but once performance on this measure was controlled for, no group difference was found in the bimodal condition. A group of participants with ASD were also trained to develop their speech-reading ability. Training improved visual accuracy and this also improved the children's ability to utilize visual information in their processing of speech. Overall results were compared to predictions from mathematical models based on integration and non-integration, and were most consistent with the integration model. We conclude that, whilst they are less accurate in recognizing stimuli in the unimodal condition, children with ASD show normal integration of visual and auditory speech stimuli. Given that training in recognition of visual speech was effective, children with ASD may benefit from multi-modal approaches in imitative therapy and language training.
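The integration model referred to here is, in Massaro's framework, plausibly the fuzzy logical model of perception (FLMP), in which unimodal support values combine multiplicatively. A minimal sketch, with made-up support values for illustration:

```python
# Fuzzy logical model of perception (FLMP): multiplicative integration
# of auditory support a and visual support v, each in [0, 1].
def flmp(a, v):
    return (a * v) / (a * v + (1 - a) * (1 - v))

# A simple non-integration benchmark: answer from a single modality,
# picked with probability w (a weighted average of unimodal supports).
def no_integration(a, v, w=0.5):
    return w * a + (1 - w) * v

# Hypothetical unimodal support for one response alternative:
a, v = 0.6, 0.9
integrated = flmp(a, v)          # ≈ 0.93, stronger than either cue alone
separate = no_integration(a, v)  # ≈ 0.75, between the two cues
```

The study's comparison works by fitting both kinds of model to identification rates across auditory, visual, and bimodal conditions; the key signature of integration is that the bimodal prediction can exceed either unimodal support, which the weighted-average model cannot produce.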
Electrophysiological evidence for Audio-visuo-lingual speech integration.
Treille, Avril; Vilain, Coriandre; Schwartz, Jean-Luc; Hueber, Thomas; Sato, Marc
2018-01-31
Recent neurophysiological studies demonstrate that audio-visual speech integration partly operates through temporal expectations and speech-specific predictions. From these results, one common view is that the binding of auditory and visual (lipread) speech cues relies on their joint probability and prior associative audio-visual experience. The present EEG study examined whether visual tongue movements integrate with relevant speech sounds, despite little associative audio-visual experience between the two modalities. A second objective was to determine possible similarities and differences in audio-visual speech integration between the unusual audio-visuo-lingual and the classical audio-visuo-labial modality. To this aim, participants were presented with auditory, visual, and audio-visual isolated syllables, with the visual presentation showing either a sagittal view of the tongue movements or a facial view of the lip movements of a speaker, with lingual and facial movements previously recorded by an ultrasound imaging system and a video camera. In line with previous EEG studies, our results revealed an amplitude decrease and a latency facilitation of P2 auditory evoked potentials in both the audio-visuo-lingual and the audio-visuo-labial condition compared to the sum of the unimodal conditions. These results argue against the view that auditory and visual speech cues solely integrate based on prior associative audio-visual perceptual experience. Rather, they suggest that dynamic and phonetic informational cues are sharable across sensory modalities, possibly through a cross-modal transfer of implicit articulatory motor knowledge. Copyright © 2017 Elsevier Ltd. All rights reserved.
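The comparison underlying these results, bimodal evoked potentials against the sum of the unimodal ones, can be sketched with toy waveforms; the amplitudes below are invented so that the bimodal response is sub-additive, as reported for the P2:

```python
import numpy as np

t = np.linspace(0.0, 0.4, 401)  # time axis in seconds

# Toy evoked potentials: Gaussian "P2" peaks near 200 ms. The amplitudes
# are invented so that the bimodal response is sub-additive (AV < A + V).
def peak(amplitude, center=0.2, width=0.03):
    return amplitude * np.exp(-((t - center) ** 2) / (2 * width ** 2))

auditory = peak(5.0)
visual = peak(2.0)
audiovisual = peak(5.5)  # smaller than the 7.0 predicted by summation

# Additive-model comparison: AV versus the sum of the unimodal responses.
summed = auditory + visual
suppression = summed.max() - audiovisual.max()  # positive => sub-additive
```

A positive `suppression` at the component of interest is the kind of amplitude decrease relative to A + V that such EEG studies take as evidence of audio-visual interaction.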
Neural correlates of audiovisual integration in music reading.
Nichols, Emily S; Grahn, Jessica A
2016-10-01
Integration of auditory and visual information is important to both language and music. In the linguistic domain, audiovisual integration alters event-related potentials (ERPs) at early stages of processing (the mismatch negativity, MMN) as well as later stages (P300; Andres et al., 2011). However, the role of experience in audiovisual integration is unclear, as reading experience is generally confounded with developmental stage. Here we tested whether audiovisual integration of music appears similar to reading, and how musical experience altered integration. We compared brain responses in musicians and non-musicians on an auditory pitch-interval oddball task that evoked the MMN and P300, while manipulating whether visual pitch-interval information was congruent or incongruent with the auditory information. We predicted that the MMN and P300 would be largest when both auditory and visual stimuli deviated, because audiovisual integration would increase the neural response when the deviants were congruent. The results indicated that scalp topography differed between musicians and non-musicians for both the MMN and P300 responses to deviants. Interestingly, musicians' musical training modulated integration of congruent deviants at both early and late stages of processing. We propose that early in the processing stream, visual information may guide interpretation of auditory information, leading to a larger MMN when auditory and visual information mismatch. At later attentional stages, integration of the auditory and visual stimuli leads to a larger P300 amplitude. Thus, experience with musical visual notation shapes the way the brain integrates abstract sound-symbol pairings, suggesting that musicians can indeed inform us about the role of experience in audiovisual integration. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
Using Visual Organizers to Enhance EFL Instruction
ERIC Educational Resources Information Center
Kang, Shumin
2004-01-01
Visual organizers are visual frameworks such as figures, diagrams, charts, etc. used to present structural knowledge spatially in a given area with the intention of enhancing comprehension and learning. Visual organizers are effective in terms of helping to elicit, explain, and communicate information because they can clarify complex concepts into…
ERIC Educational Resources Information Center
Washington County Public Schools, Washington, PA.
Symptoms displayed by primary age children with learning disabilities are listed; perceptual handicaps are explained. Activities are suggested for developing visual perception and perception involving motor activities. Also suggested are activities to develop body concept, visual discrimination and attentiveness, visual memory, and figure ground…
DOT National Transportation Integrated Search
2009-12-01
The goals of integration should be: supporting domain-oriented data analysis through the use of a knowledge-augmented visual analytics system. In this project, we focus on providing interactive data exploration for bridge management. ...
ERIC Educational Resources Information Center
Subrahmaniyan, Neeraja; Krishnaswamy, Swetha; Chowriappa, Ashirwad; Srimathveeravalli, Govindarajan; Bisantz, Ann; Shriber, Linda; Kesavadas, Thenkurussi
2012-01-01
Research has shown that children with learning disabilities exhibit considerable challenges with visual motor integration. While there are specialized Occupational Therapy interventions aimed at visual motor integration, computer games and virtual toys have now become increasingly popular, forming an integral part of children's learning and play.…
Oberman, Lindsay M; Ramachandran, Vilayanur S
2008-01-01
Autism is a complex disorder, characterized by social, cognitive, communicative, and motor symptoms. One suggestion, proposed in the current study, is that this spectrum of symptoms reflects an underlying impairment in multisensory integration (MSI) systems such as a mirror neuron-like system. The mirror neuron system, thought to play a critical role in skills such as imitation, empathy, and language, can be thought of as a multisensory system, converting sensory stimuli into motor representations. Consistent with this, we report preliminary evidence for deficits in a task thought to tap into MSI, the "bouba-kiki" task, in children with ASD. The bouba-kiki effect is produced when subjects are asked to pair nonsense shapes with nonsense "words". We found that neurotypical children chose the nonsense "word" whose phonemic structure corresponded with the visual shape of the stimuli 88% of the time. This is presumably because of mirror neuron-like multisensory systems that integrate the visual shape with the corresponding motor gestures used to pronounce the nonsense word. Surprisingly, individuals with ASD chose the corresponding name only 56% of the time. The poor performance by the ASD group on this task suggests a deficit in MSI, perhaps related to impaired MSI brain systems. Though this is a behavioral study, it provides a testable hypothesis for the communication impairments in children with ASD that implicates a specific neural system and fits well with current findings suggesting an impairment in the mirror systems of individuals with ASD.
Dynamic Shade and Irradiance Simulation of Aquatic ...
Penumbra is a landscape shade and irradiance simulation model that simulates how solar energy spatially and temporally interacts within dynamic ecosystems such as riparian zones, forests, and other terrain that casts topological shadows. Direct and indirect solar energy accumulates across landscapes and is the main energy driver for increasing aquatic and landscape temperatures at both local and holistic scales. Landscape disturbances such as land-use change, clear cutting, and fire can cause significant variations in the resulting irradiance reaching particular locations. Penumbra can simulate solar angles and irradiance at definable temporal grains as fine as one minute while simulating landscape shadowing for up to an entire year. Landscapes can be represented at sub-meter resolutions with appropriate spatial data inputs, such as field data or elevation and surface-object heights derived from light detection and ranging (LiDAR) data. This work describes Penumbra's framework and methodology, external model integration capability, and appropriate model application for a variety of watershed restoration project types. First, an overview of Penumbra's framework reveals what this model adds to the existing ecological modeling domain. Second, Penumbra's stand-alone and integration modes are explained and demonstrated. Stand-alone modeling results are showcased within the 3-D visualization tool VISTAS (VISualizing Terrestrial-Aquatic Systems), which fluently summariz
Wutz, Andreas; Weisz, Nathan; Braun, Christoph; Melcher, David
2014-01-22
Dynamic vision requires both stability of the current perceptual representation and sensitivity to the accumulation of sensory evidence over time. Here we study the electrophysiological signatures of this intricate balance between temporal segregation and integration in vision. Within a forward masking paradigm with short and long stimulus onset asynchronies (SOA), we manipulated the temporal overlap of the visual persistence of two successive transients. Human observers enumerated the items presented in the second target display as a measure of the informational capacity read-out from this partly temporally integrated visual percept. We observed higher β-power immediately before mask display onset in incorrect trials, in which enumeration failed due to stronger integration of mask and target visual information. This effect was timescale specific, distinguishing between segregation and integration of visual transients that were distant in time (long SOA). Conversely, for short SOA trials, mask onset evoked a stronger visual response when mask and targets were correctly segregated in time. Examination of the target-related response profile revealed the importance of an evoked α-phase reset for the segregation of those rapid visual transients. Investigating this precise mapping of the temporal relationships of visual signals onto electrophysiological responses highlights how the stream of visual information is carved up into discrete temporal windows that mediate between segregated and integrated percepts. Fragmenting the stream of visual information provides a means to stabilize perceptual events within one instant in time.
75 FR 66131 - Privacy Act of 1974; System of Records
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-27
... Integration and Visualization System,'' JUSTICE/FBI-021, which describes the Data Integration and... Liberties Officer. JUSTICE/FBI-021 SYSTEM NAME: Data Integration and Visualization System. * * * * * SYSTEM...
ERIC Educational Resources Information Center
Li-Tsang, Cecilia W. P.; Wong, Agnes S. K.; Chan, Jackson Y.; Lee, Amos Y. T.; Lam, Miko C. Y.; Wong, C. W.; Lu, Zhonglin
2012-01-01
A previous study found a visual deficit in contour integration in English readers with dyslexia (Simmers & Bex, 2001). Visual contour integration may play an even more significant role in Chinese handwriting particularly due to its logographic presentation (Lam, Au, Leung, & Li-Tsang, 2011). The current study examined the relationship…
Predictors of Visual-Motor Integration in Children with Intellectual Disability
ERIC Educational Resources Information Center
Memisevic, Haris; Sinanovic, Osman
2012-01-01
The aim of this study was to assess the influence of sex, age, level and etiology of intellectual disability on visual-motor integration in children with intellectual disability. The sample consisted of 90 children with intellectual disability between 7 and 15 years of age. Visual-motor integration was measured using the Acadia test of…
A Motor-Skills Programme to Enhance Visual Motor Integration of Selected Pre-School Learners
ERIC Educational Resources Information Center
Africa, Eileen K.; van Deventer, Karel J.
2017-01-01
Pre-schoolers are in a window period for motor skill development. Visual-motor integration (VMI) is the foundation for academic and sport skills. Therefore, it must develop before formal schooling. This study attempted to improve VMI skills. VMI skills were measured with the "Beery-Buktenica developmental test of visual-motor integration 6th…
Multiplicative mixing of object identity and image attributes in single inferior temporal neurons.
Ratan Murty, N Apurva; Arun, S P
2018-04-03
Object recognition is challenging because the same object can produce vastly different images, mixing signals related to its identity with signals due to its image attributes, such as size, position, rotation, etc. Previous studies have shown that both signals are present in high-level visual areas, but precisely how they are combined has remained unclear. One possibility is that neurons might encode identity and attribute signals multiplicatively so that each can be efficiently decoded without interference from the other. Here, we show that, in high-level visual cortex, responses of single neurons can be explained better as a product rather than a sum of tuning for object identity and tuning for image attributes. This subtle effect in single neurons produced substantially better population decoding of object identity and image attributes in the neural population as a whole. This property was absent both in low-level vision models and in deep neural networks. It was also unique to invariances: when tested with two-part objects, neural responses were explained better as a sum than as a product of part tuning. Taken together, our results indicate that signals requiring separate decoding, such as object identity and image attributes, are combined multiplicatively in IT neurons, whereas signals that require integration (such as parts in an object) are combined additively. Copyright © 2018 the Author(s). Published by PNAS.
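The diagnostic difference between the two mixing rules described above has a simple algebraic signature that can be sketched numerically: if a neuron's response is a product of identity tuning and attribute tuning, its identity-by-attribute response matrix is an outer product (rank 1), whereas a sum of tunings yields a rank-2 matrix. The following is a toy illustration of that signature only, not the authors' analysis; the tuning curves f and g are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_ids, n_attrs = 50, 8, 4

# Hypothetical tuning curves (random placeholders, not real data)
f = rng.random((n_neurons, n_ids))    # tuning for object identity
g = rng.random((n_neurons, n_attrs))  # tuning for an image attribute (e.g. size)

# Population responses under the two candidate mixing rules
mult = f[:, :, None] * g[:, None, :]  # product rule
add = f[:, :, None] + g[:, None, :]   # sum rule

def mean_second_singular_ratio(resp):
    """Average ratio of the 2nd to 1st singular value of each neuron's
    identity x attribute response matrix; ~0 means rank 1 (separable)."""
    s = np.linalg.svd(resp, compute_uv=False)  # batched over neurons
    return float(np.mean(s[:, 1] / s[:, 0]))

print(mean_second_singular_ratio(mult))  # near zero: product tuning is rank 1
print(mean_second_singular_ratio(add))   # clearly positive: sum tuning is rank 2
```

Because the product rule factorizes each neuron's response, identity can in principle be read out without interference from the attribute, which is the population-decoding advantage the study reports.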
Jozwik, Kamila M.; Kriegeskorte, Nikolaus; Mur, Marieke
2016-01-01
Object similarity, in brain representations and conscious perception, must reflect a combination of the visual appearance of the objects on the one hand and the categories the objects belong to on the other. Indeed, visual object features and category membership have each been shown to contribute to the object representation in human inferior temporal (IT) cortex, as well as to object-similarity judgments. However, the explanatory power of features and categories has not been directly compared. Here, we investigate whether the IT object representation and similarity judgments are best explained by a categorical or a feature-based model. We use rich models (>100 dimensions) generated by human observers for a set of 96 real-world object images. The categorical model consists of a hierarchically nested set of category labels (such as “human”, “mammal”, and “animal”). The feature-based model includes both object parts (such as “eye”, “tail”, and “handle”) and other descriptive features (such as “circular”, “green”, and “stubbly”). We used non-negative least squares to fit the models to the brain representations (estimated from functional magnetic resonance imaging data) and to similarity judgments. Model performance was estimated on held-out images not used in fitting. Both models explained significant variance in IT and the amounts explained were not significantly different. The combined model did not explain significant additional IT variance, suggesting that it is the shared model variance (features correlated with categories, categories correlated with features) that best explains IT. The similarity judgments were almost fully explained by the categorical model, which explained significantly more variance than the feature-based model. The combined model did not explain significant additional variance in the similarity judgments. Our findings suggest that IT uses features that help to distinguish categories as stepping stones toward a semantic representation. 
Similarity judgments contain additional categorical variance that is not explained by visual features, reflecting a higher-level more purely semantic representation. PMID:26493748
Components of Attention in Grapheme-Color Synesthesia: A Modeling Approach.
Ásgeirsson, Árni Gunnar; Nordfang, Maria; Sørensen, Thomas Alrik
2015-01-01
Grapheme-color synesthesia is a condition in which the perception of graphemes consistently and automatically evokes an experience of non-physical color. Many have studied how synesthesia affects the processing of achromatic graphemes, but less is known about the synesthetic processing of physically colored graphemes. Here, we investigated how the visual processing of colored letters is affected by the congruence or incongruence of synesthetic grapheme-color associations. We briefly presented graphemes (10-150 ms) to 9 grapheme-color synesthetes and to 9 control observers. Their task was to report as many letters (targets) as possible, while ignoring digits (distractors). Graphemes were colored either congruently or incongruently with respect to the synesthetes' reported grapheme-color associations. A mathematical model, based on Bundesen's (1990) Theory of Visual Attention (TVA), was fitted to each observer's data, allowing us to estimate discrete components of visual attention. The models suggested that the synesthetes processed congruent letters faster than incongruent ones, and that they were able to retain more congruent letters in visual short-term memory, while the control group's model parameters were not significantly affected by congruence. The increase in processing speed when synesthetes process congruent letters suggests that synesthesia affects the processing of letters at a perceptual level. To account for the benefit in processing speed, we propose that synesthetic associations become integrated into the categories of graphemes, and that letter colors are treated as evidence for making certain perceptual categorizations in the visual system. We also propose that the enhanced visual short-term memory capacity for congruently colored graphemes can be explained by the synesthetes' expertise regarding their specific grapheme-color associations.
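The two TVA components at issue above, processing speed and visual short-term memory (VSTM) capacity, can be caricatured with a small simulation. This is a hedged sketch of how the parameters shape report performance, not Bundesen's full model or the authors' fitting procedure; all parameter values below are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_whole_report(v, K, t0, exposures, n_items=6, n_trials=2000):
    """Toy TVA-style report task: each of n_items races to be encoded at
    exponential rate v (per second) after a perceptual threshold t0, and
    at most K items fit in visual short-term memory (VSTM)."""
    mean_reported = []
    for t in exposures:
        effective = max(t - t0, 0.0)  # usable exposure duration
        # finishing times of the exponential race, one per item per trial
        finish = rng.exponential(1.0 / v, size=(n_trials, n_items))
        # only the first K finishers can occupy VSTM; count those done in time
        reported = np.sort(finish, axis=1)[:, :K] <= effective
        mean_reported.append(float(reported.sum(axis=1).mean()))
    return mean_reported

exposures = [0.01, 0.05, 0.15]  # seconds; invented values
# Invented parameters: congruent letters get a higher rate v and a larger
# capacity K, mirroring the direction of the fitted effects reported above.
congruent = simulate_whole_report(v=40.0, K=4, t0=0.01, exposures=exposures)
incongruent = simulate_whole_report(v=25.0, K=3, t0=0.01, exposures=exposures)
print(congruent, incongruent)
```

In this caricature, a higher v raises accuracy at brief exposures while a larger K raises the ceiling at long exposures, which is why the two components are separately estimable from exposure-duration data.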
Alcohol Use in German Adolescents with Visual Impairments and Sighted Adolescents
ERIC Educational Resources Information Center
Pinquart, Martin; Pfeiffer, Jens P.
2010-01-01
Alcohol use was studied in 158 adolescents with visual impairments and 537 sighted adolescents in Germany. The students with visual impairments reported lower levels of alcohol use and drunkenness, and between-group differences increased across adolescence. The lower alcohol use by students with visual impairments was explained, in part, by the…
Jerath, Ravinder; Crawford, Molly W.; Barnes, Vernon A.
2015-01-01
The Global Workspace Theory and Information Integration Theory are two of the most widely accepted consciousness models; however, these models do not address many aspects of conscious experience. We compare these models to our previously proposed consciousness model, in which the thalamus fills in processed sensory information from corticothalamic feedback loops within a proposed 3D default space, resulting in the recreation of the internal and external worlds within the mind. This 3D default space is composed of all cells of the body, which communicate via gap junctions and electrical potentials to create this unified space. We use 3D illustrations to explain how both visual and non-visual sensory information may be filled in within this dynamic space, creating a unified, seamless conscious experience. This neural sensory memory space is likely generated by baseline neural oscillatory activity from the default mode network, other salient networks, the brainstem, and the reticular activating system. PMID:26379573
Image Feature Types and Their Predictions of Aesthetic Preference and Naturalness
Ibarra, Frank F.; Kardan, Omid; Hunter, MaryCarol R.; Kotabe, Hiroki P.; Meyer, Francisco A. C.; Berman, Marc G.
2017-01-01
Previous research has investigated ways to quantify the visual information of a scene in terms of a visual processing hierarchy, i.e., making sense of the visual environment by segmentation and integration of elementary sensory input. Guided by this research, studies have developed categories for low-level visual features (e.g., edges, colors), high-level visual features (scene-level entities that convey semantic information, such as objects), and models of how those features predict aesthetic preference and naturalness. For example, in Kardan et al. (2015a), 52 participants provided aesthetic preference and naturalness ratings, which are used in the current study, for 307 images of mixed natural and urban content. Kardan et al. (2015a) then developed a model using low-level features to predict aesthetic preference and naturalness and could do so with high accuracy. What has yet to be explored is the ability of higher-level visual features (e.g., horizon line position relative to viewer, geometry of building distribution relative to visual access) to predict aesthetic preference and naturalness of scenes, and whether higher-level features mediate some of the association between the low-level features and aesthetic preference or naturalness. In this study we investigated these relationships and found that low- and high-level features explain 68.4% of the variance in aesthetic preference ratings and 88.7% of the variance in naturalness ratings. Additionally, several high-level features mediated the relationship between the low-level visual features and aesthetic preference. In a multiple mediation analysis, the high-level feature mediators accounted for over 50% of the variance in predicting aesthetic preference. These results show that high-level visual features play a prominent role in predicting aesthetic preference, but do not completely eliminate the predictive power of the low-level visual features.
These strong predictors provide powerful insights for future research relating to landscape and urban design with the aim of maximizing subjective well-being, which could lead to improved health outcomes on a larger scale. PMID:28503158
fMRI Analysis-by-Synthesis Reveals a Dorsal Hierarchy That Extracts Surface Slant.
Ban, Hiroshi; Welchman, Andrew E
2015-07-08
The brain's skill in estimating the 3-D orientation of viewed surfaces supports a range of behaviors, from placing an object on a nearby table to planning the best route when hill walking. This ability relies on integrating depth signals across extensive regions of space that exceed the receptive fields of early sensory neurons. Although hierarchical selection and pooling are central to our understanding of the ventral visual pathway, the successive operations in the dorsal stream are poorly understood. Here we use computational modeling of human fMRI signals to probe the computations that extract 3-D surface orientation from binocular disparity. To understand how representations evolve across the hierarchy, we developed an inference approach using a series of generative models to explain the empirical fMRI data in different cortical areas. Specifically, we simulated the responses of candidate visual processing algorithms and tested how well they explained fMRI responses. Thereby we demonstrate a hierarchical refinement of visual representations, moving from the representation of edges and figure-ground segmentation (V1, V2) to spatially extensive disparity gradients in V3A. We show that responses in V3A are little affected by low-level image covariates and have a partial tolerance to overall depth position. Finally, we show that responses in V3A parallel perceptual judgments of slant. This reveals a relatively short computational hierarchy that captures key information about the 3-D structure of nearby surfaces and, more generally, demonstrates an analysis approach that may be of merit in a diverse range of brain imaging domains. Copyright © 2015 Ban and Welchman.
Douglas, Danielle; Newsome, Rachel N; Man, Louisa LY
2018-01-01
A significant body of research in cognitive neuroscience is aimed at understanding how object concepts are represented in the human brain. However, it remains unknown whether and where the visual and abstract conceptual features that define an object concept are integrated. We addressed this issue by comparing the neural pattern similarities among object-evoked fMRI responses with behavior-based models that independently captured the visual and conceptual similarities among these stimuli. Our results revealed evidence for distinctive coding of visual features in lateral occipital cortex, and conceptual features in the temporal pole and parahippocampal cortex. By contrast, we found evidence for integrative coding of visual and conceptual object features in perirhinal cortex. The neuroanatomical specificity of this effect was highlighted by results from a searchlight analysis. Taken together, our findings suggest that perirhinal cortex uniquely supports the representation of fully specified object concepts through the integration of their visual and conceptual features. PMID:29393853
Helmchen, Christoph; Kirchhoff, Jan-Birger; Göttlich, Martin; Sprenger, Andreas
2017-01-01
The cerebellum integrates proprioceptive, vestibular and visual signals for postural control. Cerebellar patients with downbeat nystagmus (DBN) complain of unsteadiness of stance and gait as well as blurred vision and oscillopsia. The aim of this study was to elucidate the differential role of visual input, gaze eccentricity, vestibular and proprioceptive input on postural stability in a large cohort of cerebellar patients with DBN, in comparison to healthy age-matched control subjects. Oculomotor (nystagmus, smooth pursuit eye movements) and postural (postural sway speed) parameters were recorded and related to each other and to volumetric changes of the cerebellum (voxel-based morphometry, SPM). Twenty-seven patients showed larger postural instability in all experimental conditions. Postural sway increased with nystagmus in the eyes-closed condition but not with the eyes open. Romberg's ratio remained stable and was not different from that of healthy controls. Postural sway did not change with gaze position or graviceptive input. It increased with attenuated proprioceptive input and on tandem stance in both groups, but Romberg's ratio again did not differ. Cerebellar atrophy (vermal lobules VI, VIII) correlated with the severity of impaired smooth pursuit eye movements of DBN patients. Postural ataxia of cerebellar patients with DBN cannot be explained by impaired visual feedback. Despite oscillopsia, visual feedback control of posture seems to be preserved, as postural sway was largest under visual deprivation. The increase in postural ataxia is related neither to modulations of single components characterizing nystagmus nor to deprivation of single sensory (visual, proprioceptive) inputs that usually stabilize stance. Re-weighting of multisensory signals and/or inappropriate cerebellar motor commands might account for this postural ataxia.
DOT National Transportation Integrated Search
2012-03-01
This report introduces the design and implementation of a Web-based bridge information visual analytics system. This project integrates Internet, multiple databases, remote sensing, and other visualization technologies. The result combines a GIS ...
Visual and haptic integration in the estimation of softness of deformable objects
Cellini, Cristiano; Kaim, Lukas; Drewing, Knut
2013-01-01
Softness perception intrinsically relies on haptic information. However, through everyday experiences we learn correspondences between felt softness and the visual effects of exploratory movements that are executed to feel softness. Here, we studied how visual and haptic information is integrated to assess the softness of deformable objects. Participants discriminated between the softness of two softer or two harder objects using only-visual, only-haptic or both visual and haptic information. We assessed the reliabilities of the softness judgments using the method of constant stimuli. In visuo-haptic trials, discrepancies between the two senses' information allowed us to measure the contribution of the individual senses to the judgments. Visual information (finger movement and object deformation) was simulated using computer graphics; input in visual trials was taken from previous visuo-haptic trials. Participants were able to infer softness from vision alone, and vision considerably contributed to bisensory judgments (∼35%). The visual contribution was higher than predicted from models of optimal integration (senses are weighted according to their reliabilities). Bisensory judgments were less reliable than predicted from optimal integration. We conclude that the visuo-haptic integration of softness information is biased toward vision, rather than being optimal, and might even be guided by a fixed weighting scheme. PMID:25165510
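The "optimal integration" benchmark referred to above is the standard maximum-likelihood cue-combination model, in which each sense is weighted by its relative reliability (the inverse of its discrimination variance). A sketch of that standard formulation, for reference (the symbols are generic, not taken from the paper):

```latex
% Reliability of each cue = inverse variance of its unimodal estimate
r_V = \frac{1}{\sigma_V^2}, \qquad r_H = \frac{1}{\sigma_H^2}
% Maximum-likelihood bimodal estimate: reliability-weighted average
\hat{s}_{VH} = w_V \hat{s}_V + w_H \hat{s}_H, \qquad
w_V = \frac{r_V}{r_V + r_H}, \qquad w_H = \frac{r_H}{r_V + r_H}
% Predicted bimodal reliability: at least as good as the better single cue
r_{VH} = r_V + r_H
\quad\Longrightarrow\quad
\sigma_{VH}^2 = \frac{\sigma_V^2\,\sigma_H^2}{\sigma_V^2 + \sigma_H^2}
```

The study's two deviations from this benchmark, a visual weight (~35%) above the reliability-based prediction and bimodal judgments less reliable than the predicted r_VH, are what motivate the fixed-weighting interpretation.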
An Evaluation of Wellness Assessment Visualizations for Older Adults
Reeder, Blaine; Yoo, Daisy; Aziz, Rafae; Thompson, Hilaire J.; Demiris, George
2015-01-01
Abstract Background Smart home technologies provide a valuable resource to unobtrusively monitor health and wellness within an older adult population. However, the breadth and density of the available data, along with aging-associated decreases in working memory, prospective memory, spatial cognition, and processing speed, can make such data challenging for older adults to comprehend. We developed visualizations of smart home health data integrated into a framework of wellness. We evaluated the visualizations through focus groups with older adults and identified recommendations to guide the future development of visualizations. Materials and Methods We conducted four focus groups with older adult participants (n=31) at an independent retirement community. Participants were presented with three different visualizations from a wellness pilot study. A qualitative descriptive analysis was conducted to identify thematic content. Results We identified three themes related to the processing and application of visualizations: (1) values of visualizations for wellness assessment, (2) cognitive processing approaches to visualizations, and (3) integration of health data for visualization. In addition, the focus groups highlighted key design considerations for visualizations that are important for supporting decision-making and evaluation assessments within integrated health displays. Conclusions Participants found inherent value in having visualizations available to proactively engage with their healthcare provider. Integrating the visualizations into a wellness framework helped reduce the complexity of raw smart home data. There has been limited work on health visualizations from a consumer perspective, in particular for an older adult population. Creating appropriately designed visualizations is valuable for promoting consumer involvement in the shared decision-making process of care. PMID:25401414
Wilbiks, Jonathan M P; Dyson, Benjamin J
2016-01-01
Over 5 experiments, we challenge the idea that the capacity of audio-visual integration need be fixed at 1 item. We observe that the conditions under which audio-visual integration is most likely to exceed 1 occur when stimulus change operates at a slow rather than fast rate of presentation and when the task is of intermediate difficulty, such as when low levels of proactive interference (3 rather than 8 interfering visual presentations) are combined with the temporal unpredictability of the critical frame (Experiment 2), or high levels of proactive interference are combined with the temporal predictability of the critical frame (Experiment 4). Neural data suggest that capacity might also be determined by the quality of perceptual information entering working memory. Experiment 5 supported the proposition that audio-visual integration was at play during the previous experiments. The data are consistent with the dynamic nature usually associated with cross-modal binding, and while audio-visual integration capacity likely cannot exceed uni-modal capacity estimates, performance may be better than being able to associate only one visual stimulus with one auditory stimulus.
Wilbiks, Jonathan M. P.; Dyson, Benjamin J.
2016-01-01
Over 5 experiments, we challenge the idea that the capacity of audio-visual integration need be fixed at 1 item. We observe that the conditions under which audio-visual integration is most likely to exceed 1 occur when stimulus change operates at a slow rather than fast rate of presentation and when the task is of intermediate difficulty such as when low levels of proactive interference (3 rather than 8 interfering visual presentations) are combined with the temporal unpredictability of the critical frame (Experiment 2), or, high levels of proactive interference are combined with the temporal predictability of the critical frame (Experiment 4). Neural data suggest that capacity might also be determined by the quality of perceptual information entering working memory. Experiment 5 supported the proposition that audio-visual integration was at play during the previous experiments. The data are consistent with the dynamic nature usually associated with cross-modal binding, and while audio-visual integration capacity likely cannot exceed uni-modal capacity estimates, performance may be better than being able to associate only one visual stimulus with one auditory stimulus. PMID:27977790
Visual Literacy and Message Design
ERIC Educational Resources Information Center
Pettersson, Rune
2009-01-01
Many researchers from different disciplines have explained their views and interpretations and written about visual literacy from their various perspectives. Visual literacy may be applied in almost all areas such as advertising, anatomy, art, biology, business presentations, communication, education, engineering, etc. (Pettersson, 2002a). Despite…
Geldof, Christiaan J A; van Hus, Janeline W P; Jeukens-Visser, Martine; Nollet, Frans; Kok, Joke H; Oosterlaan, Jaap; van Wassenaer-Leemhuis, Aleid G
2016-01-01
To extend understanding of impaired motor functioning of very preterm (VP)/very low birth weight (VLBW) children by investigating its relationship with visual attention, visual and visual-motor functioning. Motor functioning (Movement Assessment Battery for Children, MABC-2; Manual Dexterity, Aiming & Catching, and Balance component), as well as visual attention (attention network and visual search tests), vision (oculomotor, visual sensory and perceptive functioning), visual-motor integration (Beery Visual Motor Integration), and neurological status (Touwen examination) were comprehensively assessed in a sample of 106 5.5-year-old VP/VLBW children. Stepwise linear regression analyses were conducted to investigate multivariate associations between deficits in visual attention, oculomotor, visual sensory, perceptive and visual-motor integration functioning, abnormal neurological status, neonatal risk factors, and MABC-2 scores. Abnormal MABC-2 Total or component scores occurred in 23-36% of VP/VLBW children. Visual and visual-motor functioning accounted for 9-11% of variance in MABC-2 Total, Manual Dexterity and Balance scores. Visual perceptive deficits only were associated with Aiming & Catching. Abnormal neurological status accounted for an additional 19-30% of variance in MABC-2 Total, Manual Dexterity and Balance scores, and 5% of variance in Aiming & Catching, and neonatal risk factors for 3-6% of variance in MABC-2 Total, Manual Dexterity and Balance scores. Motor functioning is weakly associated with visual and visual-motor integration deficits and moderately associated with abnormal neurological status, indicating that motor performance reflects long term vulnerability following very preterm birth, and that visual deficits are of minor importance in understanding motor functioning of VP/VLBW children. Copyright © 2016 Elsevier Ltd. All rights reserved.
A model of face selection in viewing video stories.
Suda, Yuki; Kitazawa, Shigeru
2015-01-19
When typical adults watch TV programs, they show surprisingly stereotyped gaze behaviours, as indicated by the almost simultaneous shifts of their gazes from one face to another. However, a standard saliency model based on low-level physical features alone failed to explain such typical gaze behaviours. To find rules that explain the typical gaze behaviours, we examined temporo-spatial gaze patterns in adults while they viewed video clips with human characters that were played with or without sound, and in the forward or reverse direction. We here show the following: 1) the "peak" face scanpath, which followed the face that attracted the largest number of views but ignored other objects in the scene, still retained the key features of actual scanpaths, 2) gaze behaviours remained unchanged whether the sound was provided or not, 3) the gaze behaviours were sensitive to time reversal, and 4) nearly 60% of the variance of gaze behaviours was explained by the face saliency that was defined as a function of its size, novelty, head movements, and mouth movements. These results suggest that humans share a face-oriented network that integrates several visual features of multiple faces, and directs our eyes to the most salient face at each moment.
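The face-selection rule described above can be sketched as a weighted sum over the four features the abstract names (size, novelty, head movements, mouth movements), with the gaze directed to the highest-scoring face. The weights and feature values below are illustrative assumptions, not the authors' fitted model:

```python
# Hypothetical weights; the abstract only names the four features,
# not their relative contributions.
WEIGHTS = {"size": 0.4, "novelty": 0.3, "head_motion": 0.2, "mouth_motion": 0.1}

def face_saliency(face):
    """Score one face as a weighted sum of its feature values."""
    return sum(WEIGHTS[k] * face[k] for k in WEIGHTS)

def peak_face(faces):
    """Return the index of the most salient face in the current frame."""
    return max(range(len(faces)), key=lambda i: face_saliency(faces[i]))

# Two faces in one frame; the second is larger, newer, and moving more.
frame = [
    {"size": 0.2, "novelty": 0.1, "head_motion": 0.0, "mouth_motion": 0.0},
    {"size": 0.5, "novelty": 0.9, "head_motion": 0.3, "mouth_motion": 0.8},
]
print(peak_face(frame))  # the second face wins on every feature
```

Applying this rule frame by frame would produce a "peak" face scanpath of the kind the study compares against observers' actual gaze.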
How Documentalists Update SIMBAD
NASA Astrophysics Data System (ADS)
Buga, M.; Bot, C.; Brouty, M.; Bruneau, C.; Brunet, C.; Cambresy, L.; Eisele, A.; Genova, F.; Lesteven, S.; Loup, C.; Neuville, M.; Oberto, A.; Ochsenbein, F.; Perret, E.; Siebert, A.; Son, E.; Vannier, P.; Vollmer, B.; Vonflie, P.; Wenger, M.; Woelfel, F.
2015-04-01
The Strasbourg astronomical Data Center (CDS) was created in 1972 and has had a major role in astronomy for more than forty years. CDS develops a service called SIMBAD that provides basic data, cross-identifications, bibliography, and measurements for astronomical objects outside the solar system. It brings to the scientific community an added value to content which is updated daily by a team of documentalists working together in close collaboration with astronomers and IT specialists. We explain how the CDS staff updates SIMBAD with object citations in the main astronomical journals, as well as with astronomical data and measurements. We also explain how the identification is made between the objects found in the literature and those already existing in SIMBAD. We show the steps followed by the documentalist team to update the database using different tools developed at CDS, like the sky visualizer Aladin, and the large catalogues and survey database VizieR. As a direct result of this teamwork, SIMBAD integrates almost 10,000 bibliographic references per year. The service receives more than 400,000 queries per day.
Tcheang, Lili; Bülthoff, Heinrich H.; Burgess, Neil
2011-01-01
Our ability to return to the start of a route recently performed in darkness is thought to reflect path integration of motion-related information. Here we provide evidence that motion-related interoceptive representations (proprioceptive, vestibular, and motor efference copy) combine with visual representations to form a single multimodal representation guiding navigation. We used immersive virtual reality to decouple visual input from motion-related interoception by manipulating the rotation or translation gain of the visual projection. First, participants walked an outbound path with both visual and interoceptive input, and returned to the start in darkness, demonstrating the influences of both visual and interoceptive information in a virtual reality environment. Next, participants adapted to visual rotation gains in the virtual environment, and then performed the path integration task entirely in darkness. Our findings were accurately predicted by a quantitative model in which visual and interoceptive inputs combine into a single multimodal representation guiding navigation, and are incompatible with a model of separate visual and interoceptive influences on action (in which path integration in darkness must rely solely on interoceptive representations). Overall, our findings suggest that a combined multimodal representation guides large-scale navigation, consistent with a role for visual imagery or a cognitive map. PMID:21199934
Masking reduces orientation selectivity in rat visual cortex
Alwis, Dasuni S.; Richards, Katrina L.
2016-01-01
In visual masking the perception of a target stimulus is impaired by a preceding (forward) or succeeding (backward) mask stimulus. The illusion is of interest because it allows uncoupling of the physical stimulus, its neuronal representation, and its perception. To understand the neuronal correlates of masking, we examined how masks affected the neuronal responses to oriented target stimuli in the primary visual cortex (V1) of anesthetized rats (n = 37). Target stimuli were circular gratings with 12 orientations; mask stimuli were plaids created as a binarized sum of all possible target orientations. Spatially, masks were presented either overlapping or surrounding the target. Temporally, targets and masks were presented for 33 ms, but the stimulus onset asynchrony (SOA) of their relative appearance was varied. For the first time, we examine how spatially overlapping and center-surround masking affect orientation discriminability (rather than visibility) in V1. Regardless of the spatial or temporal arrangement of stimuli, the greatest reductions in firing rate and orientation selectivity occurred for the shortest SOAs. Interestingly, analyses conducted separately for transient and sustained target response components showed that changes in orientation selectivity do not always coincide with changes in firing rate. Given the near-instantaneous reductions observed in orientation selectivity even when target and mask do not spatially overlap, we suggest that monotonic visual masking is explained by a combination of neural integration and lateral inhibition. PMID:27535373
Visual, Motor, and Visual-Motor Integration Difficulties in Students with Autism Spectrum Disorders
ERIC Educational Resources Information Center
Oliver, Kimberly
2013-01-01
Autism spectrum disorders (ASDs) affect 1 in every 88 U.S. children. ASDs have been described as neurological and developmental disorders impacting visual, motor, and visual-motor integration (VMI) abilities that affect academic achievement (CDC, 2010). Forty-five participants (22 ASD and 23 Typically Developing [TD]) 8 to 14 years old completed…
A model of the temporal dynamics of multisensory enhancement
Rowland, Benjamin A.; Stein, Barry E.
2014-01-01
The senses transduce different forms of environmental energy, and the brain synthesizes information across them to enhance responses to salient biological events. We hypothesize that the potency of multisensory integration is attributable to the convergence of independent and temporally aligned signals derived from cross-modal stimulus configurations onto multisensory neurons. The temporal profile of multisensory integration in neurons of the deep superior colliculus (SC) is consistent with this hypothesis. The responses of these neurons to visual, auditory, and combinations of visual–auditory stimuli reveal that multisensory integration takes place in real-time; that is, the input signals are integrated as soon as they arrive at the target neuron. Interactions between cross-modal signals may appear to reflect linear or nonlinear computations on a moment-by-moment basis, the aggregate of which determines the net product of multisensory integration. Modeling observations presented here suggest that the early nonlinear components of the temporal profile of multisensory integration can be explained with a simple spiking neuron model, and do not require more sophisticated assumptions about the underlying biology. A transition from nonlinear “super-additive” computation to linear, additive computation can be accomplished via scaled inhibition. The findings provide a set of design constraints for artificial implementations seeking to exploit the basic principles and potency of biological multisensory integration in contexts of sensory substitution or augmentation. PMID:24374382
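The "simple spiking neuron model" argument above can be illustrated with a minimal leaky integrate-and-fire sketch: two unisensory inputs that are individually subthreshold can, when summed, drive the neuron past threshold, yielding the super-additive multisensory enhancement the abstract describes. All parameters here are assumptions for illustration, not the authors' fitted values:

```python
def lif_spike_count(input_current, dt=0.001, t_max=0.5,
                    tau=0.02, v_thresh=1.0, v_reset=0.0):
    """Count spikes of a leaky integrate-and-fire unit driven by a
    constant input current (Euler integration)."""
    v, spikes = 0.0, 0
    for _ in range(int(t_max / dt)):
        v += dt * (input_current - v) / tau  # leaky integration toward input
        if v >= v_thresh:                    # threshold crossing = spike
            spikes += 1
            v = v_reset
    return spikes

visual, auditory = 0.9, 0.9  # each drives v toward 0.9 < threshold alone
uni_v = lif_spike_count(visual)
uni_a = lif_spike_count(auditory)
multi = lif_spike_count(visual + auditory)  # combined input crosses threshold
print(uni_v, uni_a, multi)
```

The same mechanism predicts the transition the abstract mentions: scaling up inhibition (i.e., reducing the effective combined drive) pushes responses back toward the additive regime.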
Visual perceptual abilities of Chinese-speaking and English-speaking children.
Lai, Mun Yee; Leung, Frederick Koon Shing
2012-04-01
This paper reports an investigation of Chinese-speaking and English-speaking children's general visual perceptual abilities. The Developmental Test of Visual Perception was administered to 41 native Chinese-speaking children of mean age 5 yr. 4 mo. in Hong Kong and 35 English-speaking children of mean age 5 yr. 2 mo. in Melbourne. Of interest were the two interrelated components of visual perceptual abilities, namely, motor-reduced visual perceptual and visual-motor integration perceptual abilities, which require either verbal or motoric responses in completing visual tasks. Chinese-speaking children significantly outperformed the English-speaking children on general visual perceptual abilities. When comparing the results of each of the two different components, the Chinese-speaking students' performance on visual-motor integration was far better than that of their counterparts (ES = 2.70), while the two groups of students performed similarly on motor-reduced visual perceptual abilities. Cultural factors such as written language format may be contributing to the enhanced performance of Chinese-speaking children's visual-motor integration abilities, but there may be validity questions in the Chinese version.
Late maturation of visual spatial integration in humans
Kovács, Ilona; Kozma, Petra; Fehér, Ákos; Benedek, György
1999-01-01
Visual development is thought to be completed at an early age. We suggest that the maturation of the visual brain is not homogeneous: functions with greater need for early availability, such as visuomotor control, mature earlier, and the development of other visual functions may extend well into childhood. We found significant improvement in children between 5 and 14 years in visual spatial integration by using a contour-detection task. The data show that long-range spatial interactions—subserving the integration of orientational information across the visual field—span a shorter spatial range in children than in adults. Performance in the task improves in a cue-specific manner with practice, which indicates the participation of fairly low-level perceptual mechanisms. We interpret our findings in terms of a protracted development of ventral visual-stream function in humans. PMID:10518600
Rosemann, Stephanie; Thiel, Christiane M
2018-07-15
Hearing loss is associated with difficulties in understanding speech, especially under adverse listening conditions. In these situations, seeing the speaker improves speech intelligibility in hearing-impaired participants. On the neuronal level, previous research has shown cross-modal plastic reorganization in the auditory cortex following hearing loss leading to altered processing of auditory, visual and audio-visual information. However, how reduced auditory input affects audio-visual speech perception in hearing-impaired subjects is largely unknown. We here investigated the impact of mild to moderate age-related hearing loss on processing audio-visual speech using functional magnetic resonance imaging. Normal-hearing and hearing-impaired participants performed two audio-visual speech integration tasks: a sentence detection task inside the scanner and the McGurk illusion outside the scanner. Both tasks consisted of congruent and incongruent audio-visual conditions, as well as auditory-only and visual-only conditions. We found a significantly stronger McGurk illusion in the hearing-impaired participants, which indicates stronger audio-visual integration. Neurally, hearing loss was associated with an increased recruitment of frontal brain areas when processing incongruent audio-visual, auditory and also visual speech stimuli, which may reflect the increased effort to perform the task. Hearing loss modulated both the audio-visual integration strength measured with the McGurk illusion and brain activation in frontal areas in the sentence task, showing stronger integration and higher brain activation with increasing hearing loss. Incongruent compared to congruent audio-visual speech revealed an opposite brain activation pattern in left ventral postcentral gyrus in both groups, with higher activation in hearing-impaired participants in the incongruent condition.
Our results indicate that already mild to moderate hearing loss impacts audio-visual speech processing accompanied by changes in brain activation particularly involving frontal areas. These changes are modulated by the extent of hearing loss. Copyright © 2018 Elsevier Inc. All rights reserved.
Change in vision, visual disability, and health after cataract surgery.
Helbostad, Jorunn L; Oedegaard, Maria; Lamb, Sarah E; Delbaere, Kim; Lord, Stephen R; Sletvold, Olav
2013-04-01
Cataract surgery improves vision and visual functioning; the effect on general health is not established. We investigated if vision, visual functioning, and general health follow the same trajectory of change the year after cataract surgery and if changes in vision explain changes in visual disability and general health. One-hundred forty-eight persons, with a mean (SD) age of 78.9 (5.0) years (70% bilateral surgery), were assessed before and 6 weeks and 12 months after surgery. Visual disability and general health were assessed by the Catquest-9SF and the Short Form-36. Corrected binocular visual acuity, visual field, stereo acuity, and contrast vision improved (P < 0.001) from before to 6 weeks after surgery, with further improvements of visual acuity evident up to 12 months (P = 0.034). Cataract surgery had an effect on visual disability 1 year later (P < 0.001). Physical and mental health improved after surgery (P < 0.01) but had returned to presurgery level after 12 months. Vision changes did not explain visual disability and general health 6 weeks after surgery. Vision improved and visual disability decreased in the year after surgery, whereas changes in general health and visual functioning were short-term effects. Lack of associations between changes in vision and self-reported disability and general health suggests that the degree of vision changes and self-reported health do not have a linear relationship.
Influences of selective adaptation on perception of audiovisual speech
Dias, James W.; Cook, Theresa C.; Rosenblum, Lawrence D.
2016-01-01
Research suggests that selective adaptation in speech is a low-level process dependent on sensory-specific information shared between the adaptor and test-stimuli. However, previous research has only examined how adaptors shift perception of unimodal test stimuli, either auditory or visual. In the current series of experiments, we investigated whether adaptation to cross-sensory phonetic information can influence perception of integrated audio-visual phonetic information. We examined how selective adaptation to audio and visual adaptors shift perception of speech along an audiovisual test continuum. This test-continuum consisted of nine audio-/ba/-visual-/va/ stimuli, ranging in visual clarity of the mouth. When the mouth was clearly visible, perceivers “heard” the audio-visual stimulus as an integrated “va” percept 93.7% of the time (e.g., McGurk & MacDonald, 1976). As visibility of the mouth became less clear across the nine-item continuum, the audio-visual “va” percept weakened, resulting in a continuum ranging in audio-visual percepts from /va/ to /ba/. Perception of the test-stimuli was tested before and after adaptation. Changes in audiovisual speech perception were observed following adaptation to visual-/va/ and audiovisual-/va/, but not following adaptation to auditory-/va/, auditory-/ba/, or visual-/ba/. Adaptation modulates perception of integrated audio-visual speech by modulating the processing of sensory-specific information. The results suggest that auditory and visual speech information are not completely integrated at the level of selective adaptation. PMID:27041781
Erlikhman, Gennady; Gurariy, Gennadiy; Mruczek, Ryan E.B.; Caplovitz, Gideon P.
2016-01-01
Oftentimes, objects are only partially and transiently visible as parts of them become occluded during observer or object motion. The visual system can integrate such object fragments across space and time into perceptual wholes or spatiotemporal objects. This integrative and dynamic process may involve both ventral and dorsal visual processing pathways, along which shape and spatial representations are thought to arise. We measured fMRI BOLD response to spatiotemporal objects and used multi-voxel pattern analysis (MVPA) to decode shape information across 20 topographic regions of visual cortex. Object identity could be decoded throughout visual cortex, including intermediate (V3A, V3B, hV4, LO1-2) and dorsal (TO1-2 and IPS0-1) visual areas. Shape-specific information, therefore, may not be limited to early and ventral visual areas, particularly when it is dynamic and must be integrated. Contrary to the classic view that the representation of objects is the purview of the ventral stream, intermediate and dorsal areas may play a distinct and critical role in the construction of object representations across space and time. PMID:27033688
ERIC Educational Resources Information Center
Poon, K. W.; Li-Tsang, C. W .P.; Weiss, T. P. L.; Rosenblum, S.
2010-01-01
This study aimed to investigate the effect of a computerized visual perception and visual-motor integration training program to enhance Chinese handwriting performance among children with learning difficulties, particularly those with handwriting problems. Participants were 26 primary-one children who were assessed by educational psychologists and…
ERIC Educational Resources Information Center
Geldof, C. J. A.; van Wassenaer, A. G.; de Kieviet, J. F.; Kok, J. H.; Oosterlaan, J.
2012-01-01
A range of neurobehavioral impairments, including impaired visual perception and visual-motor integration, are found in very preterm born children, but reported findings show great variability. We aimed to aggregate the existing literature using meta-analysis, in order to provide robust estimates of the effect of very preterm birth on visual…
Sugihara, Tadashi; Diltz, Mark D; Averbeck, Bruno B; Romanski, Lizabeth M
2006-10-25
The integration of auditory and visual stimuli is crucial for recognizing objects, communicating effectively, and navigating through our complex world. Although the frontal lobes are involved in memory, communication, and language, there has been no evidence that the integration of communication information occurs at the single-cell level in the frontal lobes. Here, we show that neurons in the macaque ventrolateral prefrontal cortex (VLPFC) integrate audiovisual communication stimuli. The multisensory interactions included both enhancement and suppression of a predominantly auditory or a predominantly visual response, although multisensory suppression was the more common mode of response. The multisensory neurons were distributed across the VLPFC and within previously identified unimodal auditory and visual regions (O'Scalaidhe et al., 1997; Romanski and Goldman-Rakic, 2002). Thus, our study demonstrates, for the first time, that single prefrontal neurons integrate communication information from the auditory and visual domains, suggesting that these neurons are an important node in the cortical network responsible for communication. PMID:17065454
Optimal visual-haptic integration with articulated tools.
Takahashi, Chie; Watt, Simon J
2017-05-01
When we feel and see an object, the nervous system integrates visual and haptic information optimally, exploiting the redundancy in multiple signals to estimate properties more precisely than is possible from either signal alone. We examined whether optimal integration is similarly achieved when using articulated tools. Such tools (tongs, pliers, etc.) are a defining characteristic of human hand function, but complicate the classical sensory 'correspondence problem' underlying multisensory integration. Optimal integration requires establishing the relationship between signals acquired by different sensors (hand and eye) and, therefore, in fundamentally unrelated units. The system must also determine when signals refer to the same property of the world (seeing and feeling the same thing) and only integrate those that do. This could be achieved by comparing the pattern of current visual and haptic input to known statistics of their normal relationship. Articulated tools disrupt this relationship, however, by altering the geometrical relationship between object properties and hand posture (the haptic signal). We examined whether different tool configurations are taken into account in visual-haptic integration. We indexed integration by measuring the precision of size estimates, and compared our results to optimal predictions from a maximum-likelihood integrator. Integration was near optimal, independent of tool configuration/hand posture, provided that visual and haptic signals referred to the same object in the world. Thus, sensory correspondence was determined correctly (trial-by-trial), taking tool configuration into account. This reveals highly flexible multisensory integration underlying tool use, consistent with the brain constructing internal models of tools' properties.
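The maximum-likelihood integrator used as the optimal benchmark above is the standard inverse-variance cue-combination rule: each cue is weighted by its reliability, and the combined estimate is more precise than either cue alone. A minimal sketch, with illustrative sigma values rather than the study's measured ones:

```python
def ml_integrate(est_v, sigma_v, est_h, sigma_h):
    """Maximum-likelihood combination of a visual and a haptic estimate.

    Weights are proportional to inverse variance (reliability); the
    predicted variance of the fused estimate is below either cue's own.
    """
    w_v = (1 / sigma_v**2) / (1 / sigma_v**2 + 1 / sigma_h**2)
    w_h = 1.0 - w_v
    combined = w_v * est_v + w_h * est_h
    var_combined = (sigma_v**2 * sigma_h**2) / (sigma_v**2 + sigma_h**2)
    return combined, var_combined

# Illustrative size estimates in mm: vision more reliable than haptics.
est, var = ml_integrate(est_v=50.0, sigma_v=2.0, est_h=54.0, sigma_h=4.0)
print(est, var)  # fused estimate sits closer to the more reliable cue
```

Comparing measured precision of size judgments against `var_combined` is how "near optimal" integration is typically indexed; the study's finding is that this prediction held across tool configurations.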
Can theories of visual representation help to explain asymmetries in amygdala function?
McMenamin, Brenton W; Marsolek, Chad J
2013-06-01
Emotional processing differs between the left and right hemispheres of the brain, and functional differences have been reported more specifically between the left and right amygdalae, subcortical structures heavily implicated in emotional processing. However, the empirical pattern of amygdalar asymmetries is inconsistent with extant theories of emotional asymmetries. Here we review this discrepancy, and we hypothesize that hemispheric differences in visual object processing help to explain the previously reported functional differences between the left and right amygdalae. The implication that perceptual factors play a large role in determining amygdalar asymmetries may help to explain amygdalar dysfunction in the development and maintenance of posttraumatic stress disorder.
Kim, Eun Hwi; Suh, Soon Rim
2017-06-01
This study was conducted to verify the effects of a memory and visual-motor integration program for older adults based on self-efficacy theory. A non-equivalent control group pretest-posttest design was implemented in this quasi-experimental study. The participants were 62 older adults from senior centers and older adult welfare facilities in D and G city (Experimental group=30, Control group=32). The experimental group took part in a 12-session memory and visual-motor integration program over 6 weeks. Data regarding memory self-efficacy, memory, visual-motor integration, and depression were collected from July to October of 2014 and analyzed with independent t-test and Mann-Whitney U test using PASW Statistics (SPSS) 18.0 to determine the effects of the interventions. Memory self-efficacy (t=2.20, p=.031), memory (Z=-2.92, p=.004), and visual-motor integration (Z=-2.49, p=.013) increased significantly in the experimental group as compared to the control group. However, depression (Z=-0.90, p=.367) did not decrease significantly. This program is effective for increasing memory, visual-motor integration, and memory self-efficacy in older adults. Therefore, it can be used to improve cognition and prevent dementia in older adults. © 2017 Korean Society of Nursing Science
Examining the Effect of Age on Visual-Vestibular Self-Motion Perception Using a Driving Paradigm.
Ramkhalawansingh, Robert; Keshavarz, Behrang; Haycock, Bruce; Shahab, Saba; Campos, Jennifer L
2017-05-01
Previous psychophysical research has examined how younger adults and non-human primates integrate visual and vestibular cues to perceive self-motion. However, there is much to be learned about how multisensory self-motion perception changes with age, and how these changes affect performance on everyday tasks involving self-motion. Evidence suggests that older adults display heightened multisensory integration compared with younger adults; however, few previous studies have examined this for visual-vestibular integration. To explore age differences in the way that visual and vestibular cues contribute to self-motion perception, we had younger and older participants complete a basic driving task containing visual and vestibular cues. We compared their performance against a previously established control group that experienced visual cues alone. Performance measures included speed, speed variability, and lateral position. Vestibular inputs resulted in more precise speed control among older adults, but not younger adults, when traversing curves. Older adults demonstrated more variability in lateral position when vestibular inputs were available versus when they were absent. These observations align with previous evidence of age-related differences in multisensory integration and demonstrate that they may extend to visual-vestibular integration. These findings may have implications for vehicle and simulator design when considering older users.
Moors, Pieter; Huygelier, Hanne; Wagemans, Johan; de-Wit, Lee; van Ee, Raymond
2015-01-01
Previous studies using binocular rivalry have shown that signals in a modality other than the visual can bias dominance durations depending on their congruency with the rivaling stimuli. More recently, studies using continuous flash suppression (CFS) have reported that multisensory integration influences how long visual stimuli remain suppressed. In this study, using CFS, we examined whether the contrast thresholds for detecting visual looming stimuli are influenced by a congruent auditory stimulus. In Experiment 1, we show that a looming visual stimulus can result in lower detection thresholds compared to a static concentric grating, but that auditory tone pips congruent with the looming stimulus did not lower suppression thresholds any further. In Experiments 2, 3, and 4, we again observed no advantage for congruent multisensory stimuli. These results add to our understanding of the conditions under which multisensory integration is possible, and suggest that certain forms of multisensory integration are not evident when the visual stimulus is suppressed from awareness using CFS.
Virtual reality applications in improving postural control and minimizing falls.
Virk, Sumandeep; McConville, Kristiina M Valter
2006-01-01
Maintaining balance under all conditions is an absolute requirement for humans. Orientation in space and balance maintenance requires inputs from the vestibular, the visual, the proprioceptive and the somatosensory systems. All the cues coming from these systems are integrated by the central nervous system (CNS) to employ different strategies for orientation and balance. How the CNS integrates all the inputs and makes cognitive decisions about balance strategies has been an area of interest for biomedical engineers for a long time. More interesting is the fact that in the absence of one or more cues, or when the input from one of the sensors is skewed, the CNS "adapts" to the new environment and gives less weight to the conflicting inputs [1]. The focus of this paper is a review of different strategies and models put forward by researchers to explain the integration of these sensory cues. Also, the paper compares the different approaches used by young and old adults in maintaining balance. Since with age the musculoskeletal, visual and vestibular system deteriorates, the older subjects have to compensate for these impaired sensory cues for postural stability. The paper also discusses the applications of virtual reality in rehabilitation programs not only for balance in the elderly but also in occupational falls. Virtual reality has profound applications in the field of balance rehabilitation and training because of its relatively low cost. Studies will be conducted to evaluate the effectiveness of virtual reality training in modifying the head and eye movement strategies, and determine the role of these responses in the maintenance of balance.
Modeling Spatial and Temporal Aspects of Visual Backward Masking
ERIC Educational Resources Information Center
Hermens, Frouke; Luksys, Gediminas; Gerstner, Wulfram; Herzog, Michael H.; Ernst, Udo
2008-01-01
Visual backward masking is a versatile tool for understanding principles and limitations of visual information processing in the human brain. However, the mechanisms underlying masking are still poorly understood. In the current contribution, the authors show that a structurally simple mathematical model can explain many spatial and temporal…
ERIC Educational Resources Information Center
Ozturk, Elif; Erden, Feyza Tantekin
2011-01-01
This study investigates preschool teachers' beliefs about integrated curriculum and, more specifically, their beliefs about integration of visual arts with other activities. The participants of this study consisted of 255 female preschool teachers who are employed in preschools in Ankara, Turkey. For the study, teachers were asked to complete…
The attentive brain: insights from developmental cognitive neuroscience.
Amso, Dima; Scerif, Gaia
2015-10-01
Visual attention functions as a filter to select environmental information for learning and memory, making it the first step in the eventual cascade of thought and action systems. Here, we review studies of typical and atypical visual attention development and explain how they offer insights into the mechanisms of adult visual attention. We detail interactions between visual processing and visual attention, as well as the contribution of visual attention to memory. Finally, we discuss genetic mechanisms underlying attention disorders and how attention may be modified by training.
ERIC Educational Resources Information Center
Tse, Linda F. L.; Siu, Andrew M. H.; Li-Tsang, Cecilia W. P.
2017-01-01
Visual-motor integration (VMI) is the ability to coordinate visual perception and motor skills. Although Chinese children show superior performance in VMI compared with U.S. norms, there is limited information regarding the basic components of VMI, namely its visual and motor aspects. This study aimed to examine the differences in…
A Novel Image Retrieval Based on Visual Words Integration of SIFT and SURF
Ali, Nouman; Bajwa, Khalid Bashir; Sablatnig, Robert; Chatzichristofis, Savvas A.; Iqbal, Zeshan; Rashid, Muhammad; Habib, Hafiz Adnan
2016-01-01
With the recent evolution of technology, the number of image archives has increased exponentially. In Content-Based Image Retrieval (CBIR), high-level visual information is represented in the form of low-level features. The semantic gap between the low-level features and the high-level image concepts is an open research problem. In this paper, we present a novel visual words integration of the Scale Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF). The two local feature representations are selected for image retrieval because SIFT is more robust to changes in scale and rotation, while SURF is robust to changes in illumination. The visual words integration of SIFT and SURF adds the robustness of both features to image retrieval. The qualitative and quantitative comparisons conducted on the Corel-1000, Corel-1500, Corel-2000, Oliva and Torralba, and Ground Truth image benchmarks demonstrate the effectiveness of the proposed visual words integration. PMID:27315101
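The bag-of-visual-words pipeline that such an approach builds on can be sketched as follows. This is a minimal illustration, not the authors' implementation: it uses random arrays as stand-ins for real SIFT (128-D) and SURF (64-D) keypoint descriptors, and an arbitrary vocabulary size of 16 words per feature type.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic stand-ins for keypoint descriptors: real SIFT descriptors are
# 128-D and SURF descriptors 64-D; the values here are random placeholders.
sift_train = rng.normal(size=(300, 128))
surf_train = rng.normal(size=(300, 64))

def build_vocabulary(descriptors, k=16):
    """Quantize training descriptors into k 'visual words' with k-means."""
    return KMeans(n_clusters=k, n_init=4, random_state=0).fit(descriptors)

def bow_histogram(vocab, descriptors):
    """L1-normalized histogram of visual-word assignments for one image."""
    words = vocab.predict(descriptors)
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    return hist / hist.sum()

sift_vocab = build_vocabulary(sift_train)
surf_vocab = build_vocabulary(surf_train)

# "Integration": concatenate the per-feature histograms into one signature,
# so retrieval distances reflect both representations at once.
img_sift = rng.normal(size=(40, 128))   # descriptors of one query image
img_surf = rng.normal(size=(40, 64))
signature = np.concatenate([bow_histogram(sift_vocab, img_sift),
                            bow_histogram(surf_vocab, img_surf)])
print(signature.shape)   # (32,): 16 SIFT words + 16 SURF words
```

Retrieval would then rank database images by distance between such concatenated signatures.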
Components of Attention in Grapheme-Color Synesthesia: A Modeling Approach
Ásgeirsson, Árni Gunnar; Nordfang, Maria; Sørensen, Thomas Alrik
2015-01-01
Grapheme-color synesthesia is a condition where the perception of graphemes consistently and automatically evokes an experience of non-physical color. Many have studied how synesthesia affects the processing of achromatic graphemes, but less is known about the synesthetic processing of physically colored graphemes. Here, we investigated how the visual processing of colored letters is affected by the congruence or incongruence of synesthetic grapheme-color associations. We briefly presented graphemes (10–150 ms) to 9 grapheme-color synesthetes and to 9 control observers. Their task was to report as many letters (targets) as possible, while ignoring digits (distractors). Graphemes were either congruently or incongruently colored with respect to the synesthetes’ reported grapheme-color associations. A mathematical model, based on Bundesen’s (1990) Theory of Visual Attention (TVA), was fitted to each observer’s data, allowing us to estimate discrete components of visual attention. The models suggested that the synesthetes processed congruent letters faster than incongruent ones, and that they were able to retain more congruent letters in visual short-term memory, while the control group’s model parameters were not significantly affected by congruence. The increase in processing speed, when synesthetes process congruent letters, suggests that synesthesia affects the processing of letters at a perceptual level. To account for the benefit in processing speed, we propose that synesthetic associations become integrated into the categories of graphemes, and that letter colors are considered as evidence for making certain perceptual categorizations in the visual system. We also propose that enhanced visual short-term memory capacity for congruently colored graphemes can be explained by the synesthetes’ expertise regarding their specific grapheme-color associations. PMID:26252019
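The TVA-style account can be illustrated with a toy exponential race simulation. The processing rates, exposure duration, and VSTM capacity below are hypothetical values chosen only to show the mechanism, not the fitted parameters from the study.

```python
import numpy as np

rng = np.random.default_rng(2)

def tva_report(rates, exposure, K=4, n_trials=5000):
    """Monte-Carlo TVA-style race: each display element finishes encoding
    at an exponentially distributed time with its rate v; elements that
    finish before the exposure ends enter VSTM, up to capacity K.
    Returns the mean number of elements reported."""
    reported = []
    for _ in range(n_trials):
        t = rng.exponential(1.0 / rates)                 # finishing times
        order = np.argsort(t)
        in_vstm = [i for i in order if t[i] < exposure][:K]
        reported.append(len(in_vstm))
    return float(np.mean(reported))

# Six target letters; hypothetical rates (per second): congruent colors
# speed processing relative to incongruent ones, as the fitted models
# in the study suggested for synesthetes.
congruent = tva_report(np.full(6, 30.0), exposure=0.05)
incongruent = tva_report(np.full(6, 20.0), exposure=0.05)
print(congruent > incongruent)
```

A higher per-element rate yields more letters encoded before the deadline, which is the qualitative congruence benefit the models captured.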
Visual feature integration with an attention deficit.
Arguin, M; Cavanagh, P; Joanette, Y
1994-01-01
Treisman's feature integration theory proposes that the perception of illusory conjunctions of correctly encoded visual features is due to the failure of an attentional process. This hypothesis was examined by studying brain-damaged subjects who had previously been shown to have difficulty in attending to contralesional stimulation. These subjects exhibited a massive feature integration deficit for contralesional stimulation relative to ipsilesional displays. In contrast, both normal age-matched controls and brain-damaged subjects who did not exhibit any evidence of an attention deficit showed comparable feature integration performance with left- and right-hemifield stimulation. These observations indicate the crucial function of attention for visual feature integration in normal perception.
Saidi, Maryam; Towhidkhah, Farzad; Gharibzadeh, Shahriar; Lari, Abdolaziz Azizi
2013-12-01
Humans perceive the surrounding world by integrating information from different sensory modalities. Earlier models of multisensory integration rely mainly on traditional Bayesian inference for a single cause (source) and on causal Bayesian inference for two causes (for two senses, such as the visual and auditory systems). In this paper, a new recurrent neural model is presented for the integration of visual and proprioceptive information. This model is based on population coding and is able to mimic the multisensory integration performed by neural centers in the human brain. The simulation results agree with those achieved by causal Bayesian inference. The model can also simulate the sensory training process for visual and proprioceptive information in humans. The training process in multisensory integration has previously received little attention in the literature. The effect of proprioceptive training on multisensory perception was investigated through a set of experiments in our previous study. The current study evaluates the effect of training in both modalities, i.e., visual and proprioceptive, and compares them with each other through a set of new experiments. In these experiments, the subject was asked to move his/her hand in a circle and estimate its position. The experiments were performed on eight subjects with proprioceptive training and eight subjects with visual training. Results of the experiments show three important points: (1) the visual learning rate is significantly higher than that of proprioception; (2) the mean visual and proprioceptive errors are decreased by training, but statistical analysis shows that this decrement is significant for proprioceptive error and non-significant for visual error; and (3) visual errors in the training phase, even at its beginning, are much smaller than errors in the main test stage, because in the main test the subject has to focus on two senses.
The results of the experiments in this paper are in agreement with the results of the neural model simulation.
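For context, the single-cause Bayesian combination rule that such recurrent models are compared against reduces, for Gaussian likelihoods, to reliability-weighted averaging of the two cues. A minimal sketch with illustrative numbers:

```python
def integrate_cues(x_v, var_v, x_p, var_p):
    """Reliability-weighted fusion of a visual estimate x_v and a
    proprioceptive estimate x_p under a single-cause Gaussian model.
    Each cue is weighted by its inverse variance (reliability)."""
    w_v = (1 / var_v) / (1 / var_v + 1 / var_p)
    x_hat = w_v * x_v + (1 - w_v) * x_p
    var_hat = 1 / (1 / var_v + 1 / var_p)  # fused variance beats either cue alone
    return x_hat, var_hat

# Illustrative values: vision is the more reliable cue for hand position.
x_hat, var_hat = integrate_cues(x_v=10.0, var_v=1.0, x_p=14.0, var_p=4.0)
print(round(x_hat, 2), round(var_hat, 2))  # 10.8 0.8
```

The fused estimate sits closer to the more reliable (visual) cue, and its variance is smaller than either cue's alone, which is the signature prediction of this class of models.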
Cognitive penetrability and emotion recognition in human facial expressions
Marchi, Francesco
2015-01-01
Do our background beliefs, desires, and mental images influence our perceptual experience of the emotions of others? In this paper, we will address the possibility of cognitive penetration (CP) of perceptual experience in the domain of social cognition. In particular, we focus on emotion recognition based on the visual experience of facial expressions. After introducing the current debate on CP, we review examples of perceptual adaptation for facial expressions of emotion. This evidence supports the idea that facial expressions are perceptually processed as wholes. That is, the perceptual system integrates lower-level facial features, such as eyebrow orientation, mouth angle etc., into facial compounds. We then present additional experimental evidence showing that in some cases, emotion recognition on the basis of facial expression is sensitive to and modified by the background knowledge of the subject. We argue that such sensitivity is best explained as a difference in the visual experience of the facial expression, not just as a modification of the judgment based on this experience. The difference in experience is characterized as the result of the interference of background knowledge with the perceptual integration process for faces. Thus, according to the best explanation, we have to accept CP in some cases of emotion recognition. Finally, we discuss a recently proposed mechanism for CP in the face-based recognition of emotion. PMID:26150796
jAMVLE, a New Integrated Molecular Visualization Learning Environment
ERIC Educational Resources Information Center
Bottomley, Steven; Chandler, David; Morgan, Eleanor; Helmerhorst, Erik
2006-01-01
A new computer-based molecular visualization tool has been developed for teaching, and learning, molecular structure. This java-based jmol Amalgamated Molecular Visualization Learning Environment (jAMVLE) is platform-independent, integrated, and interactive. It has an overall graphical user interface that is intuitive and easy to use. The…
Supporting Knowledge Integration in Chemistry with a Visualization-Enhanced Inquiry Unit
ERIC Educational Resources Information Center
Chiu, Jennifer L.; Linn, Marcia C.
2014-01-01
This paper describes the design and impact of an inquiry-oriented online curriculum that takes advantage of dynamic molecular visualizations to improve students' understanding of chemical reactions. The visualization-enhanced unit uses research-based guidelines following the knowledge integration framework to help students develop coherent…
Developing Visual Creative Literacies through Integrating Art-Based Inquiry
ERIC Educational Resources Information Center
Smilan, Cathy
2016-01-01
Visual literacy and the ability to think creatively are critical skills requisite to full participation and communication in the twenty-first century. Learning experiences that integrate studio-based inquiry and other academic concepts can develop discipline skills as well as communication skills of deciphering visual cues and de/re-constructing…
Slipped Lips: Onset Asynchrony Detection of Auditory-Visual Language in Autism
ERIC Educational Resources Information Center
Grossman, Ruth B.; Schneps, Matthew H.; Tager-Flusberg, Helen
2009-01-01
Background: It has frequently been suggested that individuals with autism spectrum disorder (ASD) have deficits in auditory-visual (AV) sensory integration. Studies of language integration have mostly used non-word syllables presented in congruent and incongruent AV combinations and demonstrated reduced influence of visual speech in individuals…
Cecere, Roberto; Gross, Joachim; Thut, Gregor
2016-06-01
The ability to integrate auditory and visual information is critical for effective perception and interaction with the environment, and is thought to be abnormal in some clinical populations. Several studies have investigated the time window over which audiovisual events are integrated, also called the temporal binding window, and revealed asymmetries depending on the order of audiovisual input (i.e. the leading sense). When judging audiovisual simultaneity, the binding window appears narrower and non-malleable for auditory-leading stimulus pairs and wider and trainable for visual-leading pairs. Here we specifically examined the level of independence of binding mechanisms when auditory-before-visual vs. visual-before-auditory input is bound. Three groups of healthy participants practiced audiovisual simultaneity detection with feedback, selectively training on auditory-leading stimulus pairs (group 1), visual-leading stimulus pairs (group 2) or both (group 3). Subsequently, we tested for learning transfer (crossover) from trained stimulus pairs to non-trained pairs with the opposite order of audiovisual input. Our data confirmed the known asymmetry in size and trainability for auditory-visual vs. visual-auditory binding windows. More importantly, practicing one type of audiovisual integration (e.g. auditory-visual) did not affect the other type (e.g. visual-auditory), even when the latter was trainable by within-condition practice. Together, these results provide crucial evidence that the audiovisual temporal binding mechanisms for auditory-leading vs. visual-leading stimulus pairs are independent, possibly tapping into different circuits for audiovisual integration through the engagement of different multisensory sampling mechanisms depending on the leading sense. Our results have implications for the study of multisensory interactions in healthy participants and in clinical populations with dysfunctional multisensory integration. © 2016 The Authors.
European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
On the Integration of Medium Wave Infrared Cameras for Vision-Based Navigation
2015-03-01
Abbreviations: SWIR, Short Wave Infrared; VisualSFM, Visual Structure from Motion; WPAFB, Wright Patterson Air Force Base. ...Visual Structure from Motion (VisualSFM) is an application that performs incremental SfM using images of a scene fed into it [20]. ...too drastically in between frames. When this happens, VisualSFM will begin creating a new model with images that do not fit the old one. These new
Memel, Molly; Ryan, Lee
2017-06-01
The ability to remember associations between previously unrelated pieces of information is often impaired in older adults (Naveh-Benjamin, 2000). Unitization, the process of creating a perceptually or semantically integrated representation that includes both items in an associative pair, attenuates age-related associative deficits (Bastin et al., 2013; Ahmad et al., 2015; Zheng et al., 2015). Compared to non-unitized pairs, unitized pairs may rely less on hippocampally-mediated binding associated with recollection, and more on familiarity-based processes mediated by perirhinal cortex (PRC) and parahippocampal cortex (PHC). While unitization of verbal materials improves associative memory in older adults, less is known about the impact of visual integration. The present study determined whether visual integration improves associative memory in older adults by minimizing the need for hippocampal (HC) recruitment and shifting encoding to non-hippocampal medial temporal structures, such as the PRC and PHC. Young and older adults were presented with a series of objects paired with naturalistic scenes while undergoing fMRI scanning, and were later given an associative memory test. Visual integration was varied by presenting the object either next to the scene (Separated condition) or visually integrated within the scene (Combined condition). Visual integration improved associative memory among young and older adults to a similar degree by increasing the hit rate for intact pairs, but without increasing false alarms for recombined pairs, suggesting enhanced recollection rather than increased reliance on familiarity. Also contrary to expectations, visual integration resulted in increased hippocampal activation in both age groups, along with increases in PRC and PHC activation. 
Activation in all three MTL regions predicted discrimination performance during the Separated condition in young adults, while only a marginal relationship between PRC activation and performance was observed during the Combined condition. Older adults showed less overall activation in MTL regions compared to young adults, and associative memory performance was most strongly predicted by prefrontal, rather than MTL, activation. We suggest that visual integration benefits both young and older adults similarly, and provides a special case of unitization that may be mediated by recollective, rather than familiarity-based encoding processes. Copyright © 2017 Elsevier Ltd. All rights reserved.
Visual recovery in cortical blindness is limited by high internal noise
Cavanaugh, Matthew R.; Zhang, Ruyuan; Melnick, Michael D.; Das, Anasuya; Roberts, Mariel; Tadin, Duje; Carrasco, Marisa; Huxlin, Krystel R.
2015-01-01
Damage to the primary visual cortex typically causes cortical blindness (CB) in the hemifield contralateral to the damaged hemisphere. Recent evidence indicates that visual training can partially reverse CB at trained locations. Whereas training induces near-complete recovery of coarse direction and orientation discriminations, deficits in fine motion processing remain. Here, we systematically disentangle components of the perceptual inefficiencies present in CB fields before and after coarse direction discrimination training. In seven human CB subjects, we measured threshold versus noise functions before and after coarse direction discrimination training in the blind field and at corresponding intact field locations. Threshold versus noise functions were analyzed within the framework of the linear amplifier model and the perceptual template model. Linear amplifier model analysis identified internal noise as a key factor differentiating motion processing across the tested areas, with visual training reducing internal noise in the blind field. Differences in internal noise also explained residual perceptual deficits at retrained locations. These findings were confirmed with perceptual template model analysis, which further revealed that the major residual deficits between retrained and intact field locations could be explained by differences in internal additive noise. There were no significant differences in multiplicative noise or the ability to process external noise. Together, these results highlight the critical role of altered internal noise processing in mediating training-induced visual recovery in CB fields, and may explain residual perceptual deficits relative to intact regions of the visual field. PMID:26389544
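The linear amplifier model analysis described above rests on a simple relation: threshold contrast at a fixed criterion grows with the quadrature sum of external noise and equivalent internal noise. A sketch with illustrative parameter values (not the values fitted in the study):

```python
import numpy as np

def lam_threshold(n_ext, n_eq, beta, d_prime=1.0):
    """Linear amplifier model: contrast threshold at criterion d' grows
    with the quadrature sum of external noise n_ext and equivalent
    internal noise n_eq; the gain beta scales the whole curve."""
    return (d_prime / beta) * np.sqrt(n_ext**2 + n_eq**2)

external_noise = np.array([0.0, 0.05, 0.1, 0.2, 0.4])
# Hypothetical values: higher equivalent internal noise in the blind field,
# reduced by training (the paper's key factor), with gain held fixed.
blind_field = lam_threshold(external_noise, n_eq=0.30, beta=1.0)
retrained = lam_threshold(external_noise, n_eq=0.15, beta=1.0)
# Lowering internal noise mainly improves thresholds at low external noise,
# where performance is internal-noise-limited; at high external noise the
# two threshold-versus-noise curves converge.
print(np.all(retrained <= blind_field))
```

This is why a reduction of internal noise shows up as a downward shift of the flat, low-external-noise portion of the threshold versus noise function.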
ERIC Educational Resources Information Center
Meyer, Philip A.
Reported were four experiments which investigated the developmental and mental retardation aspects of an initial stage of visual information processing termed iconic memory. This stage was explained as involving the processing of visual stimuli from sensation through to recognition. In three of the four experiments, the paradigm of visual masking…
Visual Analytics and Storytelling through Video
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wong, Pak C.; Perrine, Kenneth A.; Mackey, Patrick S.
2005-10-31
This paper supplements a video clip submitted to the Video Track of IEEE Symposium on Information Visualization 2005. The original video submission applies a two-way storytelling approach to demonstrate the visual analytics capabilities of a new visualization technique. The paper presents our video production philosophy, describes the plot of the video, explains the rationale behind the plot, and finally, shares our production experiences with our readers.
Integrating visualization and interaction research to improve scientific workflows.
Keefe, Daniel F
2010-01-01
Scientific-visualization research is, nearly by necessity, interdisciplinary. In addition to their collaborators in application domains (for example, cell biology), researchers regularly build on close ties with disciplines related to visualization, such as graphics, human-computer interaction, and cognitive science. One of these ties is the connection between visualization and interaction research. This isn't a new direction for scientific visualization (see the "Early Connections" sidebar). However, momentum recently seems to be increasing toward integrating visualization research (for example, effective visual presentation of data) with interaction research (for example, innovative interactive techniques that facilitate manipulating and exploring data). We see evidence of this trend in several places, including the visualization literature and conferences.
Motion-based prediction explains the role of tracking in motion extrapolation.
Khoei, Mina A; Masson, Guillaume S; Perrinet, Laurent U
2013-11-01
During normal viewing, the continuous stream of visual input is regularly interrupted, for instance by blinks of the eye. Despite these frequent blanks (that is, the transient absence of a raw sensory source), the visual system is most often able to maintain a continuous representation of motion. For instance, it maintains the movement of the eye so as to stabilize the image of an object. This ability suggests the existence of a generic neural mechanism of motion extrapolation to deal with fragmented inputs. In this paper, we have modeled how the visual system may extrapolate the trajectory of an object during a blank using motion-based prediction. This implies that, using a prior on the coherency of motion, the system may integrate previous motion information even in the absence of a stimulus. In order to compare with experimental results, we simulated tracking velocity responses. We found that the response of the motion integration process to a blanked trajectory pauses at the onset of the blank, but that it quickly recovers the information on the trajectory after reappearance. This is compatible with behavioral and neural observations on motion extrapolation. To understand these mechanisms, we recorded the response of the model to a noisy stimulus. Crucially, we found that motion-based prediction acted at the global level as a gain control mechanism and that the model could switch from a smooth regime to a binary tracking behavior in which the dot is either tracked or lost. Our results imply that a local prior implementing motion-based prediction is sufficient to explain a large range of neural and behavioral results at a more global level. We show that the tracking behavior deteriorates for sensory noise levels higher than a certain value, beyond which motion coherency and predictability no longer hold. In particular, we found that motion-based prediction leads to the emergence of a tracking behavior only when enough information from the trajectory has been accumulated. Then, during tracking, trajectory estimation is robust to blanks even in the presence of relatively high levels of noise. Moreover, we found that tracking is necessary for motion extrapolation; this calls for further experimental work exploring the role of noise in motion extrapolation. Copyright © 2013 Elsevier Ltd. All rights reserved.
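As a point of comparison (this is not the authors' probabilistic motion-based prediction model), even a bare constant-velocity Kalman filter exhibits the qualitative behavior described: during a blank it keeps extrapolating the trajectory on its motion prior alone, and re-converges once measurements reappear. All parameter values here are illustrative.

```python
import numpy as np

# 1-D constant-velocity Kalman filter; None marks blanked frames.
dt, q, r = 1.0, 0.01, 0.5
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (position, velocity)
H = np.array([[1.0, 0.0]])              # only position is measured
Q = q * np.eye(2)
x, P = np.zeros(2), np.eye(2)

true_pos = np.arange(20, dtype=float)   # dot moving at 1 unit per frame
obs = [p if not (8 <= t <= 12) else None   # frames 8-12 are blanked
       for t, p in enumerate(true_pos)]

estimates = []
for z in obs:
    # predict: the motion prior alone carries the trajectory forward
    x = F @ x
    P = F @ P @ F.T + Q
    if z is not None:                   # update only when a measurement exists
        S = H @ P @ H.T + r
        K = P @ H.T / S
        x = x + (K * (z - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P
    estimates.append(x[0])

# During the blank the estimate continues along the learned trajectory,
# so the error at the end of the blank stays small.
print(abs(estimates[12] - true_pos[12]) < 1.5)
```

Note the dependence on accumulated information: the extrapolation is only accurate because the velocity estimate had converged before the blank, echoing the paper's finding that tracking must be established before extrapolation works.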
Dynamic Integration of Task-Relevant Visual Features in Posterior Parietal Cortex
Freedman, David J.
2014-01-01
Summary The primate visual system consists of multiple hierarchically organized cortical areas, each specialized for processing distinct aspects of the visual scene. For example, color and form are encoded in ventral pathway areas such as V4 and inferior temporal cortex, while motion is preferentially processed in dorsal pathway areas such as the middle temporal area. Such representations often need to be integrated perceptually to solve tasks which depend on multiple features. We tested the hypothesis that the lateral intraparietal area (LIP) integrates disparate task-relevant visual features by recording from LIP neurons in monkeys trained to identify target stimuli composed of conjunctions of color and motion features. We show that LIP neurons exhibit integrative representations of both color and motion features when they are task relevant, and task-dependent shifts of both direction and color tuning. This suggests that LIP plays a role in flexibly integrating task-relevant sensory signals. PMID:25199703
Who Is the Visually Impaired Child? MAVIS Sourcebook 1.
ERIC Educational Resources Information Center
Efron, Marvin
This booklet is the first in a series of six sourcebooks produced by Project MAVIS (Materials Adaptations for Visually Impaired Students in the Social Studies). Designed primarily for teachers, the booklet defines visual impairment and its causes, explains how to interpret a student's eye report, and reviews aspects of the overall educational plan…
Coping with a Visual Impairment through Self-Investigation.
ERIC Educational Resources Information Center
van Huijgevoort, Toos
2002-01-01
This article describes how the self-confrontation method (SCM) can help people cope with visual impairment. The actual impact of the impairment can be studied by establishing how being visually impaired is expressed in self-narratives. The different phases of SCM are explained and two case studies are presented. (Contains references.) (Author/CR)
Computer-Based Tutoring of Visual Concepts: From Novice to Experts.
ERIC Educational Resources Information Center
Sharples, Mike
1991-01-01
Description of ways in which computers might be used to teach visual concepts discusses hypermedia systems; describes computer-generated tutorials; explains the use of computers to create learning aids such as concept maps, feature spaces, and structural models; and gives examples of visual concept teaching in medical education. (10 references)…
Social Studies for the Visually Impaired Child. MAVIS Sourcebook 4.
ERIC Educational Resources Information Center
Singleton, Laurel R.
Suggestions are made in this sourcebook for adapting teaching strategies and curriculum materials in social studies to accommodate the needs of the visually impaired (VI) student. It is presented in eight chapters. Chapter one explains why elementary grade social studies, with its emphasis on visual media, presents difficulties for VI children.…
The effect of early visual deprivation on the neural bases of multisensory processing.
Guerreiro, Maria J S; Putzar, Lisa; Röder, Brigitte
2015-06-01
Developmental vision is deemed to be necessary for the maturation of multisensory cortical circuits. Thus far, this has only been investigated in animal studies, which have shown that congenital visual deprivation markedly reduces the capability of neurons to integrate cross-modal inputs. The present study investigated the effect of transient congenital visual deprivation on the neural mechanisms of multisensory processing in humans. We used functional magnetic resonance imaging to compare responses of visual and auditory cortical areas to visual, auditory and audio-visual stimulation in cataract-reversal patients and normally sighted controls. The results showed that cataract-reversal patients, unlike normally sighted controls, did not exhibit multisensory integration in auditory areas. Furthermore, cataract-reversal patients, but not normally sighted controls, exhibited lower visual cortical processing within visual cortex during audio-visual stimulation than during visual stimulation. These results indicate that congenital visual deprivation affects the capability of cortical areas to integrate cross-modal inputs in humans, possibly because visual processing is suppressed during cross-modal stimulation. Arguably, the lack of vision in the first months after birth may result in a reorganization of visual cortex, including the suppression of noisy visual input from the deprived retina in order to reduce interference during auditory processing. © The Author (2015). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
The Function of Consciousness in Multisensory Integration
ERIC Educational Resources Information Center
Palmer, Terry D.; Ramsey, Ashley K.
2012-01-01
The function of consciousness was explored in two contexts of audio-visual speech: cross-modal visual attention guidance and McGurk cross-modal integration. Experiments 1, 2, and 3 utilized a novel cueing paradigm in which two different flash-suppressed lip-streams co-occurred with speech sounds matching one of these streams. A visual target was…
ERIC Educational Resources Information Center
Haegele, Justin A.; Zhu, Xihe
2017-01-01
Purpose: The purpose of this retrospective study was to examine the experiences of adults with visual impairments during school-based integrated physical education (PE). Method: An interpretative phenomenological analysis (IPA) research approach was used and 16 adults (ages 21-48 years; 10 women, 6 men) with visual impairments acted as…
Exploring the Integration of Data Mining and Data Visualization
ERIC Educational Resources Information Center
Zhang, Yi
2011-01-01
Due to the rapid advances in computing and sensing technologies, enormous amounts of data are being generated everyday in various applications. The integration of data mining and data visualization has been widely used to analyze these massive and complex data sets to discover hidden patterns. For both data mining and visualization to be…
ERIC Educational Resources Information Center
Cunnington, Marisol; Kantrowitz, Andrea; Harnett, Susanne; Hill-Ries, Aline
2014-01-01
The "Framing Student Success: Connecting Rigorous Visual Arts, Math and Literacy Learning" experimental demonstration project was designed to develop and test an instructional program integrating high-quality, standards-based instruction in the visual arts, math, and literacy. Developed and implemented by arts-in-education organization…
75 FR 53342 - Privacy Act of 1974; System of Records
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-31
...), Federal Bureau of Investigation (FBI), proposes to establish a new system of records, the Data Integration and Visualization System, JUSTICE/FBI-021, to support and enhance data search, integration... to develop more efficient methods to analyze FBI data. The Data Integration and Visualization System...
75 FR 53262 - Privacy Act of 1974; System of Records
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-31
... a new Privacy Act system of records, JUSTICE/FBI- 021, the Data Integration and Visualization System... Act system of records, the Data Integration and Visualization System (DIVS), Justice/FBI-021. The... investigative mission by enabling access, search, integration, and analytics across multiple existing databases...
Drijvers, Linda; Özyürek, Asli; Jensen, Ole
2018-06-19
Previous work revealed that visual semantic information conveyed by gestures can enhance degraded speech comprehension, but the mechanisms underlying these integration processes under adverse listening conditions remain poorly understood. We used MEG to investigate how oscillatory dynamics support speech-gesture integration when integration load is manipulated by auditory (e.g., speech degradation) and visual semantic (e.g., gesture congruency) factors. Participants were presented with videos of an actress uttering an action verb in clear or degraded speech, accompanied by a matching (mixing gesture + "mixing") or mismatching (drinking gesture + "walking") gesture. In clear speech, alpha/beta power was more suppressed in the left inferior frontal gyrus and motor and visual cortices when integration load increased in response to mismatching versus matching gestures. In degraded speech, beta power was less suppressed over posterior STS and medial temporal lobe for mismatching compared with matching gestures, showing that integration load was lowest when speech was degraded and mismatching gestures could not be integrated and disambiguate the degraded signal. Our results thus provide novel insights on how low-frequency oscillatory modulations in different parts of the cortex support the semantic audiovisual integration of gestures in clear and degraded speech: When speech is clear, the left inferior frontal gyrus and motor and visual cortices engage because higher-level semantic information increases semantic integration load. When speech is degraded, posterior STS/middle temporal gyrus and medial temporal lobe are less engaged because integration load is lowest when visual semantic information does not aid lexical retrieval and speech and gestures cannot be integrated.
A neurophysiological explanation for biases in visual localization.
Moreland, James C; Boynton, Geoffrey M
2017-02-01
Observers show small but systematic deviations from equal weighting of all elements when asked to localize the center of an array of dots. Counter-intuitively, with small numbers of dots drawn from a Gaussian distribution, this bias results in subjects overweighting the influence of outlier dots - inconsistent with traditional statistical estimators of central tendency. Here we show that this apparent statistical anomaly can be explained by the observation that outlier dots also lie in regions of lower dot density. Using a standard model of V1 processing, which includes spatial integration followed by a compressive static nonlinearity, we can successfully predict the finding that dots in less dense regions of an array have a relatively greater influence on the perceived center.
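The two-stage account in this abstract (spatial pooling followed by a compressive static nonlinearity) can be sketched in a few lines. This is a hedged one-dimensional illustration: the pooling width, compressive exponent, grid, and dot positions are assumptions for demonstration, not the authors' fitted values.

```python
import numpy as np

def perceived_center(dots, sigma=1.0, exponent=0.5, grid=np.linspace(-5, 5, 201)):
    """Toy V1-style localization model: Gaussian spatial integration
    followed by a compressive static nonlinearity (all parameters illustrative)."""
    # Spatial integration: population response as a sum of Gaussians, one per dot.
    response = np.zeros_like(grid)
    for d in dots:
        response += np.exp(-(grid - d) ** 2 / (2 * sigma ** 2))
    # Compressive nonlinearity: responses in dense regions are under-weighted.
    compressed = response ** exponent
    # Perceived centre = centroid of the compressed population response.
    return np.sum(grid * compressed) / np.sum(compressed)

# A tight cluster near 0 plus one outlier at 3: the model's centre estimate
# lies beyond the arithmetic mean, i.e. the outlier dot is over-weighted,
# matching the bias described in the abstract.
dots = [-0.2, 0.0, 0.2, 3.0]
print(perceived_center(dots), np.mean(dots))
```

The compression is the key ingredient: squashing the tall response bump over the dense cluster reduces its relative mass, so low-density (outlier) regions pull the centroid disproportionately.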
Holmes, Susan; Alekseyenko, Alexander; Timme, Alden; Nelson, Tyrrell; Pasricha, Pankaj Jay; Spormann, Alfred
2011-01-01
This article explains the statistical and computational methodology used to analyze species abundances collected using the LBNL PhyloChip in a study of Irritable Bowel Syndrome (IBS) in rats. Some tools already available for the analysis of ordinary microarray data are useful in this type of statistical analysis. For instance, in correcting for multiple testing we use family-wise error rate control and step-down tests (available in the multtest package). Once the most significant species are chosen, we use the hypergeometric tests familiar from testing GO categories to test specific phyla and families. We provide examples of normalization, multivariate projections, batch effect detection and integration of phylogenetic covariation, as well as tree equalization and robustification methods.
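The two testing steps described here can be illustrated with a hedged, stdlib-only sketch: a hypergeometric enrichment test for a taxonomic category, then a Holm step-down adjustment for family-wise error rate control. The counts and p-values are made up, and `holm` is a plain-Python stand-in for the procedures the multtest package provides.

```python
from math import comb

def hypergeom_sf(k, N, K, n):
    """P(X >= k): chance of seeing at least k category members among the
    n significant taxa, when K of the N taxa on the array are in the category."""
    total = comb(N, n)
    return sum(comb(K, j) * comb(N - K, n - j)
               for j in range(k, min(K, n) + 1)) / total

def holm(pvals):
    """Holm step-down adjustment controlling the family-wise error rate."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adj, running = [0.0] * m, 0.0
    for rank, i in enumerate(order):
        # Enforce monotonicity while scaling by the number of remaining tests.
        running = max(running, (m - rank) * pvals[i])
        adj[i] = min(1.0, running)
    return adj

# Hypothetical counts: 500 taxa on the array, 40 in one phylum,
# 25 called significant, 8 of them from that phylum (~2 expected by chance).
p_enrich = hypergeom_sf(8, 500, 40, 25)
print(p_enrich, holm([0.001, 0.04, 0.03, 0.2]))
```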
Arora, Karun S; Boland, Michael V; Friedman, David S; Jefferys, Joan L; West, Sheila K; Ramulu, Pradeep Y
2013-12-01
To determine the extent of difference between better-eye visual field (VF) mean deviation (MD) and integrated VF (IVF) MD among Salisbury Eye Evaluation (SEE) subjects and a larger group of glaucoma clinic subjects and to assess how those measures relate to objective and subjective measures of ability/performance in SEE subjects. Retrospective analysis of population- and clinic-based samples of adults. A total of 490 SEE and 7053 glaucoma clinic subjects with VF loss (MD ≤-3 decibels [dB] in at least 1 eye). Visual field testing was performed in each eye, and IVF MD was calculated. Differences between better-eye and IVF MD were calculated for SEE and clinic-based subjects. In SEE subjects with VF loss, models were constructed to compare the relative impact of better-eye and IVF MD on driving habits, mobility, self-reported vision-related function, and reading speed. Difference between better-eye and IVF MD and relationship of better-eye and IVF MD with performance measures. The median difference between better-eye and IVF MD was 0.41 dB (interquartile range [IQR], -0.21 to 1.04 dB) and 0.72 dB (IQR, 0.04-1.45 dB) for SEE subjects and clinic-based patients with glaucoma, respectively, with differences of ≥ 2 dB between the 2 MDs observed in 9% and 18% of the groups, respectively. Among SEE subjects with VF loss, both MDs demonstrated similar associations with multiple ability and performance metrics as judged by the presence/absence of a statistically significant association between the MD and the metric, the magnitude of observed associations (odds ratios, rate ratios, or regression coefficients associated with 5-dB decrements in MD), and the extent of variability in the metric explained by the model (R²). Similar associations of similar magnitude also were noted for the subgroup of subjects with glaucoma and subjects in whom better-eye and IVF MD differed by ≥ 2 dB. 
The IVF MD rarely differs from better-eye MD, and similar associations between VF loss and visual disability are obtained using either MD. Unlike better-eye MD, IVF measurements require extra software/calculation. As such, information from studies using better-eye MD can be more easily integrated into clinical decision-making, making better-eye MD a robust and meaningful method for reporting VF loss severity. Copyright © 2013 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
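One common IVF construction, assumed here since the abstract does not spell it out, takes the pointwise better sensitivity across the two eyes before averaging. A toy sketch with made-up total-deviation values shows why IVF MD can only match or exceed better-eye MD:

```python
# Hedged sketch of an integrated visual field (IVF): at each test location,
# keep the better (higher) total-deviation value of the two eyes, then
# average. All dB values below are invented for illustration.
def integrated_field(left_td, right_td):
    """Pointwise best total-deviation value across eyes (one common IVF rule)."""
    return [max(l, r) for l, r in zip(left_td, right_td)]

def mean_deviation(td):
    """Unweighted average of total-deviation values (simplified MD)."""
    return sum(td) / len(td)

left = [-4.0, -6.0, -1.0, -8.0]   # hypothetical total deviations (dB)
right = [-2.0, -9.0, -3.0, -1.0]

ivf_md = mean_deviation(integrated_field(left, right))
better_eye_md = max(mean_deviation(left), mean_deviation(right))
print(ivf_md, better_eye_md)
```

Because each IVF location takes the per-point maximum, IVF MD is never worse than better-eye MD, consistent with the small positive median differences (0.41 and 0.72 dB) reported above.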
Explaining neural signals in human visual cortex with an associative learning model.
Jiang, Jiefeng; Schmajuk, Nestor; Egner, Tobias
2012-08-01
"Predictive coding" models posit a key role for associative learning in visual cognition, viewing perceptual inference as a process of matching (learned) top-down predictions (or expectations) against bottom-up sensory evidence. At the neural level, these models propose that each region along the visual processing hierarchy entails one set of processing units encoding predictions of bottom-up input, and another set computing mismatches (prediction error or surprise) between predictions and evidence. This contrasts with traditional views of visual neurons operating purely as bottom-up feature detectors. In support of the predictive coding hypothesis, a recent human neuroimaging study (Egner, Monti, & Summerfield, 2010) showed that neural population responses to expected and unexpected face and house stimuli in the "fusiform face area" (FFA) could be well-described as a summation of hypothetical face-expectation and -surprise signals, but not by feature detector responses. Here, we used computer simulations to test whether these imaging data could be formally explained within the broader framework of a mathematical neural network model of associative learning (Schmajuk, Gray, & Lam, 1996). Results show that FFA responses could be fit very closely by model variables coding for conditional predictions (and their violations) of stimuli that unconditionally activate the FFA. These data document that neural population signals in the ventral visual stream that deviate from classic feature detection responses can formally be explained by associative prediction and surprise signals.
A Hierarchical Visualization Analysis Model of Power Big Data
NASA Astrophysics Data System (ADS)
Li, Yongjie; Wang, Zheng; Hao, Yang
2018-01-01
Based on the concept of integrating VR scenes with power big data analysis, a hierarchical visualization analysis model of power big data is proposed, in which levels are designed to target different abstract modules such as transaction, engine, computation, control, and storage. The normally separate modules of power data storage, data mining and analysis, and data visualization are integrated into one platform by this model. It provides a visual analysis solution for power big data.
The integration of visual context information in facial emotion recognition in 5- to 15-year-olds.
Theurel, Anne; Witt, Arnaud; Malsert, Jennifer; Lejeune, Fleur; Fiorentini, Chiara; Barisnikov, Koviljka; Gentaz, Edouard
2016-10-01
The current study investigated the role of congruent visual context information in the recognition of facial emotional expression in 190 participants from 5 to 15 years of age. Children performed a matching task that presented pictures with different facial emotional expressions (anger, disgust, happiness, fear, and sadness) in two conditions: with and without a visual context. The results showed that emotions presented with visual context information were recognized more accurately than those presented in the absence of visual context. The context effect remained steady with age but varied according to the emotion presented and the gender of participants. The findings demonstrated for the first time that children from the age of 5 years are able to integrate facial expression and visual context information, and this integration improves facial emotion recognition. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Duong, Tuan A.; Duong, Nghi; Le, Duong
2017-01-01
In this paper, we present an integration technique using a bio-inspired, control-based visual and olfactory receptor system to search for elusive targets in practical environments where the targets cannot be detected reliably from either sensory stream alone. The bio-inspired visual system is based on a model of the extended visual pathway, consisting of saccadic eye movements and the visual pathway (vertebrate retina, lateral geniculate nucleus, and visual cortex), enabling powerful detection of targets from noisy, partial, or incomplete visual data. The olfactory receptor algorithm, namely spatially invariant independent component analysis, which was developed from Caltech's olfactory receptor-electronic nose (enose) data, is adopted to enable odorant target detection in an unknown environment. The integration of the two systems is a vital approach and sets a cornerstone for effective, low-cost miniaturized UAVs or fly robots for future DOD and NASA missions, as well as for security systems in Internet of Things environments.
NASA Astrophysics Data System (ADS)
Yu, K. C.; Champlin, D. M.; Goldsworth, D. A.; Raynolds, R. G.; Dechesne, M.
2011-09-01
Digital Earth visualization technologies, from ArcGIS to Google Earth, have allowed for the integration of complex, disparate data sets to produce visually rich and compelling three-dimensional models of sub-surface and surface resource distribution patterns. The rendering of these models allows the public to quickly understand complicated geospatial relationships that would otherwise take much longer to explain using traditional media. At the Denver Museum of Nature & Science (DMNS), we have used such visualization technologies, including real-time virtual reality software running in the immersive digital "fulldome" Gates Planetarium, to impact the community through topical policy presentations. DMNS public lectures have covered regional issues like water resources, as well as global topics such as earthquakes, tsunamis, and resource depletion. The Gates Planetarium allows an audience to have an immersive experience, similar to the virtual reality "CAVE" environments found in academia, that would otherwise not be available to the general public. Public lectures in the dome allow audiences of over 100 people to comprehend dynamically changing geospatial datasets in an exciting and engaging fashion. Surveys and interviews show that these talks are effective in heightening visitor interest in the subjects weeks or months after the presentation. Many visitors take additional steps to learn more, while one was so inspired that she actively worked to bring the same programming to her children's school. These preliminary findings suggest that fulldome real-time visualizations can have a substantial long-term impact on an audience's engagement and interest in science topics.
Integrated web visualizations for protein-protein interaction databases.
Jeanquartier, Fleur; Jean-Quartier, Claire; Holzinger, Andreas
2015-06-16
Understanding living systems is crucial for curing diseases. To achieve this task we have to understand biological networks based on protein-protein interactions. Bioinformatics has produced a great number of databases and tools that support analysts in exploring protein-protein interactions on an integrated level for knowledge discovery. They provide predictions and correlations, indicate possibilities for future experimental research and fill the gaps to complete the picture of biochemical processes. There are numerous and huge databases of protein-protein interactions used to gain insights into answering some of the many questions of systems biology. Many computational resources integrate interaction data with additional information on molecular background. However, the vast number of diverse Bioinformatics resources poses an obstacle to the goal of understanding. We present a survey of databases that enable the visual analysis of protein networks. We selected M=10 out of N=53 resources supporting visualization and tested them against the following set of criteria: interoperability, data integration, quantity of possible interactions, data visualization quality and data coverage. The study reveals differences in usability, visualization features and quality as well as the quantity of interactions. StringDB is the recommended first choice. CPDB presents a comprehensive dataset and IntAct lets the user change the network layout. A comprehensive comparison table is available via web. The supplementary table can be accessed at http://tinyurl.com/PPI-DB-Comparison-2015. Only some web resources featuring graph visualization can be successfully applied to interactive visual analysis of protein-protein interactions. The study results underline the necessity for further enhancements of visualization integration in biochemical analysis tools. Identified challenges are data comprehensiveness, confidence, interactive features, and visualization maturity.
Lev, Maria; Gilaie-Dotan, Sharon; Gotthilf-Nezri, Dana; Yehezkel, Oren; Brooks, Joseph L; Perry, Anat; Bentin, Shlomo; Bonneh, Yoram; Polat, Uri
2015-01-01
Long-term deprivation of normal visual inputs can cause perceptual impairments at various levels of visual function, from basic visual acuity deficits, through mid-level deficits such as contour integration and motion coherence, to high-level face and object agnosia. Yet it is unclear whether training during adulthood, at a post-developmental stage of the adult visual system, can overcome such developmental impairments. Here, we visually trained LG, a developmental object and face agnosic individual. Prior to training, at the age of 20, LG's basic and mid-level visual functions such as visual acuity, crowding effects, and contour integration were underdeveloped relative to normal adult vision, corresponding to or poorer than those of 5–6 year olds (Gilaie-Dotan, Perry, Bonneh, Malach & Bentin, 2009). Intensive visual training, based on lateral interactions, was applied for a period of 9 months. LG's directly trained but also untrained visual functions such as visual acuity, crowding, binocular stereopsis and also mid-level contour integration improved significantly and reached near-age-level performance, with long-term (over 4 years) persistence. Moreover, mid-level functions that were tested post-training were found to be normal in LG. Some possible subtle improvement was observed in LG's higher-order visual functions such as object recognition and part integration, while LG's face perception skills have not improved thus far. These results suggest that corrective training at a post-developmental stage, even in the adult visual system, can prove effective, and its enduring effects are the basis for a revival of a developmental cascade that can lead to reduced perceptual impairments. PMID:24698161
Integrated Data Visualization and Virtual Reality Tool
NASA Technical Reports Server (NTRS)
Dryer, David A.
1998-01-01
The Integrated Data Visualization and Virtual Reality Tool (IDVVRT) Phase II effort was for the design and development of an innovative Data Visualization Environment Tool (DVET) for NASA engineers and scientists, enabling them to visualize complex multidimensional and multivariate data in a virtual environment. The objectives of the project were to: (1) demonstrate the transfer and manipulation of standard engineering data in a virtual world; (2) demonstrate the effects of design and changes using finite element analysis tools; and (3) determine the training and engineering design and analysis effectiveness of the visualization system.
Students' use of atomic and molecular models in learning chemistry
NASA Astrophysics Data System (ADS)
O'Connor, Eileen Ann
1997-09-01
The objective of this study was to investigate the development of introductory college chemistry students' use of atomic and molecular models to explain physical and chemical phenomena. The study was conducted during the first semester of the course at a University and College II public institution (Carnegie Commission of Higher Education, 1973). Students' use of models was observed during one-on-one interviews conducted over the course of the semester. The approach to introductory chemistry emphasized models. Students were exposed to over 250 atomic and molecular models during lectures, were assigned text readings that used over a thousand models, and worked interactively with dozens of models on the computer. These models illustrated various features of the spatial organization of valence electrons and nuclei in atoms and molecules. Despite extensive exposure to models in lectures, in the textbook, and in computer-based activities, the students in the study based their explanations largely on a simple Bohr model (electrons arranged in concentric circles around the nuclei), a model that had not been introduced in the course. Students used visual information from their models to construct their explanations, while overlooking inter-atomic and intra-molecular forces, which are not represented explicitly in the models. In addition, students often explained phenomena by adding separate pieces of information about the topic without integrating or logically relating this information into a cohesive explanation. The results of the study demonstrate that despite the extensive use of models in chemistry instruction, students do not necessarily apply them appropriately in explaining chemical and physical phenomena. The results of this study suggest that for the power of models as aids to learning to be more fully realized, chemistry professors must give more attention to the selection, use, integration, and limitations of models in their instruction.
Effect of eye position during human visual-vestibular integration of heading perception.
Crane, Benjamin T
2017-09-01
Visual and inertial stimuli provide heading discrimination cues. Integration of these multisensory stimuli has been demonstrated to depend on their relative reliability. However, the reference frame of visual stimuli is eye centered while inertia is head centered, and it remains unclear how these are reconciled with combined stimuli. Seven human subjects completed a heading discrimination task consisting of a 2-s translation with a peak velocity of 16 cm/s. Eye position was varied between 0° and ±25° left/right. Experiments were done with inertial motion, visual motion, or a combined visual-inertial motion. Visual motion coherence varied between 35% and 100%. Subjects reported whether their perceived heading was left or right of the midline in a forced-choice task. With the inertial stimulus the eye position had an effect such that the point of subjective equality (PSE) shifted 4.6 ± 2.4° in the gaze direction. With the visual stimulus the PSE shift was 10.2 ± 2.2° opposite the gaze direction, consistent with retinotopic coordinates. Thus with eccentric eye positions the perceived inertial and visual headings were offset ~15°. During the visual-inertial conditions the PSE varied consistently with the relative reliability of these stimuli such that at low visual coherence the PSE was similar to that of the inertial stimulus and at high coherence it was closer to the visual stimulus. On average, the inertial stimulus was weighted near Bayesian ideal predictions, but there was significant deviation from ideal in individual subjects. These findings support visual and inertial cue integration occurring in independent coordinate systems. NEW & NOTEWORTHY In multiple cortical areas visual heading is represented in retinotopic coordinates while inertial heading is in body coordinates. It remains unclear whether multisensory integration occurs in a common coordinate system. 
The experiments address this using a multisensory integration task with eccentric gaze positions making the effect of coordinate systems clear. The results indicate that the coordinate systems remain separate to the perceptual level and that during the multisensory task the perception depends on relative stimulus reliability. Copyright © 2017 the American Physiological Society.
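The reliability-weighted integration described above follows the standard Bayesian cue-combination rule, where each cue's weight is proportional to its inverse variance. A minimal sketch with illustrative (not fitted) heading and noise values:

```python
# Minimal sketch of reliability-weighted (Bayesian) heading-cue integration.
# Reliabilities are inverse variances; all numbers here are illustrative
# assumptions, not the study's estimates.
def fused_heading(mu_visual, sigma_visual, mu_inertial, sigma_inertial):
    r_v = 1.0 / sigma_visual ** 2
    r_i = 1.0 / sigma_inertial ** 2
    w_v = r_v / (r_v + r_i)          # visual weight grows as visual noise drops
    return w_v * mu_visual + (1.0 - w_v) * mu_inertial

# Noisy (low-coherence) visual cue: the percept is dominated by the inertial cue.
print(fused_heading(10.0, 8.0, -5.0, 2.0))
# Reliable (high-coherence) visual cue: the percept shifts toward the visual cue.
print(fused_heading(10.0, 1.0, -5.0, 2.0))
```

This mirrors the reported pattern: at low coherence the PSE sits near the inertial estimate, and at high coherence it moves toward the visual estimate, even though the two cues are encoded in different coordinate frames.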
Explaining Algorithms: A New Perspective
ERIC Educational Resources Information Center
Müldner, Tomasz; Shakshuki, Elhadi
2006-01-01
This article presents a novel approach for explaining algorithms that aims to overcome various pedagogical limitations of the current visualization systems. The main idea is that at any given time, a learner is able to focus on a single problem. This problem can be explained, studied, understood, and tested, before the learner moves on to study…
Energy Finance Data Warehouse Manual
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Sangkeun; Chinthavali, Supriya; Shankar, Mallikarjun
The Office of Energy Policy and Systems Analysis's finance team (EPSA-50) requires a suite of automated applications that can extract specific data from a flexible data warehouse (where datasets characterizing energy-related finance, economics and markets are maintained and integrated), perform relevant operations and creatively visualize them to provide a better understanding of what policy options affect various operators/sectors of the electricity system. In addition, the underlying data warehouse should be structured in the most effective and efficient way so that it can become increasingly valuable over time. This report describes the Energy Finance Data Warehouse (EFDW) framework that has been developed to accomplish the requirement defined above. We also specifically dive into the Sankey generator use-case scenario to explain the components of the EFDW framework and their roles. An Excel-based data warehouse was used in the creation of the energy finance Sankey diagram and other detailed finance data visualizations to support energy policy analysis. The framework also captures the methodology, calculations and estimations analysts used, as well as the relevant sources, so newer analysts can build on work done previously.
MRI-Compatible Pneumatic Robot for Transperineal Prostate Needle Placement.
Fischer, Gregory S; Iordachita, Iulian; Csoma, Csaba; Tokuda, Junichi; Dimaio, Simon P; Tempany, Clare M; Hata, Nobuhiko; Fichtinger, Gabor
2008-06-01
Magnetic resonance imaging (MRI) can provide high-quality 3-D visualization of prostate and surrounding tissue, thus granting potential to be a superior medical imaging modality for guiding and monitoring prostatic interventions. However, the benefits cannot be readily harnessed for interventional procedures due to difficulties that surround the use of high-field (1.5T or greater) MRI. The inability to use conventional mechatronics and the confined physical space makes it extremely challenging to access the patient. We have designed a robotic assistant system that overcomes these difficulties and promises safe and reliable intraprostatic needle placement inside closed high-field MRI scanners. MRI compatibility of the robot has been evaluated under 3T MRI using standard prostate imaging sequences and average SNR loss is limited to 5%. Needle alignment accuracy of the robot under servo pneumatic control is better than 0.94 mm rms per axis. The complete system workflow has been evaluated in phantom studies with accurate visualization and targeting of five out of five 1 cm targets. The paper explains the robot mechanism and controller design, the system integration, and presents results of preliminary evaluation of the system.
A model of face selection in viewing video stories
Suda, Yuki; Kitazawa, Shigeru
2015-01-01
When typical adults watch TV programs, they show surprisingly stereotyped gaze behaviours, as indicated by the almost simultaneous shifts of their gazes from one face to another. However, a standard saliency model based on low-level physical features alone failed to explain such typical gaze behaviours. To find rules that explain the typical gaze behaviours, we examined temporo-spatial gaze patterns in adults while they viewed video clips with human characters that were played with or without sound, and in the forward or reverse direction. We here show the following: 1) the "peak" face scanpath, which followed the face that attracted the largest number of views but ignored other objects in the scene, still retained the key features of actual scanpaths; 2) gaze behaviours remained unchanged whether the sound was provided or not; 3) the gaze behaviours were sensitive to time reversal; and 4) nearly 60% of the variance of gaze behaviours was explained by the face saliency, defined as a function of its size, novelty, head movements, and mouth movements. These results suggest that humans share a face-oriented network that integrates several visual features of multiple faces, and directs our eyes to the most salient face at each moment. PMID:25597621
Skill dependent audiovisual integration in the fusiform induces repetition suppression.
McNorgan, Chris; Booth, James R
2015-02-01
Learning to read entails mapping existing phonological representations to novel orthographic representations and is thus an ideal context for investigating experience driven audiovisual integration. Because two dominant brain-based theories of reading development hinge on the sensitivity of the visual-object processing stream to phonological information, we were interested in how reading skill relates to audiovisual integration in this area. Thirty-two children between 8 and 13 years of age spanning a range of reading skill participated in a functional magnetic resonance imaging experiment. Participants completed a rhyme judgment task to word pairs presented unimodally (auditory- or visual-only) and cross-modally (auditory followed by visual). Skill-dependent sub-additive audiovisual modulation was found in left fusiform gyrus, extending into the putative visual word form area, and was correlated with behavioral orthographic priming. These results suggest learning to read promotes facilitatory audiovisual integration in the ventral visual-object processing stream and may optimize this region for orthographic processing. Copyright © 2014 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Janko, Tomáš; Knecht, Petr
2013-01-01
The article focuses on the issue of visuals in geography textbooks and their assessment. The issue is explained from the perspective of educational and cognitive psychological theory and the theory of textbooks. The general purpose of the presented research is to develop a research instrument for categorising the types of visuals in geography…
Audiovisual integration of emotional signals in voice and face: an event-related fMRI study.
Kreifelts, Benjamin; Ethofer, Thomas; Grodd, Wolfgang; Erb, Michael; Wildgruber, Dirk
2007-10-01
In a natural environment, non-verbal emotional communication is multimodal (i.e. speech melody, facial expression) and multifaceted concerning the variety of expressed emotions. Understanding these communicative signals and integrating them into a common percept is paramount to successful social behaviour. While many previous studies have focused on the neurobiology of emotional communication in the auditory or visual modality alone, far less is known about multimodal integration of auditory and visual non-verbal emotional information. The present study investigated this process using event-related fMRI. Behavioural data revealed that audiovisual presentation of non-verbal emotional information resulted in a significant increase in correctly classified stimuli when compared with visual and auditory stimulation. This behavioural gain was paralleled by enhanced activation in bilateral posterior superior temporal gyrus (pSTG) and right thalamus, when contrasting audiovisual to auditory and visual conditions. Further, a characteristic of these brain regions, substantiating their role in the emotional integration process, is a linear relationship between the gain in classification accuracy and the strength of the BOLD response during the bimodal condition. Additionally, enhanced effective connectivity between audiovisual integration areas and associative auditory and visual cortices was observed during audiovisual stimulation, offering further insight into the neural process accomplishing multimodal integration. Finally, we were able to document an enhanced sensitivity of the putative integration sites to stimuli with emotional non-verbal content as compared to neutral stimuli.
Olfactory-visual integration facilitates perception of subthreshold negative emotion.
Novak, Lucas R; Gitelman, Darren R; Schuyler, Brianna; Li, Wen
2015-10-01
A fast growing literature of multisensory emotion integration notwithstanding, the chemical senses, intimately associated with emotion, have been largely overlooked. Moreover, an ecologically highly relevant principle of "inverse effectiveness", rendering maximal integration efficacy with impoverished sensory input, remains to be assessed in emotion integration. Presenting minute, subthreshold negative (vs. neutral) cues in faces and odors, we demonstrated olfactory-visual emotion integration in improved emotion detection (especially among individuals with weaker perception of unimodal negative cues) and response enhancement in the amygdala. Moreover, while perceptual gain for visual negative emotion involved the posterior superior temporal sulcus/pSTS, perceptual gain for olfactory negative emotion engaged both the associative olfactory (orbitofrontal) cortex and amygdala. Dynamic causal modeling (DCM) analysis of fMRI timeseries further revealed connectivity strengthening among these areas during crossmodal emotion integration. That multisensory (but not low-level unisensory) areas exhibited both enhanced response and region-to-region coupling favors a top-down (vs. bottom-up) account for olfactory-visual emotion integration. Current findings thus confirm the involvement of multisensory convergence areas, while highlighting unique characteristics of olfaction-related integration. Furthermore, successful crossmodal binding of subthreshold aversive cues not only supports the principle of "inverse effectiveness" in emotion integration but also accentuates the automatic, unconscious quality of crossmodal emotion synthesis. Copyright © 2015 Elsevier Ltd. All rights reserved.
Comparison of animated jet stream visualizations
NASA Astrophysics Data System (ADS)
Nocke, Thomas; Hoffmann, Peter
2016-04-01
The visualization of 3D atmospheric phenomena in space and time is still a challenging problem. In particular, multiple animated jet stream visualizations have been produced in recent years, designed to visually analyze and communicate the jet and its impacts on weather circulation patterns and extreme weather events. This PICO presentation collects popular and new jet animation solutions and inter-compares them. The applied techniques (e.g. streamlines or line integral convolution) and parametrizations (color mapping, line lengths) are discussed with respect to visualization quality criteria and their suitability for certain visualization tasks (e.g. analysis of jet patterns and jet anomalies, communicating their relevance for climate change).
Verspui, Remko; Gray, John R
2009-10-01
Animals rely on multimodal sensory integration for proper orientation within their environment. For example, odour-guided behaviours often require appropriate integration of concurrent visual cues. To gain a further understanding of mechanisms underlying sensory integration in odour-guided behaviour, our study examined the effects of visual stimuli induced by self-motion and object-motion on odour-guided flight in male M. sexta. By placing stationary objects (pillars) on either side of a female pheromone plume, moths produced self-induced visual motion during odour-guided flight. These flights showed a reduction in both ground and flight speeds and inter-turn interval when compared with flight tracks without stationary objects. Presentation of an approaching 20 cm disc, to simulate object-motion, resulted in interrupted odour-guided flight and changes in flight direction away from the pheromone source. Modifications of odour-guided flight behaviour in the presence of stationary objects suggest that visual information, in conjunction with olfactory cues, can be used to control the rate of counter-turning. We suggest that the behavioural responses to visual stimuli induced by object-motion indicate the presence of a neural circuit that relays visual information to initiate escape responses. These behavioural responses also suggest the presence of a sensory conflict requiring a trade-off between olfactory and visually driven behaviours. The mechanisms underlying olfactory and visual integration are discussed in the context of these behavioural responses.
Behavior Selection of Mobile Robot Based on Integration of Multimodal Information
NASA Astrophysics Data System (ADS)
Chen, Bin; Kaneko, Masahide
Recently, biologically inspired robots have been developed that direct visual attention to salient stimuli in the audiovisual environment. A common way to realize this behavior is to compute saliency maps representing how strongly external information attracts the robot's visual attention, taking into account both audiovisual input and the robot's motion status. In this paper, we present a visual attention model that considers three modalities: audio information, visual information, and the robot's motor status; previous research has not considered all three together. First, we introduce a 2-D density map whose values denote how much attention the robot pays to each spatial location, and we model this attention density with a Bayesian network that incorporates the robot's motion status. Second, information from the audio and visual modalities is integrated with the attention density map in integrate-and-fire neurons; the robot directs its attention to the locations where these neurons fire. Finally, the visual attention model is applied to let the robot select visual information from the environment and react to the selected content. Experimental results show that, by considering motion status, the attention model enables the robot to acquire the visual information related to its behaviors, to select behaviors that adapt to the dynamic environment, and to switch to another task according to the recognition results of visual attention.
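The integrate-and-fire selection stage described in this abstract can be sketched in a few lines. This is a hypothetical simplification, not the authors' implementation: the paper models the attention density with a Bayesian network, whereas here the motion status is reduced to a scalar gain, and the function name `attend` and its parameters are illustrative.

```python
import numpy as np

def attend(audio_sal, visual_sal, motion_gain, thresh=1.0, leak=0.9, steps=50):
    """Leaky integrate-and-fire accumulation over a 2-D attention density map.

    audio_sal, visual_sal: 2-D arrays of per-location salience.
    motion_gain: scalar stand-in for the robot's motion status (assumption).
    Returns (step, location) of the first neuron to fire, or None if none fires.
    """
    v = np.zeros_like(visual_sal)                    # membrane potentials
    drive = motion_gain * (audio_sal + visual_sal)   # combined multimodal input
    for t in range(steps):
        v = leak * v + drive                         # leaky integration
        if v.max() >= thresh:                        # first neuron to fire wins attention
            return t, np.unravel_index(v.argmax(), v.shape)
    return None
```

With a strong audiovisual coincidence at one location, that location's neuron fires first and becomes the attended target; with no salient input, no neuron ever fires.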
Temporal Ventriloquism Reveals Intact Audiovisual Temporal Integration in Amblyopia.
Richards, Michael D; Goltz, Herbert C; Wong, Agnes M F
2018-02-01
We have shown previously that amblyopia involves impaired detection of asynchrony between auditory and visual events. To distinguish whether this impairment represents a defect in temporal integration or nonintegrative multisensory processing (e.g., cross-modal matching), we used the temporal ventriloquism effect in which visual temporal order judgment (TOJ) is normally enhanced by a lagging auditory click. Participants with amblyopia (n = 9) and normally sighted controls (n = 9) performed a visual TOJ task. Pairs of clicks accompanied the two lights such that the first click preceded the first light, or the second click lagged the second light, by 100, 200, or 450 ms. Baseline audiovisual synchrony and visual-only conditions also were tested. Within both groups, just noticeable differences for the visual TOJ task were significantly reduced compared with baseline in the 100- and 200-ms click lag conditions. Within the amblyopia group, poorer stereo acuity and poorer visual acuity in the amblyopic eye were significantly associated with greater enhancement in visual TOJ performance in the 200-ms click lag condition. Audiovisual temporal integration is intact in amblyopia, as indicated by perceptual enhancement in the temporal ventriloquism effect. Furthermore, poorer stereo acuity and poorer visual acuity in the amblyopic eye are associated with a widened temporal binding window for the effect. These findings suggest that previously reported abnormalities in audiovisual multisensory processing may result from impaired cross-modal matching rather than a diminished capacity for temporal audiovisual integration.
Perceptual asymmetry in texture perception.
Williams, D; Julesz, B
1992-07-15
A fundamental property of human visual perception is our ability to distinguish between textures. A concerted effort has been made to account for texture segregation in terms of linear spatial filter models and their nonlinear extensions. However, for certain texture pairs the ease of discrimination changes when the roles of figure and ground are reversed. This asymmetry poses a problem for both linear and nonlinear models. We have isolated a property of texture perception that can account for this asymmetry in discrimination: subjective closure. This property, which is also responsible for visual illusions, appears to be explainable by early visual processes alone. Our results force a reexamination of the process of human texture segregation and of some recent models that were introduced to explain it.
Symbol-string sensitivity and adult performance in lexical decision.
Pammer, Kristen; Lavis, Ruth; Cooper, Charity; Hansen, Peter C; Cornelissen, Piers L
2005-09-01
In this study of adult readers, we used a symbol-string task to assess participants' sensitivity to the position of briefly presented, non-alphabetic but letter-like symbols. We found that sensitivity in this task explained a significant proportion of sample variance in visual lexical decision. Based on a number of controls, we show that this relationship cannot be explained by other factors including: chronological age, intelligence, speed of processing and/or concentration, short term memory consolidation, or fixation stability. This approach represents a new way to elucidate how, and to what extent, individual variation in pre-orthographic visual and cognitive processes impinge on reading skills, and the results suggest that limitations set by visuo-spatial processes constrain visual word recognition.
Employing Omnidirectional Visual Control for Mobile Robotics.
ERIC Educational Resources Information Center
Wright, J. R., Jr.; Jung, S.; Steplight, S.; Wright, J. R., Sr.; Das, A.
2000-01-01
Describes projects using conventional technologies--incorporation of relatively inexpensive visual control with mobile robots using a simple remote control vehicle platform, a camera, a mirror, and a computer. Explains how technology teachers can apply them in the classroom. (JOW)
Basic multisensory functions can be acquired after congenital visual pattern deprivation in humans.
Putzar, Lisa; Gondan, Matthias; Röder, Brigitte
2012-01-01
People treated for bilateral congenital cataracts offer a model to study the influence of visual deprivation in early infancy on visual and multisensory development. We investigated cross-modal integration capabilities in cataract patients using a simple detection task that provided redundant information to two different senses. In both patients and controls, redundancy gains were consistent with coactivation models, indicating integrated processing of modality-specific information. This finding contrasts with recent studies showing impaired higher-level multisensory interactions in cataract patients. The present results suggest that basic cross-modal integrative processes for simple, short stimuli do not depend on visual and/or crossmodal input having been available from birth.
Mines, Quarries and Landscape. Visuality and Transformation
NASA Astrophysics Data System (ADS)
Jimeno, Carlos López; Torrijos, Ignacio Díez; González, Carmen Mataix
2016-06-01
In this paper two basic concepts, scenery and landscape integration, are reviewed, and a new concept is proposed: "visuality", an alternative to the classical "visibility" used in landscape studies of mining activity, which explores the qualitative aspects that define the visual relationships between observer and environment. Regarding landscape integration studies, some reflections are offered on substantive issues that induce certain prejudices when addressing the landscape integration of mining operations, and some guidance and integration strategies are formulated. In the second part of the text, a new approach to the landscape integration of mines and quarries is presented, closely linked to the concept of visuality and based on one central goal: the re-qualification of the place, seizing the opportunity offered by the deep transformation that mining activity generates. As a conclusion, a case study is presented in the last section: the landscape integration study conducted on the marble exploitations of Coto Pinos (Alicante, Spain), considered the largest ornamental rock quarry in Europe.
Humphreys, Glyn W
2016-10-01
The Treisman Bartlett lecture, reported in the Quarterly Journal of Experimental Psychology in 1988, provided a major overview of the feature integration theory of attention. This has continued to be a dominant account of human visual attention to this day. The current paper provides a summary of the work reported in the lecture and an update on critical aspects of the theory as applied to visual object perception. The paper highlights the emergence of findings that pose significant challenges to the theory and which suggest that revisions are required that allow for (a) several rather than a single form of feature integration, (b) some forms of feature integration to operate preattentively, (c) stored knowledge about single objects and interactions between objects to modulate perceptual integration, (d) the application of feature-based inhibition to object files where visual features are specified, which generates feature-based spreading suppression and scene segmentation, and (e) a role for attention in feature confirmation rather than feature integration in visual selection. A feature confirmation account of attention in object perception is outlined.
Explaining the Colavita visual dominance effect.
Spence, Charles
2009-01-01
The last couple of years have seen a resurgence of interest in the Colavita visual dominance effect. In the basic experimental paradigm, a random series of auditory, visual, and audiovisual stimuli are presented to participants who are instructed to make one response whenever they see a visual target and another response whenever they hear an auditory target. Many studies have now shown that participants sometimes fail to respond to auditory targets when they are presented at the same time as visual targets (i.e., on the bimodal trials), despite the fact that they have no problems in responding to the auditory and visual stimuli when they are presented individually. The existence of the Colavita visual dominance effect provides an intriguing contrast with the results of the many other recent studies showing the superiority of multisensory (over unisensory) information processing in humans. Various accounts have been put forward over the years in order to try and explain the effect, including the suggestion that it reflects nothing more than an underlying bias to attend to the visual modality. Here, the empirical literature on the Colavita visual dominance effect is reviewed and some of the key factors modulating the effect highlighted. The available research has now provided evidence against all previous accounts of the Colavita effect. A novel explanation of the Colavita effect is therefore put forward here, one that is based on the latest findings highlighting the asymmetrical effect that auditory and visual stimuli exert on people's responses to stimuli presented in the other modality.
NASA Astrophysics Data System (ADS)
Tabarro, P. G.; Pouliot, J.; Fortier, R.; Losier, L.-M.
2017-10-01
For the planning and sustainable development of large cities, it is critical to accurately locate and map, in 3D, existing underground utility networks (UUN) such as pipelines, cables, ducts, and channels. An emerging non-invasive instrument for collecting underground data such as UUN is the ground-penetrating radar (GPR). Despite its capabilities, handling GPR and extracting relevant information from its data are not trivial tasks. For instance, both GPR and its complementary software stack provide very few capabilities to co-visualize GPR-collected data with other sources of spatial data such as orthophotography, DEMs or road maps. Furthermore, the GPR interface lacks functionality for adding annotations, editing geometric objects or querying attributes. A new approach to support GPR surveys is proposed in this paper, based on the integration of multiple sources of geospatial data and the use of a Web-GIS system with functionalities adapted to interoperable GPR data acquisition. The Web-GIS is developed as an improved module in an existing platform called GVX. The GVX-GPR module provides interactive visualization of multiple layers of structured spatial data, including GPR profiles. Compared to traditional GPR surveys, this module offers new features such as geo-annotated points of interest for identifying spatial clues in the GPR profiles, integration of city contextual data, high-definition drone and satellite imagery, as-built plans, and more. The paper explains the engineering approach used to design and develop the Web-GIS, and tests this survey approach by mapping and recording UUN as part of a 3D city model.
Features and functions of nonlinear spatial integration by retinal ganglion cells.
Gollisch, Tim
2013-11-01
Ganglion cells in the vertebrate retina integrate visual information over their receptive fields. They do so by pooling presynaptic excitatory inputs from typically many bipolar cells, which themselves collect inputs from several photoreceptors. In addition, inhibitory interactions mediated by horizontal cells and amacrine cells modulate the structure of the receptive field. In many models, this spatial integration is assumed to occur in a linear fashion. Yet, it has long been known that spatial integration by retinal ganglion cells also incurs nonlinear phenomena. Moreover, several recent examples have shown that nonlinear spatial integration is tightly connected to specific visual functions performed by different types of retinal ganglion cells. This work discusses these advances in understanding the role of nonlinear spatial integration and reviews recent efforts to quantitatively study the nature and mechanisms underlying spatial nonlinearities. These new insights point towards a critical role of nonlinearities within ganglion cell receptive fields for capturing responses of the cells to natural and behaviorally relevant visual stimuli. In the long run, nonlinear phenomena of spatial integration may also prove important for implementing the actual neural code of retinal neurons when designing visual prostheses for the eye. Copyright © 2012 Elsevier Ltd. All rights reserved.
Howard, Sara J; Perkins, Michael R; Sowden, Hannah
2012-10-01
Very little is known about the use of gesture by children with developmental language disorders (DLDs). This case study of 'Lucy', a child aged 4;10 with a DLD, expands on what is known and in particular focuses on a type of idiosyncratic "rhythmic gesture" (RG) not previously reported. A fine-grained qualitative analysis was carried out of video recordings of Lucy in conversation with the first author. This revealed that Lucy's RG was closely integrated in complex ways with her use of other gesture types, speech rhythm, word juncture, syntax, pragmatics, discourse, visual processing and processing demands generally. Indeed, the only satisfactory way to explain it was as a partial byproduct of such interactions. These findings support the theoretical accounts of gesture which see it as just one component of a multimodal, integrated signalling system (e.g. Goldin-Meadow, S. (2000). Beyond words: The importance of gesture to researchers and learners. Child Development, 71(1), 231-239), and emergentist accounts of communication impairment which regard compensatory adaptation as integral (e.g. Perkins, M. R. (2007). Pragmatic Impairment. Cambridge: Cambridge University Press.).
Audio-visual affective expression recognition
NASA Astrophysics Data System (ADS)
Huang, Thomas S.; Zeng, Zhihong
2007-11-01
Automatic affective expression recognition has attracted increasing attention from researchers across disciplines; it will contribute significantly to a new paradigm for human-computer interaction (affect-sensitive interfaces, socially intelligent environments) and advance research in affect-related fields including psychology, psychiatry, and education. Multimodal information integration is a process that enables humans to assess affective states robustly and flexibly. To understand the richness and subtlety of human emotional behavior, the computer should be able to integrate information from multiple sensors. We introduce in this paper our efforts toward machine understanding of audio-visual affective behavior, based on both deliberate and spontaneous displays. Some promising methods are presented to integrate information from both audio and visual modalities. Our experiments show the advantage of audio-visual fusion in affective expression recognition over audio-only or visual-only approaches.
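One minimal form of the audio-visual fusion this abstract refers to is decision-level (late) fusion of per-class posteriors. A sketch under stated assumptions: the weighted-sum rule, the weight `w_audio`, and the three-class example are illustrative choices, not the authors' actual method.

```python
import numpy as np

def late_fusion(p_audio, p_visual, w_audio=0.5):
    """Weighted-sum late fusion of per-class posteriors from two modalities.

    p_audio, p_visual: per-class probability vectors from unimodal classifiers.
    Returns the fused vector, renormalized to a probability distribution.
    """
    p_audio = np.asarray(p_audio, dtype=float)
    p_visual = np.asarray(p_visual, dtype=float)
    fused = w_audio * p_audio + (1.0 - w_audio) * p_visual
    return fused / fused.sum()

# Hypothetical three-class example (happy, neutral, angry): the visual
# classifier's confidence in "angry" tips the fused decision.
fused = late_fusion([0.5, 0.3, 0.2], [0.2, 0.2, 0.6])
label = int(np.argmax(fused))
```

The design choice here is the simplest one: fusing at the decision level keeps the unimodal classifiers independent, whereas feature-level fusion (concatenating audio and visual features before classification) can capture cross-modal correlations at the cost of a larger joint model.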
Integrated evaluation of visually induced motion sickness in terms of autonomic nervous regulation.
Kiryu, Tohru; Tada, Gen; Toyama, Hiroshi; Iijima, Atsuhiko
2008-01-01
To evaluate visually-induced motion sickness, we integrated subjective and objective responses in terms of autonomic nervous regulation. Twenty-seven subjects viewed a 2-min first-person-view video section five times continuously (10 min in total). The measured biosignals (RR interval, respiration, and blood pressure) were used to estimate indices related to autonomic nervous activity (ANA). We then determined trigger points and sensation sections based on the time-varying behavior of the ANA-related indices. We found a suitable combination of biosignals for representing the symptoms of visually-induced motion sickness. Based on that combination, integrating trigger points and subjective scores allowed us to represent the time distribution of subjective responses during visual exposure, and helped us to understand which types of camera motion cause visually-induced motion sickness.
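The abstract does not specify which RR-interval indices were computed; as one standard example of a time-domain index of autonomic (vagal) activity, RMSSD can be derived from an RR-interval series in a few lines. This is an illustrative index, not necessarily the one the authors used (spectral indices such as the LF/HF ratio are also common).

```python
import numpy as np

def rmssd(rr_ms):
    """RMSSD: root mean square of successive RR-interval differences (ms).

    rr_ms: sequence of consecutive RR intervals in milliseconds.
    Higher values generally indicate stronger parasympathetic modulation.
    """
    d = np.diff(np.asarray(rr_ms, dtype=float))   # successive differences
    return float(np.sqrt(np.mean(d ** 2)))
```

Tracking such an index in sliding windows over the viewing session is one way to obtain the time-varying ANA-related behavior the abstract describes.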
ERIC Educational Resources Information Center
Schlosser, Peter Allen
2010-01-01
This paper reports on an investigation into how Marketing and Business Education Teachers utilize and integrate educational technology into curriculum through the use of photography. The ontology of this visual, technological, and language interface is explored with an eye toward visual literacy, digital literacy, and pedagogical praxis, focusing…
ERIC Educational Resources Information Center
Katz-Buonincontro, Jen; Foster, Aroutis
2013-01-01
Teachers can use mobile applications to integrate the visual arts back into the classroom, but how? This article generates recommendations for selecting and using well-designed mobile applications in the visual arts beyond a "click and view " approach. Using quantitative content analysis, the results show the extent to which a sample of…
The level of audiovisual print-speech integration deficits in dyslexia.
Kronschnabel, Jens; Brem, Silvia; Maurer, Urs; Brandeis, Daniel
2014-09-01
The classical phonological deficit account of dyslexia is increasingly linked to impairments in grapho-phonological conversion, and to dysfunctions in superior temporal regions associated with audiovisual integration. The present study investigates mechanisms of audiovisual integration in typical and impaired readers at the critical developmental stage of adolescence. Congruent and incongruent audiovisual as well as unimodal (visual only and auditory only) material was presented. Audiovisual presentations were single letters and three-letter (consonant-vowel-consonant) stimuli accompanied by matching or mismatching speech sounds. Three-letter stimuli exhibited fast phonetic transitions as in real-life language processing and reading. Congruency effects, i.e. different brain responses to congruent and incongruent stimuli were taken as an indicator of audiovisual integration at a phonetic level (grapho-phonological conversion). Comparisons of unimodal and audiovisual stimuli revealed basic, more sensory aspects of audiovisual integration. By means of these two criteria of audiovisual integration, the generalizability of audiovisual deficits in dyslexia was tested. Moreover, it was expected that the more naturalistic three-letter stimuli are superior to single letters in revealing group differences. Electrophysiological and hemodynamic (EEG and fMRI) data were acquired simultaneously in a simple target detection task. Applying the same statistical models to event-related EEG potentials and fMRI responses allowed comparing the effects detected by the two techniques at a descriptive level. Group differences in congruency effects (congruent against incongruent) were observed in regions involved in grapho-phonological processing, including the left inferior frontal and angular gyri and the inferotemporal cortex. Importantly, such differences also emerged in superior temporal key regions. Three-letter stimuli revealed stronger group differences than single letters. 
No significant differences in basic measures of audiovisual integration emerged. Convergence of hemodynamic and electrophysiological signals appeared to be limited and mainly occurred for highly significant and large effects in visual cortices. The findings suggest efficient superior temporal tuning to audiovisual congruency in controls. In impaired readers, however, grapho-phonological conversion is effortful and inefficient, although basic audiovisual mechanisms seem intact. This unprecedented demonstration of audiovisual deficits in adolescent dyslexics provides critical evidence that the phonological deficit might be explained by impaired audiovisual integration at a phonetic level, especially for naturalistic and word-like stimulation. Copyright © 2014 Elsevier Ltd. All rights reserved.
Rius, Anna; Artazcoz, Lucía; Guisasola, Laura; Benach, Joan
2014-01-01
The objectives of this study were to examine for the first time the prevalence of visual impairment and blindness among adults in Spain, to explore regional differences, and to assess whether they may vary as a function of sex or be explained by age and individual or regional socioeconomic position. Data were obtained from the 2008 Spanish Survey on Disability, Personal Autonomy, and Dependency Situations, a cross-sectional survey based on a representative sample of the noninstitutionalized population of Spain. The sample was composed of 213 626 participants aged ≥15 years (103 093 men and 110 533 women); 360 were blind (160 men and 200 women), 4048 had near visual impairment (1397 men and 2651 women), and 4034 had distance visual impairment (1445 men and 2589 women). The prevalence of near and distance visual impairment was calculated for each region. Multiple logistic regression models were fitted to calculate odds ratios and 95% confidence intervals. All analyses were stratified by sex. Visual impairment was assessed with 3 questions aimed at identifying blindness and near and distance visual impairment. The prevalence (percentage) of blindness was 0.17 (men, 0.16; women, 0.18); it was 1.89 for near visual impairment (men, 1.36; women, 2.40), 1.89 for distance visual impairment (men, 1.40; women, 2.34), and 2.43 for any visual impairment (men, 1.81; women, 3.02). Regional inequalities in the prevalence of visual impairment were observed, correlated with regional income, and the prevalence was consistently higher among women than men. The magnitude of the inequalities remained after adjusting for age and educational level, and a north-to-south pattern of increasing prevalence was observed. Regional and sex inequalities in the prevalence of visual impairment and blindness were observed in Spain, with a north-to-south gradient of increasing prevalence that was not explained by age or individual educational level but was correlated with regional level of economic development.
Factors that could be prioritized for future policies and research include differential regional economic development, rural environment, quality of eye care services, diabetes, ultraviolet light exposure, or gender inequalities in diagnostic and therapeutic health care. Copyright © 2014 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
Numerosity estimation benefits from transsaccadic information integration
Hübner, Carolin; Schütz, Alexander C.
2017-01-01
Humans achieve a stable and homogeneous representation of their visual environment, although visual processing varies across the visual field. Here we investigated the circumstances under which peripheral and foveal information is integrated for numerosity estimation across saccades. We asked our participants to judge the number of black and white dots on a screen. Information was presented either in the periphery before a saccade, in the fovea after a saccade, or in both areas consecutively to measure transsaccadic integration. In contrast to previous findings, we found an underestimation of numerosity for foveal presentation and an overestimation for peripheral presentation. We used a maximum-likelihood model to predict accuracy and reliability in the transsaccadic condition based on peripheral and foveal values. We found near-optimal integration of peripheral and foveal information, consistent with previous findings on orientation integration. In three consecutive experiments, we disrupted object continuity between the peripheral and foveal presentations to probe the limits of transsaccadic integration. Even for global changes to our numerosity stimuli, no influence of object discontinuity was observed. Overall, our results suggest that transsaccadic integration is a robust mechanism that also works for complex visual features such as numerosity and operates despite internal or external mismatches between foveal and peripheral information. Transsaccadic integration facilitates an accurate and reliable perception of our environment. PMID:29149766
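The maximum-likelihood model referred to above has a standard closed form: each cue is weighted by its inverse variance (its reliability), and the combined estimate is never less reliable than either single cue. A minimal sketch, assuming Gaussian noise and known single-cue variances (the function name and the example values are illustrative, not the paper's data):

```python
def ml_integrate(x_p, var_p, x_f, var_f):
    """Reliability-weighted (maximum-likelihood) combination of two estimates.

    x_p, var_p: peripheral estimate and its variance.
    x_f, var_f: foveal estimate and its variance.
    Returns the combined estimate and its (reduced) variance.
    """
    w_p = (1.0 / var_p) / (1.0 / var_p + 1.0 / var_f)  # inverse-variance weight
    x_hat = w_p * x_p + (1.0 - w_p) * x_f              # weighted combination
    var_hat = (var_p * var_f) / (var_p + var_f)        # combined variance
    return x_hat, var_hat
```

For example, if the peripheral estimate overestimates numerosity and the foveal estimate underestimates it, the combined estimate falls between the two, closer to the more reliable (lower-variance) cue.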
Jane Kapler Smith; Donald E. Zimmerman; Carol Akerelrea; Garrett O' Keefe
2008-01-01
Natural resource managers use a variety of computer-mediated presentation methods to communicate management practices to the public. We explored the effects of using the Stand Visualization System to visualize and animate predictions from the Forest Vegetation Simulator-Fire and Fuels Extension in presentations explaining forest succession (forest growth and change...
Associative visual agnosia: a case study.
Charnallet, A; Carbonnel, S; David, D; Moreaud, O
2008-01-01
We report a case of massive associative visual agnosia. In the light of current theories of identification and semantic knowledge organization, a deficit involving both the structural description system and visual semantics must be assumed to explain the case. We suggest, in line with a previous case study, an alternative account in the framework of (non-abstractive) episodic models of memory.
ERIC Educational Resources Information Center
Falkmer, Marita; Larsson, Matilda; Bjallmark, Anna; Falkmer, Torbjorn
2010-01-01
Partly claimed to explain social difficulties observed in people with Asperger syndrome, face identification and visual search strategies become important. Previous research findings are, however, disparate. In order to explore face identification abilities and visual search strategies, with special focus on the importance of the eye area, 24…
Magosso, Elisa; Bertini, Caterina; Cuppini, Cristiano; Ursino, Mauro
2016-10-01
Hemianopic patients retain some abilities to integrate audiovisual stimuli in the blind hemifield, showing both modulation of visual perception by auditory stimuli and modulation of auditory perception by visual stimuli. Indeed, conscious detection of a visual target in the blind hemifield can be improved by a spatially coincident auditory stimulus (auditory enhancement of visual detection), while a visual stimulus in the blind hemifield can improve localization of a spatially coincident auditory stimulus (visual enhancement of auditory localization). To gain more insight into the neural mechanisms underlying these two perceptual phenomena, we propose a neural network model including areas of neurons representing the retina, primary visual cortex (V1), extrastriate visual cortex, auditory cortex and the Superior Colliculus (SC). The visual and auditory modalities in the network interact via both direct cortical-cortical connections and subcortical-cortical connections involving the SC; the latter, in particular, integrates visual and auditory information and projects back to the cortices. Hemianopic patients were simulated by unilaterally lesioning V1, and preserving spared islands of V1 tissue within the lesion, to analyze the role of residual V1 neurons in mediating audiovisual integration. The network is able to reproduce the audiovisual phenomena in hemianopic patients, linking perceptions to neural activations, and disentangles the individual contribution of specific neural circuits and areas via sensitivity analyses. The study suggests i) a common key role of SC-cortical connections in mediating the two audiovisual phenomena; ii) a different role of visual cortices in the two phenomena: auditory enhancement of conscious visual detection being conditional on surviving V1 islands, while visual enhancement of auditory localization persisting even after complete V1 damage. 
The present study may contribute to advance understanding of the audiovisual dialogue between cortical and subcortical structures in healthy and unisensory deficit conditions. Copyright © 2016 Elsevier Ltd. All rights reserved.
Global Image Dissimilarity in Macaque Inferotemporal Cortex Predicts Human Visual Search Efficiency
Sripati, Arun P.; Olson, Carl R.
2010-01-01
Finding a target in a visual scene can be easy or difficult depending on the nature of the distractors. Research in humans has suggested that search is more difficult the more similar the target and distractors are to each other. However, it has not yielded an objective definition of similarity. We hypothesized that visual search performance depends on similarity as determined by the degree to which two images elicit overlapping patterns of neuronal activity in visual cortex. To test this idea, we recorded from neurons in monkey inferotemporal cortex (IT) and assessed visual search performance in humans using pairs of images formed from the same local features in different global arrangements. The ability of IT neurons to discriminate between two images was strongly predictive of the ability of humans to discriminate between them during visual search, accounting overall for 90% of the variance in human performance. A simple physical measure of global similarity – the degree of overlap between the coarse footprints of a pair of images – largely explains both the neuronal and the behavioral results. To explain the relation between population activity and search behavior, we propose a model in which the efficiency of global oddball search depends on contrast-enhancing lateral interactions in high-order visual cortex. PMID:20107054
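The quantitative claim here, that search efficiency tracks the separation between population response patterns, can be sketched as a pattern-dissimilarity computation. The response vectors below are hypothetical placeholders, not the recorded IT data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical firing rates of a 100-neuron IT population for two images
# built from the same local features in different global arrangements.
resp_a = rng.poisson(lam=20, size=100).astype(float)
resp_b = resp_a + rng.normal(0, 2, size=100)  # a globally similar image

def dissimilarity(r1, r2):
    # 1 - Pearson correlation of the two population response patterns.
    return 1.0 - np.corrcoef(r1, r2)[0, 1]

d = dissimilarity(resp_a, resp_b)
# Low dissimilarity (overlapping patterns) predicts slow, inefficient
# search for that image pair; high dissimilarity predicts easy search.
```

The linear mapping from neural dissimilarity to human search rate is the paper's fitted result; the sketch only shows the pattern-overlap measure itself.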
An introduction to Space Weather Integrated Modeling
NASA Astrophysics Data System (ADS)
Zhong, D.; Feng, X.
2012-12-01
The need for a software toolkit that integrates space weather models and data is one of many challenges we face when applying the models to space weather forecasting. To meet this challenge, we have developed Space Weather Integrated Modeling (SWIM), which is capable of analysis and visualization of the results from a diverse set of space weather models. SWIM has a modular design and is written in Python, using NumPy, matplotlib, and the Visualization ToolKit (VTK). SWIM provides a data management module to read a variety of spacecraft data products and the specific data format of the Solar-Interplanetary Conservation Element/Solution Element MHD model (SIP-CESE MHD model) for the study of solar-terrestrial phenomena. Data analysis, visualization, and graphical user interface modules are also provided in a user-friendly way to run the integrated models and visualize the 2-D and 3-D data sets interactively. With these tools we can locally or remotely analyze the model results rapidly, including extraction of data at specific locations in time-sequence data sets, plotting interplanetary magnetic field lines, multi-slicing of solar wind speed, volume rendering of solar wind density, animation of time-sequence data sets, and comparison between model results and observational data. To speed up the analysis, an in-situ visualization interface is used to support visualizing the data 'on-the-fly'. We also accelerated some critical time-consuming analysis and visualization methods with the aid of GPUs and multi-core CPUs. We have used this tool to visualize the data of the SIP-CESE MHD model in real time, and integrated the Database Model of shock arrival, the Shock Propagation Model, the Dst forecasting model, and the SIP-CESE MHD model developed by the SIGMA Weather Group at the State Key Laboratory of Space Weather/CAS.
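One analysis operation the abstract describes, extracting values at a specific location from a time sequence of 3-D model output, reduces to simple array indexing. A NumPy sketch with synthetic data standing in for SIP-CESE MHD output (SWIM's actual API is not shown in the abstract):

```python
import numpy as np

# Hypothetical time sequence of 3-D solar-wind speed volumes,
# shaped (time, x, y, z), in km/s.
speeds = np.random.default_rng(1).uniform(300, 800, size=(10, 8, 8, 8))

def extract_time_series(data, ix, iy, iz):
    # Probe one grid location across all time steps: the kind of
    # point extraction SWIM applies to time-sequence data sets.
    return data[:, ix, iy, iz]

series = extract_time_series(speeds, 4, 4, 4)
# `series` now holds 10 speed values, one per time step, ready to plot
# with matplotlib or compare against spacecraft observations.
```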
Agarwal, Rahul; Thakor, Nitish V; Sarma, Sridevi V; Massaquoi, Steve G
2015-06-24
The premotor cortex (PM) is known to be a site of visuo-somatosensory integration for the production of movement. We sought to better understand the ventral PM (PMv) by modeling its signal encoding in greater detail. Neuronal firing data were obtained from 110 PMv neurons in two male rhesus macaques executing four reach-grasp-manipulate tasks. We found that in the large majority of neurons (∼90%) the firing patterns across the four tasks could be explained by assuming that a high-dimensional position/configuration trajectory-like signal, evolving ∼250 ms before movement, was encoded within a multidimensional Gaussian field (MGF). Our findings are consistent with the possibility that PMv neurons process a visually specified reference command for the intended arm/hand position trajectory with respect to a proprioceptively or visually sensed initial configuration. The estimated MGFs were (hyper) disc-like, such that each neuron's firing modulated strongly only with commands that evolved along a single direction within position/configuration space. Thus, many neurons appeared to be tuned to slices of this input signal space that, as a collection, appeared to cover the space well. The MGF encoding models appear to be consistent with the arm-referent, bell-shaped visual target tuning curves and target selectivity patterns observed in PMv visual-motor neurons. These findings suggest that PMv may implement a lookup-table-like mechanism that helps translate intended movement trajectory into time-varying patterns of activation in motor cortex and spinal cord. MGFs provide an improved nonlinear framework for potentially decoding visually specified, intended multijoint arm/hand trajectories well in advance of movement. Copyright © 2015 the authors 0270-6474/15/359508-18$15.00/0.
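The MGF encoding idea can be sketched as a Gaussian tuning function over a position/configuration command, with a "disc-like" inverse covariance that makes firing modulate strongly along only one direction of the space. Dimensions and parameter values below are illustrative, not the fitted fields:

```python
import numpy as np

def mgf_rate(x, mu, sigma_inv, peak=50.0):
    # Multidimensional Gaussian field: firing rate falls off with the
    # Mahalanobis distance of the command x from the preferred command mu.
    d = x - mu
    return peak * np.exp(-0.5 * d @ sigma_inv @ d)

# Disc-like tuning: large inverse-variance (sharp sensitivity) along the
# first axis, nearly flat along the others (hypothetical values).
sigma_inv = np.diag([4.0, 0.01, 0.01])
mu = np.zeros(3)

along = mgf_rate(np.array([1.0, 0.0, 0.0]), mu, sigma_inv)
across = mgf_rate(np.array([0.0, 1.0, 0.0]), mu, sigma_inv)
# Moving the command along the tuned axis changes the rate strongly
# (rate drops sharply), while an orthogonal step barely changes it.
```

A population of such units with preferred directions tiling the command space gives the lookup-table-like coverage the abstract describes.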
Severtson, Dolores J; Vatovec, Christine
2012-08-01
Theory-based research is needed to understand how maps of environmental health risk information influence risk beliefs and protective behavior. Using theoretical concepts from multiple fields of study including visual cognition, semiotics, health behavior, and learning and memory supports a comprehensive assessment of this influence. The authors report results from 13 cognitive interviews that provide theory-based insights into how visual features influenced what participants saw and the meaning of what they saw as they viewed 3 formats of water test results for private wells (choropleth map, dot map, and a table). The unit of perception, color, proximity to hazards, geographic distribution, and visual salience had substantial influences on what participants saw and their resulting risk beliefs. These influences are explained by theoretical factors that shape what is seen, properties of features that shape cognition (preattentive, symbolic, visual salience), information processing (top-down and bottom-up), and the strength of concrete compared with abstract information. Personal relevance guided top-down attention to proximal and larger hazards that shaped stronger risk beliefs. Meaning was more local for small perceptual units and global for large units. Three aspects of color were important: preattentive "incremental risk" meaning of sequential shading, symbolic safety meaning of stoplight colors, and visual salience that drew attention. The lack of imagery, geographic information, and color diminished interest in table information. Numeracy and prior beliefs influenced comprehension for some participants. Results guided the creation of an integrated conceptual framework for application to future studies. Ethics should guide the selection of map features that support appropriate communication goals.
A Visualization Tool for Integrating Research Results at an Underground Mine
NASA Astrophysics Data System (ADS)
Boltz, S.; Macdonald, B. D.; Orr, T.; Johnson, W.; Benton, D. J.
2016-12-01
Researchers with the National Institute for Occupational Safety and Health are conducting research at a deep underground metal mine in Idaho to develop improvements in ground control technologies that reduce the effects of dynamic loading on mine workings, thereby decreasing the risk to miners. This research is multifaceted and includes photogrammetry, microseismic monitoring, geotechnical instrumentation, and numerical modeling. When managing research involving such a wide range of data, understanding how the data relate to each other and to the mining activity quickly becomes a daunting task. In an effort to combine this diverse research data into a single, easy-to-use system, a three-dimensional visualization tool was developed. The tool was created using the Unity3d video gaming engine and includes the mine development entries, production stopes, important geologic structures, and user-input research data. The tool provides users with a first-person, interactive experience in which they can walk through the mine as well as navigate the rock mass surrounding the mine to view and interpret the imported data in the context of the mine and as a function of time. The tool was developed using data from a single mine; however, it is intended to be a generic tool that can easily be extended to other mines. For example, a similar visualization tool is being developed for an underground coal mine in Colorado. The ultimate goal is for NIOSH researchers and mine personnel to be able to use the visualization tool to identify trends that may not otherwise be apparent when the data are viewed separately. This presentation highlights the features and capabilities of the mine visualization tool and explains how it may be used to more effectively interpret data and reduce the risk of ground fall hazards to underground miners.
NASA Astrophysics Data System (ADS)
Kuvychko, Igor
2001-10-01
Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolving ambiguity and uncertainty via feedback, and provide image understanding, that is, an interpretation of visual information in terms of such knowledge models. A computer vision system based on such principles requires a unifying representation of perceptual and conceptual information. Computer simulation models are built on the basis of graphs/networks, and the human brain appears able to emulate similar graph/network models. This implies an important paradigm shift in our understanding of the brain, from neural networks to "cortical software." Starting from the primary visual areas, the brain analyzes an image as a graph-type spatial structure. Primary areas provide active fusion of image features on a spatial grid-like structure whose nodes are cortical columns. The spatial combination of different neighboring features cannot be described as a statistical/integral characteristic of the analyzed region, but uniquely characterizes that region itself. Spatial logic and topology are naturally present in such structures. Mid-level vision processes such as clustering, perceptual grouping, multilevel hierarchical compression, and separation of figure from ground are special kinds of graph/network transformations. They convert the low-level image structure into a set of more abstract structures that represent objects and the visual scene, making them easier to analyze with higher-level knowledge structures. Higher-level vision phenomena such as shape from shading and occlusion are results of this analysis. This approach offers an opportunity not only to explain otherwise hard-to-explain results in cognitive science, but also to create intelligent computer vision systems that simulate perceptual processes in both the "what" and "where" visual pathways. Such systems can open new horizons for the robotics and computer vision industries.
Behavioral, Modeling, and Electrophysiological Evidence for Supramodality in Human Metacognition.
Faivre, Nathan; Filevich, Elisa; Solovey, Guillermo; Kühn, Simone; Blanke, Olaf
2018-01-10
Human metacognition, or the capacity to introspect on one's own mental states, has been mostly characterized through confidence reports in visual tasks. A pressing question is to what extent results from visual studies generalize to other domains. Answering this question allows determining whether metacognition operates through shared, supramodal mechanisms or through idiosyncratic, modality-specific mechanisms. Here, we report three new lines of evidence for decisional and postdecisional mechanisms arguing for the supramodality of metacognition. First, metacognitive efficiency correlated among auditory, tactile, visual, and audiovisual tasks. Second, confidence in an audiovisual task was best modeled using supramodal formats based on integrated representations of auditory and visual signals. Third, confidence in correct responses involved similar electrophysiological markers for visual and audiovisual tasks that are associated with motor preparation preceding the perceptual judgment. We conclude that the supramodality of metacognition relies on supramodal confidence estimates and decisional signals that are shared across sensory modalities. SIGNIFICANCE STATEMENT Metacognitive monitoring is the capacity to access, report, and regulate one's own mental states. In perception, this allows rating our confidence in what we have seen, heard, or touched. Although metacognitive monitoring can operate on different cognitive domains, it is unknown whether it involves a single supramodal mechanism common to multiple cognitive domains or modality-specific mechanisms idiosyncratic to each domain. Here, we bring evidence in favor of the supramodality hypothesis by showing that participants with high metacognitive performance in one modality are likely to perform well in other modalities. Based on computational modeling and electrophysiology, we propose that supramodality can be explained by the existence of supramodal confidence estimates and by the influence of decisional cues on confidence estimates. Copyright © 2018 the authors 0270-6474/18/380263-15$15.00/0.
ERIC Educational Resources Information Center
Beekhoven, S.; De Jong, U.; Van Hout, H.
2002-01-01
Compared elements of rational choice theory and integration theory on the basis of their power to explain variance in academic progress. Asserts that the concepts should be combined, and the distinction between social and academic integration abandoned. Empirical analysis showed that an extended model, comprising both integration and rational…
Integration of today's digital state with tomorrow's visual environment
NASA Astrophysics Data System (ADS)
Fritsche, Dennis R.; Liu, Victor; Markandey, Vishal; Heimbuch, Scott
1996-03-01
New developments in visual communication technologies, and the increasingly digital nature of the industry infrastructure as a whole, are converging to enable new visual environments with an enhanced visual component in interaction, entertainment, and education. New applications and markets can be created, but this depends on the ability of the visual communications industry to provide market solutions that are cost effective and user friendly. Industry-wide cooperation in the development of integrated, open-architecture applications enables the realization of such market solutions. This paper describes the work being done by Texas Instruments in the development of its Digital Light Processing™ technology to support the development of new visual communications technologies and applications.
Protein-Protein Interaction Network and Gene Ontology
NASA Astrophysics Data System (ADS)
Choi, Yunkyu; Kim, Seok; Yi, Gwan-Su; Park, Jinah
The evolution of computer technologies makes it possible to access large amounts and various kinds of biological data via the internet, such as DNA sequences, proteomics data, and the information discovered about them. It is expected that combining these various data could help researchers uncover further knowledge. The role of a visualization system is to invoke human abilities to integrate information and to recognize patterns in the data. Thus, when various kinds of data are examined and analyzed manually, an effective visualization system is essential. One instance of such integrated visualization is the combination of protein-protein interaction (PPI) data and Gene Ontology (GO), which can enhance the analysis of PPI networks. We introduce a simple but comprehensive visualization system that integrates GO and PPI data, in which GO and PPI graphs are visualized side by side with quick cross-reference functions between them. Furthermore, the proposed system provides several interactive visualization methods for efficiently analyzing the PPI network and the GO directed acyclic graph, such as context-based browsing and common-ancestor finding.
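The common-ancestor operation on the GO directed acyclic graph amounts to intersecting ancestor sets reached by walking parent links. A minimal sketch over a made-up five-term fragment of the ontology (the term IDs are placeholders, not real GO accessions):

```python
from collections import deque

# Hypothetical GO DAG fragment: each term maps to its parent terms.
go_parents = {
    "GO:C": ["GO:A"],
    "GO:D": ["GO:A", "GO:B"],
    "GO:A": ["GO:root"],
    "GO:B": ["GO:root"],
    "GO:root": [],
}

def ancestors(term):
    # Breadth-first walk up the DAG, collecting every reachable parent.
    seen, queue = set(), deque([term])
    while queue:
        t = queue.popleft()
        for p in go_parents[t]:
            if p not in seen:
                seen.add(p)
                queue.append(p)
    return seen

def common_ancestors(t1, t2):
    # The "common ancestors finding" operation the system supports.
    return ancestors(t1) & ancestors(t2)

shared = common_ancestors("GO:C", "GO:D")  # {"GO:A", "GO:root"}
```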
Fetsch, Christopher R; Deangelis, Gregory C; Angelaki, Dora E
2010-05-01
The perception of self-motion is crucial for navigation, spatial orientation and motor control. In particular, estimation of one's direction of translation, or heading, relies heavily on multisensory integration in most natural situations. Visual and nonvisual (e.g., vestibular) information can be used to judge heading, but each modality alone is often insufficient for accurate performance. It is not surprising, then, that visual and vestibular signals converge frequently in the nervous system, and that these signals interact in powerful ways at the level of behavior and perception. Early behavioral studies of visual-vestibular interactions consisted mainly of descriptive accounts of perceptual illusions and qualitative estimation tasks, often with conflicting results. In contrast, cue integration research in other modalities has benefited from the application of rigorous psychophysical techniques, guided by normative models that rest on the foundation of ideal-observer analysis and Bayesian decision theory. Here we review recent experiments that have attempted to harness these so-called optimal cue integration models for the study of self-motion perception. Some of these studies used nonhuman primate subjects, enabling direct comparisons between behavioral performance and simultaneously recorded neuronal activity. The results indicate that humans and monkeys can integrate visual and vestibular heading cues in a manner consistent with optimal integration theory, and that single neurons in the dorsal medial superior temporal area show striking correlates of the behavioral effects. This line of research and other applications of normative cue combination models should continue to shed light on mechanisms of self-motion perception and the neuronal basis of multisensory integration.
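The optimal cue-integration model referenced here has a closed form for independent Gaussian cues: each cue is weighted by its relative reliability (inverse variance), and the combined estimate is more reliable than either cue alone. A sketch with hypothetical heading estimates (degrees) and noise levels:

```python
def mle_combine(x_vis, sigma_vis, x_vest, sigma_vest):
    # Reliability (inverse-variance) weighting, as in optimal
    # cue-integration theory for visual and vestibular heading cues.
    w_vis = sigma_vest**2 / (sigma_vis**2 + sigma_vest**2)
    w_vest = 1.0 - w_vis
    combined = w_vis * x_vis + w_vest * x_vest
    # Combined variance is below that of the more reliable single cue.
    var = (sigma_vis**2 * sigma_vest**2) / (sigma_vis**2 + sigma_vest**2)
    return combined, var

# Visual cue says 10 deg (sigma 2), vestibular says 14 deg (sigma 4):
heading, var = mle_combine(x_vis=10.0, sigma_vis=2.0,
                           x_vest=14.0, sigma_vest=4.0)
# heading ≈ 10.8 (pulled toward the more reliable visual cue);
# var ≈ 3.2, less than the visual cue's variance of 4.
```

The behavioral test of optimality in these studies is exactly this prediction: the bimodal threshold should fall below both unimodal thresholds by the amount the variance formula specifies.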
Cini, Alessandro; Ortolani, Irene; Zechini, Luigi; Cervo, Rita
2015-02-01
Insect social parasites have to conquer a host colony by overcoming its defensive barriers. In addition to increased fighting abilities, many social parasites have evolved sophisticated sensory deception mechanisms to elude host colony defenses by exploiting host communication channels. Recently, it has been shown that the conspicuous facial markings of a paper wasp social parasite, Polistes sulcifer, decrease the aggressiveness of host foundresses. Two main hypotheses stand as explanations of this phenomenon: visual sensory deception (i.e., the black patterning reduces host aggression by exploiting the host visual communication system) and visual quality assessment (i.e., facial markings reduce aggressiveness because they signal the increased fighting ability of parasites). Through behavioral assays and morphological measurements, we tested three predictions derived from these hypotheses and found support neither for visual sensory deception nor for quality assessment as an explanation of the reduction in host aggressiveness towards the parasite. Our results suggest that other discrimination processes may explain the observed phenomenon. Copyright © 2014 Elsevier B.V. All rights reserved.
Zaidel, Adam; Goin-Kochel, Robin P.; Angelaki, Dora E.
2015-01-01
Perceptual processing in autism spectrum disorder (ASD) is marked by superior low-level task performance and inferior complex-task performance. This observation has led to theories of defective integration in ASD of local parts into a global percept. Despite mixed experimental results, this notion maintains widespread influence and has also motivated recent theories of defective multisensory integration in ASD. Impaired ASD performance in tasks involving classic random dot visual motion stimuli, corrupted by noise as a means to manipulate task difficulty, is frequently interpreted to support this notion of global integration deficits. By manipulating task difficulty independently of visual stimulus noise, here we test the hypothesis that heightened sensitivity to noise, rather than integration deficits, may characterize ASD. We found that although perception of visual motion through a cloud of dots was unimpaired without noise, the addition of stimulus noise significantly affected adolescents with ASD, more than controls. Strikingly, individuals with ASD demonstrated intact multisensory (visual–vestibular) integration, even in the presence of noise. Additionally, when vestibular motion was paired with pure visual noise, individuals with ASD demonstrated a different strategy than controls, marked by reduced flexibility. This result could be simulated by using attenuated (less reliable) and inflexible (not experience-dependent) Bayesian priors in ASD. These findings question widespread theories of impaired global and multisensory integration in ASD. Rather, they implicate increased sensitivity to sensory noise and less use of prior knowledge in ASD, suggesting increased reliance on incoming sensory information. PMID:25941373
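The simulation idea reported here (attenuated, inflexible priors in ASD) can be illustrated with the standard Gaussian posterior-mean formula: weakening the prior shifts the estimate toward the incoming measurement, i.e., greater reliance on sensory input. The numbers are illustrative, not the study's fitted parameters:

```python
def posterior_mean(x, sigma_lik, mu_prior, sigma_prior):
    # Bayesian estimate: precision-weighted blend of the noisy sensory
    # measurement x and the prior expectation mu_prior.
    p_lik = 1.0 / sigma_lik**2
    p_prior = 1.0 / sigma_prior**2
    return (p_lik * x + p_prior * mu_prior) / (p_lik + p_prior)

# Same noisy measurement (x = 8), two prior strengths:
typical = posterior_mean(x=8.0, sigma_lik=2.0,
                         mu_prior=0.0, sigma_prior=2.0)
attenuated = posterior_mean(x=8.0, sigma_lik=2.0,
                            mu_prior=0.0, sigma_prior=6.0)
# The attenuated (less reliable) prior pulls the estimate less,
# so it tracks the incoming sensory data more closely.
```

An "inflexible" prior, in the abstract's sense, would additionally keep `mu_prior` and `sigma_prior` fixed rather than updating them with experience.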
[Three-dimensional morphological modeling and visualization of wheat root system].
Tan, Feng; Tang, Liang; Hu, Jun-Cheng; Jiang, Hai-Yan; Cao, Wei-Xing; Zhu, Yan
2011-01-01
Crop three-dimensional (3D) morphological modeling and visualization is an important part of digital plant study. This paper aimed to develop a 3D morphological model of the wheat root system based on the parameters of wheat root morphological features, and to realize the visualization of wheat root growth. Following the framework of visualization technology for wheat root growth, a 3D visualization model of the wheat root axis, including a root axis growth model, a branch geometric model, and a root axis curve model, was developed first. Then, by integrating root topology, the corresponding pixels were determined, and the whole wheat root system was three-dimensionally reconstructed using the morphological feature parameters from the root morphological model. Finally, based on the OpenGL platform, and by integrating texture mapping, lighting rendering, and collision detection, the 3D visualization of wheat root growth was realized. The 3D output of the wheat root system from the model was vivid and could realize 3D root system visualization of different wheat cultivars under different water regimes and nitrogen application rates. This study lays a technical foundation for further development of an integral visualization system for the wheat plant.
Integrated Visualization of Multi-sensor Ocean Data across the Web
NASA Astrophysics Data System (ADS)
Platt, F.; Thompson, C. K.; Roberts, J. T.; Tsontos, V. M.; Hin Lam, C.; Arms, S. C.; Quach, N.
2017-12-01
Whether for research or operational decision support, oceanographic applications rely on the visualization of multivariate in situ and remote sensing data as an integral part of analysis workflows. However, given their inherently 3D-spatial and temporally dynamic nature, the visual representation of marine in situ data in particular poses a challenge. The Oceanographic In situ data Interoperability Project (OIIP) is a collaborative project funded under the NASA/ACCESS program that seeks to leverage and enhance higher TRL (technology readiness level) informatics technologies to address key data interoperability and integration issues associated with in situ ocean data, including the dearth of effective web-based visualization solutions. Existing web tools for the visualization of key in situ data types - point, profile, trajectory series - are limited in their support for integrated, dynamic, and coordinated views of the spatiotemporal characteristics of the data. Via the extension of the JPL Common Mapping Client (CMC) software framework, OIIP seeks to provide improved visualization support for oceanographic in situ data sets. More specifically, this entails improved representation of both the horizontal and vertical aspects of these data, which are inherently depth-resolved and time-referenced, as well as visual synchronization with relevant remotely sensed gridded data products, such as sea surface temperature and salinity. Electronic tagging datasets, which are a focal use case for OIIP, provide a representative, if somewhat complex, visualization challenge in this regard. Critical to the achievement of these development objectives has been the compilation of a well-rounded set of visualization use cases and requirements, based on a series of end-user consultations aimed at understanding users' satellite-in situ visualization needs. Here we summarize progress on aspects of the technical work and our approach.
Schindler, Andreas; Bartels, Andreas
2018-05-15
Our phenomenological experience of a stable world is maintained by continuous integration of visual self-motion with extra-retinal signals. However, due to conventional constraints of fMRI acquisition in humans, neural responses to visuo-vestibular integration have only been studied using artificial stimuli, in the absence of voluntary head motion. Here we circumvented these limitations by allowing participants to move their heads during scanning. The slow dynamics of the BOLD signal allowed us to acquire neural signals related to head motion after the observer's head was stabilized by inflatable air cushions. Visual stimuli were presented on head-fixed display goggles and updated in real time as a function of head motion, which was tracked using an external camera. Two conditions simulated forward translation of the participant. During physical head rotation, the congruent condition simulated a stable world, whereas the incongruent condition added arbitrary lateral motion. Importantly, both conditions were precisely matched in visual properties and head rotation. By comparing congruent with incongruent conditions, we found evidence consistent with the multimodal integration of visual cues with head motion into a coherent "stable world" percept in the parietal operculum and in an anterior part of parieto-insular cortex (aPIC). In the visual motion network, human regions MST, a dorsal part of VIP, the cingulate sulcus visual area (CSv), and a region in precuneus (Pc) showed differential responses to the same contrast. The results demonstrate for the first time neural multimodal interactions between precisely matched congruent versus incongruent visual and non-visual cues during physical head movement in the human brain. The methodological approach opens the path to a new class of fMRI studies with unprecedented temporal and spatial control over visuo-vestibular stimulation. Copyright © 2018 Elsevier Inc. All rights reserved.
Arcaro, Michael J; Honey, Christopher J; Mruczek, Ryan E B; Kastner, Sabine; Hasson, Uri
2015-02-19
The human visual system can be divided into more than two dozen distinct areas, each of which contains a topographic map of the visual field. A fundamental question in vision neuroscience is how the visual system integrates information from the environment across these areas. Using neuroimaging, we investigated the spatial pattern of correlated BOLD signal across eight visual areas in data collected during rest and during naturalistic movie viewing. The correlation pattern between areas reflected the underlying receptive field organization, with higher correlations between cortical sites containing overlapping representations of visual space. In addition, the correlation pattern reflected the widespread eccentricity organization of visual cortex, in which the highest correlations were observed for cortical sites with iso-eccentricity representations, including regions with non-overlapping representations of visual space. This eccentricity-based correlation pattern appears to be part of an intrinsic functional architecture that supports the integration of information across functionally specialized visual areas.
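The analysis described reduces to computing pairwise correlations between regional time courses. A synthetic sketch in which two sites share an "iso-eccentricity" signal component while a third site is independent (simulated data, not the study's BOLD recordings):

```python
import numpy as np

rng = np.random.default_rng(2)
n_timepoints = 200
shared = rng.normal(size=n_timepoints)  # signal shared by iso-eccentric sites

# Hypothetical BOLD time courses: two foveal sites in different areas
# sharing the eccentricity signal, plus one unrelated peripheral site.
site_fovea_v1 = shared + rng.normal(scale=0.5, size=n_timepoints)
site_fovea_v4 = shared + rng.normal(scale=0.5, size=n_timepoints)
site_periph_v1 = rng.normal(size=n_timepoints)

corr = np.corrcoef([site_fovea_v1, site_fovea_v4, site_periph_v1])
# Sites with overlapping (iso-eccentric) representations correlate
# strongly with each other and weakly with the independent site.
```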
Reliability-Weighted Integration of Audiovisual Signals Can Be Modulated by Top-down Attention
Noppeney, Uta
2018-01-01
Abstract Behaviorally, it is well established that human observers integrate signals near-optimally weighted in proportion to their reliabilities as predicted by maximum likelihood estimation. Yet, despite abundant behavioral evidence, it is unclear how the human brain accomplishes this feat. In a spatial ventriloquist paradigm, participants were presented with auditory, visual, and audiovisual signals and reported the location of the auditory or the visual signal. Combining psychophysics, multivariate functional MRI (fMRI) decoding, and models of maximum likelihood estimation (MLE), we characterized the computational operations underlying audiovisual integration at distinct cortical levels. We estimated observers’ behavioral weights by fitting psychometric functions to participants’ localization responses. Likewise, we estimated the neural weights by fitting neurometric functions to spatial locations decoded from regional fMRI activation patterns. Our results demonstrate that low-level auditory and visual areas encode predominantly the spatial location of the signal component of a region’s preferred auditory (or visual) modality. By contrast, intraparietal sulcus forms spatial representations by integrating auditory and visual signals weighted by their reliabilities. Critically, the neural and behavioral weights and the variance of the spatial representations depended not only on the sensory reliabilities as predicted by the MLE model but also on participants’ modality-specific attention and report (i.e., visual vs. auditory). These results suggest that audiovisual integration is not exclusively determined by bottom-up sensory reliabilities. Instead, modality-specific attention and report can flexibly modulate how intraparietal sulcus integrates sensory signals into spatial representations to guide behavioral responses (e.g., localization and orienting). PMID:29527567
Neurophysiology underlying influence of stimulus reliability on audiovisual integration.
Shatzer, Hannah; Shen, Stanley; Kerlin, Jess R; Pitt, Mark A; Shahin, Antoine J
2018-01-24
We tested the predictions of the dynamic reweighting model (DRM) of audiovisual (AV) speech integration, which posits that spectrotemporally reliable (informative) AV speech stimuli induce a reweighting of processing from low-level to high-level auditory networks. This reweighting decreases sensitivity to acoustic onsets and in turn increases tolerance to AV onset asynchronies (AVOA). EEG was recorded while subjects watched videos of a speaker uttering trisyllabic nonwords that varied in spectrotemporal reliability and asynchrony of the visual and auditory inputs. Subjects judged the stimuli as in-sync or out-of-sync. Results showed that subjects exhibited greater AVOA tolerance for non-blurred than for blurred visual speech, and for less degraded than for more degraded acoustic speech. Increased AVOA tolerance was reflected in reduced amplitude of the P1-P2 auditory evoked potentials, a neurophysiological indication of reduced sensitivity to acoustic onsets and successful AV integration. There was also sustained visual alpha band (8-14 Hz) suppression (desynchronization) following acoustic speech onsets for non-blurred vs. blurred visual speech, consistent with continuous engagement of the visual system as the speech unfolds. The current findings suggest that increased spectrotemporal reliability of acoustic and visual speech promotes robust AV integration, partly by suppressing sensitivity to acoustic onsets, in support of the DRM's reweighting mechanism. Increased visual signal reliability also sustains the engagement of the visual system with the auditory system to maintain alignment of information across modalities. © 2018 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
MONGKIE: an integrated tool for network analysis and visualization for multi-omics data.
Jang, Yeongjun; Yu, Namhee; Seo, Jihae; Kim, Sun; Lee, Sanghyuk
2016-03-18
Network-based integrative analysis is a powerful technique for extracting biological insights from multilayered omics data such as somatic mutations, copy number variations, and gene expression data. However, integrated analysis of multi-omics data is quite complicated and can hardly be done in an automated way. Thus, a powerful interactive visual mining tool supporting diverse analysis algorithms for identification of driver genes and regulatory modules is much needed. Here, we present a software platform that integrates network visualization with omics data analysis tools seamlessly. The visualization unit supports various options for displaying multi-omics data as well as unique network models for describing sophisticated biological networks such as complex biomolecular reactions. In addition, we implemented diverse in-house algorithms for network analysis including network clustering and over-representation analysis. Novel functions include facile definition and optimized visualization of subgroups, comparison of a series of data sets in an identical network by data-to-visual mapping and subsequent overlaying function, and management of custom interaction networks. Utility of MONGKIE for network-based visual data mining of multi-omics data was demonstrated by analysis of the TCGA glioblastoma data. MONGKIE was developed in Java based on the NetBeans plugin architecture, thus being OS-independent with intrinsic support of module extension by third-party developers. We believe that MONGKIE would be a valuable addition to network analysis software by supporting many unique features and visualization options, especially for analysing multi-omics data sets in cancer and other diseases.
Prediction and constraint in audiovisual speech perception
Peelle, Jonathan E.; Sommers, Mitchell S.
2015-01-01
During face-to-face conversational speech listeners must efficiently process a rapid and complex stream of multisensory information. Visual speech can serve as a critical complement to auditory information because it provides cues to both the timing of the incoming acoustic signal (the amplitude envelope, influencing attention and perceptual sensitivity) and its content (place and manner of articulation, constraining lexical selection). Here we review behavioral and neurophysiological evidence regarding listeners' use of visual speech information. Multisensory integration of audiovisual speech cues improves recognition accuracy, particularly for speech in noise. Even when speech is intelligible based solely on auditory information, adding visual information may reduce the cognitive demands placed on listeners through increasing precision of prediction. Electrophysiological studies demonstrate that oscillatory cortical entrainment to speech in auditory cortex is enhanced when visual speech is present, increasing sensitivity to important acoustic cues. Neuroimaging studies also suggest increased activity in auditory cortex when congruent visual information is available, but additionally emphasize the involvement of heteromodal regions of posterior superior temporal sulcus as playing a role in integrative processing. We interpret these findings in a framework of temporally-focused lexical competition in which visual speech information affects auditory processing to increase sensitivity to auditory information through an early integration mechanism, and a late integration stage that incorporates specific information about a speaker's articulators to constrain the number of possible candidates in a spoken utterance. Ultimately it is words compatible with both auditory and visual information that most strongly determine successful speech perception during everyday listening. Thus, audiovisual speech perception is accomplished through multiple stages of integration, supported by distinct neuroanatomical mechanisms. PMID:25890390
Fundamentals of Structural Geology
NASA Astrophysics Data System (ADS)
Pollard, David D.; Fletcher, Raymond C.
2005-09-01
Fundamentals of Structural Geology provides a new framework for the investigation of geological structures by integrating field mapping and mechanical analysis. Assuming a basic knowledge of physical geology, introductory calculus and physics, it emphasizes the observational data, modern mapping technology, principles of continuum mechanics, and the mathematical and computational skills necessary to quantitatively map, describe, model, and explain deformation in Earth's lithosphere. By starting from the fundamental conservation laws of mass and momentum, the constitutive laws of material behavior, and the kinematic relationships for strain and rate of deformation, the authors demonstrate the relevance of solid and fluid mechanics to structural geology. This book offers a modern quantitative approach to structural geology for advanced students and researchers in structural geology and tectonics. The book integrates field mapping using modern technology with the analysis of structures based on a complete treatment of mechanics. MATLAB is used to visualize physical fields and analytical results, and MATLAB scripts can be downloaded from the supplementary website to recreate textbook graphics and enable students to explore their choice of parameters and boundary conditions. The website hosts color images of outcrop photographs used in the text, supplementary color images, and images of textbook figures for classroom presentations. It also includes student exercises designed to instill the fundamental relationships and to encourage visualization of the evolution of geological structures; solutions to the exercises are available to instructors.
Integrative Genomics Viewer (IGV): high-performance genomics data visualization and exploration
Thorvaldsdóttir, Helga; Mesirov, Jill P.
2013-01-01
Data visualization is an essential component of genomic data analysis. However, the size and diversity of the data sets produced by today’s sequencing and array-based profiling methods present major challenges to visualization tools. The Integrative Genomics Viewer (IGV) is a high-performance viewer that efficiently handles large heterogeneous data sets, while providing a smooth and intuitive user experience at all levels of genome resolution. A key characteristic of IGV is its focus on the integrative nature of genomic studies, with support for both array-based and next-generation sequencing data, and the integration of clinical and phenotypic data. Although IGV is often used to view genomic data from public sources, its primary emphasis is to support researchers who wish to visualize and explore their own data sets or those from colleagues. To that end, IGV supports flexible loading of local and remote data sets, and is optimized to provide high-performance data visualization and exploration on standard desktop systems. IGV is freely available for download from http://www.broadinstitute.org/igv, under a GNU LGPL open-source license. PMID:22517427
Integrative Genomics Viewer (IGV): high-performance genomics data visualization and exploration.
Thorvaldsdóttir, Helga; Robinson, James T; Mesirov, Jill P
2013-03-01
Data visualization is an essential component of genomic data analysis. However, the size and diversity of the data sets produced by today's sequencing and array-based profiling methods present major challenges to visualization tools. The Integrative Genomics Viewer (IGV) is a high-performance viewer that efficiently handles large heterogeneous data sets, while providing a smooth and intuitive user experience at all levels of genome resolution. A key characteristic of IGV is its focus on the integrative nature of genomic studies, with support for both array-based and next-generation sequencing data, and the integration of clinical and phenotypic data. Although IGV is often used to view genomic data from public sources, its primary emphasis is to support researchers who wish to visualize and explore their own data sets or those from colleagues. To that end, IGV supports flexible loading of local and remote data sets, and is optimized to provide high-performance data visualization and exploration on standard desktop systems. IGV is freely available for download from http://www.broadinstitute.org/igv, under a GNU LGPL open-source license.
A component-based software environment for visualizing large macromolecular assemblies.
Sanner, Michel F
2005-03-01
The interactive visualization of large biological assemblies poses a number of challenging problems, including the development of multiresolution representations and new interaction methods for navigating and analyzing these complex systems. An additional challenge is the development of flexible software environments that will facilitate the integration and interoperation of computational models and techniques from a wide variety of scientific disciplines. In this paper, we present a component-based software development strategy centered on the high-level, object-oriented, interpretive programming language: Python. We present several software components, discuss their integration, and describe some of their features that are relevant to the visualization of large molecular assemblies. Several examples are given to illustrate the interoperation of these software components and the integration of structural data from a variety of experimental sources. These examples illustrate how combining visual programming with component-based software development facilitates the rapid prototyping of novel visualization tools.
TRAPR: R Package for Statistical Analysis and Visualization of RNA-Seq Data.
Lim, Jae Hyun; Lee, Soo Youn; Kim, Ju Han
2017-03-01
High-throughput transcriptome sequencing, also known as RNA sequencing (RNA-Seq), is a standard technology for measuring gene expression with unprecedented accuracy. Numerous bioconductor packages have been developed for the statistical analysis of RNA-Seq data. However, these tools focus on specific aspects of the data analysis pipeline, and are difficult to appropriately integrate with one another due to their disparate data structures and processing methods. They also lack visualization methods to confirm the integrity of the data and the process. In this paper, we propose an R-based RNA-Seq analysis pipeline called TRAPR, an integrated tool that facilitates the statistical analysis and visualization of RNA-Seq expression data. TRAPR provides various functions for data management, the filtering of low-quality data, normalization, transformation, statistical analysis, data visualization, and result visualization that allow researchers to build customized analysis pipelines.
Görg, Carsten; Liu, Zhicheng; Kihm, Jaeyeon; Choo, Jaegul; Park, Haesun; Stasko, John
2013-10-01
Investigators across many disciplines and organizations must sift through large collections of text documents to understand and piece together information. Whether they are fighting crime, curing diseases, deciding what car to buy, or researching a new field, inevitably investigators will encounter text documents. Taking a visual analytics approach, we integrate multiple text analysis algorithms with a suite of interactive visualizations to provide a flexible and powerful environment that allows analysts to explore collections of documents while sensemaking. Our particular focus is on the process of integrating automated analyses with interactive visualizations in a smooth and fluid manner. We illustrate this integration through two example scenarios: an academic researcher examining InfoVis and VAST conference papers and a consumer exploring car reviews while pondering a purchase decision. Finally, we provide lessons learned toward the design and implementation of visual analytics systems for document exploration and understanding.
A Century of Gestalt Psychology in Visual Perception II. Conceptual and Theoretical Foundations
Wagemans, Johan; Feldman, Jacob; Gepshtein, Sergei; Kimchi, Ruth; Pomerantz, James R.; van der Helm, Peter A.; van Leeuwen, Cees
2012-01-01
Our first review paper on the occasion of the centennial anniversary of Gestalt psychology focused on perceptual grouping and figure-ground organization. It concluded that further progress requires a reconsideration of the conceptual and theoretical foundations of the Gestalt approach, which is provided here. In particular, we review contemporary formulations of holism within an information-processing framework, allowing for operational definitions (e.g., integral dimensions, emergent features, configural superiority, global precedence, primacy of holistic/configural properties) and a refined understanding of its psychological implications (e.g., at the level of attention, perception, and decision). We also review four lines of theoretical progress regarding the law of Prägnanz—the brain’s tendency of being attracted towards states corresponding to the simplest possible organization, given the available stimulation. The first considers the brain as a complex adaptive system and explains how self-organization solves the conundrum of trading between robustness and flexibility of perceptual states. The second specifies the economy principle in terms of optimization of neural resources, showing that elementary sensors working independently to minimize uncertainty can respond optimally at the system level. The third considers how Gestalt percepts (e.g., groups, objects) are optimal given the available stimulation, with optimality specified in Bayesian terms. Fourth, Structural Information Theory explains how a Gestaltist visual system that focuses on internal coding efficiency yields external veridicality as a side-effect. To answer the fundamental question of why things look as they do, a further synthesis of these complementary perspectives is required. PMID:22845750
A century of Gestalt psychology in visual perception: II. Conceptual and theoretical foundations.
Wagemans, Johan; Feldman, Jacob; Gepshtein, Sergei; Kimchi, Ruth; Pomerantz, James R; van der Helm, Peter A; van Leeuwen, Cees
2012-11-01
Our first review article (Wagemans et al., 2012) on the occasion of the centennial anniversary of Gestalt psychology focused on perceptual grouping and figure-ground organization. It concluded that further progress requires a reconsideration of the conceptual and theoretical foundations of the Gestalt approach, which is provided here. In particular, we review contemporary formulations of holism within an information-processing framework, allowing for operational definitions (e.g., integral dimensions, emergent features, configural superiority, global precedence, primacy of holistic/configural properties) and a refined understanding of its psychological implications (e.g., at the level of attention, perception, and decision). We also review 4 lines of theoretical progress regarding the law of Prägnanz, the brain's tendency of being attracted towards states corresponding to the simplest possible organization, given the available stimulation. The first considers the brain as a complex adaptive system and explains how self-organization solves the conundrum of trading between robustness and flexibility of perceptual states. The second specifies the economy principle in terms of optimization of neural resources, showing that elementary sensors working independently to minimize uncertainty can respond optimally at the system level. The third considers how Gestalt percepts (e.g., groups, objects) are optimal given the available stimulation, with optimality specified in Bayesian terms. Fourth, structural information theory explains how a Gestaltist visual system that focuses on internal coding efficiency yields external veridicality as a side effect. To answer the fundamental question of why things look as they do, a further synthesis of these complementary perspectives is required.
Stone, David B.; Urrea, Laura J.; Aine, Cheryl J.; Bustillo, Juan R.; Clark, Vincent P.; Stephen, Julia M.
2011-01-01
In real-world settings, information from multiple sensory modalities is combined to form a complete, behaviorally salient percept - a process known as multisensory integration. While deficits in auditory and visual processing are often observed in schizophrenia, little is known about how multisensory integration is affected by the disorder. The present study examined auditory, visual, and combined audio-visual processing in schizophrenia patients using high-density electrical mapping. An ecologically relevant task was used to compare unisensory and multisensory evoked potentials from schizophrenia patients to potentials from healthy normal volunteers. Analysis of unisensory responses revealed a large decrease in the N100 component of the auditory-evoked potential, as well as early differences in the visual-evoked components in the schizophrenia group. Differences in early evoked responses to multisensory stimuli were also detected. Multisensory facilitation was assessed by comparing the sum of auditory and visual evoked responses to the audio-visual evoked response. Schizophrenia patients showed a significantly greater absolute magnitude response to audio-visual stimuli than to summed unisensory stimuli when compared to healthy volunteers, indicating significantly greater multisensory facilitation in the patient group. Behavioral responses also indicated increased facilitation from multisensory stimuli. The results represent the first report of increased multisensory facilitation in schizophrenia and suggest that, although unisensory deficits are present, compensatory mechanisms may exist under certain conditions that permit improved multisensory integration in individuals afflicted with the disorder. PMID:21807011
Liu, Baolin; Wu, Guangning; Wang, Zhongning; Ji, Xiang
2011-07-01
In the real world, some of the auditory and visual information received by the human brain is temporally asynchronous. How is such information integrated in cognitive processing in the brain? In this paper, we aimed to study the semantic integration of audio-visual information with different asynchronies in cognitive processing, using the ERP (event-related potential) method. Subjects were presented with videos of real-world events in which the auditory and visual information were temporally asynchronous. When the critical action was prior to the sound, sounds incongruous with the preceding critical actions elicited an N400 effect when compared to the congruous condition. This result demonstrates that semantic contextual integration indexed by the N400 also applies to cognitive processing of multisensory information. In addition, the N400 effect is early in latency when contrasted with other visually induced N400 studies, showing that cross-modal information is facilitated in time when contrasted with visual information in isolation. When the sound was prior to the critical action, a larger late positive wave was observed under the incongruous condition compared to the congruous condition. This P600 might represent a reanalysis process, in which the mismatch between the critical action and the preceding sound was evaluated, showing that environmental sound may affect the cognitive processing of a visual event. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Developmental trends in the facilitation of multisensory objects with distractors
Downing, Harriet C.; Barutchu, Ayla; Crewther, Sheila G.
2015-01-01
Sensory integration and the ability to discriminate target objects from distractors are critical to survival, yet the developmental trajectories of these abilities are unknown. This study investigated developmental changes in 9- (n = 18) and 11-year-old (n = 20) children, adolescents (n = 19) and adults (n = 22) using an audiovisual object discrimination task with uni- and multisensory distractors. Reaction times (RTs) were slower with visual/audiovisual distractors, and although all groups demonstrated facilitation of multisensory RTs in these conditions, children's and adolescents' responses corresponded to fewer race model violations than adults', suggesting protracted maturation of multisensory processes. Multisensory facilitation could not be explained by changes in RT variability, suggesting that tests of race model violations may still have theoretical value at least for familiar multisensory stimuli. PMID:25653630
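The race-model violations this abstract reports are conventionally assessed with Miller's race-model inequality: true multisensory integration is inferred when P(RT_av ≤ t) exceeds P(RT_a ≤ t) + P(RT_v ≤ t) at some time t. A minimal sketch of that test follows; the function names and RT values are hypothetical, not the study's data.

```python
# Sketch of a race-model inequality test on reaction times (RTs).
# A positive violation indicates facilitation beyond what two
# independent unisensory "racers" could produce.

def ecdf(samples, t):
    """Empirical cumulative probability P(RT <= t)."""
    return sum(1 for s in samples if s <= t) / len(samples)

def race_model_violation(rt_av, rt_a, rt_v, t):
    """Amount by which the audiovisual CDF exceeds the race-model bound
    at time t (positive values indicate a violation)."""
    bound = min(1.0, ecdf(rt_a, t) + ecdf(rt_v, t))
    return ecdf(rt_av, t) - bound

rt_a  = [320, 340, 360, 380, 400]   # illustrative auditory-only RTs (ms)
rt_v  = [330, 350, 370, 390, 410]   # illustrative visual-only RTs (ms)
rt_av = [280, 300, 310, 330, 350]   # illustrative audiovisual RTs (ms)

v = race_model_violation(rt_av, rt_a, rt_v, 300)
# v > 0 here: at 300 ms the multisensory CDF exceeds the race-model bound.
```

In practice the bound is evaluated across RT quantiles rather than at a single t, but the core comparison is the one shown.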
Looking into the Eye with a Smartphone
NASA Astrophysics Data System (ADS)
Colicchia, Giuseppe; Wiesner, Hartmut
2015-02-01
Thanks to their sensors and the large number of apps available, smartphones can be used as a useful tool to carry out new laboratory experiments in physics.1-3 Such devices, very popular among young people, may be a successful approach to improve students' interest in the subject, particularly in a medical context. In addition to their small camera, smartphones usually have an integrated LED light source that is in line with the visual axis of the camera sensor. Using a smartphone, it is hence possible to take photos or videos of the fundus (retina) inside the eye through the pupil. We will explain the optical principles underlying the methods for observing the fundus of the eye (ophthalmoscopy) and describe how students can perform "fundus" photography on eye models using a smartphone.
Schizophrenia and the corpus callosum: developmental, structural and functional relationships.
David, A S
1994-10-20
Several empirical and theoretical connections exist between schizophrenia and the corpus callosum: (1) disconnection symptoms resemble certain psychotic phenomena; (2) abnormal interhemispheric transmission could explain typically schizophrenic phenomena; (3) cases of psychosis have been found in association with complete and partial agenesis of the callosum; (4) experimental neuropsychology with schizophrenic patients has revealed abnormal patterns of interhemispheric transfer; (5) studies using magnetic resonance imaging have shown abnormal callosal dimensions in schizophrenic patients. The evidence in support of these links is discussed critically. Novel neuropsychological approaches to the study of interhemispheric information transfer in the visual modality are highlighted; their results are consistent with callosal hyperconnectivity in schizophrenic patients but not in matched psychiatric controls. Some suggestions for further work, including integrating functional and structural measures, are offered.
Teaching Tectonics to Undergraduates with Web GIS
NASA Astrophysics Data System (ADS)
Anastasio, D. J.; Bodzin, A.; Sahagian, D. L.; Rutzmoser, S.
2013-12-01
Geospatial reasoning skills provide a means for manipulating, interpreting, and explaining structured information and are involved in higher-order cognitive processes that include problem solving and decision-making. Appropriately designed tools, technologies, and curricula can support spatial learning. We present Web-based visualization and analysis tools developed with Javascript APIs to enhance tectonic curricula while promoting geospatial thinking and scientific inquiry. The Web GIS interface integrates graphics, multimedia, and animations that allow users to explore and discover geospatial patterns that are not easily recognized. Features include a swipe tool that enables users to see underneath layers, query tools useful in exploration of earthquake and volcano data sets, a subduction and elevation profile tool which facilitates visualization between map and cross-sectional views, drafting tools, a location function, and interactive image dragging functionality on the Web GIS. The Web GIS is platform-independent and can be used on tablets or computers. The GIS tool set enables learners to view, manipulate, and analyze rich data sets from local to global scales, including such data as geology, population, heat flow, land cover, seismic hazards, fault zones, continental boundaries, and elevation using two- and three-dimensional visualization and analytical software. Coverages that allow users to explore plate boundaries and global heat flow processes aided learning in a Lehigh University Earth and environmental science Structural Geology and Tectonics class and are freely available on the Web.
NASA Astrophysics Data System (ADS)
Walker, Robin Annette
A series of dissection tasks was developed in this mixed-methods study of student self-explanations of their learning using actual and virtual multidimensional science dissections and visuo-spatial instruction. Thirty-five seventh-grade students from a science classroom (20 female, 15 male; age 13 years) were assigned to three dissection environments instructing them to: (a) construct static paper designs of frogs, (b) perform active dissections with formaldehyde specimens, and (c) engage with interactive 3D frog visualizations and virtual simulations. This mixed-methods analysis of student engagement with anchored dissection materials found learning gains on labeling exercises and lab assessments among most students. Data revealed that students who correctly utilized multimedia text and diagrams, individually and collaboratively, manipulated 3D tools more effectively and were better able to self-explain and complete their dissection work. Student questionnaire responses corroborated that they preferred learning how to dissect a frog using 3D multimedia instruction. The data were used to discuss the impact of 3D technologies, programs, and activities on student learning, spatial reasoning, and their interest in science. Implications were drawn regarding how to best integrate 3D visualizations into science curricula as innovative learning options for students, as instructional alternatives for teachers, and as mandated dissection choices for those who object to physical dissections in schools.
[Studies of vision by Leonardo da Vinci].
Berggren, L
2001-01-01
Leonardo was an advocate of the intromission theory of vision. Light rays from the object to the eye caused visual perceptions which were transported to the brain ventricles via a hollow optic nerve. Leonardo introduced wax injections to explore the ventricular system. Perceptions were assumed to go to the "senso comune" in the middle (3rd) ventricle, also the seat of the soul. The processing station "imprensiva" in the anterior lateral horns, together with memory ("memoria") in the posterior (4th) ventricle, integrated the visual perceptions into visual experience. Leonardo's sketches with circular lenses in the center of the eye reveal that his dependence on medieval optics prevailed over anatomical observation. Drawings of the anatomy of the sectioned eye are missing, although Leonardo had invented a new embedding technique: in order to dissect the eye without spilling its contents, the eye was first boiled in egg white and then cut. When the procedure was repeated, it showed that the ovoid lens had become spherical after boiling. Leonardo described how light rays were refracted and reflected in the eye, but his imperfect anatomy prevented a development of physiological optics. He was, however, the first to compare the eye with a pinhole camera (camera obscura). Leonardo's drawings of the inverted pictures on the back wall of a camera obscura inspired its use as an instrument for artistic practice, and the camera obscura was for centuries a model for explaining human vision.
Associative Visual Agnosia: A Case Study
Charnallet, A.; Carbonnel, S.; David, D.; Moreaud, O.
2008-01-01
We report a case of massive associative visual agnosia. In the light of current theories of identification and semantic knowledge organization, a deficit involving both the structural description system and visual semantics must be assumed to explain the case. We suggest, in line with a previous case study [1], an alternative account in the framework of (non-abstractive) episodic models of memory [4]. PMID:18413915
Collinear facilitation and contour integration in autism: evidence for atypical visual integration.
Jachim, Stephen; Warren, Paul A; McLoughlin, Niall; Gowen, Emma
2015-01-01
Autism spectrum disorder (ASD) is a neurodevelopmental disorder characterized by impaired social interaction, atypical communication and a restricted repertoire of interests and activities. Altered sensory and perceptual experiences are also common, and a notable perceptual difference between individuals with ASD and controls is their superior performance in visual tasks where it may be beneficial to ignore global context. This superiority may be the result of atypical integrative processing. To explore this claim we investigated visual integration in adults with ASD (diagnosed with Asperger's Syndrome) using two psychophysical tasks thought to rely on integrative processing: collinear facilitation and contour integration. We measured collinear facilitation at different flanker orientation offsets and contour integration for both open and closed contours. Our results indicate that compared to matched controls, ASD participants show (i) reduced collinear facilitation, despite equivalent performance without flankers; and (ii) less benefit from closed contours in contour integration. These results indicate weaker visuospatial integration in adults with ASD and suggest that further studies using these types of paradigms would provide knowledge on how contextual processing is altered in ASD.
On the effects of multimodal information integration in multitasking.
Stock, Ann-Kathrin; Gohil, Krutika; Huster, René J; Beste, Christian
2017-07-07
There have recently been considerable advances in our understanding of the neuronal mechanisms underlying multitasking, but the role of multimodal integration for this faculty has remained rather unclear. We examined this issue by comparing different modality combinations in a multitasking (stop-change) paradigm. In-depth neurophysiological analyses of event-related potentials (ERPs) were conducted to complement the obtained behavioral data. Specifically, we applied signal decomposition using second order blind identification (SOBI) to the multi-subject ERP data and source localization. We found that both general multimodal information integration and modality-specific aspects (potentially related to task difficulty) modulate behavioral performance and associated neurophysiological correlates. Simultaneous multimodal input generally increased early attentional processing of visual stimuli (i.e. P1 and N1 amplitudes) as well as measures of cognitive effort and conflict (i.e. central P3 amplitudes). Yet, tactile-visual input caused larger impairments in multitasking than audio-visual input. General aspects of multimodal information integration modulated the activity in the premotor cortex (BA 6) as well as different visual association areas concerned with the integration of visual information with input from other modalities (BA 19, BA 21, BA 37). On top of this, differences in the specific combination of modalities also affected performance and measures of conflict/effort originating in prefrontal regions (BA 6).
Li, X; Yang, Y; Ren, G
2009-06-16
Language is often perceived together with visual information. Recent experimental evidence indicates that, during spoken language comprehension, the brain can immediately integrate visual information with semantic or syntactic information from speech. Here we used the mismatch negativity to further investigate whether prosodic information from speech can be immediately integrated into a visual scene context, and especially the time course and automaticity of this integration process. Sixteen Chinese native speakers participated in the study. The materials included Chinese spoken sentences and picture pairs. In the audiovisual situation, relative to the concomitant pictures, the spoken sentence was appropriately accented in the standard stimuli, but inappropriately accented in the two kinds of deviant stimuli. In the purely auditory situation, the speech sentences were presented without pictures. It was found that the deviants evoked mismatch responses in both audiovisual and purely auditory situations; the mismatch negativity in the purely auditory situation peaked at the same time as, but was weaker than, that evoked by the same deviant speech sounds in the audiovisual situation. This pattern of results suggested immediate integration of prosodic information from speech and visual information from pictures in the absence of focused attention.
Frequency modulation of neural oscillations according to visual task demands.
Wutz, Andreas; Melcher, David; Samaha, Jason
2018-02-06
Temporal integration in visual perception is thought to occur within cycles of occipital alpha-band (8-12 Hz) oscillations. Successive stimuli may be integrated when they fall within the same alpha cycle and segregated for different alpha cycles. Consequently, the speed of alpha oscillations correlates with the temporal resolution of perception, such that lower alpha frequencies provide longer time windows for perceptual integration and higher alpha frequencies correspond to faster sampling and segregation. Can the brain's rhythmic activity be dynamically controlled to adjust its processing speed according to different visual task demands? We recorded magnetoencephalography (MEG) while participants switched between task instructions for temporal integration and segregation, holding stimuli and task difficulty constant. We found that the peak frequency of alpha oscillations decreased when visual task demands required temporal integration compared with segregation. Alpha frequency was strategically modulated immediately before and during stimulus processing, suggesting a preparatory top-down source of modulation. Its neural generators were located in occipital and inferotemporal cortex. The frequency modulation was specific to alpha oscillations and did not occur in the delta (1-3 Hz), theta (3-7 Hz), beta (15-30 Hz), or gamma (30-50 Hz) frequency range. These results show that alpha frequency is under top-down control to increase or decrease the temporal resolution of visual perception.
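The link between alpha frequency and temporal resolution described above can be illustrated with a minimal sketch: the peak of the FFT power spectrum within the 8-12 Hz band gives the alpha frequency, and a lower peak implies a longer cycle (e.g. 1/9 Hz ≈ 111 ms vs 1/11 Hz ≈ 91 ms), i.e. a wider integration window. The signal here is simulated; the study's actual MEG analysis pipeline is not reproduced.

```python
import numpy as np

def peak_alpha_frequency(signal, fs, band=(8.0, 12.0)):
    """Peak of the FFT power spectrum, restricted to the alpha band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return freqs[in_band][np.argmax(power[in_band])]

# Simulated 2 s epoch: a 10.5 Hz oscillation in noise, sampled at 250 Hz.
fs = 250
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 10.5 * t) + 0.5 * rng.standard_normal(t.size)
f_peak = peak_alpha_frequency(x, fs)  # lower f_peak -> longer integration window
```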
Zhou, Zhiyi; Bernard, Melanie R; Bonds, A B
2008-04-02
Spatiotemporal relationships among contour segments can influence synchronization of neural responses in the primary visual cortex. We performed a systematic study to dissociate the impact of spatial and temporal factors in the signaling of contour integration via synchrony. In addition, we characterized the temporal evolution of this process to clarify potential underlying mechanisms. With a 10 x 10 microelectrode array, we recorded the simultaneous activity of multiple cells in the cat primary visual cortex while stimulating with drifting sine-wave gratings. We preserved temporal integrity and systematically degraded spatial integrity of the sine-wave gratings by adding spatial noise. Neural synchronization was analyzed in the time and frequency domains by conducting cross-correlation and coherence analyses. The general association between neural spike trains depends strongly on spatial integrity, with coherence in the gamma band (35-70 Hz) showing greater sensitivity to the change of spatial structure than other frequency bands. Analysis of the temporal dynamics of synchronization in both time and frequency domains suggests that spike timing synchronization is triggered nearly instantaneously by coherent structure in the stimuli, whereas frequency-specific oscillatory components develop more slowly, presumably through network interactions. Our results suggest that, whereas temporal integrity is required for the generation of synchrony, spatial integrity is critical in triggering subsequent gamma band synchronization.
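The coherence analysis described above can be sketched as follows: magnitude-squared coherence from segment-averaged FFTs (a Welch-style estimate), then the maximum within the gamma band (35-70 Hz). The two simulated "spike trains" and all parameters are illustrative stand-ins, not the recorded data or the authors' analysis code.

```python
import numpy as np

def band_coherence(x, y, fs, nperseg=512):
    """Magnitude-squared coherence from segment-averaged windowed FFTs."""
    win = np.hanning(nperseg)
    Sxx, Syy, Sxy = 0.0, 0.0, 0.0
    for k in range(len(x) // nperseg):
        seg = slice(k * nperseg, (k + 1) * nperseg)
        xs, ys = np.fft.rfft(win * x[seg]), np.fft.rfft(win * y[seg])
        Sxx, Syy = Sxx + np.abs(xs) ** 2, Syy + np.abs(ys) ** 2
        Sxy = Sxy + xs * ys.conj()
    return np.fft.rfftfreq(nperseg, 1.0 / fs), np.abs(Sxy) ** 2 / (Sxx * Syy)

# Two simulated trains sharing a common gamma-band (50 Hz) component plus noise.
fs = 1000
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(1)
shared = np.sin(2 * np.pi * 50 * t)
x = shared + rng.standard_normal(t.size)
y = shared + rng.standard_normal(t.size)
f, Cxy = band_coherence(x, y, fs)
gamma_coherence = Cxy[(f >= 35) & (f <= 70)].max()
```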
Effect of Common Visual Dysfunctions on Reading.
ERIC Educational Resources Information Center
McPartland, Brian P.
1985-01-01
Six common visual dysfunctions are briefly explained and their relationships to reading noted: (1) ametropia, refractive error; (2) inaccurate saccades, the small jumping eye movements used in reading; (3) inefficient binocularity/fusion; (4) insufficient convergence/divergence; (5) heterophoria, an imbalance in the extra-ocular muscles; and (6)…
Rogé, Joceline; Gabaude, Catherine
2009-08-01
The goal of this study was to establish whether the deterioration of the useful visual field due to sleep deprivation and age in a screen monitoring activity could be explained by a decrease in perceptual sensitivity and/or a modification of the participant's decision criterion (two indices derived from signal detection theory). In the first experiment, a comparison of three age groups (young, middle-aged, elderly) showed that perceptual sensitivity decreased with age and that the decision criterion became more conservative. In the second experiment, measurement of the useful visual field was carried out on participants who had been deprived of sleep the previous night or had a complete night of sleep. Perceptual sensitivity significantly decreased with sleep debt, and sleep deprivation provoked an increase in the participants' decision criterion. Moreover, the comparison of two age groups (young, middle-aged) indicated that sensitivity decreased with age. The value of using these two indices to explain the deterioration of useful visual field is discussed.
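The two signal-detection-theory indices used in the study, perceptual sensitivity (d′) and decision criterion (c), can be computed from hit and false-alarm rates; a more positive c corresponds to the more conservative criterion reported for older and sleep-deprived participants. This is a generic textbook sketch, not the study's analysis code.

```python
from statistics import NormalDist

def dprime_and_criterion(hit_rate, fa_rate):
    """Sensitivity d' and criterion c from hit and false-alarm rates.

    d' = z(H) - z(F);  c = -(z(H) + z(F)) / 2.
    Rates must lie strictly between 0 and 1 (apply a correction otherwise).
    """
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -(z(hit_rate) + z(fa_rate)) / 2
    return d_prime, criterion

# Example: an unbiased observer with 84% hits and 16% false alarms.
d, c = dprime_and_criterion(0.84, 0.16)
```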
Clayton, Francina J; Sears, Claire; Davis, Alice; Hulme, Charles
2018-07-01
Paired-associate learning (PAL) tasks measure the ability to form a novel association between a stimulus and a response. Performance on such tasks is strongly associated with reading ability, and there is increasing evidence that verbal task demands may be critical in explaining this relationship. The current study investigated the relationships between different forms of PAL and reading ability. A total of 97 children aged 8-10 years completed a battery of reading assessments and six different PAL tasks (phoneme-phoneme, visual-phoneme, nonverbal-nonverbal, visual-nonverbal, nonword-nonword, and visual-nonword) involving both familiar phonemes and unfamiliar nonwords. A latent variable path model showed that PAL ability is captured by two correlated latent variables: auditory-articulatory and visual-articulatory. The auditory-articulatory latent variable was the stronger predictor of reading ability, providing support for a verbal account of the PAL-reading relationship. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
Direct visuomotor mapping for fast visually-evoked arm movements.
Reynolds, Raymond F; Day, Brian L
2012-12-01
In contrast to conventional reaction time (RT) tasks, saccadic RTs to visual targets are very fast and unaffected by the number of possible targets. This can be explained by the sub-cortical circuitry underlying eye movements, which involves direct mapping between retinal input and motor output in the superior colliculus. Here we asked if the choice-invariance established for the eyes also applies to a special class of fast visuomotor responses of the upper limb. Using a target-pointing paradigm we observed very fast reaction times (<150 ms) which were completely unaffected as the number of possible target choices was increased from 1 to 4. When we introduced a condition of altered stimulus-response mapping, RT went up and a cost of choice was observed. These results can be explained by direct mapping between visual input and motor output, compatible with a sub-cortical pathway for visual control of the upper limb. Copyright © 2012 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Nye, Barbara A.
Data from a statewide screening of Tennessee Head Start children on the Developmental Test of Visual-Motor Integration (VMI) are analyzed in this report for two purposes: to determine whether sex, race, and residence have a significant influence on visual motor development as measured by the VMI, and to develop VMI norms for the Tennessee Head…
The Influence of Selective and Divided Attention on Audiovisual Integration in Children.
Yang, Weiping; Ren, Yanna; Yang, Dan Ou; Yuan, Xue; Wu, Jinglong
2016-01-24
This article aims to investigate whether there is a difference in audiovisual integration in school-aged children (aged 6 to 13 years; mean age = 9.9 years) between the selective attention condition and the divided attention condition. We designed a visual and/or auditory detection task that included three blocks (divided attention, visual-selective attention, and auditory-selective attention). The results showed that the response to bimodal audiovisual stimuli was faster than to unimodal auditory or visual stimuli under both divided attention and auditory-selective attention conditions. However, in the visual-selective attention condition, no significant difference was found between the unimodal visual and bimodal audiovisual stimuli in response speed. Moreover, audiovisual behavioral facilitation effects were compared between divided attention and selective attention (auditory or visual attention). In doing so, we found that audiovisual behavioral facilitation differed significantly between divided attention and selective attention. The results indicated that audiovisual integration was stronger in the divided attention condition than in the selective attention condition in children. Our findings objectively support the notion that attention can modulate audiovisual integration in school-aged children. Our study might offer a new perspective for identifying children with conditions that are associated with sustained attention deficit, such as attention-deficit hyperactivity disorder. © The Author(s) 2016.
Inter-area correlations in the ventral visual pathway reflect feature integration
Freeman, Jeremy; Donner, Tobias H.; Heeger, David J.
2011-01-01
During object perception, the brain integrates simple features into representations of complex objects. A perceptual phenomenon known as visual crowding selectively interferes with this process. Here, we use crowding to characterize a neural correlate of feature integration. Cortical activity was measured with functional magnetic resonance imaging, simultaneously in multiple areas of the ventral visual pathway (V1–V4 and the visual word form area, VWFA, which responds preferentially to familiar letters), while human subjects viewed crowded and uncrowded letters. Temporal correlations between cortical areas were lower for crowded letters than for uncrowded letters, especially between V1 and VWFA. These differences in correlation were retinotopically specific, and persisted when attention was diverted from the letters. But correlation differences were not evident when we substituted the letters with grating patches that were not crowded under our stimulus conditions. We conclude that inter-area correlations reflect feature integration and are disrupted by crowding. We propose that crowding may perturb the transformations between neural representations along the ventral pathway that underlie the integration of features into objects. PMID:21521832
Liu, Baolin; Wang, Zhongning; Jin, Zhixing
2009-09-11
In real life, the human brain usually receives information through visual and auditory channels and processes the multisensory information, but studies on the integration processing of dynamic visual and auditory information are relatively few. In this paper, we designed an experiment in which common-scenario, real-world videos with matched and mismatched actions (images) and sounds served as stimuli; we aimed to study the integration processing of synchronized visual and auditory information in videos of real-world events in the human brain using event-related potential (ERP) methods. Experimental results showed that videos of mismatched actions (images) and sounds elicited a larger P400 than videos of matched actions (images) and sounds. We believe that the P400 waveform might be related to the cognitive integration processing of mismatched multisensory information in the human brain. The results also indicated that synchronized multisensory information would interfere with each other, which would influence the results of the cognitive integration processing.
White matter tract integrity predicts visual search performance in young and older adults.
Bennett, Ilana J; Motes, Michael A; Rao, Neena K; Rypma, Bart
2012-02-01
Functional imaging research has identified frontoparietal attention networks involved in visual search, with mixed evidence regarding whether different networks are engaged when the search target differs from distracters by a single (elementary) versus multiple (conjunction) features. Neural correlates of visual search, and their potential dissociation, were examined here using integrity of white matter connecting the frontoparietal networks. The effect of aging on these brain-behavior relationships was also of interest. Younger and older adults performed a visual search task and underwent diffusion tensor imaging (DTI) to reconstruct 2 frontoparietal (superior and inferior longitudinal fasciculus; SLF and ILF) and 2 midline (genu, splenium) white matter tracts. As expected, results revealed age-related declines in conjunction, but not elementary, search performance; and in ILF and genu tract integrity. Importantly, integrity of the superior longitudinal fasciculus, ILF, and genu tracts predicted search performance (conjunction and elementary), with no significant age group differences in these relationships. Thus, integrity of white matter tracts connecting frontoparietal attention networks contributes to search performance in younger and older adults. Copyright © 2012 Elsevier Inc. All rights reserved.
Towal, R Blythe; Mormann, Milica; Koch, Christof
2013-10-01
Many decisions we make require visually identifying and evaluating numerous alternatives quickly. These usually vary in reward, or value, and in low-level visual properties, such as saliency. Both saliency and value influence the final decision. In particular, saliency affects fixation locations and durations, which are predictive of choices. However, it is unknown how saliency propagates to the final decision. Moreover, the relative influence of saliency and value is unclear. Here we address these questions with an integrated model that combines a perceptual decision process about where and when to look with an economic decision process about what to choose. The perceptual decision process is modeled as a drift-diffusion model (DDM) process for each alternative. Using psychophysical data from a multiple-alternative, forced-choice task, in which subjects have to pick one food item from a crowded display via eye movements, we test four models where each DDM process is driven by (i) saliency or (ii) value alone or (iii) an additive or (iv) a multiplicative combination of both. We find that models including both saliency and value weighted in a one-third to two-thirds ratio (saliency-to-value) significantly outperform models based on either quantity alone. These eye fixation patterns modulate an economic decision process, also described as a DDM process driven by value. Our combined model quantitatively explains fixation patterns and choices with similar or better accuracy than previous models, suggesting that visual saliency has a smaller, but significant, influence than value and that saliency affects choices indirectly through perceptual decisions that modulate economic decisions.
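The additive model described above can be illustrated with a minimal race of drift-diffusion accumulators, one per item, whose drift mixes normalized saliency and value in roughly the reported one-third to two-thirds ratio. The threshold, noise level, and item values below are illustrative placeholders, not the authors' fitted parameters.

```python
import numpy as np

def ddm_race(saliency, value, w=1/3, threshold=1.0, dt=0.001, noise=0.25, rng=None):
    """Race of drift-diffusion accumulators, one per display item.

    Each accumulator's drift is an additive mix of saliency and value,
    weighted ~1/3 : 2/3 (saliency : value). Returns the index of the first
    accumulator to reach threshold and the decision time in seconds.
    """
    rng = rng or np.random.default_rng()
    drift = w * np.asarray(saliency, float) + (1 - w) * np.asarray(value, float)
    x = np.zeros(drift.size)
    t = 0.0
    while x.max() < threshold:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal(drift.size)
        t += dt
    return int(x.argmax()), t

# Four items: item 0 is the most salient, item 2 the most valuable.
choice, rt = ddm_race(saliency=[0.9, 0.2, 0.3, 0.1],
                      value=[0.2, 0.4, 0.9, 0.3],
                      rng=np.random.default_rng(42))
```

With these weights, item 2's combined drift dominates, so it should win most races despite item 0's higher saliency.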
From attentional gating in macaque primary visual cortex to dyslexia in humans.
Vidyasagar, T R
2001-01-01
Selective attention is an important aspect of brain function that we need in coping with the immense and constant barrage of sensory information. One model of attention (Feature Integration Theory) that suggests an early selection of spatial locations of objects via an attentional spotlight would also solve the 'binding problem' (that is, how do different attributes of each object get correctly bound together?). Our experiments have demonstrated modulation of specific locations of interest at the level of the primary visual cortex in both visual discrimination and memory tasks, where the actual locations of the targets were also important for performing the task. It is suggested that the feedback mediating the modulation arises from the posterior parietal cortex, which would also be consistent with its known role in attentional control. In primates, the magnocellular (M) and parvocellular (P) pathways are the two major streams of inputs from the retina, carrying distinctly different types of information, and they remain fairly segregated in their projections to the primary visual cortex and further into the extra-striate regions. The P inputs go mainly into the ventral (temporal) stream, while the dorsal (parietal) stream is dominated by M inputs. A theory of attentional gating is proposed here in which the M-dominated dorsal stream gates the P inputs into the ventral stream. This framework is used to provide a neural explanation of the processes involved in reading and in learning to read. This scheme also explains how a magnocellular deficit could cause the common reading impairment, dyslexia.
Cloud-Based Visualizations: A Tool to Achieve Optimized Operational Decision Making and Data Integration.
Hudson, Paul C.; Rzasa, Jeffrey A.
2015-06-01
Neural networks supporting audiovisual integration for speech: A large-scale lesion study.
Hickok, Gregory; Rogalsky, Corianne; Matchin, William; Basilakos, Alexandra; Cai, Julia; Pillay, Sara; Ferrill, Michelle; Mickelsen, Soren; Anderson, Steven W; Love, Tracy; Binder, Jeffrey; Fridriksson, Julius
2018-06-01
Auditory and visual speech information are often strongly integrated resulting in perceptual enhancements for audiovisual (AV) speech over audio alone and sometimes yielding compelling illusory fusion percepts when AV cues are mismatched, the McGurk-MacDonald effect. Previous research has identified three candidate regions thought to be critical for AV speech integration: the posterior superior temporal sulcus (STS), early auditory cortex, and the posterior inferior frontal gyrus. We assess the causal involvement of these regions (and others) in the first large-scale (N = 100) lesion-based study of AV speech integration. Two primary findings emerged. First, behavioral performance and lesion maps for AV enhancement and illusory fusion measures indicate that classic metrics of AV speech integration are not necessarily measuring the same process. Second, lesions involving superior temporal auditory, lateral occipital visual, and multisensory zones in the STS are the most disruptive to AV speech integration. Further, when AV speech integration fails, the nature of the failure-auditory vs visual capture-can be predicted from the location of the lesions. These findings show that AV speech processing is supported by unimodal auditory and visual cortices as well as multimodal regions such as the STS at their boundary. Motor related frontal regions do not appear to play a role in AV speech integration. Copyright © 2018 Elsevier Ltd. All rights reserved.
Rousseaux, Marc; Braem, Bérenger; Honoré, Jacques; Saj, Arnaud
2015-08-01
Brain hemisphere lesions often cause a contralesional tilt of the subjective vertical (SV), a phenomenon related to spatial neglect and postural disorders. Depending on the method employed, different perceptual systems come into play when this gravitational vertical is assessed. Here, we compared the anatomical and psychophysical characteristics of modality-dependent SV biases in patients with right hemisphere stroke. The SV was measured with visual, haptic and visual-haptic modalities (SVV, SHV, SVHV) in 46 patients with a relatively recent stroke. Voxel-based lesion-symptom mapping (performed with NPM(®)) was used to highlight brain areas in which lesions best explained the severity of task biases (p < .05). Lesions explaining the SVV tilt (TSVV) were centered on the posterior part of the middle temporal gyrus; those explaining the TSHV were more limited and anterior, without convergence with the former. Lesions explaining the TSVHV were centered on the superior temporal gyrus and were more anterior than those explaining the TSVV, with convergence with lesions explaining both the TSVV and the TSHV. Patients showed counterclockwise deviations in the SVs. Constant and variable errors were greater for the SHV than for the SVV and the SVHV. The TSVV and TSVHV were closely related to the presence of left spatial neglect and hemianopia. Errors in the SVV and (to a lesser degree) the SVHV were preferentially related to lesions in visual associative cortex. The SVV and especially the SVHV provide valuable estimates of patient difficulties, in view of their lower variable errors (i.e., greater precision) and closer relationships with clinical disorders. Copyright © 2015 Elsevier Ltd. All rights reserved.
Audio visual summary: Implementing PURPA in Mid-America
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
The audio-visual presentation, Implementing PURPA in Mid-America, is a slide presentation designed to complement deliverable W-101-2, a booklet entitled Implementing PURPA in Mid-America: A Guide to the Public Utility Regulatory Policies Act. The presentation lasts 10 to 12 min and explains the major sections of PURPA, the rules promulgated by the Federal Energy Regulatory Commission to implement PURPA, and the implications of PURPA and its rules. It delineates the rights and responsibilities of citizens who want to sell electricity to utilities, explains the certification process, and discusses the rights and responsibilities of the utilities.
The picture superiority effect in categorization: visual or semantic?
Job, R; Rumiati, R; Lotto, L
1992-09-01
Two experiments are reported whose aim was to replicate and generalize the results presented by Snodgrass and McCullough (1986) on the effect of visual similarity in the categorization process. For pictures, Snodgrass and McCullough's results were replicated because Ss took longer to discriminate elements from 2 categories when they were visually similar than when they were visually dissimilar. However, unlike Snodgrass and McCullough, an analogous increase was also observed for word stimuli. The pattern of results obtained here can be explained most parsimoniously with reference to the effect of semantic similarity, or semantic and visual relatedness, rather than to visual similarity alone.
Visual Constructive and Visual-Motor Skills in Deaf Native Signers
ERIC Educational Resources Information Center
Hauser, Peter C.; Cohen, Julie; Dye, Matthew W. G.; Bavelier, Daphne
2007-01-01
Visual constructive and visual-motor skills in the deaf population were investigated by comparing performance of deaf native signers (n = 20) to that of hearing nonsigners (n = 20) on the Beery-Buktenica Developmental Test of Visual-Motor Integration, Rey-Osterrieth Complex Figure Test, Wechsler Memory Scale Visual Reproduction subtest, and…
Santer, Roger D.
2014-01-01
Palpalis group tsetse flies are the major vectors of human African trypanosomiasis, and visually-attractive targets and traps are important tools for their control. Considerable efforts are underway to optimise these visual baits, and one factor that has been investigated is coloration. Analyses of the link between visual bait coloration and tsetse fly catches have used methods which poorly replicate sensory processing in the fly visual system, but doing so would allow the visual information driving tsetse attraction to these baits to be more fully understood, and the reflectance spectra of candidate visual baits to be more completely analysed. Following methods well established for other species, I reanalyse the numbers of tsetse flies caught at visual baits based upon the calculated photoreceptor excitations elicited by those baits. I do this for large sets of previously published data for Glossina fuscipes fuscipes (Lindh et al. (2012). PLoS Negl Trop Dis 6: e1661), G. palpalis palpalis (Green (1988). Bull Ent Res 78: 591), and G. pallidipes (Green and Flint (1986). Bull Ent Res 76: 409). Tsetse attraction to visual baits in these studies can be explained by a colour opponent mechanism to which the UV-blue photoreceptor R7y contributes positively, and both the green-yellow photoreceptor R8y, and the low-wavelength UV photoreceptor R7p, contribute negatively. A tool for calculating fly photoreceptor excitations is made available with this paper, and this will facilitate a complete and biologically authentic description of visual bait reflectance spectra that can be employed in the search for more efficacious visual baits, or the analysis of future studies of tsetse fly attraction. PMID:25473844
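The photoreceptor-based analysis described above can be sketched as follows: relative receptor excitations are computed as quantum catches for a target, von Kries-adapted to the viewing background, and then combined into an opponent signal with the reported signs (+R7y, -R8y, -R7p). The Gaussian sensitivity curves, peak wavelengths, reflectance spectra, and unit opponent weights are all hypothetical stand-ins, not the spectra or weights from the paper or its accompanying tool.

```python
import numpy as np

wl = np.arange(300.0, 701.0, 5.0)  # wavelength grid, nm

def gaussian_sensitivity(peak_nm, width=40.0):
    """Stand-in spectral sensitivity (the real fly photoreceptor spectra differ)."""
    return np.exp(-0.5 * ((wl - peak_nm) / width) ** 2)

# Hypothetical peak wavelengths: R7p low-UV, R7y UV-blue, R8y green-yellow.
receptors = {"R7p": gaussian_sensitivity(330.0),
             "R7y": gaussian_sensitivity(430.0),
             "R8y": gaussian_sensitivity(530.0)}

def excitations(reflectance, illuminant, background):
    """Receptor quantum catches relative to the adapting background (von Kries)."""
    return {name: float(np.sum(s * reflectance * illuminant) /
                        np.sum(s * background * illuminant))
            for name, s in receptors.items()}

# A hypothetical blue target viewed against a flat grey background.
illuminant = np.ones_like(wl)
background = np.full_like(wl, 0.3)
blue_target = 0.1 + 0.6 * np.exp(-0.5 * ((wl - 450.0) / 30.0) ** 2)
e = excitations(blue_target, illuminant, background)
# Opponent signal with the reported signs: +R7y, -R8y, -R7p (unit weights assumed).
opponent = e["R7y"] - 0.5 * (e["R8y"] + e["R7p"])
```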
Dissociable Effects of Aging and Mild Cognitive Impairment on Bottom-Up Audiovisual Integration.
Festa, Elena K; Katz, Andrew P; Ott, Brian R; Tremont, Geoffrey; Heindel, William C
2017-01-01
Effective audiovisual sensory integration involves dynamic changes in functional connectivity between superior temporal sulcus and primary sensory areas. This study examined whether disrupted connectivity in early Alzheimer's disease (AD) produces impaired audiovisual integration under conditions requiring greater corticocortical interactions. Audiovisual speech integration was examined in healthy young adult controls (YC), healthy elderly controls (EC), and patients with amnestic mild cognitive impairment (MCI) using McGurk-type stimuli (providing either congruent or incongruent audiovisual speech information) under conditions differing in the strength of bottom-up support and the degree of top-down lexical asymmetry. All groups accurately identified auditory speech under congruent audiovisual conditions, and displayed high levels of visual bias under strong bottom-up incongruent conditions. Under weak bottom-up incongruent conditions, however, EC and amnestic MCI groups displayed opposite patterns of performance, with enhanced visual bias in the EC group and reduced visual bias in the MCI group relative to the YC group. Moreover, there was no overlap between the EC and MCI groups in individual visual bias scores reflecting the change in audiovisual integration from the strong to the weak stimulus conditions. Top-down lexicality influences on visual biasing were observed only in the MCI patients under weaker bottom-up conditions. Results support a deficit in bottom-up audiovisual integration in early AD attributable to disruptions in corticocortical connectivity. Given that this deficit is not simply an exacerbation of changes associated with healthy aging, tests of audiovisual speech integration may serve as sensitive and specific markers of the earliest cognitive change associated with AD.
Yang, Weiping; Li, Qi; Ochi, Tatsuya; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Takahashi, Satoshi; Wu, Jinglong
2013-01-01
This article aims to investigate whether auditory stimuli in the horizontal plane, particularly originating from behind the participant, affect audiovisual integration by using behavioral and event-related potential (ERP) measurements. In this study, visual stimuli were presented directly in front of the participants, auditory stimuli were presented at one location in an equidistant horizontal plane at the front (0°, the fixation point), right (90°), back (180°), or left (270°) of the participants, and audiovisual stimuli that included both visual stimuli and auditory stimuli originating from one of the four locations were simultaneously presented. These stimuli were presented randomly with equal probability; during this time, participants were asked to attend to the visual stimulus and respond promptly only to visual target stimuli (a unimodal visual target stimulus and the visual target of the audiovisual stimulus). A significant facilitation of reaction times and hit rates was obtained following audiovisual stimulation, irrespective of whether the auditory stimuli were presented in the front or back of the participant. However, no significant interactions were found between visual stimuli and auditory stimuli from the right or left. Two main ERP components related to audiovisual integration were found: first, auditory stimuli from the front location produced an ERP reaction over the right temporal area and right occipital area at approximately 160-200 milliseconds; second, auditory stimuli from the back produced a reaction over the parietal and occipital areas at approximately 360-400 milliseconds. Our results confirmed that audiovisual integration was also elicited, even though auditory stimuli were presented behind the participant, but no integration occurred when auditory stimuli were presented in the right or left spaces, suggesting that the human brain might be more sensitive to information received from behind than from either side. PMID:23799097
Prediction and constraint in audiovisual speech perception.
Peelle, Jonathan E; Sommers, Mitchell S
2015-07-01
During face-to-face conversational speech, listeners must efficiently process a rapid and complex stream of multisensory information. Visual speech can serve as a critical complement to auditory information because it provides cues to both the timing of the incoming acoustic signal (the amplitude envelope, influencing attention and perceptual sensitivity) and its content (place and manner of articulation, constraining lexical selection). Here we review behavioral and neurophysiological evidence regarding listeners' use of visual speech information. Multisensory integration of audiovisual speech cues improves recognition accuracy, particularly for speech in noise. Even when speech is intelligible based solely on auditory information, adding visual information may reduce the cognitive demands placed on listeners through increasing the precision of prediction. Electrophysiological studies demonstrate that oscillatory cortical entrainment to speech in auditory cortex is enhanced when visual speech is present, increasing sensitivity to important acoustic cues. Neuroimaging studies also suggest increased activity in auditory cortex when congruent visual information is available, but additionally emphasize the involvement of heteromodal regions of posterior superior temporal sulcus as playing a role in integrative processing. We interpret these findings in a framework of temporally-focused lexical competition in which visual speech information affects auditory processing to increase sensitivity to acoustic information through an early integration mechanism, and a late integration stage that incorporates specific information about a speaker's articulators to constrain the number of possible candidates in a spoken utterance. Ultimately, it is words compatible with both auditory and visual information that most strongly determine successful speech perception during everyday listening.
Thus, audiovisual speech perception is accomplished through multiple stages of integration, supported by distinct neuroanatomical mechanisms.
Spatial integration and cortical dynamics.
Gilbert, C D; Das, A; Ito, M; Kapadia, M; Westheimer, G
1996-01-23
Cells in adult primary visual cortex are capable of integrating information over much larger portions of the visual field than was originally thought. Moreover, their receptive field properties can be altered by the context within which local features are presented and by changes in visual experience. The substrate for both spatial integration and cortical plasticity is likely to be found in a plexus of long-range horizontal connections, formed by cortical pyramidal cells, which link cells within each cortical area over distances of 6-8 mm. The relationship between horizontal connections and cortical functional architecture suggests a role in visual segmentation and spatial integration. The distribution of lateral interactions within striate cortex was visualized with optical recording, and their functional consequences were explored by using comparable stimuli in human psychophysical experiments and in recordings from alert monkeys. They may represent the substrate for perceptual phenomena such as illusory contours, surface fill-in, and contour saliency. The dynamic nature of receptive field properties and cortical architecture has been seen over time scales ranging from seconds to months. One can induce a remapping of the topography of visual cortex by making focal binocular retinal lesions. Shorter-term plasticity of cortical receptive fields was observed following brief periods of visual stimulation. The mechanisms involved entailed, for the short-term changes, altering the effectiveness of existing cortical connections, and for the long-term changes, sprouting of axon collaterals and synaptogenesis. The mutability of cortical function implies a continual process of calibration and normalization of the perception of visual attributes that is dependent on sensory experience throughout adulthood and might further represent the mechanism of perceptual learning.
Supporting Knowledge Integration in Chemistry with a Visualization-Enhanced Inquiry Unit
NASA Astrophysics Data System (ADS)
Chiu, Jennifer L.; Linn, Marcia C.
2014-02-01
This paper describes the design and impact of an inquiry-oriented online curriculum that takes advantage of dynamic molecular visualizations to improve students' understanding of chemical reactions. The visualization-enhanced unit uses research-based guidelines following the knowledge integration framework to help students develop coherent understanding by connecting and refining existing and new ideas. The inquiry unit supports students to develop connections among molecular, observable, and symbolic representations of chemical reactions. Design-based research included a pilot study, a study comparing the visualization-enhanced inquiry unit to typical instruction, and a course-long comparison study featuring a delayed posttest. Students participating in the visualization-enhanced unit outperformed students receiving typical instruction and further consolidated their understanding on the delayed posttest. Students who used the visualization-enhanced unit formed more connections among concepts than students with typical textbook and lecture-based instruction. Item analysis revealed the types of connections students made when studying the curriculum and suggested how these connections enabled students to consolidate their understanding as they continued in the chemistry course. Results demonstrate that visualization-enhanced inquiry designed for knowledge integration can improve connections between observable and atomic-level phenomena and serve students well as they study subsequent topics in chemistry.
ERIC Educational Resources Information Center
Cubberley, Carol W.
1991-01-01
Discusses written procedures that explain library tasks and describes methods for writing them clearly and coherently. The use of appropriate terminology and vocabulary is discussed; the value of illustrations, typography, and format to enhance the visual effect is explained; the intended audience is considered; and examples are given. (seven…
Sun, Peng; Zhou, Haoyin; Ha, Seongmin; Hartaigh, Bríain ó; Truong, Quynh A.; Min, James K.
2016-01-01
In clinical cardiology, both anatomy and physiology are needed to diagnose cardiac pathologies. CT imaging and computer simulations provide valuable and complementary data for this purpose. However, it remains challenging to gain useful information from the large amount of high-dimensional diverse data. The current tools are not adequately integrated to visualize anatomic and physiologic data from a complete yet focused perspective. We introduce a new computer-aided diagnosis framework, which allows for comprehensive modeling and visualization of cardiac anatomy and physiology from CT imaging data and computer simulations, with a primary focus on ischemic heart disease. The following visual information is presented: (1) Anatomy from CT imaging: geometric modeling and visualization of cardiac anatomy, including four heart chambers, left and right ventricular outflow tracts, and coronary arteries; (2) Function from CT imaging: motion modeling, strain calculation, and visualization of four heart chambers; (3) Physiology from CT imaging: quantification and visualization of myocardial perfusion and contextual integration with coronary artery anatomy; (4) Physiology from computer simulation: computation and visualization of hemodynamics (e.g., coronary blood velocity, pressure, shear stress, and fluid forces on the vessel wall). Feedback from cardiologists has confirmed the practical utility of integrating these features for the purpose of computer-aided diagnosis of ischemic heart disease. PMID:26863663
The Inversion of Sensory Processing by Feedback Pathways: A Model of Visual Cognitive Functions.
ERIC Educational Resources Information Center
Harth, E.; And Others
1987-01-01
Explains the hierarchic structure of the mammalian visual system. Proposes a model in which feedback pathways serve to modify sensory stimuli in ways that enhance and complete sensory input patterns. Investigates the functioning of the system through computer simulations. (ML)
Preschoolers Benefit From Visually Salient Speech Cues
Holt, Rachael Frush
2015-01-01
Purpose: This study explored visual speech influence in preschoolers using 3 developmentally appropriate tasks that vary in perceptual difficulty and task demands. The authors also examined developmental differences in the ability to use visually salient speech cues and visual phonological knowledge. Method: Twelve adults and 27 typically developing 3- and 4-year-old children completed 3 audiovisual (AV) speech integration tasks: matching, discrimination, and recognition. The authors compared AV benefit for visually salient and less visually salient speech discrimination contrasts and assessed the visual saliency of consonant confusions in auditory-only and AV word recognition. Results: Four-year-olds and adults demonstrated visual influence on all measures. Three-year-olds demonstrated visual influence on speech discrimination and recognition measures. All groups demonstrated greater AV benefit for the visually salient discrimination contrasts. AV recognition benefit in 4-year-olds and adults depended on the visual saliency of speech sounds. Conclusions: Preschoolers can demonstrate AV speech integration. Their AV benefit results from efficient use of visually salient speech cues. Four-year-olds, but not 3-year-olds, used visual phonological knowledge to take advantage of visually salient speech cues, suggesting possible developmental differences in the mechanisms of AV benefit. PMID:25322336
Harlow C. Landphair
1979-01-01
This paper relates the evolution of an empirical model used to predict public response to scenic quality objectively. The text relates the methods used to develop the visual quality index model, explains the terms used in the equation and briefly illustrates how the model is applied and how it is tested. While the technical application of the model relies heavily on...
Gender differences in episodic memory and visual working memory including the effects of age.
Pauls, Franz; Petermann, Franz; Lepach, Anja Christina
2013-01-01
Analysing the relationship between gender and memory, and examining the effects of age on overall memory-related functioning, are ongoing goals of psychological research. The present study examined gender and age group differences in episodic memory with respect to the type of task. In addition, these subgroup differences were also analysed in visual working memory. A sample of 366 women and 330 men, aged between 16 and 69 years, participated in the current study. Results indicate that women outperformed men on auditory memory tasks, whereas male adolescents and older male adults showed higher level performances on visual episodic and visual working memory measures. However, the size of gender-linked effects varied somewhat across age groups. Furthermore, results partly support a declining performance on episodic memory and visual working memory measures with increasing age. Although age-related losses in episodic memory could not be explained by a decreasing verbal and visuospatial ability with age, women's advantage in auditory episodic memory could be explained by their advantage in verbal ability. Men's higher level visual episodic memory performance was found to result from their advantage in visuospatial ability. Finally, possible methodological, biological, and cognitive explanations for the current findings are discussed.
Integrating Data Clustering and Visualization for the Analysis of 3D Gene Expression Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Data Analysis and Visualization; International Research Training Group "Visualization of Large and Unstructured Data Sets," University of Kaiserslautern, Germany; Computational Research Division, Lawrence Berkeley National Laboratory, One Cyclotron Road, Berkeley, CA 94720, USA
2008-05-12
The recent development of methods for extracting precise measurements of spatial gene expression patterns from three-dimensional (3D) image data opens the way for new analyses of the complex gene regulatory networks controlling animal development. We present an integrated visualization and analysis framework that supports user-guided data clustering to aid exploration of these new complex datasets. The interplay of data visualization and clustering-based data classification leads to improved visualization and enables a more detailed analysis than previously possible. We discuss (i) integration of data clustering and visualization into one framework; (ii) application of data clustering to 3D gene expression data; (iii) evaluation of the number of clusters k in the context of 3D gene expression clustering; and (iv) improvement of overall analysis quality via dedicated post-processing of clustering results based on visualization. We discuss the use of this framework to objectively define spatial pattern boundaries and temporal profiles of genes and to analyze how mRNA patterns are controlled by their regulatory transcription factors.
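Evaluating the number of clusters k, as point (iii) above describes, is commonly done by comparing within-cluster error across candidate values of k and looking for an elbow. A minimal, dependency-free sketch (illustrative only; the paper's framework is interactive and visualization-driven, and the deterministic initialization here is my own simplification):

```python
def kmeans(points, k, iters=20):
    """Minimal k-means over tuples; deterministic init from the first k points.
    Returns (centroids, labels, within-cluster sum of squared errors)."""
    centroids = [tuple(p) for p in points[:k]]
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: nearest centroid by squared Euclidean distance.
        for i, p in enumerate(points):
            labels[i] = min(range(k), key=lambda c: sum(
                (a - b) ** 2 for a, b in zip(p, centroids[c])))
        # Update step: move each centroid to the mean of its members.
        for c in range(k):
            members = [points[i] for i in range(len(points)) if labels[i] == c]
            if members:
                centroids[c] = tuple(sum(dim) / len(members)
                                     for dim in zip(*members))
    sse = sum(sum((a - b) ** 2 for a, b in zip(p, centroids[labels[i]]))
              for i, p in enumerate(points))
    return centroids, labels, sse

def sse_by_k(points, k_max):
    """Within-cluster error for k = 1..k_max; plot this to find the elbow."""
    return {k: kmeans(points, k)[2] for k in range(1, k_max + 1)}
```

For well-separated data the error drops sharply up to the true number of clusters and flattens after it.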
Pons, Ferran; Andreu, Llorenç; Sanz-Torrent, Monica; Buil-Legaz, Lucía; Lewkowicz, David J.
2014-01-01
Speech perception involves the integration of auditory and visual articulatory information and, thus, requires the perception of temporal synchrony between this information. There is evidence that children with Specific Language Impairment (SLI) have difficulty with auditory speech perception but it is not known if this is also true for the integration of auditory and visual speech. Twenty Spanish-speaking children with SLI, twenty typically developing age-matched Spanish-speaking children, and twenty Spanish-speaking children matched for MLU-w participated in an eye-tracking study to investigate the perception of audiovisual speech synchrony. Results revealed that children with typical language development perceived an audiovisual asynchrony of 666 ms regardless of whether the auditory or visual speech attribute led the other one. Children with SLI only detected the 666 ms asynchrony when the auditory component followed the visual component. None of the groups perceived an audiovisual asynchrony of 366 ms. These results suggest that the difficulty of speech processing by children with SLI would also involve difficulties in integrating auditory and visual aspects of speech perception. PMID:22874648
Stropahl, Maren; Schellhardt, Sebastian; Debener, Stefan
2017-06-01
The concurrent presentation of different auditory and visual syllables may result in the perception of a third syllable, reflecting an illusory fusion of visual and auditory information. This well-known McGurk effect is frequently used for the study of audio-visual integration. Recently, it was shown that the McGurk effect is strongly stimulus-dependent, which complicates comparisons across perceivers and inferences across studies. To overcome this limitation, we developed the freely available Oldenburg audio-visual speech stimuli (OLAVS), consisting of 8 different talkers and 12 different syllable combinations. The quality of the OLAVS set was evaluated with 24 normal-hearing subjects. All 96 stimuli were characterized based on their stimulus disparity, which was obtained from a probabilistic model (cf. Magnotti & Beauchamp, 2015). Moreover, the McGurk effect was studied in eight adult cochlear implant (CI) users. By applying the individual, stimulus-independent parameters of the probabilistic model, the predicted effect of stronger audio-visual integration in CI users could be confirmed, demonstrating the validity of the new stimulus material.
Pons, Ferran; Andreu, Llorenç; Sanz-Torrent, Monica; Buil-Legaz, Lucía; Lewkowicz, David J
2013-06-01
Speech perception involves the integration of auditory and visual articulatory information, and thus requires the perception of temporal synchrony between this information. There is evidence that children with specific language impairment (SLI) have difficulty with auditory speech perception but it is not known if this is also true for the integration of auditory and visual speech. Twenty Spanish-speaking children with SLI, twenty typically developing age-matched Spanish-speaking children, and twenty Spanish-speaking children matched for MLU-w participated in an eye-tracking study to investigate the perception of audiovisual speech synchrony. Results revealed that children with typical language development perceived an audiovisual asynchrony of 666 ms regardless of whether the auditory or visual speech attribute led the other one. Children with SLI only detected the 666 ms asynchrony when the auditory component preceded [corrected] the visual component. None of the groups perceived an audiovisual asynchrony of 366 ms. These results suggest that the difficulty of speech processing by children with SLI would also involve difficulties in integrating auditory and visual aspects of speech perception.
A normalization model suggests that attention changes the weighting of inputs between visual areas
Cohen, Marlene R.
2017-01-01
Models of divisive normalization can explain the trial-averaged responses of neurons in sensory, association, and motor areas under a wide range of conditions, including how visual attention changes the gains of neurons in visual cortex. Attention, like other modulatory processes, is also associated with changes in the extent to which pairs of neurons share trial-to-trial variability. We showed recently that in addition to decreasing correlations between similarly tuned neurons within the same visual area, attention increases correlations between neurons in primary visual cortex (V1) and the middle temporal area (MT) and that an extension of a classic normalization model can account for this correlation increase. One of the benefits of having a descriptive model that can account for many physiological observations is that it can be used to probe the mechanisms underlying processes such as attention. Here, we use electrical microstimulation in V1 paired with recording in MT to provide causal evidence that the relationship between V1 and MT activity is nonlinear and is well described by divisive normalization. We then use the normalization model and recording and microstimulation experiments to show that the attention dependence of V1–MT correlations is better explained by a mechanism in which attention changes the weights of connections between V1 and MT than by a mechanism that modulates responses in either area. Our study shows that normalization can explain interactions between neurons in different areas and provides a framework for using multiarea recording and stimulation to probe the neural mechanisms underlying neuronal computations. PMID:28461501
A normalization model suggests that attention changes the weighting of inputs between visual areas.
Ruff, Douglas A; Cohen, Marlene R
2017-05-16
Models of divisive normalization can explain the trial-averaged responses of neurons in sensory, association, and motor areas under a wide range of conditions, including how visual attention changes the gains of neurons in visual cortex. Attention, like other modulatory processes, is also associated with changes in the extent to which pairs of neurons share trial-to-trial variability. We showed recently that in addition to decreasing correlations between similarly tuned neurons within the same visual area, attention increases correlations between neurons in primary visual cortex (V1) and the middle temporal area (MT) and that an extension of a classic normalization model can account for this correlation increase. One of the benefits of having a descriptive model that can account for many physiological observations is that it can be used to probe the mechanisms underlying processes such as attention. Here, we use electrical microstimulation in V1 paired with recording in MT to provide causal evidence that the relationship between V1 and MT activity is nonlinear and is well described by divisive normalization. We then use the normalization model and recording and microstimulation experiments to show that the attention dependence of V1-MT correlations is better explained by a mechanism in which attention changes the weights of connections between V1 and MT than by a mechanism that modulates responses in either area. Our study shows that normalization can explain interactions between neurons in different areas and provides a framework for using multiarea recording and stimulation to probe the neural mechanisms underlying neuronal computations.
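The divisive normalization at the heart of the two records above has a compact canonical form: a neuron's excitatory drive is divided by a suppressive pool plus a semisaturation constant, and attention can be modeled as a gain applied to both terms (in the style of Reynolds and Heeger's normalization model of attention; the parameter names here are illustrative, not taken from the study):

```python
def normalized_response(drive, pool, attention=1.0, sigma=1.0):
    """Divisive normalization with an attentional gain on both the
    excitatory drive and the suppressive pool; sigma is the
    semisaturation constant that dominates at weak inputs."""
    return (attention * drive) / (attention * pool + sigma)
```

A signature consequence: attention has a large proportional effect when sigma dominates (weak stimuli) and a small one when the pool dominates (strong stimuli), reproducing contrast-dependent attentional gain.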
Asymmetric temporal integration of layer 4 and layer 2/3 inputs in visual cortex.
Hang, Giao B; Dan, Yang
2011-01-01
Neocortical neurons in vivo receive concurrent synaptic inputs from multiple sources, including feedforward, horizontal, and feedback pathways. Layer 2/3 of the visual cortex receives feedforward input from layer 4 and horizontal input from layer 2/3. Firing of the pyramidal neurons, which carries the output to higher cortical areas, depends critically on the interaction of these pathways. Here we examined synaptic integration of inputs from layer 4 and layer 2/3 in rat visual cortical slices. We found that the integration is sublinear and temporally asymmetric, with larger responses if layer 2/3 input preceded layer 4 input. The sublinearity depended on inhibition, and the asymmetry was largely attributable to the difference between the two inhibitory inputs. Interestingly, the asymmetric integration was specific to pyramidal neurons, and it strongly affected their spiking output. Thus via cortical inhibition, the temporal order of activation of layer 2/3 and layer 4 pathways can exert powerful control of cortical output during visual processing.
NASA Astrophysics Data System (ADS)
Liu, Shuai; Chen, Ge; Yao, Shifeng; Tian, Fenglin; Liu, Wei
2017-07-01
This paper presents a novel integrated marine visualization framework which focuses on processing and analyzing multi-dimensional spatiotemporal marine data in one workflow. Effective marine data visualization is needed for extracting useful patterns, recognizing changes, and understanding physical processes in oceanographic research. However, the multi-source, multi-format, multi-dimensional characteristics of marine data pose a challenge for interactive and timely marine data analysis and visualization in one workflow. A global multi-resolution virtual terrain environment is also needed to give oceanographers and the public a real geographic background reference and to help them identify the geographical variation of ocean phenomena. This paper introduces a data integration and processing method to efficiently visualize and analyze the heterogeneous marine data. Based on the data we processed, several GPU-based visualization methods are explored to interactively demonstrate marine data. GPU-tessellated global terrain rendering using ETOPO1 data is realized and the video memory usage is controlled to ensure high efficiency. A modified ray-casting algorithm for the uneven multi-section Argo volume data is also presented and the transfer function is designed to analyze the 3D structure of ocean phenomena. Based on the framework we designed, an integrated visualization system is realized. The effectiveness and efficiency of the framework are demonstrated. This system is expected to make a significant contribution to the demonstration and understanding of marine physical processes in a virtual global environment.
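The ray-casting this abstract mentions rests on standard front-to-back alpha compositing with a transfer function mapping scalar samples to color and opacity. A generic sketch of that core loop (not the paper's modified algorithm, which handles uneven multi-section Argo data; the names here are illustrative):

```python
def composite_ray(samples, transfer):
    """Front-to-back compositing of scalar samples along one ray.
    `transfer` maps a scalar value to (color, alpha), alpha in [0, 1]."""
    color, alpha = 0.0, 0.0
    for s in samples:
        c, a = transfer(s)
        color += (1.0 - alpha) * a * c   # accumulate premultiplied color
        alpha += (1.0 - alpha) * a       # accumulate opacity
        if alpha >= 0.999:               # early ray termination
            break
    return color, alpha
```

Early ray termination is the usual efficiency lever: once accumulated opacity saturates, samples behind it cannot contribute.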
Stone, David B; Urrea, Laura J; Aine, Cheryl J; Bustillo, Juan R; Clark, Vincent P; Stephen, Julia M
2011-10-01
In real-world settings, information from multiple sensory modalities is combined to form a complete, behaviorally salient percept, a process known as multisensory integration. While deficits in auditory and visual processing are often observed in schizophrenia, little is known about how multisensory integration is affected by the disorder. The present study examined auditory, visual, and combined audio-visual processing in schizophrenia patients using high-density electrical mapping. An ecologically relevant task was used to compare unisensory and multisensory evoked potentials from schizophrenia patients to potentials from healthy normal volunteers. Analysis of unisensory responses revealed a large decrease in the N100 component of the auditory-evoked potential, as well as early differences in the visual-evoked components in the schizophrenia group. Differences in early evoked responses to multisensory stimuli were also detected. Multisensory facilitation was assessed by comparing the sum of auditory and visual evoked responses to the audio-visual evoked response. Schizophrenia patients showed a significantly greater absolute magnitude response to audio-visual stimuli than to summed unisensory stimuli when compared to healthy volunteers, indicating significantly greater multisensory facilitation in the patient group. Behavioral responses also indicated increased facilitation from multisensory stimuli. The results represent the first report of increased multisensory facilitation in schizophrenia and suggest that, although unisensory deficits are present, compensatory mechanisms may exist under certain conditions that permit improved multisensory integration in individuals afflicted with the disorder.
Arcaro, Michael J; Honey, Christopher J; Mruczek, Ryan EB; Kastner, Sabine; Hasson, Uri
2015-01-01
The human visual system can be divided into over two-dozen distinct areas, each of which contains a topographic map of the visual field. A fundamental question in vision neuroscience is how the visual system integrates information from the environment across different areas. Using neuroimaging, we investigated the spatial pattern of correlated BOLD signal across eight visual areas on data collected during rest conditions and during naturalistic movie viewing. The correlation pattern between areas reflected the underlying receptive field organization with higher correlations between cortical sites containing overlapping representations of visual space. In addition, the correlation pattern reflected the underlying widespread eccentricity organization of visual cortex, in which the highest correlations were observed for cortical sites with iso-eccentricity representations including regions with non-overlapping representations of visual space. This eccentricity-based correlation pattern appears to be part of an intrinsic functional architecture that supports the integration of information across functionally specialized visual areas. DOI: http://dx.doi.org/10.7554/eLife.03952.001 PMID:25695154
Compensating for age limits through emotional crossmodal integration
Chaby, Laurence; Boullay, Viviane Luherne-du; Chetouani, Mohamed; Plaza, Monique
2015-01-01
Social interactions in daily life necessitate the integration of social signals from different sensory modalities. In the aging literature, it is well established that the recognition of emotion in facial expressions declines with advancing age, and the same occurs with vocal expressions. By contrast, crossmodal integration in healthy aging individuals is less well documented. Here, we investigated age-related effects on emotion recognition when faces and voices were presented alone or simultaneously, allowing for crossmodal integration. In this study, 31 young adults (M = 25.8 years) and 31 older adults (M = 67.2 years) were instructed to identify several basic emotions (happiness, sadness, anger, fear, disgust) and a neutral expression, displayed as visual (facial expressions), auditory (non-verbal affective vocalizations) or crossmodal (simultaneous, congruent facial and vocal affective expressions) stimuli. The results showed that older adults performed more slowly and less accurately than younger adults at recognizing negative emotions from isolated faces and voices. In the crossmodal condition, although slower, older adults were as accurate as younger adults, except for anger. Importantly, additional analyses using the “race model” demonstrated that older adults benefited to the same extent as younger adults from the combination of facial and vocal emotional stimuli. These results help explain some conflicting results in the literature and may clarify emotional abilities related to daily life that are partially spared among older adults. PMID:26074845
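The "race model" analysis mentioned above is commonly tested via the race-model inequality: the crossmodal response-time CDF should not exceed the sum of the two unisensory CDFs; a violation indicates integration beyond statistical facilitation. A minimal sketch on simulated reaction times (the distributions are illustrative assumptions, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical reaction times in seconds; real analyses use empirical data.
rt_face  = rng.normal(0.60, 0.08, 2000)
rt_voice = rng.normal(0.65, 0.08, 2000)
rt_both  = rng.normal(0.50, 0.07, 2000)   # faster crossmodal responses

def ecdf(sample, t):
    # Empirical CDF of `sample` evaluated at each time point in `t`.
    return np.mean(sample[:, None] <= t, axis=0)

t = np.linspace(0.2, 1.0, 161)
race_bound = np.minimum(ecdf(rt_face, t) + ecdf(rt_voice, t), 1.0)
# Positive values violate the race model, implying crossmodal integration.
violation = np.max(ecdf(rt_both, t) - race_bound)
print(violation > 0)
```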
Yeh, Mei-Ling; Chen, Hsing-Hsia; Chung, Yu-Chu
2012-12-01
This study used a larger sample size than previous work, added long-term follow-up of the intervention effect, and provided an integrated intervention of acupressure and interactive multimedia visual health instruction for schoolchildren. A repeated pretest-posttest controlled trial was used with two experimental groups and one control group in four elementary schools in northern Taiwan. 287 schoolchildren with visual impairment in the fourth grade were recruited. One experimental group received the integrative intervention of acupressure and interactive multimedia visual health instruction (ACIMU), and another received auricular acupressure (AC) alone, whereas the control group received no intervention. Two 10-week interventions were given separately in the fall and spring semesters. The short- and long-term effects of the interventions were then evaluated by visual health knowledge, visual acuity, and refractive error. During the school year, visual health knowledge was significantly higher in the ACIMU group than in the control group (p<0.001). A significant difference in the change in visual acuity was found among the three groups (p<0.001), with the greatest improvement in the ACIMU group. No difference in refractive error was found between any two groups (p>0.05). This study demonstrated that a long-term period of acupressure is required to improve schoolchildren's visual health. Schoolchildren receiving the intervention of acupressure combined with interactive multimedia showed greater improvement in visual health and related knowledge than the others. Further study is suggested in which visual health and preventative needs can be established for early childhood. Copyright © 2012 Elsevier Ltd. All rights reserved.
Lahav, Orit; Apter, Alan; Ratzon, Navah Z
2013-01-01
This study evaluates how much the effects of intervention programs are influenced by pre-existing psychological adjustment and self-esteem levels in kindergarten and first grade children with poor visual-motor integration skills, from low socioeconomic backgrounds. One hundred and sixteen mainstream kindergarten and first-grade children, from low socioeconomic backgrounds, scoring below the 25th percentile on a measure of visual-motor integration (VMI) were recruited and randomly divided into two parallel intervention groups. One intervention group received directive visual-motor intervention (DVMI), while the second intervention group received a non-directive supportive intervention (NDSI). Tests were administered to evaluate visual-motor integration skills outcome. Children with higher baseline measures of psychological adjustment and self-esteem responded better in NDSI while children with lower baseline performance on psychological adjustment and self-esteem responded better in DVMI. This study suggests that children from low socioeconomic backgrounds with low VMI performance scores will benefit more from intervention programs if clinicians choose the type of intervention according to baseline psychological adjustment and self-esteem measures. Copyright © 2012 Elsevier Ltd. All rights reserved.
Shape and color conjunction stimuli are represented as bound objects in visual working memory.
Luria, Roy; Vogel, Edward K
2011-05-01
The integrated object view of visual working memory (WM) argues that objects (rather than features) are the building blocks of visual WM, so that adding an extra feature to an object does not incur any extra cost to WM capacity. Alternative views have shown that complex objects consume additional WM storage capacity, suggesting that such objects may not be represented as bound objects. Additionally, it has been argued that two features from the same dimension (i.e., color-color) do not form an integrated object in visual WM. This led some to argue for a "weak" object view of visual WM. We used the contralateral delay activity (CDA) as an electrophysiological marker of WM capacity to test these alternative hypotheses against the integrated object account. In two experiments we presented complex stimuli and color-color conjunction stimuli, and compared performance in displays that had one object but varying degrees of feature complexity. The results supported the integrated object account by showing that CDA amplitude corresponded to the number of objects regardless of the number of features within each object, even for complex objects and color-color conjunction stimuli. Copyright © 2010 Elsevier Ltd. All rights reserved.
Moore, Eider B; Poliakov, Andrew V; Lincoln, Peter; Brinkley, James F
2007-10-15
Three-dimensional (3-D) visualization of multimodality neuroimaging data provides a powerful technique for viewing the relationship between structure and function. A number of applications are available that include some aspect of 3-D visualization, including both free and commercial products. These applications range from highly specific programs for a single modality, to general purpose toolkits that include many image processing functions in addition to visualization. However, few if any of these combine both stand-alone and remote multi-modality visualization in an open source, portable and extensible tool that is easy to install and use, yet can be included as a component of a larger information system. We have developed a new open source multimodality 3-D visualization application, called MindSeer, that has these features: integrated and interactive 3-D volume and surface visualization, Java and Java3D for true cross-platform portability, one-click installation and startup, integrated data management to help organize large studies, extensibility through plugins, transparent remote visualization, and the ability to be integrated into larger information management systems. We describe the design and implementation of the system, as well as several case studies that demonstrate its utility. These case studies are available as tutorials or demos on the associated website: http://sig.biostr.washington.edu/projects/MindSeer. MindSeer provides a powerful visualization tool for multimodality neuroimaging data. Its architecture and unique features also allow it to be extended into other visualization domains within biomedicine.
Building Stories about Sea Level Rise through Interactive Visualizations
NASA Astrophysics Data System (ADS)
Stephens, S. H.; DeLorme, D. E.; Hagen, S. C.
2013-12-01
Digital media provide storytellers with dynamic new tools for communicating about scientific issues via interactive narrative visualizations. While traditional storytelling uses plot, characterization, and point of view to engage audiences with underlying themes and messages, interactive visualizations can be described as 'narrative builders' that promote insight through the process of discovery (Dove, G. & Jones, S. 2012, Proc. IHCI 2012). Narrative visualizations are used in online journalism to tell complex stories that allow readers to select aspects of datasets to explore and construct alternative interpretations of information (Segel, E. & Heer, J. 2010, IEEE Trans. Vis. Comp. Graph.16, 1139), thus enabling them to participate in the story-building process. Nevertheless, narrative visualizations also incorporate author-selected narrative elements that help guide and constrain the overall themes and messaging of the visualization (Hullman, J. & Diakopoulos, N. 2011, IEEE Trans. Vis. Comp. Graph. 17, 2231). One specific type of interactive narrative visualization that is used for science communication is the sea level rise (SLR) viewer. SLR viewers generally consist of a base map, upon which projections of sea level rise scenarios can be layered, and various controls for changing the viewpoint and scenario parameters. They are used to communicate the results of scientific modeling and help readers visualize the potential impacts of SLR on the coastal zone. Readers can use SLR viewers to construct personal narratives of the effects of SLR under different scenarios in locations that are important to them, thus extending the potential reach and impact of scientific research. With careful selection of narrative elements that guide reader interpretation, the communicative aspects of these visualizations may be made more effective. 
This presentation reports the results of a content analysis of a subset of existing SLR viewers selected in order to comprehensively identify and characterize the narrative elements that contribute to this storytelling medium. The results describe four layers of narrative elements in these viewers: data, visual representations, annotations, and interactivity; and explain the ways in which these elements are used to communicate about SLR. Most existing SLR viewers have been designed with attention to technical usability; however, careful design of narrative elements could increase their overall effectiveness as story-building tools. The analysis concludes with recommendations for narrative elements that should be considered when designing new SLR viewers, and offers suggestions for integrating these components to balance author-driven and reader-driven design features for more effective messaging.
Sturz, Bradley R; Green, Marshall L; Gaskin, Katherine A; Evans, Alicia C; Graves, April A; Roberts, Jonathan E
2013-02-15
View-based matching theories of orientation suggest that mobile organisms encode a visual memory consisting of a visual panorama from a target location and maneuver to reduce discrepancy between current visual perception and this stored visual memory to return to a location. Recent success of such theories to explain the orientation behavior of insects and birds raises questions regarding the extent to which such an explanation generalizes to other species. In the present study, we attempted to determine the extent to which such view-based matching theories may explain the orientation behavior of a mammalian species (in this case adult humans). We modified a traditional enclosure orientation task so that it involved only the use of the haptic sense. The use of a haptic orientation task to investigate the extent to which view-based matching theories may explain the orientation behavior of adult humans appeared ideal because it provided an opportunity for us to explicitly prohibit the use of vision. Specifically, we trained disoriented and blindfolded human participants to search by touch for a target object hidden in one of four locations marked by distinctive textural cues located on top of four discrete landmarks arranged in a rectangular array. Following training, we removed the distinctive textural cues and probed the extent to which participants learned the geometry of the landmark array. In the absence of vision and the trained textural cues, participants showed evidence that they learned the geometry of the landmark array. Such evidence cannot be explained by an appeal to view-based matching strategies and is consistent with explanations of spatial orientation related to the incidental learning of environmental geometry.
R-Based Software for the Integration of Pathway Data into Bioinformatic Algorithms
Kramer, Frank; Bayerlová, Michaela; Beißbarth, Tim
2014-01-01
Putting new findings into the context of available literature knowledge is one approach to dealing with the surge of high-throughput data results. Furthermore, prior knowledge can increase the performance and stability of bioinformatic algorithms, for example, methods for network reconstruction. In this review, we examine software packages for the statistical computing framework R that enable the integration of pathway data for further bioinformatic analyses. Different approaches to integrating and visualizing pathway data are identified, and the packages are stratified by their features across several aspects: data import strategies, the extent of available data, dependencies on external tools, integration with further analysis steps, and visualization options. A total of 12 packages integrating pathway data are reviewed in this manuscript, supplemented by five R-specific packages for visualization and six connector packages that provide access to external tools. PMID:24833336
Server-based Approach to Web Visualization of Integrated Three-dimensional Brain Imaging Data
Poliakov, Andrew V.; Albright, Evan; Hinshaw, Kevin P.; Corina, David P.; Ojemann, George; Martin, Richard F.; Brinkley, James F.
2005-01-01
The authors describe a client-server approach to three-dimensional (3-D) visualization of neuroimaging data, which enables researchers to visualize, manipulate, and analyze large brain imaging datasets over the Internet. All computationally intensive tasks are done by a graphics server that loads and processes image volumes and 3-D models, renders 3-D scenes, and sends the renderings back to the client. The authors discuss the system architecture and implementation and give several examples of client applications that allow visualization and analysis of integrated language map data from single and multiple patients. PMID:15561787
ERIC Educational Resources Information Center
Chandramouli, Magesh; Chittamuru, Siva-Teja
2016-01-01
This paper explains the design of a graphics-based virtual environment for teaching computer hardware concepts to students, especially those at the beginner level. Photorealistic visualizations and simulations are designed and programmed with interactive features allowing students to practice, explore, and test themselves on computer hardware…
Increased Content Knowledge of Students with Visual Impairments as a Result of Extended Descriptions
ERIC Educational Resources Information Center
Ely, Richard; Emerson, Robert Wall; Maggiore, Theresa; Rothberg, Madeleine; O'Connell, Trisha; Hudson, Laurel
2006-01-01
The National Center for Accessible Media has developed a technology and protocol for inserting extended, enhanced descriptions of visually based concepts into artificially paused digital video. These "eDescriptions" describe material not fully explained by a narrator and provide analogies and explanation specifically designed for…
Why Visual Literacy: Consciousness and Convention
ERIC Educational Resources Information Center
Rezabek, Landra L.
2005-01-01
In this article, the author discusses the intentions of the October 2005 Association for Educational Communications & Technology (AECT) conference. She explains that the conference will be a shared event between the AECT members and the participants of the 37th annual meeting of the International Visual Literacy Association (IVLA), a stalwart…
Facilitative Orthographic Neighborhood Effects: The SERIOL Model Account
ERIC Educational Resources Information Center
Whitney, Carol; Lavidor, Michal
2005-01-01
A large orthographic neighborhood (N) facilitates lexical decision for central and left visual field/right hemisphere (LVF/RH) presentation, but not for right visual field/left hemisphere (RVF/LH) presentation. Based on the SERIOL model of letter-position encoding, this asymmetric N effect is explained by differential activation patterns at the…
Travel Tales. A Mobility Storybook.
ERIC Educational Resources Information Center
Halpern-Gold, Julia; And Others
The book is designed to supplement mobility and orientation lessons and explain mobility concepts to visually impaired children from preschool through third grade. Each of the 17 chapters centers on the adventures of Eliot, a young visually impaired child, as he learns the following pre-cane orientation and mobility skills: sighted-guide…
DOT National Transportation Integrated Search
2009-12-01
The Integrated Remote Sensing and Visualization System (IRSV) is being designed to accommodate the needs of today's bridge engineers at the state and local level from the following aspects: better understanding and enforcement of a complex ...
MRI-Compatible Pneumatic Robot for Transperineal Prostate Needle Placement
Fischer, Gregory S.; Iordachita, Iulian; Csoma, Csaba; Tokuda, Junichi; DiMaio, Simon P.; Tempany, Clare M.; Hata, Nobuhiko; Fichtinger, Gabor
2010-01-01
Magnetic resonance imaging (MRI) can provide high-quality 3-D visualization of the prostate and surrounding tissue, giving it the potential to be a superior medical imaging modality for guiding and monitoring prostatic interventions. However, these benefits cannot be readily harnessed for interventional procedures due to difficulties surrounding the use of high-field (1.5T or greater) MRI. The inability to use conventional mechatronics and the confined physical space make it extremely challenging to access the patient. We have designed a robotic assistant system that overcomes these difficulties and promises safe and reliable intraprostatic needle placement inside closed high-field MRI scanners. MRI compatibility of the robot has been evaluated under 3T MRI using standard prostate imaging sequences, and average SNR loss is limited to 5%. Needle alignment accuracy of the robot under servo pneumatic control is better than 0.94 mm rms per axis. The complete system workflow has been evaluated in phantom studies with accurate visualization and targeting of five out of five 1 cm targets. The paper explains the robot mechanism and controller design, describes the system integration, and presents results of a preliminary evaluation of the system. PMID:21057608
Moes, Anne; Vliet, Harry van
2017-06-01
Consumer behaviour in 2016 shows that (r)etailers need online/offline integration to serve their clients better. An important distinguishing feature of the physical shop is that it can offer consumers a shopping experience. This study uses two experiments to examine the extent to which a fashion store's shopping experience can be conveyed to consumers via visual material (a regular photo, a 360-degree photo, and a virtual reality photo of the shop) without the consumers being in the shop itself. The effects of these visual materials are measured in terms of, among other things, purchase intention, intention to visit the physical shop, and online visit satisfaction. A theoretical framework is used to substantiate how the three types of pictures can be classified in terms of medium richness. The experiments show, among other outcomes, that consumers who saw the virtual reality photo of the shop had a more positive shopping experience, a higher purchase intention, a higher intention to visit the physical shop, and greater online visit satisfaction than those who saw only the regular photo or the 360-degree photo of the shop. Enjoyment and novelty seem to partly explain these effects.
Attractiveness Is Multimodal: Beauty Is Also in the Nose and Ear of the Beholder.
Groyecka, Agata; Pisanski, Katarzyna; Sorokowska, Agnieszka; Havlíček, Jan; Karwowski, Maciej; Puts, David; Roberts, S Craig; Sorokowski, Piotr
2017-01-01
Attractiveness plays a central role in human non-verbal communication and has been broadly examined in diverse subfields of contemporary psychology. Researchers have garnered compelling evidence in support of the evolutionary functions of physical attractiveness and its role in our daily lives, while at the same time, having largely ignored the significant contribution of non-visual modalities and the relationships among them. Acoustic and olfactory cues can, separately or in combination, strongly influence the perceived attractiveness of an individual and therefore attitudes and actions toward that person. Here, we discuss the relative importance of visual, auditory and olfactory traits in judgments of attractiveness, and review neural and behavioral studies that support the highly complex and multimodal nature of person perception. Further, we discuss three alternative evolutionary hypotheses aimed at explaining the function of multiple indices of attractiveness. In this review, we provide several lines of evidence supporting the importance of the voice, body odor, and facial and body appearance in the perception of attractiveness and mate preferences, and therefore the critical need to incorporate cross-modal perception and multisensory integration into future research on human physical attractiveness.
GenomeGraphs: integrated genomic data visualization with R.
Durinck, Steffen; Bullard, James; Spellman, Paul T; Dudoit, Sandrine
2009-01-06
Biological studies involve a growing number of distinct high-throughput experiments to characterize samples of interest. There is a lack of methods to visualize these different genomic datasets in a versatile manner. In addition, genomic data analysis requires integrated visualization of experimental data along with constantly changing genomic annotation and statistical analyses. We developed GenomeGraphs, an add-on software package for the statistical programming environment R, to facilitate integrated visualization of genomic datasets. GenomeGraphs uses the biomaRt package to perform on-line annotation queries to Ensembl and translates these into gene/transcript structures in viewports of the grid graphics package. This allows genomic annotation to be plotted together with experimental data. GenomeGraphs can also be used to plot custom annotation tracks in combination with different experimental data types in one plot using the same genomic coordinate system. GenomeGraphs is a flexible and extensible software package for visualizing a multitude of genomic datasets within the statistical programming environment R.
Short temporal asynchrony disrupts visual object recognition
Singer, Jedediah M.; Kreiman, Gabriel
2014-01-01
Humans can recognize objects and scenes in a small fraction of a second. The cascade of signals underlying rapid recognition might be disrupted by temporally jittering different parts of complex objects. Here we investigated the time course over which shape information can be integrated to allow for recognition of complex objects. We presented fragments of object images in an asynchronous fashion and behaviorally evaluated categorization performance. We observed that visual recognition was significantly disrupted by asynchronies of approximately 30 ms, suggesting that spatiotemporal integration begins to break down with even small deviations from simultaneity. However, moderate temporal asynchrony did not completely obliterate recognition; in fact, integration of visual shape information persisted even with an asynchrony of 100 ms. We describe the data with a concise model based on the dynamic reduction of uncertainty about what image was presented. These results emphasize the importance of timing in visual processing and provide strong constraints for the development of dynamical models of visual shape recognition. PMID:24819738
Briand, K A; Klein, R M
1987-05-01
In the present study we investigated whether the visually allocated "beam" studied by Posner and others is the same visual attentional resource that performs the role of feature integration in Treisman's model. Subjects were cued to attend to a certain spatial location by a visual cue, and performance at expected and unexpected stimulus locations was compared. Subjects searched for a target letter (R) with distractor letters that either could give rise to illusory conjunctions (PQ) or could not (PB). Results from three separate experiments showed that orienting attention in response to central cues (endogenous orienting) showed similar effects for both conjunction and feature search. However, when attention was oriented with peripheral visual cues (exogenous orienting), conjunction search showed larger effects of attention than did feature search. It is suggested that the attentional systems that are oriented in response to central and peripheral cues may not be the same and that only the latter performs a role in feature integration. Possibilities for future research are discussed.
Higher integrity of the motor and visual pathways in long-term video game players.
Zhang, Yang; Du, Guijin; Yang, Yongxin; Qin, Wen; Li, Xiaodong; Zhang, Quan
2015-01-01
Long-term video game players (VGPs) exhibit superior visual and motor skills compared with non-video game control subjects (NVGCs). However, the neural basis underlying the enhanced behavioral performance remains largely unknown. To clarify this issue, the present study compared the white matter integrity within the corticospinal tracts (CST), the superior longitudinal fasciculus (SLF), the inferior longitudinal fasciculus (ILF), and the inferior fronto-occipital fasciculus (IFOF) between the VGPs and the NVGCs using diffusion tensor imaging. Compared with the NVGCs, voxel-wise comparisons revealed significantly higher fractional anisotropy (FA) values in some regions within the left CST, left SLF, bilateral ILF, and IFOF in VGPs. Furthermore, higher FA values in the left CST at the level of the cerebral peduncle predicted a faster response in visual attention tasks. These results suggest that higher white matter integrity in the motor and higher-tier visual pathways is associated with long-term video game playing, which may contribute to the understanding of how video game play influences motor and visual performance.
Ostrand, Rachel; Blumstein, Sheila E.; Ferreira, Victor S.; Morgan, James L.
2016-01-01
Human speech perception often includes both an auditory and visual component. A conflict in these signals can result in the McGurk illusion, in which the listener perceives a fusion of the two streams, implying that information from both has been integrated. We report two experiments investigating whether auditory-visual integration of speech occurs before or after lexical access, and whether the visual signal influences lexical access at all. Subjects were presented with McGurk or Congruent primes and performed a lexical decision task on related or unrelated targets. Although subjects perceived the McGurk illusion, McGurk and Congruent primes with matching real-word auditory signals equivalently primed targets that were semantically related to the auditory signal, but not targets related to the McGurk percept. We conclude that the time course of auditory-visual integration is dependent on the lexicality of the auditory and visual input signals, and that listeners can lexically access one word and yet consciously perceive another. PMID:27011021
Pathview Web: user friendly pathway visualization and data integration
Pant, Gaurav; Bhavnasi, Yeshvant K.; Blanchard, Steven G.; Brouwer, Cory
2017-01-01
Abstract Pathway analysis is widely used in omics studies, and pathway-based data integration and visualization is a critical component of the analysis. To address this need, we recently developed a novel R package called Pathview, which maps, integrates and renders a large variety of biological data onto molecular pathway graphs. Here we present the Pathview Web server, which makes pathway visualization and data integration accessible to all scientists, including those without special computing skills or resources. Pathview Web features an intuitive graphical web interface and a user-centered design. The server not only expands the core functions of Pathview, but also provides many useful features not available in the offline R package. Importantly, the server presents a comprehensive workflow for both regular and integrated pathway analysis of multiple omics data. In addition, the server provides a RESTful API for programmatic access and convenient integration into third-party software or workflows. Pathview Web is openly and freely accessible at https://pathview.uncc.edu/. PMID:28482075
Estimation of bio-signal based on human motion for integrated visualization of daily-life.
Umetani, Tomohiro; Matsukawa, Tsuyoshi; Yokoyama, Kiyoko
2007-01-01
This paper describes a method for estimating bio-signals from human motion in daily life for an integrated visualization system. Recent advances in computing and measurement technology have facilitated the integrated visualization of bio-signals and human motion data. For visualization applications, it is desirable to have a method that infers muscle activity from human motion data and evaluates how physiological parameters change with movement. We assume that human motion is generated by muscle activity, which is in turn reflected in bio-signals such as electromyograms. This paper introduces a neural-network-based method for estimating bio-signals from motion; the same procedure can be used to estimate other physiological parameters. The experimental results show the feasibility of the proposed method.
Age Differences in Visual-Auditory Self-Motion Perception during a Simulated Driving Task
Ramkhalawansingh, Robert; Keshavarz, Behrang; Haycock, Bruce; Shahab, Saba; Campos, Jennifer L.
2016-01-01
Recent evidence suggests that visual-auditory cue integration may change as a function of age such that integration is heightened among older adults. Our goal was to determine whether these changes in multisensory integration are also observed in the context of self-motion perception under realistic task constraints. Thus, we developed a simulated driving paradigm in which we provided older and younger adults with visual motion cues (i.e., optic flow) and systematically manipulated the presence or absence of congruent auditory cues to self-motion (i.e., engine, tire, and wind sounds). Results demonstrated that the presence or absence of congruent auditory input had different effects on older and younger adults. Both age groups demonstrated a reduction in speed variability when auditory cues were present compared to when they were absent, but older adults demonstrated a proportionally greater reduction in speed variability under combined sensory conditions. These results are consistent with evidence indicating that multisensory integration is heightened in older adults. Importantly, this study is the first to provide evidence to suggest that age differences in multisensory integration may generalize from simple stimulus detection tasks to the integration of the more complex and dynamic visual and auditory cues that are experienced during self-motion. PMID:27199829
van Wyk, Andoret; Eksteen, Carina A; Rheeder, Paul
2014-01-01
Unilateral spatial neglect (USN) is a visual-perceptual disorder that entails the inability to perceive and integrate stimuli on one side of the body, resulting in neglect of that side. Stroke patients with USN present with extensive functional disability and prolonged therapy input. To determine the effect of saccadic eye movement training with visual scanning exercises (VSEs) integrated with task-specific activities on USN poststroke, a matched-pair randomized controlled trial was conducted. Subjects were matched according to their functional activity level and allocated to either a control (n = 12) or an experimental group (n = 12). All patients received task-specific activities over a 4-week intervention period. The experimental group received saccadic eye movement training with VSE integrated with task-specific activities as an "add-on" intervention. Assessments were conducted weekly over the intervention period. Statistically significant differences were noted on the King-Devick Test (P = .021), Star Cancellation Test (P = .016), and Barthel Index (P = .004). Intensive saccadic eye movement training with VSE integrated with task-specific activities has a significant effect on USN in patients poststroke. These results are supported by previously reviewed literature showing that saccadic eye movement training with VSE as an intervention approach significantly improves visual perceptual processing in participants with USN poststroke. The significantly improved visual perceptual processing translates to better visual function and a better ability to perform activities of daily living following the stroke. © The Author(s) 2014.
Fetsch, Christopher R; Wang, Sentao; Gu, Yong; Deangelis, Gregory C; Angelaki, Dora E
2007-01-17
Heading perception is a complex task that generally requires the integration of visual and vestibular cues. This sensory integration is complicated by the fact that these two modalities encode motion in distinct spatial reference frames (visual, eye-centered; vestibular, head-centered). Visual and vestibular heading signals converge in the primate dorsal subdivision of the medial superior temporal area (MSTd), a region thought to contribute to heading perception, but the reference frames of these signals remain unknown. We measured the heading tuning of MSTd neurons by presenting optic flow (visual condition), inertial motion (vestibular condition), or a congruent combination of both cues (combined condition). Static eye position was varied from trial to trial to determine the reference frame of tuning (eye-centered, head-centered, or intermediate). We found that tuning for optic flow was predominantly eye-centered, whereas tuning for inertial motion was intermediate but closer to head-centered. Reference frames in the two unimodal conditions were rarely matched in single neurons and uncorrelated across the population. Notably, reference frames in the combined condition varied as a function of the relative strength and spatial congruency of visual and vestibular tuning. This represents the first investigation of spatial reference frames in a naturalistic, multimodal condition in which cues may be integrated to improve perceptual performance. Our results compare favorably with the predictions of a recent neural network model that uses a recurrent architecture to perform optimal cue integration, suggesting that the brain could use a similar computational strategy to integrate sensory signals expressed in distinct frames of reference.
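The "optimal cue integration" referred to above is commonly formalized as reliability-weighted (maximum-likelihood) fusion of two independent Gaussian estimates of the same quantity, such as visual and vestibular heading. A minimal sketch with illustrative numbers, not data from the study:

```python
def fuse_cues(mu_a, var_a, mu_b, var_b):
    """Reliability-weighted (maximum-likelihood) fusion of two independent
    Gaussian estimates of the same quantity. Each cue is weighted by its
    inverse variance; the combined variance is smaller than either
    unimodal variance, i.e. fusion improves precision."""
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_b)
    mu = w_a * mu_a + (1.0 - w_a) * mu_b
    var = (var_a * var_b) / (var_a + var_b)
    return mu, var

# Equally reliable cues -> combined estimate halfway between them,
# with half the variance of either cue alone
mu, var = fuse_cues(10.0, 4.0, 20.0, 4.0)   # mu = 15.0, var = 2.0
```

When one cue is noisier, the fused estimate shifts toward the more reliable cue, which is the behavioral signature these neurophysiological studies aim to explain.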
Integrating visual learning within a model-based ATR system
NASA Astrophysics Data System (ADS)
Carlotto, Mark; Nebrich, Mark
2017-05-01
Automatic target recognition (ATR) systems, like human photo-interpreters, rely on a variety of visual information for detecting, classifying, and identifying manmade objects in aerial imagery. We describe the integration of a visual learning component into the Image Data Conditioner (IDC) for target/clutter and other visual classification tasks. The component is based on an implementation of a model of the visual cortex developed by Serre, Wolf, and Poggio. Visual learning in an ATR context requires the ability to recognize objects independent of location, scale, and rotation. Our method uses IDC to extract, rotate, and scale image chips at candidate target locations. A bootstrap learning method effectively extends the operation of the classifier beyond the training set and provides a measure of confidence. We show how the classifier can be used to learn other features that are difficult to compute from imagery such as target direction, and to assess the performance of the visual learning process itself.
Visual Learning in Application of Integration
NASA Astrophysics Data System (ADS)
Bt Shafie, Afza; Barnachea Janier, Josefina; Bt Wan Ahmad, Wan Fatimah
Innovative use of technology can improve the way Mathematics is taught and can enhance students' learning of concepts through visualization. Visualization in Mathematics refers to the use of text, pictures, graphs and animations to hold learners' attention while they learn the concepts. This paper describes the use of a developed multimedia courseware as an effective tool for visual learning of mathematics. The focus is on the application of integration, a topic in Engineering Mathematics 2, a course offered to foundation students at Universiti Teknologi PETRONAS. A questionnaire was distributed to obtain feedback on the visual representation and on students' attitudes towards using visual representation as a learning tool. The questionnaire consists of 3 sections: courseware design (Part A), courseware usability (Part B) and attitudes towards using the courseware (Part C). The results showed that students found the use of visual representation beneficial in learning the topic.
Kukona, Anuenue; Tabor, Whitney
2011-01-01
The visual world paradigm presents listeners with a challenging problem: they must integrate two disparate signals, the spoken language and the visual context, in support of action (e.g., complex movements of the eyes across a scene). We present Impulse Processing, a dynamical systems approach to incremental eye movements in the visual world that suggests a framework for integrating language, vision, and action generally. Our approach assumes that impulses driven by the language and the visual context impinge minutely on a dynamical landscape of attractors corresponding to the potential eye-movement behaviors of the system. We test three unique predictions of our approach in an empirical study in the visual world paradigm, and describe an implementation in an artificial neural network. We discuss the Impulse Processing framework in relation to other models of the visual world paradigm. PMID:21609355
Pasricha, Neel D; Bhullar, Paramjit K; Shieh, Christine; Carrasco-Zevallos, Oscar M; Keller, Brenton; Izatt, Joseph A; Toth, Cynthia A; Freedman, Sharon F; Kuo, Anthony N
2017-02-14
The authors report the use of swept-source microscope-integrated optical coherence tomography (SS-MIOCT), capable of live four-dimensional (three-dimensional across time) intraoperative imaging, to directly visualize suture depth during lateral rectus resection. Key surgical steps visualized in this report included needle depth during partial and full-thickness muscle passes along with scleral passes. [J Pediatr Ophthalmol Strabismus. 2017;54:e1-e5.]. Copyright 2017, SLACK Incorporated.
Shen, Mowei; Xu, Haokui; Zhang, Haihang; Shui, Rende; Zhang, Meng; Zhou, Jifan
2015-08-01
Visual working memory (VWM) has traditionally been viewed as a mental structure subsequent to visual perception that stores the final output of perceptual processing. However, VWM has recently been emphasized as a critical component of online perception, providing storage for the intermediate perceptual representations produced during visual processing. This interactive view holds the core assumption that VWM is not the terminus of perceptual processing; rather, the stored visual information continues to undergo perceptual processing if necessary. The current study tests this assumption by demonstrating an example of involuntary integration of VWM content, creating the Ponzo illusion in VWM: when the Ponzo illusion figure was divided into its individual components and sequentially encoded into VWM, the temporally separated components were involuntarily integrated, leading to distorted length perception of the two horizontal lines. This VWM Ponzo illusion was replicated when the figure components were presented in different combinations and presentation orders. The magnitude of the illusion was significantly correlated between the VWM and perceptual versions of the Ponzo illusion. These results suggest that the information integration underlying the VWM Ponzo illusion is constrained by the laws of visual perception and similarly affected by the common individual factors that govern perception. Thus, our findings provide compelling evidence that VWM functions as a buffer serving perceptual processes at early stages. Copyright © 2015 Elsevier B.V. All rights reserved.
GENETICS OF WHITE MATTER DEVELOPMENT: A DTI STUDY OF 705 TWINS AND THEIR SIBLINGS AGED 12 TO 29
Chiang, Ming-Chang; McMahon, Katie L.; de Zubicaray, Greig I.; Martin, Nicholas G.; Hickie, Ian; Toga, Arthur W.; Wright, Margaret J.; Thompson, Paul M.
2011-01-01
White matter microstructure is under strong genetic control, yet it is largely unknown how genetic influences change from childhood into adulthood. In one of the largest brain mapping studies ever performed, we determined whether the genetic control over white matter architecture depends on age, sex, socioeconomic status (SES), and intelligence quotient (IQ). We assessed white matter integrity voxelwise using diffusion tensor imaging at high magnetic field (4-Tesla), in 705 twins and their siblings (age range 12–29; 290 M/415 F). White matter integrity was quantified using a widely accepted measure, fractional anisotropy (FA). We fitted gene-environment interaction models pointwise, to visualize brain regions where age, sex, SES and IQ modulate heritability of fiber integrity. We hypothesized that environmental factors would start to outweigh genetic factors during late childhood and adolescence. Genetic influences were greater in adolescence versus adulthood, and greater in males than in females. Socioeconomic status significantly interacted with genes that affect fiber integrity: heritability was higher in those with higher SES. In people with above-average IQ, genetic factors explained over 80% of the observed FA variability in the thalamus, genu, posterior internal capsule, and superior corona radiata. In those with below-average IQ, however, only around 40% of FA variability in the same regions was attributable to genetic factors. Genes affect fiber integrity, but their effects vary with age, sex, SES and IQ. Gene-environment interactions are vital to consider in the search for specific genetic polymorphisms that affect brain integrity and connectivity. PMID:20950689
Development of a Bayesian Estimator for Audio-Visual Integration: A Neurocomputational Study
Ursino, Mauro; Crisafulli, Andrea; di Pellegrino, Giuseppe; Magosso, Elisa; Cuppini, Cristiano
2017-01-01
The brain integrates information from different sensory modalities to generate a coherent and accurate percept of external events. Several experimental studies suggest that this integration follows the principle of Bayesian estimation. However, the neural mechanisms responsible for this behavior, and its development in a multisensory environment, are still insufficiently understood. We recently presented a neural network model of audio-visual integration (Neural Computation, 2017) to investigate how a Bayesian estimator can spontaneously develop from the statistics of external stimuli. The model assumes the presence of two topologically organized unimodal areas (auditory and visual). Neurons in each area receive an input from the external environment, computed as the inner product of the sensory-specific stimulus and the receptive field synapses, and a cross-modal input from neurons of the other modality. Based on sensory experience, synapses were trained via Hebbian potentiation and a decay term. The aim of this work is to improve the previous model by including a more realistic distribution of visual stimuli: visual stimuli have higher spatial accuracy at the central azimuthal coordinate and lower accuracy at the periphery. Moreover, their prior probability is higher at the center and decreases toward the periphery. Simulations show that, after training, the receptive fields of visual and auditory neurons shrink to reproduce the accuracy of the input (both at the center and at the periphery in the visual case), thus realizing the likelihood estimate of unimodal spatial position. Moreover, the preferred positions of visual neurons contract toward the center, thus encoding the prior probability of the visual input. Finally, a prior probability of the co-occurrence of audio-visual stimuli is encoded in the cross-modal synapses. The model is able to simulate the main properties of a Bayesian estimator and to reproduce behavioral data in all conditions examined.
In particular, in unisensory conditions the visual estimates exhibit a bias toward the fovea, which increases with the level of noise. In cross modal conditions, the SD of the estimates decreases when using congruent audio-visual stimuli, and a ventriloquism effect becomes evident in case of spatially disparate stimuli. Moreover, the ventriloquism decreases with the eccentricity. PMID:29046631
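The training rule described above, Hebbian potentiation plus a decay term, can be sketched for a simple rate-based network. The learning rate and decay constants below are illustrative, not the paper's values:

```python
def hebbian_step(w, pre, post, lr=0.05, decay=0.01):
    """One update of a rate-based synaptic weight matrix: Hebbian
    potentiation (co-active pre- and postsynaptic units strengthen
    their connection) plus a passive decay term that pulls unused
    weights back toward zero. lr and decay are illustrative values."""
    return [[w_ij + lr * post[i] * pre[j] - decay * w_ij
             for j, w_ij in enumerate(row)]
            for i, row in enumerate(w)]

# Two presynaptic and two postsynaptic units; only unit 0 is active
# on both sides, so only that synapse is potentiated.
w = [[0.0, 0.0], [0.0, 0.0]]
w = hebbian_step(w, pre=[1.0, 0.0], post=[1.0, 0.0])
```

Repeated over many stimulus presentations, correlated activity sculpts the receptive-field synapses toward the input statistics, which is the mechanism the model exploits to develop its Bayesian behavior.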
ERIC Educational Resources Information Center
Gao, Tao; Gao, Zaifeng; Li, Jie; Sun, Zhongqiang; Shen, Mowei
2011-01-01
Mainstream theories of visual perception assume that visual working memory (VWM) is critical for integrating online perceptual information and constructing coherent visual experiences in changing environments. Given the dynamic interaction between online perception and VWM, we propose that how visual information is processed during visual…
ERIC Educational Resources Information Center
Ohio State Dept. of Education, Columbus. Div. of Career-Technical and Adult Education.
This document contains an introduction to the Ohio Integrated Technical and Academic Competency (ITAC) and Specialization ITAC; an overview of the visual communications art profession; a list acknowledging professionals who helped develop the competency list; and the comprehensive list of the professional or occupational competencies deemed…
Unconscious Cross-Modal Priming of Auditory Sound Localization by Visual Words
ERIC Educational Resources Information Center
Ansorge, Ulrich; Khalid, Shah; Laback, Bernhard
2016-01-01
Little is known about the cross-modal integration of unconscious and conscious information. In the current study, we therefore tested whether the spatial meaning of an unconscious visual word, such as "up", influences the perceived location of a subsequently presented auditory target. Although cross-modal integration of unconscious…
Visual-Auditory Integration during Speech Imitation in Autism
ERIC Educational Resources Information Center
Williams, Justin H. G.; Massaro, Dominic W.; Peel, Natalie J.; Bosseler, Alexis; Suddendorf, Thomas
2004-01-01
Children with autistic spectrum disorder (ASD) may have poor audio-visual integration, possibly reflecting dysfunctional "mirror neuron" systems which have been hypothesised to be at the core of the condition. In the present study, a computer program, utilizing speech synthesizer software and a "virtual" head (Baldi), delivered speech stimuli for…
Visual-Motor Integration in Children with Prader-Willi Syndrome
ERIC Educational Resources Information Center
Lo, S. T.; Collin, P. J. L.; Hokken-Koelega, A. C. S.
2015-01-01
Background: Prader-Willi syndrome (PWS) is characterised by hypotonia, hypogonadism, short stature, obesity, behavioural problems, intellectual disability, and delay in language, social and motor development. There is very limited knowledge about visual-motor integration in children with PWS. Method: Seventy-three children with PWS aged 7-17 years…
The Perceptual Basis of the Modality Effect in Multimedia Learning
ERIC Educational Resources Information Center
Rummer, Ralf; Schweppe, Judith; Furstenberg, Anne; Scheiter, Katharina; Zindler, Antje
2011-01-01
Various studies have demonstrated an advantage of auditory over visual text modality when learning with texts and pictures. To explain this modality effect, two complementary assumptions are proposed by cognitive theories of multimedia learning: first, the visuospatial load hypothesis, which explains the modality effect in terms of visuospatial…
Marvel, Skylar W; To, Kimberly; Grimm, Fabian A; Wright, Fred A; Rusyn, Ivan; Reif, David M
2018-03-05
Drawing integrated conclusions from diverse source data requires synthesis across multiple types of information. The ToxPi (Toxicological Prioritization Index) is an analytical framework developed to enable the integration of multiple sources of evidence by transforming data into integrated, visual profiles. Methodological improvements have advanced ToxPi and expanded its applicability, necessitating a new, consolidated software platform to provide this functionality while preserving flexibility for future updates. We detail the implementation of a new graphical user interface for ToxPi that provides interactive visualization, analysis, reporting, and portability. The interface is deployed as a stand-alone, platform-independent Java application, with a modular design to accommodate the inclusion of future analytics. The new ToxPi interface introduces several features, from flexible data import formats (including legacy formats that permit backward compatibility) to similarity-based clustering to options for high-resolution graphical output. We present the new ToxPi interface for dynamic exploration, visualization, and sharing of integrated data models. The ToxPi interface is freely available as a single compressed download that includes the main Java executable, all libraries, example data files, and a complete user manual at http://toxpi.org .
Sounds can boost the awareness of visual events through attention without cross-modal integration.
Pápai, Márta Szabina; Soto-Faraco, Salvador
2017-01-31
Cross-modal interactions can lead to enhancement of visual perception, even for visual events below awareness. However, the underlying mechanism is still unclear. Can purely bottom-up cross-modal integration break through the threshold of awareness? We used a binocular rivalry paradigm to measure perceptual switches after brief flashes or sounds which sometimes co-occurred. When flashes at the suppressed eye coincided with sounds, perceptual switches occurred the earliest. Yet, contrary to the hypothesis of cross-modal integration, this facilitation never surpassed the prediction of probability summation of independent sensory signals. A follow-up experiment replicated the same pattern of results using silent gaps embedded in continuous noise instead of sounds. This manipulation should weaken putative sound-flash integration while keeping the events salient as bottom-up attention cues. Additional results showed that spatial congruency between flashes and sounds did not determine the effectiveness of cross-modal facilitation, which again was not better than probability summation. Thus, the present findings fail to fully support the hypothesis of bottom-up cross-modal integration, above and beyond the independent contribution of two transient signals, as an account for cross-modal enhancement of visual events below the level of awareness.
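The probability-summation benchmark against which the facilitation was compared assumes independent processing in each channel: a bimodal event is effective whenever either signal alone would be, with no integration required. A minimal sketch (the probabilities are illustrative):

```python
def probability_summation(p_a, p_b):
    """Predicted effectiveness of a bimodal event if the two signals are
    handled by independent channels: the event succeeds whenever either
    channel alone succeeds (no cross-modal integration needed)."""
    return 1.0 - (1.0 - p_a) * (1.0 - p_b)

# e.g. a flash effective 30% of the time and a sound 40% of the time
combined = probability_summation(0.3, 0.4)   # ~0.58
```

Only facilitation exceeding this independent-channels prediction would count as evidence for genuine cross-modal integration, which is the comparison the study reports as failing.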
Does Temporal Integration Occur for Unrecognizable Words in Visual Crowding?
Zhou, Jifan; Lee, Chia-Lin; Li, Kuei-An; Tien, Yung-Hsuan; Yeh, Su-Ling
2016-01-01
Visual crowding—the inability to see an object when it is surrounded by flankers in the periphery—does not block semantic activation: unrecognizable words due to visual crowding still generated robust semantic priming in subsequent lexical decision tasks. Based on the previous finding, the current study further explored whether unrecognizable crowded words can be temporally integrated into a phrase. By showing one word at a time, we presented Chinese four-word idioms with either a congruent or incongruent ending word in order to examine whether the three preceding crowded words can be temporally integrated to form a semantic context so as to affect the processing of the ending word. Results from both behavioral (Experiment 1) and Event-Related Potential (Experiment 2 and 3) measures showed congruency effect in only the non-crowded condition, which does not support the existence of unconscious multi-word integration. Aside from four-word idioms, we also found that two-word (modifier + adjective combination) integration—the simplest kind of temporal semantic integration—did not occur in visual crowding (Experiment 4). Our findings suggest that integration of temporally separated words might require conscious awareness, at least under the timing conditions tested in the current study. PMID:26890366
A novel computational model to probe visual search deficits during motor performance
Singh, Tarkeshwar; Fridriksson, Julius; Perry, Christopher M.; Tryon, Sarah C.; Ross, Angela; Fritz, Stacy
2016-01-01
Successful execution of many motor skills relies on well-organized visual search (voluntary eye movements that actively scan the environment for task-relevant information). Although impairments of visual search that result from brain injuries are linked to diminished motor performance, the neural processes that guide visual search within this context remain largely unknown. The first objective of this study was to examine how visual search in healthy adults and stroke survivors is used to guide hand movements during the Trail Making Test (TMT), a neuropsychological task that is a strong predictor of visuomotor and cognitive deficits. Our second objective was to develop a novel computational model to investigate combinatorial interactions between three underlying processes of visual search (spatial planning, working memory, and peripheral visual processing). We predicted that stroke survivors would exhibit deficits in integrating the three underlying processes, resulting in deteriorated overall task performance. We found that normal TMT performance is associated with patterns of visual search that primarily rely on spatial planning and/or working memory (but not peripheral visual processing). Our computational model suggested that abnormal TMT performance following stroke is associated with impairments of visual search that are characterized by deficits integrating spatial planning and working memory. This innovative methodology provides a novel framework for studying how the neural processes underlying visual search interact combinatorially to guide motor performance. NEW & NOTEWORTHY Visual search has traditionally been studied in cognitive and perceptual paradigms, but little is known about how it contributes to visuomotor performance. We have developed a novel computational model to examine how three underlying processes of visual search (spatial planning, working memory, and peripheral visual processing) contribute to visual search during a visuomotor task. 
We show that deficits integrating spatial planning and working memory underlie abnormal performance in stroke survivors with frontoparietal damage. PMID:27733596
PsyToolkit: a software package for programming psychological experiments using Linux.
Stoet, Gijsbert
2010-11-01
PsyToolkit is a set of software tools for programming psychological experiments on Linux computers. Given that PsyToolkit is freely available under the GNU General Public License, open source, and designed such that it can easily be modified and extended for individual needs, it is suitable not only for technically oriented Linux users, but also for students, researchers on small budgets, and universities in developing countries. The software includes a high-level scripting language, a library for the programming language C, and a questionnaire presenter. The software easily integrates with other open source tools, such as the statistical software package R. PsyToolkit is designed to work with external hardware (including IoLab and Cedrus response keyboards and two common digital input/output boards) and to support millisecond timing precision. Four in-depth examples explain the basic functionality of PsyToolkit. Example 1 demonstrates a stimulus-response compatibility experiment. Example 2 demonstrates a novel mouse-controlled visual search experiment. Example 3 shows how to control light-emitting diodes using PsyToolkit, and Example 4 shows how to build a light-detection sensor. The last two examples explain the electronic hardware setup such that they can even be used with other software packages.
Differential coactivation in a redundant signals task with weak and strong go/no-go stimuli.
Minakata, Katsumi; Gondan, Matthias
2018-05-01
When participants respond to stimuli of two sources, response times (RTs) are often faster when both stimuli are presented together relative to the RTs obtained when presented separately (redundant signals effect [RSE]). Race models and coactivation models can explain the RSE. In race models, separate channels process the two stimulus components, and the faster processing time determines the overall RT. In audiovisual experiments, the RSE is often higher than predicted by race models, and coactivation models have been proposed that assume integrated processing of the two stimuli. Where does coactivation occur? We implemented a go/no-go task with randomly intermixed weak and strong auditory, visual, and audiovisual stimuli. In one experimental session, participants had to respond to strong stimuli and withhold their response to weak stimuli. In the other session, these roles were reversed. Interestingly, coactivation was only observed in the experimental session in which participants had to respond to strong stimuli. If weak stimuli served as targets, results were widely consistent with the race model prediction. The pattern of results contradicts the inverse effectiveness law. We present two models that explain the result in terms of absolute and relative thresholds.
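A standard way to test race models against coactivation is Miller's race-model inequality, which bounds the redundant-condition response-time CDF by the sum of the two unimodal CDFs at every time point. A minimal sketch (the CDF values are hypothetical, not the study's data):

```python
def race_model_bound(cdf_a, cdf_v):
    """Miller's race-model inequality: if separate channels race
    independently, the redundant-signal CDF cannot exceed
    F_A(t) + F_V(t) (capped at 1) at any time point t."""
    return [min(1.0, fa + fv) for fa, fv in zip(cdf_a, cdf_v)]

def violates_race_model(cdf_av, cdf_a, cdf_v):
    """True if the observed redundant-condition CDF exceeds the bound
    anywhere, which is taken as evidence for coactivation."""
    return any(fav > b
               for fav, b in zip(cdf_av, race_model_bound(cdf_a, cdf_v)))

# Hypothetical CDFs sampled at the same RT quantile points
cdf_a   = [0.10, 0.40, 0.80]
cdf_v   = [0.10, 0.30, 0.70]
fast_av = [0.30, 0.80, 1.00]   # exceeds the bound -> coactivation
slow_av = [0.15, 0.60, 0.95]   # within the bound -> race model suffices
```

In the experiment described, the strong-target session produced redundant-condition RTs fast enough to exceed this bound, while the weak-target session stayed within it.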
Dynamic photoelasticity by TDI imaging
NASA Astrophysics Data System (ADS)
Asundi, Anand K.; Sajan, M. R.
2001-06-01
High-speed photographic systems like the image rotation camera, the Cranz-Schardin camera and the drum camera are typically used for the recording and visualization of dynamic events in stress analysis, fluid mechanics, etc. All these systems are fairly expensive and generally not simple to use. Furthermore, they are all based on photographic film recording, which requires time-consuming and tedious wet processing of the films. Digital cameras are replacing conventional cameras, to a certain extent, in static experiments. Recently, there has been considerable interest in developing and modifying CCD architectures and recording arrangements for dynamic scene analysis. Herein we report the use of a CCD camera operating in the time delay and integration (TDI) mode for digitally recording dynamic photoelastic stress patterns. Applications in strobe and streak photoelastic pattern recording, along with system limitations, are explained in the paper.
Influence of Visual Prism Adaptation on Auditory Space Representation.
Pochopien, Klaudia; Fahle, Manfred
2017-01-01
Prisms shifting the visual input sideways produce a mismatch between the visual and felt positions of one's hand. Prism adaptation eliminates this mismatch, realigning hand proprioception with visual input. Whether this realignment concerns exclusively the visuo-(hand)motor system or generalizes to acoustic inputs is controversial. We here show that there is indeed a slight influence of visual adaptation on the perceived direction of acoustic sources. However, this shift in perceived auditory direction can be fully explained by a subconscious head rotation during prism exposure and by changes in arm proprioception. Hence, prism adaptation generalizes to auditory space perception only indirectly.
A training tool for visual aids. Using tracing techniques to create visual aids.
Clark, M; Walters, J E; Wileman, R
1982-01-01
This training tool explains the use of tracing techniques to create visual aids that require few materials and no special drawing skills. Magazines, books, posters, and many other materials contain photographs and drawings that can be used to create visual aids for health training and public health education. The materials required are pencils, an eraser, crayons or colored marking pens, paper clips, tracing and drawing paper, carbon paper, and sources of visual images. The procedure is described. The material was prepared by INTRAH staff members. Other materials cover how to evaluate teaching, how to create a family health case study, and training in group dynamics.
Liu, Rong; Zhou, Jiawei; Zhao, Haoxin; Dai, Yun; Zhang, Yudong; Tang, Yong; Zhou, Yifeng
2014-01-01
This study aimed to explore the neural development status of the visual system of children (around 8 years old) using contrast sensitivity. We achieved this by eliminating the influence of higher order aberrations (HOAs) with adaptive optics correction. We measured HOAs, modulation transfer functions (MTFs) and contrast sensitivity functions (CSFs) of six children and five adults with both corrected and uncorrected HOAs. We found that when HOAs were corrected, children and adults both showed improvements in MTF and CSF. However, the CSF of children was still lower than the adult level, indicating the difference in contrast sensitivity between groups cannot be explained by differences in optical factors. Further study showed that the difference between the groups also could not be explained by differences in non-visual factors. With these results we concluded that the neural systems underlying vision in children of around 8 years old are still immature in contrast sensitivity. PMID:24732728
[To explain is to narrate. How to visualize scientific data].
Hawtin, Nigel
2014-01-01
When you try to appeal to a wide-ranging audience, as at New Scientist, which addresses scientists as well as the general public, your scientific visual explainer must be succinct, clear, accurate, and easily understandable. To reach this goal, your message should present only the main data, those that let you balance information and clarity: information should be put into context and extra detail cut away. It is very important, then, to know well both your audience and the subject you are going to describe, as graphic masters of the past, such as William Playfair and Charles Minard, have taught us. Moreover, you should try to engage your reader by connecting the storytelling power of words with the driving force of graphics: colours, visual elements, typography. To be effective, in fact, an infographic should not only be truthful and functional, but also elegant, with style and legibility.
Ten Brink, Antonia F.; Nijboer, Tanja C. W.; Bergsma, Douwe P.; Barton, Jason J. S.; Van der Stigchel, Stefan
2015-01-01
In patients with visual hemifield defects residual visual functions may be present, a phenomenon called blindsight. The superior colliculus (SC) is part of the spared pathway that is considered to be responsible for this phenomenon. Given that the SC processes input from different modalities and is involved in the programming of saccadic eye movements, the aim of the present study was to examine whether multimodal integration can modulate oculomotor competition in the damaged hemifield. We conducted two experiments with eight patients who had visual field defects due to lesions that affected the retinogeniculate pathway but spared the direct retinotectal SC pathway. They had to make saccades to an auditory target that was presented alone or in combination with a visual stimulus. The visual stimulus could either be spatially coincident with the auditory target (possibly enhancing the auditory target signal), or spatially disparate to the auditory target (possibly competing with the auditory target signal). For each patient we compared the saccade endpoint deviation in these two bimodal conditions with the endpoint deviation in the unimodal condition (auditory target alone). In all seven hemianopic patients, saccade accuracy was affected by visual stimuli only in the intact, but not in the blind visual field. In one patient with a more limited quadrantanopia, a facilitation effect of the spatially coincident visual stimulus was observed. We conclude that our results show that multisensory integration is infrequent in the blind field of patients with hemianopia. PMID:25835952
Explaining Changing Suicide Rates in Norway 1948-2004: The Role of Social Integration
ERIC Educational Resources Information Center
Barstad, Anders
2008-01-01
Using Norway 1948-2004 as a case, I test whether changes in variables related to social integration can explain changes in suicide rates. The method is the Box-Jenkins approach to time-series analysis. Different aspects of family integration contribute significantly to the explanation of Norwegian suicide rates in this period. The estimated effect…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramanathan, Arvind; Pullum, Laura L; Steed, Chad A
2013-01-01
In this paper, we present an overview of the big data challenges in disease bio-surveillance and then discuss the use of visual analytics for integrating data and turning it into knowledge. We will explore two integration scenarios: (1) combining text and multimedia sources to improve situational awareness and (2) enhancing disease spread model data with real-time bio-surveillance data. Together, the proposed integration methodologies can improve awareness about when, where and how emerging diseases can affect wide geographic regions.
Liu, Hong; Zhang, Gaoyan; Liu, Baolin
2017-04-01
In the Chinese language, a polyphone is a kind of special character that has more than one pronunciation, with each pronunciation corresponding to a different meaning. Here, we aimed to reveal the cognitive processing of audio-visual information integration of polyphones in a sentence context using the event-related potential (ERP) method. Sentences ending with polyphones were presented to subjects simultaneously in both an auditory and a visual modality. Four experimental conditions were set in which the visual presentations were the same, but the pronunciations of the polyphones were: the correct pronunciation; another pronunciation of the polyphone; a semantically appropriate pronunciation but not the pronunciation of the polyphone; or a semantically inappropriate pronunciation but also not the pronunciation of the polyphone. The behavioral results demonstrated significant differences in response accuracies when judging the semantic meanings of the audio-visual sentences, which reflected the different demands on cognitive resources. The ERP results showed that in the early stage, abnormal pronunciations were represented by the amplitude of the P200 component. Interestingly, because the phonological information mediated access to the lexical semantics, the amplitude and latency of the N400 component changed linearly across conditions, which may reflect the gradually increased semantic mismatch in the four conditions when integrating the auditory pronunciation with the visual information. Moreover, the amplitude of the late positive shift (LPS) showed a significant correlation with the behavioral response accuracies, demonstrating that the LPS component reveals the demand of cognitive resources for monitoring and resolving semantic conflicts when integrating the audio-visual information.
The Communication Strategy of NASA's Earth Observatory
NASA Astrophysics Data System (ADS)
Simmon, R.; Ward, K.; Riebeek, H.; Allen, J.; Przyborski, P.; Scott, M.; Carlowicz, M. J.
2010-12-01
Climate change is a complex, multi-disciplinary subject. Accurately conveying this complexity to general audiences, while still communicating the basic facts, is challenging. Our approach is to combine climate change information with a wide range of Earth system science topics, illustrated by satellite imagery and data visualizations. NASA's Earth Observatory web site (earthobservatory.nasa.gov) uses the broad range of NASA's remote sensing technologies, data, and research to communicate climate change science. We serve two primary audiences: the "attentive public" --people interested in and willing to seek out information about science, technology, and the environment--and media. We cover the breadth of Earth science, with information about climate change integrated with stories about weather, geology, oceanography, and solar flares. Current event-driven imagery is used as a hook to draw readers. We then supply links to supplemental information, either about current research or the scientific basics. We use analogies, carefully explain jargon or acronyms, and build narratives which both attract readers and make information easier to remember. These narratives are accompanied by primers on topics like energy balance or the water cycle. Text is carefully integrated with illustrations and state-of-the-art data visualizations. Other site features include a growing list of climate questions and answers, addressing common misconceptions about global warming and climate change. Maps of global environmental parameters like temperature, rainfall, and vegetation show seasonal change and long-term trends. Blogs from researchers in the field provide a look at the day-to-day process of science. For the media, public domain imagery is supplied at full resolution and links are provided to primary sources.
Attention modulates trans-saccadic integration.
Stewart, Emma E M; Schütz, Alexander C
2018-01-01
With every saccade, humans must reconcile the low-resolution peripheral information available before the saccade with the high-resolution foveal information acquired after it. While research has shown that we are able to integrate peripheral and foveal vision in a near-optimal manner, it is still unclear which mechanisms underpin this important perceptual process. One potential mechanism that may moderate this integration process is visual attention. Pre-saccadic attention is a well-documented phenomenon whereby visual attention shifts to the location of an upcoming saccade before the saccade is executed. While it plays an important role in other peri-saccadic processes such as predictive remapping, the role of attention in the integration process is as yet unknown. This study aimed to determine whether the presentation of an attentional distractor during a saccade impaired trans-saccadic integration, and to measure the time course of this impairment. Results showed that presenting an attentional distractor impaired integration performance both before saccade onset and during the saccade, in selected subjects who showed integration in the absence of a distractor. This suggests that visual attention may be a mechanism that facilitates trans-saccadic integration. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
Mensh, B D; Aksay, E; Lee, D D; Seung, H S; Tank, D W
2004-03-01
To quantify performance of the goldfish oculomotor neural integrator and determine its dependence on visual feedback, we measured the relationship between eye drift-velocity and position during spontaneous gaze fixations in the light and in the dark. In the light, drift-velocities were typically less than 1 deg/s, similar to those observed in humans. During brief periods in darkness, drift-velocities were only slightly larger, but showed greater variance. One hour in darkness degraded fixation-holding performance. These findings suggest that while visual feedback is not essential for online fixation stability, it may be used to tune the mechanism of persistent neural activity in the oculomotor integrator.
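The fixation behavior quantified in this abstract is commonly modeled as a leaky integrator, in which eye position drifts back toward a null point with velocity proportional to eccentricity; time in darkness is taken to de-tune the integrator, shortening its time constant. A toy Euler-step simulation (both time-constant values are illustrative, not the paper's estimates):

```python
import numpy as np

def simulate_fixation(e0, tau, dt=0.01, duration=5.0):
    """Leaky-integrator eye position: dE/dt = -E / tau.
    A larger tau (a well-tuned integrator) means slower drift."""
    n = int(duration / dt)
    e = np.empty(n)
    e[0] = e0
    for i in range(1, n):
        e[i] = e[i - 1] - (e[i - 1] / tau) * dt  # forward-Euler step
    return e

# Start a fixation at 10 deg eccentricity under two hypothetical time constants
light = simulate_fixation(e0=10.0, tau=20.0)  # visual feedback keeps tau long
dark = simulate_fixation(e0=10.0, tau=2.0)    # after prolonged darkness: leakier
print(abs(light[-1] - 10.0) < abs(dark[-1] - 10.0))  # -> True: less drift with feedback
```

In this framing, visual feedback is not needed to hold gaze moment to moment, but serves to keep tau large, which is consistent with the paper's finding that fixation degrades only after an hour in darkness.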
ERIC Educational Resources Information Center
Simmering, Vanessa R.; Patterson, Rebecca
2012-01-01
Numerous studies have established that visual working memory has a limited capacity that increases during childhood. However, debate continues over the source of capacity limits and its developmental increase. Simmering (2008) adapted a computational model of spatial cognitive development, the Dynamic Field Theory, to explain not only the source…
Perceiving the Present and a Systematization of Illusions
ERIC Educational Resources Information Center
Changizi, Mark A.; Hsieh, Andrew; Nijhawan, Romi; Kanai, Ryota; Shimojo, Shinsuke
2008-01-01
Over the history of the study of visual perception there has been great success at discovering countless visual illusions. There has been less success in organizing the overwhelming variety of illusions into empirical generalizations (much less explaining them all via a unifying theory). Here, this article shows that it is possible to…
Penn State's Visual Image User Study
ERIC Educational Resources Information Center
Pisciotta, Henry A.; Dooris, Michael J.; Frost, James; Halm, Michael
2005-01-01
The Visual Image User Study (VIUS), an extensive needs assessment project at Penn State University, describes academic users of pictures and their perceptions. These findings outline the potential market for digital images and list the likely determinates of whether or not a system will be used. They also explain some key user requirements for…
Explaining Physical Activity in Children with Visual Impairments: A Family Systems Approach
ERIC Educational Resources Information Center
Ayvazoglu, Nalan R.; Oh, Hyun-Kyoung; Kozub, Francis M.
2006-01-01
Using a mixed design, this study explored physical activity in children with visual impairments from a family perspective. Quantitative findings revealed varied amounts of physical activity; younger children were more active than older participants. Further, parents were involved in moderate to vigorous physical activity 0% to 21% of the time when…
A Visual Short-Term Memory Advantage for Objects of Expertise
ERIC Educational Resources Information Center
Curby, Kim M.; Glazek, Kuba; Gauthier, Isabel
2009-01-01
Visual short-term memory (VSTM) is limited, especially for complex objects. Its capacity, however, is greater for faces than for other objects; this advantage may stem from the holistic nature of face processing. If the holistic processing explains this advantage, object expertise--which also relies on holistic processing--should endow experts…
Visual-Spatial Art and Design Literacy as a Prelude to Aesthetic Growth
ERIC Educational Resources Information Center
Lerner, Fern
2018-01-01
In bridging ideas from the forum of visual-spatial learning with those of art and design learning, inspiration is taken from Piaget who explained that the evolution of spatial cognition occurs through perception, as well as through thought and imagination. Insights are embraced from interdisciplinary educational theorists, intertwining and…
ERIC Educational Resources Information Center
Parnafes, Orit
2012-01-01
This article presents a theoretical model of the process by which students construct and elaborate explanations of scientific phenomena using visual representations. The model describes progress in the underlying conceptual processes in students' explanations as a reorganization of fine-grained knowledge elements based on the Knowledge in Pieces…
A composite measure to explore visual disability in primary progressive multiple sclerosis.
Poretto, Valentina; Petracca, Maria; Saiote, Catarina; Mormina, Enricomaria; Howard, Jonathan; Miller, Aaron; Lublin, Fred D; Inglese, Matilde
2017-01-01
Optical coherence tomography (OCT) and magnetic resonance imaging (MRI) can provide complementary information on visual system damage in multiple sclerosis (MS). The objective of this paper is to determine whether a composite OCT/MRI score, reflecting cumulative damage along the entire visual pathway, can predict visual deficits in primary progressive multiple sclerosis (PPMS). Twenty-five PPMS patients and 20 age-matched controls underwent neuro-ophthalmologic evaluation, spectral-domain OCT, and 3T brain MRI. Differences between groups were assessed by univariate general linear model, and principal component analysis (PCA) grouped the instrumental variables into main components. Linear regression analysis was used to assess the relationship between low-contrast visual acuity (LCVA), OCT/MRI-derived metrics and PCA-derived composite scores. PCA identified four main components explaining 80.69% of data variance. Considering each variable independently, LCVA 1.25% was significantly predicted by ganglion cell-inner plexiform layer (GCIPL) thickness, thalamic volume and optic radiation (OR) lesion volume (adjusted R² = 0.328, p = 0.00004; adjusted R² = 0.187, p = 0.002; and adjusted R² = 0.180, p = 0.002, respectively). The PCA composite score of global visual pathway damage independently predicted both LCVA 1.25% (adjusted R² = 0.361, p = 0.00001) and LCVA 2.50% (adjusted R² = 0.323, p = 0.00003). A multiparametric score thus represents a more comprehensive and effective tool for explaining visual disability in PPMS than any single instrumental metric.
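The composite-score approach can be sketched in a few steps: standardize the instrumental variables, take the first principal component as a global-damage score, then regress low-contrast acuity on that score and report adjusted R². A minimal numpy-only sketch on synthetic data (all values are simulated; none of this reproduces the study's dataset or variables):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 25
damage = rng.normal(0, 1, n)  # latent global visual-pathway damage (hypothetical)
# Four noisy instrumental readouts of the same latent damage
# (stand-ins for, e.g., GCIPL thickness, thalamic volume, OR lesion load)
X = np.column_stack([damage + rng.normal(0, 0.5, n) for _ in range(4)])
lcva = -0.8 * damage + rng.normal(0, 0.5, n)  # acuity worsens with damage

# Standardize, then take the first principal component as the composite score
Z = (X - X.mean(0)) / X.std(0)
eigvals, eigvecs = np.linalg.eigh(np.cov(Z.T))
pc1 = Z @ eigvecs[:, -1]  # eigh returns eigenvalues ascending; last = largest

# OLS of acuity on the composite score, with adjusted R^2
A = np.column_stack([np.ones(n), pc1])
beta, *_ = np.linalg.lstsq(A, lcva, rcond=None)
resid = lcva - A @ beta
r2 = 1 - resid.var() / lcva.var()
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - 2)
print(round(adj_r2, 2))
```

Because PC1 pools shared variance across the noisy instruments, the composite typically predicts the outcome better than any single instrument, which is the logic behind the paper's multiparametric score.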
NASA Astrophysics Data System (ADS)
Shen, Ji; Linn, Marcia C.
2011-08-01
What trajectories do students follow as they connect their observations of electrostatic phenomena to atomic-level visualizations? We designed an electrostatics unit, using the knowledge integration framework to help students link observations and scientific ideas. We analyze how learners integrate ideas about charges, charged particles, energy, and observable events. We compare learning enactments in a typical school and a magnet school in the USA. We use pre-tests, post-tests, embedded notes, and delayed post-tests to capture the trajectories of students' knowledge integration. We analyze how visualizations help students grapple with abstract electrostatics concepts such as induction. We find that overall students gain more sophisticated ideas. They can interpret dynamic, interactive visualizations, and connect charge- and particle-based explanations to interpret observable events. Students continue to have difficulty in applying the energy-based explanation.
Visual Representations of the Water Cycle in Science Textbooks
ERIC Educational Resources Information Center
Vinisha, K.; Ramadas, J.
2013-01-01
Visual representations, including photographs, sketches and schematic diagrams, are a valuable yet often neglected aspect of textbooks. Visual means of communication are particularly helpful in introducing abstract concepts in science. For effective communication, visuals and text need to be appropriately integrated within the textbook. This study…
Location perception: the X-Files parable.
Prinzmetal, William
2005-01-01
Three aspects of visual object location were investigated: (1) how the visual system integrates information for locating objects, (2) how attention operates to affect location perception, and (3) how the visual system deals with locating an object when multiple objects are present. The theories were described in terms of a parable (the X-Files parable). Then, computer simulations were developed. Finally, predictions derived from the simulations were tested. In the scenario described in the parable, we ask how a system of detectors might locate an alien spaceship, how attention might be implemented in such a spaceship detection system, and how the presence of one spaceship might influence the location perception of another alien spaceship. Experiment 1 demonstrated that location information is integrated with a spatial average rule. In Experiment 2, this rule was applied to a more-samples theory of attention. Experiment 3 demonstrated how the integration rule could account for various visual illusions.
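The spatial-average rule demonstrated in Experiment 1 amounts to a response-weighted mean over detector positions. A minimal sketch (the 1-D detector array and Gaussian response profile are assumptions for illustration, not the paper's simulation):

```python
import numpy as np

def perceived_location(positions, responses):
    """Spatial-average rule: the perceived location is the
    response-weighted mean of the detector positions."""
    positions = np.asarray(positions, float)
    responses = np.asarray(responses, float)
    return (positions * responses).sum() / responses.sum()

# Detectors along a 1-D retina; an object at 5.0 evokes a Gaussian profile
detectors = np.linspace(0.0, 10.0, 101)
responses = np.exp(-0.5 * ((detectors - 5.0) / 1.0) ** 2)
print(round(perceived_location(detectors, responses), 3))  # -> 5.0 (symmetric profile)

# A second object nearby skews the profile and biases the perceived location
responses2 = responses + 0.3 * np.exp(-0.5 * ((detectors - 7.0) / 1.0) ** 2)
print(perceived_location(detectors, responses2) > 5.0)  # -> True: pulled toward 7.0
```

The second case illustrates how the same averaging rule, with overlapping response profiles from multiple objects, can produce the location illusions the abstract refers to.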
NASA Astrophysics Data System (ADS)
Dockendorff, Monika; Solar, Horacio
2018-01-01
This case study investigates the impact of the integration of information and communications technology (ICT) in mathematics visualization skills and initial teacher education programmes. It reports on the influence GeoGebra dynamic software use has on promoting mathematical learning at secondary school and on its impact on teachers' conceptions about teaching and learning mathematics. This paper describes how GeoGebra-based dynamic applets - designed and used in an exploratory manner - promote mathematical processes such as conjectures. It also refers to the changes prospective teachers experience regarding the relevance visual dynamic representations acquire in teaching mathematics. This study observes a shift in school routines when incorporating technology into the mathematics classroom. Visualization appears as a basic competence associated to key mathematical processes. Implications of an early integration of ICT in mathematics initial teacher training and its impact on developing technological pedagogical content knowledge (TPCK) are drawn.
Visual motion perception predicts driving hazard perception ability.
Lacherez, Philippe; Au, Sandra; Wood, Joanne M
2014-02-01
To examine the basis of previous findings of an association between indices of driving safety and visual motion sensitivity and to examine whether this association could be explained by low-level changes in visual function. A total of 36 visually normal participants (aged 19-80 years) completed a battery of standard vision tests including visual acuity, contrast sensitivity and automated visual fields and two tests of motion perception including sensitivity for movement of a drifting Gabor stimulus and sensitivity for displacement in a random dot kinematogram (Dmin). Participants also completed a hazard perception test (HPT), which measured participants' response times to hazards embedded in video recordings of real-world driving, which has been shown to be linked to crash risk. Dmin for the random dot stimulus ranged from -0.88 to -0.12 log minutes of arc, and the minimum drift rate for the Gabor stimulus ranged from 0.01 to 0.35 cycles per second. Both measures of motion sensitivity significantly predicted response times on the HPT. In addition, while the relationship involving the HPT and motion sensitivity for the random dot kinematogram was partially explained by the other visual function measures, the relationship with sensitivity for detection of the drifting Gabor stimulus remained significant even after controlling for these variables. These findings suggest that motion perception plays an important role in the visual perception of driving-relevant hazards independent of other areas of visual function and should be further explored as a predictive test of driving safety. Future research should explore the causes of reduced motion perception to develop better interventions to improve road safety. © 2012 The Authors. Acta Ophthalmologica © 2012 Acta Ophthalmologica Scandinavica Foundation.
[Visual impairment and blindness in children in a Malawian school for the blind].
Schulze Schwering, M; Nyrenda, M; Spitzer, M S; Kalua, K
2013-08-01
The aim of this study was to determine the anatomic sites of severe visual impairment and blindness in children in an integrated school for the blind in Malawi, and to compare the results with those of previous Malawian blind school studies. Children attending an integrated school for the blind in Malawi were examined in September 2011 using the standard WHO/PBL eye examination record for children with blindness and low vision. Visual acuity [VA] of the better eye was classified using the standardised WHO reporting form. Fifty-five pupils aged 6 to 19 years were examined: 39 (71 %) males and 16 (29 %) females. Thirty-eight (69 %) were blind [BL], 8 (15 %) were severely visually impaired [SVI], 8 (15 %) visually impaired [VI], and 1 (1.8 %) was not visually impaired [NVI]. The major anatomic sites of visual loss were optic nerve (16 %) and retina (16 %), followed by lens/cataract (15 %), cornea (11 %) and lesions of the whole globe (11 %), uveal pathologies (6 %) and cortical blindness (2 %). The exact aetiology of VI or BL could not be determined in most children. Albinism accounted for 13 % (7/55) of the visual impairments. 24 % of the cases were considered to be potentially avoidable: refractive amblyopia among pseudophakic patients and corneal scarring. Optic atrophy, retinal diseases (mostly albinism) and cataracts were the major causes of severe visual impairment and blindness in children in an integrated school for the blind in Malawi. Corneal scarring was now the fourth cause of visual impairment, compared to being the commonest cause 35 years ago. Congenital cataract and its postoperative outcome were the commonest remedial causes of visual impairment. Georg Thieme Verlag KG Stuttgart · New York.
Spatiotemporal Filter for Visual Motion Integration from Pursuit Eye Movements in Humans and Monkeys
Liu, Bing
2017-01-01
Despite the enduring interest in motion integration, a direct measure of the space–time filter that the brain imposes on a visual scene has been elusive. This is perhaps because of the challenge of estimating a 3D function from perceptual reports in psychophysical tasks. We take a different approach. We exploit the close connection between visual motion estimates and smooth pursuit eye movements to measure stimulus–response correlations across space and time, computing the linear space–time filter for global motion direction in humans and monkeys. Although derived from eye movements, we find that the filter predicts perceptual motion estimates quite well. To distinguish visual from motor contributions to the temporal duration of the pursuit motion filter, we recorded single-unit responses in the monkey middle temporal cortical area (MT). We find that pursuit response delays are consistent with the distribution of cortical neuron latencies and that temporal motion integration for pursuit is consistent with a short integration MT subpopulation. Remarkably, the visual system appears to preferentially weight motion signals across a narrow range of foveal eccentricities rather than uniformly over the whole visual field, with a transiently enhanced contribution from locations along the direction of motion. We find that the visual system is most sensitive to motion falling at approximately one-third the radius of the stimulus aperture. Hypothesizing that the visual drive for pursuit is related to the filtered motion energy in a motion stimulus, we compare measured and predicted eye acceleration across several other target forms. SIGNIFICANCE STATEMENT A compact model of the spatial and temporal processing underlying global motion perception has been elusive. We used visually driven smooth eye movements to find the 3D space–time function that best predicts both eye movements and perception of translating dot patterns. 
We found that the visual system does not appear to use all available motion signals uniformly, but rather weights motion preferentially in a narrow band at approximately one-third the radius of the stimulus. Although not universal, the filter predicts responses to other types of stimuli, demonstrating a remarkable degree of generalization that may lead to a deeper understanding of visual motion processing. PMID:28003348
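Estimating a linear filter from stimulus-response correlations, as the authors do for pursuit, is a reverse-correlation problem: with white-noise input, the cross-correlation of response with stimulus recovers the filter up to scale. A 1-D temporal sketch (the kernel shape, noise level, and sample count are illustrative assumptions, not the paper's measurements):

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 5000, 40
# Hypothetical damped-oscillation kernel standing in for the motion filter
true_filter = np.exp(-np.arange(k) / 8.0) * np.sin(np.arange(k) / 5.0)

stimulus = rng.normal(0, 1, n)  # white-noise motion signal
response = np.convolve(stimulus, true_filter)[:n] + rng.normal(0, 0.1, n)

# With white-noise input, the stimulus-response cross-correlation at each
# lag recovers the filter (up to scale)
est = np.array(
    [np.dot(response[lag:], stimulus[: n - lag]) for lag in range(k)]
) / n

corr = np.corrcoef(est, true_filter)[0, 1]
print(round(corr, 2))
```

The paper's problem is the 3D (space-time) version of this 1-D estimate, with smooth-pursuit eye velocity playing the role of the response.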
An integrative view of storage of low- and high-level visual dimensions in visual short-term memory.
Magen, Hagit
2017-03-01
Efficient performance in an environment filled with complex objects is often achieved through the temporal maintenance of conjunctions of features from multiple dimensions. The most striking finding in the study of binding in visual short-term memory (VSTM) is equal memory performance for single features and for integrated multi-feature objects, a finding that has been central to several theories of VSTM. Nevertheless, research on binding in VSTM focused almost exclusively on low-level features, and little is known about how items from low- and high-level visual dimensions (e.g., colored manmade objects) are maintained simultaneously in VSTM. The present study tested memory for combinations of low-level features and high-level representations. In agreement with previous findings, Experiments 1 and 2 showed decrements in memory performance when non-integrated low- and high-level stimuli were maintained simultaneously compared to maintaining each dimension in isolation. However, contrary to previous findings the results of Experiments 3 and 4 showed decrements in memory performance even when integrated objects of low- and high-level stimuli were maintained in memory, compared to maintaining single-dimension objects. Overall, the results demonstrate that low- and high-level visual dimensions compete for the same limited memory capacity, and offer a more comprehensive view of VSTM.
Electrophysiological evidence for a self-processing advantage during audiovisual speech integration.
Treille, Avril; Vilain, Coriandre; Kandel, Sonia; Sato, Marc
2017-09-01
Previous electrophysiological studies have provided strong evidence for early multisensory integrative mechanisms during audiovisual speech perception. From these studies, one unanswered issue is whether hearing our own voice and seeing our own articulatory gestures facilitate speech perception, possibly through a better processing and integration of sensory inputs with our own sensory-motor knowledge. The present EEG study examined the impact of self-knowledge during the perception of auditory (A), visual (V) and audiovisual (AV) speech stimuli that were previously recorded from the participant or from a speaker he/she had never met. Audiovisual interactions were estimated by comparing N1 and P2 auditory evoked potentials during the bimodal condition (AV) with the sum of those observed in the unimodal conditions (A + V). In line with previous EEG studies, our results revealed an amplitude decrease of P2 auditory evoked potentials in AV compared to A + V conditions. Crucially, a temporal facilitation of N1 responses was observed during the visual perception of self speech movements compared to those of another speaker. This facilitation was negatively correlated with the saliency of visual stimuli. These results provide evidence for a temporal facilitation of the integration of auditory and visual speech signals when the visual situation involves our own speech gestures.
The role of visual and mechanosensory cues in structuring forward flight in Drosophila melanogaster.
Budick, Seth A; Reiser, Michael B; Dickinson, Michael H
2007-12-01
It has long been known that many flying insects use visual cues to orient with respect to the wind and to control their groundspeed in the face of varying wind conditions. Much less explored has been the role of mechanosensory cues in orienting insects relative to the ambient air. Here we show that Drosophila melanogaster, magnetically tethered so as to be able to rotate about their yaw axis, are able to detect and orient into a wind, as would be experienced during forward flight. Further, this behavior is velocity dependent and is likely subserved, at least in part, by the Johnston's organs, chordotonal organs in the antennae also involved in near-field sound detection. These wind-mediated responses may help to explain how flies are able to fly forward despite visual responses that might otherwise inhibit this behavior. Expanding visual stimuli, such as are encountered during forward flight, are the most potent aversive visual cues known for D. melanogaster flying in a tethered paradigm. Accordingly, tethered flies strongly orient towards a focus of contraction, a problematic situation for any animal attempting to fly forward. We show in this study that wind stimuli, transduced via mechanosensory means, can compensate for the aversion to visual expansion and thus may help to explain how these animals are indeed able to maintain forward flight.
Nawroth, Christian; Prentice, Pamela M; McElligott, Alan G
2017-01-01
Variation in common personality traits, such as boldness or exploration, is often associated with risk-reward trade-offs and behavioural flexibility. To date, only a few studies have examined the effects of consistent behavioural traits on both learning and cognition. We investigated whether certain personality traits ('exploration' and 'sociability') of individuals were related to cognitive performance, learning flexibility and learning style in a social ungulate species, the goat (Capra hircus). We also investigated whether a preference for feature cues rather than impaired learning abilities can explain performance variation in a visual discrimination task. We found that personality scores were consistent across time and context. Less explorative goats performed better in a non-associative cognitive task, in which subjects had to follow the trajectory of a hidden object (i.e. testing their ability for object permanence). We also found that less sociable subjects performed better compared to more sociable goats in a visual discrimination task. Good visual learning performance was associated with a preference for feature cues, indicating personality-dependent learning strategies in goats. Our results suggest that personality traits predict the outcome in visual discrimination and non-associative cognitive tasks in goats and that impaired performance in a visual discrimination task does not necessarily imply impaired learning capacities, but rather can be explained by a varying preference for feature cues. Copyright © 2016 Elsevier B.V. All rights reserved.
Shape Recognition in Infancy: Visual Integration of Sequential Information.
ERIC Educational Resources Information Center
Rose, Susan A
1988-01-01
Investigated infants' integration of visual information across space and time. In four experiments, infants aged 12 months and 6 months viewed objects after watching light trace similar and dissimilar shapes. Infants looked longer at novel shapes, although six-month-olds did not recognize figures taking more than 10 seconds to trace. One-year-old…
DOT National Transportation Integrated Search
2009-12-01
This volume introduces several applications of remote bridge inspection technologies studied in this Integrated Remote Sensing and Visualization (IRSV) study using ground-based LiDAR systems. In particular, the application of terrestrial LiDAR fo...
DOT National Transportation Integrated Search
2012-03-01
The Integrated Remote Sensing and Visualization System (IRSV) was developed in Phase One of this project in order to accommodate the needs of today's Bridge Engineers at the state and local level. Overall goals of this project are: Better u...
A Visual Approach to Helping Instructors Integrate, Document, and Refine Active Learning
ERIC Educational Resources Information Center
Frey, Regina F.; Fisher, Beth A.; Solomon, Erin D.; Leonard, Denise A.; Mutambuki, Jacinta M.; Cohen, Cheryl A.; Luo, Jia; Pondugula, Santhi
2016-01-01
This article describes a visual approach to integrating observational data into self-evaluation and peer review of teaching, practices that can lead to adoption of evidence-based active-learning strategies in STEM. This approach was designed to be implemented for undergraduate courses across disciplines. The presentation of observational data in a…
Visual Criterion for Understanding the Notion of Convergence of Integrals in One Parameter
ERIC Educational Resources Information Center
Alves, Francisco Regis Vieira
2014-01-01
The notion of generalized integrals in one parameter admittedly plays a fundamental role. In virtue of that, in this paper we discuss and characterize an approach to promote the visualization of this mathematical concept. We also indicate the possibilities of graphical interpretation of formal properties related to the notion of…
ERIC Educational Resources Information Center
Pennington, Kristin; Calico, Carley; Edgar, Leslie D.; Edgar, Don W.; Johnson, Donald M.
2015-01-01
The University of Arkansas developed and integrated visual communications curriculum related to agricultural communications into secondary agricultural programs throughout the state. The curriculum was developed, pilot tested, revised, and implemented by selected secondary agriculture teachers. The primary purpose of this study was to evaluate…
Integrating Algorithm Visualization Video into a First-Year Algorithm and Data Structure Course
ERIC Educational Resources Information Center
Crescenzi, Pilu; Malizia, Alessio; Verri, M. Cecilia; Diaz, Paloma; Aedo, Ignacio
2012-01-01
In this paper we describe the results that we have obtained while integrating algorithm visualization (AV) movies (strongly tightened with the other teaching material), within a first-year undergraduate course on algorithms and data structures. Our experimental results seem to support the hypothesis that making these movies available significantly…
We propose a 2-day session combining multiple components of an ongoing integrative research program in USEPA’s Office of Research and Development into a functional community sustainability visualization and assessment tool. The working group will include project leads for a US H...
UTOOLS: microcomputer software for spatial analysis and landscape visualization.
Alan A. Ager; Robert J. McGaughey
1997-01-01
UTOOLS is a collection of programs designed to integrate various spatial data in a way that allows versatile spatial analysis and visualization. The programs were designed for watershed-scale assessments in which a wide array of resource data must be integrated, analyzed, and interpreted. UTOOLS software combines raster, attribute, and vector data into "spatial...
Enhancing Autonomy of Aerial Systems Via Integration of Visual Sensors into Their Avionics Suite
2016-09-01
aerial platform for subsequent visual sensor integration. Subject terms: autonomous system, quadrotors, direct method, inverse dynamics in the virtual domain (IDVD), ground control station, Global-Positioning System (GPS), integer linear program (ILP), inertial-navigation system (INS).
DOT National Transportation Integrated Search
2009-12-01
The Integrated Remote Sensing and Visualization System (IRSV) is being designed to accommodate the needs of today's Bridge Engineers at the state and local level from several aspects that were documented in Volume One, Summary Report. The followi...
Aviezer, Hillel; Hassin, Ran. R.; Bentin, Shlomo
2011-01-01
In the current study we examined the recognition of facial expressions embedded in emotionally expressive bodies in case LG, an individual with a rare form of developmental visual agnosia who suffers from severe prosopagnosia. Neuropsychological testing demonstrated that LG's agnosia is characterized by profoundly impaired visual integration. Unlike individuals with typical developmental prosopagnosia who display specific difficulties with face identity (but typically not expression) recognition, LG was also impaired at recognizing isolated facial expressions. By contrast, he successfully recognized the expressions portrayed by faceless emotional bodies handling affective paraphernalia. When presented with contextualized faces in emotional bodies his ability to detect the emotion expressed by a face did not improve even if it was embedded in an emotionally-congruent body context. Furthermore, in contrast to controls, LG displayed an abnormal pattern of contextual influence from emotionally-incongruent bodies. The results are interpreted in the context of a general integration deficit in developmental visual agnosia, suggesting that impaired integration may extend from the level of the face to the level of the full person. PMID:21482423
Theoretical integration and the psychology of sport injury prevention.
Chan, Derwin King-Chung; Hagger, Martin S
2012-09-01
Integrating different theories of motivation to facilitate or predict behaviour change has received an increasing amount of attention within the health, sport and exercise science literature. A recent review article in Sports Medicine, by Keats, Emery and Finch presented an integrated model using two prominent theories in social psychology, self-determination theory (SDT) and the theory of planned behaviour (TPB), aimed at explaining and enhancing athletes' adherence to sport injury prevention. While echoing their optimistic views about the utility of these two theories to explain adherence in this area and the virtues of theoretical integration, we would like to seize this opportunity to clarify several conceptual principles arising from the authors' integration of the theories. Clarifying the theoretical assumptions and explaining precisely how theoretical integration works is crucial not only for improving the comprehensiveness of the integrated framework for predicting injury prevention behaviour, but also to aid the design of effective intervention strategies targeting behavioural adherence. In this article, we use the integration of SDT and TPB as an example to demonstrate how theoretical integration can advance the understanding of injury prevention behaviour in sport.
Does gravity influence the visual line bisection task?
Drakul, A; Bockisch, C J; Tarnutzer, A A
2016-08-01
The visual line bisection task (LBT) is sensitive to perceptual biases of visuospatial attention, showing slight leftward (for horizontal lines) and upward (for vertical lines) errors in healthy subjects. It may be solved in an egocentric or allocentric reference frame, and there is no obvious need for graviceptive input. However, for other visual line adjustments, such as the subjective visual vertical, otolith input is integrated. We hypothesized that graviceptive input is incorporated when performing the LBT and predicted reduced accuracy and precision when roll-tilted. Twenty healthy right-handed subjects repetitively bisected Earth-horizontal and body-horizontal lines in darkness. Recordings were obtained before, during, and after roll-tilt (±45°, ±90°) for 5 min each. Additionally, bisections of Earth-vertical and oblique lines were obtained in 17 subjects. When roll-tilted ±90° ear-down, bisections of Earth-horizontal (i.e., body-vertical) lines were shifted toward the direction of the head (P < 0.001). However, after correction for vertical line-bisection errors when upright, shifts disappeared. Bisecting body-horizontal lines while roll-tilted did not cause any shifts. The precision of Earth-horizontal line bisections decreased (P ≤ 0.006) when roll-tilted, while no such changes were observed for body-horizontal lines. Regardless of the trial condition and paradigm, the scanning direction of the bisecting cursor (leftward vs. rightward) significantly (P ≤ 0.021) affected line bisections. Our findings reject our hypothesis and suggest that gravity does not modulate the LBT. Roll-tilt-dependent shifts are instead explained by the headward bias when bisecting lines oriented along a body-vertical axis. Increased variability when roll-tilted likely reflects larger variability when bisecting body-vertical than body-horizontal lines. Copyright © 2016 the American Physiological Society.
Stone, Edwin M.; Cideciyan, Artur V.; Aleman, Tomas S.; Scheetz, Todd E.; Sumaroka, Alexander; Ehlinger, Mary A.; Schwartz, Sharon B.; Fishman, Gerald A.; Traboulsi, Elias I.; Lam, Byron L.; Fulton, Anne B.; Mullins, Robert F.; Sheffield, Val C.; Jacobson, Samuel G.
2014-01-01
Objective To investigate whether mutations in NPHP5 can cause Leber congenital amaurosis (LCA) without early-onset renal disease. Methods DNA samples from 276 individuals with non-syndromic LCA were screened for variations in the NPHP5 gene. Each had been previously screened for mutations in 8 known LCA genes without identifying a disease-causing genotype. Results Nine of the 276 LCA probands (3.2%) harbored 2 plausible disease-causing mutations (7 different alleles) in NPHP5. Four of these have been previously reported in patients with Senior-Loken syndrome (F141del, R461X, H506del, and R489X) and 3 are novel (A111del, E346X, and R455X). All 9 patients had severe visual loss from early childhood but none had overt renal disease in the first decade of life. Two patients were diagnosed with nephronophthisis in the second decade. Retinal imaging studies showed retained photoreceptor nuclei and retinal pigment epithelium integrity mainly in the cone-rich central retina, a phenotype with strong similarities to that of NPHP6 disease. Conclusions Mutations in NPHP5 can cause LCA without early-onset renal disease. Abnormalities observed in the photoreceptor outer segments (a cilial structure) may explain the severe visual loss in NPHP5-associated LCA. Clinical Relevance The persistence of central photoreceptor nuclei despite severe visual loss in NPHP5 disease is encouraging for future therapeutic interventions. PMID:21220633
Bae, Gi-Yeul; Olkkonen, Maria; Allred, Sarah R; Flombaum, Jonathan I
2015-08-01
Categorization with basic color terms is an intuitive and universal aspect of color perception. Yet research on visual working memory capacity has largely assumed that only continuous estimates within color space are relevant to memory. As a result, the influence of color categories on working memory remains unknown. We propose a dual content model of color representation in which color matches to objects that are either present (perception) or absent (memory) integrate category representations along with estimates of specific values on a continuous scale ("particulars"). We develop and test the model through 4 experiments. In a first experiment pair, participants reproduce a color target, both with and without a delay, using a recently influential estimation paradigm. In a second experiment pair, we use standard methods in color perception to identify boundary and focal colors in the stimulus set. The main results are that responses drawn from working memory are significantly biased away from category boundaries and toward category centers. Importantly, the same pattern of results is present without a memory delay. The proposed dual content model parsimoniously explains these results, and it should replace prevailing single content models in studies of visual working memory. More broadly, the model and the results demonstrate how the main consequence of visual working memory maintenance is the amplification of category related biases and stimulus-specific variability that originate in perception. (c) 2015 APA, all rights reserved.
Object integration requires attention: Visual search for Kanizsa figures in parietal extinction.
Gögler, Nadine; Finke, Kathrin; Keller, Ingo; Müller, Hermann J; Conci, Markus
2016-11-01
The contribution of selective attention to object integration is a topic of debate: integration of parts into coherent wholes, such as in Kanizsa figures, is thought to arise either from pre-attentive, automatic coding processes or from higher-order processes involving selective attention. Previous studies have attempted to examine the role of selective attention in object integration either by employing visual search paradigms or by studying patients with unilateral deficits in selective attention. Here, we combined these two approaches to investigate object integration in visual search in a group of five patients with left-sided parietal extinction. Our search paradigm was designed to assess the effect of left- and right-grouped nontargets on detecting a Kanizsa target square. The results revealed comparable reaction time (RT) performance in patients and controls when they were presented with displays consisting of a single to-be-grouped item that had to be classified as target vs. nontarget. However, when display size increased to two items, patients showed an extinction-specific pattern of enhanced RT costs for nontargets that induced a partial shape grouping on the right, i.e., in the attended hemifield (relative to the ungrouped baseline). Together, these findings demonstrate a competitive advantage for right-grouped objects, which in turn indicates that in parietal extinction, attentional competition between objects particularly limits integration processes in the contralesional, i.e., left hemifield. These findings imply a crucial contribution of selective attentional resources to visual object integration. Copyright © 2016 Elsevier Ltd. All rights reserved.
Richards, Michael D; Goltz, Herbert C; Wong, Agnes M F
2018-01-01
Classically understood as a deficit in spatial vision, amblyopia is increasingly recognized to also impair audiovisual multisensory processing. Studies to date, however, have not determined whether the audiovisual abnormalities reflect a failure of multisensory integration, or an optimal strategy in the face of unisensory impairment. We use the ventriloquism effect and the maximum-likelihood estimation (MLE) model of optimal integration to investigate integration of audiovisual spatial information in amblyopia. Participants with unilateral amblyopia (n = 14; mean age 28.8 years; 7 anisometropic, 3 strabismic, 4 mixed mechanism) and visually normal controls (n = 16, mean age 29.2 years) localized brief unimodal auditory, unimodal visual, and bimodal (audiovisual) stimuli during binocular viewing using a location discrimination task. A subset of bimodal trials involved the ventriloquism effect, an illusion in which auditory and visual stimuli originating from different locations are perceived as originating from a single location. Localization precision and bias were determined by psychometric curve fitting, and the observed parameters were compared with predictions from the MLE model. Spatial localization precision was significantly reduced in the amblyopia group compared with the control group for unimodal visual, unimodal auditory, and bimodal stimuli. Analyses of localization precision and bias for bimodal stimuli showed no significant deviations from the MLE model in either the amblyopia group or the control group. Despite pervasive deficits in localization precision for visual, auditory, and audiovisual stimuli, audiovisual integration remains intact and optimal in unilateral amblyopia.
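The maximum-likelihood estimation (MLE) model referenced above combines two noisy unimodal estimates by weighting each cue by its reliability (inverse variance), which predicts a fused estimate that is never noisier than the best single cue. A minimal sketch, assuming independent Gaussian auditory and visual localization noise (the function name and example values are illustrative, not from the study):

```python
import numpy as np

def mle_integration(mu_a, sigma_a, mu_v, sigma_v):
    """Maximum-likelihood (reliability-weighted) fusion of two
    independent Gaussian cues: weights are proportional to each
    cue's inverse variance."""
    w_v = sigma_a**2 / (sigma_a**2 + sigma_v**2)   # weight on the visual cue
    mu_av = w_v * mu_v + (1 - w_v) * mu_a          # fused location estimate
    sigma_av = np.sqrt((sigma_a**2 * sigma_v**2) /
                       (sigma_a**2 + sigma_v**2))  # fused noise is reduced
    return mu_av, sigma_av

# Example: a precise visual cue (sigma 1) dominates a noisy auditory one (sigma 4),
# pulling the fused percept toward the visual location (the ventriloquism effect)
mu, sigma = mle_integration(mu_a=5.0, sigma_a=4.0, mu_v=0.0, sigma_v=1.0)
```

Under this model, checking whether observed bimodal precision and bias match `mu_av` and `sigma_av` is how optimality is assessed.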
Wahn, Basil; König, Peter
2015-01-01
Humans continuously receive and integrate information from several sensory modalities. However, attentional resources limit the amount of information that can be processed. It is not yet clear how attentional resources and multisensory processing are interrelated. Specifically, the following questions arise: (1) Are there distinct spatial attentional resources for each sensory modality? and (2) Does attentional load affect multisensory integration? We investigated these questions using a dual task paradigm: participants performed two spatial tasks (a multiple object tracking task and a localization task), either separately (single task condition) or simultaneously (dual task condition). In the multiple object tracking task, participants visually tracked a small subset of several randomly moving objects. In the localization task, participants received either visual, auditory, or redundant visual and auditory location cues. In the dual task condition, we found a substantial decrease in participants' performance relative to the results of the single task condition. Importantly, participants performed equally well in the dual task condition regardless of the location cues' modality. This result suggests that having spatial information coming from different modalities does not facilitate performance, thereby indicating shared spatial attentional resources for the auditory and visual modality. Furthermore, we found that participants integrated redundant multisensory information similarly even when they experienced additional attentional load in the dual task condition. Overall, findings suggest that (1) visual and auditory spatial attentional resources are shared and that (2) audiovisual integration of spatial information occurs in a pre-attentive processing stage.
High-Performance Computing and Visualization | Energy Systems Integration Facility | NREL
High-performance computing (HPC) and visualization at NREL propel technology innovation as a… Capabilities: High-Performance Computing. NREL is home to Peregrine, the largest high-performance computing system…
2015-10-01
overview visualization to help clinicians identify patients that are changing and inserted these indices into the sepsis-specific decision support… visualization, 4) Created a sepsis identification visualization tool to help clinicians identify patients headed for septic shock, and 5) Generated a…
Learning about Locomotion Patterns from Visualizations: Effects of Presentation Format and Realism
ERIC Educational Resources Information Center
Imhof, Birgit; Scheiter, Katharina; Gerjets, Peter
2011-01-01
The rapid development of computer graphics technology has made possible an easy integration of dynamic visualizations into computer-based learning environments. This study examines the relative effectiveness of dynamic visualizations, compared either to sequentially or simultaneously presented static visualizations. Moreover, the degree of realism…
A Notation for Rapid Specification of Information Visualization
ERIC Educational Resources Information Center
Lee, Sang Yun
2013-01-01
This thesis describes a notation for rapid specification of information visualization, which can be used as a theoretical framework of integrating various types of information visualization, and its applications at a conceptual level. The notation is devised to codify the major characteristics of data/visual structures in conventionally-used…
Kukona, Anuenue; Cho, Pyeong Whan; Magnuson, James S.; Tabor, Whitney
2014-01-01
Psycholinguistic research spanning a number of decades has produced diverging results with regard to the nature of constraint integration in online sentence processing. For example, evidence that language users anticipatorily fixate likely upcoming referents in advance of evidence in the speech signal supports rapid context integration. By contrast, evidence that language users activate representations that conflict with contextual constraints, or only indirectly satisfy them, supports non-integration or late integration. Here, we report on a self-organizing neural network framework that addresses one aspect of constraint integration: the integration of incoming lexical information (i.e., an incoming word) with sentence context information (i.e., from preceding words in an unfolding utterance). In two simulations, we show that the framework predicts both classic results concerned with lexical ambiguity resolution (Swinney, 1979; Tanenhaus, Leiman, & Seidenberg, 1979), which suggest late context integration, and results demonstrating anticipatory eye movements (e.g., Altmann & Kamide, 1999), which support rapid context integration. We also report two experiments using the visual world paradigm that confirm a new prediction of the framework. Listeners heard sentences like “The boy will eat the white…,” while viewing visual displays with objects like a white cake (i.e., a predictable direct object of “eat”), white car (i.e., an object not predicted by “eat,” but consistent with “white”), and distractors. Consistent with our simulation predictions, we found that while listeners fixated white cake most, they also fixated white car more than unrelated distractors in this highly constraining sentence (and visual) context. PMID:24245535
Enlarged temporal integration window in schizophrenia indicated by the double-flash illusion.
Haß, Katharina; Sinke, Christopher; Reese, Tanya; Roy, Mandy; Wiswede, Daniel; Dillo, Wolfgang; Oranje, Bob; Szycik, Gregor R
2017-03-01
In the present study we were interested in the processing of audio-visual integration in schizophrenia compared to healthy controls. The amount of sound-induced double-flash illusions served as an indicator of audio-visual integration. We expected an altered integration as well as a different window of temporal integration for patients. Fifteen schizophrenia patients and 15 healthy volunteers matched for age and gender were included in this study. We used stimuli with eight different temporal delays (stimulus onset asynchronies (SOAs) 25, 50, 75, 100, 125, 150, 200 and 300 ms) to induce a double-flash illusion. Group differences and the widths of temporal integration windows were calculated on percentages of reported double-flash illusions. Patients showed significantly more illusions (ca. 36-44% vs. 9-16% in control subjects) for SOAs 150-300. The temporal integration window for control participants went from SOAs 25 to 200 whereas for patients integration was found across all included temporal delays. We found no significant relationship between the amount of illusions and either illness severity, chlorpromazine equivalent doses or duration of illness in patients. Our results are interpreted in favour of an enlarged temporal integration window for audio-visual stimuli in schizophrenia patients, which is consistent with previous research.
Schiller, Peter H; Kwak, Michelle C; Slocum, Warren M
2012-08-01
This study examined how effectively visual and auditory cues can be integrated in the brain for the generation of motor responses. The latencies with which saccadic eye movements are produced in humans and monkeys form, under certain conditions, a bimodal distribution, the first mode of which has been termed express saccades. In humans, a much higher percentage of express saccades is generated when both visual and auditory cues are provided compared with the single presentation of these cues [H. C. Hughes et al. (1994) J. Exp. Psychol. Hum. Percept. Perform., 20, 131-153]. In this study, we addressed two questions: first, do monkeys also integrate visual and auditory cues for express saccade generation as do humans and second, does such integration take place in humans when, instead of eye movements, the task is to press levers with fingers? Our results show that (i) in monkeys, as in humans, the combined visual and auditory cues generate a much higher percentage of express saccades than do singly presented cues and (ii) the latencies with which levers are pressed by humans are shorter when both visual and auditory cues are provided compared with the presentation of single cues, but the distribution in all cases is unimodal; response latencies in the express range seen in the execution of saccadic eye movements are not obtained with lever pressing. © 2012 The Authors. European Journal of Neuroscience © 2012 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.
Weighted integration of short-term memory and sensory signals in the oculomotor system.
Deravet, Nicolas; Blohm, Gunnar; de Xivry, Jean-Jacques Orban; Lefèvre, Philippe
2018-05-01
Oculomotor behaviors integrate sensory and prior information to overcome sensory-motor delays and noise. After much debate about this process, reliability-based integration has recently been proposed and several models of smooth pursuit now include recurrent Bayesian integration or Kalman filtering. However, there is a lack of behavioral evidence in humans supporting these theoretical predictions. Here, we independently manipulated the reliability of visual and prior information in a smooth pursuit task. Our results show that both smooth pursuit eye velocity and catch-up saccade amplitude were modulated by visual and prior information reliability. We interpret these findings as the continuous reliability-based integration of a short-term memory of target motion with visual information, which support modeling work. Furthermore, we suggest that saccadic and pursuit systems share this short-term memory. We propose that this short-term memory of target motion is quickly built and continuously updated, and constitutes a general building block present in all sensorimotor systems.
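The recurrent Bayesian (Kalman-filter-style) integration described in this abstract can be illustrated with a scalar update in which a short-term memory of target motion serves as the prior and each visual sample is weighted by its reliability. A minimal sketch under those assumptions (function name and the numeric values are hypothetical, not from the study):

```python
def kalman_update(prior_mu, prior_var, obs, obs_var):
    """One recurrent Bayesian (Kalman-style) update: the short-term
    memory of target motion (prior) is combined with the current
    visual measurement, each weighted by its reliability (inverse
    variance)."""
    k = prior_var / (prior_var + obs_var)  # gain: trust in the new sample
    mu = prior_mu + k * (obs - prior_mu)   # reliability-weighted estimate
    var = (1 - k) * prior_var              # posterior uncertainty shrinks
    return mu, var

# Track target velocity (deg/s) across noisy visual samples:
# the memory is quickly built, then continuously updated
mu, var = 10.0, 25.0  # vague prior from short-term memory
for sample in [14.0, 13.5, 14.2]:
    mu, var = kalman_update(mu, var, sample, obs_var=4.0)
```

Manipulating `obs_var` (visual reliability) or the prior variance reproduces the qualitative prediction tested in the study: the less reliable source contributes less to pursuit velocity and catch-up saccade amplitude.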
Defining Visual Communication for a Multi-Media World.
ERIC Educational Resources Information Center
Griffin, Michael
1991-01-01
Explores the role of visual communication and discusses its placement within journalism education. States that educators would do well to dissolve the arbitrary divisions that continue to obstruct a more integrated understanding of visual mass media. (MG)
Impaired Visual Motor Coordination in Obese Adults.
Gaul, David; Mat, Arimin; O'Shea, Donal; Issartel, Johann
2016-01-01
Objective. To investigate whether obesity alters the sensory motor integration process and movement outcome during a visual rhythmic coordination task. Methods. 88 participants (44 obese and 44 matched control) sat on a chair equipped with a wrist pendulum oscillating in the sagittal plane. The task was to swing the pendulum in synchrony with a moving visual stimulus displayed on a screen. Results. Obese participants demonstrated significantly (p < 0.01) higher values for continuous relative phase (CRP), indicating a poorer level of coordination, increased movement variability (p < 0.05), and a larger amplitude (p < 0.05) than their healthy weight counterparts. Conclusion. These results highlight the existence of visual sensory integration deficiencies for obese participants. The obese group has greater difficulty in synchronizing their movement with a visual stimulus. Considering that visual motor coordination is an essential component of many activities of daily living, any impairment could significantly affect quality of life.
Storage of features, conjunctions and objects in visual working memory.
Vogel, E K; Woodman, G F; Luck, S J
2001-02-01
Working memory can be divided into separate subsystems for verbal and visual information. Although the verbal system has been well characterized, the storage capacity of visual working memory has not yet been established for simple features or for conjunctions of features. The authors demonstrate that it is possible to retain information about only 3-4 colors or orientations in visual working memory at one time. Observers are also able to retain both the color and the orientation of 3-4 objects, indicating that visual working memory stores integrated objects rather than individual features. Indeed, objects defined by a conjunction of four features can be retained in working memory just as well as single-feature objects, allowing many individual features to be retained when distributed across a small number of objects. Thus, the capacity of visual working memory must be understood in terms of integrated objects rather than individual features.
Pathview Web: user friendly pathway visualization and data integration.
Luo, Weijun; Pant, Gaurav; Bhavnasi, Yeshvant K; Blanchard, Steven G; Brouwer, Cory
2017-07-03
Pathway analysis is widely used in omics studies. Pathway-based data integration and visualization is a critical component of the analysis. To address this need, we recently developed a novel R package called Pathview. Pathview maps, integrates and renders a large variety of biological data onto molecular pathway graphs. Here we developed the Pathview Web server, so as to make pathway visualization and data integration accessible to all scientists, including those without special computing skills or resources. Pathview Web features an intuitive graphical web interface and a user centered design. The server not only expands the core functions of Pathview, but also provides many useful features not available in the offline R package. Importantly, the server presents a comprehensive workflow for both regular and integrated pathway analysis of multiple omics data. In addition, the server also provides a RESTful API for programmatic access and convenient integration into third-party software or workflows. Pathview Web is openly and freely accessible at https://pathview.uncc.edu/. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
Visual Distractors Disrupt Audiovisual Integration Regardless of Stimulus Complexity
Gibney, Kyla D.; Aligbe, Enimielen; Eggleston, Brady A.; Nunes, Sarah R.; Kerkhoff, Willa G.; Dean, Cassandra L.; Kwakye, Leslie D.
2017-01-01
The intricate relationship between multisensory integration and attention has been extensively researched in the multisensory field; however, the necessity of attention for the binding of multisensory stimuli remains contested. In the current study, we investigated whether diverting attention from well-known multisensory tasks would disrupt integration and whether the complexity of the stimulus and task modulated this interaction. A secondary objective of this study was to investigate individual differences in the interaction of attention and multisensory integration. Participants completed a simple audiovisual speeded detection task and McGurk task under various perceptual load conditions: no load (multisensory task while visual distractors present), low load (multisensory task while detecting the presence of a yellow letter in the visual distractors), and high load (multisensory task while detecting the presence of a number in the visual distractors). Consistent with prior studies, we found that increased perceptual load led to decreased reports of the McGurk illusion, thus confirming the necessity of attention for the integration of speech stimuli. Although increased perceptual load led to longer response times for all stimuli in the speeded detection task, participants responded faster on multisensory trials than unisensory trials. However, the increase in multisensory response times violated the race model for no and low perceptual load conditions only. Additionally, a geometric measure of Miller’s inequality showed a decrease in multisensory integration for the speeded detection task with increasing perceptual load. Surprisingly, we found diverging changes in multisensory integration with increasing load for participants who did not show integration for the no load condition: no changes in integration for the McGurk task with increasing load but increases in integration for the detection task. 
The results of this study indicate that attention plays a crucial role in multisensory integration for both highly complex and simple multisensory tasks and that attention may interact differently with multisensory processing in individuals who do not strongly integrate multisensory information. PMID:28163675
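Miller's race model inequality mentioned above bounds the multisensory response-time distribution by the sum of the unisensory ones: P(RT_AV ≤ t) ≤ P(RT_A ≤ t) + P(RT_V ≤ t). A minimal sketch of testing for violations on empirical reaction times (function and variable names are mine; this is the generic computation, not the authors' exact analysis):

```python
def race_model_violation(rt_av, rt_a, rt_v, t_grid):
    """Positive part of P(AV <= t) - [P(A <= t) + P(V <= t)] on a time grid.

    Values > 0 indicate a race model violation: multisensory responses
    faster than a parallel race between the unisensory channels allows.
    """
    def ecdf(rts, t):  # empirical cumulative distribution function
        return sum(rt <= t for rt in rts) / len(rts)

    violations = []
    for t in t_grid:
        bound = min(ecdf(rt_a, t) + ecdf(rt_v, t), 1.0)  # race model bound
        violations.append(max(ecdf(rt_av, t) - bound, 0.0))
    return violations
```

At each time point t, a positive entry means the audiovisual ECDF exceeds the race-model bound, i.e. redundant-target responses are faster than statistical facilitation alone can explain.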
Visual search for feature and conjunction targets with an attention deficit.
Arguin, M; Joanette, Y; Cavanagh, P
1993-01-01
Abstract Brain-damaged subjects who had previously been identified as suffering from a visual attention deficit for contralesional stimulation were tested on a series of visual search tasks. The experiments examined the hypothesis that the processing of single features is preattentive but that feature integration, necessary for the correct perception of conjunctions of features, requires attention (Treisman & Gelade, 1980; Treisman & Sato, 1990). Subjects searched for a feature target (orientation or color) or for a conjunction target (orientation and color) in unilateral displays in which the number of items presented was variable. Ocular fixation was controlled so that trials on which eye movements occurred were cancelled. While brain-damaged subjects with a visual attention disorder (VAD subjects) performed similarly to normal controls in feature search tasks, they showed a marked deficit in conjunction search. Specifically, VAD subjects exhibited a substantial reduction in their serial search rates for a conjunction target with contralesional displays. In support of Treisman's feature integration theory, a visual attention deficit leads to a marked impairment in feature integration whereas it does not appear to affect feature encoding.
Harley, H E; Roitblat, H L; Nachtigall, P E
1996-04-01
A dolphin performed a 3-alternative matching-to-sample task in different modality conditions (visual/echoic, both vision and echolocation; visual, vision only; echoic, echolocation only). In Experiment 1, training occurred in the dual-modality (visual/echoic) condition. Choice accuracy in tests of all conditions was above chance without further training. In Experiment 2, unfamiliar objects with complementary similarity relations in vision and echolocation were presented in single-modality conditions until accuracy was about 70%. When tested in the visual/echoic condition, accuracy immediately rose (95%), suggesting integration across modalities. In Experiment 3, conditions varied between presentation of sample and alternatives. The dolphin successfully matched familiar objects in the cross-modal conditions. These data suggest that the dolphin has an object-based representational system.
Jung, Ph; Doussard-Lefaucheux, S
2002-04-01
We report the case of a 25-year-old woman on anti-epileptic drugs who presented with a visual field defect similar to those described with vigabatrin, even though she was successfully treated with valproic acid and then carbamazepine, without vigabatrin. The association with trichorrhexis nodosa, a hair disease sometimes associated with inherited disturbances of urea-cycle metabolism in which visual loss can occur, could suggest a nonspecific inhibition of several aminotransferases, which could explain different adverse effects of some anti-epileptic drugs (visual abnormalities, alopecia), perhaps in genetically predisposed patients.
ERIC Educational Resources Information Center
Brown, Ted; Murdolo, Yuki
2015-01-01
The "Developmental Test of Visual Perception-Third Edition" (DTVP-3) is a recent revision of the "Developmental Test of Visual Perception-Second Edition" (DTVP-2). The DTVP-3 is designed to assess the visual perceptual and/or visual-motor integration skills of children from 4 to 12 years of age. The test is standardized using…
Peterson, Elena S; McCue, Lee Ann; Schrimpe-Rutledge, Alexandra C; Jensen, Jeffrey L; Walker, Hyunjoo; Kobold, Markus A; Webb, Samantha R; Payne, Samuel H; Ansong, Charles; Adkins, Joshua N; Cannon, William R; Webb-Robertson, Bobbie-Jo M
2012-04-05
The procedural aspects of genome sequencing and assembly have become relatively inexpensive, yet the full, accurate structural annotation of these genomes remains a challenge. Next-generation sequencing transcriptomics (RNA-Seq), global microarrays, and tandem mass spectrometry (MS/MS)-based proteomics have demonstrated immense value to genome curators as individual sources of information; however, integrating these data types to validate and improve structural annotation remains a major challenge. Current visual and statistical analytic tools are focused on a single data type, or existing software tools are retrofitted to analyze new data forms. We present Visual Exploration and Statistics to Promote Annotation (VESPA), a new interactive visual analysis software tool focused on assisting scientists with the annotation of prokaryotic genomes through the integration of proteomics and transcriptomics data with current genome location coordinates. VESPA is a desktop Java™ application that integrates high-throughput proteomics data (peptide-centric) and transcriptomics (probe or RNA-Seq) data into a genomic context, all of which can be visualized at three levels of genomic resolution. Data are interrogated via searches linked to the genome visualizations to find regions with a high likelihood of mis-annotation. Search results are linked to exports for further validation outside of VESPA, or potential coding regions can be analyzed concurrently within the software through interaction with BLAST. VESPA is demonstrated on two use cases (Yersinia pestis Pestoides F and Synechococcus sp. PCC 7002), showing the rapid manner in which mis-annotations can be found and explored using either proteomics data alone or in combination with transcriptomic data. VESPA is an interactive visual analytics tool that integrates high-throughput data into a genomic context to facilitate the discovery of structural mis-annotations in prokaryotic genomes.
Data are evaluated via visual analysis across multiple levels of genomic resolution, linked searches, and interaction with existing bioinformatics tools. We highlight the novel functionality of VESPA and the core programming requirements for visualization of these large heterogeneous datasets in a client-side application. The software is freely available at https://www.biopilot.org/docs/Software/Vespa.php.
Visual-auditory integration for visual search: a behavioral study in barn owls
Hazan, Yael; Kra, Yonatan; Yarin, Inna; Wagner, Hermann; Gutfreund, Yoram
2015-01-01
Barn owls are nocturnal predators that rely on both vision and hearing for survival. The optic tectum of barn owls, a midbrain structure involved in selective attention, has been used as a model for studying visual-auditory integration at the neuronal level. However, behavioral data on visual-auditory integration in barn owls are lacking. The goal of this study was to examine whether the integration of visual and auditory signals contributes to the process of guiding attention toward salient stimuli. We attached miniature wireless video cameras to barn owls' heads (OwlCam) to track their target of gaze. We first provide evidence that the area centralis (a retinal area with a maximal density of photoreceptors) is used as a functional fovea in barn owls. Thus, by mapping the projection of the area centralis on the OwlCam's video frame, it is possible to extract the target of gaze. For the experiment, owls were positioned on a high perch and four food items were scattered in a large arena on the floor. In addition, a hidden loudspeaker was positioned in the arena. The positions of the food items and speaker were changed every session. Video sequences from the OwlCam were saved for offline analysis while the owls spontaneously scanned the room and the food items with abrupt gaze shifts (head saccades). From time to time during the experiment, a brief sound was emitted from the speaker. The fixation points immediately following the sounds were extracted and the distances between the gaze position and the nearest items and loudspeaker were measured. The head saccades were rarely toward the location of the sound source but to salient visual features in the room, such as the door knob or the food items. However, among the food items, the one closest to the loudspeaker had the highest probability of attracting a gaze shift. This result supports the notion that auditory signals are integrated with visual information for the selection of the next visual search target. PMID:25762905
JPL Earth Science Center Visualization Multitouch Table
NASA Astrophysics Data System (ADS)
Kim, R.; Dodge, K.; Malhotra, S.; Chang, G.
2014-12-01
The JPL Earth Science Center Visualization Table combines specialized software and hardware to provide multitouch, multiuser, and remote display control, creating a seamlessly integrated experience for visualizing JPL missions and their remote sensing data. The software is fully GIS capable through time-aware OGC WMTS, using the Lunar Mapping and Modeling Portal as the GIS backend to continuously ingest and retrieve real-time remote sensing data and satellite location data. The 55-inch and 82-inch unlimited-finger-count multitouch displays allow multiple users to explore JPL Earth missions and visualize remote sensing data through a highly intuitive and interactive touch graphical user interface. To improve the integrated experience, the Earth Science Center Visualization Table team developed network streaming, which allows the table software to stream data visualizations to a nearby remote display over a computer network. This visualization and presentation tool not only supports Earth science operations but is specifically designed for education and public outreach, and will contribute significantly to STEM. Our presentation will include an overview of our software and hardware and a showcase of our system.
Measuring temporal summation in visual detection with a single-photon source.
Holmes, Rebecca; Victora, Michelle; Wang, Ranxiao Frances; Kwiat, Paul G
2017-11-01
Temporal summation is an important feature of the visual system which combines visual signals that arrive at different times. Previous research estimated complete summation to last for 100ms for stimuli judged "just detectable." We measured the full range of temporal summation for much weaker stimuli using a new paradigm and a novel light source, developed in the field of quantum optics for generating small numbers of photons with precise timing characteristics and reduced variance in photon number. Dark-adapted participants judged whether a light was presented to the left or right of their fixation in each trial. In Experiment 1, stimuli contained a stream of photons delivered at a constant rate while the duration was systematically varied. Accuracy should increase with duration as long as the later photons can be integrated with the preceding ones into a single signal. The temporal integration window was estimated as the point at which performance no longer improved, and was found to be 650ms on average. In Experiment 2, the duration of the visual stimuli was kept short (100ms or <30ms) while the number of photons was varied to explore the efficiency of summation over the integration window compared to Experiment 1. There was some indication that temporal summation remains efficient over the integration window, although there is variation between individuals. The relatively long integration window measured in this study may be relevant to studies of the absolute visual threshold, i.e., tests of single-photon vision, where "single" photons should be separated by greater than the integration window to avoid summation. Copyright © 2017 Elsevier Ltd. All rights reserved.
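The plateau logic behind the 650 ms estimate can be illustrated with a toy Poisson model: photons arriving within the integration window are summed, photons arriving later are not, so detection probability grows with stimulus duration only up to the window. A sketch under those simplified assumptions (the rate and threshold values are illustrative; only the 650 ms window comes from the study):

```python
import math

def detection_prob(duration_ms, rate_per_ms=0.01, window_ms=650, threshold=6):
    """P(at least `threshold` photons summed) for a constant-rate stimulus.

    Photons beyond the integration window are not summed, so the expected
    count, and hence detection probability, stops growing at `window_ms`.
    """
    effective_ms = min(duration_ms, window_ms)   # summation window cap
    mean = rate_per_ms * effective_ms            # expected photon count
    # P(N >= threshold) for N ~ Poisson(mean)
    p_below = sum(math.exp(-mean) * mean**k / math.factorial(k)
                  for k in range(threshold))
    return 1.0 - p_below

# Accuracy improves with duration up to the window, then plateaus.
assert detection_prob(400) < detection_prob(650) == detection_prob(1000)
```

This mirrors the Experiment 1 logic: the integration window is read off as the duration beyond which performance stops improving.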
Shrestha, Jyoti Baba; Gnyawali, Subodh; Upadhyay, Madan Prasad
2012-12-01
To identify the causes of blindness and visual impairment among students in integrated schools for the blind in Nepal. A total of 778 students from all 67 integrated schools for the blind in Nepal were examined using the World Health Organization/Prevention of Blindness Eye Examination Record for Children with Blindness and Low Vision during the study period of 3 years. Among 831 students enrolled in the schools, 778 (93.6%) participated in the study. Mean age of students examined was 13.7 years, and the male to female ratio was 1.4:1. Among the students examined, 85.9% were blind, 10% had severe visual impairment and 4.1% were visually impaired. The cornea (22.8%) was the most common anatomical site of visual impairment, its most frequent cause being vitamin A deficiency, followed by the retina (18.4%) and lens (17.6%). Hereditary and childhood factors were responsible for visual loss in 27.9% and 22.0% of students, respectively. Etiology could not be determined in 46% of cases. Overall, 40.9% of students had avoidable causes of visual loss. Vision could be improved to a level better than 6/60 in 3.6% of students refracted. More than one third of students were visually impaired for potentially avoidable reasons, indicating lack of eye health awareness and eye care services in the community. The cause of visual impairment remained unknown in a large number of students, which indicates the need for introduction of modern diagnostic tools.
Gutierrez, J Claudio; Chigerwe, Munashe; Ilkiw, Jan E; Youngblood, Patricia; Holladay, Steven D; Srivastava, Sakti
Spatial visualization ability refers to the human cognitive ability to form, retrieve, and manipulate mental models of spatial nature. Visual reasoning ability has been linked to spatial ability. There is currently limited information about how entry-level spatial and visual reasoning abilities may predict veterinary anatomy performance or may be enhanced with progression through the veterinary anatomy content in an integrated curriculum. The present study made use of two tests that measure spatial ability and one test that measures visual reasoning ability in veterinary students: Guay's Visualization of Views Test, adapted version (GVVT), the Mental Rotations Test (MRT), and Raven's Advanced Progressive Matrices Test, short form (RavenT). The tests were given to the entering class of veterinary students during their orientation week and at week 32 in the veterinary medical curriculum. Mean score on the MRT significantly increased from 15.2 to 20.1, and on the RavenT significantly increased from 7.5 to 8.8. When females only were evaluated, results were similar to the total class outcome; however, all three tests showed significant increases in mean scores. A positive correlation between the pre- and post-test scores was found for all three tests. The present results should be considered preliminary at best for associating anatomic learning in an integrated curriculum with spatial and visual reasoning abilities. Other components of the curriculum, for instance histology or physiology, could also influence the improved spatial visualization and visual reasoning test scores at week 32.
Kovalenko, Lyudmyla Y; Chaumon, Maximilien; Busch, Niko A
2012-07-01
Semantic processing of verbal and visual stimuli has been investigated in semantic violation or semantic priming paradigms in which a stimulus is either related or unrelated to a previously established semantic context. A hallmark of semantic priming is the N400 event-related potential (ERP), a deflection of the ERP that is more negative for semantically unrelated target stimuli. The majority of studies investigating the N400 and semantic integration have used verbal material (words or sentences), and standardized stimulus sets with norms for semantic relatedness have been published for verbal but not for visual material. However, semantic processing of visual objects (as opposed to words) is an important issue in research on visual cognition. In this study, we present a set of 800 pairs of semantically related and unrelated visual objects. The images were rated for semantic relatedness by a sample of 132 participants. Furthermore, we analyzed low-level image properties and matched the two semantic categories according to these features. An ERP study confirmed the suitability of this image set for evoking a robust N400 effect of semantic integration. Additionally, using a general linear modeling approach of single-trial data, we also demonstrate that low-level visual image properties and semantic relatedness are in fact only minimally overlapping. The image set is available for download from the authors' website. We expect that the image set will facilitate studies investigating mechanisms of semantic and contextual processing of visual stimuli.
A spatially collocated sound thrusts a flash into awareness
Aller, Máté; Giani, Anette; Conrad, Verena; Watanabe, Masataka; Noppeney, Uta
2015-01-01
To interact effectively with the environment the brain integrates signals from multiple senses. It is currently unclear to what extent spatial information can be integrated across different senses in the absence of awareness. Combining dynamic continuous flash suppression (CFS) and spatial audiovisual stimulation, the current study investigated whether a sound facilitates a concurrent visual flash to elude flash suppression and enter perceptual awareness depending on audiovisual spatial congruency. Our results demonstrate that a concurrent sound boosts unaware visual signals into perceptual awareness. Critically, this process depended on the spatial congruency of the auditory and visual signals pointing towards low level mechanisms of audiovisual integration. Moreover, the concurrent sound biased the reported location of the flash as a function of flash visibility. The spatial bias of sounds on reported flash location was strongest for flashes that were judged invisible. Our results suggest that multisensory integration is a critical mechanism that enables signals to enter conscious perception. PMID:25774126
The DiaCog: A Prototype Tool for Visualizing Online Dialog Games' Interactions
ERIC Educational Resources Information Center
Yengin, Ilker; Lazarevic, Bojan
2014-01-01
This paper proposes and explains the design of a prototype learning tool named the DiaCog. The DiaCog visualizes dialog interactions within an online dialog game by using dynamically created cognitive maps. As a purposefully designed tool for enhancing learning effectiveness the DiaCog might be applicable to dialogs at discussion boards within a…
Self-Study and Evaluation Guide/1979 Edition. Section D-16: Other Service Program.
ERIC Educational Resources Information Center
National Accreditation Council for Agencies Serving the Blind and Visually Handicapped, New York, NY.
The self-evaluation guide is designed for accreditation of services to blind and visually handicapped students in service programs for which the NAC (National Accreditation Council for Agencies Serving the Blind and Visually Handicapped) does not have specific program standards (such as radio reading services and library services).…
To See or Not to See: Analyzing Difficulties in Geometry from the Perspective of Visual Perception
ERIC Educational Resources Information Center
Gal, Hagar; Linchevski, Liora
2010-01-01
In this paper, we consider theories about processes of visual perception and perception-based knowledge representation (VPR) in order to explain difficulties encountered in figural processing in junior high school geometry tasks. In order to analyze such difficulties, we take advantage of the following perspectives of VPR: (1) Perceptual…
ERIC Educational Resources Information Center
Rau, Martina A.
2018-01-01
To learn content knowledge in science, technology, engineering, and math domains, students need to make connections among visual representations. This article considers two kinds of connection-making skills: (1) "sense-making skills" that allow students to verbally explain mappings among representations and (2) "perceptual…
Visualizing and Understanding l'Hopital's Rule
ERIC Educational Resources Information Center
Gordon, Sheldon P.
2017-01-01
This article uses dynamic software in Excel to demonstrate several ways in which graphical and numerical approaches can be introduced both to enhance student understanding of l'Hopital's Rule and to explain why the Rule actually works to give the "right" answers. One of the approaches used is to visualize what is happening by examining…
Holistic and Individualistic Evaluations of Digital Visual Art Portfolios: A Mixed Methods Study
ERIC Educational Resources Information Center
Cavill, William D., Jr.
2017-01-01
The high-stakes nature of portfolio evaluation makes it necessary to perform such assessments in the fairest and most equitable manner possible. Determining whether or not there is a difference between holistically and individualistically derived scores for digital visual art portfolios and how those differences can be explained was the focus of…
ICT Teachers' Acceptance of "Scratch" as Algorithm Visualization Software
ERIC Educational Resources Information Center
Saltan, Fatih; Kara, Mehmet
2016-01-01
This study aims to investigate the acceptance of ICT teachers pertaining to the use of Scratch as an Algorithm Visualization (AV) software in terms of perceived ease of use and perceived usefulness. An embedded mixed method research design was used in the study, in which qualitative data were embedded in quantitative ones and used to explain the…
Donald E. Zimmerman; Carol Akerelrea; Jane Kapler Smith; Garrett J. O'Keefe
2006-01-01
Natural-resource managers have used a variety of computer-mediated presentation methods to communicate management practices to diverse publics. We explored the effects of visualizing and animating predictions from mathematical models in computerized presentations explaining forest succession (forest growth and change through time), fire behavior, and management options...