Size-Sensitive Perceptual Representations Underlie Visual and Haptic Object Recognition
Craddock, Matt; Lawson, Rebecca
2009-01-01
A variety of similarities between visual and haptic object recognition suggests that the two modalities may share common representations. However, it is unclear whether such common representations preserve low-level perceptual features or whether transfer between vision and haptics is mediated by high-level, abstract representations. Two experiments used a sequential shape-matching task to examine the effects of size changes on unimodal and crossmodal visual and haptic object recognition. Participants felt or saw 3D plastic models of familiar objects. The two objects presented on a trial were either the same size or different sizes and were the same shape or different but similar shapes. Participants were told to ignore size changes and to match on shape alone. In Experiment 1, size changes on same-shape trials impaired performance similarly for both visual-to-visual and haptic-to-haptic shape matching. In Experiment 2, size changes impaired performance on both visual-to-haptic and haptic-to-visual shape matching and there was no interaction between the cost of size changes and direction of transfer. Together the unimodal and crossmodal matching results suggest that the same, size-specific perceptual representations underlie both visual and haptic object recognition, and indicate that crossmodal memory for objects must be at least partly based on common perceptual representations. PMID:19956685
Verifying visual properties in sentence verification facilitates picture recognition memory.
Pecher, Diane; Zanolie, Kiki; Zeelenberg, René
2007-01-01
According to the perceptual symbols theory (Barsalou, 1999), sensorimotor simulations underlie the representation of concepts. We investigated whether recognition memory for pictures of concepts was facilitated by earlier representation of visual properties of those concepts. During study, concept names (e.g., apple) were presented in a property verification task with a visual property (e.g., shiny) or with a nonvisual property (e.g., tart). Delayed picture recognition memory was better if the concept name had been presented with a visual property than if it had been presented with a nonvisual property. These results indicate that modality-specific simulations are used for concept representation.
Yildirim, Ilker; Jacobs, Robert A
2015-06-01
If a person is trained to recognize or categorize objects or events using one sensory modality, the person can often recognize or categorize those same (or similar) objects and events via a novel modality. This phenomenon is an instance of cross-modal transfer of knowledge. Here, we study the Multisensory Hypothesis which states that people extract the intrinsic, modality-independent properties of objects and events, and represent these properties in multisensory representations. These representations underlie cross-modal transfer of knowledge. We conducted an experiment evaluating whether people transfer sequence category knowledge across auditory and visual domains. Our experimental data clearly indicate that we do. We also developed a computational model accounting for our experimental results. Consistent with the probabilistic language of thought approach to cognitive modeling, our model formalizes multisensory representations as symbolic "computer programs" and uses Bayesian inference to learn these representations. Because the model demonstrates how the acquisition and use of amodal, multisensory representations can underlie cross-modal transfer of knowledge, and because the model accounts for subjects' experimental performances, our work lends credence to the Multisensory Hypothesis. Overall, our work suggests that people automatically extract and represent objects' and events' intrinsic properties, and use these properties to process and understand the same (and similar) objects and events when they are perceived through novel sensory modalities.
Visual short-term memory: activity supporting encoding and maintenance in retinotopic visual cortex.
Sneve, Markus H; Alnæs, Dag; Endestad, Tor; Greenlee, Mark W; Magnussen, Svein
2012-10-15
Recent studies have demonstrated that retinotopic cortex maintains information about visual stimuli during retention intervals. However, the process by which transient stimulus-evoked sensory responses are transformed into enduring memory representations is unknown. Here, using fMRI and short-term visual memory tasks optimized for univariate and multivariate analysis approaches, we report differential involvement of human retinotopic areas during memory encoding of the low-level visual feature orientation. All visual areas show weaker responses when memory encoding processes are interrupted, possibly due to effects in orientation-sensitive primary visual cortex (V1) propagating across extrastriate areas. Furthermore, intermediate areas in both dorsal (V3a/b) and ventral (LO1/2) streams are significantly more active during memory encoding compared with non-memory (active and passive) processing of the same stimulus material. These effects in intermediate visual cortex are also observed during memory encoding of a different stimulus feature (spatial frequency), suggesting that these areas are involved in encoding processes on a higher level of representation. Using pattern-classification techniques to probe the representational content in visual cortex during delay periods, we further demonstrate that simply initiating memory encoding is not sufficient to produce long-lasting memory traces. Rather, active maintenance appears to underlie the observed memory-specific patterns of information in retinotopic cortex. Copyright © 2012 Elsevier Inc. All rights reserved.
Anticipation in Real-World Scenes: The Role of Visual Context and Visual Memory.
Coco, Moreno I; Keller, Frank; Malcolm, George L
2016-11-01
The human sentence processor is able to make rapid predictions about upcoming linguistic input. For example, upon hearing the verb eat, anticipatory eye-movements are launched toward edible objects in a visual scene (Altmann & Kamide, 1999). However, the cognitive mechanisms that underlie anticipation remain to be elucidated in ecologically valid contexts. Previous research has, in fact, mainly used clip-art scenes and object arrays, raising the possibility that anticipatory eye-movements are limited to displays containing a small number of objects in a visually impoverished context. In Experiment 1, we confirm that anticipation effects occur in real-world scenes and investigate the mechanisms that underlie such anticipation. In particular, we demonstrate that real-world scenes provide contextual information that anticipation can draw on: When the target object is not present in the scene, participants infer and fixate regions that are contextually appropriate (e.g., a table upon hearing eat). Experiment 2 investigates whether such contextual inference requires the co-presence of the scene, or whether memory representations can be utilized instead. The same real-world scenes as in Experiment 1 are presented to participants, but the scene disappears before the sentence is heard. We find that anticipation occurs even when the screen is blank, including when contextual inference is required. We conclude that anticipatory language processing is able to draw upon global scene representations (such as scene type) to make contextual inferences. These findings are compatible with theories assuming contextual guidance, but posit a challenge for theories assuming object-based visual indices. Copyright © 2015 Cognitive Science Society, Inc.
Logic Models: A Tool for Effective Program Planning, Collaboration, and Monitoring. REL 2014-025
ERIC Educational Resources Information Center
Kekahio, Wendy; Lawton, Brian; Cicchinelli, Louis; Brandon, Paul R.
2014-01-01
A logic model is a visual representation of the assumptions and theory of action that underlie the structure of an education program. A program can be a strategy for instruction in a classroom, a training session for a group of teachers, a grade-level curriculum, a building-level intervention, or a district- or statewide initiative. This guide, an…
Emerging Object Representations in the Visual System Predict Reaction Times for Categorization
Ritchie, J. Brendan; Tovar, David A.; Carlson, Thomas A.
2015-01-01
Recognizing an object takes just a fraction of a second, less than the blink of an eye. Applying multivariate pattern analysis, or “brain decoding”, methods to magnetoencephalography (MEG) data has allowed researchers to characterize, in high temporal resolution, the emerging representation of object categories that underlie our capacity for rapid recognition. Shortly after stimulus onset, object exemplars cluster by category in a high-dimensional activation space in the brain. In this emerging activation space, the decodability of exemplar category varies over time, reflecting the brain’s transformation of visual inputs into coherent category representations. How do these emerging representations relate to categorization behavior? Recently it has been proposed that the distance of an exemplar representation from a categorical boundary in an activation space is critical for perceptual decision-making, and that reaction times should therefore correlate with distance from the boundary. The predictions of this distance hypothesis have been borne out in human inferior temporal cortex (IT), an area of the brain crucial for the representation of object categories. When viewed in the context of a time varying neural signal, the optimal time to “read out” category information is when category representations in the brain are most decodable. Here, we show that the distance from a decision boundary through activation space, as measured using MEG decoding methods, correlates with reaction times for visual categorization during the period of peak decodability. Our results suggest that the brain begins to read out information about exemplar category at the optimal time for use in choice behavior, and support the hypothesis that the structure of the representation for objects in the visual system is partially constitutive of the decision process in recognition. PMID:26107634
Attentional enhancement of spatial resolution: linking behavioural and neurophysiological evidence
Anton-Erxleben, Katharina; Carrasco, Marisa
2014-01-01
Attention allows us to select relevant sensory information for preferential processing. Behaviourally, it improves performance in various visual tasks. One prominent effect of attention is the modulation of performance in tasks that involve the visual system’s spatial resolution. Physiologically, attention modulates neuronal responses and alters the profile and position of receptive fields near the attended location. Here, we develop a hypothesis linking the behavioural and electrophysiological evidence. The proposed framework seeks to explain how these receptive field changes enhance the visual system’s effective spatial resolution and how the same mechanisms may also underlie attentional effects on the representation of spatial information. PMID:23422910
Enhancing long-term memory with stimulation tunes visual attention in one trial.
Reinhart, Robert M G; Woodman, Geoffrey F
2015-01-13
Scientists have long proposed that memory representations control the mechanisms of attention that focus processing on the task-relevant objects in our visual field. Modern theories specifically propose that we rely on working memory to store the object representations that provide top-down control over attentional selection. Here, we show that the tuning of perceptual attention can be sharply accelerated after 20 min of noninvasive brain stimulation over medial-frontal cortex. Contrary to prevailing theories of attention, these improvements did not appear to be caused by changes in the nature of the working memory representations of the search targets. Instead, improvements in attentional tuning were accompanied by changes in an electrophysiological signal hypothesized to index long-term memory. We found that this pattern of effects was reliably observed when we stimulated medial-frontal cortex, but when we stimulated posterior parietal cortex, we found that stimulation directly affected the perceptual processing of the search array elements, not the memory representations providing top-down control. Our findings appear to challenge dominant theories of attention by demonstrating that changes in the storage of target representations in long-term memory may underlie rapid changes in the efficiency with which humans can find targets in arrays of objects.
Harvey, Ben M; Dumoulin, Serge O
2016-02-15
Several studies demonstrate that visual stimulus motion affects neural receptive fields and fMRI response amplitudes. Here we unite results of these two approaches and extend them by examining the effects of visual motion on neural position preferences throughout the hierarchy of human visual field maps. We measured population receptive field (pRF) properties using high-field fMRI (7T), characterizing position preferences simultaneously over large regions of the visual cortex. We measured pRF properties using sine wave gratings in stationary apertures, moving at various speeds in either the direction of pRF measurement or the orthogonal direction. We find direction- and speed-dependent changes in pRF preferred position and size in all visual field maps examined, including V1, V3A, and the MT+ map TO1. These effects on pRF properties increase up the hierarchy of visual field maps. However, both within and between visual field maps the extent of pRF changes was approximately proportional to pRF size. This suggests that visual motion transforms the representation of visual space similarly throughout the visual hierarchy. Visual motion can also produce an illusory displacement of perceived stimulus position. We demonstrate perceptual displacements using the same stimulus configuration. In contrast to effects on pRF properties, perceptual displacements show only weak effects of motion speed, with far larger speed-independent effects. We describe a model where low-level mechanisms could underlie the observed effects on neural position preferences. We conclude that visual motion induces similar transformations of visuo-spatial representations throughout the visual hierarchy, which may arise through low-level mechanisms. Copyright © 2015 Elsevier Inc. All rights reserved.
Visual adaptation and face perception
Webster, Michael A.; MacLeod, Donald I. A.
2011-01-01
The appearance of faces can be strongly affected by the characteristics of faces viewed previously. These perceptual after-effects reflect processes of sensory adaptation that are found throughout the visual system, but which have been considered only relatively recently in the context of higher level perceptual judgements. In this review, we explore the consequences of adaptation for human face perception, and the implications of adaptation for understanding the neural-coding schemes underlying the visual representation of faces. The properties of face after-effects suggest that they, in part, reflect response changes at high and possibly face-specific levels of visual processing. Yet, the form of the after-effects and the norm-based codes that they point to show many parallels with the adaptations and functional organization that are thought to underlie the encoding of perceptual attributes like colour. The nature and basis for human colour vision have been studied extensively, and we draw on ideas and principles that have been developed to account for norms and normalization in colour vision to consider potential similarities and differences in the representation and adaptation of faces. PMID:21536555
Inter-area correlations in the ventral visual pathway reflect feature integration
Freeman, Jeremy; Donner, Tobias H.; Heeger, David J.
2011-01-01
During object perception, the brain integrates simple features into representations of complex objects. A perceptual phenomenon known as visual crowding selectively interferes with this process. Here, we use crowding to characterize a neural correlate of feature integration. Cortical activity was measured with functional magnetic resonance imaging, simultaneously in multiple areas of the ventral visual pathway (V1–V4 and the visual word form area, VWFA, which responds preferentially to familiar letters), while human subjects viewed crowded and uncrowded letters. Temporal correlations between cortical areas were lower for crowded letters than for uncrowded letters, especially between V1 and VWFA. These differences in correlation were retinotopically specific, and persisted when attention was diverted from the letters. But correlation differences were not evident when we substituted the letters with grating patches that were not crowded under our stimulus conditions. We conclude that inter-area correlations reflect feature integration and are disrupted by crowding. We propose that crowding may perturb the transformations between neural representations along the ventral pathway that underlie the integration of features into objects. PMID:21521832
Crottaz-Herbette, Sonia; Fornari, Eleonora; Notter, Michael P; Bindschaedler, Claire; Manzoni, Laura; Clarke, Stephanie
2017-09-01
Prismatic adaptation has been repeatedly reported to alleviate neglect symptoms; in normal subjects, it was shown to enhance the representation of the left visual space within the left inferior parietal cortex. Our study aimed to determine in humans whether similar compensatory mechanisms underlie the beneficial effect of prismatic adaptation in neglect. Fifteen patients with right hemispheric lesions and 11 age-matched controls underwent a prismatic adaptation session which was preceded and followed by fMRI using a visual detection task. In patients, the prismatic adaptation session improved the accuracy of target detection in the left and central space and enhanced the representation of this visual space within the left hemisphere in parts of the temporal convexity, inferior parietal lobule and prefrontal cortex. Across patients, the increase in neuronal activation within the temporal regions correlated with performance improvements in this visual space. In control subjects, prismatic adaptation enhanced the representation of the left visual space within the left inferior parietal lobule and decreased it within the left temporal cortex. Thus, a brief exposure to prismatic adaptation enhances, both in patients and in control subjects, the competence of the left hemisphere for the left space, but the regions extended beyond the inferior parietal lobule to the temporal convexity in patients. These results suggest that the left hemisphere provides compensatory mechanisms in neglect by assuming the representation of the whole space within the ventral attentional system. The rapidity of the change suggests that the underlying mechanism relies on uncovering pre-existing synaptic connections. Copyright © 2017 Elsevier Ltd. All rights reserved.
Saccade latency reveals episodic representation of object color.
Gordon, Robert D
2014-08-01
While previous studies suggest that identity, but not color, plays a role in episodic object representation, such studies have typically used tasks in which only identity is relevant, raising the possibility that the results reflect task demands, rather than the general principles that underlie object representation. In the present study, participants viewed a preview display containing one (Experiments 1 and 2) or two (Experiment 3) letters, then viewed a target display containing a single letter, in either the same or a different location. Participants executed an immediate saccade to fixate the target; saccade latency served as the dependent variable. In all experiments, saccade latencies were longer to fixate a target appearing in its previewed location, consistent with a bias to attend to new objects rather than to objects for which episodic representations are being maintained in visual working memory. The results of Experiment 3 further demonstrate, however, that changing target color eliminates these latency differences. The results suggest that color and identity are part of episodic representation even when not task relevant and that examining biases in saccade execution may be a useful approach to studying episodic representation.
Nestor, Adrian; Vettel, Jean M; Tarr, Michael J
2013-11-01
What basic visual structures underlie human face detection and how can we extract such structures directly from the amplitude of neural responses elicited by face processing? Here, we address these issues by investigating an extension of noise-based image classification to BOLD responses recorded in high-level visual areas. First, we assess the applicability of this classification method to such data and, second, we explore its results in connection with the neural processing of faces. To this end, we construct luminance templates from white noise fields based on the response of face-selective areas in the human ventral cortex. Using behaviorally and neurally-derived classification images, our results reveal a family of simple but robust image structures subserving face representation and detection. Thus, we confirm the role played by classical face selective regions in face detection and we help clarify the representational basis of this perceptual function. From a theory standpoint, our findings support the idea of simple but highly diagnostic neurally-coded features for face detection. At the same time, from a methodological perspective, our work demonstrates the ability of noise-based image classification in conjunction with fMRI to help uncover the structure of high-level perceptual representations. Copyright © 2012 Wiley Periodicals, Inc.
Representing time in language and memory: the role of similarity structure.
Faber, Myrthe; Gennari, Silvia P
2015-03-01
Every day we read about or watch events in the world and can easily understand or remember how long they last. What aspects of an event are retained in memory? And how do we extract temporal information from our memory representations? These issues are central to human cognition, as they underlie a fundamental aspect of our mental life, namely our representation of time. This paper reviews previous language studies and reports a visual learning study indicating that properties of the events encoded in memory shape the representation of their duration. The evidence indicates that for a given event, the extent to which its associated properties or sub-components differ from one another modulates our representation of its duration. These properties include the similarity between sub-events and the similarity between the situational contexts in which an event occurs. We suggest that the diversity of representations that we associate with events in memory plays an important role in remembering and estimating the duration of experienced or described events. Copyright © 2014 Elsevier B.V. All rights reserved.
Visual Timing of Structured Dance Movements Resembles Auditory Rhythm Perception
Su, Yi-Huang; Salazar-López, Elvira
2016-01-01
Temporal mechanisms for processing auditory musical rhythms are well established, in which a perceived beat is beneficial for timing purposes. It is yet unknown whether such beat-based timing would also underlie visual perception of temporally structured, ecological stimuli connected to music: dance. In this study, we investigated whether observers extracted a visual beat when watching dance movements to assist visual timing of these movements. Participants watched silent videos of dance sequences and reproduced the movement duration by mental recall. We found better visual timing for limb movements with regular patterns in the trajectories than without, similar to the beat advantage for auditory rhythms. When movements involved both the arms and the legs, the benefit of a visual beat relied only on the latter. The beat-based advantage persisted despite auditory interferences that were temporally incongruent with the visual beat, arguing for the visual nature of these mechanisms. Our results suggest that visual timing principles for dance parallel their auditory counterparts for music, which may be based on common sensorimotor coupling. These processes likely yield multimodal rhythm representations in the scenario of music and dance. PMID:27313900
Normalization as a canonical neural computation
Carandini, Matteo; Heeger, David J.
2012-01-01
There is increasing evidence that the brain relies on a set of canonical neural computations, repeating them across brain regions and modalities to apply similar operations to different problems. A promising candidate for such a computation is normalization, in which the responses of neurons are divided by a common factor that typically includes the summed activity of a pool of neurons. Normalization was developed to explain responses in the primary visual cortex and is now thought to operate throughout the visual system, and in many other sensory modalities and brain regions. Normalization may underlie operations such as the representation of odours, the modulatory effects of visual attention, the encoding of value and the integration of multisensory information. Its presence in such a diversity of neural systems in multiple species, from invertebrates to mammals, suggests that it serves as a canonical neural computation. PMID:22108672
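The canonical normalization computation described above has a compact mathematical form (Carandini and Heeger's standard model): each neuron's driving input is raised to a power and divided by a semi-saturation constant plus the summed, similarly exponentiated activity of a normalization pool. The sketch below is a minimal illustration of that equation; the parameter values are arbitrary choices for demonstration.

```python
def normalize(drives, sigma=1.0, n=2.0, gamma=1.0):
    """Divisive normalization: R_i = gamma * D_i^n / (sigma^n + sum_j D_j^n).

    drives: driving inputs D_i of the neurons sharing one normalization pool.
    sigma:  semi-saturation constant; n: exponent; gamma: response gain.
    """
    pool = sum(d ** n for d in drives)
    return [gamma * d ** n / (sigma ** n + pool) for d in drives]

# Weak drive: the constant sigma dominates, responses grow with input.
weak = normalize([1.0, 1.0, 1.0, 1.0])      # each response = 1 / (1 + 4) = 0.2
# Strong drive: the pool dominates, responses saturate near gamma / len(pool).
strong = normalize([10.0, 10.0, 10.0, 10.0])  # each response = 100 / 401 ≈ 0.249
```

The saturation behavior shown in the second call is the contrast gain control that motivated the model in primary visual cortex: once the pool term swamps sigma, scaling all inputs together leaves the normalized responses nearly unchanged.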
Kiefer, Markus; Ansorge, Ulrich; Haynes, John-Dylan; Hamker, Fred; Mattler, Uwe; Verleger, Rolf; Niedeggen, Michael
2011-01-01
Psychological and neuroscience approaches have promoted much progress in elucidating the cognitive and neural mechanisms that underlie phenomenal visual awareness during the last decades. In this article, we provide an overview of the latest research investigating important phenomena in conscious and unconscious vision. We identify general principles to characterize conscious and unconscious visual perception, which may serve as important building blocks for a unified model to explain the plethora of findings. We argue that in particular the integration of principles from both conscious and unconscious vision is advantageous and provides critical constraints for developing adequate theoretical models. Based on the principles identified in our review, we outline essential components of a unified model of conscious and unconscious visual perception. We propose that awareness refers to consolidated visual representations, which are accessible to the entire brain and therefore globally available. However, visual awareness not only depends on consolidation within the visual system, but is additionally the result of a post-sensory gating process, which is mediated by higher-level cognitive control mechanisms. We further propose that amplification of visual representations by attentional sensitization is not exclusive to the domain of conscious perception, but also applies to visual stimuli, which remain unconscious. Conscious and unconscious processing modes are highly interdependent with influences in both directions. We therefore argue that exactly this interdependence renders a unified model of conscious and unconscious visual perception valuable. Computational modeling jointly with focused experimental research could lead to a better understanding of the plethora of empirical phenomena in consciousness research. PMID:22253669
Retinotopic maps and foveal suppression in the visual cortex of amblyopic adults.
Conner, Ian P; Odom, J Vernon; Schwartz, Terry L; Mendola, Janine D
2007-08-15
Amblyopia is a developmental visual disorder associated with loss of monocular acuity and sensitivity as well as profound alterations in binocular integration. Abnormal connections in visual cortex are known to underlie this loss, but the extent to which these abnormalities are regionally or retinotopically specific has not been fully determined. This functional magnetic resonance imaging (fMRI) study compared the retinotopic maps in visual cortex produced by each individual eye in 19 adults (7 esotropic strabismics, 6 anisometropes and 6 controls). In our standard viewing condition, the non-tested eye viewed a dichoptic homogeneous mid-level grey stimulus, thereby permitting some degree of binocular interaction. Regions-of-interest analysis was performed for extrafoveal V1, extrafoveal V2 and the foveal representation at the occipital pole. In general, the blood oxygenation level-dependent (BOLD) signal was reduced for the amblyopic eye. At the occipital pole, population receptive fields were shifted to represent more parafoveal locations for the amblyopic eye, compared with the fellow eye, in some subjects. Interestingly, occluding the fellow eye caused an expanded foveal representation for the amblyopic eye in one early-onset strabismic subject with binocular suppression, indicating real-time cortical remapping. In addition, a few subjects actually showed increased activity in parietal and temporal cortex when viewing with the amblyopic eye. We conclude that, even in a heterogeneous population, abnormal early visual experience commonly leads to regionally specific cortical adaptations.
Ipser, Alberta; Ring, Melanie; Murphy, Jennifer; Gaigg, Sebastian B; Cook, Richard
2016-05-01
Considerable research has addressed whether the cognitive and neural representations recruited by faces are similar to those engaged by other types of visual stimuli. For example, research has examined the extent to which objects of expertise recruit holistic representation and engage the fusiform face area. Little is known, however, about the domain-specificity of the exemplar pooling processes thought to underlie the acquisition of familiarity with particular facial identities. In the present study we sought to compare observers' ability to learn facial identities and handwriting styles from exposure to multiple exemplars. Crucially, while handwritten words and faces differ considerably in their topographic form, both learning tasks share a common exemplar pooling component. In our first experiment, we find that typical observers' ability to learn facial identities and handwriting styles from exposure to multiple exemplars correlates closely. In our second experiment, we show that observers with Autism Spectrum Disorder (ASD) are impaired at both learning tasks. Our findings suggest that similar exemplar pooling processes are recruited when learning facial identities and handwriting styles. Models of exemplar pooling, originally developed to explain face learning, may therefore offer valuable insights into exemplar pooling across a range of domains, extending beyond faces. Aberrant exemplar pooling, possibly resulting from structural differences in the inferior longitudinal fasciculus, may underlie difficulties recognising familiar faces often experienced by individuals with ASD, and leave observers overly reliant on local details present in particular exemplars.
McMenamin, Brenton W.; Deason, Rebecca G.; Steele, Vaughn R.; Koutstaal, Wilma; Marsolek, Chad J.
2014-01-01
Previous research indicates that dissociable neural subsystems underlie abstract-category (AC) recognition and priming of objects (e.g., cat, piano) and specific-exemplar (SE) recognition and priming of objects (e.g., a calico cat, a different calico cat, a grand piano, etc.). However, the degree of separability between these subsystems is not known, despite the importance of this issue for assessing relevant theories. Visual object representations are widely distributed in visual cortex; thus, a multivariate pattern analysis (MVPA) approach to analyzing functional magnetic resonance imaging (fMRI) data may be critical for assessing the separability of different kinds of visual object processing. Here we examined the neural representations of visual object categories and visual object exemplars using multi-voxel pattern analyses of brain activity elicited in visual object processing areas during a repetition-priming task. In the encoding phase, participants viewed visual objects and the printed names of other objects. In the subsequent test phase, participants identified objects that were either same-exemplar primed, different-exemplar primed, word-primed, or unprimed. In visual object processing areas, classifiers were trained to distinguish same-exemplar primed objects from word-primed objects. Then, the abilities of these classifiers to discriminate different-exemplar primed objects and word-primed objects (reflecting AC priming) and to discriminate same-exemplar primed objects and different-exemplar primed objects (reflecting SE priming) were assessed. Results indicated that (a) repetition priming in occipital-temporal regions is organized asymmetrically, such that AC priming is more prevalent in the left hemisphere and SE priming is more prevalent in the right hemisphere, and (b) AC and SE subsystems are weakly modular, not strongly modular or unified. PMID:25528436
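The cross-classification logic described here (train a classifier on one condition pair, then test how it generalizes to other condition pairs) can be sketched in a few lines. Everything below is an illustrative assumption, not the study's actual pipeline: the synthetic voxel patterns, the signal weights, and the nearest-centroid classifier are all invented to show the scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_trials = 50, 40

# Illustrative pattern axes: an abstract-category (AC) component shared by
# same- and different-exemplar priming, plus a weaker specific-exemplar (SE)
# component present only for same-exemplar priming. All weights are made up.
category_axis = rng.normal(size=n_voxels)
exemplar_axis = rng.normal(size=n_voxels)

def simulate(ac, se):
    """Noisy trial-by-voxel patterns with given AC and SE signal weights."""
    base = ac * category_axis + se * exemplar_axis
    return base + rng.normal(scale=2.0, size=(n_trials, n_voxels))

same_exemplar = simulate(ac=1.0, se=0.5)
diff_exemplar = simulate(ac=1.0, se=0.0)
word_primed = simulate(ac=0.0, se=0.0)

# "Train" a nearest-centroid classifier: same-exemplar primed vs word-primed.
c_same = same_exemplar.mean(axis=0)
c_word = word_primed.mean(axis=0)

def frac_primed(patterns):
    """Fraction of patterns falling nearer the 'primed' centroid."""
    d_same = np.linalg.norm(patterns - c_same, axis=1)
    d_word = np.linalg.norm(patterns - c_word, axis=1)
    return float(np.mean(d_same < d_word))

# AC priming: different-exemplar trials generalize to the "primed" side
# because they share the category component with the training condition.
ac_score = frac_primed(diff_exemplar)
print(f"different-exemplar trials classified as primed: {ac_score:.2f}")
```

The design choice mirrors the abstract: generalization from same-exemplar training to different-exemplar test trials reflects the category-level (AC) signal, while discriminating same- from different-exemplar trials would isolate the exemplar-level (SE) signal.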
Nonretinotopic visual processing in the brain.
Melcher, David; Morrone, Maria Concetta
2015-01-01
A basic principle in visual neuroscience is the retinotopic organization of neural receptive fields. Here, we review behavioral, neurophysiological, and neuroimaging evidence for nonretinotopic processing of visual stimuli. A number of behavioral studies have shown that perception can depend on object-based or external-space coordinate systems, in addition to retinal coordinates. Both single-cell neurophysiology and neuroimaging have provided evidence for the modulation of neural firing by gaze position and processing of visual information based on craniotopic or spatiotopic coordinates. Transient remapping of the spatial and temporal properties of neurons contingent on saccadic eye movements has been demonstrated in visual cortex, as well as frontal and parietal areas involved in saliency/priority maps, and is a good candidate to mediate some of the spatial invariance demonstrated by perception. Recent studies suggest that spatiotopic selectivity depends on a low spatial resolution system of maps that operates over a longer time frame than retinotopic processing and is strongly modulated by high-level cognitive factors such as attention. The interaction of an initial and rapid retinotopic processing stage, tied to new fixations, and a longer lasting but less precise nonretinotopic level of visual representation could underlie the perception of both a detailed and a stable visual world across saccadic eye movements.
Visual memory for moving scenes.
DeLucia, Patricia R; Maldia, Maria M
2006-02-01
In the present study, memory for picture boundaries was measured with scenes that simulated self-motion along the depth axis. The results indicated that boundary extension (a distortion in memory for picture boundaries) occurred with moving scenes in the same manner as that reported previously for static scenes. Furthermore, motion affected memory for the boundaries but this effect of motion was not consistent with representational momentum of the self (memory being further forward in a motion trajectory than actually shown). We also found that memory for the final position of the depicted self in a moving scene was influenced by properties of the optical expansion pattern. The results are consistent with a conceptual framework in which the mechanisms that underlie boundary extension and representational momentum (a) process different information and (b) both contribute to the integration of successive views of a scene while the scene is changing.
The Puzzle of Visual Development: Behavior and Neural Limits.
Kiorpes, Lynne
2016-11-09
The development of visual function takes place over many months or years in primate infants. Visual sensitivity is very poor near birth and improves over different time courses for different visual functions. The neural mechanisms that underlie these processes are not well understood despite many decades of research. The puzzle arises because research into the factors that limit visual function in infants has found surprisingly mature neural organization and adult-like receptive field properties in very young infants. The high degree of visual plasticity that has been documented during the sensitive period in young children and animals leaves the brain vulnerable to abnormal visual experience. Abnormal visual experience during the sensitive period can lead to amblyopia, a developmental disorder of vision affecting ∼3% of children. This review provides a historical perspective on research into visual development and the disorder amblyopia. The mismatch between the status of the primary visual cortex and visual behavior, both during visual development and in amblyopia, is discussed, and several potential resolutions are considered. It seems likely that extrastriate visual areas further along the visual pathways may set important limits on visual function and show greater vulnerability to abnormal visual experience. Analyses based on multiunit, population activity may provide useful representations of the information being fed forward from primary visual cortex to extrastriate processing areas and to the motor output.
Perceptual learning: toward a comprehensive theory.
Watanabe, Takeo; Sasaki, Yuka
2015-01-03
Visual perceptual learning (VPL) is a long-term performance increase resulting from visual perceptual experience. Task-relevant VPL of a feature results from training of a task on the feature relevant to the task. Task-irrelevant VPL arises as a result of exposure to the feature irrelevant to the trained task. At least two serious problems exist. First, there is controversy over which stage of information processing is changed in association with task-relevant VPL. Second, no model has ever explained both task-relevant and task-irrelevant VPL. Here we propose a dual plasticity model in which feature-based plasticity is a change in the representation of the learned feature, and task-based plasticity is a change in processing of the trained task. Although both types of plasticity underlie task-relevant VPL, only feature-based plasticity underlies task-irrelevant VPL. This model provides a new comprehensive framework in which apparently contradictory results could be explained.
Holistic processing is finely tuned for faces of one's own race.
Michel, Caroline; Rossion, Bruno; Han, Jaehyun; Chung, Chan-Sup; Caldara, Roberto
2006-07-01
Recognizing individual faces outside one's race poses difficulty, a phenomenon known as the other-race effect. Most researchers agree that this effect results from differential experience with same-race (SR) and other-race (OR) faces. However, the specific processes that develop with visual experience and underlie the other-race effect remain to be clarified. We tested whether the integration of facial features into a whole representation (holistic processing) was larger for SR than OR faces in Caucasians and Asians without life experience with OR faces. For both classes of participants, recognition of the upper half of a composite-face stimulus was more disrupted by the bottom half (the composite-face effect) for SR than OR faces, demonstrating that SR faces are processed more holistically than OR faces. This differential holistic processing for faces of different races, probably a by-product of visual experience, may be a critical factor in the other-race effect.
When Emotion Blinds: A Spatiotemporal Competition Account of Emotion-Induced Blindness
Wang, Lingling; Kennedy, Briana L.; Most, Steven B.
2012-01-01
Emotional visual scenes are such powerful attractors of attention that they can disrupt perception of other stimuli that appear soon afterward, an effect known as emotion-induced blindness. What mechanisms underlie this impact of emotion on perception? Evidence suggests that emotion-induced blindness may be distinguishable from closely related phenomena such as the orienting of spatial attention to emotional stimuli or the central resource bottlenecks commonly associated with the attentional blink. Instead, we suggest that emotion-induced blindness reflects relatively early competition between targets and emotional distractors, where spontaneous prioritization of emotional stimuli leads to suppression of competing perceptual representations potentially linked to an overlapping point in time and space. PMID:23162497
Structure Building Predicts Grades in College Psychology and Biology
ERIC Educational Resources Information Center
Arnold, Kathleen M.; Daniel, David B.; Jensen, Jamie L.; McDaniel, Mark A.; Marsh, Elizabeth J.
2016-01-01
Knowing what skills underlie college success can allow students, teachers, and universities to identify and to help at-risk students. One skill that may underlie success across a variety of subject areas is structure building, the ability to create mental representations of narratives (Gernsbacher, Varner, & Faust, 1990). We tested if…
Neural entrainment to rhythmic speech in children with developmental dyslexia
Power, Alan J.; Mead, Natasha; Barnes, Lisa; Goswami, Usha
2013-01-01
A rhythmic paradigm based on repetition of the syllable “ba” was used to study auditory, visual, and audio-visual oscillatory entrainment to speech in children with and without dyslexia using EEG. Children pressed a button whenever they identified a delay in the isochronous stimulus delivery (500 ms; 2 Hz delta band rate). Response power, strength of entrainment and preferred phase of entrainment in the delta and theta frequency bands were compared between groups. The quality of stimulus representation was also measured using cross-correlation of the stimulus envelope with the neural response. The data showed a significant group difference in the preferred phase of entrainment in the delta band in response to the auditory and audio-visual stimulus streams. A different preferred phase has significant implications for the quality of speech information that is encoded neurally, as it implies enhanced neuronal processing (phase alignment) at less informative temporal points in the incoming signal. Consistent with this possibility, the cross-correlogram analysis revealed superior stimulus representation by the control children, who showed a trend for larger peak r-values and significantly later lags in peak r-values compared to participants with dyslexia. Significant relationships between both peak r-values and peak lags were found with behavioral measures of reading. The data indicate that the auditory temporal reference frame for speech processing is atypical in developmental dyslexia, with low frequency (delta) oscillations entraining to a different phase of the rhythmic syllabic input. This would affect the quality of encoding of speech, and could underlie the cognitive impairments in phonological representation that are the behavioral hallmark of this developmental disorder across languages. PMID:24376407
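The stimulus-representation measure described here (cross-correlating the stimulus envelope with the neural response and reading off the peak r-value and its lag) can be illustrated on synthetic signals. The sampling rate, the 2 Hz envelope, the 120 ms neural delay, and the noise level below are all assumed for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 100                      # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)  # 10 s of signal

# Hypothetical 2 Hz "syllable" envelope and a noisy, delayed neural response.
envelope = 0.5 * (1 + np.sin(2 * np.pi * 2 * t))
true_delay = 0.12             # 120 ms neural lag (assumed)
shift = int(true_delay * fs)
response = np.roll(envelope, shift) + rng.normal(scale=0.5, size=t.size)

# Cross-correlate at a range of lags and report the peak r and its lag,
# analogous to the stimulus-representation measure in the study.
max_lag = int(0.25 * fs)      # search lags up to 250 ms
lags = np.arange(0, max_lag + 1)
rs = [np.corrcoef(envelope[: t.size - L], response[L:])[0, 1] for L in lags]
peak = int(np.argmax(rs))
print(f"peak r = {rs[peak]:.2f} at lag = {lags[peak] / fs * 1000:.0f} ms")
```

On these synthetic signals the peak r-value lands near the built-in 120 ms delay; in the study, larger peak r-values and later peak lags were the markers that distinguished control children from children with dyslexia.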
Hogendoorn, Hinze; Burkitt, Anthony N
2018-05-01
Due to the delays inherent in neuronal transmission, our awareness of sensory events necessarily lags behind the occurrence of those events in the world. If the visual system did not compensate for these delays, we would consistently mislocalize moving objects behind their actual position. Anticipatory mechanisms that might compensate for these delays have been reported in animals, and such mechanisms have also been hypothesized to underlie perceptual effects in humans such as the Flash-Lag Effect. However, to date no direct physiological evidence for anticipatory mechanisms has been found in humans. Here, we apply multivariate pattern classification to time-resolved EEG data to investigate anticipatory coding of object position in humans. By comparing the time-course of neural position representation for objects in both random and predictable apparent motion, we isolated anticipatory mechanisms that could compensate for neural delays when motion trajectories were predictable. As well as revealing an early neural position representation (lag 80-90 ms) that was unaffected by the predictability of the object's trajectory, we demonstrate a second neural position representation at 140-150 ms that was distinct from the first, and that was pre-activated ahead of the moving object when it moved on a predictable trajectory. The latency advantage for predictable motion was approximately 16 ± 2 ms. To our knowledge, this provides the first direct experimental neurophysiological evidence of anticipatory coding in human vision, revealing the time-course of predictive mechanisms without using a spatial proxy for time. The results are numerically consistent with earlier animal work, and suggest that current models of spatial predictive coding in visual cortex can be effectively extended into the temporal domain.
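The latency comparison reported here (when position decoding rises above chance for predictable vs unpredictable trajectories) can be sketched with toy decoding time-courses. The sigmoidal curves, onsets, threshold, and time grid are invented for illustration; only the roughly 16 ms figure comes from the abstract:

```python
import numpy as np

t = np.arange(0, 300, 2)  # time from stimulus onset, ms (2 ms steps, assumed)

def decoding_curve(onset_ms):
    """Toy position-decoding accuracy: chance (0.5) rising sigmoidally."""
    return 0.5 + 0.3 / (1 + np.exp(-(t - onset_ms) / 10))

random_motion = decoding_curve(onset_ms=150)
predictable = decoding_curve(onset_ms=134)   # pre-activated representation

def onset(curve, threshold=0.55):
    """First time point where decoding accuracy exceeds threshold."""
    return t[np.argmax(curve > threshold)]

advantage = onset(random_motion) - onset(predictable)
print(f"latency advantage for predictable motion: {advantage} ms")
```

The threshold-crossing estimator is one common way to compare decoding onsets; the study's actual analysis compared the full time-courses of pattern-classifier performance.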
Dissociation of quantifiers and object nouns in speech in focal neurodegenerative disease.
Ash, Sharon; Ternes, Kylie; Bisbing, Teagan; Min, Nam Eun; Moran, Eileen; York, Collin; McMillan, Corey T; Irwin, David J; Grossman, Murray
2016-08-01
Quantifiers such as many and some are thought to depend in part on the conceptual representation of number knowledge, while object nouns such as cookie and boy appear to depend in part on visual feature knowledge associated with object concepts. Further, number knowledge is associated with a frontal-parietal network while object knowledge is related in part to anterior and ventral portions of the temporal lobe. We examined the cognitive and anatomic basis for the spontaneous speech production of quantifiers and object nouns in non-aphasic patients with focal neurodegenerative disease associated with corticobasal syndrome (CBS, n=33), behavioral variant frontotemporal degeneration (bvFTD, n=54), and semantic variant primary progressive aphasia (svPPA, n=19). We recorded a semi-structured speech sample elicited from patients and healthy seniors (n=27) during description of the Cookie Theft scene. We observed a dissociation: CBS and bvFTD were significantly impaired in the production of quantifiers but not object nouns, while svPPA were significantly impaired in the production of object nouns but not quantifiers. MRI analysis revealed that quantifier production deficits in CBS and bvFTD were associated with disease in a frontal-parietal network important for number knowledge, while impaired production of object nouns in all patient groups was related to disease in inferior temporal regions important for representations of visual feature knowledge of objects. These findings imply that partially dissociable representations in semantic memory may underlie different segments of the lexicon.
Pecher, Diane; Zeelenberg, René; Barsalou, Lawrence W
2004-02-01
According to the perceptual symbols theory (Barsalou, 1999), sensorimotor simulations underlie the representation of concepts. Simulations are componential in the sense that they vary with the context in which the concept is presented. In the present study, we investigated whether representations are affected by recent experiences with a concept. Concept names (e.g., APPLE) were presented twice in a property verification task with a different property on each occasion. The two properties were either from the same perceptual modality (e.g., green, shiny) or from different modalities (e.g., tart, shiny). All stimuli were words. There was a lag of several intervening trials between the first and second presentation. Verification times and error rates for the second presentation of the concept were higher if the properties were from different modalities than if they were from the same modality.
Teachers' Practices and Mental Models: Transformation through Reflection on Action
ERIC Educational Resources Information Center
Manrique, María Soledad; Sánchez Abchi, Verónica
2015-01-01
This contribution explores the relationship between teaching practices, teaching discourses and teachers' implicit representations and mental models and the way these dimensions change through teacher education (T.E). In order to study these relationships, and based on the assumptions that representations underlie teaching practices and that T.E…
Representing metarepresentations: is there theory of mind-specific cognition?
Egeth, Marc; Kurzban, Robert
2009-03-01
What cognitive mechanisms underlie Theory of Mind? Some infer domain-specific Theory of Mind cognition based on the pattern of children diagnosed with autism failing the False Belief test but passing the False Photograph test. However, we argue that the False Belief test entails various task demands that the False Photograph task does not, including the necessity to represent a higher-order representation (a metarepresentation), thus confounding the inference of domain-specificity. Instead, a general difficulty that affects representations of metarepresentations might account for the seeming domain-specific failure. Here we find that False-Belief failing, False-Photograph passing children fail the Meta Photograph test, a new photograph-domain test that requires subjects to represent a metarepresentation. We conclude that people who fail the False Belief test but pass the False Photograph test do not necessarily have a content-specific Theory of Mind deficit. Instead, the general ability to represent representations and metarepresentations might underlie Theory of Mind.
Dillon, Moira R.; Spelke, Elizabeth S.
2015-01-01
Research on animals, infants, children, and adults provides evidence that distinct cognitive systems underlie navigation and object recognition. Here we examine whether and how these systems interact when children interpret 2D edge-based perspectival line drawings of scenes and objects. Such drawings serve as symbols early in development, and they preserve scene and object geometry from canonical points of view. Young children show limits when using geometry both in non-symbolic tasks and in symbolic map tasks that present 3D contexts from unusual, unfamiliar points of view. When presented with the familiar viewpoints in perspectival line drawings, however, do children engage more integrated geometric representations? In three experiments, children successfully interpreted line drawings with respect to their depicted scene or object. Nevertheless, children recruited distinct processes when navigating based on the information in these drawings, and these processes depended on the context in which the drawings were presented. These results suggest that children are flexible but limited in using geometric information to form integrated representations of scenes and objects, even when interpreting spatial symbols that are highly familiar and faithful renditions of the visual world. PMID:25441089
Visual Memories Bypass Normalization.
Bloem, Ilona M; Watanabe, Yurika L; Kibbe, Melissa M; Ling, Sam
2018-05-01
How distinct are visual memory representations from visual perception? Although evidence suggests that briefly remembered stimuli are represented within early visual cortices, the degree to which these memory traces resemble true visual representations remains something of a mystery. Here, we tested whether both visual memory and perception succumb to a seemingly ubiquitous neural computation: normalization. Observers were asked to remember the contrast of visual stimuli, which were pitted against each other to promote normalization either in perception or in visual memory. Our results revealed robust normalization between visual representations in perception, yet no signature of normalization occurring between working memory stores: neither between representations in memory nor between memory representations and visual inputs. These results provide unique insight into the nature of visual memory representations, illustrating that visual memory representations follow a different set of computational rules, bypassing normalization, a canonical visual computation.
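Divisive normalization, the canonical computation this study tests, can be written in one line: a unit's response is its own drive divided by a pooled sum of concurrent drives. The numbers below are arbitrary; the sketch only shows that a second stimulus suppresses the response to the first under normalization, whereas the reported result implies a remembered stimulus escapes this division:

```python
import numpy as np

def normalized_response(drive, pooled_drives, sigma=1.0):
    """Canonical divisive normalization: response = drive / (sigma + pool)."""
    return drive / (sigma + np.sum(pooled_drives))

# Perception: stimulus A alone vs A with a concurrent stimulus B.
alone = normalized_response(10.0, [10.0])         # A normalizes only itself
paired = normalized_response(10.0, [10.0, 10.0])  # B joins the pool: suppression

# Visual memory, per the result above: hypothetically, the memory trace is
# not entered into the normalization pool shared with new visual input, so
# its representation keeps the same strength as a stimulus presented alone.
remembered = normalized_response(10.0, [10.0])

print(f"alone={alone:.2f} paired={paired:.2f} remembered={remembered:.2f}")
```

The contrast between `paired` and `remembered` is the computational signature the study looked for: suppression between concurrent percepts, but none between memory stores and visual inputs.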
Brooks, Cassandra J.; Chan, Yu Man; Anderson, Andrew J.; McKendrick, Allison M.
2018-01-01
Within each sensory modality, age-related deficits in temporal perception contribute to the difficulties older adults experience when performing everyday tasks. Since perceptual experience is inherently multisensory, older adults also face the added challenge of appropriately integrating or segregating the auditory and visual cues present in our dynamic environment into coherent representations of distinct objects. As such, many studies have investigated how older adults perform when integrating temporal information across audition and vision. This review covers both direct judgments about temporal information (the sound-induced flash illusion, temporal order, perceived synchrony, and temporal rate discrimination) and judgments regarding stimuli containing temporal information (the audiovisual bounce effect and speech perception). Although an age-related increase in integration has been demonstrated on a variety of tasks, research specifically investigating the ability of older adults to integrate temporal auditory and visual cues has produced disparate results. In this short review, we explore what factors could underlie these divergent findings. We conclude that both task-specific differences and age-related sensory loss play a role in the reported disparity in age-related effects on the integration of auditory and visual temporal information. PMID:29867415
Time in the Mind: Using Space to Think about Time
ERIC Educational Resources Information Center
Casasanto, Daniel; Boroditsky, Lera
2008-01-01
How do we construct abstract ideas like justice, mathematics, or time-travel? In this paper we investigate whether mental representations that result from physical experience underlie people's more abstract mental representations, using the domains of space and time as a testbed. People often talk about time using spatial language (e.g., a "long"…
Kitada, Ryo; Okamoto, Yuko; Sasaki, Akihiro T.; Kochiyama, Takanori; Miyahara, Motohide; Lederman, Susan J.; Sadato, Norihiro
2012-01-01
Face perception is critical for social communication. Given its fundamental importance in the course of evolution, the innate neural mechanisms can anticipate the computations necessary for representing faces. However, the effect of visual deprivation on the formation of neural mechanisms that underlie face perception is largely unknown. We previously showed that sighted individuals can recognize basic facial expressions by haptics surprisingly well. Moreover, the inferior frontal gyrus (IFG) and posterior superior temporal sulcus (pSTS) in the sighted subjects are involved in haptic and visual recognition of facial expressions. Here, we conducted both psychophysical and functional magnetic-resonance imaging (fMRI) experiments to determine the nature of the neural representation that subserves the recognition of basic facial expressions in early blind individuals. In a psychophysical experiment, both early blind and sighted subjects haptically identified basic facial expressions at levels well above chance. In the subsequent fMRI experiment, both groups haptically identified facial expressions and shoe types (control). The sighted subjects then completed the same task visually. Within brain regions activated by the visual and haptic identification of facial expressions (relative to that of shoes) in the sighted group, corresponding haptic identification in the early blind activated regions in the inferior frontal and middle temporal gyri. These results suggest that the neural system that underlies the recognition of basic facial expressions develops supramodally even in the absence of early visual experience. PMID:23372547
Olivetti Belardinelli, Marta; Santangelo, Valerio
2005-07-08
This paper examines the characteristics of spatial attention orienting in situations of visual impairment. Two groups of subjects, respectively schizophrenic and blind, with different degrees of visual spatial information impairment, were tested. In Experiment 1, the schizophrenic subjects were instructed to detect an auditory target, which was preceded by a visual cue. The cue could appear in the same location as the target, or separated from it by the vertical visual meridian (VM), the vertical head-centered meridian (HCM), or another meridian. Similarly to normal subjects tested with the same paradigm (Ferlazzo, Couyoumdjian, Padovani, and Olivetti Belardinelli, 2002), schizophrenic subjects showed slower reaction times (RTs) when cued, and when the target locations were on the opposite sides of the HCM. This HCM effect strengthens the assumption that different auditory and visual spatial maps underlie the representation of attention orienting mechanisms. In Experiment 2, blind subjects were asked to detect an auditory target, which had been preceded by an auditory cue, while staring at an imaginary point. The point was located either to the left or to the right, in order to control for ocular movements and maintain the dissociation between the HCM and the VM. Differences between crossing and no-crossing conditions of HCM were not found. Therefore it is possible to consider the HCM effect as a consequence of the interaction between visual and auditory modalities. Related theoretical issues are also discussed.
Potential roles of cholinergic modulation in the neural coding of location and movement speed
Dannenberg, Holger; Hinman, James R.; Hasselmo, Michael E.
2016-01-01
Behavioral data suggest that cholinergic modulation may play a role in certain aspects of spatial memory, and neurophysiological data demonstrate neurons that fire in response to spatial dimensions, including grid cells and place cells that respond on the basis of location and running speed. These neurons show firing responses that depend upon the visual configuration of the environment, due to coding in visually-responsive regions of the neocortex. This review focuses on the physiological effects of acetylcholine that may influence the sensory coding of spatial dimensions relevant to behavior. In particular, the local circuit effects of acetylcholine within the cortex regulate the influence of sensory input relative to internal memory representations, via presynaptic inhibition of excitatory and inhibitory synaptic transmission, and the modulation of intrinsic currents in cortical excitatory and inhibitory neurons. In addition, circuit effects of acetylcholine regulate the dynamics of cortical circuits including oscillations at theta and gamma frequencies. These effects of acetylcholine on local circuits and network dynamics could underlie the role of acetylcholine in coding of spatial information for the performance of spatial memory tasks. PMID:27677935
Viewing the dynamics and control of visual attention through the lens of electrophysiology
Woodman, Geoffrey F.
2013-01-01
How we find what we are looking for in complex visual scenes is a seemingly simple ability that has taken half a century to unravel. The first study to use the term visual search showed that as the number of objects in a complex scene increases, observers’ reaction times increase proportionally (Green and Anderson, 1956). This observation suggests that our ability to process the objects in the scenes is limited in capacity. However, if it is known that the target will have a certain feature attribute, for example, that it will be red, then only an increase in the number of red items increases reaction time. This observation suggests that we can control which visual inputs receive the benefit of our limited capacity to recognize the objects, such as those defined by the color red, as the items we seek. The nature of the mechanisms that underlie these basic phenomena in the visual search literature has been more difficult to determine definitively. In this paper, I discuss how electrophysiological methods have provided us with the necessary tools to understand the nature of the mechanisms that give rise to the effects observed in the first visual search paper. I begin by describing how recordings of event-related potentials from humans and nonhuman primates have shown us how attention is deployed to possible target items in complex visual scenes. Then, I discuss how event-related potential experiments have allowed us to directly measure the memory representations that are used to guide these deployments of attention to items with target-defining features. PMID:23357579
Automated objective characterization of visual field defects in 3D
NASA Technical Reports Server (NTRS)
Fink, Wolfgang (Inventor)
2006-01-01
A method and apparatus for electronically performing a visual field test for a patient. A visual field test pattern is displayed to the patient on an electronic display device and the patient's responses to the visual field test pattern are recorded. A visual field representation is generated from the patient's responses. The visual field representation is then used as an input into a variety of automated diagnostic processes. In one process, the visual field representation is used to generate a statistical description of the rapidity of change of a patient's visual field at the boundary of a visual field defect. In another process, the area of a visual field defect is calculated using the visual field representation. In another process, the visual field representation is used to generate a statistical description of the volume of a patient's visual field defect.
Flexible Coding of Visual Working Memory Representations during Distraction.
Lorenc, Elizabeth S; Sreenivasan, Kartik K; Nee, Derek E; Vandenbroucke, Annelinde R E; D'Esposito, Mark
2018-06-06
Visual working memory (VWM) recruits a broad network of brain regions, including prefrontal, parietal, and visual cortices. Recent evidence supports a "sensory recruitment" model of VWM, whereby precise visual details are maintained in the same stimulus-selective regions responsible for perception. A key question in evaluating the sensory recruitment model is how VWM representations persist through distracting visual input, given that the early visual areas that putatively represent VWM content are susceptible to interference from visual stimulation. To address this question, we used a functional magnetic resonance imaging inverted encoding model approach to quantitatively assess the effect of distractors on VWM representations in early visual cortex and the intraparietal sulcus (IPS), another region previously implicated in the storage of VWM information. This approach allowed us to reconstruct VWM representations for orientation, both before and after visual interference, and to examine whether oriented distractors systematically biased these representations. In our human participants (both male and female), we found that orientation information was maintained simultaneously in early visual areas and IPS in anticipation of possible distraction, and these representations persisted in the absence of distraction. Importantly, early visual representations were susceptible to interference; VWM orientations reconstructed from visual cortex were significantly biased toward distractors, corresponding to a small attractive bias in behavior. In contrast, IPS representations did not show such a bias. These results provide quantitative insight into the effect of interference on VWM representations, and they suggest a dynamic tradeoff between visual and parietal regions that allows flexible adaptation to task demands in service of VWM.
SIGNIFICANCE STATEMENT Despite considerable evidence that stimulus-selective visual regions maintain precise visual information in working memory, it remains unclear how these representations persist through subsequent input. Here, we used quantitative model-based fMRI analyses to reconstruct the contents of working memory and examine the effects of distracting input. Although representations in the early visual areas were systematically biased by distractors, those in the intraparietal sulcus appeared distractor-resistant. In contrast, early visual representations were most reliable in the absence of distraction. These results demonstrate the dynamic, adaptive nature of visual working memory processes, and provide quantitative insight into the ways in which representations can be affected by interference. Further, they suggest that current models of working memory should be revised to incorporate this flexibility. Copyright © 2018 the authors.
Commonalities between Perception and Cognition
Tacca, Michela C.
2011-01-01
Perception and cognition are highly interrelated. Given the influence that these systems exert on one another, it is important to explain how perceptual representations and cognitive representations interact. In this paper, I analyze the similarities between visual perceptual representations and cognitive representations in terms of their structural properties and content. Specifically, I argue that the spatial structure underlying visual object representation displays systematicity - a property that is considered to be characteristic of propositional cognitive representations. To this end, I propose a logical characterization of visual feature binding as described by Treisman's Feature Integration Theory and argue that systematicity is not only a property of language-like representations, but also of spatially organized visual representations. Furthermore, I argue that if systematicity is taken to be a criterion to distinguish between conceptual and non-conceptual representations, then visual representations that display systematicity might count as an early type of conceptual representation. Showing these analogies between visual perception and cognition is an important step toward understanding the interface between the two systems. The ideas presented here might also set the stage for new empirical studies that directly compare binding (and other relational operations) in visual perception and higher cognition. PMID:22144974
Brief Report: Autism-Like Traits Are Associated with Enhanced Ability to Disembed Visual Forms
ERIC Educational Resources Information Center
Sabatino DiCriscio, Antoinette; Troiani, Vanessa
2017-01-01
Atypical visual perceptual skills are thought to underlie unusual visual attention in autism spectrum disorders. We assessed whether individual differences in visual processing skills scaled with quantitative traits associated with the broader autism phenotype (BAP). Visual perception was assessed using the Figure-ground subtest of the Test of…
Cognitive Processes that Underlie Mathematical Precociousness in Young Children
ERIC Educational Resources Information Center
Swanson, H. Lee
2006-01-01
The working memory (WM) processes that underlie young children's (ages 6-8 years) mathematical precociousness were examined. A battery of tests that assessed components of WM (phonological loop, visual-spatial sketchpad, and central executive), naming speed, random generation, and fluency was administered to mathematically precocious and…
Enumeration of small collections violates Weber's law.
Choo, H; Franconeri, S L
2014-02-01
In a phenomenon called subitizing, we can immediately generate exact counts of small collections (one to three objects), in contrast to larger collections, for which we must either create rough estimates or serially count. A parsimonious explanation for this advantage for small collections is that noisy representations of small collections are more tolerable, due to the larger relative differences between consecutive numbers (e.g., 2 vs. 3 is a 50 % increase, but 10 vs. 11 is only a 10 % increase). In contrast, the advantage could stem from the fact that small-collection enumeration is more precise, relying on a unique mechanism. Here, we present two experiments that conclusively showed that the enumeration of small collections is indeed "superprecise." Participants compared numerosity within either small or large visual collections in conditions in which the relative differences were controlled (e.g., performance for 2 vs. 3 was compared with performance for 20 vs. 30). Small-number comparison was still faster and more accurate, across both "more-fewer" judgments (Exp. 1), and "same-different" judgments (Exp. 2). We then reviewed the remaining potential mechanisms that might underlie this superprecision for small collections, including the greater diagnostic value of visual features that correlate with number and a limited capacity for visually individuating objects.
Effects of Computer-Based Visual Representation on Mathematics Learning and Cognitive Load
ERIC Educational Resources Information Center
Yung, Hsin I.; Paas, Fred
2015-01-01
Visual representation has been recognized as a powerful learning tool in many learning domains. Based on the assumption that visual representations can support deeper understanding, we examined the effects of visual representations on learning performance and cognitive load in the domain of mathematics. An experimental condition with visual…
Object representations in visual memory: evidence from visual illusions.
Ben-Shalom, Asaf; Ganel, Tzvi
2012-07-26
Human visual memory is considered to contain different levels of object representations. Representations in visual working memory (VWM) are thought to contain relatively elaborated information about object structure. Conversely, representations in iconic memory are thought to be more perceptual in nature. In four experiments, we tested the effects of two different categories of visual illusions on representations in VWM and in iconic memory. Unlike VWM that was affected by both types of illusions, iconic memory was immune to the effects of within-object contextual illusions and was affected only by illusions driven by between-objects contextual properties. These results show that iconic and visual working memory contain dissociable representations of object shape. These findings suggest that the global properties of the visual scene are processed prior to the processing of specific elements.
[Visual representation of natural scenes in flicker changes].
Nakashima, Ryoichi; Yokosawa, Kazuhiko
2010-08-01
Coherence theory in scene perception (Rensink, 2002) assumes the retention of volatile object representations on which attention is not focused. On the other hand, visual memory theory in scene perception (Hollingworth & Henderson, 2002) assumes that robust object representations are retained. In this study, we hypothesized that the difference between these two theories is derived from the difference of the experimental tasks that they are based on. In order to verify this hypothesis, we examined the properties of visual representation by using a change detection and memory task in a flicker paradigm. We measured the representations when participants were instructed to search for a change in a scene, and compared them with the intentional memory representations. The visual representations were retained in visual long-term memory even in the flicker paradigm, and were as robust as the intentional memory representations. However, the results indicate that the representations are unavailable for explicitly localizing a scene change, but are available for answering the recognition test. This suggests that coherence theory and visual memory theory are compatible.
Wen, Haiguang; Shi, Junxing; Chen, Wei; Liu, Zhongming
2018-02-28
The brain represents visual objects with topographic cortical patterns. To address how distributed visual representations enable object categorization, we established predictive encoding models based on a deep residual network, and trained them to predict cortical responses to natural movies. Using this predictive model, we mapped human cortical representations to 64,000 visual objects from 80 categories with high throughput and accuracy. Such representations covered both the ventral and dorsal pathways, reflected multiple levels of object features, and preserved semantic relationships between categories. In the entire visual cortex, object representations were organized into three clusters of categories: biological objects, non-biological objects, and background scenes. In a finer scale specific to each cluster, object representations revealed sub-clusters for further categorization. Such hierarchical clustering of category representations was mostly contributed by cortical representations of object features from middle to high levels. In summary, this study demonstrates a useful computational strategy to characterize the cortical organization and representations of visual features for rapid categorization.
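The hierarchical clustering of category representations described in this abstract can be illustrated with a generic sketch. The data below are hypothetical (three synthetic cluster centers standing in for biological objects, non-biological objects, and background scenes), and this is not the authors' analysis pipeline:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)

# Hypothetical cortical response patterns: 12 object categories x 50 voxels,
# generated around three distinct cluster centers.
centers = rng.normal(size=(3, 50))
patterns = np.vstack([c + 0.1 * rng.normal(size=(4, 50)) for c in centers])

# Agglomerative (Ward) clustering over the category response patterns,
# then cut the dendrogram into three clusters.
Z = linkage(patterns, method="ward")
labels = fcluster(Z, t=3, criterion="maxclust")
print(labels)  # categories generated from the same center share a label
```

Cutting the same dendrogram at a finer level would expose the sub-clusters within each top-level cluster, mirroring the two-scale organization the abstract reports.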
Activity in early visual areas predicts interindividual differences in binocular rivalry dynamics
Yamashiro, Hiroyuki; Mano, Hiroaki; Umeda, Masahiro; Higuchi, Toshihiro; Saiki, Jun
2013-01-01
When dissimilar images are presented to the two eyes, binocular rivalry (BR) occurs, and perception alternates spontaneously between the images. Although neural correlates of the oscillating perception during BR have been found in multiple sites along the visual pathway, the source of BR dynamics is unclear. Psychophysical and modeling studies suggest that both low- and high-level cortical processes underlie BR dynamics. Previous neuroimaging studies have demonstrated the involvement of high-level regions by showing that frontal and parietal cortices responded time locked to spontaneous perceptual alternation in BR. However, a potential contribution of early visual areas to BR dynamics has been overlooked, because these areas also responded to the physical stimulus alternation mimicking BR. In the present study, instead of focusing on activity during perceptual switches, we highlighted brain activity during suppression periods to investigate a potential link between activity in human early visual areas and BR dynamics. We used a strong interocular suppression paradigm called continuous flash suppression to suppress and fluctuate the visibility of a probe stimulus and measured retinotopic responses to the onset of the invisible probe using functional MRI. There were ∼130-fold differences in the median suppression durations across 12 subjects. The individual differences in suppression durations could be predicted by the amplitudes of the retinotopic activity in extrastriate visual areas (V3 and V4v) evoked by the invisible probe. Weaker responses were associated with longer suppression durations. These results demonstrate that retinotopic representations in early visual areas play a role in the dynamics of perceptual alternations during BR. PMID:24353304
Isolating Visual and Proprioceptive Components of Motor Sequence Learning in ASD.
Sharer, Elizabeth A; Mostofsky, Stewart H; Pascual-Leone, Alvaro; Oberman, Lindsay M
2016-05-01
In addition to defining impairments in social communication skills, individuals with autism spectrum disorder (ASD) also show impairments in more basic sensory and motor skills. Development of new skills involves integrating information from multiple sensory modalities. This input is then used to form internal models of action that can be accessed both when performing skilled movements and when understanding those actions performed by others. Learning skilled gestures is particularly reliant on integration of visual and proprioceptive input. We used a modified serial reaction time task (SRTT) to decompose proprioceptive and visual components and examine whether patterns of implicit motor skill learning differ in ASD participants as compared with healthy controls. While both groups learned the implicit motor sequence during training, healthy controls showed robust generalization whereas ASD participants demonstrated little generalization when visual input was constant. In contrast, no group differences in generalization were observed when proprioceptive input was constant, with both groups showing limited degrees of generalization. The findings suggest, when learning a motor sequence, individuals with ASD tend to rely less on visual feedback than do healthy controls. Visuomotor representations are considered to underlie imitative learning and action understanding and are thereby crucial to social skill and cognitive development. Thus, anomalous patterns of implicit motor learning, with a tendency to discount visual feedback, may be an important contributor to core social communication deficits that characterize ASD. Autism Res 2016, 9: 563-569. © 2015 International Society for Autism Research, Wiley Periodicals, Inc.
Can responses to basic non-numerical visual features explain neural numerosity responses?
Harvey, Ben M; Dumoulin, Serge O
2017-04-01
Humans and many animals can distinguish between stimuli that differ in numerosity, the number of objects in a set. Human and macaque parietal lobes contain neurons that respond to changes in stimulus numerosity. However, basic non-numerical visual features can affect neural responses to and perception of numerosity, and visual features often co-vary with numerosity. Therefore, it is debated whether numerosity or co-varying low-level visual features underlie neural and behavioral responses to numerosity. To test the hypothesis that non-numerical visual features underlie neural numerosity responses in a human parietal numerosity map, we analyze responses to a group of numerosity stimulus configurations that have the same numerosity progression but vary considerably in their non-numerical visual features. Using ultra-high-field (7T) fMRI, we measure responses to these stimulus configurations in an area of posterior parietal cortex whose responses are believed to reflect numerosity-selective activity. We describe an fMRI analysis method to distinguish between alternative models of neural response functions, following a population receptive field (pRF) modeling approach. For each stimulus configuration, we first quantify the relationships between numerosity and several non-numerical visual features that have been proposed to underlie performance in numerosity discrimination tasks. We then determine how well responses to these non-numerical visual features predict the observed fMRI responses, and compare this to the predictions of responses to numerosity. We demonstrate that a numerosity response model predicts observed responses more accurately than models of responses to simple non-numerical visual features. As such, neural responses in cognitive processing need not reflect simpler properties of early sensory inputs. Copyright © 2017 Elsevier Inc. All rights reserved.
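The model-comparison logic in this abstract (asking which candidate response model better predicts observed responses) can be sketched as a simple variance-explained comparison. The data, the logarithmic response shape, and the flat low-level stand-in below are illustrative assumptions, not the authors' pRF implementation:

```python
import numpy as np

def variance_explained(observed, predicted):
    """R^2: fraction of response variance captured by a model's prediction."""
    ss_res = np.sum((observed - predicted) ** 2)
    ss_tot = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

rng = np.random.default_rng(1)
numerosity = np.arange(1, 8, dtype=float)                   # stimulus numerosities
response = np.log(numerosity) + 0.05 * rng.normal(size=7)   # hypothetical fMRI response

pred_numerosity = np.log(numerosity)        # numerosity-tuned response model
pred_lowlevel = np.full(7, response.mean())  # featureless stand-in for a low-level model

print(variance_explained(response, pred_numerosity) >
      variance_explained(response, pred_lowlevel))  # True
```

In the paper's framework the competing predictions would come from fitted pRF models of numerosity versus non-numerical features; the comparison step itself reduces to which prediction leaves less residual variance.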
Brewer, Alyssa A.; Barton, Brian
2012-01-01
Although several studies have suggested that cortical alterations underlie such age-related visual deficits as decreased acuity, little is known about what changes actually occur in visual cortex during healthy aging. Two recent studies showed changes in primary visual cortex (V1) during normal aging; however, no studies have characterized the effects of aging on visual cortex beyond V1, important measurements both for understanding the aging process and for comparison to changes in age-related diseases. Similarly, there is almost no information about changes in visual cortex in Alzheimer's disease (AD), the most common form of dementia. Because visual deficits are often reported as one of the first symptoms of AD, measurements of such changes in the visual cortex of AD patients might improve our understanding of how the visual system is affected by neurodegeneration as well as aid early detection, accurate diagnosis and timely treatment of AD. Here we use fMRI to first compare the visual field map (VFM) organization and population receptive fields (pRFs) between young adults and healthy aging subjects for occipital VFMs V1, V2, V3, and hV4. Healthy aging subjects do not show major VFM organizational deficits, but do have reduced surface area and increased pRF sizes in the foveal representations of V1, V2, and hV4 relative to healthy young control subjects. These measurements are consistent with behavioral deficits seen in healthy aging. We then demonstrate the feasibility and first characterization of these measurements in two patients with mild AD, which reveal potential changes in visual cortex as part of the pathophysiology of AD. Our data aid in our understanding of the changes in the visual processing pathways in normal aging and provide the foundation for future research into earlier and more definitive detection of AD. PMID:24570669
Beyond sensory images: Object-based representation in the human ventral pathway
Pietrini, Pietro; Furey, Maura L.; Ricciardi, Emiliano; Gobbini, M. Ida; Wu, W.-H. Carolyn; Cohen, Leonardo; Guazzelli, Mario; Haxby, James V.
2004-01-01
We investigated whether the topographically organized, category-related patterns of neural response in the ventral visual pathway are a representation of sensory images or a more abstract representation of object form that is not dependent on sensory modality. We used functional MRI to measure patterns of response evoked during visual and tactile recognition of faces and manmade objects in sighted subjects and during tactile recognition in blind subjects. Results showed that visual and tactile recognition evoked category-related patterns of response in a ventral extrastriate visual area in the inferior temporal gyrus that were correlated across modality for manmade objects. Blind subjects also demonstrated category-related patterns of response in this “visual” area, and in more ventral cortical regions in the fusiform gyrus, indicating that these patterns are not due to visual imagery and, furthermore, that visual experience is not necessary for category-related representations to develop in these cortices. These results demonstrate that the representation of objects in the ventral visual pathway is not simply a representation of visual images but, rather, is a representation of more abstract features of object form. PMID:15064396
Visual Representations of DNA Replication: Middle Grades Students' Perceptions and Interpretations
ERIC Educational Resources Information Center
Patrick, Michelle D.; Carter, Glenda; Wiebe, Eric N.
2005-01-01
Visual representations play a critical role in the communication of science concepts for scientists and students alike. However, recent research suggests that novice students experience difficulty extracting relevant information from representations. This study examined students' interpretations of visual representations of DNA replication. Each…
Cecere, Roberto; Gross, Joachim; Willis, Ashleigh; Thut, Gregor
2017-05-24
In multisensory integration, processing in one sensory modality is enhanced by complementary information from other modalities. Intersensory timing is crucial in this process because only inputs reaching the brain within a restricted temporal window are perceptually bound. Previous research in the audiovisual field has investigated various features of the temporal binding window, revealing asymmetries in its size and plasticity depending on the leading input: auditory-visual (AV) or visual-auditory (VA). Here, we tested whether separate neuronal mechanisms underlie this AV-VA dichotomy in humans. We recorded high-density EEG while participants performed an audiovisual simultaneity judgment task including various AV-VA asynchronies and unisensory control conditions (visual-only, auditory-only) and tested whether AV and VA processing generate different patterns of brain activity. After isolating the multisensory components of AV-VA event-related potentials (ERPs) from the sum of their unisensory constituents, we ran a time-resolved topographical representational similarity analysis (tRSA) comparing the AV and VA ERP maps. Spatial cross-correlation matrices were built from real data to index the similarity between the AV and VA maps at each time point (500 ms window after stimulus) and then correlated with two alternative similarity model matrices: AV maps = VA maps versus AV maps ≠ VA maps. The tRSA results favored the AV maps ≠ VA maps model across all time points, suggesting that audiovisual temporal binding (indexed by synchrony perception) engages different neural pathways depending on the leading sense. The existence of such a dual route supports recent theoretical accounts proposing that multiple binding mechanisms are implemented in the brain to accommodate different information parsing strategies in auditory and visual sensory systems.
SIGNIFICANCE STATEMENT Intersensory timing is a crucial aspect of multisensory integration, determining whether and how inputs in one modality enhance stimulus processing in another modality. Our research demonstrates that evaluating synchrony of auditory-leading (AV) versus visual-leading (VA) audiovisual stimulus pairs is characterized by two distinct patterns of brain activity. This suggests that audiovisual integration is not a unitary process and that different binding mechanisms are recruited in the brain based on the leading sense. These mechanisms may be relevant for supporting different classes of multisensory operations, for example, auditory enhancement of visual attention (AV) and visual enhancement of auditory speech (VA). Copyright © 2017 Cecere et al.
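The tRSA procedure the abstract describes — spatial cross-correlation between AV and VA ERP topographies at each pair of time points, with the resulting similarity matrix compared against two model matrices — can be sketched as follows. This is an illustrative reconstruction, not the authors' pipeline: the array shapes, function names, and the use of simple Pearson correlation and identity-based model matrices are all assumptions.

```python
import numpy as np

def trsa(av, va):
    """Time-resolved topographical RSA sketch.
    av, va: (n_channels, n_times) ERP arrays for the two conditions.
    Returns an (n_times, n_times) spatial cross-correlation matrix:
    entry (i, j) is the Pearson r between the AV topography at time i
    and the VA topography at time j."""
    n_times = av.shape[1]
    sim = np.empty((n_times, n_times))
    for i in range(n_times):
        for j in range(n_times):
            sim[i, j] = np.corrcoef(av[:, i], va[:, j])[0, 1]
    return sim

def compare_models(sim):
    """Correlate the similarity matrix with two model matrices:
    'same' (AV maps = VA maps: high similarity at matching latencies)
    versus 'different' (AV maps != VA maps: no diagonal structure)."""
    n = sim.shape[0]
    same = np.eye(n)
    diff = 1.0 - np.eye(n)
    r_same = np.corrcoef(sim.ravel(), same.ravel())[0, 1]
    r_diff = np.corrcoef(sim.ravel(), diff.ravel())[0, 1]
    return r_same, r_diff
```

In this toy form, identical AV and VA data yield a unit diagonal and favor the "same" model, while topographies that never match favor the "different" model — the pattern the study reports for audiovisual synchrony judgments.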
Tcheang, Lili; Bülthoff, Heinrich H.; Burgess, Neil
2011-01-01
Our ability to return to the start of a route recently performed in darkness is thought to reflect path integration of motion-related information. Here we provide evidence that motion-related interoceptive representations (proprioceptive, vestibular, and motor efference copy) combine with visual representations to form a single multimodal representation guiding navigation. We used immersive virtual reality to decouple visual input from motion-related interoception by manipulating the rotation or translation gain of the visual projection. First, participants walked an outbound path with both visual and interoceptive input, and returned to the start in darkness, demonstrating the influences of both visual and interoceptive information in a virtual reality environment. Next, participants adapted to visual rotation gains in the virtual environment, and then performed the path integration task entirely in darkness. Our findings were accurately predicted by a quantitative model in which visual and interoceptive inputs combine into a single multimodal representation guiding navigation, and are incompatible with a model of separate visual and interoceptive influences on action (in which path integration in darkness must rely solely on interoceptive representations). Overall, our findings suggest that a combined multimodal representation guides large-scale navigation, consistent with a role for visual imagery or a cognitive map. PMID:21199934
Keysers, C; Xiao, D-K; Foldiak, P; Perrett, D I
2005-05-01
Iconic memory, the short-lasting visual memory of a briefly flashed stimulus, is an important component of most models of visual perception. Here we investigate what physiological mechanisms underlie this capacity by showing rapid serial visual presentation (RSVP) sequences with and without interstimulus gaps to human observers and macaque monkeys. For gaps of up to 93 ms between consecutive images, human observers and neurones in the temporal cortex of macaque monkeys were found to continue processing a stimulus as if it was still present on the screen. The continued firing of neurones in temporal cortex may therefore underlie iconic memory. Based on these findings, a neurophysiological vision of iconic memory is presented.
Reorganization of Motor Cortex by Vagus Nerve Stimulation Requires Cholinergic Innervation.
Hulsey, Daniel R; Hays, Seth A; Khodaparast, Navid; Ruiz, Andrea; Das, Priyanka; Rennaker, Robert L; Kilgard, Michael P
2016-01-01
Vagus nerve stimulation (VNS) paired with forelimb training drives robust, specific reorganization of movement representations in the motor cortex. The mechanisms that underlie VNS-dependent enhancement of map plasticity are largely unknown. The cholinergic nucleus basalis (NB) is a critical substrate in cortical plasticity, and several studies suggest that VNS activates cholinergic circuitry. We examined whether the NB is required for VNS-dependent enhancement of map plasticity in the motor cortex. Rats were trained to perform a lever pressing task and then received injections of the immunotoxin 192-IgG-saporin to selectively lesion cholinergic neurons of the NB. After lesion, rats underwent five days of motor training during which VNS was paired with successful trials. At the conclusion of behavioral training, intracortical microstimulation was used to document movement representations in motor cortex. VNS paired with forelimb training resulted in a substantial increase in the representation of proximal forelimb in rats with an intact NB compared to untrained controls. NB lesions prevent this VNS-dependent increase in proximal forelimb area and result in representations similar to untrained controls. Motor performance was similar between groups, suggesting that differences in forelimb function cannot account for the difference in proximal forelimb representation. Together, these findings indicate that the NB is required for VNS-dependent enhancement of plasticity in the motor cortex and may provide insight into the mechanisms that underlie the benefits of VNS therapy. Copyright © 2016 Elsevier Inc. All rights reserved.
How the Human Brain Represents Perceived Dangerousness or “Predacity” of Animals
Sha, Long; Guntupalli, J. Swaroop; Oosterhof, Nikolaas; Halchenko, Yaroslav O.; Nastase, Samuel A.; di Oleggio Castello, Matteo Visconti; Abdi, Hervé; Jobst, Barbara C.; Gobbini, M. Ida; Haxby, James V.
2016-01-01
Common or folk knowledge about animals is dominated by three dimensions: (1) level of cognitive complexity or “animacy;” (2) dangerousness or “predacity;” and (3) size. We investigated the neural basis of the perceived dangerousness or aggressiveness of animals, which we refer to more generally as “perception of threat.” Using functional magnetic resonance imaging (fMRI), we analyzed neural activity evoked by viewing images of animal categories that spanned the dissociable semantic dimensions of threat and taxonomic class. The results reveal a distributed network for perception of threat extending along the right superior temporal sulcus. We compared neural representational spaces with target representational spaces based on behavioral judgments and a computational model of early vision and found a processing pathway in which perceived threat emerges as a dominant dimension: whereas visual features predominate in early visual cortex and taxonomy in lateral occipital and ventral temporal cortices, these dimensions fall away progressively from posterior to anterior temporal cortices, leaving threat as the dominant explanatory variable. Our results suggest that the perception of threat in the human brain is associated with neural structures that underlie perception and cognition of social actions and intentions, suggesting a broader role for these regions than has been thought previously, one that includes the perception of potential threat from agents independent of their biological class. SIGNIFICANCE STATEMENT For centuries, philosophers have wondered how the human mind organizes the world into meaningful categories and concepts. Today this question is at the core of cognitive science, but our focus has shifted to understanding how knowledge manifests in dynamic activity of neural systems in the human brain. 
This study advances the young field of empirical neuroepistemology by characterizing the neural systems engaged by an important dimension in our cognitive representation of the animal kingdom ontological subdomain: how the brain represents the perceived threat, dangerousness, or “predacity” of animals. Our findings reveal how activity for domain-specific knowledge of animals overlaps the social perception networks of the brain, suggesting domain-general mechanisms underlying the representation of conspecifics and other animals. PMID:27170133
Transformations in the Visual Representation of a Figural Pattern
ERIC Educational Resources Information Center
Montenegro, Paula; Costa, Cecília; Lopes, Bernardino
2018-01-01
Multiple representations of a given mathematical object/concept are one of the biggest difficulties encountered by students. The aim of this study is to investigate the impact of the use of visual representations in teaching and learning algebra. In this paper, we analyze the transformations from and to visual representations that were performed…
Low-level information and high-level perception: the case of speech in noise.
Nahum, Mor; Nelken, Israel; Ahissar, Merav
2008-05-20
Auditory information is processed in a fine-to-crude hierarchical scheme, from low-level acoustic information to high-level abstract representations, such as phonological labels. We now ask whether fine acoustic information, which is not retained at high levels, can still be used to extract speech from noise. Previous theories suggested either full availability of low-level information or availability that is limited by task difficulty. We propose a third alternative, based on the Reverse Hierarchy Theory (RHT), originally derived to describe the relations between the processing hierarchy and visual perception. RHT asserts that only the higher levels of the hierarchy are immediately available for perception. Direct access to low-level information requires specific conditions, and can be achieved only at the cost of concurrent comprehension. We tested the predictions of these three views in a series of experiments in which we measured the benefits from utilizing low-level binaural information for speech perception, and compared it to that predicted from a model of the early auditory system. Only auditory RHT could account for the full pattern of the results, suggesting that similar defaults and tradeoffs underlie the relations between hierarchical processing and perception in the visual and auditory modalities.
Implicit knowledge of visual uncertainty guides decisions with asymmetric outcomes.
Whiteley, Louise; Sahani, Maneesh
2008-03-06
Perception is an "inverse problem," in which the state of the world must be inferred from the sensory neural activity that results. However, this inference is both ill-posed (Helmholtz, 1856; Marr, 1982) and corrupted by noise (Green & Swets, 1989), requiring the brain to compute perceptual beliefs under conditions of uncertainty. Here we show that human observers performing a simple visual choice task under an externally imposed loss function approach the optimal strategy, as defined by Bayesian probability and decision theory (Berger, 1985; Cox, 1961). In concert with earlier work, this suggests that observers possess a model of their internal uncertainty and can utilize this model in the neural computations that underlie their behavior (Knill & Pouget, 2004). In our experiment, optimal behavior requires that observers integrate the loss function with an estimate of their internal uncertainty rather than simply requiring that they use a modal estimate of the uncertain stimulus. Crucially, they approach optimal behavior even when denied the opportunity to learn adaptive decision strategies based on immediate feedback. Our data thus support the idea that flexible representations of uncertainty are pre-existing, widespread, and can be propagated to decision-making areas of the brain.
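The decision rule tested above — integrating an externally imposed asymmetric loss function with the observer's internal uncertainty, rather than acting on the modal estimate of the stimulus alone — can be illustrated with a small sketch. The loss values, function names, and binary action set here are hypothetical; the study's actual task and loss function differ.

```python
import numpy as np

def expected_loss_decision(samples, loss_fn, actions=(0, 1)):
    """Choose the action minimizing expected loss under posterior
    samples of the uncertain stimulus (Bayesian decision rule sketch)."""
    costs = [np.mean([loss_fn(a, s) for s in samples]) for a in actions]
    return actions[int(np.argmin(costs))]

def asymmetric_loss(action, s):
    """Illustrative 3:1 asymmetric loss: missing a positive stimulus
    costs three times as much as a false alarm."""
    truth = int(s > 0)
    if action == truth:
        return 0.0
    return 3.0 if truth == 1 else 1.0
```

With only 40% of posterior samples above zero, the modal estimate says "respond 0," yet the 3:1 miss penalty makes responding 1 the loss-minimizing choice — the signature of integrating uncertainty with the loss function rather than thresholding a point estimate.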
Neural activity in cortical area V4 underlies fine disparity discrimination.
Shiozaki, Hiroshi M; Tanabe, Seiji; Doi, Takahiro; Fujita, Ichiro
2012-03-14
Primates are capable of discriminating depth with remarkable precision using binocular disparity. Neurons in area V4 are selective for relative disparity, which is the crucial visual cue for discrimination of fine disparity. Here, we investigated the contribution of V4 neurons to fine disparity discrimination. Monkeys discriminated whether the center disk of a dynamic random-dot stereogram was in front of or behind its surrounding annulus. We first behaviorally tested the reference frame of the disparity representation used for performing this task. After learning the task with a set of surround disparities, the monkey generalized its responses to untrained surround disparities, indicating that the perceptual decisions were generated from a disparity representation in a relative frame of reference. We then recorded single-unit responses from V4 while the monkeys performed the task. On average, neuronal thresholds were higher than the behavioral thresholds. The most sensitive neurons reached thresholds as low as the psychophysical thresholds. For subthreshold disparities, the monkeys made frequent errors. The variable decisions were predictable from the fluctuation in the neuronal responses. The predictions were based on a decision model in which each V4 neuron transmits the evidence for the disparity it prefers. We finally altered the disparity representation artificially by means of microstimulation to V4. The decisions were systematically biased when microstimulation boosted the V4 responses. The bias was toward the direction predicted from the decision model. We suggest that disparity signals carried by V4 neurons underlie precise discrimination of fine stereoscopic depth.
NASA Astrophysics Data System (ADS)
Cook, Michelle Patrick
2006-11-01
Visual representations are essential for communicating ideas in the science classroom; however, the design of such representations is not always beneficial for learners. This paper presents instructional design considerations providing empirical evidence and integrating theoretical concepts related to cognitive load. Learners have a limited working memory, and instructional representations should be designed with the goal of reducing unnecessary cognitive load. However, cognitive architecture alone is not the only factor to be considered; individual differences, especially prior knowledge, are critical in determining what impact a visual representation will have on learners' cognitive structures and processes. Prior knowledge can determine the ease with which learners can perceive and interpret visual representations in working memory. Although a long tradition of research has compared experts and novices, more research is necessary to fully explore the expert-novice continuum and maximize the potential of visual representations.
Texture-Based Correspondence Display
NASA Technical Reports Server (NTRS)
Gerald-Yamasaki, Michael
2004-01-01
Texture-based correspondence display is a methodology to display corresponding data elements in visual representations of complex multidimensional, multivariate data. Texture is utilized as a persistent medium to contain a visual representation model and as a means to create multiple renditions of data where color is used to identify correspondence. Corresponding data elements are displayed over a variety of visual metaphors in a normal rendering process without adding extraneous linking metadata creation and maintenance. The effectiveness of visual representation for understanding data is extended to the expression of the visual representation model in texture.
A unified data representation theory for network visualization, ordering and coarse-graining
Kovács, István A.; Mizsei, Réka; Csermely, Péter
2015-01-01
Representation of large data sets has become a key question of many scientific disciplines in the last decade. Several approaches for network visualization, data ordering and coarse-graining have accomplished this goal. However, there was no underlying theoretical framework linking these problems. Here we show an elegant, information-theoretic data representation approach as a unified solution of network visualization, data ordering and coarse-graining. The optimal representation is the hardest to distinguish from the original data matrix, as measured by the relative entropy. The representation of network nodes as probability distributions provides an efficient visualization method and, in one dimension, an ordering of network nodes and edges. Coarse-grained representations of the input network enable both efficient data compression and hierarchical visualization to achieve high quality representations of larger data sets. Our unified data representation theory will help the analysis of extensive data sets, by revealing the large-scale structure of complex networks in a comprehensible form. PMID:26348923
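The objective described above — a representation is optimal when it is hardest to distinguish from the original data matrix under relative entropy — can be sketched generically. This is not the paper's exact formulation: the normalization of both matrices to probability distributions and the epsilon smoothing are assumptions for illustration.

```python
import numpy as np

def relative_entropy(data, representation, eps=1e-12):
    """KL divergence D(P||Q) between a (nonnegative) data matrix and a
    candidate representation, each normalized to sum to 1. A candidate
    representation is better the smaller this value is, i.e. the harder
    it is to distinguish from the original data."""
    p = data / data.sum()
    q = representation / representation.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))
```

By Gibbs' inequality the divergence is zero only when the representation reproduces the data distribution exactly, so minimizing it over a constrained family of representations (orderings, coarse-grainings) selects the least-distinguishable member of that family.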
Shared liking and association valence for representational art but not abstract art.
Schepman, Astrid; Rodway, Paul; Pullen, Sarah J; Kirkham, Julie
2015-01-01
We examined the finding that aesthetic evaluations are more similar across observers for representational images than for abstract images. It has been proposed that a difference in convergence of observers' tastes is due to differing levels of shared semantic associations (Vessel & Rubin, 2010). In Experiment 1, student participants rated 20 representational and 20 abstract artworks. We found that their judgments were more similar for representational than abstract artworks. In Experiment 2, we replicated this finding, and also found that valence ratings given to associations and meanings provided in response to the artworks converged more across observers for representational than for abstract art. Our empirical work provides insight into processes that may underlie the observation that taste for representational art is shared across individual observers, while taste for abstract art is more idiosyncratic.
Sensory Load Incurs Conceptual Processing Costs
ERIC Educational Resources Information Center
Vermeulen, Nicolas; Corneille, Olivier; Niedenthal, Paula M.
2008-01-01
Theories of grounded cognition propose that modal simulations underlie cognitive representation of concepts [Barsalou, L. W. (1999). "Perceptual symbol systems." "Behavioral and Brain Sciences, 22"(4), 577-660; Barsalou, L. W. (2008). "Grounded cognition." "Annual Review of Psychology, 59", 617-645]. Based…
Reading Visual Representations
ERIC Educational Resources Information Center
Rubenstein, Rheta N.; Thompson, Denisse R.
2012-01-01
Mathematics is rich in visual representations. Such visual representations are the means by which mathematical patterns "are recorded and analyzed." With respect to "vocabulary" and "symbols," numerous educators have focused on issues inherent in the language of mathematics that influence students' success with mathematics communication.…
NASA Astrophysics Data System (ADS)
Chen, Zhongzhou; Gladding, Gary
2014-06-01
Visual representations play a critical role in teaching physics. However, since we do not have a satisfactory understanding of how visual perception impacts the construction of abstract knowledge, most visual representations used in instruction are either created based on existing conventions or designed according to the instructor's intuition, which leads to significant variance in their effectiveness. In this paper we propose a cognitive mechanism based on grounded cognition, suggesting that visual perception affects understanding by activating "perceptual symbols": the basic cognitive unit used by the brain to construct a concept. A good visual representation activates perceptual symbols that are essential for the construction of the represented concept, whereas a bad representation does the opposite. As a proof of concept, we conducted a clinical experiment in which participants received three different versions of a multimedia tutorial teaching the integral expression of electric potential. The three versions differed only in the details of the visual representation design; only one contained perceptual features that activate perceptual symbols essential for constructing the idea of "accumulation." On a subsequent post-test, participants receiving this version of the tutorial significantly outperformed those who received the other two versions, which were designed to mimic conventional visual representations used in classrooms.
Characterizing Interaction with Visual Mathematical Representations
ERIC Educational Resources Information Center
Sedig, Kamran; Sumner, Mark
2006-01-01
This paper presents a characterization of computer-based interactions by which learners can explore and investigate visual mathematical representations (VMRs). VMRs (e.g., geometric structures, graphs, and diagrams) refer to graphical representations that visually encode properties and relationships of mathematical structures and concepts.…
Anticipation in Real-world Scenes: The Role of Visual Context and Visual Memory
ERIC Educational Resources Information Center
Coco, Moreno I.; Keller, Frank; Malcolm, George L.
2016-01-01
The human sentence processor is able to make rapid predictions about upcoming linguistic input. For example, upon hearing the verb eat, anticipatory eye-movements are launched toward edible objects in a visual scene (Altmann & Kamide, 1999). However, the cognitive mechanisms that underlie anticipation remain to be elucidated in ecologically…
View Combination: A Generalization Mechanism for Visual Recognition
ERIC Educational Resources Information Center
Friedman, Alinda; Waller, David; Thrash, Tyler; Greenauer, Nathan; Hodgson, Eric
2011-01-01
We examined whether view combination mechanisms shown to underlie object and scene recognition can integrate visual information across views that have little or no three-dimensional information at either the object or scene level. In three experiments, people learned four "views" of a two dimensional visual array derived from a three-dimensional…
Perceptual Processing Affects Conceptual Processing
ERIC Educational Resources Information Center
van Dantzig, Saskia; Pecher, Diane; Zeelenberg, Rene; Barsalou, Lawrence W.
2008-01-01
According to the Perceptual Symbols Theory of cognition (Barsalou, 1999), modality-specific simulations underlie the representation of concepts. A strong prediction of this view is that perceptual processing affects conceptual processing. In this study, participants performed a perceptual detection task and a conceptual property-verification task…
Comparing Information Access Approaches.
ERIC Educational Resources Information Center
Chalmers, Matthew
1999-01-01
Presents a broad view of information access, drawing from philosophy and semiology in constructing a framework for comparative discussion that is used to examine the information representations that underlie four approaches to information access--information retrieval, workflow, collaborative filtering, and the path model. Contains 32 references.…
Dima, Diana C; Perry, Gavin; Singh, Krish D
2018-06-11
In navigating our environment, we rapidly process and extract meaning from visual cues. However, the relationship between visual features and categorical representations in natural scene perception is still not well understood. Here, we used natural scene stimuli from different categories and filtered at different spatial frequencies to address this question in a passive viewing paradigm. Using representational similarity analysis (RSA) and cross-decoding of magnetoencephalography (MEG) data, we show that categorical representations emerge in human visual cortex at ∼180 ms and are linked to spatial frequency processing. Furthermore, dorsal and ventral stream areas reveal temporally and spatially overlapping representations of low- and high-level layer activations extracted from a feedforward neural network. Our results suggest that neural patterns from extrastriate visual cortex switch from low-level to categorical representations within 200 ms, highlighting the rapid cascade of processing stages essential in human visual perception. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
Anderson, Andrew James; Bruni, Elia; Lopopolo, Alessandro; Poesio, Massimo; Baroni, Marco
2015-10-15
Embodiment theory predicts that mental imagery of object words recruits neural circuits involved in object perception. The degree of visual imagery present in routine thought and how it is encoded in the brain is largely unknown. We test whether fMRI activity patterns elicited by participants reading objects' names include embodied visual-object representations, and whether we can decode the representations using novel computational image-based semantic models. We first apply the image models in conjunction with text-based semantic models to test predictions of visual-specificity of semantic representations in different brain regions. Representational similarity analysis confirms that fMRI structure within ventral-temporal and lateral-occipital regions correlates most strongly with the image models and conversely text models correlate better with posterior-parietal/lateral-temporal/inferior-frontal regions. We use an unsupervised decoding algorithm that exploits commonalities in representational similarity structure found within both image model and brain data sets to classify embodied visual representations with high accuracy (8/10) and then extend it to exploit model combinations to robustly decode different brain regions in parallel. By capturing latent visual-semantic structure our models provide a route into analyzing neural representations derived from past perceptual experience rather than stimulus-driven brain activity. Our results also verify the benefit of combining multimodal data to model human-like semantic representations. Copyright © 2015 Elsevier Inc. All rights reserved.
Alvarez, George A.; Nakayama, Ken; Konkle, Talia
2016-01-01
Visual search is a ubiquitous visual behavior, and efficient search is essential for survival. Different cognitive models have explained the speed and accuracy of search based either on the dynamics of attention or on similarity of item representations. Here, we examined the extent to which performance on a visual search task can be predicted from the stable representational architecture of the visual system, independent of attentional dynamics. Participants performed a visual search task with 28 conditions reflecting different pairs of categories (e.g., searching for a face among cars, body among hammers, etc.). The time it took participants to find the target item varied as a function of category combination. In a separate group of participants, we measured the neural responses to these object categories when items were presented in isolation. Using representational similarity analysis, we then examined whether the similarity of neural responses across different subdivisions of the visual system had the requisite structure needed to predict visual search performance. Overall, we found strong brain/behavior correlations across most of the higher-level visual system, including both the ventral and dorsal pathways when considering both macroscale sectors as well as smaller mesoscale regions. These results suggest that visual search for real-world object categories is well predicted by the stable, task-independent architecture of the visual system. NEW & NOTEWORTHY Here, we ask which neural regions have neural response patterns that correlate with behavioral performance in a visual processing task. We found that the representational structure across all of high-level visual cortex has the requisite structure to predict behavior. Furthermore, when directly comparing different neural regions, we found that they all had highly similar category-level representational structures. 
These results point to a ubiquitous and uniform representational structure in high-level visual cortex underlying visual object processing. PMID:27832600
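The brain/behavior analysis described above — correlating pairwise neural dissimilarity of object categories with the time it takes to find one category among another — might look roughly like this. The input format, variable names, and use of Pearson correlation on 1 − r dissimilarities are assumptions; the study used 28 category pairs and its own RSA pipeline.

```python
import numpy as np
from itertools import combinations

def neural_rdm(patterns):
    """Condensed representational dissimilarity vector: 1 - Pearson r
    between the response patterns of each pair of categories.
    patterns: (n_categories, n_voxels) array."""
    idx = combinations(range(len(patterns)), 2)
    return np.array([1.0 - np.corrcoef(patterns[i], patterns[j])[0, 1]
                     for i, j in idx])

def brain_behavior_correlation(patterns, pairwise_search_times):
    """Correlate neural dissimilarity with behavioral search time for
    the same category pairs (hypothetical input format). A strong
    positive correlation would mean search is faster when target and
    distractor categories evoke more distinct neural responses."""
    return float(np.corrcoef(neural_rdm(patterns),
                             pairwise_search_times)[0, 1])
```

Running this per region of interest, as the study does across ventral and dorsal sectors, yields the region-by-region brain/behavior correlation profile.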
Statistically optimal perception and learning: from behavior to neural representations
Fiser, József; Berkes, Pietro; Orbán, Gergő; Lengyel, Máté
2010-01-01
Human perception has recently been characterized as statistical inference based on noisy and ambiguous sensory inputs. Moreover, suitable neural representations of uncertainty have been identified that could underlie such probabilistic computations. In this review, we argue that learning an internal model of the sensory environment is another key aspect of the same statistical inference procedure and thus perception and learning need to be treated jointly. We review evidence for statistically optimal learning in humans and animals, and reevaluate possible neural representations of uncertainty based on their potential to support statistically optimal learning. We propose that spontaneous activity can have a functional role in such representations leading to a new, sampling-based, framework of how the cortex represents information and uncertainty. PMID:20153683
V4 activity predicts the strength of visual short-term memory representations.
Sligte, Ilja G; Scholte, H Steven; Lamme, Victor A F
2009-06-10
Recent studies have shown the existence of a form of visual memory that lies intermediate between iconic memory and visual short-term memory (VSTM), in terms of both capacity (up to 15 items) and the duration of the memory trace (up to 4 s). Because new visual objects readily overwrite this intermediate visual store, we believe that it reflects a weak form of VSTM with high capacity that exists alongside a strong but capacity-limited form of VSTM. In the present study, we isolated brain activity related to weak and strong VSTM representations using functional magnetic resonance imaging. We found that activity in visual cortical area V4 predicted the strength of VSTM representations; activity was low when there was no VSTM, medium when there was a weak VSTM representation regardless of whether this weak representation was available for report or not, and high when there was a strong VSTM representation. Altogether, this study suggests that the high capacity yet weak VSTM store is represented in visual parts of the brain. Allegedly, only some of these VSTM traces are amplified by parietal and frontal regions and as a consequence reside in traditional or strong VSTM. The additional weak VSTM representations remain available for conscious access and report when attention is redirected to them yet are overwritten as soon as new visual stimuli hit the eyes.
Drawing Connections across Conceptually Related Visual Representations in Science
ERIC Educational Resources Information Center
Hansen, Janice
2013-01-01
This dissertation explored beliefs about learning from multiple related visual representations in science, and compared beliefs to learning outcomes. Three research questions were explored: 1) What beliefs do pre-service teachers, non-educators and children have about learning from visual representations? 2) What format of presenting those…
Conditions for the Effectiveness of Multiple Visual Representations in Enhancing STEM Learning
ERIC Educational Resources Information Center
Rau, Martina A.
2017-01-01
Visual representations play a critical role in enhancing science, technology, engineering, and mathematics (STEM) learning. Educational psychology research shows that adding visual representations to text can enhance students' learning of content knowledge, compared to text-only. But should students learn with a single type of visual…
Tactile mental body parts representation in obesity.
Scarpina, Federica; Castelnuovo, Gianluca; Molinari, Enrico
2014-12-30
Obese people's distortions in visually-based mental body-part representations have been reported in previous studies, but other sensory modalities have largely been neglected. In the present study, we investigated possible differences in tactilely-based body-part representation between an obese and a healthy-weight group; additionally, we explored the possible relationship between the tactilely-based and the visually-based body representation. Participants were asked to estimate the distance between two tactile stimuli that were simultaneously administered on the arm or on the abdomen, in the absence of visual input. The visually-based body-part representation was investigated by a visual imagery method in which subjects were instructed to compare the horizontal extension of pairs of body parts. According to the results, the obese participants overestimated the tactilely-perceived distances more than the healthy-weight group did when the arm, but not the abdomen, was stimulated. Moreover, they were less accurate than the healthy-weight group when estimating horizontal distances relative to their bodies, confirming an inappropriate visually-based mental body representation. Our results imply that body representation disturbance in obese people is not limited to the visual mental domain but extends to tactilely-perceived distances. The inaccuracy was not a generalized tendency but was body-part related. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Prosodic Phonological Representations Early in Visual Word Recognition
ERIC Educational Resources Information Center
Ashby, Jane; Martin, Andrea E.
2008-01-01
Two experiments examined the nature of the phonological representations used during visual word recognition. We tested whether a minimality constraint (R. Frost, 1998) limits the complexity of early representations to a simple string of phonemes. Alternatively, readers might activate elaborated representations that include prosodic syllable…
2017-01-01
Recent studies have challenged the ventral/“what” and dorsal/“where” two-visual-processing-pathway view by showing the existence of “what” and “where” information in both pathways. Is the two-pathway distinction still valid? Here, we examined how goal-directed visual information processing may differentially impact visual representations in these two pathways. Using fMRI and multivariate pattern analysis, in three experiments on human participants (57% females), by manipulating whether color or shape was task-relevant and how they were conjoined, we examined shape-based object category decoding in occipitotemporal and parietal regions. We found that object category representations in all the regions examined were influenced by whether or not object shape was task-relevant. This task effect, however, tended to decrease as task-relevant and irrelevant features were more integrated, reflecting the well-known object-based feature encoding. Interestingly, task relevance played a relatively minor role in driving the representational structures of early visual and ventral object regions. They were driven predominantly by variations in object shapes. In contrast, the effect of task was much greater in dorsal than ventral regions, with object category and task relevance both contributing significantly to the representational structures of the dorsal regions. These results showed that, whereas visual representations in the ventral pathway are more invariant and reflect “what an object is,” those in the dorsal pathway are more adaptive and reflect “what we do with it.” Thus, despite the existence of “what” and “where” information in both visual processing pathways, the two pathways may still differ fundamentally in their roles in visual information representation. SIGNIFICANCE STATEMENT Visual information is thought to be processed in two distinctive pathways: the ventral pathway that processes “what” an object is and the dorsal pathway that processes “where” it is located. 
This view has been challenged by recent studies revealing the existence of “what” and “where” information in both pathways. Here, we found that goal-directed visual information processing differentially modulates shape-based object category representations in the two pathways. Whereas ventral representations are more invariant to the demand of the task, reflecting what an object is, dorsal representations are more adaptive, reflecting what we do with the object. Thus, despite the existence of “what” and “where” information in both pathways, visual representations may still differ fundamentally in the two pathways. PMID:28821655
Visual Representations on High School Biology, Chemistry, Earth Science, and Physics Assessments
ERIC Educational Resources Information Center
LaDue, Nicole D.; Libarkin, Julie C.; Thomas, Stephen R.
2015-01-01
The pervasive use of visual representations in textbooks, curricula, and assessments underscores their importance in K-12 science education. For example, visual representations figure prominently in the recent publication of the Next Generation Science Standards (NGSS Lead States in Next generation science standards: for states, by states.…
ERIC Educational Resources Information Center
Rau, Martina A.
2017-01-01
STEM instruction often uses visual representations. To benefit from these, students need to understand how representations show domain-relevant concepts. Yet, this is difficult for students. Prior research shows that physical representations (objects that students manipulate by hand) and virtual representations (objects on a computer screen that…
Uncovering Camouflage: Amygdala Activation Predicts Long-Term Memory of Induced Perceptual Insight
Ludmer, Rachel; Dudai, Yadin; Rubin, Nava
2012-01-01
What brain mechanisms underlie learning of new knowledge from single events? We studied encoding in long-term memory of a unique type of one-shot experience, induced perceptual insight. While undergoing an fMRI brain scan, participants viewed degraded images of real-world pictures in which the underlying objects were hard to recognize (‘camouflage’), followed by brief exposures to the original images (‘solution’), which led to induced insight (“Aha!”). A week later, participants’ memory was tested; a solution image was classified as ‘remembered’ if detailed perceptual knowledge was elicited from the camouflage image alone. During encoding, subsequently remembered images showed higher activity in mid-level visual cortex and medial frontal cortex, but most pronouncedly in the amygdala, whose activity could be used to predict which solutions would remain in long-term memory. Our findings extend the known roles of the amygdala in memory to include promoting long-term memory for the sudden reorganization of internal representations. PMID:21382558
Technical note: The Linked Paleo Data framework - a common tongue for paleoclimatology
NASA Astrophysics Data System (ADS)
McKay, Nicholas P.; Emile-Geay, Julien
2016-04-01
Paleoclimatology is a highly collaborative scientific endeavor, increasingly reliant on online databases for data sharing. Yet there is currently no universal way to describe, store and share paleoclimate data: in other words, no standard. Data standards are often regarded by scientists as mere technicalities, though they underlie much scientific and technological innovation, as well as facilitating collaborations between research groups. In this article, we propose a preliminary data standard for paleoclimate data, general enough to accommodate all the archive and measurement types encountered in a large international collaboration (PAGES 2k). We also introduce a vehicle for such structured data (Linked Paleo Data, or LiPD), leveraging recent advances in knowledge representation (Linked Open Data). The LiPD framework enables quick querying and extraction, and we expect that it will facilitate the writing of open-source community codes to access, analyze, model and visualize paleoclimate observations. We welcome community feedback on this standard, and encourage paleoclimatologists to experiment with the format for their own purposes.
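As a toy illustration of the kind of structured, queryable record a framework like LiPD aims to enable, the sketch below builds a minimal LiPD-style dataset in Python. The field names (`dataSetName`, `paleoData`, `measurementTable`, and so on) and the values are simplified assumptions for demonstration, not the official LiPD specification.

```python
# Illustrative LiPD-style record; all field names and values here are
# hypothetical simplifications, not the official LiPD schema.
record = {
    "dataSetName": "ExampleLake.Doe.2016",
    "archiveType": "lake sediment",
    "paleoData": [
        {
            "measurementTable": {
                "columns": [
                    {"variableName": "year", "units": "AD",
                     "values": [1850, 1900, 1950, 2000]},
                    {"variableName": "temperature", "units": "degC",
                     "values": [8.1, 8.3, 8.6, 9.0]},
                ]
            }
        }
    ],
}

def get_column(rec, name):
    """Extract a named variable's values from the first measurement table."""
    for col in rec["paleoData"][0]["measurementTable"]["columns"]:
        if col["variableName"] == name:
            return col["values"]
    raise KeyError(name)

print(get_column(record, "temperature"))  # [8.1, 8.3, 8.6, 9.0]
```

Because every variable carries its own name and units, a structure like this supports the quick querying and extraction the abstract describes, without the reader needing to know the layout of any particular dataset.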
Costa, Thiago L; Costa, Marcelo F; Magalhães, Adsson; Rêgo, Gabriel G; Nagy, Balázs V; Boggio, Paulo S; Ventura, Dora F
2015-02-19
Recent research suggests that V1 plays an active role in the judgment of size and distance. Nevertheless, no research has been performed using direct brain stimulation to address this issue. We used transcranial direct-current stimulation (tDCS) to directly modulate the early stages of cortical visual processing while measuring size and distance perception with a psychophysical scaling method of magnitude estimation in a repeated-measures design. The subjects randomly received anodal, cathodal, and sham tDCS in separate sessions, starting with either size or distance judgment tasks. Power functions were fit to the size judgment data, whereas logarithmic functions were fit to the distance judgment data. Slopes and R² were compared with separate repeated-measures analyses of variance with two factors: task (size vs. distance) and tDCS (anodal vs. cathodal vs. sham). Anodal tDCS significantly decreased slopes, apparently interfering with size perception. No effects were found for distance perception. Consistent with previous studies, the results of the size task appeared to reflect a prothetic continuum, whereas the results of the distance task seemed to reflect a metathetic continuum. The differential effects of tDCS on these tasks may support the hypothesis that different physiological mechanisms underlie judgments on these two continua. The results further suggest the complex involvement of the early visual cortex in size judgment tasks that go beyond the simple representation of low-level stimulus properties. This supports predictive coding models and experimental findings that suggest that higher-order visual areas may inhibit incoming information from the early visual cortex through feedback connections when complex tasks are performed. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
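The function-fitting step described above can be sketched with invented data: a power function is linear in log-log coordinates and a logarithmic function is linear in semi-log coordinates, so the slopes compared across tDCS conditions fall out of simple linear fits. The stimulus values and parameters below are illustrative assumptions, not data from the study.

```python
import numpy as np

# Hypothetical stimulus magnitudes and noiseless magnitude estimates;
# exponent 0.8 and slope 1.5 are invented for illustration.
stimulus = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
size_judgment = 3.0 * stimulus ** 0.8          # power-law pattern
dist_judgment = 2.0 + 1.5 * np.log(stimulus)   # logarithmic pattern

# Power function: linear in log-log space, so the exponent is the slope
# of a first-degree polynomial fit to the log-transformed data.
slope, intercept = np.polyfit(np.log(stimulus), np.log(size_judgment), 1)

# Logarithmic function: linear in semi-log space.
log_slope, log_intercept = np.polyfit(np.log(stimulus), dist_judgment, 1)

print(round(slope, 3), round(log_slope, 3))  # recovers ≈ 0.8 and ≈ 1.5
```

With real magnitude-estimation data the fits would not be exact, and the recovered slopes (and R² values) would be entered into the repeated-measures analyses the abstract describes.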
Behaviorally Relevant Abstract Object Identity Representation in the Human Parietal Cortex
Jeong, Su Keun
2016-01-01
The representation of object identity is fundamental to human vision. Using fMRI and multivoxel pattern analysis, here we report the representation of highly abstract object identity information in human parietal cortex. Specifically, in superior intraparietal sulcus (IPS), a region previously shown to track visual short-term memory capacity, we found object identity representations for famous faces varying freely in viewpoint, hairstyle, facial expression, and age, and for well-known cars embedded in different scenes and shown from different viewpoints and sizes. Critically, these parietal identity representations were behaviorally relevant, as they closely tracked the perceived face-identity similarity obtained in a behavioral task. Meanwhile, the task-activated regions in prefrontal and parietal cortices (excluding superior IPS) did not exhibit such abstract object identity representations. In addition, unlike previous studies, we failed to observe identity representations in posterior ventral and lateral visual object-processing regions, likely due to the greater amount of identity abstraction demanded by our stimulus manipulation here. Our MRI slice coverage precluded us from examining identity representation in the anterior temporal lobe, a likely region for the computing of identity information in the ventral region. Overall, we show that human parietal cortex, part of the dorsal visual processing pathway, is capable of holding abstract and complex visual representations that are behaviorally relevant. These results argue against a “content-poor” view of the role of parietal cortex in attention. Instead, the human parietal cortex seems to be “content rich” and capable of directly participating in goal-driven visual information representation in the brain. SIGNIFICANCE STATEMENT The representation of object identity (including faces) is fundamental to human vision and shapes how we interact with the world. 
Although object representation has traditionally been associated with human occipital and temporal cortices, here we show, by measuring fMRI response patterns, that a region in the human parietal cortex can robustly represent task-relevant object identities. These representations are invariant to changes in a host of visual features, such as viewpoint, and reflect an abstract level of representation that has not previously been reported in the human parietal cortex. Critically, these neural representations are behaviorally relevant as they closely track the perceived object identities. Human parietal cortex thus participates in the moment-to-moment goal-directed visual information representation in the brain. PMID:26843642
Thaler, Lore; Todd, James T
2009-04-01
Two experiments are reported that were designed to measure the accuracy and reliability of both visually guided hand movements (Exp. 1) and perceptual matching judgments (Exp. 2). The specific procedure for informing subjects of the required response on each trial was manipulated so that some tasks could only be performed using an allocentric representation of the visual target; others could be performed using either an allocentric or a hand-centered representation; still others could be performed based on an allocentric, hand-centered, or head/eye-centered representation. Both head/eye-centered and hand-centered representations are egocentric because they specify visual coordinates with respect to the subject. The results reveal that the accuracy and reliability of both motor and perceptual responses are highest when subjects direct their response towards a visible target location, which allows them to rely on a representation of the target in head/eye-centered coordinates. Systematic changes in the averages and standard deviations of responses are observed when subjects cannot direct their response towards a visible target location, but have to represent target distance and direction in either hand-centered or allocentric visual coordinates instead. Subjects' motor and perceptual performance are in good quantitative agreement. These results strongly suggest that subjects process head/eye-centered representations differently from hand-centered or allocentric representations, but that they process visual information for motor actions and perceptual judgments together.
ERIC Educational Resources Information Center
Galyas, Lesley Crowell
2016-01-01
Understanding visual representations is a pivotal skill in science. These visual, verbal, and numeric representations are the crux of science discourses "by scientists, with students and the general public" (Pauwels, 2006, p.viii). Those who lack an understanding of these representations see them as a foreign language, one…
Differential temporal dynamics during visual imagery and perception.
Dijkstra, Nadine; Mostert, Pim; Lange, Floris P de; Bosch, Sander; van Gerven, Marcel Aj
2018-05-29
Visual perception and imagery rely on similar representations in the visual cortex. During perception, visual activity is characterized by distinct processing stages, but the temporal dynamics underlying imagery remain unclear. Here, we investigated the dynamics of visual imagery in human participants using magnetoencephalography. Firstly, we show that, compared to perception, imagery decoding becomes significant later and representations at the start of imagery already overlap with later time points. This suggests that during imagery, the entire visual representation is activated at once or that there are large differences in the timing of imagery between trials. Secondly, we found consistent overlap between imagery and perceptual processing around 160 ms and from 300 ms after stimulus onset. This indicates that the N170 gets reactivated during imagery and that imagery does not rely on early perceptual representations. Together, these results provide important insights for our understanding of the neural mechanisms of visual imagery. © 2018, Dijkstra et al.
Framework for Querying and Analysis of Evolving Graphs
ERIC Educational Resources Information Center
Moffitt, Vera Zaychik
2017-01-01
Graph representations underlie many modern computer applications, capturing the structure of such diverse networks as the Internet, personal associations, roads, sensors, and metabolic pathways. While the static structure of graphs is a well-explored field, a new emphasis is being placed on understanding and representing the way these networks…
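The idea of an evolving graph can be sketched as a sequence of time-indexed edge snapshots. This minimal structure is a hypothetical illustration of the concept, not the querying framework developed in the dissertation.

```python
# Hypothetical evolving-graph structure: a timestamp -> edge-set mapping,
# supporting temporal queries such as "when did this edge exist?"
snapshots = {
    0: {("a", "b"), ("b", "c")},
    1: {("a", "b"), ("b", "c"), ("c", "d")},
    2: {("b", "c"), ("c", "d")},
}

def edge_lifetime(snaps, edge):
    """Return the sorted timestamps at which the edge is present."""
    return sorted(t for t, edges in snaps.items() if edge in edges)

print(edge_lifetime(snapshots, ("a", "b")))  # [0, 1]
```

Static graph queries (degree, reachability) operate on one snapshot; the added challenge for evolving graphs is answering such questions across the whole time-indexed history efficiently.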
ERIC Educational Resources Information Center
Walsh, Frank
This monograph synthesizes the laws and regulations that form the basis of the right to representation in the court of public opinion by all who would seek to influence public and private decisions. It expresses the framework of human and social values that underlie this constitutional freedom and that give public relations and other management…
Parent Emotion Representations and the Socialization of Emotion Regulation in the Family
ERIC Educational Resources Information Center
Meyer, Sara; Raikes, H. Abigail; Virmani, Elita A.; Waters, Sara; Thompson, Ross A.
2014-01-01
There is considerable knowledge of parental socialization processes that directly and indirectly influence the development of children's emotion self-regulation, but little understanding of the specific beliefs and values that underlie parents' socialization approaches. This study examined multiple aspects of parents' self-reported…
Sharpening of Hierarchical Visual Feature Representations of Blurred Images.
Abdelhack, Mohamed; Kamitani, Yukiyasu
2018-01-01
The robustness of the visual system lies in its ability to perceive degraded images. This is achieved through interacting bottom-up, recurrent, and top-down pathways that process the visual input in concordance with stored prior information. The interaction mechanism by which they integrate visual input and prior information is still enigmatic. We present a new approach using deep neural network (DNN) representation to reveal the effects of such integration on degraded visual inputs. We transformed measured human brain activity resulting from viewing blurred images to the hierarchical representation space derived from a feedforward DNN. Transformed representations were found to veer toward the original nonblurred image and away from the blurred stimulus image. This indicated deblurring or sharpening in the neural representation, and possibly in our perception. We anticipate these results will help unravel the interplay mechanism between bottom-up, recurrent, and top-down pathways, leading to more comprehensive models of vision.
The Statistics of Visual Representation
NASA Technical Reports Server (NTRS)
Jobson, Daniel J.; Rahman, Zia-Ur; Woodell, Glenn A.
2002-01-01
The experience of retinex image processing has prompted us to reconsider fundamental aspects of imaging and image processing. Foremost is the idea that a good visual representation requires a non-linear transformation of the recorded (approximately linear) image data. Further, this transformation appears to converge on a specific distribution. Here we investigate the connection between numerical and visual phenomena. Specifically the questions explored are: (1) Is there a well-defined consistent statistical character associated with good visual representations? (2) Does there exist an ideal visual image? And (3) what are its statistical properties?
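The kind of non-linear transformation at the heart of retinex processing can be illustrated with a simplified single-scale sketch: the log of the image minus the log of a blurred surround estimate. The box-shaped surround and parameter values below are assumptions for demonstration, not the authors' NASA pipeline.

```python
import numpy as np

def box_blur(img, radius):
    """Crude surround estimate: mean over a (2r+1) x (2r+1) box, edge-padded."""
    padded = np.pad(img, radius, mode="edge")
    out = np.zeros(img.shape, dtype=np.float64)
    k = 2 * radius + 1
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def single_scale_retinex(image, radius=8):
    """Retinex-style non-linear transform: log(image) minus log(blurred
    surround). Radius and the box surround are illustrative choices."""
    img = image.astype(np.float64) + 1.0  # offset avoids log(0)
    return np.log(img) - np.log(box_blur(img, radius))

# A uniform image has no local contrast, so the output is ~zero everywhere;
# only deviations from the local surround survive the transform.
flat = np.full((32, 32), 128, dtype=np.uint8)
print(np.allclose(single_scale_retinex(flat), 0.0))  # True
```

The log-of-ratio form makes the output depend on local contrast rather than absolute intensity, which is one route to the non-linear, converging distribution of "good" visual representations the abstract investigates.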
‘What’ and ‘Where’ in Visual Attention: Evidence from the Neglect Syndrome
1992-01-01
Shifting Attention within Memory Representations Involves Early Visual Areas
Munneke, Jaap; Belopolsky, Artem V.; Theeuwes, Jan
2012-01-01
Prior studies have shown that spatial attention modulates early visual cortex retinotopically, resulting in enhanced processing of external perceptual representations. However, it is not clear whether the same visual areas are modulated when attention is focused on, and shifted within a working memory representation. In the current fMRI study participants were asked to memorize an array containing four stimuli. After a delay, participants were presented with a verbal cue instructing them to actively maintain the location of one of the stimuli in working memory. Additionally, on a number of trials a second verbal cue instructed participants to switch attention to the location of another stimulus within the memorized representation. Results of the study showed that changes in the BOLD pattern closely followed the locus of attention within the working memory representation. A decrease in BOLD-activity (V1–V3) was observed at ROIs coding a memory location when participants switched away from this location, whereas an increase was observed when participants switched towards this location. Continuous increased activity was obtained at the memorized location when participants did not switch. This study shows that shifting attention within memory representations activates the earliest parts of visual cortex (including V1) in a retinotopic fashion. We conclude that even in the absence of visual stimulation, early visual areas support shifting of attention within memorized representations, similar to when attention is shifted in the outside world. The relationship between visual working memory and visual mental imagery is discussed in light of the current findings. PMID:22558165
Spinal cord injury affects the interplay between visual and sensorimotor representations of the body
Ionta, Silvio; Villiger, Michael; Jutzeler, Catherine R; Freund, Patrick; Curt, Armin; Gassert, Roger
2016-01-01
The brain integrates multiple sensory inputs, including somatosensory and visual inputs, to produce a representation of the body. Spinal cord injury (SCI) interrupts the communication between brain and body and the effects of this deafferentation on body representation are poorly understood. We investigated whether the relative weight of somatosensory and visual frames of reference for body representation is altered in individuals with incomplete or complete SCI (affecting lower limbs’ somatosensation), with respect to controls. To study the influence of afferent somatosensory information on body representation, participants verbally judged the laterality of rotated images of feet, hands, and whole-bodies (mental rotation task) in two different postures (participants’ body parts were hidden from view). We found that (i) complete SCI disrupts the influence of postural changes on the representation of the deafferented body parts (feet, but not hands) and (ii) regardless of posture, whole-body representation progressively deteriorates proportionally to SCI completeness. These results demonstrate that the cortical representation of the body is dynamic, responsive, and adaptable to contingent conditions, in that the role of somatosensation is altered and partially compensated with a change in the relative weight of somatosensory versus visual bodily representations. PMID:26842303
Embedded Data Representations.
Willett, Wesley; Jansen, Yvonne; Dragicevic, Pierre
2017-01-01
We introduce embedded data representations, the use of visual and physical representations of data that are deeply integrated with the physical spaces, objects, and entities to which the data refers. Technologies like lightweight wireless displays, mixed reality hardware, and autonomous vehicles are making it increasingly easy to display data in context. While researchers and artists have already begun to create embedded data representations, the benefits, trade-offs, and even the language necessary to describe and compare these approaches remain unexplored. In this paper, we formalize the notion of physical data referents - the real-world entities and spaces to which data corresponds - and examine the relationship between referents and the visual and physical representations of their data. We differentiate situated representations, which display data in proximity to data referents, and embedded representations, which display data so that it spatially coincides with data referents. Drawing on examples from visualization, ubiquitous computing, and art, we explore the role of spatial indirection, scale, and interaction for embedded representations. We also examine the trade-offs between non-situated, situated, and embedded data displays, including both visualizations and physicalizations. Based on our observations, we identify a variety of design challenges for embedded data representation, and suggest opportunities for future research and applications.
Visual Representations of the Water Cycle in Science Textbooks
ERIC Educational Resources Information Center
Vinisha, K.; Ramadas, J.
2013-01-01
Visual representations, including photographs, sketches and schematic diagrams, are a valuable yet often neglected aspect of textbooks. Visual means of communication are particularly helpful in introducing abstract concepts in science. For effective communication, visuals and text need to be appropriately integrated within the textbook. This study…
ERIC Educational Resources Information Center
Evagorou, Maria; Erduran, Sibel; Mäntylä, Terhi
2015-01-01
Background: The use of visual representations (i.e., photographs, diagrams, models) has been part of science, and their use makes it possible for scientists to interact with and represent complex phenomena, not observable in other ways. Despite a wealth of research in science education on visual representations, the emphasis of such research has…
ERIC Educational Resources Information Center
Moreno, Roxana; Ozogul, Gamze; Reisslein, Martin
2011-01-01
In 3 experiments, we examined the effects of using concrete and/or abstract visual problem representations during instruction on students' problem-solving practice, near transfer, problem representations, and learning perceptions. In Experiments 1 and 2, novice students learned about electrical circuit analysis with an instructional program that…
Expertise Reversal for Iconic Representations in Science Visualizations
ERIC Educational Resources Information Center
Homer, Bruce D.; Plass, Jan L.
2010-01-01
The influence of prior knowledge and cognitive development on the effectiveness of iconic representations in science visualizations was examined. Middle and high school students (N = 186) were given narrated visualizations of two chemistry topics: Kinetic Molecular Theory (Day 1) and Ideal Gas Laws (Day 2). For half of the visualizations, iconic…
Visual-Spatial Attention Aids the Maintenance of Object Representations in Visual Working Memory
Williams, Melonie; Pouget, Pierre; Boucher, Leanne; Woodman, Geoffrey F.
2013-01-01
Theories have proposed that the maintenance of object representations in visual working memory is aided by a spatial rehearsal mechanism. In this study, we used two different approaches to test the hypothesis that overt and covert visual-spatial attention mechanisms contribute to the maintenance of object representations in visual working memory. First, we tracked observers’ eye movements while remembering a variable number of objects during change-detection tasks. We observed that during the blank retention interval, participants spontaneously shifted gaze to the locations that the objects had occupied in the memory array. Next, we hypothesized that if attention mechanisms contribute to the maintenance of object representations, then drawing attention away from the object locations during the retention interval would impair object memory during these change-detection tasks. Supporting this prediction, we found that attending to the fixation point in anticipation of a brief probe stimulus during the retention interval reduced change-detection accuracy even on the trials in which no probe occurred. These findings support models of working memory in which visual-spatial selection mechanisms contribute to the maintenance of object representations. PMID:23371773
Newborn chickens generate invariant object representations at the onset of visual object experience
Wood, Justin N.
2013-01-01
To recognize objects quickly and accurately, mature visual systems build invariant object representations that generalize across a range of novel viewing conditions (e.g., changes in viewpoint). To date, however, the origins of this core cognitive ability have not yet been established. To examine how invariant object recognition develops in a newborn visual system, I raised chickens from birth for 2 weeks within controlled-rearing chambers. These chambers provided complete control over all visual object experiences. In the first week of life, subjects’ visual object experience was limited to a single virtual object rotating through a 60° viewpoint range. In the second week of life, I examined whether subjects could recognize that virtual object from novel viewpoints. Newborn chickens were able to generate viewpoint-invariant representations that supported object recognition across large, novel, and complex changes in the object’s appearance. Thus, newborn visual systems can begin building invariant object representations at the onset of visual object experience. These abstract representations can be generated from sparse data, in this case from a visual world containing a single virtual object seen from a limited range of viewpoints. This study shows that powerful, robust, and invariant object recognition machinery is an inherent feature of the newborn brain. PMID:23918372
Task alters category representations in prefrontal but not high-level visual cortex.
Bugatus, Lior; Weiner, Kevin S; Grill-Spector, Kalanit
2017-07-15
A central question in neuroscience is how cognitive tasks affect category representations across the human brain. Regions in lateral occipito-temporal cortex (LOTC), ventral temporal cortex (VTC), and ventro-lateral prefrontal cortex (VLPFC) constitute the extended “what” pathway, which is considered instrumental for visual category processing. However, it is unknown (1) whether distributed responses across LOTC, VTC, and VLPFC explicitly represent category, task, or some combination of both, and (2) in what way representations across these subdivisions of the extended “what” pathway may differ. To fill these gaps in knowledge, we scanned 12 participants using fMRI to test the effect of category and task on distributed responses across LOTC, VTC, and VLPFC. Results reveal that task and category modulate responses in both high-level visual regions and prefrontal cortex. However, we found fundamentally different types of representations across the brain. Distributed responses in high-level visual regions are more strongly driven by category than task, and exhibit task-independent category representations. In contrast, distributed responses in prefrontal cortex are more strongly driven by task than category, and contain task-dependent category representations. Together, these findings of differential representations across the brain support a new idea that LOTC and VTC maintain stable category representations allowing efficient processing of visual information, while prefrontal cortex contains flexible representations in which category information may emerge only when relevant to the task. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Tippett, Christine Diane
Scientific knowledge is constructed and communicated through a range of forms in addition to verbal language. Maps, graphs, charts, diagrams, formulae, models, and drawings are just some of the ways in which science concepts can be represented. Representational competence (an aspect of visual literacy that focuses on the ability to interpret, transform, and produce visual representations) is a key component of science literacy and an essential part of science reading and writing. To date, however, most research has examined learning from representations rather than learning with representations. This dissertation consisted of three distinct projects that were related by a common focus on learning from visual representations as an important aspect of scientific literacy. The first project was the development of an exploratory framework that is proposed for use in investigations of students constructing and interpreting multimedia texts. The exploratory framework, which integrates cognition, metacognition, semiotics, and systemic functional linguistics, could eventually result in a model that might be used to guide classroom practice, leading to improved visual literacy, better comprehension of science concepts, and enhanced science literacy, because it emphasizes distinct aspects of learning with representations that can be addressed through explicit instruction. The second project was a metasynthesis of the research that was previously conducted as part of the Explicit Literacy Instruction Embedded in Middle School Science project (Pacific CRYSTAL, http://www.educ.uvic.ca/pacificcrystal). Five overarching themes emerged from this case-to-case synthesis: the engaging and effective nature of multimedia genres, opportunities for differentiated instruction using multimodal strategies, opportunities for assessment, an emphasis on visual representations, and the robustness of some multimodal literacy strategies across content areas.
The third project was a mixed-methods verification study that was conducted to refine and validate the theoretical framework. This study examined middle school students' representational competence and focused on students' creation of visual representations such as labelled diagrams, a form of representation commonly found in science information texts and textbooks. An analysis of the 31 Grade 6 participants' representations and semistructured interviews revealed five themes, each of which supports one or more dimensions of the exploratory framework: participants' use of color, participants' choice of representation (form and function), participants' method of planning for representing, participants' knowledge of conventions, and participants' selection of information to represent. Together, the results of these three projects highlight the need for further research on learning with rather than learning from representations.
Sereno, Anne B.; Lehky, Sidney R.
2011-01-01
Although the representation of space is as fundamental to visual processing as the representation of shape, it has received relatively little attention from neurophysiological investigations. In this study we characterize representations of space within visual cortex, and examine how they differ in a first direct comparison between dorsal and ventral subdivisions of the visual pathways. Neural activities were recorded in anterior inferotemporal cortex (AIT) and lateral intraparietal cortex (LIP) of awake behaving monkeys, structures associated with the ventral and dorsal visual pathways respectively, as a stimulus was presented at different locations within the visual field. In spatially selective cells, we find greater modulation of cell responses in LIP with changes in stimulus position. Further, using a novel population-based statistical approach (namely, multidimensional scaling), we recover the spatial map implicit within activities of neural populations, allowing us to quantitatively compare the geometry of neural space with physical space. We show that a population of spatially selective LIP neurons, despite having large receptive fields, is able to almost perfectly reconstruct stimulus locations within a low-dimensional representation. In contrast, a population of AIT neurons, despite each cell being spatially selective, provides less accurate low-dimensional reconstructions of stimulus locations. This population instead produces only a topologically (categorically) correct rendition of space, which nevertheless might be critical for object and scene recognition. Furthermore, we found that the spatial representation recovered from population activity shows greater translation invariance in LIP than in AIT. We suggest that LIP spatial representations may be dimensionally isomorphic with 3D physical space, while in AIT spatial representations may reflect a more categorical representation of space (e.g., “next to” or “above”). PMID:21344010
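The population-based multidimensional-scaling analysis described in this abstract can be illustrated with a toy simulation: neurons with large Gaussian receptive fields respond to stimuli along a line, and classical MDS applied to the pairwise dissimilarities of the population response vectors recovers the stimulus positions. All tuning parameters below are invented for illustration; this is a sketch of the general technique, not the authors' analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stimulus positions along one dimension of the visual field (degrees).
positions = np.linspace(-10, 10, 9)

# Population of neurons with large Gaussian receptive fields
# (hypothetical centers and widths).
n_neurons = 50
centers = rng.uniform(-12, 12, n_neurons)
widths = rng.uniform(5, 10, n_neurons)
responses = np.exp(-(positions[:, None] - centers[None, :]) ** 2
                   / (2 * widths[None, :] ** 2))       # (9, 50)

# Pairwise Euclidean dissimilarity between population response vectors.
diff = responses[:, None, :] - responses[None, :, :]
D = np.sqrt((diff ** 2).sum(-1))

# Classical MDS: double-centre the squared distances, then eigendecompose.
n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J
evals, evecs = np.linalg.eigh(B)
coord = evecs[:, -1] * np.sqrt(evals[-1])  # leading MDS dimension

# The recovered coordinate should track physical position (up to sign).
r = abs(np.corrcoef(coord, positions)[0, 1])
print(f"correlation of recovered map with physical space: {r:.3f}")
```

A near-perfect correlation here corresponds to the LIP-like case; degrading the tuning (e.g., making responses saturate) would yield the merely topological, AIT-like rendition of space.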
Lighten the Load: Scaffolding Visual Literacy in Biochemistry and Molecular Biology
Offerdahl, Erika G.; Arneson, Jessie B.; Byrne, Nicholas
2017-01-01
The development of scientific visual literacy has been identified as critical to the training of tomorrow’s scientists and citizens alike. Within the context of the molecular life sciences in particular, visual representations frequently incorporate various components, such as discipline-specific graphical and diagrammatic features, varied levels of abstraction, and spatial arrangements of visual elements to convey information. Visual literacy is achieved when an individual understands the various ways in which a discipline uses these components to represent a particular way of knowing. Owing to the complex nature of visual representations, the activities through which visual literacy is developed have high cognitive load. Cognitive load can be reduced by first helping students to become fluent with the discrete components of visual representations before asking them to simultaneously integrate these components to extract the intended meaning of a representation. We present a taxonomy for characterizing one component of visual representations—the level of abstraction—as a first step in understanding the opportunities afforded students to develop fluency. Further, we demonstrate how our taxonomy can be used to analyze course assessments and spur discussions regarding the extent to which the development of visual literacy skills is supported by instruction within an undergraduate biochemistry curriculum. PMID:28130273
Spatial Language, Visual Attention, and Perceptual Simulation
ERIC Educational Resources Information Center
Coventry, Kenny R.; Lynott, Dermot; Cangelosi, Angelo; Monrouxe, Lynn; Joyce, Dan; Richardson, Daniel C.
2010-01-01
Spatial language descriptions, such as "The bottle is over the glass", direct the attention of the hearer to particular aspects of the visual world. This paper asks how they do so, and what brain mechanisms underlie this process. In two experiments employing behavioural and eye tracking methodologies we examined the effects of spatial language on…
Kraehenmann, Rainer; Schmidt, André; Friston, Karl; Preller, Katrin H; Seifritz, Erich; Vollenweider, Franz X
2016-01-01
Stimulation of serotonergic neurotransmission by psilocybin has been shown to shift emotional biases away from negative towards positive stimuli. We have recently shown that reduced amygdala activity during threat processing might underlie psilocybin's effect on emotional processing. However, it is still not known whether psilocybin modulates bottom-up or top-down connectivity within the visual-limbic-prefrontal network underlying threat processing. We therefore analyzed our previous fMRI data using dynamic causal modeling and used Bayesian model selection to infer how psilocybin modulated effective connectivity within the visual-limbic-prefrontal network during threat processing. First, both placebo and psilocybin data were best explained by a model in which threat affect modulated bidirectional connections between the primary visual cortex, amygdala, and lateral prefrontal cortex. Second, psilocybin decreased the threat-induced modulation of top-down connectivity from the amygdala to primary visual cortex, speaking to a neural mechanism that might underlie putative shifts towards positive affect states after psilocybin administration. These findings may have important implications for the treatment of mood and anxiety disorders.
Words, shape, visual search and visual working memory in 3-year-old children.
Vales, Catarina; Smith, Linda B
2015-01-01
Do words cue children's visual attention, and if so, what are the relevant mechanisms? Across four experiments, 3-year-old children (N = 163) were tested in visual search tasks in which targets were cued with only a visual preview versus a visual preview and a spoken name. The experiments were designed to determine whether labels facilitated search times and to examine one route through which labels could have their effect: By influencing the visual working memory representation of the target. The targets and distractors were pictures of instances of basic-level known categories and the labels were the common name for the target category. We predicted that the label would enhance the visual working memory representation of the target object, guiding attention to objects that better matched the target representation. Experiments 1 and 2 used conjunctive search tasks, and Experiment 3 varied shape discriminability between targets and distractors. Experiment 4 compared the effects of labels to repeated presentations of the visual target, which should also influence the working memory representation of the target. The overall pattern fits contemporary theories of how the contents of visual working memory interact with visual search and attention, and shows that even in very young children heard words affect the processing of visual information.
ERIC Educational Resources Information Center
Cattaneo, Zaira; Mattavelli, Giulia; Papagno, Costanza; Herbert, Andrew; Silvanto, Juha
2011-01-01
The human visual system is able to efficiently extract symmetry information from the visual environment. Prior neuroimaging evidence has revealed symmetry-preferring neuronal representations in the dorsolateral extrastriate visual cortex; the objective of the present study was to investigate the necessity of these representations in symmetry…
Magnani, Barbara; Frassinetti, Francesca; Ditye, Thomas; Oliveri, Massimiliano; Costantini, Marcello; Walsh, Vincent
2014-05-15
Prismatic adaptation (PA) has been shown to affect left-to-right spatial representations of temporal durations. A leftward aftereffect usually distorts time representation toward an underestimation, while a rightward aftereffect usually results in an overestimation of temporal durations. Here, we used functional magnetic resonance imaging (fMRI) to study the neural mechanisms that underlie PA effects on time perception. Additionally, we investigated whether the effect of PA on time is transient or stable and, in the case of stability, which cortical areas are responsible for its maintenance. Functional brain images were acquired while participants (n=17) performed a time reproduction task and a control-task before, immediately after and 30 min after PA inducing a leftward aftereffect, administered outside the scanner. The leftward aftereffect induced an underestimation of time intervals that lasted for at least 30 min. The left anterior insula and the left superior frontal gyrus showed increased functional activation immediately after versus before PA in the time versus the control-task, suggesting these brain areas to be involved in the executive spatial manipulation of the representation of time. The left middle frontal gyrus showed an increase of activation after 30 min with respect to before PA. This suggests that this brain region may play a key role in the maintenance of the PA effect over time.
[Sociophysiology: basic processes of empathy].
Haker, Helene; Schimansky, Jenny; Rössler, Wulf
2010-01-01
The aim of this review is to describe sociophysiological and social cognitive processes that underlie the complex phenomenon of human empathy. Automatic reflexive processes such as physiological contagion and action mirroring are mediated by the mirror neuron system. They are a basis for further processing of social signals and a physiological link between two individuals. This link comprises simultaneous activation of shared motor representations. Shared representations lead implicitly, via individual associations in the limbic and vegetative system, to a shared affective state. These processes are called sociophysiology. Further controlled-reflective, self-referential processing of those social signals leads to explicit, conscious representations of others' minds. Those higher-order processes are called social cognition. The interaction of physiological and cognitive social processes gives rise to the phenomenon of human empathy.
Eye movement-invariant representations in the human visual system.
Nishimoto, Shinji; Huth, Alexander G; Bilenko, Natalia Y; Gallant, Jack L
2017-01-01
During natural vision, humans make frequent eye movements but perceive a stable visual world. It is therefore likely that the human visual system contains representations of the visual world that are invariant to eye movements. Here we present an experiment designed to identify visual areas that might contain eye-movement-invariant representations. We used functional MRI to record brain activity from four human subjects who watched natural movies. In one condition subjects were required to fixate steadily, and in the other they were allowed to freely make voluntary eye movements. The movies used in each condition were identical. We reasoned that the brain activity recorded in a visual area that is invariant to eye movement should be similar under fixation and free viewing conditions. In contrast, activity in a visual area that is sensitive to eye movement should differ between fixation and free viewing. We therefore measured the similarity of brain activity across repeated presentations of the same movie within the fixation condition, and separately between the fixation and free viewing conditions. The ratio of these measures was used to determine which brain areas are most likely to contain eye movement-invariant representations. We found that voxels located in early visual areas are strongly affected by eye movements, while voxels in ventral temporal areas are only weakly affected by eye movements. These results suggest that the ventral temporal visual areas contain a stable representation of the visual world that is invariant to eye movements made during natural vision.
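The similarity-ratio logic of this design (similarity between fixation and free viewing, relative to similarity within the fixation condition) can be sketched with a toy voxel model. The generative model and every number below are hypothetical; they only illustrate why the ratio approaches 1 for an eye-movement-invariant response and falls well below 1 for an eye-movement-sensitive one.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 500  # simulated time points

def invariance_index(shared_drive, eye_sensitivity):
    """Ratio of fixation/free-viewing similarity to fixation/fixation
    similarity for one simulated voxel (toy model, not real fMRI data)."""
    stim = rng.standard_normal(T)            # movie-driven signal
    eye = rng.standard_normal(T)             # eye-movement-driven signal
    noise = lambda: 0.5 * rng.standard_normal(T)
    fix1 = shared_drive * stim + noise()     # fixation, repeat 1
    fix2 = shared_drive * stim + noise()     # fixation, repeat 2
    free = shared_drive * stim + eye_sensitivity * eye + noise()
    within = np.corrcoef(fix1, fix2)[0, 1]
    between = np.corrcoef(fix1, free)[0, 1]
    return between / within

# A ventral-temporal-like (invariant) vs an early-visual-like voxel.
print(invariance_index(1.0, 0.1))   # ratio near 1: invariant
print(invariance_index(1.0, 1.5))   # ratio well below 1: eye-sensitive
```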
Does scene context always facilitate retrieval of visual object representations?
Nakashima, Ryoichi; Yokosawa, Kazuhiko
2011-04-01
An object-to-scene binding hypothesis maintains that visual object representations are stored as part of a larger scene representation or scene context, and that scene context facilitates retrieval of object representations (see, e.g., Hollingworth, Journal of Experimental Psychology: Learning, Memory and Cognition, 32, 58-69, 2006). Support for this hypothesis comes from data using an intentional memory task. In the present study, we examined whether scene context always facilitates retrieval of visual object representations. In two experiments, we investigated whether the scene context facilitates retrieval of object representations, using a new paradigm in which a memory task is appended to a repeated-flicker change detection task. Results indicated that in normal scene viewing, in which many simultaneous objects appear, scene context facilitation of the retrieval of object representations (henceforth termed object-to-scene binding) occurred only when the observer was required to retain much information for a task (i.e., an intentional memory task).
Revisiting Curriculum Inquiry: The Role of Visual Representations
ERIC Educational Resources Information Center
Eilam, Billie; Ben-Peretz, Miriam
2010-01-01
How do visual representations (VRs) in curriculum materials influence theoretical curriculum frameworks? Suggesting that VRs' integration into curriculum materials affords a different lens for perceiving and understanding the curriculum domain, this study draws on a curricular perspective in relation to multi-representations in texts rather than…
Xie, Weizhen; Zhang, Weiwei
2017-11-01
The present study dissociated the number (i.e., quantity) and precision (i.e., quality) of visual short-term memory (STM) representations in change detection using receiver operating characteristic (ROC) and experimental manipulations. Across three experiments, participants performed both recognition and recall tests of visual STM using the change-detection task and the continuous color-wheel recall task, respectively. Experiment 1 demonstrated that the estimates of the number and precision of visual STM representations based on the ROC model of change-detection performance were robustly correlated with the corresponding estimates based on the mixture model of continuous-recall performance. Experiments 2 and 3 showed that the experimental manipulation of mnemonic precision using white-noise masking and the experimental manipulation of the number of encoded STM representations using consolidation masking produced selective effects on the corresponding measures of mnemonic precision and the number of encoded STM representations, respectively, in both change-detection and continuous-recall tasks. Altogether, using the individual-differences (Experiment 1) and experimental dissociation (Experiments 2 and 3) approaches, the present study demonstrated the some-or-none nature of visual STM representations across recall and recognition.
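The mixture model of continuous-recall performance mentioned in this abstract is commonly formalized as a von Mises distribution of report errors around the target (indexing precision) mixed with uniform guessing (indexing whether the item was in memory). A minimal sketch, fitting the two parameters by grid-search maximum likelihood on simulated data; all parameter values and grids below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(n, p_mem, kappa):
    """Recall errors: in memory with prob. p_mem (von Mises around 0),
    otherwise a uniform guess on the color wheel (toy data)."""
    in_mem = rng.random(n) < p_mem
    return np.where(in_mem,
                    rng.vonmises(0.0, kappa, n),
                    rng.uniform(-np.pi, np.pi, n))

def fit(err, p_grid, k_grid):
    """Grid-search maximum likelihood for the two-component mixture."""
    best, best_ll = None, -np.inf
    for p in p_grid:
        for k in k_grid:
            vm = np.exp(k * np.cos(err)) / (2 * np.pi * np.i0(k))
            ll = np.log(p * vm + (1 - p) / (2 * np.pi)).sum()
            if ll > best_ll:
                best, best_ll = (p, k), ll
    return best

err = simulate(2000, p_mem=0.7, kappa=8.0)
p_hat, k_hat = fit(err,
                   p_grid=np.linspace(0.3, 0.95, 14),
                   k_grid=np.linspace(2, 16, 15))
print(f"estimated p_mem ≈ {p_hat:.2f}, kappa ≈ {k_hat:.1f}")
```

In the study's terms, `p_hat` tracks the number of items in memory (via the probability an item is represented) and `k_hat` tracks mnemonic precision; masking manipulations would be expected to move one estimate but not the other.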
NASA Astrophysics Data System (ADS)
López, Víctor; Pintó, Roser
2017-07-01
Computer simulations are often considered effective educational tools, since their visual and communicative power enable students to better understand physical systems and phenomena. However, previous studies have found that when students read visual representations some reading difficulties can arise, especially when these are complex or dynamic representations. We have analyzed how secondary-school students read the visual representations displayed in two PhET simulations (one addressing friction heating at the microscopic level, and the other addressing electromagnetic induction), and several types of reading difficulty have been identified: when reading the compositional structure of the representation, when giving appropriate relevance and semantic meaning to each visual element, and also when dealing with multiple representations and dynamic information. All students experienced at least one of these difficulties, and very similar difficulties appeared in the two groups of students, despite the different scientific content of the simulations. In conclusion, visualisation does not imply a full comprehension of the content of scientific simulations per se, and an effective reading process requires a set of reading skills, previous knowledge, attention, and external supports. Science teachers should bear in mind these issues in order to help students read images and benefit from their educational potential.
Träff, Ulf
2013-10-01
This study examined the relative contributions of general cognitive abilities and number abilities to word problem solving, calculation, and arithmetic fact retrieval in a sample of 134 children aged 10 to 13 years. The following tasks were administered: listening span, visual matrix span, verbal fluency, color naming, Raven's Progressive Matrices, enumeration, number line estimation, and digit comparison. Hierarchical multiple regressions demonstrated that number abilities provided an independent contribution to fact retrieval and word problem solving. General cognitive abilities contributed to problem solving and calculation. All three number tasks accounted for a similar amount of variance in fact retrieval, whereas only the number line estimation task contributed unique variance in word problem solving. Verbal fluency and Raven's matrices accounted for an equal amount of variance in problem solving and calculation. The current findings demonstrate, in accordance with Fuchs and colleagues' developmental model of mathematical learning (Developmental Psychology, 2010, Vol. 46, pp. 1731-1746), that both number abilities and general cognitive abilities underlie 10- to 13-year-olds' proficiency in problem solving, whereas only number abilities underlie arithmetic fact retrieval. Thus, the amount and type of cognitive contribution to arithmetic proficiency varies between the different aspects of arithmetic. Furthermore, how closely linked a specific aspect of arithmetic is to the whole number representation systems is not the only factor determining the amount and type of cognitive contribution in 10- to 13-year-olds. In addition, the mathematical complexity of the task appears to influence the amount and type of cognitive support.
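The hierarchical-regression logic used here (a predictor's unique contribution measured as the increment in R² when it is added to a base model) can be sketched on simulated data. The data-generating coefficients below are invented; only the sample size echoes the study.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 134  # sample size reported in the abstract

# Hypothetical composites: a general-ability score and a correlated
# number-ability score, plus an outcome built from both.
general = rng.standard_normal(n)
number = 0.5 * general + rng.standard_normal(n)
outcome = 0.4 * general + 0.6 * number + rng.standard_normal(n)

def r_squared(predictors, y):
    """R² of an OLS fit with intercept (via least squares)."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_base = r_squared([general], outcome)          # step 1: general only
r2_full = r_squared([general, number], outcome)  # step 2: add number
print(f"unique contribution of number ability: dR2 = {r2_full - r2_base:.3f}")
```

A positive increment at step 2 is what the abstract reports as number abilities providing "an independent contribution" over and above general cognitive abilities.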
Different categories of living and non-living sound-sources activate distinct cortical networks
Engel, Lauren R.; Frum, Chris; Puce, Aina; Walker, Nathan A.; Lewis, James W.
2009-01-01
With regard to hearing perception, it remains unclear as to whether, or the extent to which, different conceptual categories of real-world sounds and related categorical knowledge are differentially represented in the brain. Semantic knowledge representations are reported to include the major divisions of living versus non-living things, plus more specific categories including animals, tools, biological motion, faces, and places—categories typically defined by their characteristic visual features. Here, we used functional magnetic resonance imaging (fMRI) to identify brain regions showing preferential activity to four categories of action sounds, which included non-vocal human and animal actions (living), plus mechanical and environmental sound-producing actions (non-living). The results showed a striking antero-posterior division in cortical representations for sounds produced by living versus non-living sources. Additionally, there were several significant differences by category, depending on whether the task was category-specific (e.g. human or not) versus non-specific (detect end-of-sound). In general, (1) human-produced sounds yielded robust activation in the bilateral posterior superior temporal sulci independent of task. Task demands modulated activation of left-lateralized fronto-parietal regions, bilateral insular cortices, and subcortical regions previously implicated in observation-execution matching, consistent with “embodied” and mirror-neuron network representations subserving recognition. (2) Animal action sounds preferentially activated the bilateral posterior insulae. (3) Mechanical sounds activated the anterior superior temporal gyri and parahippocampal cortices. (4) Environmental sounds preferentially activated dorsal occipital and medial parietal cortices. 
Overall, this multi-level dissociation of networks for preferentially representing distinct sound-source categories provides novel support for grounded cognition models that may underlie organizational principles for hearing perception. PMID:19465134
Arcaro, Michael J; Honey, Christopher J; Mruczek, Ryan E B; Kastner, Sabine; Hasson, Uri
2015-02-19
The human visual system can be divided into over two-dozen distinct areas, each of which contains a topographic map of the visual field. A fundamental question in vision neuroscience is how the visual system integrates information from the environment across different areas. Using neuroimaging, we investigated the spatial pattern of correlated BOLD signal across eight visual areas on data collected during rest conditions and during naturalistic movie viewing. The correlation pattern between areas reflected the underlying receptive field organization with higher correlations between cortical sites containing overlapping representations of visual space. In addition, the correlation pattern reflected the underlying widespread eccentricity organization of visual cortex, in which the highest correlations were observed for cortical sites with iso-eccentricity representations including regions with non-overlapping representations of visual space. This eccentricity-based correlation pattern appears to be part of an intrinsic functional architecture that supports the integration of information across functionally specialized visual areas.
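The core result above (higher inter-area BOLD correlations between cortical sites with shared eccentricity representations) can be illustrated with a toy simulation in which each simulated site mixes a drive specific to its eccentricity with independent site noise. All parameters are invented; this is not the study's analysis.

```python
import numpy as np

rng = np.random.default_rng(4)
T = 1000  # simulated time points

# Five simulated cortical sites, each tagged with an eccentricity (deg).
ecc = np.array([1.0, 1.0, 4.0, 4.0, 8.0])

# One shared drive per eccentricity band, plus independent site noise.
drives = {e: rng.standard_normal(T) for e in np.unique(ecc)}
ts = np.array([drives[e] + 0.8 * rng.standard_normal(T) for e in ecc])

C = np.corrcoef(ts)
iso = C[0, 1]    # sites sharing an eccentricity representation
cross = C[0, 2]  # sites at different eccentricities
print(f"iso-eccentricity r = {iso:.2f}, cross-eccentricity r = {cross:.2f}")
```

The iso-eccentricity pair inherits a common signal and so correlates strongly even though the two sites could lie in different visual areas, mirroring the eccentricity-based correlation pattern the abstract describes.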
Visualization of diversity in large multivariate data sets.
Pham, Tuan; Hess, Rob; Ju, Crystal; Zhang, Eugene; Metoyer, Ronald
2010-01-01
Understanding the diversity of a set of multivariate objects is an important problem in many domains, including ecology, college admissions, investing, machine learning, and others. However, to date, very little work has been done to help users achieve this kind of understanding. Visual representation is especially appealing for this task because it offers the potential to allow users to efficiently observe the objects of interest in a direct and holistic way. Thus, in this paper, we attempt to formalize the problem of visualizing the diversity of a large (more than 1000 objects), multivariate (more than 5 attributes) data set as one worth deeper investigation by the information visualization community. In doing so, we contribute a precise definition of diversity, a set of requirements for diversity visualizations based on this definition, and a formal user study design intended to evaluate the capacity of a visual representation for communicating diversity information. Our primary contribution, however, is a visual representation, called the Diversity Map, for visualizing diversity. An evaluation of the Diversity Map using our study design shows that users can judge elements of diversity consistently and at least as accurately as when using the only other representation specifically designed to visualize diversity.
Drawing Connections Across Conceptually Related Visual Representations in Science
NASA Astrophysics Data System (ADS)
Hansen, Janice
This dissertation explored beliefs about learning from multiple related visual representations in science, and compared beliefs to learning outcomes. Three research questions were explored: 1) What beliefs do pre-service teachers, non-educators and children have about learning from visual representations? 2) What format of presenting those representations is most effective for learning? And, 3) Can children's ability to process conceptually related science diagrams be enhanced with added support? Three groups of participants, 89 pre-service teachers, 211 adult non-educators, and 385 middle school children, were surveyed about whether they felt related visual representations presented serially or simultaneously would lead to better learning outcomes. Two experiments, one with adults and one with child participants, explored the validity of these beliefs. Pre-service teachers did not endorse either serial or simultaneous related visual representations for their own learning. They were, however, significantly more likely to indicate that children would learn better from serially presented diagrams. In direct contrast to the educators, middle school students believed they would learn better from related visual representations presented simultaneously. Experimental data indicated that the beliefs adult non-educators held about their own learning needs matched learning outcomes. These participants endorsed simultaneous presentation of related diagrams for their own learning. Comparing learning from related diagrams presented simultaneously with learning from the same diagrams presented serially indicated that those in the simultaneous condition were able to create more complex mental models. A second experiment compared children's learning from related diagrams across four randomly-assigned conditions: serial, simultaneous, simultaneous with signaling, and simultaneous with structure mapping support.
Providing middle school students with simultaneous related diagrams with support for structure mapping led to a lessened reliance on surface features, and a better understanding of the science concepts presented. These findings suggest that presenting diagrams serially in an effort to reduce cognitive load may not be preferable for learning if making connections across representations, and by extension across science concepts, is desired. Instead, providing simultaneous diagrams with structure mapping support may result in greater attention to the salient relationships between related visual representations as well as between the representations and the science concepts they depict.
Jung, Wonmo; Bülthoff, Isabelle; Armann, Regine G M
2017-11-01
The brain can only attend to a fraction of all the information that is entering the visual system at any given moment. One way of overcoming the so-called bottleneck of selective attention (e.g., J. M. Wolfe, Võ, Evans, & Greene, 2011) is to make use of redundant visual information and extract summarized statistical information of the whole visual scene. Such ensemble representation occurs for low-level features of textures or simple objects, but it has also been reported for complex high-level properties. While the visual system has, for example, been shown to compute summary representations of facial expression, gender, or identity, it is less clear whether perceptual input from all parts of the visual field contributes equally to the ensemble percept. Here we extend the line of ensemble-representation research into the realm of race and look at the possibility that ensemble perception relies on weighting visual information differently depending on its origin from either the fovea or the visual periphery. We find that observers can judge the mean race of a set of faces, similar to judgments of mean emotion from faces and ensemble representations in low-level domains of visual processing. We also find that while peripheral faces seem to be taken into account for the ensemble percept, far more weight is given to stimuli presented foveally than peripherally. Whether this precision weighting of information stems from differences in the accuracy with which the visual system processes information across the visual field or from statistical inferences about the world needs to be determined by further research.
Do Knowledge-Component Models Need to Incorporate Representational Competencies?
ERIC Educational Resources Information Center
Rau, Martina Angela
2017-01-01
Traditional knowledge-component models describe students' content knowledge (e.g., their ability to carry out problem-solving procedures or their ability to reason about a concept). In many STEM domains, instruction uses multiple visual representations such as graphs, figures, and diagrams. The use of visual representations implies a…
Educating "The Simpsons": Teaching Queer Representations in Contemporary Visual Media
ERIC Educational Resources Information Center
Padva, Gilad
2008-01-01
This article analyzes queer representation in contemporary visual media and examines how the episode "Homer's Phobia" from Matt Groening's animation series "The Simpsons" can be used to deconstruct hetero- and homo-sexual codes of behavior, socialization, articulation, representation and visibility. The analysis is contextualized in the…
ERIC Educational Resources Information Center
Sedig, Kamran; Liang, Hai-Ning
2006-01-01
Computer-based mathematical cognitive tools (MCTs) are a category of external aids intended to support and enhance learning and cognitive processes of learners. MCTs often contain interactive visual mathematical representations (VMRs), where VMRs are graphical representations that encode properties and relationships of mathematical concepts. In…
Advances in visual representation of molecular potentials.
Du, Qi-Shi; Huang, Ri-Bo; Chou, Kuo-Chen
2010-06-01
The recent advances in visual representation of molecular properties in 3D space are summarized, and their applications in molecular modeling studies and rational drug design are introduced. The visual representation methods provide us with detailed insights into protein-ligand interactions, and hence can play a major role in elucidating the structure or reactivity of a biomolecular system. Three newly developed computation and visualization methods for studying the physical and chemical properties of molecules are introduced, including their electrostatic potential, lipophilicity potential and excess chemical potential. The newest application examples of visual representations in structure-based rational drug design are presented. The 3D electrostatic potentials, calculated using the empirical method (EM-ESP), in which the classical Coulomb equation and traditional atomic partial charges are discarded, are highly consistent with the results of higher-level quantum chemical methods. The 3D lipophilicity potentials, computed by the heuristic molecular lipophilicity potential method based on the principles of quantum mechanics and statistical mechanics, are more accurate and reliable than those obtained using traditional empirical methods. The 3D excess chemical potentials, derived by the reference interaction site model-hypernetted chain theory, provide a new tool for computational chemistry and molecular modeling. For structure-based drug design, the visual representations of molecular properties will play a significant role in practical applications. It is anticipated that the new advances in computational chemistry will stimulate the development of molecular modeling methods, further enriching the visual representation techniques for rational drug design, as well as other relevant fields in life science.
Visual Semiotics & Uncertainty Visualization: An Empirical Study.
MacEachren, A M; Roth, R E; O'Brien, J; Li, B; Swingley, D; Gahegan, M
2012-12-01
This paper presents two linked empirical studies focused on uncertainty visualization. The experiments are framed from two conceptual perspectives. First, a typology of uncertainty is used to delineate kinds of uncertainty matched with space, time, and attribute components of data. Second, concepts from visual semiotics are applied to characterize the kind of visual signification that is appropriate for representing those different categories of uncertainty. This framework guided the two experiments reported here. The first addresses representation intuitiveness, considering both visual variables and iconicity of representation. The second addresses relative performance of the most intuitive abstract and iconic representations of uncertainty on a map reading task. Combined results suggest initial guidelines for representing uncertainty and discussion focuses on practical applicability of results.
Early access to abstract representations in developing readers: evidence from masked priming.
Perea, Manuel; Mallouh, Reem Abu; Carreiras, Manuel
2013-07-01
A commonly shared assumption in the field of visual-word recognition is that retinotopic representations are rapidly converted into abstract representations. Here we examine the role of visual form vs. abstract representations during the early stages of word processing, as measured by masked priming, in young children (3rd and 6th Graders) and adult readers. To maximize the chances of detecting an effect of visual form, we employed a language with a very intricate orthography, Arabic. If visual form plays a role in the early stages of processing, greater benefit would be expected from related primes that have the same visual form (in terms of the ligation pattern between a word's letters) as the target word (e.g., [ktzb-ktAb]; note that the three initial letters are connected in prime and target) than for those that do not ([ktxb-ktAb]). Results showed that the magnitude of the priming effect relative to an unrelated condition was remarkably similar for both types of prime. Thus, despite the visual complexity of Arabic orthography, there is fast access to the abstract letter representations not only in adult readers but also in developing readers. © 2013 Blackwell Publishing Ltd.
The “Visual Shock” of Francis Bacon: an essay in neuroesthetics
Zeki, Semir; Ishizu, Tomohiro
2013-01-01
In this paper we discuss the work of Francis Bacon in the context of his declared aim of giving a “visual shock.” We explore what this means in terms of brain activity and what insights into the brain's visual perceptive system his work gives. We do so especially with reference to the representation of faces and bodies in the human visual brain. We discuss the evidence that shows that both these categories of stimuli have a very privileged status in visual perception, compared to the perception of other stimuli, including man-made artifacts such as houses, chairs, and cars. We show that viewing stimuli that depart significantly from a normal representation of faces and bodies entails a significant difference in the pattern of brain activation. We argue that Bacon succeeded in delivering his “visual shock” because he subverted the normal neural representation of faces and bodies, without at the same time subverting the representation of man-made artifacts. PMID:24339812
Effects of gravitational and optical stimulation on the perception of target elevation
NASA Technical Reports Server (NTRS)
Cohen, M. M.; Stoper, A. E.; Welch, R. B.; DeRoshia, C. W.
2001-01-01
To examine the combined effects of gravitational and optical stimulation on perceived target elevation, we independently altered gravitational-inertial force and both the orientation and the structure of a background visual array. While being exposed to 1.0, 1.5, or 2.0 Gz in the human centrifuge at NASA Ames Research Center, observers attempted to set a target to the apparent horizon. The target was viewed against the far wall of a box that was pitched at various angles. The box was brightly illuminated, had only its interior edges dimly illuminated, or was kept dark. Observers lowered their target settings as Gz was increased; this effect was weakened when the box was illuminated. Also, when the box was visible, settings were displaced in the same direction as that in which the box was pitched. We attribute our results to the combined influence of otolith-oculomotor mechanisms that underlie the elevator illusion and visual-oculomotor mechanisms (optostatic responses) that underlie the perceptual effects of viewing pitched visual arrays.
Cadieu, Charles F.; Hong, Ha; Yamins, Daniel L. K.; Pinto, Nicolas; Ardila, Diego; Solomon, Ethan A.; Majaj, Najib J.; DiCarlo, James J.
2014-01-01
The primate visual system achieves remarkable visual object recognition performance even in brief presentations, and under changes to object exemplar, geometric transformations, and background variation (a.k.a. core visual object recognition). This remarkable performance is mediated by the representation formed in inferior temporal (IT) cortex. In parallel, recent advances in machine learning have led to ever higher performing models of object recognition using artificial deep neural networks (DNNs). It remains unclear, however, whether the representational performance of DNNs rivals that of the brain. To accurately produce such a comparison, a major difficulty has been a unifying metric that accounts for experimental limitations, such as the amount of noise, the number of neural recording sites, and the number of trials, and computational limitations, such as the complexity of the decoding classifier and the number of classifier training examples. In this work, we perform a direct comparison that corrects for these experimental limitations and computational considerations. As part of our methodology, we propose an extension of “kernel analysis” that measures the generalization accuracy as a function of representational complexity. Our evaluations show that, unlike previous bio-inspired models, the latest DNNs rival the representational performance of IT cortex on this visual object recognition task. Furthermore, we show that models that perform well on measures of representational performance also perform well on measures of representational similarity to IT, and on measures of predicting individual IT multi-unit responses. Whether these DNNs rely on computational mechanisms similar to the primate visual system is yet to be determined, but, unlike all previous bio-inspired models, that possibility cannot be ruled out merely on representational performance grounds. PMID:25521294
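As a rough illustration of the kind of decoding analysis such comparisons rest on, the sketch below trains a simple nearest-centroid readout on synthetic "population responses" and reports generalization accuracy on held-out trials. The class structure, unit count, and noise level are all hypothetical; the study itself used a cross-validated kernel-analysis extension applied to IT recordings and DNN features.

```python
import numpy as np

rng = np.random.default_rng(0)

def decode_accuracy(train_X, train_y, test_X, test_y):
    """Nearest-centroid linear readout: a crude stand-in for the
    cross-validated classifiers used to compare representations."""
    classes = np.unique(train_y)
    centroids = np.stack([train_X[train_y == c].mean(axis=0) for c in classes])
    # Squared Euclidean distance from each test pattern to each centroid.
    d = ((test_X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    pred = classes[d.argmin(axis=1)]
    return (pred == test_y).mean()

# Synthetic "population responses": 3 object classes, 50 units,
# a class-specific mean pattern plus per-trial noise (all hypothetical).
means = rng.normal(size=(3, 50))

def sample(n_per_class, noise):
    X = np.concatenate([means[c] + noise * rng.normal(size=(n_per_class, 50))
                        for c in range(3)])
    y = np.repeat(np.arange(3), n_per_class)
    return X, y

train = sample(20, noise=1.0)
test = sample(20, noise=1.0)
print(decode_accuracy(*train, *test))  # well above the 1/3 chance level
```

Varying the noise level or the number of units in this sketch mimics the experimental limitations (recording noise, number of sites) that the paper's metric is designed to correct for.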
Poor Phonemic Discrimination Does Not Underlie Poor Verbal Short-Term Memory in Down Syndrome
ERIC Educational Resources Information Center
Purser, Harry R. M.; Jarrold, Christopher
2013-01-01
Individuals with Down syndrome tend to have a marked impairment of verbal short-term memory. The chief aim of this study was to investigate whether phonemic discrimination contributes to this deficit. The secondary aim was to investigate whether phonological representations are degraded in verbal short-term memory in people with Down syndrome…
Remembering Kinds: New Evidence that Categories are Privileged in Children's Thinking
ERIC Educational Resources Information Center
Cimpian, Andrei; Erickson, Lucy C.
2012-01-01
What are the representations and learning mechanisms that underlie conceptual development? The present research provides evidence in favor of the claim that this process is guided by an early-emerging predisposition to think and learn about abstract kinds. Specifically, three studies (N=192) demonstrated that 4- to 7-year-old children have better…
ERIC Educational Resources Information Center
Benjamin, Aaron S.
2010-01-01
It is widely assumed that older adults suffer a deficit in the psychological processes that underlie remembering of contextual or source information. This conclusion is based in large part on empirical interactions, including disordinal ones, that reveal differential effects of manipulations of memory strength on recognition in young and old…
The changing demographic, legal, and technological contexts of political representation.
Forest, Benjamin
2005-10-25
Three developments have created challenges for political representation in the U.S. and particularly for the use of territorially based representation (election by district). First, the demographic complexity of the U.S. population has grown both in absolute terms and in terms of residential patterns. Second, legal developments since the 1960s have recognized an increasing number of groups as eligible for voting rights protection. Third, the growing technical capacities of computer technology, particularly Geographic Information Systems, have allowed political parties and other organizations to create election districts with increasingly precise political and demographic characteristics. Scholars have made considerable progress in measuring and evaluating the racial and partisan biases of districting plans, and some states have tried to use Geographic Information Systems technology to produce more representative districts. However, case studies of Texas and Arizona illustrate that such analytic and technical advances have not overcome the basic contradictions that underlie the American system of territorial political representation.
Erlikhman, Gennady; Gurariy, Gennadiy; Mruczek, Ryan E.B.; Caplovitz, Gideon P.
2016-01-01
Oftentimes, objects are only partially and transiently visible as parts of them become occluded during observer or object motion. The visual system can integrate such object fragments across space and time into perceptual wholes or spatiotemporal objects. This integrative and dynamic process may involve both ventral and dorsal visual processing pathways, along which shape and spatial representations are thought to arise. We measured fMRI BOLD response to spatiotemporal objects and used multi-voxel pattern analysis (MVPA) to decode shape information across 20 topographic regions of visual cortex. Object identity could be decoded throughout visual cortex, including intermediate (V3A, V3B, hV4, LO1-2,) and dorsal (TO1-2, and IPS0-1) visual areas. Shape-specific information, therefore, may not be limited to early and ventral visual areas, particularly when it is dynamic and must be integrated. Contrary to the classic view that the representation of objects is the purview of the ventral stream, intermediate and dorsal areas may play a distinct and critical role in the construction of object representations across space and time. PMID:27033688
Visual Processing of Faces in Individuals with Fragile X Syndrome: An Eye Tracking Study
ERIC Educational Resources Information Center
Farzin, Faraz; Rivera, Susan M.; Hessl, David
2009-01-01
Gaze avoidance is a hallmark behavioral feature of fragile X syndrome (FXS), but little is known about whether abnormalities in the visual processing of faces, including disrupted autonomic reactivity, may underlie this behavior. Eye tracking was used to record fixations and pupil diameter while adolescents and young adults with FXS and sex- and…
Neural pathways for visual speech perception
Bernstein, Lynne E.; Liebenthal, Einat
2014-01-01
This paper examines the questions, what levels of speech can be perceived visually, and how is visual speech represented by the brain? Review of the literature leads to the conclusions that every level of psycholinguistic speech structure (i.e., phonetic features, phonemes, syllables, words, and prosody) can be perceived visually, although individuals differ in their abilities to do so; and that there are visual modality-specific representations of speech qua speech in higher-level vision brain areas. That is, the visual system represents the modal patterns of visual speech. The suggestion that the auditory speech pathway receives and represents visual speech is examined in light of neuroimaging evidence on the auditory speech pathways. We outline the generally agreed-upon organization of the visual ventral and dorsal pathways and examine several types of visual processing that might be related to speech through those pathways, specifically, face and body, orthography, and sign language processing. In this context, we examine the visual speech processing literature, which reveals widespread diverse patterns of activity in posterior temporal cortices in response to visual speech stimuli. We outline a model of the visual and auditory speech pathways and make several suggestions: (1) The visual perception of speech relies on visual pathway representations of speech qua speech. (2) A proposed site of these representations, the temporal visual speech area (TVSA) has been demonstrated in posterior temporal cortex, ventral and posterior to multisensory posterior superior temporal sulcus (pSTS). (3) Given that visual speech has dynamic and configural features, its representations in feedforward visual pathways are expected to integrate these features, possibly in TVSA. PMID:25520611
A review of visual memory capacity: Beyond individual items and towards structured representations
Brady, Timothy F.; Konkle, Talia; Alvarez, George A.
2012-01-01
Traditional memory research has focused on identifying separate memory systems and exploring different stages of memory processing. This approach has been valuable for establishing a taxonomy of memory systems and characterizing their function, but has been less informative about the nature of stored memory representations. Recent research on visual memory has shifted towards a representation-based emphasis, focusing on the contents of memory, and attempting to determine the format and structure of remembered information. The main thesis of this review will be that one cannot fully understand memory systems or memory processes without also determining the nature of memory representations. Nowhere is this connection more obvious than in research that attempts to measure the capacity of visual memory. We will review research on the capacity of visual working memory and visual long-term memory, highlighting recent work that emphasizes the contents of memory. This focus impacts not only how we estimate the capacity of the system - going beyond quantifying how many items can be remembered, and moving towards structured representations - but how we model memory systems and memory processes. PMID:21617025
Visual learning with reduced adaptation is eccentricity-specific.
Harris, Hila; Sagi, Dov
2018-01-12
Visual learning is known to be specific to the trained target location, showing little transfer to untrained locations. Recently, learning was shown to transfer across equal-eccentricity retinal locations when sensory adaptation due to repetitive stimulation was minimized. It was suggested that learning transfers to previously untrained locations when the learned representation is location invariant, with sensory adaptation introducing location-dependent representations, thus preventing transfer. Spatial invariance may also fail when the trained and tested locations are at different distances from the center of gaze (different retinal eccentricities), due to differences in the corresponding low-level cortical representations (e.g., the allocated cortical area decreases with eccentricity). Thus, if learning improves performance by better classifying target-dependent early visual representations, generalization is predicted to fail when locations of different retinal eccentricities are trained and tested in the absence of sensory adaptation. Here, using the texture discrimination task, we show specificity of learning across different retinal eccentricities (4-8°) using reduced adaptation training. The existence of generalization across equal-eccentricity locations but not across different eccentricities demonstrates that learning accesses visual representations preceding location-independent representations, with specificity of learning explained by inhomogeneous sensory representation.
Visual Perception of Force: Comment on White (2012)
ERIC Educational Resources Information Center
Hubbard, Timothy L.
2012-01-01
White (2012) proposed that kinematic features in a visual percept are matched to stored representations containing information regarding forces (based on prior haptic experience) and that information in the matched, stored representations regarding forces is then incorporated into visual perception. Although some elements of White's (2012) account…
ERIC Educational Resources Information Center
Wilder, Anna; Brinkerhoff, Jonathan
2007-01-01
This study assessed the effectiveness of computer-based biomolecular visualization activities on the development of high school biology students' representational competence as a means of understanding and visualizing protein structure/function relationships. Also assessed were students' attitudes toward these activities. Sixty-nine students…
Emerging category representation in the visual forebrain hierarchy of pigeons (Columba livia).
Azizi, Amir Hossein; Pusch, Roland; Koenen, Charlotte; Klatt, Sebastian; Bröcker, Franziska; Thiele, Samuel; Kellermann, Janosch; Güntürkün, Onur; Cheng, Sen
2018-06-06
Recognizing and categorizing visual stimuli are cognitive functions vital for survival, and an important feature of visual systems in primates as well as in birds. Visual stimuli are processed along the ventral visual pathway. At every stage in the hierarchy, neurons respond selectively to more complex features, transforming the population representation of the stimuli. It is therefore easier to read out category information in higher visual areas. While explicit category representations have been observed in the primate brain, less is known about equivalent processes in the avian brain. Even though their brain anatomies are radically different, it has been hypothesized that visual object representations are comparable across mammals and birds. In the present study, we investigated category representations in the pigeon visual forebrain using recordings from single cells responding to photographs of real-world objects. Using a linear classifier, we found that the population activity in the visual associative area mesopallium ventrolaterale (MVL) distinguishes between animate and inanimate objects, although this distinction is not required by the task. By contrast, a population of cells in the entopallium, a region that is lower in the hierarchy of visual areas and that is related to the primate extrastriate cortex, lacked this information. A model that pools responses of simple cells, which function as edge detectors, can account for the animate vs. inanimate categorization in the MVL, but performance in the model is based on different features than in MVL. Therefore, processing in MVL cells is very likely more abstract than simple computations on the output of edge detectors. Copyright © 2018. Published by Elsevier B.V.
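A minimal caricature of the pooled edge-detector model mentioned above might look as follows: rectified responses of two oriented difference filters are summed over the image, and a classifier would then operate on these pooled energies. The stimuli and filters here are hypothetical simplifications, not the model actually fitted in the paper.

```python
import numpy as np

def edge_energy(img):
    """Pool rectified responses of two simple oriented 'edge detectors'
    (horizontal and vertical difference filters) over the whole image."""
    gx = np.abs(np.diff(img, axis=1)).sum()  # energy of vertical edges
    gy = np.abs(np.diff(img, axis=0)).sum()  # energy of horizontal edges
    return np.array([gx, gy])

# Two hypothetical stimuli: a smooth luminance gradient and a
# high-contrast vertical grating of the same size.
smooth = np.tile(np.linspace(0.0, 1.0, 16), (16, 1))
grating = np.tile(np.array([0.0, 1.0] * 8), (16, 1))
print(edge_energy(smooth), edge_energy(grating))
```

The grating yields far more pooled vertical-edge energy than the gradient, so a threshold on these two numbers can separate such stimuli. The paper's point is that MVL's animate/inanimate distinction does not reduce to this kind of feature, even when a pooled-edge model reaches similar accuracy.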
Lighten the Load: Scaffolding Visual Literacy in Biochemistry and Molecular Biology.
Offerdahl, Erika G; Arneson, Jessie B; Byrne, Nicholas
2017-01-01
The development of scientific visual literacy has been identified as critical to the training of tomorrow's scientists and citizens alike. Within the context of the molecular life sciences in particular, visual representations frequently incorporate various components, such as discipline-specific graphical and diagrammatic features, varied levels of abstraction, and spatial arrangements of visual elements to convey information. Visual literacy is achieved when an individual understands the various ways in which a discipline uses these components to represent a particular way of knowing. Owing to the complex nature of visual representations, the activities through which visual literacy is developed have high cognitive load. Cognitive load can be reduced by first helping students to become fluent with the discrete components of visual representations before asking them to simultaneously integrate these components to extract the intended meaning of a representation. We present a taxonomy for characterizing one component of visual representations (the level of abstraction) as a first step in understanding the opportunities afforded students to develop fluency. Further, we demonstrate how our taxonomy can be used to analyze course assessments and spur discussions regarding the extent to which the development of visual literacy skills is supported by instruction within an undergraduate biochemistry curriculum. © 2017 E. G. Offerdahl et al.
ERIC Educational Resources Information Center
Taylor, Roger S.; Grundstrom, Erika D.
2011-01-01
Given that astronomy heavily relies on visual representations it is especially likely for individuals to assume that instructional materials, such as visual representations of the Earth-Moon system (EMS), would be relatively accurate. However, in our research, we found that images in middle-school textbooks and educational webpages were commonly…
ERIC Educational Resources Information Center
Cook, Michelle Patrick
2006-01-01
Visual representations are essential for communicating ideas in the science classroom; however, the design of such representations is not always beneficial for learners. This paper presents instructional design considerations providing empirical evidence and integrating theoretical concepts related to cognitive load. Learners have a limited…
The loss of short-term visual representations over time: decay or temporal distinctiveness?
Mercer, Tom
2014-12-01
There has been much recent interest in the loss of visual short-term memories over the passage of time. According to decay theory, visual representations are gradually forgotten as time passes, reflecting a slow and steady distortion of the memory trace. However, this is controversial and decay effects can be explained in other ways. The present experiment aimed to reexamine the maintenance and loss of visual information over the short term. Decay and temporal distinctiveness models were tested using a delayed discrimination task, in which participants compared complex and novel objects over unfilled retention intervals of variable length. Experiment 1 found no significant change in the accuracy of visual memory from 2 to 6 s, but the gap separating trials reliably influenced task performance. Experiment 2 found evidence for information loss at a 10-s retention interval, but temporally separating trials restored the fidelity of visual memory, possibly because temporally isolated representations are distinct from older memory traces. In conclusion, visual representations lose accuracy at some point after 6 s, but only within temporally crowded contexts. These findings highlight the importance of temporal distinctiveness within visual short-term memory.
Recent Advances in Visualizing 3D Flow with LIC
NASA Technical Reports Server (NTRS)
Interrante, Victoria; Grosch, Chester
1998-01-01
Line Integral Convolution (LIC), introduced by Cabral and Leedom in 1993, is an elegant and versatile technique for representing directional information via patterns of correlation in a texture. Although most commonly used to depict 2D flow, or flow over a surface in 3D, LIC methods can equivalently be used to portray 3D flow through a volume. However, the popularity of LIC as a device for illustrating 3D flow has historically been limited both by the computational expense of generating and rendering such a 3D texture and by the difficulties inherent in clearly and effectively conveying the directional information embodied in the volumetric output textures that are produced. In an earlier paper, we briefly discussed some of the factors that may underlie the perceptual difficulties that we can encounter with dense 3D displays and outlined several strategies for more effectively visualizing 3D flow with volume LIC. In this article, we review in more detail techniques for selectively emphasizing critical regions of interest in a flow and for facilitating the accurate perception of the 3D depth and orientation of overlapping streamlines, and we demonstrate new methods for efficiently incorporating an indication of orientation into a flow representation and for conveying additional information about related scalar quantities such as temperature or vorticity over a flow via subtle, continuous line width and color variations.
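For orientation, a minimal 2D LIC sketch in the spirit of the technique described above: each output pixel averages an input noise texture along its local streamline, traced forward and backward with fixed-step Euler integration and a box filter. Production implementations (including the volume LIC variants discussed in the article) use adaptive integration and better filters; this is illustrative only.

```python
import numpy as np

def lic(vx, vy, noise, length=10, step=0.5):
    """Minimal 2D Line Integral Convolution: average the noise texture
    along each pixel's streamline (forward and backward), with
    fixed-step Euler integration and toroidal wrap-around."""
    h, w = noise.shape
    out = np.zeros_like(noise)
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            for sign in (1.0, -1.0):
                px, py = float(x), float(y)
                for _ in range(length):
                    i, j = int(round(py)) % h, int(round(px)) % w
                    total += noise[i, j]
                    count += 1
                    v = np.hypot(vx[i, j], vy[i, j])
                    if v == 0:
                        break  # stop at critical points of the field
                    px += sign * step * vx[i, j] / v
                    py += sign * step * vy[i, j] / v
            out[y, x] = total / count
    return out

# Uniform horizontal flow: the output should show correlation (streaks)
# along rows, and averaging reduces the overall contrast of the texture.
rng = np.random.default_rng(1)
noise = rng.random((32, 32))
vx, vy = np.ones((32, 32)), np.zeros((32, 32))
tex = lic(vx, vy, noise)
```

Because convolution along the streamline is an averaging operation, the output's standard deviation is lower than the input noise's, which is one reason LIC images need contrast enhancement in practice.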
The Process of Probability Problem Solving: Use of External Visual Representations
ERIC Educational Resources Information Center
Zahner, Doris; Corter, James E.
2010-01-01
We investigate the role of external inscriptions, particularly those of a spatial or visual nature, in the solution of probability word problems. We define a taxonomy of external visual representations used in probability problem solving that includes "pictures," "spatial reorganization of the given information," "outcome listings," "contingency…
Rethinking Reader Response with Fifth Graders' Semiotic Interpretations
ERIC Educational Resources Information Center
Barone, Diane; Barone, Rebecca
2017-01-01
Fifth graders interpreted the book "Doll Bones" by Holly Black through visual representations from the beginning to the end of the book. Each visual representation was analyzed to determine how students responded. Most frequently, they moved to inferential ways of understanding. Students often visually interpreted emotional plot elements…
Visual Representation of Rational Belief Revision: Another Look at the Sleeping Beauty Problem
2014-10-29
…Retamero and Cokely, 2013). Visual representation is thought to facilitate performance by externalizing the set-subset relations among observational…
ERIC Educational Resources Information Center
López, Víctor; Pintó, Roser
2017-01-01
Computer simulations are often considered effective educational tools, since their visual and communicative power enable students to better understand physical systems and phenomena. However, previous studies have found that when students read visual representations some reading difficulties can arise, especially when these are complex or dynamic…
An Inquiry into the Nature of Uncle Joe's Representation and Meaning.
ERIC Educational Resources Information Center
Muffoletto, Robert
2001-01-01
Addresses a "critical" or "reflective" visual literacy. Situates visual representations and their interpretation (the construction of meaning) within a context that raises questions about benefit and power. Explores four main topics: the image as text; analysis and meaning construction; visual literacy as a liberatory practice;…
Converging Modalities Ground Abstract Categories: The Case of Politics
Farias, Ana Rita; Garrido, Margarida V.; Semin, Gün R.
2013-01-01
Three studies are reported examining the grounding of abstract concepts across two modalities (visual and auditory) and their symbolic representation. A comparison of the outcomes across these studies reveals that the symbolic representation of political concepts and their visual and auditory modalities is convergent. In other words, the spatial relationships between specific instances of the political categories are highly overlapping across the symbolic, visual and auditory modalities. These findings suggest that abstract categories display redundancy across modal and amodal representations, and are multimodal. PMID:23593360
Anticipatory Smooth Eye Movements in Autism Spectrum Disorder
Aitkin, Cordelia D.; Santos, Elio M.; Kowler, Eileen
2013-01-01
Smooth pursuit eye movements are important for vision because they maintain the line of sight on targets that move smoothly within the visual field. Smooth pursuit is driven by neural representations of motion, including a surprisingly strong influence of high-level signals representing expected motion. We studied anticipatory smooth eye movements (defined as smooth eye movements in the direction of expected future motion) produced by salient visual cues in a group of high-functioning observers with Autism Spectrum Disorder (ASD), a condition that has been associated with difficulties in either generating predictions, or translating predictions into effective motor commands. Eye movements were recorded while participants pursued the motion of a disc that moved within an outline drawing of an inverted Y-shaped tube. The cue to the motion path was a visual barrier that blocked the untraveled branch (right or left) of the tube. ASD participants showed strong anticipatory smooth eye movements whose velocity was the same as that of a group of neurotypical participants. Anticipatory smooth eye movements appeared on the very first cued trial, indicating that trial-by-trial learning was not responsible for the responses. These results are significant because they show that anticipatory capacities are intact in high-functioning ASD in cases where the cue to the motion path is highly salient and unambiguous. Once the ability to generate anticipatory pursuit is demonstrated, the study of the anticipatory responses with a variety of types of cues provides a window into the perceptual or cognitive processes that underlie the interpretation of events in natural environments or social situations. PMID:24376667
Charles Bonnet Syndrome: Evidence for a Generative Model in the Cortex?
Reichert, David P.; Seriès, Peggy; Storkey, Amos J.
2013-01-01
Several theories propose that the cortex implements an internal model to explain, predict, and learn about sensory data, but the nature of this model is unclear. One condition that could be highly informative here is Charles Bonnet syndrome (CBS), where loss of vision leads to complex, vivid visual hallucinations of objects, people, and whole scenes. CBS could be taken as indication that there is a generative model in the brain, specifically one that can synthesise rich, consistent visual representations even in the absence of actual visual input. The processes that lead to CBS are poorly understood. Here, we argue that a model recently introduced in machine learning, the deep Boltzmann machine (DBM), could capture the relevant aspects of (hypothetical) generative processing in the cortex. The DBM carries both the semantics of a probabilistic generative model and of a neural network. The latter allows us to model a concrete neural mechanism that could underlie CBS, namely, homeostatic regulation of neuronal activity. We show that homeostatic plasticity could serve to make the learnt internal model robust against e.g. degradation of sensory input, but overcompensate in the case of CBS, leading to hallucinations. We demonstrate how a wide range of features of CBS can be explained in the model and suggest a potential role for the neuromodulator acetylcholine. This work constitutes the first concrete computational model of CBS and the first application of the DBM as a model in computational neuroscience. Our results lend further credence to the hypothesis of a generative model in the brain. PMID:23874177
The effect of emergent features on judgments of quantity in configural and separable displays.
Peebles, David
2008-06-01
Two experiments investigated effects of emergent features on perceptual judgments of comparative magnitude in three diagrammatic representations: kiviat charts, bar graphs, and line graphs. Experiment 1 required participants to compare individual values; whereas in Experiment 2 participants had to integrate several values to produce a global comparison. In Experiment 1, emergent features of the diagrams resulted in significant distortions of magnitude judgments, each related to a common geometric illusion. Emergent features are also widely believed to underlie the general superiority of configural displays, such as kiviat charts, for tasks requiring the integration of information. Experiment 2 tested the extent of this benefit using diagrams with a wide range of values. Contrary to the results of previous studies, the configural display produced the poorest performance compared to the more separable displays. Moreover, the pattern of responses suggests that kiviat users switched from an integration strategy to a sequential one depending on the shape of the diagram. The experiments demonstrate the powerful interaction between emergent visual properties and cognition and reveal limits to the benefits of configural displays for integration tasks. (c) 2008 APA, all rights reserved
Feature-Selective Attentional Modulations in Human Frontoparietal Cortex.
Ester, Edward F; Sutterer, David W; Serences, John T; Awh, Edward
2016-08-03
Control over visual selection has long been framed in terms of a dichotomy between "source" and "site," where top-down feedback signals originating in frontoparietal cortical areas modulate or bias sensory processing in posterior visual areas. This distinction is motivated in part by observations that frontoparietal cortical areas encode task-level variables (e.g., what stimulus is currently relevant or what motor outputs are appropriate), while posterior sensory areas encode continuous or analog feature representations. Here, we present evidence that challenges this distinction. We used fMRI, a roving searchlight analysis, and an inverted encoding model to examine representations of an elementary feature property (orientation) across the entire human cortical sheet while participants attended either the orientation or luminance of a peripheral grating. Orientation-selective representations were present in a multitude of visual, parietal, and prefrontal cortical areas, including portions of the medial occipital cortex, the lateral parietal cortex, and the superior precentral sulcus (thought to contain the human homolog of the macaque frontal eye fields). Additionally, representations in many-but not all-of these regions were stronger when participants were instructed to attend orientation relative to luminance. Collectively, these findings challenge models that posit a strict segregation between sources and sites of attentional control on the basis of representational properties by demonstrating that simple feature values are encoded by cortical regions throughout the visual processing hierarchy, and that representations in many of these areas are modulated by attention. Influential models of visual attention posit a distinction between top-down control and bottom-up sensory processing networks. 
These models are motivated in part by demonstrations showing that frontoparietal cortical areas associated with top-down control represent abstract or categorical stimulus information, while visual areas encode parametric feature information. Here, we show that multivariate activity in human visual, parietal, and frontal cortical areas encode representations of a simple feature property (orientation). Moreover, representations in several (though not all) of these areas were modulated by feature-based attention in a similar fashion. These results provide an important challenge to models that posit dissociable top-down control and sensory processing networks on the basis of representational properties. Copyright © 2016 the authors 0270-6474/16/368188-12$15.00/0.
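The two-step inverted encoding model (IEM) used in the study above can be illustrated with a minimal sketch. This is a simplified toy version with synthetic data, not the authors' analysis pipeline: voxel responses are modeled as a weighted sum of idealized orientation channels, weights are estimated by least squares on training data, and the weight matrix is then inverted to recover channel response profiles from held-out data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_channels, n_trials = 50, 6, 120

centers = np.linspace(0, 180, n_channels, endpoint=False)  # channel centers (deg)

def channel_responses(oris):
    # Idealized orientation tuning: rectified cosine raised to a power,
    # with the angle doubled so tuning wraps at 180 degrees.
    d = np.deg2rad(2 * (np.asarray(oris)[:, None] - centers[None, :]))
    return np.maximum(np.cos(d), 0.0) ** 5

# Synthetic training data: B1 = C1 @ W_true + noise
oris_train = rng.uniform(0, 180, n_trials)
C1 = channel_responses(oris_train)                 # trials x channels
W_true = rng.normal(size=(n_channels, n_voxels))   # channels x voxels
B1 = C1 @ W_true + 0.1 * rng.normal(size=(n_trials, n_voxels))

# Step 1: estimate channel-to-voxel weights by least squares (B1 ~ C1 @ W)
W_hat, *_ = np.linalg.lstsq(C1, B1, rcond=None)

# Step 2: invert the model on held-out responses to recover channel profiles
oris_test = np.array([30.0, 90.0, 150.0])
B2 = channel_responses(oris_test) @ W_true
C2_hat = B2 @ np.linalg.pinv(W_hat)                # trials x channels

# Decode each test trial as the center of its peak channel
decoded = centers[np.argmax(C2_hat, axis=1)]
```

With low noise and test orientations placed at channel centers, `decoded` recovers 30, 90, and 150 degrees; the strength of the recovered channel profile is the kind of quantity compared across attention conditions in the study.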
NASA Astrophysics Data System (ADS)
Allen, Emily Christine
Mental models for scientific learning are often defined as, "cognitive tools situated between experiments and theories" (Duschl & Grandy, 2012). In learning, these cognitive tools are used to not only take in new information, but to help problem solve in new contexts. Nancy Nersessian (2008) describes a mental model as being "[loosely] characterized as a representation of a system with interactive parts with representations of those interactions. Models can be qualitative, quantitative, and/or simulative (mental, physical, computational)" (p. 63). If conceptual parts used by the students in science education are inaccurate, then the resulting model will not be useful. Students in college general chemistry courses are presented with multiple abstract topics and often struggle to fit these parts into complete models. This is especially true for topics that are founded on quantum concepts, such as atomic structure and molecular bonding taught in college general chemistry. The objectives of this study were focused on how students use visual tools introduced during instruction to reason with atomic and molecular structure, what misconceptions may be associated with these visual tools, and how visual modeling skills may be taught to support students' use of visual tools for reasoning. The research questions for this study follow from Gilbert's (2008) theory that experts use multiple representations when reasoning and modeling a system, and Kozma and Russell's (2005) theory of representational competence levels. This study finds that as students developed greater command of their understanding of abstract quantum concepts, they spontaneously provided additional representations to describe their more sophisticated models of atomic and molecular structure during interviews. 
This suggests that when visual modeling with multiple representations is taught, along with the limitations of the representations, it can assist students in the development of models for reasoning about abstract topics such as atomic and molecular structure. There is further gain if students' difficulties with these representations are targeted through the use of additional instruction, such as a workbook that requires the students to exercise their visual modeling skills.
Invariant Visual Object and Face Recognition: Neural and Computational Bases, and a Model, VisNet
Rolls, Edmund T.
2012-01-01
Neurophysiological evidence for invariant representations of objects and faces in the primate inferior temporal visual cortex is described. Then a computational approach to how invariant representations are formed in the brain is described that builds on the neurophysiology. A feature hierarchy model in which invariant representations can be built by self-organizing learning based on the temporal and spatial statistics of the visual input produced by objects as they transform in the world is described. VisNet can use temporal continuity in an associative synaptic learning rule with a short-term memory trace, and/or it can use spatial continuity in continuous spatial transformation learning which does not require a temporal trace. The model of visual processing in the ventral cortical stream can build representations of objects that are invariant with respect to translation, view, size, and also lighting. The model has been extended to provide an account of invariant representations in the dorsal visual system of the global motion produced by objects such as looming, rotation, and object-based movement. The model has been extended to incorporate top-down feedback connections to model the control of attention by biased competition in, for example, spatial and object search tasks. The approach has also been extended to account for how the visual system can select single objects in complex visual scenes, and how multiple objects can be represented in a scene. The approach has also been extended to provide, with an additional layer, for the development of representations of spatial scenes of the type found in the hippocampus. PMID:22723777
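The trace learning rule central to VisNet can be sketched compactly. The toy version below is an assumption-laden illustration (toy binary inputs, arbitrary decay and learning-rate values), not the published VisNet implementation: a short-term memory trace of postsynaptic activity gates a Hebbian update, so weights come to associate temporally adjacent views of the same object.

```python
import numpy as np

rng = np.random.default_rng(1)

n_inputs = 20
alpha, eta = 0.8, 0.1  # trace decay and learning rate (assumed values)

# Two toy "objects", each seen as a sequence of 5 transformed views
object_views = [(rng.random((5, n_inputs)) < 0.3).astype(float) for _ in range(2)]

w = rng.random(n_inputs) * 0.01
for views in object_views:
    trace = 0.0                          # reset the trace between objects
    for x in views:
        y = w @ x                        # postsynaptic activation
        trace = alpha * trace + (1 - alpha) * y  # short-term memory trace
        w = w + eta * trace * x          # Hebbian update gated by the trace
        w = w / np.linalg.norm(w)        # normalization keeps weights bounded
```

Because the trace carries activity across successive views, features that co-occur across transformations of one object are strengthened together, which is the mechanism the abstract describes for building transform-invariant representations.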
A description of discrete internal representation schemes for visual pattern discrimination.
Foster, D H
1980-01-01
A general description of a class of schemes for pattern vision is outlined in which the visual system is assumed to form a discrete internal representation of the stimulus. These representations are discrete in that they are considered to comprise finite combinations of "components" which are selected from a fixed and finite repertoire, and which designate certain simple pattern properties or features. In the proposed description it is supposed that the construction of an internal representation is a probabilistic process. A relationship is then formulated associating the probability density functions governing this construction and performance in visually discriminating patterns when differences in pattern shape are small. Some questions related to the application of this relationship to the experimental investigation of discrete internal representations are briefly discussed.
Illusions of having small or large invisible bodies influence visual perception of object size
van der Hoort, Björn; Ehrsson, H. Henrik
2016-01-01
The size of our body influences the perceived size of the world so that objects appear larger to children than to adults. The mechanisms underlying this effect remain unclear. It has been difficult to dissociate visual rescaling of the external environment based on an individual’s visible body from visual rescaling based on a central multisensory body representation. To differentiate these potential causal mechanisms, we manipulated body representation without a visible body by taking advantage of recent developments in body representation research. Participants experienced the illusion of having a small or large invisible body while object-size perception was tested. Our findings show that the perceived size of test-objects was determined by the size of the invisible body (inverse relation), and by the strength of the invisible body illusion. These findings demonstrate how central body representation directly influences visual size perception, without the need for a visible body, by rescaling the spatial representation of the environment. PMID:27708344
Multiple Sensory-Motor Pathways Lead to Coordinated Visual Attention
Yu, Chen; Smith, Linda B.
2016-01-01
Joint attention has been extensively studied in the developmental literature because of overwhelming evidence that the ability to socially coordinate visual attention to an object is essential to healthy developmental outcomes, including language learning. The goal of the present study is to understand the complex system of sensory-motor behaviors that may underlie the establishment of joint attention between parents and toddlers. In an experimental task, parents and toddlers played together with multiple toys. We objectively measured joint attention – and the sensory-motor behaviors that underlie it – using a dual head-mounted eye-tracking system and frame-by-frame coding of manual actions. By tracking the momentary visual fixations and hand actions of each participant, we precisely determined just how often they fixated on the same object at the same time, the visual behaviors that preceded joint attention, and manual behaviors that preceded and co-occurred with joint attention. We found that multiple sequential sensory-motor patterns lead to joint attention. In addition, there are developmental changes in this multi-pathway system evidenced as variations in strength among multiple routes. We propose that coordinated visual attention between parents and toddlers is primarily a sensory-motor behavior. Skill in achieving coordinated visual attention in social settings – like skills in other sensory-motor domains – emerges from multiple pathways to the same functional end. PMID:27016038
Arcaro, Michael J; Honey, Christopher J; Mruczek, Ryan EB; Kastner, Sabine; Hasson, Uri
2015-01-01
The human visual system can be divided into over two-dozen distinct areas, each of which contains a topographic map of the visual field. A fundamental question in vision neuroscience is how the visual system integrates information from the environment across different areas. Using neuroimaging, we investigated the spatial pattern of correlated BOLD signal across eight visual areas on data collected during rest conditions and during naturalistic movie viewing. The correlation pattern between areas reflected the underlying receptive field organization with higher correlations between cortical sites containing overlapping representations of visual space. In addition, the correlation pattern reflected the underlying widespread eccentricity organization of visual cortex, in which the highest correlations were observed for cortical sites with iso-eccentricity representations including regions with non-overlapping representations of visual space. This eccentricity-based correlation pattern appears to be part of an intrinsic functional architecture that supports the integration of information across functionally specialized visual areas. DOI: http://dx.doi.org/10.7554/eLife.03952.001 PMID:25695154
A Novel Locally Linear KNN Method With Applications to Visual Recognition.
Liu, Qingfeng; Liu, Chengjun
2017-09-01
A locally linear K Nearest Neighbor (LLK) method is presented in this paper with applications to robust visual recognition. Specifically, the concept of an ideal representation is first presented, which improves upon the traditional sparse representation in many ways. The objective function based on a host of criteria for sparsity, locality, and reconstruction is then optimized to derive a novel representation, which is an approximation to the ideal representation. The novel representation is further processed by two classifiers, namely, an LLK-based classifier and a locally linear nearest mean-based classifier, for visual recognition. The proposed classifiers are shown to connect to the Bayes decision rule for minimum error. Additional new theoretical analysis is presented, such as the nonnegative constraint, the group regularization, and the computational efficiency of the proposed LLK method. New methods such as a shifted power transformation for improving reliability, a coefficients' truncating method for enhancing generalization, and an improved marginal Fisher analysis method for feature extraction are proposed to further improve visual recognition performance. Extensive experiments are implemented to evaluate the proposed LLK method for robust visual recognition. In particular, eight representative data sets are applied for assessing the performance of the LLK method for various visual recognition applications, such as action recognition, scene recognition, object recognition, and face recognition.
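The core locally-linear-KNN idea can be sketched in a few lines. This is a simplified illustration, not the paper's exact objective (which adds sparsity and locality regularizers and further processing): a test sample is reconstructed by least squares from its k nearest training neighbors, and the class whose neighbors best reconstruct it wins.

```python
import numpy as np

def llk_classify(X_train, y_train, x, k=5):
    """Classify x by locally linear reconstruction from its k nearest neighbors."""
    d = np.linalg.norm(X_train - x, axis=1)      # distances to training points
    idx = np.argsort(d)[:k]
    N, labels = X_train[idx], y_train[idx]       # k x dim neighbors, their labels
    # Least-squares coefficients reconstructing x from its neighbors
    coef, *_ = np.linalg.lstsq(N.T, x, rcond=None)
    # Compare per-class reconstruction residuals; smallest residual wins
    best_cls, best_res = None, np.inf
    for cls in np.unique(labels):
        mask = labels == cls
        res = np.linalg.norm(x - N[mask].T @ coef[mask])
        if res < best_res:
            best_cls, best_res = cls, res
    return best_cls

# Toy usage: two well-separated 2D Gaussian clusters
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
pred = llk_classify(X, y, np.array([2.9, 3.1]))
```

A point near the second cluster is reconstructed almost entirely by class-1 neighbors, so the class-1 residual is smallest and the sketch returns label 1.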
ERIC Educational Resources Information Center
Wei, Liew Tze; Sazilah, Salam
2012-01-01
This study investigated the effects of visual cues in multiple external representations (MER) environment on the learning performance of novices' program comprehension. Program codes and flowchart diagrams were used as dual representations in multimedia environment to deliver lessons on C-Programming. 17 field independent participants and 16 field…
ERIC Educational Resources Information Center
Rau, Martina A.
2018-01-01
To learn content knowledge in science, technology, engineering, and math domains, students need to make connections among visual representations. This article considers two kinds of connection-making skills: (1) "sense-making skills" that allow students to verbally explain mappings among representations and (2) "perceptual…
ERIC Educational Resources Information Center
Patron, Emelie; Wikman, Susanne; Edfors, Inger; Johansson-Cederblad, Brita; Linder, Cedric
2017-01-01
Visual representations are essential for communication and meaning-making in chemistry, and thus the representational practices play a vital role in the teaching and learning of chemistry. One powerful contemporary model of classroom learning, the variation theory of learning, posits that the way an object of learning gets handled is another vital…
A survey of visual preprocessing and shape representation techniques
NASA Technical Reports Server (NTRS)
Olshausen, Bruno A.
1988-01-01
Many recent theories and methods proposed for visual preprocessing and shape representation are summarized. The survey brings together research from the fields of biology, psychology, computer science, electrical engineering, and most recently, neural networks. It was motivated by the need to preprocess images for a sparse distributed memory (SDM), but the techniques presented may also prove useful for applying other associative memories to visual pattern recognition. The material of this survey is divided into three sections: an overview of biological visual processing; methods of preprocessing (extracting parts of shape, texture, motion, and depth); and shape representation and recognition (form invariance, primitives and structural descriptions, and theories of attention).
An overview of 3D software visualization.
Teyseyre, Alfredo R; Campo, Marcelo R
2009-01-01
Software visualization studies techniques and methods for graphically representing different aspects of software. Its main goal is to enhance, simplify and clarify the mental representation a software engineer has of a computer system. For many years, visualization in 2D space has been actively studied, but in the last decade researchers have begun to explore new 3D representations for visualizing software. In this article, we present an overview of current research in the area, describing several major aspects: visual representations, interaction issues, evaluation methods and development tools. We also survey some representative tools that support different tasks, e.g., software maintenance and comprehension, requirements validation and algorithm animation for educational purposes, among others. Finally, we conclude by identifying future research directions.
Visual Learning in Application of Integration
NASA Astrophysics Data System (ADS)
Bt Shafie, Afza; Barnachea Janier, Josefina; Bt Wan Ahmad, Wan Fatimah
Innovative use of technology can improve the way Mathematics is taught. It can enhance students' learning of concepts through visualization. Visualization in Mathematics refers to the use of texts, pictures, graphs and animations to hold the attention of learners in order to learn the concepts. This paper describes the use of a developed multimedia courseware as an effective tool for visual learning of mathematics. The focus is on the application of integration, a topic in Engineering Mathematics 2. The course is offered to foundation students at Universiti Teknologi PETRONAS. A questionnaire was distributed to get feedback on the visual representation and students' attitudes towards using visual representation as a learning tool. The questionnaire consists of 3 sections: courseware design (Part A), courseware usability (Part B) and attitudes towards using the courseware (Part C). The results showed that the use of visual representation benefited students in learning the topic.
Visualizing the ground motions of the 1906 San Francisco earthquake
Chourasia, A.; Cutchin, S.; Aagaard, Brad T.
2008-01-01
With advances in computational capabilities and refinement of seismic wave-propagation models in the past decade, large three-dimensional simulations of earthquake ground motion have become possible. The resulting datasets from these simulations are multivariate, temporal and multi-terabyte in size. Past visual representations of results from seismic studies have been largely confined to static two-dimensional maps. New visual representations provide scientists with alternate ways of viewing and interacting with these results, potentially leading to new and significant insight into the physical phenomena. Visualizations can also be used for pedagogic and general dissemination purposes. We present a workflow for visual representation of the data from a ground motion simulation of the great 1906 San Francisco earthquake. We have employed state-of-the-art animation tools for visualization of the ground motions with a high degree of accuracy and visual realism. © 2008 Elsevier Ltd.
Kuhl, Brice A.; Rissman, Jesse; Wagner, Anthony D.
2012-01-01
Successful encoding of episodic memories is thought to depend on contributions from prefrontal and temporal lobe structures. Neural processes that contribute to successful encoding have been extensively explored through univariate analyses of neuroimaging data that compare mean activity levels elicited during the encoding of events that are subsequently remembered vs. those subsequently forgotten. Here, we applied pattern classification to fMRI data to assess the degree to which distributed patterns of activity within prefrontal and temporal lobe structures elicited during the encoding of word-image pairs were diagnostic of the visual category (Face or Scene) of the encoded image. We then assessed whether representation of category information was predictive of subsequent memory. Classification analyses indicated that temporal lobe structures contained information robustly diagnostic of visual category. Information in prefrontal cortex was less diagnostic of visual category, but was nonetheless associated with highly reliable classifier-based evidence for category representation. Critically, trials associated with greater classifier-based estimates of category representation in temporal and prefrontal regions were associated with a higher probability of subsequent remembering. Finally, consideration of trial-by-trial variance in classifier-based measures of category representation revealed positive correlations between prefrontal and temporal lobe representations, with the strength of these correlations varying as a function of the category of image being encoded. Together, these results indicate that multi-voxel representations of encoded information can provide unique insights into how visual experiences are transformed into episodic memories. PMID:21925190
The Role of Visual Representations for Structuring Classroom Mathematical Activity
ERIC Educational Resources Information Center
David, Maria Manuela; Tomaz, Vanessa Sena
2012-01-01
It is our presupposition that there is still a need for more research about how classroom practices can exploit the use and power of visualization in mathematics education. The aim of this article is to contribute in this direction, investigating how visual representations can structure geometry activity in the classroom and discussing teaching…
The Nature of Experience Determines Object Representations in the Visual System
ERIC Educational Resources Information Center
Wong, Yetta K.; Folstein, Jonathan R.; Gauthier, Isabel
2012-01-01
Visual perceptual learning (PL) and perceptual expertise (PE) traditionally lead to different training effects and recruit different brain areas, but reasons for these differences are largely unknown. Here, we tested how the learning history influences visual object representations. Two groups were trained with tasks typically used in PL or PE…
ERIC Educational Resources Information Center
Rhodes, Sinead M.; Donaldson, David I.
2007-01-01
Episodic memory depends upon multiple dissociable retrieval processes. Here we investigated the degree to which the processes engaged during successful retrieval are dependent on the properties of the representations that underlie memory for an event. Specifically we examined whether the individual elements of an event can, under some conditions,…
Global Neural Pattern Similarity as a Common Basis for Categorization and Recognition Memory
Xue, Gui; Love, Bradley C.; Preston, Alison R.; Poldrack, Russell A.
2014-01-01
Familiarity, or memory strength, is a central construct in models of cognition. In previous categorization and long-term memory research, correlations have been found between psychological measures of memory strength and activation in the medial temporal lobes (MTLs), which suggests a common neural locus for memory strength. However, activation alone is insufficient for determining whether the same mechanisms underlie neural function across domains. Guided by mathematical models of categorization and long-term memory, we develop a theory and a method to test whether memory strength arises from the global similarity among neural representations. In human subjects, we find significant correlations between global similarity among activation patterns in the MTLs and both subsequent memory confidence in a recognition memory task and model-based measures of memory strength in a category learning task. Our work bridges formal cognitive theories and neuroscientific models by illustrating that the same global similarity computations underlie processing in multiple cognitive domains. Moreover, by establishing a link between neural similarity and psychological memory strength, our findings suggest that there may be an isomorphism between psychological and neural representational spaces that can be exploited to test cognitive theories at both the neural and behavioral levels. PMID:24872552
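The global-similarity computation described above, scoring each item by its mean similarity to the activation patterns of all other items, can be sketched on synthetic data. The patterns, dimensions, and the engineered "strong" item below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic activation patterns: 20 items x 100 voxels.
patterns = rng.normal(0, 1, (20, 100))
# Make item 0 a high-strength item: similar to several others.
patterns[0] = patterns[1:6].mean(axis=0) + rng.normal(0, 0.2, 100)

corr = np.corrcoef(patterns)                  # item-by-item correlation matrix
np.fill_diagonal(corr, np.nan)                # exclude self-similarity
global_similarity = np.nanmean(corr, axis=1)  # one global-similarity score per item

# The engineered item should have the highest global similarity.
print(global_similarity.argmax())
```

In the study's framework, these per-item scores would then be correlated with behavioral memory-strength measures such as recognition confidence.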
Drew, Trafton; Boettcher, Sage E P; Wolfe, Jeremy M
2016-02-01
In "hybrid search" tasks, such as finding items on a grocery list, one must search the scene for targets while also searching the list in memory. How is the representation of a visual item compared with the representations of items in the memory set? Predominant theories would propose a role for visual working memory (VWM) either as the site of the comparison or as a conduit between visual and memory systems. In seven experiments, we loaded VWM in different ways and found little or no effect on hybrid search performance. However, the presence of a hybrid search task did reduce the measured capacity of VWM by a constant amount regardless of the size of the memory or visual sets. These data are broadly consistent with an account in which VWM must dedicate a fixed amount of its capacity to passing visual representations to long-term memory for comparison to the items in the memory set. The data cast doubt on models in which the search template resides in VWM or where memory set item representations are moved from LTM through VWM to earlier areas for comparison to visual items.
Invariant visual object recognition: a model, with lighting invariance.
Rolls, Edmund T; Stringer, Simon M
2006-01-01
How are invariant representations of objects formed in the visual cortex? We describe a neurophysiological and computational approach which focusses on a feature hierarchy model in which invariant representations can be built by self-organizing learning based on the statistics of the visual input. The model can use temporal continuity in an associative synaptic learning rule with a short term memory trace, and/or it can use spatial continuity in Continuous Transformation learning. The model of visual processing in the ventral cortical stream can build representations of objects that are invariant with respect to translation, view, size, and in this paper we show also lighting. The model has been extended to provide an account of invariant representations in the dorsal visual system of the global motion produced by objects such as looming, rotation, and object-based movement. The model has been extended to incorporate top-down feedback connections to model the control of attention by biased competition in for example spatial and object search tasks. The model has also been extended to account for how the visual system can select single objects in complex visual scenes, and how multiple objects can be represented in a scene.
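The associative learning rule with a short-term memory trace mentioned above can be sketched as follows. This is a toy, Földiák-style formulation assumed for illustration, not the authors' exact implementation: a decaying trace of the neuron's recent activity gates the Hebbian update, so temporally adjacent views of the same object come to drive the same neuron.

```python
import numpy as np

def trace_rule_update(w, x_seq, alpha=0.05, eta=0.8):
    """One pass of a trace-style learning rule over a temporal sequence
    of input vectors presented to a single output neuron."""
    trace = 0.0
    for x in x_seq:
        y = float(w @ x)                      # postsynaptic activation
        trace = (1 - eta) * y + eta * trace   # short-term memory trace
        w = w + alpha * trace * x             # Hebbian update gated by the trace
    return w

rng = np.random.default_rng(2)
# Two "views" of the same object seen in close temporal succession;
# they share only the third input feature.
view_a = np.array([1.0, 0.0, 1.0, 0.0])
view_b = np.array([0.0, 1.0, 1.0, 0.0])
w = rng.uniform(0.0, 0.1, 4)
for _ in range(20):
    w = trace_rule_update(w, [view_a, view_b])

resp_a, resp_b = w @ view_a, w @ view_b
print(resp_a, resp_b)
```

A full model would add competition and weight normalization within each layer of the feature hierarchy; this sketch only shows how the trace ties temporally contiguous transforms together.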
Bag-of-visual-ngrams for histopathology image classification
NASA Astrophysics Data System (ADS)
López-Monroy, A. Pastor; Montes-y-Gómez, Manuel; Escalante, Hugo Jair; Cruz-Roa, Angel; González, Fabio A.
2013-11-01
This paper describes an extension of the Bag-of-Visual-Words (BoVW) representation for image categorization (IC) of histopathology images. This representation is one of the most widely used approaches in several high-level computer vision tasks. However, the BoVW representation has an important limitation: it disregards spatial information among visual words. This information may be useful to capture discriminative visual patterns in specific computer vision tasks. In order to overcome this problem, we propose the use of visual n-grams. N-gram-based representations are very popular in the field of natural language processing (NLP), in particular within text mining and information retrieval. We propose building a codebook of n-grams and then representing images by histograms of visual n-grams. We evaluate our proposal on the challenging task of classifying histopathology images. The novelty of our proposal lies in the fact that we use n-grams as attributes for a classification model (together with visual words, i.e., 1-grams). This is common practice within NLP although, to the best of our knowledge, this idea has not yet been explored within computer vision. We report experimental results on a database of histopathology images where our proposed method outperforms the traditional BoVW formulation.
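The core representation step, counting visual n-grams over the words assigned to an image's patches, can be sketched as below. The patch scan order, word ids, and vocabulary are invented for illustration, and codebook construction (e.g., k-means clustering of local descriptors) is assumed to have already happened.

```python
from collections import Counter

def visual_ngram_histogram(word_seq, vocab, n_max=2):
    """Represent an image as a histogram over visual 1-grams and 2-grams,
    given the sequence of visual-word ids assigned to its patches
    (e.g., scanned in raster order)."""
    counts = Counter()
    for n in range(1, n_max + 1):
        for i in range(len(word_seq) - n + 1):
            counts[tuple(word_seq[i:i + n])] += 1
    # Fixed-length vector over the precomputed n-gram vocabulary.
    return [counts[g] for g in vocab]

# Toy example: two images described by patch-level visual words 0..2.
img1 = [0, 1, 2, 1]
img2 = [2, 2, 0, 1]
vocab = [(0,), (1,), (2,), (0, 1), (1, 2), (2, 1), (2, 2), (2, 0)]
h1 = visual_ngram_histogram(img1, vocab)
h2 = visual_ngram_histogram(img2, vocab)
print(h1)  # [1, 2, 1, 1, 1, 1, 0, 0]
print(h2)  # [1, 1, 2, 1, 0, 0, 1, 1]
```

The resulting histograms would then feed a standard classifier; note how the bigram counts separate the two images even though their unigram (plain BoVW) counts overlap heavily.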
Fox, Christopher J; Barton, Jason J S
2007-01-05
The neural representation of facial expression within the human visual system is not well defined. Using an adaptation paradigm, we examined aftereffects on expression perception produced by various stimuli. Adapting to a face, which was used to create morphs between two expressions, substantially biased expression perception within the morphed faces away from the adapting expression. This adaptation was not based on low-level image properties, as a different image of the same person displaying that expression produced equally robust aftereffects. Smaller but significant aftereffects were generated by images of different individuals, irrespective of gender. Non-face visual, auditory, or verbal representations of emotion did not generate significant aftereffects. These results suggest that adaptation affects at least two neural representations of expression: one specific to the individual (not the image), and one that represents expression across different facial identities. The identity-independent aftereffect suggests the existence of a 'visual semantic' for facial expression in the human visual system.
Learning Enhances Sensory and Multiple Non-sensory Representations in Primary Visual Cortex
Poort, Jasper; Khan, Adil G.; Pachitariu, Marius; Nemri, Abdellatif; Orsolic, Ivana; Krupic, Julija; Bauza, Marius; Sahani, Maneesh; Keller, Georg B.; Mrsic-Flogel, Thomas D.; Hofer, Sonja B.
2015-01-01
We determined how learning modifies neural representations in primary visual cortex (V1) during acquisition of a visually guided behavioral task. We imaged the activity of the same layer 2/3 neuronal populations as mice learned to discriminate two visual patterns while running through a virtual corridor, where one pattern was rewarded. Improvements in behavioral performance were closely associated with increasingly distinguishable population-level representations of task-relevant stimuli, as a result of stabilization of existing and recruitment of new neurons selective for these stimuli. These effects correlated with the appearance of multiple task-dependent signals during learning: those that increased neuronal selectivity across the population when expert animals engaged in the task, and those reflecting anticipation or behavioral choices specifically in neuronal subsets preferring the rewarded stimulus. Therefore, learning engages diverse mechanisms that modify sensory and non-sensory representations in V1 to adjust its processing to task requirements and the behavioral relevance of visual stimuli. PMID:26051421
Freud, Erez; Avidan, Galia; Ganel, Tzvi
2015-02-01
Holistic processing, the decoding of a stimulus as a unified whole, is a basic characteristic of object perception. Recent research using Garner's speeded classification task has shown that this processing style is utilized even for impossible objects that contain an inherent spatial ambiguity. In particular, similar Garner interference effects were found for possible and impossible objects, indicating similar holistic processing styles for the two object categories. In the present study, we further investigated the perceptual mechanisms that mediate such holistic representation of impossible objects. We relied on the notion that, whereas information embedded in the high-spatial-frequency (HSF) content supports fine-detailed processing of object features, the information conveyed by low spatial frequencies (LSF) is more crucial for the emergence of a holistic shape representation. To test the effects of image frequency on the holistic processing of impossible objects, participants performed the Garner speeded classification task on images of possible and impossible cubes filtered for their LSF and HSF information. For images containing only LSF, similar interference effects were observed for possible and impossible objects, indicating that the two object categories were processed in a holistic manner. In contrast, for the HSF images, Garner interference was obtained only for possible, but not for impossible objects. Importantly, we provided evidence to show that this effect could not be attributed to a lack of sensitivity to object possibility in the LSF images. Particularly, even for full-spectrum images, Garner interference was still observed for both possible and impossible objects. Additionally, performance in an object classification task revealed high sensitivity to object possibility, even for LSF images. Taken together, these findings suggest that the visual system can tolerate the spatial ambiguity typical to impossible objects by relying on information embedded in LSF, whereas HSF information may underlie the visual system's susceptibility to distortions in objects' spatial layouts.
A sensorimotor account of vision and visual consciousness.
O'Regan, J K; Noë, A
2001-10-01
Many current neurophysiological, psychophysical, and psychological approaches to vision rest on the idea that when we see, the brain produces an internal representation of the world. The activation of this internal representation is assumed to give rise to the experience of seeing. The problem with this kind of approach is that it leaves unexplained how the existence of such a detailed internal representation might produce visual consciousness. An alternative proposal is made here. We propose that seeing is a way of acting. It is a particular way of exploring the environment. Activity in internal representations does not generate the experience of seeing. The outside world serves as its own, external, representation. The experience of seeing occurs when the organism masters what we call the governing laws of sensorimotor contingency. The advantage of this approach is that it provides a natural and principled way of accounting for visual consciousness, and for the differences in the perceived quality of sensory experience in the different sensory modalities. Several lines of empirical evidence are brought forward in support of the theory, in particular: evidence from experiments in sensorimotor adaptation, visual "filling in," visual stability despite eye movements, change blindness, sensory substitution, and color perception.
Basso, Frédéric; Petit, Olivia; Le Bellu, Sophie; Lahlou, Saadi; Cancel, Aïda; Anton, Jean-Luc
2018-06-12
Every day, people are exposed to images of appetizing foods that can lead to high-calorie intake and contribute to overweight and obesity. Research has documented that manipulating the visual perspective from which eating is viewed helps resist temptation by altering the appraisal of unhealthy foods. However, the neural basis of this effect has not yet been examined using neuroimaging methods. Moreover, it is not known whether the benefits of this strategy can be observed when people, especially overweight people, are not explicitly asked to imagine themselves eating. Last, it remains to be investigated whether visual perspective could be used to promote healthy foods. The present work manipulated camera angles and tested whether visual perspective modulates activity in brain regions associated with taste and reward processing while participants watch videos featuring a hand grasping (unhealthy or healthy) foods from a plate during functional magnetic resonance imaging (fMRI). The plate was filmed from the perspective of the participant (first-person perspective; 1PP), or from a frontal view as if watching someone else eating (third-person perspective; 3PP). Our findings reveal that merely viewing unhealthy food cues from a 1PP (vs. 3PP) increases activity in brain regions that underlie representations of rewarding (appetitive) experiences (amygdala) and food intake (superior parietal gyrus). Additionally, our results show that ventral striatal activity is positively correlated with body mass index (BMI) during exposure to unhealthy foods from a 1PP (vs. 3PP). These findings suggest that unhealthy foods should be promoted through third-person (video) images to weaken the reward associated with their simulated consumption, especially amongst overweight people. It appears however that, as such, manipulating visual perspective fails to enhance the perception of healthy foods. Their promotion thus requires complementary solutions. Copyright © 2018. Published by Elsevier Ltd.
ERIC Educational Resources Information Center
Deliyianni, Eleni; Monoyiou, Annita; Elia, Iliada; Georgiou, Chryso; Zannettou, Eleni
2009-01-01
This study investigated the modes of representations generated by kindergarteners and first graders while solving standard and problematic problems in mathematics. Furthermore, it examined the influence of pupils' visual representations on the breach of the didactical contract rules in problem solving. The sample of the study consisted of 38…
ERIC Educational Resources Information Center
Lee, Victor R.
2010-01-01
Visual representations are ubiquitous in modern-day science textbooks and have in recent years become an object of criticism and scrutiny. This article examines the extent to which changes in representations in textbooks published in the USA over the past six decades have invited those critiques. Drawing from a correlational analysis of a corpus…
Importance of perceptual representation in the visual control of action
NASA Astrophysics Data System (ADS)
Loomis, Jack M.; Beall, Andrew C.; Kelly, Jonathan W.; Macuga, Kristen L.
2005-03-01
In recent years, many experiments have demonstrated that optic flow is sufficient for visually controlled action, with the suggestion that perceptual representations of 3-D space are superfluous. In contrast, recent research in our lab indicates that some visually controlled actions, including some thought to be based on optic flow, are indeed mediated by perceptual representations. For example, we have demonstrated that people are able to perform complex spatial behaviors, like walking, driving, and object interception, in virtual environments which are rendered visible solely by cyclopean stimulation (random-dot cinematograms). In such situations, the absence of any retinal optic flow that is correlated with the objects and surfaces within the virtual environment means that people are using stereo-based perceptual representations to perform the behavior. The fact that people can perform such behaviors without training suggests that the perceptual representations are likely the same as those used when retinal optic flow is present. Other research indicates that optic flow, whether retinal or a more abstract property of the perceptual representation, is not the basis for postural control, because postural instability is related to perceived relative motion between self and the visual surroundings rather than to optic flow, even in the abstract sense.
Multi-Voxel Decoding and the Topography of Maintained Information During Visual Working Memory
Lee, Sue-Hyun; Baker, Chris I.
2016-01-01
The ability to maintain representations in the absence of external sensory stimulation, such as in working memory, is critical for guiding human behavior. Human functional brain imaging studies suggest that visual working memory can recruit a network of brain regions from visual to parietal to prefrontal cortex. In this review, we focus on the maintenance of representations during visual working memory and discuss factors determining the topography of those representations. In particular, we review recent studies employing multi-voxel pattern analysis (MVPA) that demonstrate decoding of the maintained content in visual cortex, providing support for a “sensory recruitment” model of visual working memory. However, there is some evidence that maintained content can also be decoded in areas outside of visual cortex, including parietal and frontal cortex. We suggest that the ability to maintain representations during working memory is a general property of cortex, not restricted to specific areas, and argue that it is important to consider the nature of the information that must be maintained. Such information-content is critically determined by the task and the recruitment of specific regions during visual working memory will be both task- and stimulus-dependent. Thus, the common finding of maintained information in visual, but not parietal or prefrontal, cortex may be more of a reflection of the need to maintain specific types of visual information and not of a privileged role of visual cortex in maintenance. PMID:26912997
A task-dependent causal role for low-level visual processes in spoken word comprehension.
Ostarek, Markus; Huettig, Falk
2017-08-01
It is well established that the comprehension of spoken words referring to object concepts relies on high-level visual areas in the ventral stream that build increasingly abstract representations. It is much less clear whether basic low-level visual representations are also involved. Here we asked in what task situations low-level visual representations contribute functionally to concrete word comprehension using an interference paradigm. We interfered with basic visual processing while participants performed a concreteness task (Experiment 1), a lexical-decision task (Experiment 2), and a word class judgment task (Experiment 3). We found that visual noise interfered more with concrete versus abstract word processing, but only when the task required visual information to be accessed. This suggests that basic visual processes can be causally involved in language comprehension, but that their recruitment is not automatic and rather depends on the type of information that is required in a given task situation. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Milner-Bolotin, Marina; Nashon, Samson Madera
2012-02-01
Science, engineering and mathematics-related disciplines have relied heavily on a researcher's ability to visualize phenomena under study and to link and superimpose various abstract and concrete representations, including visual, spatial, and temporal ones. Spatial representations are especially important in all branches of biology (in developmental biology, time becomes an important dimension), where 3D and often 4D representations are crucial for understanding the phenomena. By the time biology students reach undergraduate education, they are supposed to have acquired visual-spatial thinking skills, yet it has been documented that very few undergraduates and only a small percentage of graduate students have had a chance to develop these skills to a sufficient degree. The current paper discusses the literature that highlights the essence of visual-spatial thinking and the development of visual-spatial literacy, considers the application of visual-spatial thinking to biology education, and proposes how modern technology can help to promote visual-spatial literacy and higher order thinking among undergraduate students of biology.
DiCarlo, James J.; Zecchina, Riccardo; Zoccolan, Davide
2013-01-01
The anterior inferotemporal cortex (IT) is the highest stage along the hierarchy of visual areas that, in primates, processes visual objects. Although several lines of evidence suggest that IT primarily represents visual shape information, some recent studies have argued that neuronal ensembles in IT code the semantic membership of visual objects (i.e., represent conceptual classes such as animate and inanimate objects). In this study, we investigated to what extent semantic, rather than purely visual, information is represented in IT by performing a multivariate analysis of IT responses to a set of visual objects. By relying on a variety of machine-learning approaches (including a cutting-edge clustering algorithm that has recently been developed in the domain of statistical physics), we found that, in most instances, IT representation of visual objects is accounted for by their similarity at the level of shape or, more surprisingly, low-level visual properties. Only in a few cases did we observe IT representations of semantic classes that were not explainable by the visual similarity of their members. Overall, these findings reassert the primary function of IT as a conveyor of explicit visual shape information, and reveal that low-level visual properties are represented in IT to a greater extent than previously appreciated. In addition, our work demonstrates how combining a variety of state-of-the-art multivariate approaches, and carefully estimating the contribution of shape similarity to the representation of object categories, can substantially advance our understanding of neuronal coding of visual objects in cortex. PMID:23950700
Evidence for highly selective neuronal tuning to whole words in the "visual word form area".
Glezer, Laurie S; Jiang, Xiong; Riesenhuber, Maximilian
2009-04-30
Theories of reading have posited the existence of a neural representation coding for whole real words (i.e., an orthographic lexicon), but experimental support for such a representation has proved elusive. Using fMRI rapid adaptation techniques, we provide evidence that the human left ventral occipitotemporal cortex (specifically the "visual word form area," VWFA) contains a representation based on neurons highly selective for individual real words, in contrast to current theories that posit a sublexical representation in the VWFA.
ERIC Educational Resources Information Center
Russo-Zimet, Gila; Segel, Sarit
2014-01-01
This research was designed to examine how early-childhood educators pursuing their graduate degrees perceive the concept of happiness, as conveyed in visual representations. The research methodology combines qualitative and quantitative paradigms using the metaphoric collage, a tool used to analyze visual and verbal aspects. The research…
A Task-Dependent Causal Role for Low-Level Visual Processes in Spoken Word Comprehension
ERIC Educational Resources Information Center
Ostarek, Markus; Huettig, Falk
2017-01-01
It is well established that the comprehension of spoken words referring to object concepts relies on high-level visual areas in the ventral stream that build increasingly abstract representations. It is much less clear whether basic low-level visual representations are also involved. Here we asked in what task situations low-level visual…
ERIC Educational Resources Information Center
Chen, Xiaoning
2017-01-01
With emerging new technologies being applied in teaching and learning, this study compares visual representations in three different high school biology textbook formats and analyses the senses engaged in viewing and understanding the science content represented through these visuals. The findings show that while a similar pattern is observed in…
ERIC Educational Resources Information Center
Dunabeitia, Jon Andoni; Aviles, Alberto; Afonso, Olivia; Scheepers, Christoph; Carreiras, Manuel
2009-01-01
In the present visual-world experiment, participants were presented with visual displays that included a target item that was a semantic associate of an abstract or a concrete word. This manipulation allowed us to test a basic prediction derived from the qualitatively different representational framework that supports the view of different…
The role of visual representation in physics learning: dynamic versus static visualization
NASA Astrophysics Data System (ADS)
Suyatna, Agus; Anggraini, Dian; Agustina, Dina; Widyastuti, Dini
2017-11-01
This study aims to examine the role of visual representation in physics learning and to compare the learning outcomes of dynamic and static visualization media. The study was conducted as a quasi-experiment with a pretest-posttest control group design. The sample comprised students from six classes at a state senior high school in Lampung Province. The experimental classes received instruction using dynamic visualization media and the control classes used static visualization media. Both groups were given pre- and post-tests with the same instruments. Data were tested with N-gain analysis, a normality test, a homogeneity test and a mean-difference test. The results showed a significant increase in mean learning outcomes (N-gain; p < 0.05) in both the experimental and control classes. Mean learning outcomes were significantly higher for students using dynamic visualization media than for the classes using static visualization media. This follows from the characteristics of visual representation: each form of visualization supports student understanding differently. Dynamic visual media are better suited to explaining material related to movement or to describing a process, whereas static visual media are appropriate for physical phenomena that do not move and require long-term observation.
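The N-gain analysis mentioned above is presumably Hake's normalized gain, the fraction of the possible improvement between pre-test and post-test that was actually achieved; the scores below are invented for illustration.

```python
def normalized_gain(pre, post, max_score=100):
    """Hake's normalized gain <g>: the achieved improvement
    (post - pre) as a fraction of the possible improvement
    (max_score - pre)."""
    return (post - pre) / (max_score - pre)

# Example: a class averaging 40/100 before and 70/100 after instruction.
g = normalized_gain(40, 70)
print(g)  # 0.5
```

By the usual convention, g >= 0.7 counts as high gain, 0.3 <= g < 0.7 as medium, and g < 0.3 as low.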
Change blindness and visual memory: visual representations get rich and act poor.
Varakin, D Alexander; Levin, Daniel T
2006-02-01
Change blindness is often taken as evidence that visual representations are impoverished, while successful recognition of specific objects is taken as evidence that they are richly detailed. In the current experiments, participants performed cover tasks that required each object in a display to be attended. Change detection trials were unexpectedly introduced and surprise recognition tests were given for nonchanging displays. For both change detection and recognition, participants had to distinguish objects from the same basic-level category, making it likely that specific visual information had to be used for successful performance. Although recognition was above chance, incidental change detection usually remained at floor. These results help reconcile demonstrations of poor change detection with demonstrations of good memory because they suggest that the capability to store visual information in memory is not reflected by the visual system's tendency to utilize these representations for purposes of detecting unexpected changes.
Big data in medical informatics: improving education through visual analytics.
Vaitsis, Christos; Nilsson, Gunnar; Zary, Nabil
2014-01-01
A continuous effort to improve healthcare education today is driven by the need to create competent health professionals able to meet healthcare demands. Limited research has reported how the manipulation of educational data can help improve healthcare education. The emerging research field of visual analytics has the advantage of combining big-data analysis and manipulation techniques, information and knowledge representation, and the human cognitive strength to perceive and recognise visual patterns. The aim of this study was therefore to explore novel ways of representing curriculum and educational data using visual analytics. Three approaches to the visualization and representation of educational data are presented. Five competencies addressed in courses at the undergraduate medical program level were found to correspond inaccurately to higher education board competencies. Different visual representations seem to have the potential to affect the ability to perceive entities and connections in the curriculum data.
Fine-grained visual marine vessel classification for coastal surveillance and defense applications
NASA Astrophysics Data System (ADS)
Solmaz, Berkan; Gundogdu, Erhan; Karaman, Kaan; Yücesoy, Veysel; Koç, Aykut
2017-10-01
The need for automated visual content analysis has substantially increased due to the large number of images captured by surveillance cameras. With a focus on the development of practical methods for extracting effective visual data representations, deep neural network based representations have received great attention due to their success in the visual categorization of generic images. For fine-grained image categorization, a closely related yet more challenging problem than generic image categorization due to high visual similarity within subgroups, diverse applications have been developed, such as classifying images of vehicles, birds, food and plants. Here, we propose the use of deep neural network based representations for categorizing and identifying marine vessels for defense and security applications. First, we gather a large number of marine vessel images via online sources, grouping them into four coarse categories: naval, civil, commercial and service vessels. Next, we subgroup naval vessels into fine categories such as corvettes, frigates and submarines. To distinguish images, we extract state-of-the-art deep visual representations and train support vector machines. Furthermore, we fine-tune the deep representations for marine vessel images. The experiments address two scenarios: classification and verification of naval marine vessels. The classification experiment covers coarse categorization as well as learning models of the fine categories. The verification experiment involves identifying specific naval vessels by revealing whether a pair of images belongs to the same vessel with the help of the learnt deep representations. Given the promising performance obtained, we believe these capabilities would be essential components of future coastal and on-board surveillance systems.
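The verification scenario described above can be sketched as a similarity test between deep feature vectors. The descriptor dimensionality, threshold, and synthetic features below are invented for illustration; in a real system the vectors would be extracted from a fine-tuned CNN rather than sampled at random.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def same_vessel(feat_a, feat_b, threshold=0.8):
    """Verification: declare a pair the same vessel if their deep
    feature vectors are sufficiently similar."""
    return cosine(feat_a, feat_b) >= threshold

rng = np.random.default_rng(3)
base = rng.normal(0, 1, 128)           # stand-in for a CNN descriptor
same = base + rng.normal(0, 0.1, 128)  # another image of the same ship
other = rng.normal(0, 1, 128)          # a different ship

print(same_vessel(base, same), same_vessel(base, other))
```

In practice the threshold would be chosen on a validation set of labeled image pairs, trading off false matches against misses.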
Draht, Fabian; Zhang, Sijie; Rayan, Abdelrahman; Schönfeld, Fabian; Wiskott, Laurenz; Manahan-Vaughan, Denise
2017-01-01
Spatial encoding in the hippocampus is based on a range of different input sources. To generate spatial representations, reliable sensory cues from the external environment are integrated with idiothetic cues, derived from self-movement, that enable path integration and directional perception. In this study, we examined to what extent idiothetic cues significantly contribute to spatial representations and navigation: we recorded place cells while rodents navigated towards two visually identical chambers in 180° orientation via two different paths in darkness and in the absence of reliable auditory or olfactory cues. Our goal was to generate a conflict between local visual and direction-specific information, and then to assess which strategy was prioritized in different learning phases. We observed that, in the absence of distal cues, place fields are initially controlled by local visual cues that override idiothetic cues, but that with multiple exposures to the paradigm, spaced at intervals of days, idiothetic cues become increasingly implemented in generating an accurate spatial representation. Taken together, these data support that, in the absence of distal cues, local visual cues are prioritized in the generation of context-specific spatial representations through place cells, whereby idiothetic cues are deemed unreliable. With cumulative exposures to the environments, the animal learns to attend to subtle idiothetic cues to resolve the conflict between visual and direction-specific information. PMID:28634444
Kraehenmann, Rainer; Schmidt, André; Friston, Karl; Preller, Katrin H.; Seifritz, Erich; Vollenweider, Franz X.
2015-01-01
Stimulation of serotonergic neurotransmission by psilocybin has been shown to shift emotional biases away from negative towards positive stimuli. We have recently shown that reduced amygdala activity during threat processing might underlie psilocybin's effect on emotional processing. However, it is still not known whether psilocybin modulates bottom-up or top-down connectivity within the visual-limbic-prefrontal network underlying threat processing. We therefore analyzed our previous fMRI data using dynamic causal modeling and used Bayesian model selection to infer how psilocybin modulated effective connectivity within the visual–limbic–prefrontal network during threat processing. First, both placebo and psilocybin data were best explained by a model in which threat affect modulated bidirectional connections between the primary visual cortex, amygdala, and lateral prefrontal cortex. Second, psilocybin decreased the threat-induced modulation of top-down connectivity from the amygdala to primary visual cortex, speaking to a neural mechanism that might underlie putative shifts towards positive affect states after psilocybin administration. These findings may have important implications for the treatment of mood and anxiety disorders. PMID:26909323
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stewart, Ian B.; Arendt, Dustin L.; Bell, Eric B.
Language in social media is extremely dynamic: new words emerge, trend, and disappear, while the meaning of existing words can fluctuate over time. This work addresses several important tasks of visualizing and predicting short-term text representation shift, i.e., the change in a word’s contextual semantics. We study the relationship between short-term concept drift and representation shift on a large social media corpus, VKontakte, collected during the Russia–Ukraine crisis in 2014–2015. We visualize short-term representation shift for example keywords and build predictive models to forecast short-term shifts in meaning from previous meaning as well as from concept drift. We show that short-term representation shift can be accurately predicted up to several weeks in advance and that visualization provides insight into meaning change. Our approach can be used to explore and characterize specific aspects of the streaming corpus during crisis events and potentially improve other downstream classification tasks, including real-time event forecasting in social media.
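The core quantity here, representation shift, is typically measured as the distance between a word's embedding vectors trained on successive time slices. The sketch below uses simulated weekly vectors (an assumption; the paper trains embeddings on real weekly corpus slices) and flags the week with the largest shift:

```python
# Sketch: quantify a keyword's week-to-week representation shift as the
# cosine distance between consecutive weekly embedding vectors.
import numpy as np

def cosine_distance(u, v):
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

rng = np.random.default_rng(1)
base = rng.normal(size=100)

# Simulated weekly embeddings of one keyword: small drift each week,
# with a large jump at week 5 (e.g. an event changes the word's usage).
weekly = [base]
for week in range(1, 10):
    step = 2.0 if week == 5 else 0.05
    weekly.append(weekly[-1] + step * rng.normal(size=100))

shifts = [cosine_distance(weekly[t], weekly[t + 1]) for t in range(9)]
peak_week = int(np.argmax(shifts))  # transition into the "event" week
```

A forecasting model of the kind the abstract describes would then regress future values of `shifts` on past shifts and on concept-drift features.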
de Borst, Aline W; de Gelder, Beatrice
2017-08-01
Previous studies have shown that the early visual cortex contains content-specific representations of stimuli during visual imagery, and that these representational patterns of imagery content have a perceptual basis. To date, there is little evidence for the presence of a similar organization in the auditory and tactile domains. Using fMRI-based multivariate pattern analyses we showed that primary somatosensory, auditory, motor, and visual cortices are discriminative for imagery of touch versus sound. In the somatosensory, motor and visual cortices the imagery modality discriminative patterns were similar to perception modality discriminative patterns, suggesting that top-down modulations in these regions rely on similar neural representations as bottom-up perceptual processes. Moreover, we found evidence for content-specific representations of the stimuli during auditory imagery in the primary somatosensory and primary motor cortices. Both the imagined emotions and the imagined identities of the auditory stimuli could be successfully classified in these regions. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Problem representation and mathematical problem solving of students of varying math ability.
Krawec, Jennifer L
2014-01-01
The purpose of this study was to examine differences in math problem solving among students with learning disabilities (LD, n = 25), low-achieving students (LA, n = 30), and average-achieving students (AA, n = 29). The primary interest was to analyze the processes students use to translate and integrate problem information while solving problems. Paraphrasing, visual representation, and problem-solving accuracy were measured in eighth grade students using a researcher-modified version of the Mathematical Processing Instrument. Results indicated that both students with LD and LA students struggled with processing but that students with LD were significantly weaker than their LA peers in paraphrasing relevant information. Paraphrasing and visual representation accuracy each accounted for a statistically significant amount of variance in problem-solving accuracy. Finally, the effect of visual representation of relevant information on problem-solving accuracy was dependent on ability; specifically, for students with LD, generating accurate visual representations was more strongly related to problem-solving accuracy than for AA students. Implications for instruction for students with and without LD are discussed.
Visual Image Sensor Organ Replacement: Implementation
NASA Technical Reports Server (NTRS)
Maluf, A. David (Inventor)
2011-01-01
Method and system for enhancing or extending visual representation of a selected region of a visual image, where visual representation is interfered with or distorted, by supplementing a visual signal with at least one audio signal having one or more audio signal parameters that represent one or more visual image parameters, such as vertical and/or horizontal location of the region; region brightness; dominant wavelength range of the region; change in a parameter value that characterizes the visual image, with respect to a reference parameter value; and time rate of change in a parameter value that characterizes the visual image. Region dimensions can be changed to emphasize change with time of a visual image parameter.
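As an illustration of the kind of visual-to-audio parameter mapping the patent describes, the hypothetical sketch below encodes a region's horizontal location as stereo pan and its brightness as tone frequency. The specific ranges and functions are illustrative assumptions, not taken from the invention:

```python
# Sketch: map visual region parameters to audio signal parameters.
import numpy as np

def region_to_audio(x_fraction, brightness):
    """x_fraction in [0, 1]: left-to-right location of the region.
    brightness in [0, 1]: normalized region brightness."""
    pan = 2.0 * x_fraction - 1.0                  # -1 (left) .. +1 (right)
    freq = 220.0 + brightness * (880.0 - 220.0)   # map to 220-880 Hz
    return pan, freq

def tone(freq, pan, duration=0.1, rate=8000):
    """Render a stereo sine tone carrying the encoded parameters."""
    t = np.arange(int(duration * rate)) / rate
    mono = np.sin(2 * np.pi * freq * t)
    left = mono * (1.0 - pan) / 2.0
    right = mono * (1.0 + pan) / 2.0
    return np.stack([left, right])

# A medium-bright region on the right of the image becomes a mid-pitch
# tone that is louder in the right channel.
pan, freq = region_to_audio(x_fraction=0.75, brightness=0.5)
stereo = tone(freq, pan)
```

Time-varying image parameters (the rate-of-change cases in the claim) would correspondingly modulate the tone's parameters over time.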
A review of uncertainty visualization within the IPCC reports
NASA Astrophysics Data System (ADS)
Nocke, Thomas; Reusser, Dominik; Wrobel, Markus
2015-04-01
Results derived from climate model simulations confront non-expert users with a variety of uncertainties. This gives rise to the challenge that the scientific information must be communicated so that it can be easily understood while the complexity of the underlying science is still conveyed. With respect to the assessment reports of the IPCC, the situation is even more complicated, because heterogeneous sources and multiple types of uncertainty need to be compiled together. Within this work, we systematically (1) analyzed the visual representation of uncertainties in the IPCC AR4 and AR5 reports, and (2) administered a questionnaire to evaluate how different user groups, such as decision-makers and teachers, understand these uncertainty visualizations. In the first step, we classified visual uncertainty metaphors for spatial, temporal and abstract representations. As a result, we clearly identified a high complexity of the IPCC visualizations compared to standard presentation graphics, sometimes even integrating two or more uncertainty classes/measures together with the "certain" (mean) information. Further, we identified complex written uncertainty explanations in image captions, even within the summary reports for policy makers. In the second step, based on these observations, we designed a questionnaire to investigate how non-climate experts understand these visual representations of uncertainty, how visual uncertainty coding might hinder the perception of the "non-uncertain" data, and whether alternatives to certain IPCC visualizations exist. In the talk/poster, we will present first results from this questionnaire. Summarizing, we identified a clear trend towards complex images within the latest IPCC reports, with a tendency to incorporate as much information as possible into the visual representations, resulting in proprietary, non-standard graphic representations that are not necessarily easy to comprehend at a glance.
We conclude that further translation is required to (visually) present the IPCC results to non-experts, providing tailored static and interactive visualization solutions for different user groups.
ERIC Educational Resources Information Center
Ignatieva, Raisa P.
2011-01-01
The purpose of the study was to uncover the cultural beliefs and values that underlie American and Russian teachers' representations of their professional identities and their understanding of power in education in the context of globally disseminated education reforms and current educational mandates--the No Child Left Behind Act of 2001 (NCLB)…
ERIC Educational Resources Information Center
Stevens, J.A.
2005-01-01
Four experiments were completed to characterize the utilization of visual imagery and motor imagery during the mental representation of human action. In Experiment 1, movement time functions for a motor imagery human locomotion task conformed to a speed-accuracy trade-off similar to Fitts' Law, whereas those for a visual imagery object motion task…
The Relationship Between Online Visual Representation of a Scene and Long-Term Scene Memory
ERIC Educational Resources Information Center
Hollingworth, Andrew
2005-01-01
In 3 experiments the author investigated the relationship between the online visual representation of natural scenes and long-term visual memory. In a change detection task, a target object either changed or remained the same from an initial image of a natural scene to a test image. Two types of changes were possible: rotation in depth, or…
Vernon, Richard J W; Gouws, André D; Lawrence, Samuel J D; Wade, Alex R; Morland, Antony B
2016-05-25
Representations in early visual areas are organized on the basis of retinotopy, but this organizational principle appears to lose prominence in the extrastriate cortex. Nevertheless, an extrastriate region, such as the shape-selective lateral occipital cortex (LO), must still base its activation on the responses from earlier retinotopic visual areas, implying that a transition from retinotopic to "functional" organizations should exist. We hypothesized that such a transition may lie in LO-1 or LO-2, two visual areas lying between retinotopically defined V3d and functionally defined LO. Using a rapid event-related fMRI paradigm, we measured neural similarity in 12 human participants between pairs of stimuli differing along dimensions of shape exemplar and shape complexity within both retinotopically and functionally defined visual areas. These neural similarity measures were then compared with low-level and more abstract (curvature-based) measures of stimulus similarity. We found that low-level, but not abstract, stimulus measures predicted V1-V3 responses, whereas the converse was true for LO, a double dissociation. Critically, abstract stimulus measures were most predictive of responses within LO-2, akin to LO, whereas both low-level and abstract measures were predictive for responses within LO-1, perhaps indicating a transitional point between those two organizational principles. Similar transitions to abstract representations were not observed in the more ventral stream passing through V4 and VO-1/2. The transition we observed in LO-1 and LO-2 demonstrates that a more "abstracted" representation, typically considered the preserve of "category-selective" extrastriate cortex, can nevertheless emerge in retinotopic regions. Visual areas are typically identified either through retinotopy (e.g., V1-V3) or from functional selectivity [e.g., shape-selective lateral occipital complex (LOC)]. 
We combined these approaches to explore the nature of shape representations through the visual hierarchy. Two different representations emerged: the first reflected low-level shape properties (dependent on the spatial layout of the shape outline), whereas the second captured more abstract curvature-related shape features. Critically, early visual cortex represented low-level information but this diminished in the extrastriate cortex (LO-1/LO-2/LOC), in which the abstract representation emerged. Therefore, this work further elucidates the nature of shape representations in the LOC, provides insight into how those representations emerge from early retinotopic cortex, and crucially demonstrates that retinotopically tuned regions (LO-1/LO-2) are not necessarily constrained to retinotopic representations. Copyright © 2016 Vernon et al.
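The comparison between neural similarity and model-based stimulus similarity used in studies like this one is commonly framed as representational similarity analysis (RSA). The sketch below shows the general logic with simulated dissimilarity matrices standing in for measured fMRI data: an "LO-like" neural RDM correlates with an abstract (curvature-based) model RDM but not with a low-level one. All matrices here are hypothetical.

```python
# Sketch: correlate a neural representational dissimilarity matrix (RDM)
# with competing model RDMs, comparing only the upper triangles.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_stimuli = 8

def random_rdm():
    """Symmetric dissimilarity matrix with a zero diagonal."""
    m = rng.random((n_stimuli, n_stimuli))
    m = (m + m.T) / 2
    np.fill_diagonal(m, 0.0)
    return m

abstract_model = random_rdm()   # e.g. curvature-based dissimilarity
lowlevel_model = random_rdm()   # e.g. outline-overlap dissimilarity

# Simulate an "LO-like" neural RDM dominated by the abstract model.
neural = abstract_model + 0.1 * random_rdm()

iu = np.triu_indices(n_stimuli, k=1)  # unique stimulus pairs only
r_abstract = spearmanr(neural[iu], abstract_model[iu])[0]
r_lowlevel = spearmanr(neural[iu], lowlevel_model[iu])[0]
```

In the study, the analogous comparison is run separately per visual area, which is how the dissociation between V1-V3 (low-level) and LO (abstract), and the mixed profile in LO-1, can be read out.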
The changing demographic, legal, and technological contexts of political representation
Forest, Benjamin
2005-01-01
Three developments have created challenges for political representation in the U.S. and particularly for the use of territorially based representation (election by district). First, the demographic complexity of the U.S. population has grown both in absolute terms and in terms of residential patterns. Second, legal developments since the 1960s have recognized an increasing number of groups as eligible for voting rights protection. Third, the growing technical capacities of computer technology, particularly Geographic Information Systems, have allowed political parties and other organizations to create election districts with increasingly precise political and demographic characteristics. Scholars have made considerable progress in measuring and evaluating the racial and partisan biases of districting plans, and some states have tried to use Geographic Information Systems technology to produce more representative districts. However, case studies of Texas and Arizona illustrate that such analytic and technical advances have not overcome the basic contradictions that underlie the American system of territorial political representation. PMID:16230615
Think spatial: the representation in mental rotation is nonvisual.
Liesefeld, Heinrich R; Zimmer, Hubert D
2013-01-01
For mental rotation, introspection, theories, and interpretations of experimental results imply a certain type of mental representation, namely, visual mental images. Characteristics of the rotated representation can be examined by measuring the influence of stimulus characteristics on rotational speed. If the amount of a given type of information influences rotational speed, one can infer that it was contained in the rotated representation. In Experiment 1, rotational speed of university students (10 men, 11 women) was found to be influenced exclusively by the amount of represented orientation-dependent spatial-relational information but not by orientation-independent spatial-relational information, visual complexity, or the number of stimulus parts. As information in mental-rotation tasks is initially presented visually, this finding implies that at some point during each trial, orientation-dependent information is extracted from visual information. Searching for more direct evidence for this extraction, we recorded the EEG of another sample of university students (12 men, 12 women) during mental rotation of the same stimuli. In an early time window, the observed working memory load-dependent slow potentials were sensitive to the stimuli's visual complexity. Later, in contrast, slow potentials were sensitive to the amount of orientation-dependent information only. We conclude that only orientation-dependent information is contained in the rotated representation. (PsycINFO Database Record (c) 2013 APA, all rights reserved).
Gainotti, Guido
2015-04-01
The present review aimed to evaluate two alternatives to the original version of the 'semantic hub' hypothesis, which, on the basis of semantic dementia (SD) data, assumed that the left and right anterior temporal lobes (ATLs) store all kinds of semantic representations in a unitary, amodal format. The first alternative proposal is that the right ATL might subsume non-verbal representations and the left ATL lexical-semantic representations, and that only in the advanced stages of SD, when atrophy affects the ATLs bilaterally, does the semantic impairment become 'multi-modal'. The second alternative suggestion is that the right and left ATLs might underlie two different domains of knowledge, because general conceptual knowledge might be supported by the left ATL, and social cognition by the right ATL. Results of the review substantially support the first proposal, showing that the right ATL subsumes non-verbal representations and the left ATL lexical-semantic representations. They are less conclusive about the second suggestion, because the right ATL seems to play a more important role in behavioral and emotional functions than in higher-level social cognition. Copyright © 2015 Elsevier Ltd. All rights reserved.
The Spatial and the Visual in Mental Spatial Reasoning: An Ill-Posed Distinction
NASA Astrophysics Data System (ADS)
Schultheis, Holger; Bertel, Sven; Barkowsky, Thomas; Seifert, Inessa
It is an ongoing and controversial debate in cognitive science which aspects of knowledge humans process visually and which ones they process spatially. Similarly, artificial intelligence (AI) and cognitive science research, in building computational cognitive systems, tended to use strictly spatial or strictly visual representations. The resulting systems, however, were suboptimal both with respect to computational efficiency and cognitive plausibility. In this paper, we propose that the problems in both research strands stem from a misconception of the visual and the spatial in mental spatial knowledge processing. Instead of viewing the visual and the spatial as two clearly separable categories, they should be conceptualized as the extremes of a continuous dimension of representation. Regarding psychology, a continuous dimension avoids the need to exclusively assign processes and representations to either one of the categories and, thus, facilitates a more unambiguous rating of processes and representations. Regarding AI and cognitive science, the concept of a continuous spatial/visual dimension provides the possibility of representation structures which can vary continuously along the spatial/visual dimension. As a first step in exploiting these potential advantages of the proposed conception we (a) introduce criteria allowing for a non-dichotomic judgment of processes and representations and (b) present an approach towards representation structures that can flexibly vary along the spatial/visual dimension.
2017-01-01
Selective visual attention enables organisms to enhance the representation of behaviorally relevant stimuli by altering the encoding properties of single receptive fields (RFs). Yet we know little about how the attentional modulations of single RFs contribute to the encoding of an entire visual scene. Addressing this issue requires (1) measuring a group of RFs that tile a continuous portion of visual space, (2) constructing a population-level measurement of spatial representations based on these RFs, and (3) linking how different types of RF attentional modulations change the population-level representation. To accomplish these aims, we used fMRI to characterize the responses of thousands of voxels in retinotopically organized human cortex. First, we found that the response modulations of voxel RFs (vRFs) depend on the spatial relationship between the RF center and the visual location of the attended target. Second, we used two analyses to assess the spatial encoding quality of a population of voxels. We found that attention increased fine spatial discriminability and representational fidelity near the attended target. Third, we linked these findings by manipulating the observed vRF attentional modulations and recomputing our measures of the fidelity of population codes. Surprisingly, we discovered that attentional enhancements of population-level representations largely depend on position shifts of vRFs, rather than changes in size or gain. Our data suggest that position shifts of single RFs are a principal mechanism by which attention enhances population-level representations in visual cortex. SIGNIFICANCE STATEMENT Although changes in the gain and size of RFs have dominated our view of how attention modulates visual information codes, such hypotheses have largely relied on the extrapolation of single-cell responses to population responses. Here we use fMRI to relate changes in single voxel receptive fields (vRFs) to changes in population-level representations. 
We find that vRF position shifts contribute more to population-level enhancements of visual information than changes in vRF size or gain. This finding suggests that position shifts are a principal mechanism by which spatial attention enhances population codes for relevant visual information. This poses challenges for labeled line theories of information processing, suggesting that downstream regions likely rely on distributed inputs rather than single neuron-to-neuron mappings. PMID:28242794
Neural Mechanisms Underlying Cross-Modal Phonetic Encoding.
Shahin, Antoine J; Backer, Kristina C; Rosenblum, Lawrence D; Kerlin, Jess R
2018-02-14
Audiovisual (AV) integration is essential for speech comprehension, especially in adverse listening situations. Divergent, but not mutually exclusive, theories have been proposed to explain the neural mechanisms underlying AV integration. One theory advocates that this process occurs via interactions between the auditory and visual cortices, as opposed to fusion of AV percepts in a multisensory integrator. Building upon this idea, we proposed that AV integration in spoken language reflects visually induced weighting of phonetic representations at the auditory cortex. EEG was recorded while male and female human subjects watched and listened to videos of a speaker uttering consonant vowel (CV) syllables /ba/ and /fa/, presented in Auditory-only, AV congruent or incongruent contexts. Subjects reported whether they heard /ba/ or /fa/. We hypothesized that vision alters phonetic encoding by dynamically weighting which phonetic representation in the auditory cortex is strengthened or weakened. That is, when subjects are presented with visual /fa/ and acoustic /ba/ and hear /fa/ (illusion-fa), the visual input strengthens the weighting of the phone /f/ representation. When subjects are presented with visual /ba/ and acoustic /fa/ and hear /ba/ (illusion-ba), the visual input weakens the weighting of the phone /f/ representation. Indeed, we found an enlarged N1 auditory evoked potential when subjects perceived illusion-ba, and a reduced N1 when they perceived illusion-fa, mirroring the N1 behavior for /ba/ and /fa/ in Auditory-only settings. These effects were especially pronounced in individuals with more robust illusory perception. These findings provide evidence that visual speech modifies phonetic encoding at the auditory cortex. SIGNIFICANCE STATEMENT The current study presents evidence that audiovisual integration in spoken language occurs when one modality (vision) acts on representations of a second modality (audition).
Using the McGurk illusion, we show that visual context primes phonetic representations at the auditory cortex, altering the auditory percept, evidenced by changes in the N1 auditory evoked potential. This finding reinforces the theory that audiovisual integration occurs via visual networks influencing phonetic representations in the auditory cortex. We believe that this will lead to the generation of new hypotheses regarding cross-modal mapping, particularly whether it occurs via direct or indirect routes (e.g., via a multisensory mediator). Copyright © 2018 the authors 0270-6474/18/381835-15$15.00/0.
Blindsight and Unconscious Vision: What They Teach Us about the Human Visual System
Ajina, Sara; Bridge, Holly
2017-01-01
Damage to the primary visual cortex removes the major input from the eyes to the brain, causing significant visual loss as patients are unable to perceive the side of the world contralateral to the damage. Some patients, however, retain the ability to detect visual information within this blind region; this is known as blindsight. By studying the visual pathways that underlie this residual vision in patients, we can uncover additional aspects of the human visual system that likely contribute to normal visual function but cannot be revealed under physiological conditions. In this review, we discuss the residual abilities and neural activity that have been described in blindsight and the implications of these findings for understanding the intact system. PMID:27777337
Woolgar, Alexandra; Williams, Mark A; Rich, Anina N
2015-04-01
Selective attention is fundamental for human activity, but the details of its neural implementation remain elusive. One influential theory, the adaptive coding hypothesis (Duncan, 2001, An adaptive coding model of neural function in prefrontal cortex, Nature Reviews Neuroscience 2:820-829), proposes that single neurons in certain frontal and parietal regions dynamically adjust their responses to selectively encode relevant information. This selective representation may in turn support selective processing in more specialized brain regions such as the visual cortices. Here, we use multi-voxel decoding of functional magnetic resonance images to demonstrate selective representation of attended--and not distractor--objects in frontal, parietal, and visual cortices. In addition, we highlight a critical role for task demands in determining which brain regions exhibit selective coding. Strikingly, representation of attended objects in frontoparietal cortex was highest under conditions of high perceptual demand, when stimuli were hard to perceive and coding in early visual cortex was weak. Coding in early visual cortex varied as a function of attention and perceptual demand, while coding in higher visual areas was sensitive to the allocation of attention but robust to changes in perceptual difficulty. Consistent with high-profile reports, peripherally presented objects could also be decoded from activity at the occipital pole, a region which corresponds to the fovea. Our results emphasize the flexibility of frontoparietal and visual systems. They support the hypothesis that attention enhances the multi-voxel representation of information in the brain, and suggest that the engagement of this attentional mechanism depends critically on current task demands. Copyright © 2015 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Unal, Hasan
2008-01-01
The importance of visualisation and multiple representations in mathematics has been stressed, especially in a context of problem solving. Hanna and Sidoli comment that "Diagrams and other visual representations have long been welcomed as heuristic accompaniments to proof, where they not only facilitate the understanding of theorems and their…
Tschechne, Stephan; Neumann, Heiko
2014-01-01
Visual structures in the environment are segmented into image regions, and these are combined into a representation of surfaces and prototypical objects. Such a perceptual organization is performed by complex neural mechanisms in the visual cortex of primates. Multiple mutually connected areas in the ventral cortical pathway receive visual input and extract local form features that are subsequently grouped into increasingly complex, more meaningful image elements. Such a distributed processing network must be capable of making accessible both highly articulated changes in shape boundary and the very subtle curvature changes that contribute to the perception of an object. We propose a recurrent computational network architecture that utilizes hierarchical distributed representations of shape features to encode surface and object boundary over different scales of resolution. Our model makes use of neural mechanisms that model the processing capabilities of early and intermediate stages in visual cortex, namely areas V1–V4 and IT. We suggest that multiple specialized component representations interact by feedforward hierarchical processing that is combined with feedback signals driven by representations generated at higher stages. Based on this, global configurational as well as local information is made available to distinguish changes in the object's contour. Once the outline of a shape has been established, contextual contour configurations are used to assign border ownership directions and thus achieve segregation of figure and ground. The model thus proposes how separate mechanisms contribute to distributed hierarchical cortical shape representation and combine with processes of figure-ground segregation. Our model is probed with a selection of stimuli to illustrate processing results at different processing stages. We especially highlight how modulatory feedback connections contribute to the processing of visual input at various stages in the processing hierarchy.
PMID:25157228
A Cross-Modal Perspective on the Relationships between Imagery and Working Memory
Likova, Lora T.
2013-01-01
Mapping the distinctions and interrelationships between imagery and working memory (WM) remains challenging. Although each of these major cognitive constructs is defined and treated in various ways across studies, most accept that both imagery and WM involve a form of internal representation available to our awareness. In WM, there is a further emphasis on goal-oriented, active maintenance, and use of this conscious representation to guide voluntary action. Multicomponent WM models incorporate representational buffers, such as the visuo-spatial sketchpad, plus central executive functions. If there is a visuo-spatial “sketchpad” for WM, does imagery involve the same representational buffer? Alternatively, does WM employ an imagery-specific representational mechanism to occupy our awareness? Or do both constructs utilize a more generic “projection screen” of an amodal nature? To address these issues, in a cross-modal fMRI study, I introduce a novel Drawing-Based Memory Paradigm, and conceptualize drawing as a complex behavior that is readily adaptable from the visual to non-visual modalities (such as the tactile modality), which opens intriguing possibilities for investigating cross-modal learning and plasticity. Blindfolded participants were trained through our Cognitive-Kinesthetic Method (Likova, 2010a, 2012) to draw complex objects guided purely by the memory of felt tactile images. If this WM task had been mediated by transfer of the felt spatial configuration to the visual imagery mechanism, the response-profile in visual cortex would be predicted to have the “top-down” signature of propagation of the imagery signal downward through the visual hierarchy. Remarkably, the pattern of cross-modal occipital activation generated by the non-visual memory drawing was essentially the inverse of this typical imagery signature. 
Activation within the visual hierarchy was confined to the primary visual area (V1) and was accompanied by deactivation of the entire extrastriate cortex, thus 'cutting off' any signal propagation from/to V1 through the visual hierarchy. The implications of these findings for the debate on the interrelationships between the core cognitive constructs of WM and imagery and the nature of internal representations are evaluated. PMID:23346061
Representational neglect for words as revealed by bisection tasks.
Arduino, Lisa S; Marinelli, Chiara Valeria; Pasotti, Fabrizio; Ferrè, Elisa Raffaella; Bottini, Gabriella
2012-03-01
In the present study, we showed that a representational disorder for words can dissociate from both representational neglect for objects and neglect dyslexia. This study involved 14 brain-damaged patients with left unilateral spatial neglect and a group of normal subjects. Patients were divided into four groups based on the presence of left neglect dyslexia and of representational neglect for non-verbal material, as evaluated by the Clock Drawing test. The patients were presented with bisection tasks for words and lines. The word bisection tasks (with words of five and seven letters) comprised the following: (1) representational bisection: the experimenter pronounced a word and then asked the patient to name the letter in the middle position; (2) visual bisection: same as (1) with stimuli presented visually; and (3) motor bisection: the patient was asked to cross out the letter in the middle position. The standard line bisection task was presented using lines of different lengths. Consistent with the literature, long lines were bisected to the right, while short lines, rendered comparable in length to the words of the word bisection test, deviated to the left (crossover effect). Both patients and controls showed the same leftward bias on words in the visual and motor bisection conditions. A significant difference emerged between the groups only in the representational bisection task: the group exhibiting neglect dyslexia associated with representational neglect for objects showed a significant rightward bias, whereas the other three patient groups and the controls showed a leftward bisection bias. Neither the presence of neglect alone nor the presence of visual neglect dyslexia was sufficient to produce a specific disorder in mental imagery. These results demonstrate a specific representational neglect for words independent of both representational neglect for objects and neglect dyslexia. ©2011 The British Psychological Society.
Invariant recognition drives neural representations of action sequences
Poggio, Tomaso
2017-01-01
Recognizing the actions of others from visual stimuli is a crucial aspect of human perception that allows individuals to respond to social cues. Humans are able to discriminate between similar actions despite transformations, such as changes in viewpoint or actor, that substantially alter the visual appearance of a scene. This ability to generalize across complex transformations is a hallmark of human visual intelligence. Advances in understanding action recognition at the neural level have not always translated into precise accounts of the computational principles by which human visual cortex constructs representations of action sequences. Here we test the hypothesis that invariant action discrimination might fill this gap. Recently, the study of artificial systems for static object perception has produced models, Convolutional Neural Networks (CNNs), that achieve human level performance in complex discriminative tasks. Within this class, architectures that better support invariant object recognition also produce image representations that better match those implied by human and primate neural data. However, whether these models produce representations of action sequences that support recognition across complex transformations and closely follow neural representations of actions remains unknown. Here we show that spatiotemporal CNNs accurately categorize video stimuli into action classes, and that deliberate model modifications that improve performance on an invariant action recognition task lead to data representations that better match human neural recordings. Our results support our hypothesis that performance on invariant discrimination dictates the neural representations of actions computed in the brain. These results broaden the scope of the invariant recognition framework for understanding visual intelligence from perception of inanimate objects and faces in static images to the study of human perception of action sequences. PMID:29253864
Han, Paul K J; Klein, William M P; Lehman, Tom; Killam, Bill; Massett, Holly; Freedman, Andrew N
2011-01-01
To examine the effects of communicating uncertainty regarding individualized colorectal cancer risk estimates and to identify factors that influence these effects. Two Web-based experiments were conducted, in which adults aged 40 years and older were provided with hypothetical individualized colorectal cancer risk estimates differing in the extent and representation of expressed uncertainty. The uncertainty consisted of imprecision (otherwise known as "ambiguity") of the risk estimates and was communicated using different representations of confidence intervals. Experiment 1 (n = 240) tested the effects of ambiguity (confidence interval v. point estimate) and representational format (textual v. visual) on cancer risk perceptions and worry. Potential effect modifiers, including personality type (optimism), numeracy, and the information's perceived credibility, were examined, along with the influence of communicating uncertainty on responses to comparative risk information. Experiment 2 (n = 135) tested enhanced representations of ambiguity that incorporated supplemental textual and visual depictions. Communicating uncertainty led to heightened cancer-related worry in participants, exemplifying the phenomenon of "ambiguity aversion." This effect was moderated by representational format and dispositional optimism; textual (v. visual) format and low (v. high) optimism were associated with greater ambiguity aversion. However, when enhanced representations were used to communicate uncertainty, textual and visual formats showed similar effects. Both the communication of uncertainty and use of the visual format diminished the influence of comparative risk information on risk perceptions. The communication of uncertainty regarding cancer risk estimates has complex effects, which include heightening cancer-related worry, consistent with ambiguity aversion, and diminishing the influence of comparative risk information on risk perceptions.
These responses are influenced by representational format and personality type, and the influence of format appears to be modifiable and content dependent.
Comparing visual representations across human fMRI and computational vision
Leeds, Daniel D.; Seibert, Darren A.; Pyles, John A.; Tarr, Michael J.
2013-01-01
Feedforward visual object perception recruits a cortical network that is assumed to be hierarchical, progressing from basic visual features to complete object representations. However, the nature of the intermediate features related to this transformation remains poorly understood. Here, we explore how well different computer vision recognition models account for neural object encoding across the human cortical visual pathway as measured using fMRI. These neural data, collected during the viewing of 60 images of real-world objects, were analyzed with a searchlight procedure as in Kriegeskorte, Goebel, and Bandettini (2006): Within each searchlight sphere, the obtained patterns of neural activity for all 60 objects were compared to model responses for each computer recognition algorithm using representational dissimilarity analysis (Kriegeskorte et al., 2008). Although each of the computer vision methods significantly accounted for some of the neural data, among the different models, the scale invariant feature transform (Lowe, 2004), encoding local visual properties gathered from “interest points,” was best able to accurately and consistently account for stimulus representations within the ventral pathway. More generally, when present, significance was observed in regions of the ventral-temporal cortex associated with intermediate-level object perception. Differences in model effectiveness and the neural location of significant matches may be attributable to the fact that each model implements a different featural basis for representing objects (e.g., more holistic or more parts-based). Overall, we conclude that well-known computer vision recognition systems may serve as viable proxies for theories of intermediate visual object representation. PMID:24273227
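The searchlight comparison described above can be sketched in miniature: build a representational dissimilarity matrix (RDM) from each set of response patterns (Kriegeskorte et al., 2008), then correlate the two RDMs. The pattern vectors below are toy stand-ins, not the fMRI or model responses from the study.

```python
# Minimal representational dissimilarity analysis: compute 1 - correlation
# between every pair of stimuli for "neural" and "model" patterns, then
# correlate the two dissimilarity structures. All vectors are toy data.
from itertools import combinations

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

def rdm(patterns):
    """Dissimilarity (1 - correlation) for every pair of stimuli."""
    return {(i, j): 1 - pearson(patterns[i], patterns[j])
            for i, j in combinations(range(len(patterns)), 2)}

def rdm_agreement(neural, model):
    """Correlate the two RDMs' pairwise entries: how well the model
    accounts for the neural similarity structure."""
    keys = sorted(neural)
    return pearson([neural[k] for k in keys], [model[k] for k in keys])

# Toy example: the model patterns are a noisy copy of the neural ones,
# so their RDMs should agree strongly.
neural_patterns = [[1.0, 0.2, 0.1], [0.9, 0.3, 0.2], [0.1, 1.0, 0.8]]
model_patterns  = [[1.1, 0.2, 0.0], [0.8, 0.4, 0.2], [0.2, 0.9, 0.9]]
score = rdm_agreement(rdm(neural_patterns), rdm(model_patterns))
```

In the study this agreement score (computed with rank correlation over many stimuli) is what ranks SIFT and the other computer-vision models against each searchlight sphere.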
Roldan, Stephanie M.
2017-01-01
One of the fundamental goals of object recognition research is to understand how a cognitive representation produced from the output of filtered and transformed sensory information facilitates efficient viewer behavior. Given that mental imagery strongly resembles perceptual processes in both cortical regions and subjective visual qualities, it is reasonable to question whether mental imagery facilitates cognition in a manner similar to that of perceptual viewing: via the detection and recognition of distinguishing features. Categorizing the feature content of mental imagery holds potential as a reverse pathway by which to identify the components of a visual stimulus which are most critical for the creation and retrieval of a visual representation. This review will examine the likelihood that the information represented in visual mental imagery reflects distinctive object features thought to facilitate efficient object categorization and recognition during perceptual viewing. If it is the case that these representational features resemble their sensory counterparts in both spatial and semantic qualities, they may well be accessible through mental imagery as evaluated through current investigative techniques. In this review, methods applied to mental imagery research and their findings are reviewed and evaluated for their efficiency in accessing internal representations, and implications for identifying diagnostic features are discussed. An argument is made for the benefits of combining mental imagery assessment methods with diagnostic feature research to advance the understanding of visual perceptive processes, with suggestions for avenues of future investigation. PMID:28588538
Perceptual processing affects conceptual processing.
Van Dantzig, Saskia; Pecher, Diane; Zeelenberg, René; Barsalou, Lawrence W
2008-04-05
According to the Perceptual Symbols Theory of cognition (Barsalou, 1999), modality-specific simulations underlie the representation of concepts. A strong prediction of this view is that perceptual processing affects conceptual processing. In this study, participants performed a perceptual detection task and a conceptual property-verification task in alternation. Responses on the property-verification task were slower for those trials that were preceded by a perceptual trial in a different modality than for those that were preceded by a perceptual trial in the same modality. This finding of a modality-switch effect across perceptual processing and conceptual processing supports the hypothesis that perceptual and conceptual representations are partially based on the same systems. 2008 Cognitive Science Society, Inc.
Brief Report: Autism-like Traits are Associated With Enhanced Ability to Disembed Visual Forms.
Sabatino DiCriscio, Antoinette; Troiani, Vanessa
2017-05-01
Atypical visual perceptual skills are thought to underlie unusual visual attention in autism spectrum disorders. We assessed whether individual differences in visual processing skills scaled with quantitative traits associated with the broader autism phenotype (BAP). Visual perception was assessed using the Figure-Ground subtest of the Test of Visual Perceptual Skills, 3rd Edition (TVPS). In a large adult cohort (n = 209), TVPS Figure-Ground scores were positively correlated with autistic-like social features as assessed by the Broader Autism Phenotype Questionnaire. This relationship was gender-specific, with males showing a correspondence between visual perceptual skills and autistic-like traits. This work supports the link between atypical visual perception and autism and highlights the importance of characterizing meaningful individual differences in clinically relevant behavioral phenotypes.
COALA-System for Visual Representation of Cryptography Algorithms
ERIC Educational Resources Information Center
Stanisavljevic, Zarko; Stanisavljevic, Jelena; Vuletic, Pavle; Jovanovic, Zoran
2014-01-01
Educational software systems have an increasingly significant presence in engineering sciences. They aim to improve students' attitudes and knowledge acquisition typically through visual representation and simulation of complex algorithms and mechanisms or hardware systems that are often not available to the educational institutions. This paper…
High Resolution Signal Processing
1993-08-19
Donald Tufts, Journal of Visual Communication and Image Representation, Vol. 2, No. 4, pp. 395-404, December 1991. "Iterative Realization of the...," Chen and Donald Tufts, Journal of Visual Communication and Image Representation, Vol. 2, No. 4, pp. 395-404, December 1991. "Fast Maximum Likelihood..."
Visual Representations of DNA Replication: Middle Grades Students' Perceptions and Interpretations
NASA Astrophysics Data System (ADS)
Patrick, Michelle D.; Carter, Glenda; Wiebe, Eric N.
2005-09-01
Visual representations play a critical role in the communication of science concepts for scientists and students alike. However, recent research suggests that novice students experience difficulty extracting relevant information from representations. This study examined students' interpretations of visual representations of DNA replication. Each of the four steps of DNA replication included in the instructional presentation was represented as a text slide, a simple 2D graphic, and a rich 3D graphic. Participants were middle grade girls ( n = 21) attending a summer math and science program. Students' eye movements were measured as they viewed the representations. Participants were interviewed following instruction to assess their perceived salient features. Eye tracking fixation counts indicated that the same features (look zones) in the corresponding 2D and 3D graphics had different salience. The interviews revealed that students used different characteristics such as color, shape, and complexity to make sense of the graphics. The results of this study have implications for the design of instructional representations. Since many students have difficulty distinguishing between relevant and irrelevant information, cueing and directing student attention through the instructional representation could allow cognitive resources to be directed to the most relevant material.
Dotsch, Ron; Wentura, Dirk
2016-01-01
Even though smiles are seen as universal facial expressions, research shows that there exist various kinds of smiles (i.e., affiliative smiles, dominant smiles). Accordingly, we suggest that there also exist various mental representations of smiles. Which representation is employed in cognition may depend on social factors, such as the smiling person’s group membership: Since in-group members are typically seen as more benevolent than out-group members, in-group smiles should be associated with more benevolent social meaning than those conveyed by out-group members. We visualized in-group and out-group smiles with reverse correlation image classification. These visualizations indicated that mental representations of in-group smiles indeed express more benevolent social meaning than those of out-group smiles. The affective meaning of these visualized smiles was not influenced by group membership. Importantly, the effect occurred even though participants were not instructed to attend to the nature of the smile, pointing to an automatic association between group membership and intention. PMID:26963621
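Reverse correlation image classification, the visualization technique used above, can be illustrated with a toy simulation: a hypothetical observer repeatedly chooses between a noise image and its inverse, and averaging the chosen noise recovers the observer's internal template. The template, pixel count, and trial count below are invented for illustration.

```python
# Toy reverse correlation: the observer's hidden template biases which
# random noise "images" get chosen; averaging the chosen noise yields a
# classification image that visualizes the mental representation.
import random

random.seed(7)
N_PIXELS, N_TRIALS = 16, 2000
# Hypothetical internal representation (e.g., of a smiling face).
template = [random.choice([-1.0, 1.0]) for _ in range(N_PIXELS)]

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

# Each trial: show a noise image and its inverse; the simulated observer
# picks whichever resembles the target more; sum the chosen noise.
classification_image = [0.0] * N_PIXELS
for _ in range(N_TRIALS):
    noise = [random.gauss(0, 1) for _ in range(N_PIXELS)]
    chosen = noise if dot(noise, template) > 0 else [-v for v in noise]
    classification_image = [c + v for c, v in zip(classification_image, chosen)]
classification_image = [c / N_TRIALS for c in classification_image]

# The averaged chosen noise should align with the hidden template.
recovered_match = dot(classification_image, template)
```

In the study, separate classification images were computed for in-group and out-group smiles and then rated by independent judges for social meaning.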
Williams, Melonie; Hong, Sang W; Kang, Min-Suk; Carlisle, Nancy B; Woodman, Geoffrey F
2013-04-01
Recent research using change-detection tasks has shown that a directed-forgetting cue, indicating that a subset of the information stored in memory can be forgotten, significantly benefits the other information stored in visual working memory. How do these directed-forgetting cues aid the memory representations that are retained? We addressed this question in the present study by using a recall paradigm to measure the nature of the retained memory representations. Our results demonstrated that a directed-forgetting cue leads to higher-fidelity representations of the remaining items and a lower probability of dropping these representations from memory. Next, we showed that this is made possible by the to-be-forgotten item being expelled from visual working memory following the cue, allowing maintenance mechanisms to be focused on only the items that remain in visual working memory. Thus, the present findings show that cues to forget benefit the remaining information in visual working memory by fundamentally improving their quality relative to conditions in which just as many items are encoded but no cue is provided.
Attention modulates spatial priority maps in the human occipital, parietal and frontal cortices
Sprague, Thomas C.; Serences, John T.
2014-01-01
Computational theories propose that attention modulates the topographical landscape of spatial ‘priority’ maps in regions of visual cortex so that the location of an important object is associated with higher activation levels. While single-unit recording studies have demonstrated attention-related increases in the gain of neural responses and changes in the size of spatial receptive fields, the net effect of these modulations on the topography of region-level priority maps has not been investigated. Here, we used fMRI and a multivariate encoding model to reconstruct spatial representations of attended and ignored stimuli using activation patterns across entire visual areas. These reconstructed spatial representations reveal the influence of attention on the amplitude and size of stimulus representations within putative priority maps across the visual hierarchy. Our results suggest that attention increases the amplitude of stimulus representations in these spatial maps, particularly in higher visual areas, but does not substantively change their size. PMID:24212672
Visual long-term memory has the same limit on fidelity as visual working memory.
Brady, Timothy F; Konkle, Talia; Gill, Jonathan; Oliva, Aude; Alvarez, George A
2013-06-01
Visual long-term memory can store thousands of objects with surprising visual detail, but just how detailed are these representations, and how can one quantify this fidelity? Using the property of color as a case study, we estimated the precision of visual information in long-term memory, and compared this with the precision of the same information in working memory. Observers were shown real-world objects in random colors and were asked to recall the colors after a delay. We quantified two parameters of performance: the variability of internal representations of color (fidelity) and the probability of forgetting an object's color altogether. Surprisingly, the fidelity of color information in long-term memory was comparable to the asymptotic precision of working memory. These results suggest that long-term memory and working memory may be constrained by a common limit, such as a bound on the fidelity required to retrieve a memory representation.
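The two performance parameters quantified above (the fidelity of the stored color and the probability of forgetting it altogether) come from a mixture model of recall errors. A minimal sketch, assuming simulated errors on a 360-degree color wheel and a simple grid-search maximum-likelihood fit rather than the authors' actual procedure:

```python
# Two-parameter mixture model of color recall: each error is either drawn
# from a Gaussian around the true color (width = fidelity) or is a uniform
# random guess (the color was forgotten). Parameters below are illustrative.
import math, random

random.seed(1)
TRUE_GUESS_RATE, TRUE_SD, N = 0.2, 20.0, 1000

# Simulated recall errors in degrees on the color wheel.
errors = [random.uniform(-180, 180) if random.random() < TRUE_GUESS_RATE
          else random.gauss(0, TRUE_SD) for _ in range(N)]

def log_likelihood(errors, guess_rate, sd):
    ll = 0.0
    for e in errors:
        gaussian = math.exp(-e * e / (2 * sd * sd)) / (sd * math.sqrt(2 * math.pi))
        ll += math.log(guess_rate / 360.0 + (1 - guess_rate) * gaussian)
    return ll

# Grid-search maximum likelihood over (guess rate, fidelity).
g_hat, sd_hat = max(
    ((g / 100.0, s) for g in range(5, 55, 5) for s in range(5, 65, 5)),
    key=lambda p: log_likelihood(errors, p[0], p[1]))
```

Fitting this model separately to working-memory and long-term-memory responses is what lets the sd (fidelity) parameters be compared across the two systems.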
A ganglion-cell-based primary image representation method and its contribution to object recognition
NASA Astrophysics Data System (ADS)
Wei, Hui; Dai, Zhi-Long; Zuo, Qing-Song
2016-10-01
A visual stimulus is represented by the biological visual system at several levels: in order from low to high, these are photoreceptor cells, ganglion cells (GCs), lateral geniculate nucleus cells, and visual cortical neurons. Retinal GCs at the early level need to represent raw data only once, yet must meet a wide range of requests from different vision-based tasks. This means the information representation at this level is general and not task-specific. Neurobiological findings have attributed this universal adaptation to GCs' receptive field (RF) mechanisms. For the purposes of developing a highly efficient image representation method that can facilitate information processing and interpretation at later stages, here we design a computational model to simulate the GC's non-classical RF. This new image representation method can extract major structural features from raw data, and is consistent with other statistical measures of the image. Based on the new representation, the performance of other state-of-the-art algorithms in contour detection and segmentation can be upgraded remarkably. This work concludes that applying a sophisticated representation scheme at an early stage is an efficient and promising strategy in visual information processing.
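The classical part of a ganglion cell's receptive field is commonly modeled as a difference of Gaussians (an excitatory center minus a broader inhibitory surround); the paper's non-classical RF model goes beyond this, but a 1D DoG sketch shows the basic center-surround representation it builds on. All widths and sizes below are illustrative.

```python
# 1D difference-of-Gaussians (DoG) receptive field: the filter responds
# weakly to uniform regions and strongly beside luminance edges, which is
# the structural-feature emphasis of a ganglion-cell representation.
import math

def dog_kernel(radius, sigma_c=1.0, sigma_s=3.0):
    """Excitatory center Gaussian minus broader inhibitory surround."""
    def g(x, s):
        return math.exp(-x * x / (2 * s * s)) / (s * math.sqrt(2 * math.pi))
    return [g(x, sigma_c) - g(x, sigma_s) for x in range(-radius, radius + 1)]

def convolve(signal, kernel):
    r = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for k, w in enumerate(kernel):
            j = min(max(i + k - r, 0), len(signal) - 1)  # clamp at borders
            acc += w * signal[j]
        out.append(acc)
    return out

# A step edge: uniform dark region, then uniform bright region.
edge = [0.0] * 10 + [1.0] * 10
response = convolve(edge, dog_kernel(radius=6))
# The response is near zero in the uniform regions and shows
# opposite-signed extrema flanking the edge: center-surround encoding.
```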
Bankson, B B; Hebart, M N; Groen, I I A; Baker, C I
2018-05-17
Visual object representations are commonly thought to emerge rapidly, yet it has remained unclear to what extent early brain responses reflect purely low-level visual features of these objects and how strongly those features contribute to later categorical or conceptual representations. Here, we aimed to estimate a lower temporal bound for the emergence of conceptual representations by defining two criteria that characterize such representations: 1) conceptual object representations should generalize across different exemplars of the same object, and 2) these representations should reflect high-level behavioral judgments. To test these criteria, we compared magnetoencephalography (MEG) recordings between two groups of participants (n = 16 per group) exposed to different exemplar images of the same object concepts. Further, we disentangled low-level from high-level MEG responses by estimating the unique and shared contribution of models of behavioral judgments, semantics, and different layers of deep neural networks of visual object processing. We find that 1) both generalization across exemplars as well as generalization of object-related signals across time increase after 150 ms, peaking around 230 ms; 2) representations specific to behavioral judgments emerged rapidly, peaking around 160 ms. Collectively, these results suggest a lower bound for the emergence of conceptual object representations around 150 ms following stimulus onset. Copyright © 2018 Elsevier Inc. All rights reserved.
From Quantification to Visualization: A Taxonomy of Uncertainty Visualization Approaches
Potter, Kristin; Rosen, Paul; Johnson, Chris R.
2014-01-01
Quantifying uncertainty is an increasingly important topic across many domains. The uncertainties present in data come with many diverse representations having originated from a wide variety of disciplines. Communicating these uncertainties is a task often left to visualization without clear connection between the quantification and visualization. In this paper, we first identify frequently occurring types of uncertainty. Second, we connect those uncertainty representations to ones commonly used in visualization. We then look at various approaches to visualizing this uncertainty by partitioning the work based on the dimensionality of the data and the dimensionality of the uncertainty. We also discuss noteworthy exceptions to our taxonomy along with future research directions for the uncertainty visualization community. PMID:25663949
Global neural pattern similarity as a common basis for categorization and recognition memory.
Davis, Tyler; Xue, Gui; Love, Bradley C; Preston, Alison R; Poldrack, Russell A
2014-05-28
Familiarity, or memory strength, is a central construct in models of cognition. In previous categorization and long-term memory research, correlations have been found between psychological measures of memory strength and activation in the medial temporal lobes (MTLs), which suggests a common neural locus for memory strength. However, activation alone is insufficient for determining whether the same mechanisms underlie neural function across domains. Guided by mathematical models of categorization and long-term memory, we develop a theory and a method to test whether memory strength arises from the global similarity among neural representations. In human subjects, we find significant correlations between global similarity among activation patterns in the MTLs and both subsequent memory confidence in a recognition memory task and model-based measures of memory strength in a category learning task. Our work bridges formal cognitive theories and neuroscientific models by illustrating that the same global similarity computations underlie processing in multiple cognitive domains. Moreover, by establishing a link between neural similarity and psychological memory strength, our findings suggest that there may be an isomorphism between psychological and neural representational spaces that can be exploited to test cognitive theories at both the neural and behavioral levels. Copyright © 2014 the authors 0270-6474/14/347472-13$15.00/0.
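The global-similarity idea above can be made concrete in a few lines: an item's memory strength is modeled as its summed, exponentially decaying similarity to every other item's activation pattern. A minimal sketch with simulated voxel patterns; the sensitivity parameter `c` and the data are hypothetical illustrations, not the paper's actual analysis:

```python
import numpy as np

def global_similarity(patterns, c=1.0):
    """Summed exponential similarity of each item's activation pattern
    to all other items' patterns (a GCM-style strength measure)."""
    n = len(patterns)
    strength = np.zeros(n)
    for i in range(n):
        d = np.linalg.norm(patterns - patterns[i], axis=1)  # distances
        d[i] = np.inf                                       # drop self-match
        strength[i] = np.exp(-c * d).sum()
    return strength

rng = np.random.default_rng(0)
patterns = rng.normal(size=(20, 50))      # 20 items x 50 voxels, simulated
strength = global_similarity(patterns)    # one predicted strength per item
```

The resulting per-item scores are what would then be correlated with behavioral memory-strength measures such as recognition confidence.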
Sharmin, Moushumi; Raij, Andrew; Epstien, David; Nahum-Shani, Inbal; Beck, J Gayle; Vhaduri, Sudip; Preston, Kenzie; Kumar, Santosh
2015-09-01
We investigate needs, challenges, and opportunities in visualizing time-series sensor data on stress to inform the design of just-in-time adaptive interventions (JITAIs). We identify seven key challenges: massive volume and variety of data, complexity in identifying stressors, scalability of space, multifaceted relationship between stress and time, a need for representation at multiple granularities, interperson variability, and limited understanding of JITAI design requirements due to its novelty. We propose four new visualizations based on one million minutes of sensor data (n=70). We evaluate our visualizations with stress researchers (n=6) to gain first insights into their usability and usefulness in JITAI design. Our results indicate that spatio-temporal visualizations help identify and explain between- and within-person variability in stress patterns, and contextual visualizations enable decisions regarding the timing, content, and modality of intervention. Interestingly, a granular representation is considered informative but noise-prone; an abstract representation is the preferred starting point for designing JITAIs. PMID:26539566
Cross-Modal Retrieval With CNN Visual Features: A New Baseline.
Wei, Yunchao; Zhao, Yao; Lu, Canyi; Wei, Shikui; Liu, Luoqi; Zhu, Zhenfeng; Yan, Shuicheng
2017-02-01
Recently, convolutional neural network (CNN) visual features have demonstrated their powerful ability as a universal representation for various recognition tasks. In this paper, cross-modal retrieval with CNN visual features is implemented with several classic methods. Specifically, off-the-shelf CNN visual features are extracted from a CNN model pretrained on ImageNet, with more than one million images from 1000 object categories, as a generic image representation to tackle cross-modal retrieval. To further enhance the representational ability of the CNN visual features, a fine-tuning step is performed on each target data set using the open-source Caffe CNN library, starting from the ImageNet-pretrained model. In addition, we propose a deep semantic matching method to address cross-modal retrieval for samples annotated with one or multiple labels. Extensive experiments on five popular publicly available data sets clearly demonstrate the superiority of CNN visual features for cross-modal retrieval.
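The retrieval setup can be illustrated without any deep-learning stack: treat the CNN image features and the text-side features as given, learn a linear (ridge) map from the text space into the image-feature space, and rank images by cosine similarity. Everything below is a simulated stand-in under that assumption, not the paper's actual features or matching method:

```python
import numpy as np

def cosine_sim(A, B):
    """Row-wise cosine similarity between two sets of feature vectors."""
    A = A / np.linalg.norm(A, axis=1, keepdims=True)
    B = B / np.linalg.norm(B, axis=1, keepdims=True)
    return A @ B.T

# Simulated stand-ins for CNN image features and text features that
# share 5 underlying semantic classes.
rng = np.random.default_rng(0)
labels = rng.integers(0, 5, size=40)
img_means = 3 * rng.normal(size=(5, 128))
txt_means = 3 * rng.normal(size=(5, 64))
img = img_means[labels] + rng.normal(size=(40, 128))
txt = txt_means[labels] + rng.normal(size=(40, 64))

# Ridge-regression map from the text space into the CNN feature space.
W = np.linalg.solve(txt.T @ txt + 1e-2 * np.eye(64), txt.T @ img)

# Text-to-image retrieval: rank images by similarity to each mapped query.
ranking = np.argsort(-cosine_sim(txt @ W, img), axis=1)
accuracy = (labels[ranking[:, 0]] == labels).mean()
```

With a clear class signal, top-1 retrieval accuracy should land well above the 20% chance level, which is the basic effect cross-modal baselines of this kind aim to demonstrate.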
A comparison of haptic material perception in blind and sighted individuals.
Baumgartner, Elisabeth; Wiebel, Christiane B; Gegenfurtner, Karl R
2015-10-01
We investigated material perception in blind participants to explore the influence of visual experience on material representations and the relationship between visual and haptic material perception. In a previous study with sighted participants, we had found participants' visual and haptic judgments of material properties to be very similar (Baumgartner, Wiebel, & Gegenfurtner, 2013). In a categorization task, however, visual exploration had led to higher categorization accuracy than haptic exploration. Here, we asked congenitally blind participants to explore different materials haptically and rate several material properties in order to assess the role of the visual sense for the emergence of haptic material perception. Principal components analyses combined with a procrustes superimposition showed that the material representations of blind and blindfolded sighted participants were highly similar. We also measured haptic categorization performance, which was equal for the two groups. We conclude that haptic material representations can emerge independently of visual experience, and that there are no advantages for either group of observers in haptic categorization. Copyright © 2015 Elsevier Ltd. All rights reserved.
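The Procrustes comparison of representational spaces mentioned above can be sketched directly with SciPy: superimposition removes translation, scale, and rotation, so the leftover disparity indexes genuine dissimilarity between two groups' material layouts. The data below are hypothetical stand-ins for per-group principal-component scores, not the study's ratings:

```python
import numpy as np
from scipy.spatial import procrustes

# Hypothetical stand-ins: 8 materials embedded in the first two principal
# components of each group's property ratings. Group B's space is a
# rotated, rescaled copy of group A's plus a little noise.
rng = np.random.default_rng(2)
base = rng.normal(size=(8, 2))
group_a = base + 0.05 * rng.normal(size=(8, 2))
theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
group_b = 2.0 * (base @ R) + 0.05 * rng.normal(size=(8, 2))

# After optimal superimposition, near-zero disparity means the two
# representational spaces are essentially the same layout.
_, _, disparity = procrustes(group_a, group_b)
```

A disparity close to zero, as here, is the pattern the abstract reports for blind versus blindfolded sighted participants.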
The Characteristics and Limits of Rapid Visual Categorization
Fabre-Thorpe, Michèle
2011-01-01
Visual categorization appears both effortless and virtually instantaneous. The study by Thorpe et al. (1996) was the first to estimate the processing time necessary to perform fast visual categorization of animals in briefly flashed (20 ms) natural photographs. They observed a large differential EEG activity between target and distracter correct trials that developed from 150 ms after stimulus onset, a value that was later shown to be even shorter in monkeys! With such strong processing time constraints, it was difficult to escape the conclusion that rapid visual categorization was relying on massively parallel, essentially feed-forward processing of visual information. Since 1996, we have conducted a large number of studies to determine the characteristics and limits of fast visual categorization. The present chapter will review some of the main results obtained. I will argue that rapid object categorizations in natural scenes can be done without focused attention and are most likely based on coarse and unconscious visual representations activated with the first available (magnocellular) visual information. Fast visual processing proved efficient for the categorization of large superordinate object or scene categories, but shows its limits when more detailed basic representations are required. The representations for basic objects (dogs, cars) or scenes (mountain or sea landscapes) need additional processing time to be activated. This finding is at odds with the widely accepted idea that such basic representations are at the entry level of the system. Interestingly, focused attention is still not required to perform these time-consuming basic categorizations. Finally, we will show that object and context processing can interact very early in an ascending wave of visual information processing. We will discuss how such data could result from our experience with a highly structured and predictable surrounding world that shaped neuronal visual selectivity. PMID:22007180
The media of sociology: tight or loose translations?
Guggenheim, Michael
2015-06-01
Sociologists have increasingly come to recognize that the discipline has unduly privileged textual representations, but efforts to incorporate visual and other media are still only beginning. This paper develops an analysis of the ways objects of knowledge are translated into other media, in order to understand the visual practices of sociology and to point out unused possibilities. I argue that the discourse on visual sociology, by assuming that photographs are less objective than text, rests on an asymmetric media-determinism and on a misleading notion of objectivity. Instead, I suggest analysing media with the concept of translations. I introduce several kinds of translations, most centrally the distinction between tight and loose ones. I show that many sciences, such as biology, focus on tight translations, using a variety of media and manipulating both research objects and representations. Sociology, in contrast, uses both tight and loose translations, but uses the latter only for texts. For visuals, sociology restricts itself to what I call 'the documentary': focusing on mechanical recording technologies without manipulating either the object of research or the representation. I conclude by discussing three rare examples of what is largely excluded in sociology: visual loose translations, visual tight translations based on non-mechanical recording technologies, and visual tight translations based on mechanical recording technologies that include the manipulation of both object and representation. © London School of Economics and Political Science 2015.
Carl Linnaeus and the visual representation of nature.
Charmantier, Isabelle
2011-01-01
The Swedish naturalist Carl Linnaeus (1707-1778) is reputed to have transformed botanical practice by shunning the process of illustrating plants and relying on the primacy of literary descriptions of plant specimens. Botanists and historians have long debated Linnaeus's capacities as a draftsman. While some of his detailed sketches of plants and insects reveal a sure hand, his more general drawings of landscapes and people seem ill-executed. The overwhelming consensus, based mostly on his Lapland diary (1732), is that Linnaeus could not draw. Little has been said, however, on the role of drawing and other visual representations in Linnaeus's daily work as seen in his other numerous manuscripts. These manuscripts, held mostly at the Linnean Society of London, are peppered with sketches, maps, tables, and diagrams. Reassessing these manuscripts, along with the printed works that also contain illustrations of plant species, shows that Linnaeus's thinking was profoundly visual and that he routinely used visual representational devices in his various publications. This paper aims to explore the full range of visual representations Linnaeus used through his working life, and to reevaluate the epistemological value of visualization in the making of natural knowledge. By analyzing Linnaeus's use of drawings, maps, tables, and diagrams, I will show that he did not, as has been asserted, reduce the discipline of botany to text, and that his visual thinking played a fundamental role in his construction of new systems of classification.
Hemispheric asymmetry of liking for representational and abstract paintings.
Nadal, Marcos; Schiavi, Susanna; Cattaneo, Zaira
2017-10-13
Although the neural correlates of the appreciation of aesthetic qualities have been the target of much research in the past decade, few experiments have explored the hemispheric asymmetries in underlying processes. In this study, we used a divided visual field paradigm to test for hemispheric asymmetries in men's and women's preference for abstract and representational artworks. Both male and female participants liked representational paintings more when presented in the right visual field, whereas preference for abstract paintings was unaffected by presentation hemifield. We hypothesize that this result reflects a facilitation of the sort of visual processes relevant to laypeople's liking for art, specifically the local processing of highly informative object features, when artworks are presented in the right visual field, given the left hemisphere's advantage in processing such features.
Student Interpretations of Phylogenetic Trees in an Introductory Biology Course
ERIC Educational Resources Information Center
Dees, Jonathan; Momsen, Jennifer L.; Niemi, Jarad; Montplaisir, Lisa
2014-01-01
Phylogenetic trees are widely used visual representations in the biological sciences and the most important visual representations in evolutionary biology. Therefore, phylogenetic trees have also become an important component of biology education. We sought to characterize reasoning used by introductory biology students in interpreting taxa…
Eye Detection and Tracking for Intelligent Human Computer Interaction
2006-02-01
P. Meer and I. Weiss, “Smoothed Differentiation Filters for Images”, Journal of Visual Communication and Image Representation, 3(1):58-72, 1992.
Higher Level Visual Cortex Represents Retinotopic, Not Spatiotopic, Object Location
Kanwisher, Nancy
2012-01-01
The crux of vision is to identify objects and determine their locations in the environment. Although initial visual representations are necessarily retinotopic (eye centered), interaction with the real world requires spatiotopic (absolute) location information. We asked whether higher level human visual cortex—important for stable object recognition and action—contains information about retinotopic and/or spatiotopic object position. Using functional magnetic resonance imaging multivariate pattern analysis techniques, we found information about both object category and object location in each of the ventral, dorsal, and early visual regions tested, replicating previous reports. By manipulating fixation position and stimulus position, we then tested whether these location representations were retinotopic or spatiotopic. Crucially, all location information was purely retinotopic. This pattern persisted when location information was irrelevant to the task, and even when spatiotopic (not retinotopic) stimulus position was explicitly emphasized. We also conducted a “searchlight” analysis across our entire scanned volume to explore additional cortex but again found predominantly retinotopic representations. The lack of explicit spatiotopic representations suggests that spatiotopic object position may instead be computed indirectly and continually reconstructed with each eye movement. Thus, despite our subjective impression that visual information is spatiotopic, even in higher level visual cortex, object location continues to be represented in retinotopic coordinates. PMID:22190434
[Neural basis of self-face recognition: social aspects].
Sugiura, Motoaki
2012-07-01
Considering the importance of the face in social survival, and evidence from evolutionary psychology on visual self-recognition, it is reasonable to expect neural mechanisms for higher social-cognitive processes to underlie self-face recognition. A decade of neuroimaging studies has, however, not provided an encouraging finding in this respect. Self-face-specific activation has typically been reported in the areas for sensory-motor integration in the right lateral cortices. This observation appears to reflect the physical nature of the self-face, whose representation is developed via the detection of contingency between one's own action and sensory feedback. We have recently revealed that the medial prefrontal cortex, implicated in socially nuanced self-referential processing, is activated during self-face recognition under a rich social context where multiple other faces are available for reference. The posterior cingulate cortex has also exhibited this activation modulation and, in a separate experiment, showed a response to an attractively manipulated self-face, suggesting its relevance to positive self-value. Furthermore, the regions in the right lateral cortices typically showing self-face-specific activation have responded also to the face of one's close friend under the rich social context. This observation is potentially explained by the fact that the contingency detection for physical self-recognition also plays a role in physical social interaction, which characterizes the representation of personally familiar people. These findings demonstrate that neuroscientific exploration reveals multiple facets of the relationship between self-face recognition and social-cognitive processes, and that, technically, the manipulation of social context is key to its success.
A Review of Visual Representations of Physiologic Data
2016-01-01
Background: Physiological data are derived from electrodes attached directly to patients. Modern patient monitors are capable of sampling data at frequencies in the range of several million bits every hour, so the potential for cognitive threat arising from information overload and diminished situational awareness becomes increasingly relevant. A systematic review was conducted to identify novel visual representations of physiologic data that address cognitive, analytic, and monitoring requirements in critical care environments. Objective: The aims of this review were to identify knowledge pertaining to (1) support for conveying event information via tri-event parameters; (2) identification of the use of visual variables across all physiologic representations; (3) aspects of effective design principles and methodology; (4) frequency of expert consultations; and (5) support for user engagement and heuristics for future developments. Methods: A review was completed of papers published as of August 2016. Titles were first collected and analyzed against inclusion criteria. Abstracts resulting from the first pass were then analyzed to produce a final set of full papers. Each full paper was passed through a data extraction form eliciting data for comparative analysis. Results: In total, 39 full papers met all criteria and were selected for full review. Results revealed great diversity in visual representations of physiological data. Visual representations spanned 4 groups: tabular, graph-based, object-based, and metaphoric displays. The metaphoric display was the most popular (n=19), followed by waveform displays typical of the single-sensor-single-indicator paradigm (n=18), and finally object displays (n=9) that utilized spatiotemporal elements to highlight changes in physiologic status. Results obtained from experiments and evaluations suggest that specifics related to the optimal use of visual variables, such as color, shape, size, and texture, have not been fully understood. Relationships between outcomes and the users’ involvement in the design process also require further investigation. A very limited subset of visual representations (n=3) supports interactive functionality for basic analysis, and only one display allows the user to perform analysis including more than one patient. Conclusions: Results from the review suggest positive outcomes when visual representations extend beyond the typical waveform displays; however, numerous challenges remain. In particular, the challenge of extensibility limits their applicability to certain subsets or locations, the challenge of interoperability limits their expressiveness beyond physiologic data, and the challenge of instantaneity limits the extent of interactive user engagement. PMID:27872033
A Novel Cylindrical Representation for Characterizing Intrinsic Properties of Protein Sequences.
Yu, Jia-Feng; Dou, Xiang-Hua; Wang, Hong-Bo; Sun, Xiao; Zhao, Hui-Ying; Wang, Ji-Hua
2015-06-22
The composition and sequence order of amino acid residues are the two most important characteristics to describe a protein sequence. Graphical representations facilitate visualization of biological sequences and produce biologically useful numerical descriptors. In this paper, we propose a novel cylindrical representation by placing the 20 amino acid residue types in a circle and sequence positions along the z axis. This representation allows visualization of the composition and sequence order of amino acids at the same time. Ten numerical descriptors and one weighted numerical descriptor have been developed to quantitatively describe intrinsic properties of protein sequences on the basis of the cylindrical model. Their applications to similarity/dissimilarity analysis of nine ND5 proteins indicated that these numerical descriptors are more effective than several classical numerical matrices. Thus, the cylindrical representation obtained here provides a new useful tool for visualizing and characterizing protein sequences. An online server is available at http://biophy.dzu.edu.cn:8080/CNumD/input.jsp .
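The cylindrical construction above is simple enough to sketch directly: residue types sit at fixed angles on a unit circle and sequence position runs along z, so composition and order are visible at once. The residue ordering around the circle and the centroid descriptor below are hypothetical illustrations, not the paper's exact layout or its ten descriptors:

```python
import numpy as np

# The 20 standard residue types placed at equal angles on a unit circle;
# this alphabetical ordering is an assumption for illustration.
AA = "ACDEFGHIKLMNPQRSTVWY"
ANGLE = {aa: 2 * np.pi * i / 20 for i, aa in enumerate(AA)}

def cylindrical_coords(seq):
    """Map a protein sequence to (x, y, z) points on a cylinder:
    angle encodes residue type, z encodes sequence position."""
    return np.array([[np.cos(ANGLE[aa]), np.sin(ANGLE[aa]), z]
                     for z, aa in enumerate(seq, start=1)])

def centroid_descriptor(seq):
    """One simple numerical descriptor: the curve's centroid. Its (x, y)
    part reflects composition; its z part reflects sequence length."""
    return cylindrical_coords(seq).mean(axis=0)

d = centroid_descriptor("MKTAYIAKQR")   # toy 10-residue sequence
```

Descriptors of this kind can then feed a similarity/dissimilarity analysis by comparing the vectors obtained for different sequences.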
A visual analysis of gender bias in contemporary anatomy textbooks.
Parker, Rhiannon; Larkin, Theresa; Cockburn, Jon
2017-05-01
Empirical research has linked gender bias in medical education with negative attitudes and behaviors in healthcare providers. Yet it has been more than 20 years since research has considered the degree to which women and men are equally represented in anatomy textbooks. Furthermore, previous research has not explored beyond quantity of representation to also examine visual gender stereotypes and, in light of theoretical advancements in the area of intersectional research, the relationship between representations of gender and representations of ethnicity, body type, health, and age. This study aimed to determine the existence and representation of gender bias in the major anatomy textbooks used at Australian Medical Schools. A systematic visual content analysis was conducted on 6044 images in which sex/gender could be identified, sourced from 17 major anatomy textbooks published from 2008 to 2013. Further content analysis was performed on the 521 narrative images, which represent an unfolding story, found within the same textbooks. Results indicate that the representation of gender in images from anatomy textbooks remains predominantly male except within sex-specific sections. Further, other forms of bias were found to exist in: the visualization of stereotypical gendered emotions, roles and settings; the lack of ethnic, age, and body type diversity; and in the almost complete adherence to a sex/gender binary. Despite increased attention to gender issues in medicine, the visual representation of gender in medical curricula continues to be biased. The biased construction of gender in anatomy textbooks designed for medical education provides future healthcare providers with inadequate and unrealistic information about patients. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Hacker, Silke; Handels, Heinz
2006-03-01
Computer-based 3D atlases allow an interactive exploration of the human body. However, in most cases such 3D atlases are derived from one single individual, and therefore do not regard the variability of anatomical structures concerning their shape and size. Since the geometric variability across humans plays an important role in many medical applications, our goal is to develop a framework of an anatomical atlas for representation and visualization of the variability of selected anatomical structures. The basis of the project presented is the VOXEL-MAN atlas of inner organs that was created from the Visible Human data set. For modeling anatomical shapes and their variability we utilize "m-reps" which allow a compact representation of anatomical objects on the basis of their skeletons. As an example we used a statistical model of the kidney that is based on 48 different variants. With the integration of a shape description into the VOXEL-MAN atlas it is now possible to query and visualize different shape variations of an organ, e.g. by specifying a person's age or gender. In addition to the representation of individual shape variants, the average shape of a population can be displayed. Besides a surface representation, a volume-based representation of the kidney's shape variants is also possible. It results from the deformation of the reference kidney of the volume-based model using the m-rep shape description. In this way a realistic visualization of the shape variants becomes possible, as well as the visualization of the organ's internal structures.
Wu, Xiang; He, Sheng; Bushara, Khalaf; Zeng, Feiyan; Liu, Ying; Zhang, Daren
2012-10-01
Object recognition occurs even when environmental information is incomplete. Illusory contours (ICs), in which a contour is perceived though the contour edges are incomplete, have been extensively studied as an example of such a visual completion phenomenon. Despite the neural activity in response to ICs in visual cortical areas from low (V1 and V2) to high (LOC: the lateral occipital cortex) levels, the details of the neural processing underlying IC perception remain largely unclear. For example, how do the visual areas function in IC perception and how do they interact to achieve coherent contour perception? IC perception involves the process of completing the local discrete contour edges (contour completion) and the process of representing the global completed contour information (contour representation). Here, functional magnetic resonance imaging was used to dissociate contour completion and contour representation by varying each in opposite directions. The results show that the neural activity was stronger to stimuli with more contour completion than to stimuli with more contour representation in V1 and V2, which was the reverse of that in the LOC. When inspecting the neural activity change across the visual pathway, the activation remained high for the stimuli with more contour completion and increased for the stimuli with more contour representation. These results suggest distinct neural correlates of contour completion and contour representation, and the possible collaboration between the two processes during IC perception, indicating a neural connection between the discrete retinal input and the coherent visual percept. Copyright © 2011 Wiley Periodicals, Inc.
Facilitating Mathematical Practices through Visual Representations
ERIC Educational Resources Information Center
Murata, Aki; Stewart, Chana
2017-01-01
Effective use of mathematical representation is key to supporting student learning. In "Principles to Actions: Ensuring Mathematical Success for All" (NCTM 2014), "use and connect mathematical representations" is one of the effective Mathematics Teaching Practices. By using different representations, students examine concepts…
Hoffmann, Susanne; Vega-Zuniga, Tomas; Greiter, Wolfgang; Krabichler, Quirin; Bley, Alexandra; Matthes, Mariana; Zimmer, Christiane; Firzlaff, Uwe; Luksch, Harald
2016-11-01
The midbrain superior colliculus (SC) commonly features a retinotopic representation of visual space in its superficial layers, which is congruent with maps formed by multisensory neurons and motor neurons in its deep layers. Information flow between layers is suggested to enable the SC to mediate goal-directed orienting movements. While most mammals strongly rely on vision for orienting, some species such as echolocating bats have developed alternative strategies, which raises the question how sensory maps are organized in these animals. We probed the visual system of the echolocating bat Phyllostomus discolor and found that binocular high acuity vision is frontally oriented and thus aligned with the biosonar system, whereas monocular visual fields cover a large area of peripheral space. For the first time in echolocating bats, we could show that in contrast with other mammals, visual processing is restricted to the superficial layers of the SC. The topographic representation of visual space, however, followed the general mammalian pattern. In addition, we found a clear topographic representation of sound azimuth in the deeper collicular layers, which was congruent with the superficial visual space map and with a previously documented map of orienting movements. Especially for bats navigating at high speed in densely structured environments, it is vitally important to transfer and coordinate spatial information between sensors and motor systems. Here, we demonstrate first evidence for the existence of congruent maps of sensory space in the bat SC that might serve to generate a unified representation of the environment to guide motor actions. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Visual Working Memory Is Independent of the Cortical Spacing Between Memoranda.
Harrison, William J; Bays, Paul M
2018-03-21
The sensory recruitment hypothesis states that visual short-term memory is maintained in the same visual cortical areas that initially encode a stimulus' features. Although it is well established that the distance between features in visual cortex determines their visibility, a limitation known as crowding, it is unknown whether short-term memory is similarly constrained by the cortical spacing of memory items. Here, we investigated whether the cortical spacing between sequentially presented memoranda affects the fidelity of memory in humans (of both sexes). In a first experiment, we varied cortical spacing by taking advantage of the log-scaling of visual cortex with eccentricity, presenting memoranda in peripheral vision sequentially along either the radial or tangential visual axis with respect to the fovea. In a second experiment, we presented memoranda sequentially either within or beyond the critical spacing of visual crowding, a distance within which visual features cannot be perceptually distinguished due to their nearby cortical representations. In both experiments and across multiple measures, we found strong evidence that the ability to maintain visual features in memory is unaffected by cortical spacing. These results indicate that the neural architecture underpinning working memory has properties inconsistent with the known behavior of sensory neurons in visual cortex. Instead, the dissociation between perceptual and memory representations supports a role of higher cortical areas such as posterior parietal or prefrontal regions or may involve an as yet unspecified mechanism in visual cortex in which stimulus features are bound to their temporal order. SIGNIFICANCE STATEMENT Although much is known about the resolution with which we can remember visual objects, the cortical representation of items held in short-term memory remains contentious. A popular hypothesis suggests that memory of visual features is maintained via the recruitment of the same neural architecture in sensory cortex that encodes stimuli. We investigated this claim by manipulating the spacing in visual cortex between sequentially presented memoranda such that some items shared cortical representations more than others while preventing perceptual interference between stimuli. We found clear evidence that short-term memory is independent of the intracortical spacing of memoranda, revealing a dissociation between perceptual and memory representations. Our data indicate that working memory relies on different neural mechanisms from sensory perception. Copyright © 2018 Harrison and Bays.
Carlisle, Nancy B.; Woodman, Geoffrey F.
2014-01-01
Biased competition theory proposes that representations in working memory drive visual attention to select similar inputs. However, behavioral tests of this hypothesis have led to mixed results. These inconsistent findings could be due to the inability of behavioral measures to reliably detect the early, automatic effects on attentional deployment that the memory representations exert. Alternatively, executive mechanisms may govern how working memory representations influence attention based on higher-level goals. In the present study, we tested these hypotheses using the N2pc component of participants’ event-related potentials (ERPs) to directly measure the early deployments of covert attention. Participants searched for a target in an array that sometimes contained a memory-matching distractor. In Experiments 1–3, we manipulated the difficulty of the target discrimination and the proximity of distractors, but consistently observed that covert attention was deployed to the search targets and not the memory-matching distractors. In Experiment 4, we showed that when participants’ goal involved attending to memory-matching items, these items elicited a large and early N2pc. Our findings demonstrate that working memory representations alone are not sufficient to guide early deployments of visual attention to matching inputs and that goal-dependent executive control mediates the interactions between working memory representations and visual attention. PMID:21254796
Differences in peripheral sensory input to the olfactory bulb between male and female mice
NASA Astrophysics Data System (ADS)
Kass, Marley D.; Czarnecki, Lindsey A.; Moberly, Andrew H.; McGann, John P.
2017-04-01
Female mammals generally have a better sense of smell than males, but the biological basis of this difference is unknown. Here, we demonstrate sexually dimorphic neural coding of odorants by olfactory sensory neurons (OSNs), primary sensory neurons that physically contact odor molecules in the nose and provide the initial sensory input to the brain’s olfactory bulb. We performed in vivo optical neurophysiology to visualize odorant-evoked OSN synaptic output into olfactory bulb glomeruli in unmanipulated (gonad-intact) adult mice from both sexes, and found that in females odorant presentation evoked more rapid OSN signaling over a broader range of OSNs than in males. These spatiotemporal differences enhanced the contrast between the neural representations of chemically related odorants in females compared to males during stimulus presentation. Removing circulating sex hormones makes these signals slower and less discriminable in females, while in males they become faster and more discriminable, suggesting opposite roles for gonadal hormones in influencing male and female olfactory function. These results demonstrate that the famous sex difference in olfactory abilities likely originates in the primary sensory neurons, and suggest that hormonal modulation of the peripheral olfactory system could underlie differences in how males and females experience the olfactory world.
The neural subjective frame: from bodily signals to perceptual consciousness
Park, Hyeong-Dong; Tallon-Baudry, Catherine
2014-01-01
The report ‘I saw the stimulus’ operationally defines visual consciousness, but where does the ‘I’ come from? To account for the subjective dimension of perceptual experience, we introduce the concept of the neural subjective frame. The neural subjective frame would be based on the constantly updated neural maps of the internal state of the body and constitute a neural referential from which first person experience can be created. We propose to root the neural subjective frame in the neural representation of visceral information which is transmitted through multiple anatomical pathways to a number of target sites, including posterior insula, ventral anterior cingulate cortex, amygdala and somatosensory cortex. We review existing experimental evidence showing that the processing of external stimuli can interact with visceral function. The neural subjective frame is a low-level building block of subjective experience that is not explicitly experienced by itself and that is necessary but not sufficient for perceptual experience. It could also underlie other types of subjective experiences such as self-consciousness and emotional feelings. Because the neural subjective frame is tightly linked to homeostatic regulations involved in vigilance, it could also make a link between state and content consciousness. PMID:24639580
Decoding and disrupting left midfusiform gyrus activity during word reading
Hirshorn, Elizabeth A.; Li, Yuanning; Ward, Michael J.; Richardson, R. Mark; Fiez, Julie A.; Ghuman, Avniel Singh
2016-01-01
The nature of the visual representation for words has been fiercely debated for over 150 y. We used direct brain stimulation, pre- and postsurgical behavioral measures, and intracranial electroencephalography to provide support for, and elaborate upon, the visual word form hypothesis. This hypothesis states that activity in the left midfusiform gyrus (lmFG) reflects visually organized information about words and word parts. In patients with electrodes placed directly in their lmFG, we found that disrupting lmFG activity through stimulation, and later surgical resection in one of the patients, led to impaired perception of whole words and letters. Furthermore, using machine-learning methods to analyze the electrophysiological data from these electrodes, we found that information contained in early lmFG activity was consistent with an orthographic similarity space. Finally, the lmFG contributed to at least two distinguishable stages of word processing, an early stage that reflects gist-level visual representation sensitive to orthographic statistics, and a later stage that reflects more precise representation sufficient for the individuation of orthographic word forms. These results provide strong support for the visual word form hypothesis and demonstrate that across time the lmFG is involved in multiple stages of orthographic representation. PMID:27325763
Heuer, Anna; Schubö, Anna
2016-01-01
Visual working memory can be modulated according to changes in the cued task relevance of maintained items. Here, we investigated the mechanisms underlying this modulation. In particular, we studied the consequences of attentional selection for selected and unselected items, and the role of individual differences in the efficiency with which attention is deployed. To this end, performance in a visual working memory task as well as the CDA/SPCN and the N2pc, ERP components associated with visual working memory and attentional processes, were analysed. Selection during the maintenance stage was manipulated by means of two successively presented retrocues providing spatial information as to which items were most likely to be tested. Results show that attentional selection serves to robustly protect relevant representations in the focus of attention while unselected representations which may become relevant again still remain available. Individuals with larger retrocueing benefits showed higher efficiency of attentional selection, as indicated by the N2pc, and showed stronger maintenance-associated activity (CDA/SPCN). The findings add to converging evidence that focused representations are protected, and highlight the flexibility of visual working memory, in which information can be weighted according to its relevance.
An Evaluation of Multimodal Interactions with Technology while Learning Science Concepts
ERIC Educational Resources Information Center
Anastopoulou, Stamatina; Sharples, Mike; Baber, Chris
2011-01-01
This paper explores the value of employing multiple modalities to facilitate science learning with technology. In particular, it is argued that when multiple modalities are employed, learners construct strong relations between physical movement and visual representations of motion. Body interactions with visual representations, enabled by…
Comparing Visual Representations of DNA in Two Multimedia Presentations
ERIC Educational Resources Information Center
Cook, Michelle; Wiebe, Eric; Carter, Glenda
2011-01-01
This study is part of an ongoing research project examining middle school girls' attention to and interpretation of visual representations of DNA replication. Specifically, this research examined differences between two different versions of a multimedia presentation on DNA, where the second version of the presentation was redesigned as a result…
Three Strategies for the Critical Use of Statistical Methods in Psychological Research
ERIC Educational Resources Information Center
Campitelli, Guillermo; Macbeth, Guillermo; Ospina, Raydonal; Marmolejo-Ramos, Fernando
2017-01-01
We present three strategies to replace the null hypothesis statistical significance testing approach in psychological research: (1) visual representation of cognitive processes and predictions, (2) visual representation of data distributions and choice of the appropriate distribution for analysis, and (3) model comparison. The three strategies…
Is This Real Life? Is This Just Fantasy?: Realism and Representations in Learning with Technology
NASA Astrophysics Data System (ADS)
Sauter, Megan Patrice
Students often engage in hands-on activities during science learning; however, financial and practical constraints often limit the availability of these activities. Recent advances in technology have led to increases in the use of simulations and remote labs, which attempt to recreate hands-on science learning via computer. Remote labs and simulations are interesting from a cognitive perspective because they allow for different relations between representations and their referents. Remote labs are unique in that they provide a yoked representation, meaning that the representation of the lab on the computer screen is actually linked to that which it represents: a real scientific device. Simulations merely represent the lab and are not connected to any real scientific devices. However, the type of visual representations used in the lab may modify the effects of the lab technology. The purpose of this dissertation is to examine the relation between representation and technology and its effects on students' psychological experiences using online science labs. Undergraduates participated in two studies that investigated the relation between technology and representation. In the first study, participants performed either a remote lab or a simulation incorporating one of two visual representations, either a static image or a video of the equipment. Although participants in both lab conditions learned, participants in the remote lab condition had more authentic experiences. However, effects were moderated by the realism of the visual representation. Participants who saw a video were more invested and felt the experience was more authentic. In a second study, participants performed a remote lab and either saw the same video as in the first study, an animation, or the video and an animation. Most participants had an authentic experience because both representations evoked strong feelings of presence. 
However, participants who saw the video were more likely to believe the remote technology was real. Overall, the findings suggest that participants' experiences with technology were shaped by representation. Students had more authentic experiences using the remote lab than the simulation. However, incorporating visual representations that enhance presence made these experiences even more authentic and meaningful than afforded by the technology alone.
The Elicitation Interview Technique: Capturing People's Experiences of Data Representations.
Hogan, Trevor; Hinrichs, Uta; Hornecker, Eva
2016-12-01
Information visualization has become a popular tool to facilitate sense-making, discovery and communication in a large range of professional and casual contexts. However, evaluating visualizations is still a challenge. In particular, we lack techniques to help understand how visualizations are experienced by people. In this paper we discuss the potential of the Elicitation Interview technique to be applied in the context of visualization. The Elicitation Interview is a method for gathering detailed and precise accounts of human experience. We argue that it can be applied to help understand how people experience and interpret visualizations as part of exploration and data analysis processes. We describe the key characteristics of this interview technique and present a study we conducted to exemplify how it can be applied to evaluate data representations. Our study illustrates the types of insights this technique can bring to the fore, for example, evidence for deep interpretation of visual representations and the formation of interpretations and stories beyond the represented data. We discuss general visualization evaluation scenarios where the Elicitation Interview technique may be beneficial and specify what needs to be considered when applying this technique in a visualization context specifically.
Attention affects visual perceptual processing near the hand.
Cosman, Joshua D; Vecera, Shaun P
2010-09-01
Specialized, bimodal neural systems integrate visual and tactile information in the space near the hand. Here, we show that visuo-tactile representations allow attention to influence early perceptual processing, namely, figure-ground assignment. Regions that were reached toward were more likely than other regions to be assigned as foreground figures, and hand position competed with image-based information to bias figure-ground assignment. Our findings suggest that hand position allows attention to influence visual perceptual processing and that visual processes typically viewed as unimodal can be influenced by bimodal visuo-tactile representations.
Visualizing the semantic content of large text databases using text maps
NASA Technical Reports Server (NTRS)
Combs, Nathan
1993-01-01
A methodology for generating text map representations of the semantic content of text databases is presented. Text maps provide a graphical metaphor for conceptualizing and visualizing the contents and data interrelationships of large text databases. Described are a set of experiments conducted against the TIPSTER corpora of Wall Street Journal articles. These experiments provide an introduction to current work in the representation and visualization of documents by way of their semantic content.
Subject-level differences in reported locations of cutaneous tactile and nociceptive stimuli
Steenbergen, Peter; Buitenweg, Jan R.; Trojan, Jörg; Klaassen, Bart; Veltink, Peter H.
2012-01-01
Recent theoretical advances on the topic of body representations have raised the question whether spatial perception of touch and nociception involve the same representations. Various authors have established that subjective localizations of touch and nociception are displaced in a systematic manner. The relation between veridical stimulus locations and localizations can be described in the form of a perceptual map; these maps differ between subjects. Recently, evidence was found for a common set of body representations to underlie spatial perception of touch and slow and fast pain, which receive information from modality-specific primary representations. There are neurophysiological clues that the various cutaneous senses may not share the same primary representation. If this is the case, then differences in primary representations between touch and nociception may cause subject-dependent differences in perceptual maps of these modalities. We studied localization of tactile and nociceptive sensations on the forearm using electrocutaneous stimulation. The perceptual maps of these modalities differed at the group level. When assessed for individual subjects, the differences in localization varied in nature between subjects. The agreement of perceptual maps of the two modalities was moderate. These findings are consistent with a common internal body representation underlying spatial perception of touch and nociception. The subject-level differences suggest that in addition to these representations other aspects, possibly differences in primary representation and/or the influence of stimulus parameters, lead to differences in perceptual maps in individuals. PMID:23226126
Erdogan, Goker; Yildirim, Ilker; Jacobs, Robert A.
2015-01-01
People learn modality-independent, conceptual representations from modality-specific sensory signals. Here, we hypothesize that any system that accomplishes this feat will include three components: a representational language for characterizing modality-independent representations, a set of sensory-specific forward models for mapping from modality-independent representations to sensory signals, and an inference algorithm for inverting forward models—that is, an algorithm for using sensory signals to infer modality-independent representations. To evaluate this hypothesis, we instantiate it in the form of a computational model that learns object shape representations from visual and/or haptic signals. The model uses a probabilistic grammar to characterize modality-independent representations of object shape, uses a computer graphics toolkit and a human hand simulator to map from object representations to visual and haptic features, respectively, and uses a Bayesian inference algorithm to infer modality-independent object representations from visual and/or haptic signals. Simulation results show that the model infers identical object representations when an object is viewed, grasped, or both. That is, the model’s percepts are modality invariant. We also report the results of an experiment in which different subjects rated the similarity of pairs of objects in different sensory conditions, and show that the model provides a very accurate account of subjects’ ratings. Conceptually, this research significantly contributes to our understanding of modality invariance, an important type of perceptual constancy, by demonstrating how modality-independent representations can be acquired and used. Methodologically, it provides an important contribution to cognitive modeling, particularly an emerging probabilistic language-of-thought approach, by showing how symbolic and statistical approaches can be combined in order to understand aspects of human perception. PMID:26554704
Feature integration and object representations along the dorsal stream visual hierarchy
Perry, Carolyn Jeane; Fallah, Mazyar
2014-01-01
The visual system is split into two processing streams: a ventral stream that receives color and form information and a dorsal stream that receives motion information. Each stream processes that information hierarchically, with each stage building upon the previous. In the ventral stream this leads to the formation of object representations that ultimately allow for object recognition regardless of changes in the surrounding environment. In the dorsal stream, this hierarchical processing has classically been thought to lead to the computation of complex motion in three dimensions. However, there is evidence to suggest that there is integration of both dorsal and ventral stream information into motion computation processes, giving rise to intermediate object representations, which facilitate object selection and decision making mechanisms in the dorsal stream. First we review the hierarchical processing of motion along the dorsal stream and the building up of object representations along the ventral stream. Then we discuss recent work on the integration of ventral and dorsal stream features that lead to intermediate object representations in the dorsal stream. Finally we propose a framework describing how and at what stage different features are integrated into dorsal visual stream object representations. Determining the integration of features along the dorsal stream is necessary to understand not only how the dorsal stream builds up an object representation but also which computations are performed on object representations instead of local features. PMID:25140147
The functional architecture of the ventral temporal cortex and its role in categorization
Grill-Spector, Kalanit; Weiner, Kevin S.
2014-01-01
Visual categorization is thought to occur in the human ventral temporal cortex (VTC), but how this categorization is achieved is still largely unknown. In this Review, we consider the computations and representations that are necessary for categorization and examine how the microanatomical and macroanatomical layout of the VTC might optimize them to achieve rapid and flexible visual categorization. We propose that efficient categorization is achieved by organizing representations in a nested spatial hierarchy in the VTC. This spatial hierarchy serves as a neural infrastructure for the representational hierarchy of visual information in the VTC and thereby enables flexible access to category information at several levels of abstraction. PMID:24962370
Heiser, Laura M; Berman, Rebecca A; Saunders, Richard C; Colby, Carol L
2005-11-01
With each eye movement, a new image impinges on the retina, yet we do not notice any shift in visual perception. This perceptual stability indicates that the brain must be able to update visual representations to take our eye movements into account. Neurons in the lateral intraparietal area (LIP) update visual representations when the eyes move. The circuitry that supports these updated representations remains unknown, however. In this experiment, we asked whether the forebrain commissures are necessary for updating in area LIP when stimulus representations must be updated from one visual hemifield to the other. We addressed this question by recording from LIP neurons in split-brain monkeys during two conditions: stimulus traces were updated either across or within hemifields. Our expectation was that across-hemifield updating activity in LIP would be reduced or abolished after transection of the forebrain commissures. Our principal finding is that LIP neurons can update stimulus traces from one hemifield to the other even in the absence of the forebrain commissures. This finding provides the first evidence that representations in parietal cortex can be updated without the use of direct cortico-cortical links. The second main finding is that updating activity in LIP is modified in the split-brain monkey: across-hemifield signals are reduced in magnitude and delayed in onset compared with within-hemifield signals, which indicates that the pathways for across-hemifield updating are less effective in the absence of the forebrain commissures. Together these findings reveal a dynamic circuit that contributes to updating spatial representations.
Attention during natural vision warps semantic representation across the human brain.
Çukur, Tolga; Nishimoto, Shinji; Huth, Alexander G; Gallant, Jack L
2013-06-01
Little is known about how attention changes the cortical representation of sensory information in humans. On the basis of neurophysiological evidence, we hypothesized that attention causes tuning changes to expand the representation of attended stimuli at the cost of unattended stimuli. To investigate this issue, we used functional magnetic resonance imaging to measure how semantic representation changed during visual search for different object categories in natural movies. We found that many voxels across occipito-temporal and fronto-parietal cortex shifted their tuning toward the attended category. These tuning shifts expanded the representation of the attended category and of semantically related, but unattended, categories, and compressed the representation of categories that were semantically dissimilar to the target. Attentional warping of semantic representation occurred even when the attended category was not present in the movie; thus, the effect was not a target-detection artifact. These results suggest that attention dynamically alters visual representation to optimize processing of behaviorally relevant objects during natural vision.
Hollingworth, Andrew; Hwang, Seongmin
2013-10-19
We examined the conditions under which a feature value in visual working memory (VWM) recruits visual attention to matching stimuli. Previous work has suggested that VWM supports two qualitatively different states of representation: an active state that interacts with perceptual selection and a passive (or accessory) state that does not. An alternative hypothesis is that VWM supports a single form of representation, with the precision of feature memory controlling whether or not the representation interacts with perceptual selection. The results of three experiments supported the dual-state hypothesis. We established conditions under which participants retained a relatively precise representation of a particular colour. If the colour was immediately task relevant, it reliably recruited attention to matching stimuli. However, if the colour was not immediately task relevant, it failed to interact with perceptual selection. Feature maintenance in VWM is not necessarily equivalent to feature-based attentional selection.
Gabbard, Carl; Lee, Jihye; Caçola, Priscila
2013-01-01
This study examined the role of visual working memory when transforming visual representations to motor representations in the context of motor imagery. Participants viewed randomized number sequences of three, four, and five digits, and then reproduced the sequence by finger tapping using motor imagery or actually executing the movements; movement duration was recorded. One group viewed the stimulus for three seconds and responded immediately, while the second group had a three-second view followed by a three-second blank screen delay before responding. As expected, delay group times were longer with each condition and digit load. Whereas correlations between imagined and executed actions (temporal congruency) were significant in a positive direction for both groups, interestingly, the delay group's values were significantly stronger. That outcome prompts speculation that delay influenced the congruency between motor representation and actual execution.
NASA Astrophysics Data System (ADS)
Matuk, Camillia Faye
Visual representations are central to expert scientific thinking. Meanwhile, novices tend toward narrative conceptions of scientific phenomena. Until recently, however, relationships between visual design, narrative thinking, and their impacts on learning science have only been theoretically pursued. This dissertation first synthesizes different disciplinary perspectives, then offers a mixed-methods investigation into interpretations of scientific representations. Finally, it considers design issues associated with narrative and visual imagery, and explores the possibilities of a pedagogical notation to scaffold the understanding of a standard scientific notation. Throughout, I distinguish two categories of visual media by their relation to narrative: Narrative visual media, which convey content via narrative structure, and Conceptual visual media, which convey states of relationships among objects. Given the role of narrative in framing conceptions of scientific phenomena and perceptions of its representations, I suggest that novices are especially prone to construe both kinds of media in narrative terms. To illustrate, I first describe how novices make meaning of the science conveyed in narrative visual media. Vignettes of an undergraduate student's interpretation of a cartoon about natural selection; and of four 13-year-olds' readings of a comic book about human papillomavirus infection, together demonstrate conditions under which designed visual narrative elements facilitate or hinder understanding. I next consider the interpretation of conceptual visual media with an example of an expert notation from evolutionary biology, the cladogram. 
By combining clinical interview methods with experimental design, I show how undergraduate students' narrative theories of evolution frame perceptions of the diagram (Study 1); I demonstrate the flexibility of symbolic meaning, both with the content assumed (Study 2A), and with alternate manners of presenting the diagram (Study 2B); finally, I show the effects of content assumptions on the diagrams students invent of phylogenetic data (Study 3A), and how first inventing a diagram influences later interpretations of the standard notation (Study 3B). Lastly, I describe the prototype design and pilot test of an interactive diagram to scaffold biology students' understanding of this expert scientific notation. Insights from this dissertation inform the design of more pedagogically useful representations that might support students' developing fluency with expert scientific representations.
Ambiguous science and the visual representation of the real
NASA Astrophysics Data System (ADS)
Newbold, Curtis Robert
The emergence of visual media as prominent and even expected forms of communication in nearly all disciplines, including those scientific, has raised new questions about how the art and science of communication epistemologically affect the interpretation of scientific phenomena. In this dissertation I explore how the influence of aesthetics in visual representations of science inevitably creates ambiguous meanings. As a means to improve visual literacy in the sciences, I call awareness to the ubiquity of visual ambiguity and its importance and relevance in scientific discourse. To do this, I conduct a literature review that spans interdisciplinary research in communication, science, art, and rhetoric. Furthermore, I create a paradoxically ambiguous taxonomy, which functions to exploit the nuances of visual ambiguities and their role in scientific communication. I then extrapolate the taxonomy of visual ambiguity and from it develop an ambiguous, rhetorical heuristic, the Tetradic Model of Visual Ambiguity. The Tetradic Model is applied to a case example of a scientific image as a demonstration of how scientific communicators may increase their awareness of the epistemological effects of ambiguity in the visual representations of science. I conclude by demonstrating how scientific communicators may make productive use of visual ambiguity, even in communications of objective science, and I argue how doing so strengthens scientific communicators' visual literacy skills and their ability to communicate more ethically and effectively.
NASA Astrophysics Data System (ADS)
Wilder, Anna
The purpose of this study was to investigate the effects of a visualization-centered curriculum, Hemoglobin: A Case of Double Identity, on conceptual understanding and representational competence in high school biology. Sixty-nine students enrolled in three sections of freshman biology taught by the same teacher participated in this study. Online Chemscape Chime computer-based molecular visualizations were incorporated into the 10-week curriculum to introduce students to fundamental structure and function relationships. Measures used in this study included a Hemoglobin Structure and Function Test, Mental Imagery Questionnaire, Exam Difficulty Survey, the Student Assessment of Learning Gains, the Group Assessment of Logical Thinking, the Attitude Toward Science in School Assessment, audiotapes of student interviews, students' artifacts, weekly unit activity surveys, informal researcher observations and a teacher's weekly questionnaire. The Hemoglobin Structure and Function Test, consisting of Parts A and B, was administered as a pre- and posttest. Part A used exclusively verbal test items to measure conceptual understanding, while Part B used visual-verbal test items to measure conceptual understanding and representational competence. Results of the Hemoglobin Structure and Function pre- and posttest revealed statistically significant gains in conceptual understanding and representational competence, suggesting the visualization-centered curriculum implemented in this study was effective in supporting positive learning outcomes. The large positive correlation between posttest results on Part A, composed of all-verbal test items, and Part B, using visual-verbal test items, suggests this curriculum supported students' mutual development of conceptual understanding and representational competence.
Evidence based on student interviews, Student Assessment of Learning Gains ratings and weekly activity surveys indicated positive attitudes toward the use of Chemscape Chime software and the computer-based molecular visualization activities as learning tools. Evidence from these same sources also indicated that students felt computer-based molecular visualization activities in conjunction with other classroom activities supported their learning. Implications for instructional design are discussed.
ERIC Educational Resources Information Center
van Garderen, Delinda; Scheuermann, Amy; Poch, Apryl; Murray, Mary M.
2018-01-01
The use of visual representations (VRs) in mathematics is a strongly recommended practice in special education. Although recommended, little is known about special educators' knowledge of and instructional emphasis about VRs. Therefore, in this study, the authors examined special educators' own knowledge of and their instructional emphasis with…
Visual Hemispheric Specialization: A Computational Theory. Technical Report #7.
ERIC Educational Resources Information Center
Kosslyn, Stephen M.
Visual recognition, navigation, tracking, and imagery are posited to involve some of the same types of representations and processes. The first part of this paper develops a theory of some of the shared types of representations and processing modules. The theory is developed in light of neurophysiological and neuroanatomical data from non-human…
The Role of Visual Experience on the Representation and Updating of Novel Haptic Scenes
ERIC Educational Resources Information Center
Pasqualotto, Achille; Newell, Fiona N.
2007-01-01
We investigated the role of visual experience on the spatial representation and updating of haptic scenes by comparing recognition performance across sighted, congenitally and late blind participants. We first established that spatial updating occurs in sighted individuals to haptic scenes of novel objects. All participants were required to…
External Visual Representations in Science Learning: The Case of Relations among System Components
ERIC Educational Resources Information Center
Eilam, Billie; Poyas, Yael
2010-01-01
How do external visual representations (e.g., graph, diagram) promote or constrain students' ability to identify system components and their interrelations, to reinforce a systemic view through the application of the STS approach? University students (N = 150) received information cards describing cellphones' communication system and its subsystem…
ERIC Educational Resources Information Center
Demmans Epp, Carrie; Bull, Susan
2015-01-01
Adding uncertainty information to visualizations is becoming increasingly common across domains since its addition helps ensure that informed decisions are made. This work has shown the difficulty that is inherent to representing uncertainty. Moreover, the representation of uncertainty has yet to be thoroughly explored in educational domains even…
ERIC Educational Resources Information Center
Santos-Trigo, Manuel; Espinosa-Perez, Hugo; Reyes-Rodriguez, Aaron
2006-01-01
Technological tools have the potential to offer students the possibility to represent information and relationships embedded in problems and concepts in ways that involve numerical, algebraic, geometric, and visual approaches. In this paper, the authors present and discuss an example in which an initial representation of a mathematical object…
Readers Building Fictional Worlds: Visual Representations, Poetry and Cognition
ERIC Educational Resources Information Center
Giovanelli, Marcello
2017-01-01
This article explores the complex nature of the literature classroom by drawing on the cognitive linguistic framework "Text World Theory" to examine the teacher's role as facilitator and mediator of reading. Specifically, the article looks at how one teacher used visual representations as a way of allowing students to engage in a more…
ERIC Educational Resources Information Center
Abdullah, Nasarudin; Halim, Lilia; Zakaria, Effandi
2014-01-01
This study aimed to determine the impact of strategic thinking and visual representation approaches (VStops) on the achievement, conceptual knowledge, metacognitive awareness, awareness of problem-solving strategies, and student attitudes toward mathematical word problem solving among primary school students. The experimental group (N = 96)…
ERIC Educational Resources Information Center
Zelik, Daniel J.
2012-01-01
Cognitive Systems Engineering (CSE) has a history built, in part, on leveraging representational design to improve system performance. Traditionally, however, CSE has focused on visual representation of "monitored" processes--active, ongoing, and interconnected activities occurring in a system of interest and monitored by human…
ERIC Educational Resources Information Center
Parnafes, Orit
2012-01-01
This article presents a theoretical model of the process by which students construct and elaborate explanations of scientific phenomena using visual representations. The model describes progress in the underlying conceptual processes in students' explanations as a reorganization of fine-grained knowledge elements based on the Knowledge in Pieces…
Stereotyping in the Representation of Narrative Texts through Visual Reformulation.
ERIC Educational Resources Information Center
Porto, Melina
2003-01-01
Investigated the process of stereotyping in the representation of the content of narrative texts through visual reformulations. Subjects were Argentine college students enrolled in an English course at a university in Argentina. Reveals students' inability to transcend their cultural biases and points to an urgent need to address stereotypes in the…
Priming Contour-Deleted Images: Evidence for Intermediate Representations in Visual Object Recognition.
ERIC Educational Resources Information Center
Biederman, Irving; Cooper, Eric E.
1991-01-01
Speed and accuracy of identification of pictures of objects are facilitated by prior viewing. Contributions of image features, convex or concave components, and object models in a repetition priming task were explored in 2 studies involving 96 college students. Results provide evidence of intermediate representations in visual object recognition.…
Visual Tracking Based on Extreme Learning Machine and Sparse Representation
Wang, Baoxian; Tang, Linbo; Yang, Jinglin; Zhao, Baojun; Wang, Shuigen
2015-01-01
The existing sparse representation-based visual trackers mostly suffer both from high computational cost and from poor robustness. To address these issues, a novel tracking method is presented via combining sparse representation and an emerging learning technique, namely extreme learning machine (ELM). Specifically, visual tracking can be divided into two consecutive processes. Firstly, ELM is utilized to find the optimal separating hyperplane between the target observations and background ones. Thus, the trained ELM classification function is able to remove most of the candidate samples related to background contents efficiently, thereby reducing the total computational cost of the following sparse representation. Secondly, to further combine ELM and sparse representation, the resultant confidence values (i.e., probabilities to be a target) of samples on the ELM classification function are used to construct a new manifold learning constraint term of the sparse representation framework, which tends to achieve more robust results. Moreover, the accelerated proximal gradient method is used for deriving the optimal solution (in matrix form) of the constrained sparse tracking model. Additionally, the matrix form solution allows the candidate samples to be calculated in parallel, thereby leading to a higher efficiency. Experiments demonstrate the effectiveness of the proposed tracker. PMID:26506359
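The pruning stage described above can be sketched compactly: a basic ELM is a fixed random hidden layer followed by closed-form (least-squares) output weights, and its scores discard background-like candidates before the expensive sparse-coding step. The following is an illustrative sketch only, with my own toy data and names, not the authors' implementation:

```python
import numpy as np

def elm_train(X, y, n_hidden=50, seed=0):
    """Train a basic ELM: random hidden projection, least-squares output weights."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                          # random nonlinear features
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)    # closed-form output weights
    return W, b, beta

def elm_score(X, W, b, beta):
    """Confidence value per candidate sample (positive = target-like)."""
    return np.tanh(X @ W + b) @ beta

# Toy candidates: target patches cluster near +1, background patches near -1.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(1.0, 0.3, size=(100, 8)),
               rng.normal(-1.0, 0.3, size=(100, 8))])
y = np.hstack([np.ones(100), -np.ones(100)])

W, b, beta = elm_train(X, y)
scores = elm_score(X, W, b, beta)
keep = scores > 0   # only these candidates go on to the sparse-coding stage
```

In the tracker, only the `keep` subset would be passed to the sparse-representation solver, which is where the computational savings come from.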
Layher, Georg; Schrodt, Fabian; Butz, Martin V.; Neumann, Heiko
2014-01-01
The categorization of real world objects is often reflected in the similarity of their visual appearances. Such categories of objects do not necessarily form disjoint sets of objects, neither semantically nor visually. The relationship between categories can often be described in terms of a hierarchical structure. For instance, tigers and leopards form two separate mammalian categories, both of which are subcategories of the category Felidae. In recent decades, the unsupervised learning of categories of visual input stimuli has been addressed by numerous approaches in machine learning as well as in computational neuroscience. However, the question of what kind of mechanisms might be involved in the process of subcategory learning, or category refinement, remains a topic of active investigation. We propose a recurrent computational network architecture for the unsupervised learning of categorical and subcategorical visual input representations. During learning, the connection strengths of bottom-up weights from input to higher-level category representations are adapted according to the input activity distribution. In a similar manner, top-down weights learn to encode the characteristics of a specific stimulus category. Feedforward and feedback learning in combination realize an associative memory mechanism, enabling the selective top-down propagation of a category's feedback weight distribution. We suggest that the difference between the expected input encoded in the projective field of a category node and the current input pattern controls the amplification of feedforward-driven representations. Large enough differences trigger the recruitment of new representational resources and the establishment of additional (sub-) category representations. We demonstrate the temporal evolution of such learning and show how the proposed combination of an associative memory with a modulatory feedback integration successfully establishes category and subcategory representations.
PMID:25538637
Grajski, Kamil A.
2016-01-01
Mechanisms underlying the emergence and plasticity of representational discontinuities in the mammalian primary somatosensory cortical representation of the hand are investigated in a computational model. The model consists of an input lattice organized as a three-digit hand forward-connected to a lattice of cortical columns each of which contains a paired excitatory and inhibitory cell. Excitatory and inhibitory synaptic plasticity of feedforward and lateral connection weights is implemented as a simple covariance rule and competitive normalization. Receptive field properties are computed independently for excitatory and inhibitory cells and compared within and across columns. Within digit representational zones intracolumnar excitatory and inhibitory receptive field extents are concentric, single-digit, small, and unimodal. Exclusively in representational boundary-adjacent zones, intracolumnar excitatory and inhibitory receptive field properties diverge: excitatory cell receptive fields are single-digit, small, and unimodal; and the paired inhibitory cell receptive fields are bimodal, double-digit, and large. In simulated syndactyly (webbed fingers), boundary-adjacent intracolumnar receptive field properties reorganize to within-representation type; divergent properties are reacquired following syndactyly release. This study generates testable hypotheses for assessment of cortical laminar-dependent receptive field properties and plasticity within and between cortical representational zones. For computational studies, present results suggest that concurrent excitatory and inhibitory plasticity may underlie novel emergent properties. PMID:27504086
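The two plasticity ingredients named in this abstract, a covariance rule and competitive normalization, can be illustrated in a toy form. This sketch uses my own simplifying assumptions (rate units, non-negative weights, divisive normalization) and variable names; it is not the paper's model:

```python
import numpy as np

def covariance_step(W, pre, post, lr=0.05):
    """One covariance-rule weight update followed by competitive normalization.

    The covariance rule strengthens connections whose pre- and postsynaptic
    activities fluctuate together; normalization holds each unit's total
    incoming weight constant, so weight gains compete with each other."""
    dW = lr * np.outer(post - post.mean(), pre - pre.mean())
    W = np.clip(W + dW, 0.0, None)            # keep weights non-negative
    return W / W.sum(axis=1, keepdims=True)   # competitive (divisive) normalization

rng = np.random.default_rng(0)
W = rng.uniform(0.1, 1.0, size=(4, 6))        # 4 cortical units, 6 inputs
W /= W.sum(axis=1, keepdims=True)
pre = rng.uniform(size=6)                     # one pattern of input activity
post = rng.uniform(size=4)                    # resulting cortical activity
W2 = covariance_step(W, pre, post)
```

Iterating such steps over a stimulation regime (e.g., correlated activity across two "digits", as in simulated syndactyly) is what reshapes receptive field extents in models of this kind.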
Hemisphere-Dependent Attentional Modulation of Human Parietal Visual Field Representations
Silver, Michael A.
2015-01-01
Posterior parietal cortex contains several areas defined by topographically organized maps of the contralateral visual field. However, recent studies suggest that ipsilateral stimuli can elicit larger responses in the right than left hemisphere within these areas, depending on task demands. Here we determined the effects of spatial attention on the set of visual field locations (the population receptive field [pRF]) that evoked a response for each voxel in human topographic parietal cortex. A two-dimensional Gaussian was used to model the pRF in each voxel, and we measured the effects of attention on not only the center (preferred visual field location) but also the size (visual field extent) of the pRF. In both hemispheres, larger pRFs were associated with attending to the mapping stimulus compared with attending to a central fixation point. In the left hemisphere, attending to the stimulus also resulted in more peripheral preferred locations of contralateral representations, compared with attending fixation. These effects of attention on both pRF size and preferred location preserved contralateral representations in the left hemisphere. In contrast, attentional modulation of pRF size but not preferred location significantly increased representation of the ipsilateral (right) visual hemifield in right parietal cortex. Thus, attention effects in topographic parietal cortex exhibit hemispheric asymmetries similar to those seen in hemispatial neglect. Our findings suggest potential mechanisms underlying the behavioral deficits associated with this disorder. PMID:25589746
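The pRF model used in studies like this one is conceptually simple: each voxel's predicted response to a stimulus aperture is the overlap between the aperture and a 2D Gaussian over the visual field. A minimal sketch (grid, parameters, and function names are my own illustrative choices):

```python
import numpy as np

# Visual-field grid in degrees of visual angle
xs, ys = np.meshgrid(np.linspace(-10, 10, 101), np.linspace(-10, 10, 101))

def prf_response(x0, y0, sigma, stim):
    """Predicted response of a voxel whose pRF is a 2D Gaussian centered at
    (x0, y0) with size sigma, to a binary stimulus aperture `stim`."""
    g = np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2 * sigma ** 2))
    return (g * stim).sum() / g.sum()

stim = (xs < 0).astype(float)                     # stimulus in the left hemifield
r_contra = prf_response(-5.0, 0.0, 2.0, stim)     # pRF inside the stimulated hemifield
r_ipsi   = prf_response(+5.0, 0.0, 2.0, stim)     # pRF in the unstimulated hemifield
r_large  = prf_response(+5.0, 0.0, 4.0, stim)     # larger pRF, same ipsilateral center
```

The last line illustrates the effect reported above: enlarging a pRF (as attention does) without moving its center increases the response it picks up from the ipsilateral hemifield.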
van Lamsweerde, Amanda E; Johnson, Jeffrey S
2017-07-01
Maintaining visual working memory (VWM) representations recruits a network of brain regions, including the frontal, posterior parietal, and occipital cortices; however, it is unclear to what extent the occipital cortex is engaged in VWM after sensory encoding is completed. Noninvasive brain stimulation data show that stimulation of this region can affect working memory (WM) during the early consolidation time period, but it remains unclear whether it does so by influencing the number of items that are stored or their precision. In this study, we investigated whether single-pulse transcranial magnetic stimulation (spTMS) to the occipital cortex during VWM consolidation affects the quantity or quality of VWM representations. In three experiments, we disrupted VWM consolidation with either a visual mask or spTMS to retinotopic early visual cortex. We found robust masking effects on the quantity of VWM representations up to 200 msec poststimulus offset and smaller, more variable effects on WM quality. Similarly, spTMS decreased the quantity of VWM representations, but only when it was applied immediately following stimulus offset. Like visual masks, spTMS also produced small and variable effects on WM precision. The disruptive effects of both masks and TMS were greatly reduced or entirely absent within 200 msec of stimulus offset. However, there was a reduction in swap rate across all time intervals, which may indicate a sustained role of the early visual cortex in maintaining spatial information.
Perceptual Learning Selectively Refines Orientation Representations in Early Visual Cortex
Jehee, Janneke F.M.; Ling, Sam; Swisher, Jascha D.; van Bergen, Ruben S.; Tong, Frank
2013-01-01
Although practice has long been known to improve perceptual performance, the neural basis of this improvement in humans remains unclear. Using fMRI in conjunction with a novel signal detection-based analysis, we show that extensive practice selectively enhances the neural representation of trained orientations in the human visual cortex. Twelve observers practiced discriminating small changes in the orientation of a laterally presented grating over 20 or more daily one-hour training sessions. Training on average led to a two-fold improvement in discrimination sensitivity, specific to the trained orientation and the trained location, with minimal improvement found for untrained orthogonal orientations or for orientations presented in the untrained hemifield. We measured the strength of orientation-selective responses in individual voxels in early visual areas (V1–V4) using signal detection measures, both pre- and post-training. Although the overall amplitude of the BOLD response was no greater after training, practice nonetheless specifically enhanced the neural representation of the trained orientation at the trained location. This training-specific enhancement of orientation-selective responses was observed in the primary visual cortex (V1) as well as higher extrastriate visual areas V2–V4, and moreover, reliably predicted individual differences in the behavioral effects of perceptual learning. These results demonstrate that extensive training can lead to targeted functional reorganization of the human visual cortex, refining the cortical representation of behaviorally relevant information. PMID:23175828
Affective and contextual values modulate spatial frequency use in object recognition
Caplette, Laurent; West, Gregory; Gomot, Marie; Gosselin, Frédéric; Wicker, Bruno
2014-01-01
Visual object recognition is of fundamental importance in our everyday interaction with the environment. Recent models of visual perception emphasize the role of top-down predictions facilitating object recognition via initial guesses that limit the number of object representations that need to be considered. Several results suggest that this rapid and efficient object processing relies on the early extraction and processing of low spatial frequencies (LSF). The present study aimed to investigate the SF content of visual object representations and its modulation by contextual and affective values of the perceived object during a picture-name verification task. Stimuli consisted of pictures of objects equalized in SF content and categorized as having low or high affective and contextual values. To access the SF content of stored visual representations of objects, SFs of each image were then randomly sampled on a trial-by-trial basis. Results reveal that intermediate SFs between 14 and 24 cycles per object (2.3–4 cycles per degree) are correlated with fast and accurate identification for all categories of objects. Moreover, there was a significant interaction between affective and contextual values over the SFs correlating with fast recognition. These results suggest that affective and contextual values of a visual object modulate the SF content of its internal representation, thus highlighting the flexibility of the visual recognition system. PMID:24904514
Novice Interpretations of Visual Representations of Geosciences Data
NASA Astrophysics Data System (ADS)
Burkemper, L. K.; Arthurs, L.
2013-12-01
Past cognition research on individuals' perception and comprehension of bar and line graphs is substantive enough that it has resulted in the generation of graph design principles and graph comprehension theories; however, gaps remain in our understanding of how people process visual representations of data, especially of geologic and atmospheric data. This pilot project serves to build on others' prior research and begin filling the existing gaps. The primary objectives of this pilot project include: (i) design a novel data collection protocol based on a combination of paper-based surveys, think-aloud interviews, and eye-tracking tasks to investigate student data handling skills with simple to complex visual representations of geologic and atmospheric data, (ii) demonstrate that the protocol yields results that shed light on student data handling skills, and (iii) generate preliminary findings that support tentative but perhaps helpful recommendations on how to more effectively present these data to the non-scientist community and teach essential data handling skills. An effective protocol for the combined use of paper-based surveys, think-aloud interviews, and computer-based eye-tracking tasks for investigating cognitive processes involved in perceiving, comprehending, and interpreting visual representations of geologic and atmospheric data is instrumental to future research in this area. The outcomes of this pilot study provide the foundation upon which future, more in-depth and scaled-up investigations can build.
Furthermore, findings of this pilot project are sufficient for making at least tentative recommendations that can help inform (i) the design of physical attributes of visual representations of data, especially more complex representations, that may aid in improving students' data handling skills and (ii) instructional approaches that have the potential to aid students in more effectively handling visual representations of geologic and atmospheric data that they might encounter in a course, television news, newspapers and magazines, and websites. Such recommendations would also be the potential subject of future investigations and have the potential to influence both the design of data presentations for the public and instructional strategies not only in geoscience courses but also in other science, technology, engineering, and mathematics (STEM) courses.
Images as Representations: Visual Sources on Education and Childhood in the Past
ERIC Educational Resources Information Center
Dekker, Jeroen J.H.
2015-01-01
The challenge of using images for the history of education and childhood will be addressed in this article by looking at them as representations. Central is the relationship between representations and reality. The focus is on the power of paintings as representations of aspects of realities. First the meaning of representation for images as…
Khaligh-Razavi, Seyed-Mahdi; Henriksson, Linda; Kay, Kendrick; Kriegeskorte, Nikolaus
2017-02-01
Studies of the primate visual system have begun to test a wide range of complex computational object-vision models. Realistic models have many parameters, which in practice cannot be fitted using the limited amounts of brain-activity data typically available. Task performance optimization (e.g. using backpropagation to train neural networks) provides major constraints for fitting parameters and discovering nonlinear representational features appropriate for the task (e.g. object classification). Model representations can be compared to brain representations in terms of the representational dissimilarities they predict for an image set. This method, called representational similarity analysis (RSA), enables us to test the representational feature space as is (fixed RSA) or to fit a linear transformation that mixes the nonlinear model features so as to best explain a cortical area's representational space (mixed RSA). Like voxel/population-receptive-field modelling, mixed RSA uses a training set (different stimuli) to fit one weight per model feature and response channel (voxels here), so as to best predict the response profile across images for each response channel. We analysed response patterns elicited by natural images, which were measured with functional magnetic resonance imaging (fMRI). We found that early visual areas were best accounted for by shallow models, such as a Gabor wavelet pyramid (GWP). The GWP model performed similarly with and without mixing, suggesting that the original features already approximated the representational space, obviating the need for mixing. However, a higher ventral-stream visual representation (lateral occipital region) was best explained by the higher layers of a deep convolutional network and mixing of its feature set was essential for this model to explain the representation. 
We suspect that mixing was essential because the convolutional network had been trained to discriminate a set of 1000 categories, whose frequencies in the training set did not match their frequencies in natural experience or their behavioural importance. The latter factors might determine the representational prominence of semantic dimensions in higher-level ventral-stream areas. Our results demonstrate the benefits of testing both the specific representational hypothesis expressed by a model's original feature space and the hypothesis space generated by linear transformations of that feature space.
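The core of fixed RSA is straightforward to state in code: build a representational dissimilarity matrix (RDM) for each system by correlating condition response patterns, then correlate the RDMs' off-diagonal entries. This is a minimal sketch under my own assumptions (1 − Pearson r as the dissimilarity, Pearson comparison of RDMs; RSA studies often use Spearman, and mixed RSA additionally fits per-feature weights, which is omitted here):

```python
import numpy as np

def rdm(patterns):
    """RDM over conditions: 1 - Pearson r between rows (condition patterns)."""
    Z = (patterns - patterns.mean(1, keepdims=True)) / patterns.std(1, keepdims=True)
    return 1 - (Z @ Z.T) / patterns.shape[1]

def compare_rdms(a, b):
    """Fixed RSA: correlate the upper triangles of two RDMs."""
    iu = np.triu_indices_from(a, k=1)
    return np.corrcoef(a[iu], b[iu])[0, 1]

# Toy example: 12 conditions in 3 clusters; "brain" voxels are a noisy
# linear readout of the model features, so the RDMs should agree.
rng = np.random.default_rng(0)
protos = rng.normal(size=(3, 30))
model = np.repeat(protos, 4, axis=0) + 0.3 * rng.normal(size=(12, 30))
brain = model @ rng.normal(size=(30, 40)) + 0.3 * rng.normal(size=(12, 40))

match = compare_rdms(rdm(model), rdm(brain))
```

Because only the geometry of the condition set is compared, the model and the brain data may live in spaces of different dimensionality (30 features vs. 40 voxels here), which is what makes RSA convenient for model testing.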
Ostarek, Markus; Huettig, Falk
2017-03-01
The notion that processing spoken (object) words involves activation of category-specific representations in visual cortex is a key prediction of modality-specific theories of representation that contrasts with theories assuming dedicated conceptual representational systems abstracted away from sensorimotor systems. In the present study, we investigated whether participants can detect otherwise invisible pictures of objects when they are presented with the corresponding spoken word shortly before the picture appears. Our results showed facilitated detection for congruent ("bottle" → picture of a bottle) versus incongruent ("bottle" → picture of a banana) trials. A second experiment investigated the time-course of the effect by manipulating the timing of picture presentation relative to word onset and revealed that it arises as soon as 200-400 ms after word onset and decays at 600 ms after word onset. Together, these data strongly suggest that spoken words can rapidly activate low-level category-specific visual representations that affect the mere detection of a stimulus, that is, what we see. More generally, our findings fit best with the notion that spoken words activate modality-specific visual representations that are low level enough to provide information related to a given token and at the same time abstract enough to be relevant not only for previously seen tokens but also for generalizing to novel exemplars one has never seen before. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Spatial resolution in visual memory.
Ben-Shalom, Asaf; Ganel, Tzvi
2015-04-01
Representations in visual short-term memory are considered to contain relatively elaborated information on object structure. Conversely, representations in earlier stages of the visual hierarchy are thought to be dominated by a sensory-based, feed-forward buildup of information. In four experiments, we compared the spatial resolution of different object properties between two points in time along the processing hierarchy in visual short-term memory. Subjects were asked either to estimate the distance between objects or to estimate the size of one of the objects' features under two experimental conditions, of either a short or a long delay period between the presentation of the target stimulus and the probe. When different objects were referred to, similar spatial resolution was found for the two delay periods, suggesting that initial processing stages are sensitive to object-based properties. Conversely, superior resolution was found for the short, as compared with the long, delay when features were referred to. These findings suggest that initial representations in visual memory are hybrid in that they allow fine-grained resolution for object features alongside normal visual sensitivity to the segregation between objects. The findings are also discussed in reference to the distinction made in earlier studies between visual short-term memory and iconic memory.
LOD map--A visual interface for navigating multiresolution volume visualization.
Wang, Chaoli; Shen, Han-Wei
2006-01-01
In multiresolution volume visualization, a visual representation of level-of-detail (LOD) quality is important for us to examine, compare, and validate different LOD selection algorithms. While traditional methods rely on the final rendered images for quality measurement, we introduce the LOD map--an alternative representation of LOD quality and a visual interface for navigating multiresolution data exploration. Our measure for LOD quality is based on the formulation of entropy from information theory. The measure takes into account the distortion and contribution of multiresolution data blocks. A LOD map is generated through the mapping of key LOD ingredients to a treemap representation. The ordered treemap layout is used for relatively stable updates of the LOD map when the view or LOD changes. This visual interface not only indicates the quality of LODs in an intuitive way, but also provides immediate suggestions for possible LOD improvement through visually striking features. It also allows us to compare different views and perform rendering budget control. A set of interactive techniques is proposed to make LOD adjustment a simple and easy task. We demonstrate the effectiveness and efficiency of our approach on large scientific and medical data sets.
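The entropy-based LOD quality measure lends itself to a compact sketch. The abstract does not give the exact formula, so the choice below, treating each block's distortion × contribution as its probability mass and scoring the selection by Shannon entropy, is an assumption for illustration only; `lod_entropy` and its inputs are hypothetical names.

```python
import math

def lod_entropy(blocks):
    """Entropy-style quality score for a level-of-detail (LOD) selection.

    `blocks` is a list of (distortion, contribution) pairs, one per
    multiresolution data block; both values are assumed non-negative.
    Each block's significance is distortion * contribution; the
    significances are normalized into a probability distribution whose
    Shannon entropy is the score (higher means significance is spread
    more evenly across blocks).
    """
    sig = [d * c for d, c in blocks]
    total = sum(sig)
    if total == 0:
        return 0.0
    probs = [s / total for s in sig if s > 0]
    return -sum(p * math.log2(p) for p in probs)
```

Four equally significant blocks score log2(4) = 2 bits; a selection dominated by a single block scores near zero, flagging where refinement would pay off.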
Task relevance modulates the cortical representation of feature conjunctions in the target template.
Reeder, Reshanne R; Hanke, Michael; Pollmann, Stefan
2017-07-03
Little is known about the cortical regions involved in representing task-related content in preparation for visual task performance. Here we used representational similarity analysis (RSA) to investigate the BOLD response pattern similarity between task relevant and task irrelevant feature dimensions during conjunction viewing and target template maintenance prior to visual search. Subjects were cued to search for a spatial frequency (SF) or orientation of a Gabor grating and we measured BOLD signal during cue and delay periods before the onset of a search display. RSA of delay period activity revealed that widespread regions in frontal, posterior parietal, and occipitotemporal cortices showed general representational differences between task relevant and task irrelevant dimensions (e.g., orientation vs. SF). In contrast, RSA of cue period activity revealed sensory-related representational differences between cue images (regardless of task) at the occipital pole and additionally in the frontal pole. Our data show that task and sensory information are represented differently during viewing and during target template maintenance, and that task relevance modulates the representation of visual information across the cortex.
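At its core, the RSA computation described above builds a representational dissimilarity matrix (RDM) from response patterns. A minimal sketch, with invented BOLD patterns (rows are conditions, columns are voxels; all numbers are hypothetical):

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 minus the Pearson
    correlation between each pair of condition patterns (rows)."""
    return 1.0 - np.corrcoef(patterns)

# Hypothetical delay-period patterns (rows = conditions, cols = voxels).
patterns = np.array([
    [1.0, 0.2, 0.3, 0.9],   # orientation, task relevant
    [0.9, 0.3, 0.2, 1.0],   # orientation, task irrelevant
    [0.1, 1.0, 0.8, 0.2],   # spatial frequency, task relevant
])
d = rdm(patterns)
# The two orientation conditions end up far closer to each other
# than either is to the spatial-frequency condition.
```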
Vision and the representation of the surroundings in spatial memory
Tatler, Benjamin W.; Land, Michael F.
2011-01-01
One of the paradoxes of vision is that the world as it appears to us and the image on the retina at any moment are not much like each other. The visual world seems to be extensive and continuous across time. However, the manner in which we sample the visual environment is neither extensive nor continuous. How does the brain reconcile these differences? Here, we consider existing evidence from both static and dynamic viewing paradigms together with the logical requirements of any representational scheme that would be able to support active behaviour. While static scene viewing paradigms favour extensive, but perhaps abstracted, memory representations, dynamic settings suggest sparser and task-selective representation. We suggest that in dynamic settings where movement within extended environments is required to complete a task, the combination of visual input, egocentric and allocentric representations work together to allow efficient behaviour. The egocentric model serves as a coding scheme in which actions can be planned, but also offers a potential means of providing the perceptual stability that we experience. PMID:21242146
Opponent appetitive-aversive neural processes underlie predictive learning of pain relief.
Seymour, Ben; O'Doherty, John P; Koltzenburg, Martin; Wiech, Katja; Frackowiak, Richard; Friston, Karl; Dolan, Raymond
2005-09-01
Termination of a painful or unpleasant event can be rewarding. However, whether the brain treats relief in a similar way as it treats natural reward is unclear, and the neural processes that underlie its representation as a motivational goal remain poorly understood. We used fMRI (functional magnetic resonance imaging) to investigate how humans learn to generate expectations of pain relief. Using a pavlovian conditioning procedure, we show that subjects experiencing prolonged experimentally induced pain can be conditioned to predict pain relief. This proceeds in a manner consistent with contemporary reward-learning theory (average reward/loss reinforcement learning), reflected by neural activity in the amygdala and midbrain. Furthermore, these reward-like learning signals are mirrored by opposite aversion-like signals in lateral orbitofrontal cortex and anterior cingulate cortex. This dual coding has parallels to 'opponent process' theories in psychology and promotes a formal account of prediction and expectation during pain.
Teng, Santani
2017-01-01
In natural environments, visual and auditory stimulation elicit responses across a large set of brain regions in a fraction of a second, yielding representations of the multimodal scene and its properties. The rapid and complex neural dynamics underlying visual and auditory information processing pose major challenges to human cognitive neuroscience. Brain signals measured non-invasively are inherently noisy, the format of neural representations is unknown, and transformations between representations are complex and often nonlinear. Further, no single non-invasive brain measurement technique provides a spatio-temporally integrated view. In this opinion piece, we argue that progress can be made by a concerted effort based on three pillars of recent methodological development: (i) sensitive analysis techniques such as decoding and cross-classification, (ii) complex computational modelling using models such as deep neural networks, and (iii) integration across imaging methods (magnetoencephalography/electroencephalography, functional magnetic resonance imaging) and models, e.g. using representational similarity analysis. We showcase two recent efforts that have been undertaken in this spirit and provide novel results about visual and auditory scene analysis. Finally, we discuss the limits of this perspective and sketch a concrete roadmap for future research. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044019
Bates, Annwen E
2007-04-01
The article takes a hermeneutic approach to exploring a selection of visual representations of the African body in relation to the issue of HIV and AIDS in Africa. In particular, it argues that the trope of 'deficiency' ('lack'), wherein Africa is constructed as dirty, degenerate, decaying and dying, continues in visual representations aimed at a northern or UK audience. In contrast, examples of public health material aimed at a South African audience present a postcolonial counter-discourse where the African body is empowered rather than deficient. These two assumptions and their accompanying visuals parallel two differing narratives about HIV and AIDS in Africa. The article explores the ideological underpinnings of those narratives in four sections: 1) Paper-thin facts presents certain attitudes about Africa and the African body that have come into currency in relation to colonialism; 2) A matter of mor(t)ality examines the relationship between morality and the mortality of the African body; 3) The legacies endure analyses selected images aimed at a potential donor, UK audience with reference to the ideologies proposed in the previous sections; and 4) Wearing the T-shirt engages with the proposed counter-discourse and its visual representations, as evident in a selection of South African public health material.
A Framework for the Design of Effective Graphics for Scientific Visualization
NASA Technical Reports Server (NTRS)
Miceli, Kristina D.
1992-01-01
This proposal presents a visualization framework, based on a data model, that supports the production of effective graphics for scientific visualization. Visual representations are effective only if they augment comprehension of the increasing amounts of data being generated by modern computer simulations. These representations are created by taking into account the goals and capabilities of the scientist, the type of data to be displayed, and software and hardware considerations. This framework is embodied in an assistant-based visualization system to guide the scientist in the visualization process. This will improve the quality of the visualizations and decrease the time the scientist is required to spend in generating the visualizations. I intend to prove that such a framework will create a more productive environment for the analysis and interpretation of large, complex data sets.
Cant, Jonathan S; Xu, Yaoda
2017-02-01
Our visual system can extract summary statistics from large collections of objects without forming detailed representations of the individual objects in the ensemble. In a region in ventral visual cortex encompassing the collateral sulcus and the parahippocampal gyrus and overlapping extensively with the scene-selective parahippocampal place area (PPA), we have previously reported fMRI adaptation to object ensembles when ensemble statistics repeated, even when local image features differed across images (e.g., two different images of the same strawberry pile). We additionally showed that this ensemble representation is similar to (but still distinct from) how visual texture patterns are processed in this region and is not explained by appealing to differences in the color of the elements that make up the ensemble. To further explore the nature of ensemble representation in this brain region, here we used PPA as our ROI and investigated in detail how the shape and surface properties (i.e., both texture and color) of the individual objects constituting an ensemble affect the ensemble representation in anterior-medial ventral visual cortex. We photographed object ensembles of stone beads that varied in shape and surface properties. A given ensemble always contained beads of the same shape and surface properties (e.g., an ensemble of star-shaped rose quartz beads). A change to the shape and/or surface properties of all the beads in an ensemble resulted in a significant release from adaptation in PPA compared with conditions in which no ensemble feature changed. In contrast, in the object-sensitive lateral occipital area (LO), we only observed a significant release from adaptation when the shape of the ensemble elements varied, and found no significant results in additional scene-sensitive regions, namely, the retrosplenial complex and occipital place area. 
Together, these results demonstrate that the shape and surface properties of the individual objects comprising an ensemble both contribute significantly to object ensemble representation in anterior-medial ventral visual cortex and further demonstrate a functional dissociation between object- (LO) and scene-selective (PPA) visual cortical regions and within the broader scene-processing network itself.
Poplu, Gérald; Ripoll, Hubert; Mavromatis, Sébastien; Baratgin, Jean
2008-09-01
The aim of this study was to determine what visual information expert soccer players encode when they are asked to make a decision. We used a repetition-priming paradigm to test the hypothesis that experts encode a soccer pattern's structure independently of the players' physical characteristics (i.e., posture and morphology). The participants were given either realistic (digital photos) or abstract (three-dimensional schematic representations) soccer game patterns. The results showed that the experts benefited from priming effects regardless of how abstract the stimuli were. This suggests that an abstract representation of a realistic pattern (i.e., one that does not include visual information related to the players' physical characteristics) is sufficient to activate experts' specific knowledge during decision making. These results seem to show that expert soccer players encode and store abstract representations of visual patterns in memory.
Refreshing memory traces: thinking of an item improves retrieval from visual working memory.
Souza, Alessandra S; Rerko, Laura; Oberauer, Klaus
2015-03-01
This article provides evidence that refreshing, a hypothetical attention-based process operating in working memory (WM), improves the accessibility of visual representations for recall. "Thinking of" one of several concurrently active representations is assumed to refresh its trace in WM, protecting that representation from being forgotten. The link between refreshing and WM performance, however, has only been tenuously supported by empirical evidence. Here, we controlled which and how often individual items were refreshed in a color reconstruction task by presenting cues prompting participants to think of specific WM items during the retention interval. We show that the frequency with which an item is refreshed improves recall of this item from visual WM. Our study establishes a role of refreshing in recall from visual WM and provides a new method for studying the impact of refreshing on the amount of information we can keep accessible for ongoing cognition. © 2014 New York Academy of Sciences.
Accurate metacognition for visual sensory memory representations.
Vandenbroucke, Annelinde R E; Sligte, Ilja G; Barrett, Adam B; Seth, Anil K; Fahrenfort, Johannes J; Lamme, Victor A F
2014-04-01
The capacity to attend to multiple objects in the visual field is limited. However, introspectively, people feel that they see the whole visual world at once. Some scholars suggest that this introspective feeling is based on short-lived sensory memory representations, whereas others argue that the feeling of seeing more than can be attended to is illusory. Here, we investigated this phenomenon by combining objective memory performance with subjective confidence ratings during a change-detection task. This allowed us to compute a measure of metacognition--the degree of knowledge that subjects have about the correctness of their decisions--for different stages of memory. We show that subjects store more objects in sensory memory than they can attend to but, at the same time, have similar metacognition for sensory memory and working memory representations. This suggests that these subjective impressions are not an illusion but accurate reflections of the richness of visual perception.
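The logic of combining objective change-detection performance with confidence ratings can be illustrated with a deliberately crude metacognition index (the study itself used a signal-detection-theoretic measure; the function and data below are a simplified stand-in with hypothetical names):

```python
def confidence_accuracy_gap(trials):
    """Crude metacognition index: mean confidence on correct trials
    minus mean confidence on incorrect trials.  `trials` is a list of
    (correct, confidence) pairs.  A gap of zero means confidence
    carries no information about correctness; larger gaps mean the
    subject knows when they are right."""
    hits = [conf for ok, conf in trials if ok]
    misses = [conf for ok, conf in trials if not ok]
    if not hits or not misses:
        return 0.0
    return sum(hits) / len(hits) - sum(misses) / len(misses)
```

Computed separately for probes tapping sensory memory and working memory, a comparable gap in both conditions is the pattern the authors report.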
Cognitive Changes in Presymptomatic Parkinson’s Disease
2004-09-01
…confrontation naming (e.g., Boston Naming Test), verbal fluency (e.g., COWA), or memory for either verbal or visual material … underlie mental rotation deficits in PD. Men typically perform better than women on tests of mental rotation … help elucidate the nature of these relationships.
Unaware Processing of Tools in the Neural System for Object-Directed Action Representation.
Tettamanti, Marco; Conca, Francesca; Falini, Andrea; Perani, Daniela
2017-11-01
The hypothesis that the brain constitutively encodes observed manipulable objects for the actions they afford is still debated. Yet, crucial evidence demonstrating that, even in the absence of perceptual awareness, the mere visual appearance of a manipulable object triggers a visuomotor coding in the action representation system including the premotor cortex, has hitherto not been provided. In this fMRI study, we instantiated reliable unaware visual perception conditions by means of continuous flash suppression, and we tested in 24 healthy human participants (13 females) whether the visuomotor object-directed action representation system that includes left-hemispheric premotor, parietal, and posterior temporal cortices is activated even under subliminal perceptual conditions. We found consistent activation in the target visuomotor cortices, both with and without perceptual awareness, specifically for pictures of manipulable versus non-manipulable objects. By means of a multivariate searchlight analysis, we also found that the brain activation patterns in this visuomotor network enabled the decoding of manipulable versus non-manipulable object picture processing, both with and without awareness. These findings demonstrate the intimate neural coupling between visual perception and motor representation that underlies manipulable object processing: manipulable object stimuli specifically engage the visuomotor object-directed action representation system, in a constitutive manner that is independent from perceptual awareness. This perceptuo-motor coupling endows the brain with an efficient mechanism for monitoring and planning reactions to external stimuli in the absence of awareness. SIGNIFICANCE STATEMENT Our brain constantly encodes the visual information that hits the retina, leading to a stimulus-specific activation of sensory and semantic representations, even for objects that we do not consciously perceive. 
Do these unconscious representations encompass the motor programming of actions that could be accomplished congruently with the objects' functions? In this fMRI study, we instantiated unaware visual perception conditions by dynamically suppressing the visibility of manipulable object pictures with Mondrian masks. Despite escaping conscious perception, manipulable objects activated an object-directed action representation system that includes left-hemispheric premotor, parietal, and posterior temporal cortices. This demonstrates that visuomotor encoding occurs independently of conscious object perception. Copyright © 2017 the authors.
Representations of the Moon in Children's Literature: An Analysis of Written and Visual Text
ERIC Educational Resources Information Center
Trundle, Kathy Cabe; Troland, Thomas H.; Pritchard, T. Gail
2008-01-01
This review focused on the written and visual representation of the moon in 80 children's books, including Caldecott Medal and Honor books over the past 20 years. Results revealed that many of these books misrepresent the moon and even reinforce misconceptions about lunar phases. Teachers who use children's literature that misrepresents the moon…
ERIC Educational Resources Information Center
Al-Balushi, Sulaiman M.; Al-Hajri, Sheikha H.
2014-01-01
The purpose of the current study is to explore the impact of associating animations with concrete models on eleventh-grade students' comprehension of different visual representations in organic chemistry. The study used a post-test control group quasi-experimental design. The experimental group (N = 28) used concrete models, submicroscopic…
Children's Understanding of Globes as a Model of the Earth: A Problem of Contextualizing
ERIC Educational Resources Information Center
Ehrlen, Karin
2008-01-01
Visual representations play an important role in science teaching. The way in which visual representations may help children to acquire scientific concepts is a crucial test in the debate between constructivist and socio-cultural oriented researchers. In this paper, the question is addressed as a problem of how to contextualize conceptions and…
Functions of graphemic and phonemic codes in visual word-recognition.
Meyer, D E; Schvaneveldt, R W; Ruddy, M G
1974-03-01
Previous investigators have argued that printed words are recognized directly from visual representations and/or phonological representations obtained through phonemic recoding. The present research tested these hypotheses by manipulating graphemic and phonemic relations within various pairs of letter strings. Subjects in two experiments classified the pairs as words or nonwords. Reaction times and error rates were relatively small for word pairs (e.g., BRIBE-TRIBE) that were both graphemically and phonemically similar. Graphemic similarity alone inhibited performance on other word pairs (e.g., COUCH-TOUCH). These and other results suggest that phonological representations play a significant role in visual word recognition and that there is a dependence between successive phonemic-encoding operations. An encoding-bias model is proposed to explain the data.
'What' Is Happening in the Dorsal Visual Pathway.
Freud, Erez; Plaut, David C; Behrmann, Marlene
2016-10-01
The cortical visual system is almost universally thought to be segregated into two anatomically and functionally distinct pathways: a ventral occipitotemporal pathway that subserves object perception, and a dorsal occipitoparietal pathway that subserves object localization and visually guided action. Accumulating evidence from both human and non-human primate studies, however, challenges this binary distinction and suggests that regions in the dorsal pathway contain object representations that are independent of those in ventral cortex and that play a functional role in object perception. We review here the evidence implicating dorsal object representations, and we propose an account of the anatomical organization, functional contributions, and origins of these representations in the service of perception. Copyright © 2016 Elsevier Ltd. All rights reserved.
Cichy, Radoslaw Martin; Khosla, Aditya; Pantazis, Dimitrios; Torralba, Antonio; Oliva, Aude
2016-01-01
The complex multi-stage architecture of cortical visual pathways provides the neural basis for efficient visual object recognition in humans. However, the stage-wise computations therein remain poorly understood. Here, we compared temporal (magnetoencephalography) and spatial (functional MRI) visual brain representations with representations in an artificial deep neural network (DNN) tuned to the statistics of real-world visual recognition. We showed that the DNN captured the stages of human visual processing in both time and space from early visual areas towards the dorsal and ventral streams. Further investigation of crucial DNN parameters revealed that while model architecture was important, training on real-world categorization was necessary to enforce spatio-temporal hierarchical relationships with the brain. Together our results provide an algorithmically informed view on the spatio-temporal dynamics of visual object recognition in the human visual brain. PMID:27282108
The evaluative imaging of mental models - Visual representations of complexity
NASA Technical Reports Server (NTRS)
Dede, Christopher
1989-01-01
The paper deals with some design issues involved in building a system that could visually represent the semantic structures of training materials and their underlying mental models. In particular, hypermedia-based semantic networks that instantiate classification problem solving strategies are thought to be a useful formalism for such representations; the complexity of these web structures can be best managed through visual depictions. It is also noted that a useful approach to implement in these hypermedia models would be some metrics of conceptual distance.
Using perceptual rules in interactive visualization
NASA Astrophysics Data System (ADS)
Rogowitz, Bernice E.; Treinish, Lloyd A.
1994-05-01
In visualization, data are represented as variations in grayscale, hue, shape, and texture. They can be mapped to lines, surfaces, and glyphs, and can be represented statically or in animation. In modern visualization systems, the choices for representing data seem unlimited. This is both a blessing and a curse, however, since the visual impression created by the visualization depends critically on which dimensions are selected for representing the data (Bertin, 1967; Tufte, 1983; Cleveland, 1991). In modern visualization systems, the user can interactively select many different mapping and representation operations, and can interactively select processing operations (e.g., applying a color map), realization operations (e.g., generating geometric structures such as contours or streamlines), and rendering operations (e.g., shading or ray-tracing). The user can, for example, map data to a color map, then apply contour lines, then shift the viewing angle, then change the color map again, etc. In many systems, the user can vary the choices for each operation, selecting, for example, particular color maps, contour characteristics, and shading techniques. The hope is that this process will eventually converge on a visual representation which expresses the structure of the data and effectively communicates its message in a way that meets the user's goals. Sometimes, however, it results in visual representations which are confusing, misleading, and garish.
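The simplest of the mappings listed above, representing scalar data as variations in grayscale, comes down to normalizing data into display levels; a minimal sketch (the function name and 8-bit range are illustrative):

```python
def to_grayscale(values, lo=None, hi=None):
    """Map scalar data values to 0-255 grayscale levels.

    By default the data's own range sets the mapping; passing explicit
    `lo`/`hi` fixes the map independently of the data, which is exactly
    the kind of choice an interactive system lets the user revisit."""
    lo = min(values) if lo is None else lo
    hi = max(values) if hi is None else hi
    span = (hi - lo) or 1.0   # guard against a constant field
    return [round(255 * (v - lo) / span) for v in values]
```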
Understanding Deep Representations Learned in Modeling Users Likes.
Guntuku, Sharath Chandra; Zhou, Joey Tianyi; Roy, Sujoy; Lin, Weisi; Tsang, Ivor W
2016-08-01
Automatically understanding and discriminating different users' liking for an image is a challenging problem. This is because the relationship between image features (even semantic ones extracted by existing tools, viz., faces, objects, and so on) and users' likes is non-linear, influenced by several subtle factors. This paper presents a deep bi-modal knowledge representation of images based on their visual content and associated tags (text). A mapping step between the different levels of visual and textual representations allows for the transfer of semantic knowledge between the two modalities. Feature selection is applied before learning deep representation to identify the important features for a user to like an image. The proposed representation is shown to be effective in discriminating users based on images they like and also in recommending images that a given user likes, outperforming the state-of-the-art feature representations by ∼15-20%. Beyond this test-set performance, an attempt is made to qualitatively understand the representations learned by the deep architecture used to model user likes.
Instruments of scientific visual representation in atomic databases
NASA Astrophysics Data System (ADS)
Kazakov, V. V.; Kazakov, V. G.; Meshkov, O. I.
2017-10-01
Graphic tools for spectral data representation provided by the operating information systems on atomic spectroscopy—ASD NIST, VAMDC, SPECTR-W3, and Electronic Structure of Atoms—in support of research and training are presented. Tools for the visual representation of scientific data, such as spectrogram and Grotrian-diagram plotting, are considered. The possibility of comparing experimentally obtained spectra with reference spectra of atomic systems formed from a resource's database is described. Techniques for accessing these graphic tools are presented.
Wittgenstein running: neural mechanisms of collective intentionality and we-mode.
Becchio, Cristina; Bertone, Cesare
2004-03-01
In this paper we discuss the problem of the neural conditions of shared attitudes and intentions: which neural mechanisms underlie "we-mode" processes or serve as precursors to such processes? Neurophysiological and neuropsychological evidence suggests that in different areas of the brain neural representations are shared by several individuals. This situation, on the one hand, creates a potential problem for correct attribution. On the other hand, it may provide the conditions for shared attitudes and intentions.
NASA Astrophysics Data System (ADS)
Zou, Xueli
In the past three decades, physics education research has primarily focused on student conceptual understanding; little work has been conducted to investigate student difficulties in problem solving. In cognitive science and psychology, however, extensive studies have explored the differences in problem solving between experts and naive students. A major finding indicates that experts often apply qualitative representations in problem solving, but that novices use an equation-centered method. This dissertation describes investigations into the use of multiple representations and visualizations in student understanding and problem solving with the concepts of work and energy. A multiple-representation strategy was developed to help students acquire expertise in solving work-energy problems. In this approach, a typical work-energy problem is considered as a physical process. The process is first described in words, the verbal representation of the process. Next, a sketch or a picture, called a pictorial representation, is used to represent the process. This is followed by work-energy bar charts, a physical representation of the same process. Finally, the process is represented mathematically by using a generalized work-energy equation. In terms of the multiple representations, the goal of solving a work-energy problem is to represent the physical process using the more intuitive pictorial and diagrammatic physical representations. Ongoing assessment of student learning indicates that this multiple-representation technique is more effective than standard instruction methods in student problem solving. … To help students visualize this difficult-to-understand concept, a guided-inquiry learning activity using a pair of model carts and an experiment problem using a sandbag were developed.
Assessment results have shown that these research-based materials are effective in helping students visualize this concept and give a pictorial idea of ``where the kinetic energy goes'' during inelastic collisions. The research and curriculum development was conducted in the context of the introductory calculus-based physics course. Investigations were carried out using common physics education research tools, including open-ended surveys, written test questions, and individual student interviews.
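The record above refers to a "generalized work-energy equation" without stating it. A standard form consistent with the bar-chart approach it describes (the dissertation's exact notation is not given here, so this is an assumption) is:

```latex
% Work done by external forces on the chosen system equals the change in
% its kinetic, potential, and internal energies:
W_{\mathrm{ext}} = \Delta K + \Delta U + \Delta E_{\mathrm{int}}
```

Each bar in a work-energy bar chart corresponds to one term of this equation; in an inelastic collision the lost kinetic energy reappears as \(\Delta E_{\mathrm{int}}\), which is the pictorial answer to "where the kinetic energy goes".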
Bressler, David W.; Silver, Michael A.
2010-01-01
Spatial attention improves visual perception and increases the amplitude of neural responses in visual cortex. In addition, spatial attention tasks and fMRI have been used to discover topographic visual field representations in regions outside visual cortex. We therefore hypothesized that requiring subjects to attend to a retinotopic mapping stimulus would facilitate the characterization of visual field representations in a number of cortical areas. In our study, subjects attended either a central fixation point or a wedge-shaped stimulus that rotated about the fixation point. Response reliability was assessed by computing coherence between the fMRI time series and a sinusoid with the same frequency as the rotating wedge stimulus. When subjects attended to the rotating wedge instead of ignoring it, the reliability of retinotopic mapping signals increased by approximately 50% in early visual cortical areas (V1, V2, V3, V3A/B, V4) and ventral occipital cortex (VO1) and by approximately 75% in lateral occipital (LO1, LO2) and posterior parietal (IPS0, IPS1 and IPS2) cortical areas. Additionally, one 5-minute run of retinotopic mapping in the attention-to-wedge condition produced responses as reliable as the average of three to five (early visual cortex) or more than five (lateral occipital, ventral occipital, and posterior parietal cortex) attention-to-fixation runs. These results demonstrate that allocating attention to the retinotopic mapping stimulus substantially reduces the amount of scanning time needed to determine the visual field representations in occipital and parietal topographic cortical areas. Attention significantly increased response reliability in every cortical area we examined and may therefore be a general mechanism for improving the fidelity of neural representations of sensory stimuli at multiple levels of the cortical processing hierarchy. PMID:20600961
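The reliability measure described above, coherence between a voxel's fMRI time series and a sinusoid at the rotating-wedge frequency, can be computed from the amplitude spectrum of the time series. A minimal sketch follows (the function name, run length, and cycle count are illustrative assumptions, not the study's parameters; some implementations also exclude harmonics of the signal frequency from the noise band, which this sketch does not):

```python
import numpy as np

def retinotopy_coherence(ts, stim_cycles):
    """Coherence of a time series with a sinusoid at the stimulus frequency:
    the amplitude at that frequency divided by the square root of the
    summed squared amplitudes over all nonzero frequencies."""
    ts = np.asarray(ts, dtype=float)
    ts = ts - ts.mean()                     # remove the DC offset
    amps = np.abs(np.fft.rfft(ts))          # one-sided amplitude spectrum
    return amps[stim_cycles] / np.sqrt(np.sum(amps[1:] ** 2))

# Illustrative run: 120 volumes, a wedge completing 6 cycles per run.
t = np.arange(120)
clean = np.sin(2 * np.pi * 6 * t / 120)     # ideal stimulus-locked response
noisy = clean + 2.0 * np.random.RandomState(0).randn(120)
```

A noiseless stimulus-locked response gives coherence near 1; adding noise lowers it, which is why attention-driven gains in response amplitude translate into more reliable retinotopic maps per unit of scan time.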
Tagliabue, Michele; McIntyre, Joseph
2013-01-01
Several experimental studies in the literature have shown that even when performing purely kinesthetic tasks, such as reaching for a kinesthetically felt target with a hidden hand, the brain reconstructs a visual representation of the movement. In our previous studies, however, we did not observe any role of a visual representation of the movement in a purely kinesthetic task. This apparent contradiction could be related to a fundamental difference between the studied tasks. In our study subjects used the same hand to both feel the target and to perform the movement, whereas in most other studies, pointing to a kinesthetic target consisted of pointing with one hand to the finger of the other, or to some other body part. We hypothesize, therefore, that it is the necessity of performing inter-limb transformations that induces a visual representation of purely kinesthetic tasks. To test this hypothesis we asked subjects to perform the same purely kinesthetic task in two conditions: INTRA and INTER. In the former they used the right hand to both perceive the target and to reproduce its orientation. In the latter, subjects perceived the target with the left hand and responded with the right. To quantify the use of a visual representation of the movement we measured deviations induced by an imperceptible conflict that was generated between visual and kinesthetic reference frames. Our hypothesis was confirmed by the observed deviations of responses due to the conflict in the INTER, but not in the INTRA, condition. To reconcile these observations with recent theories of sensori-motor integration based on maximum likelihood estimation, we propose here a new model formulation that explicitly considers the effects of covariance between sensory signals that are directly available and internal representations that are ‘reconstructed’ from those inputs through sensori-motor transformations. PMID:23861903
Visual Attention Modulates Insight versus Analytic Solving of Verbal Problems
ERIC Educational Resources Information Center
Wegbreit, Ezra; Suzuki, Satoru; Grabowecky, Marcia; Kounios, John; Beeman, Mark
2012-01-01
Behavioral and neuroimaging findings indicate that distinct cognitive and neural processes underlie solving problems with sudden insight. Moreover, people with less focused attention sometimes perform better on tests of insight and creative problem solving. However, it remains unclear whether different states of attention, within individuals,…
Looking Compensates for the Distance between Mother and Infant Chimpanzee
ERIC Educational Resources Information Center
Okamoto-Barth, Sanae; Tanaka, Masayuki; Kawai, Nobuyuki; Tomonaga, Masaki
2007-01-01
The development of visual interaction between mother and infant has received much attention in developmental psychology, not only in humans, but also in non-human primates. Recently, comparative developmental approaches have investigated whether the mechanisms that underlie these behaviors are common in primates. In the present study, we focused…
Lin, Zhicheng; He, Sheng
2012-10-25
Object identities ("what") and their spatial locations ("where") are processed in distinct pathways in the visual system, raising the question of how the what and where information is integrated. Because of object motions and eye movements, retina-based representations are unstable, necessitating nonretinotopic representation and integration. A potential mechanism is to code and update objects according to their reference frames (i.e., frame-centered representation and integration). To isolate frame-centered processes, in a frame-to-frame apparent motion configuration, we (a) presented two preceding or trailing objects on the same frame, equidistant from the target on the other frame, to control for object-based (frame-based) and space-based effects, and (b) manipulated the target's relative location within its frame to probe the frame-centered effect. We show that iconic memory, visual priming, and backward masking depend on objects' relative frame locations, orthogonal to the retinotopic coordinate. These findings not only reveal that iconic memory, visual priming, and backward masking can be nonretinotopic but also demonstrate that these processes are automatically constrained by contextual frames through a frame-centered mechanism. Thus, an object's representation is robustly and automatically coupled to its reference frame and is continuously updated through a frame-centered, location-specific mechanism. These findings lead to an object cabinet framework, in which objects ("files") within the reference frame ("cabinet") are orderly coded relative to the frame.
Congenital blindness limits allocentric to egocentric switching ability.
Ruggiero, Gennaro; Ruotolo, Francesco; Iachini, Tina
2018-03-01
Many everyday spatial activities require the cooperation or switching between egocentric (subject-to-object) and allocentric (object-to-object) spatial representations. The literature on blind people has reported that the lack of vision (congenital blindness) may limit the capacity to represent allocentric spatial information. However, research has mainly focused on the selective involvement of egocentric or allocentric representations, not the switching between them. Here we investigated the effect of visual deprivation on the ability to switch between spatial frames of reference. To this aim, congenitally blind (long-term visual deprivation), blindfolded sighted (temporary visual deprivation) and sighted (full visual availability) participants were compared on the Ego-Allo switching task. This task assessed the capacity to verbally judge the relative distances between memorized stimuli in switching (from egocentric-to-allocentric: Ego-Allo; from allocentric-to-egocentric: Allo-Ego) and non-switching (only-egocentric: Ego-Ego; only-allocentric: Allo-Allo) conditions. Results showed a difficulty in congenitally blind participants when switching from allocentric to egocentric representations, not when the first anchor point was egocentric. In line with previous results, a deficit in processing allocentric representations in non-switching conditions also emerged. These findings suggest that the allocentric deficit in congenital blindness may determine a difficulty in simultaneously maintaining and combining different spatial representations. This deficit alters the capacity to switch between reference frames specifically when the first anchor point is external and not body-centered.
Medendorp, W. P.
2015-01-01
It is known that the brain uses multiple reference frames to code spatial information, including eye-centered and body-centered frames. When we move our body in space, these internal representations are no longer in register with external space, unless they are actively updated. Whether the brain updates multiple spatial representations in parallel, or whether it restricts its updating mechanisms to a single reference frame from which other representations are constructed, remains an open question. We developed an optimal integration model to simulate the updating of visual space across body motion in multiple or single reference frames. To test this model, we designed an experiment in which participants had to remember the location of a briefly presented target while being translated sideways. The behavioral responses were in agreement with a model that uses a combination of eye- and body-centered representations, weighted according to the reliability in which the target location is stored and updated in each reference frame. Our findings suggest that the brain simultaneously updates multiple spatial representations across body motion. Because both representations are kept in sync, they can be optimally combined to provide a more precise estimate of visual locations in space than based on single-frame updating mechanisms. PMID:26490289
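The reliability-weighted combination described in this abstract follows the standard maximum-likelihood cue-combination rule: each reference frame's estimate is weighted by its inverse variance, and the combined estimate is more precise than either alone. A toy sketch (the numbers are illustrative, not the study's data):

```python
import numpy as np

def combine_estimates(means, variances):
    """Maximum-likelihood combination of independent estimates of the same
    location: each estimate is weighted by its precision (inverse variance),
    and the combined variance is smaller than any single one."""
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    precisions = 1.0 / variances
    combined_mean = np.sum(precisions * means) / np.sum(precisions)
    combined_var = 1.0 / np.sum(precisions)
    return combined_mean, combined_var

# Hypothetical eye-centered estimate: 10 deg with variance 4;
# body-centered estimate: 12 deg with variance 1.
mu, var = combine_estimates([10.0, 12.0], [4.0, 1.0])
# mu = (10/4 + 12) / (1/4 + 1) = 11.6, pulled toward the more reliable cue;
# var = 1 / 1.25 = 0.8, below either single-frame variance.
```

This is why keeping both updated representations "in sync" pays off: the combined localization error is strictly below what either single-frame updating mechanism could achieve.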
Evaluation of stiffness feedback for hard nodule identification on a phantom silicone model
Li, Min; Konstantinova, Jelizaveta; Xu, Guanghua; He, Bo; Aminzadeh, Vahid; Xie, Jun; Wurdemann, Helge; Althoefer, Kaspar
2017-01-01
Haptic information in robotic surgery can significantly improve clinical outcomes and help detect hard soft-tissue inclusions that indicate potential abnormalities. Visual representation of tissue stiffness information is a cost-effective technique. Meanwhile, direct force feedback, although considerably more expensive than visual representation, is an intuitive method of conveying information regarding tissue stiffness to surgeons. In this study, real-time visual stiffness feedback by sliding indentation palpation is proposed, validated, and compared with force feedback in a study involving human subjects. In an experimental tele-manipulation environment, a dynamically updated color map depicting the stiffness of the probed soft tissue is presented via a graphical interface. Force feedback is provided by a master haptic device, using data acquired from an F/T sensor attached to the end-effector of a tele-manipulated robot. Hard nodule detection performance is evaluated for two modes of stiffness feedback (direct force feedback and visual stiffness feedback) on an artificial organ containing buried stiff nodules. From this artificial organ, a virtual-environment tissue model is generated based on sliding indentation measurements. Employing this virtual-environment tissue model, we compare the performance of human participants in distinguishing differently sized hard nodules by force feedback and by visual stiffness feedback. Results indicate that the proposed distributed visual representation of tissue stiffness can be used effectively for hard nodule identification. The representation can also be used as a sufficient substitute for force feedback in tissue palpation. PMID:28248996
Neural representation of objects in space: a dual coding account.
Humphreys, G W
1998-01-01
I present evidence on the nature of object coding in the brain and discuss the implications of this coding for models of visual selective attention. Neuropsychological studies of task-based constraints on: (i) visual neglect; and (ii) reading and counting, reveal the existence of parallel forms of spatial representation for objects: within-object representations, where elements are coded as parts of objects, and between-object representations, where elements are coded as independent objects. Aside from these spatial codes for objects, however, the coding of visual space is limited. We are extremely poor at remembering small spatial displacements across eye movements, indicating (at best) impoverished coding of spatial position per se. Also, effects of element separation on spatial extinction can be eliminated by filling the space with an occluding object, indicating that spatial effects on visual selection are moderated by object coding. Overall, there are separate limits on visual processing reflecting: (i) the competition to code parts within objects; (ii) the small number of independent objects that can be coded in parallel; and (iii) task-based selection of whether within- or between-object codes determine behaviour. Between-object coding may be linked to the dorsal visual system while parallel coding of parts within objects takes place in the ventral system, although there may additionally be some dorsal involvement either when attention must be shifted within objects or when explicit spatial coding of parts is necessary for object identification. PMID:9770227
Kuo, Bo-Cheng; Lin, Szu-Hung; Yeh, Yei-Yu
2018-06-01
Visual short-term memory (VSTM) allows individuals to briefly maintain information over time for guiding behaviours. Because the contents of VSTM can be neutral or emotional, top-down influence in VSTM may vary with the affective codes of maintained representations. Here we investigated the neural mechanisms underlying the functional interplay of top-down attention with affective codes in VSTM using functional magnetic resonance imaging. Participants were instructed to remember both threatening and neutral objects in a cued VSTM task. Retrospective cues (retro-cues) were presented to direct attention to the hemifield of a threatening object (i.e., cue-to-threat) or a neutral object (i.e., cue-to-neutral) during VSTM maintenance. We showed stronger activity in the ventral occipitotemporal cortex and amygdala for attending threatening relative to neutral representations. Using multivoxel pattern analysis, we found better classification performance for cue-to-threat versus cue-to-neutral objects in early visual areas and in the amygdala. Importantly, retro-cues modulated the strength of functional connectivity between the frontoparietal and early visual areas. Activity in the frontoparietal areas became strongly correlated with the activity in V3a-V4 coding the threatening representations instructed to be relevant for the task. Together, these findings provide the first demonstration of top-down modulation of activation patterns in early visual areas and functional connectivity between the frontoparietal network and early visual areas for regulating threatening representations during VSTM maintenance.
Multimodal representation of limb endpoint position in the posterior parietal cortex.
Shi, Ying; Apker, Gregory; Buneo, Christopher A
2013-04-01
Understanding the neural representation of limb position is important for comprehending the control of limb movements and the maintenance of body schema, as well as for the development of neuroprosthetic systems designed to replace lost limb function. Multiple subcortical and cortical areas contribute to this representation, but its multimodal basis has largely been ignored. Regarding the parietal cortex, previous results suggest that visual information about arm position is not strongly represented in area 5, although these results were obtained under conditions in which animals were not using their arms to interact with objects in their environment, which could have affected the relative weighting of relevant sensory signals. Here we examined the multimodal basis of limb position in the superior parietal lobule (SPL) as monkeys reached to and actively maintained their arm position at multiple locations in a frontal plane. On half of the trials both visual and nonvisual feedback of the endpoint of the arm were available, while on the other trials visual feedback was withheld. Many neurons were tuned to arm position, while a smaller number were modulated by the presence/absence of visual feedback. Visual modulation generally took the form of a decrease in both firing rate and variability with limb vision and was associated with more accurate decoding of position at the population level under these conditions. These findings support a multimodal representation of limb endpoint position in the SPL but suggest that visual signals are relatively weakly represented in this area, and only at the population level.
ERIC Educational Resources Information Center
Altmann, Gerry T. M.; Kamide, Yuki
2009-01-01
Two experiments explored the mapping between language and mental representations of visual scenes. In both experiments, participants viewed, for example, a scene depicting a woman, a wine glass and bottle on the floor, an empty table, and various other objects. In Experiment 1, participants concurrently heard either "The woman will put the glass…
Can a Picture Ruin a Thousand Words? The Effects of Visual Resources in Exam Questions
ERIC Educational Resources Information Center
Crisp, Victoria; Sweiry, Ezekiel
2006-01-01
Background: When an exam question is read, a mental representation of the task is formed in each student's mind. This processing can be affected by features such as visual resources (e.g. pictures, diagrams, photographs, tables), which can come to dominate the mental representation due to their salience. Purpose: The aim of this research was to…
A Probabilistic Clustering Theory of the Organization of Visual Short-Term Memory
ERIC Educational Resources Information Center
Orhan, A. Emin; Jacobs, Robert A.
2013-01-01
Experimental evidence suggests that the content of a memory for even a simple display encoded in visual short-term memory (VSTM) can be very complex. VSTM uses organizational processes that make the representation of an item dependent on the feature values of all displayed items as well as on these items' representations. Here, we develop a…
ERIC Educational Resources Information Center
Kribbs, Elizabeth E.; Rogowsky, Beth A.
2016-01-01
Mathematics word-problems continue to be an insurmountable challenge for many middle school students. Educators have used pictorial and schematic illustrations within the classroom to help students visualize these problems. However, the data shows that pictorial representations can be more harmful than helpful in that they only display objects or…
ERIC Educational Resources Information Center
Klein, P.; Viiri, J.; Mozaffari, S.; Dengel, A.; Kuhn, J.
2018-01-01
Relating mathematical concepts to graphical representations is a challenging task for students. In this paper, we introduce two visual strategies to qualitatively interpret the divergence of graphical vector field representations. One strategy is based on the graphical interpretation of partial derivatives, while the other is based on the flux…
Fundamental Visual Representations of Social Cognition in ASD
2015-10-01
autism spectrum disorder as assessed by high density electrical mapping... C., Russo, N. N., & Foxe, J. J. (2013). Atypical cortical representation of peripheral visual space in children with an autism spectrum disorder. European Journal of Neuroscience, 38(1), 2125-2138. ...Sensory processing issues are prevalent in the autism spectrum disorder (ASD) population, and sensory adaptation can be a potential biomarker - a
Sadeghi, Zahra; Testolin, Alberto
2017-08-01
In humans, efficient recognition of written symbols is thought to rely on a hierarchical processing system, where simple features are progressively combined into more abstract, high-level representations. Here, we present a computational model of Persian character recognition based on deep belief networks, where increasingly more complex visual features emerge in a completely unsupervised manner by fitting a hierarchical generative model to the sensory data. Crucially, high-level internal representations emerging from unsupervised deep learning can be easily read out by a linear classifier, achieving state-of-the-art recognition accuracy. Furthermore, we tested the hypothesis that handwritten digits and letters share many common visual features: a generative model that captures the statistical structure of the letter distribution should therefore also support the recognition of written digits. To this aim, deep networks trained on Persian letters were used to build high-level representations of Persian digits, which were indeed read out with high accuracy. Our simulations show that complex visual features, such as those mediating the identification of Persian symbols, can emerge from unsupervised learning in multilayered neural networks and can support knowledge transfer across related domains.
Inagaki, Mikio; Fujita, Ichiro
2011-07-13
Social communication in nonhuman primates and humans is strongly affected by facial information from other individuals. Many cortical and subcortical brain areas are known to be involved in processing facial information. However, how the neural representation of faces differs across different brain areas remains unclear. Here, we demonstrate that the reference frame for spatial frequency (SF) tuning of face-responsive neurons differs in the temporal visual cortex and amygdala in monkeys. Consistent with psychophysical properties for face recognition, temporal cortex neurons were tuned to image-based SFs (cycles/image) and showed viewing distance-invariant representation of face patterns. On the other hand, many amygdala neurons were influenced by retina-based SFs (cycles/degree), a characteristic that is useful for social distance computation. The two brain areas also differed in the luminance contrast sensitivity of face-responsive neurons; amygdala neurons sharply reduced their responses to low luminance contrast images, while temporal cortex neurons maintained the level of their responses. From these results, we conclude that different types of visual processing in the temporal visual cortex and the amygdala contribute to the construction of the neural representations of faces.
Representation of Letter Position in Spelling: Evidence from Acquired Dysgraphia
Fischer-Baum, Simon; McCloskey, Michael; Rapp, Brenda
2010-01-01
The graphemic representations that underlie spelling performance must encode not only the identities of the letters in a word, but also the positions of the letters. This study investigates how letter position information is represented. We present evidence from two dysgraphic individuals, CM and LSS, who perseverate letters when spelling: that is, letters from previous spelling responses intrude into subsequent responses. The perseverated letters appear more often than expected by chance in the same position in the previous and subsequent responses. We used these errors to address the question of how letter position is represented in spelling. In a series of analyses we determined how often the perseveration errors produced maintain position as defined by a number of alternative theories of letter position encoding proposed in the literature. The analyses provide strong evidence that the grapheme representations used in spelling encode letter position such that position is represented in a graded manner based on distance from both edges of the word. PMID:20378104
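The graded both-edges position code that this study argues for can be illustrated with a toy model (the exponential decay form and the dot-product similarity are illustrative assumptions; the paper's claim is only that position is represented in a graded manner by distance from both word edges):

```python
def position_code(index, length, decay=0.5):
    """Toy graded code for the letter at `index` (0-based) in a word of
    `length`: activation falls off exponentially with distance from the
    start edge and from the end edge of the word."""
    return (decay ** index, decay ** (length - 1 - index))

def position_similarity(i1, n1, i2, n2, decay=0.5):
    """Dot product of two position codes: high when both letters sit at a
    similar distance from the same word edge, even across word lengths."""
    a = position_code(i1, n1, decay)
    b = position_code(i2, n2, decay)
    return a[0] * b[0] + a[1] * b[1]
```

On this account, a perseverated first letter is far more likely to intrude into the first position of a later (possibly longer) response than into a middle position, and likewise a final letter into the final position, matching the above-chance position preservation the authors report.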
Predictive Coding in Area V4: Dynamic Shape Discrimination under Partial Occlusion
Choi, Hannah; Pasupathy, Anitha; Shea-Brown, Eric
2018-01-01
The primate visual system has an exquisite ability to discriminate partially occluded shapes. Recent electrophysiological recordings suggest that response dynamics in intermediate visual cortical area V4, shaped by feedback from prefrontal cortex (PFC), may play a key role. To probe the algorithms that may underlie these findings, we build and test a model of V4 and PFC interactions based on a hierarchical predictive coding framework. We propose that probabilistic inference occurs in two steps. Initially, V4 responses are driven solely by bottom-up sensory input and are thus strongly influenced by the level of occlusion. After a delay, V4 responses combine both feedforward input and feedback signals from the PFC; the latter reflect predictions made by PFC about the visual stimulus underlying V4 activity. We find that this model captures key features of V4 and PFC dynamics observed in experiments. Specifically, PFC responses are strongest for occluded stimuli and delayed responses in V4 are less sensitive to occlusion, supporting our hypothesis that the feedback signals from PFC underlie robust discrimination of occluded shapes. Thus, our study proposes that area V4 and PFC participate in hierarchical inference, with feedback signals encoding top-down predictions about occluded shapes. PMID:29566355
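The two-step inference described in this abstract can be caricatured as a linear mixture of feedforward drive and top-down prediction (the mixing rule and weight are illustrative assumptions, not the authors' predictive coding model):

```python
def v4_response(feedforward, pfc_prediction, alpha=0.5):
    """Toy two-phase V4 response: an initial phase driven only by
    bottom-up input, and a delayed phase mixing in PFC feedback."""
    early = feedforward
    late = (1 - alpha) * feedforward + alpha * pfc_prediction
    return early, late

# Occlusion weakens the feedforward drive (0.4 vs. 1.0 for an unoccluded
# shape), while PFC predicts the underlying shape in both cases.
early_occ, late_occ = v4_response(0.4, 1.0)
early_un, late_un = v4_response(1.0, 1.0)
```

In this caricature the delayed response differs across occlusion conditions by only half as much as the early response, mirroring the finding that delayed V4 responses are less sensitive to occlusion once PFC feedback arrives.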
Attention Priority Map of Face Images in Human Early Visual Cortex.
Mo, Ce; He, Dongjun; Fang, Fang
2018-01-03
Attention priority maps are topographic representations that are used for attention selection and guidance of task-related behavior during visual processing. Previous studies have identified attention priority maps of simple artificial stimuli in multiple cortical and subcortical areas, but investigating neural correlates of priority maps of natural stimuli is complicated by the complexity of their spatial structure and the difficulty of behaviorally characterizing their priority map. To overcome these challenges, we reconstructed the topographic representations of upright/inverted face images from fMRI BOLD signals in human early visual areas primary visual cortex (V1) and the extrastriate cortex (V2 and V3) based on a voxelwise population receptive field model. We characterized the priority map behaviorally as the first saccadic eye movement pattern when subjects performed a face-matching task relative to the condition in which subjects performed a phase-scrambled face-matching task. We found that the differential first saccadic eye movement pattern between upright/inverted and scrambled faces could be predicted from the reconstructed topographic representations in V1-V3 in humans of either sex. The coupling between the reconstructed representation and the eye movement pattern increased from V1 to V2/3 for the upright faces, whereas no such effect was found for the inverted faces. Moreover, face inversion modulated the coupling in V2/3, but not in V1. Our findings provide new evidence for priority maps of natural stimuli in early visual areas and extend traditional attention priority map theories by revealing another critical factor that affects priority maps in extrastriate cortex in addition to physical salience and task goal relevance: image configuration. 
SIGNIFICANCE STATEMENT Prominent theories of attention posit that attention sampling of visual information is mediated by a series of interacting topographic representations of visual space known as attention priority maps. Until now, neural evidence of attention priority maps has been limited to studies involving simple artificial stimuli and much remains unknown about the neural correlates of priority maps of natural stimuli. Here, we show that attention priority maps of face stimuli could be found in primary visual cortex (V1) and the extrastriate cortex (V2 and V3). Moreover, representations in extrastriate visual areas are strongly modulated by image configuration. These findings extend our understanding of attention priority maps significantly by showing that they are modulated not only by physical salience and task-goal relevance, but also by the configuration of stimulus images.
Hollingworth, Andrew; Hwang, Seongmin
2013-01-01
We examined the conditions under which a feature value in visual working memory (VWM) recruits visual attention to matching stimuli. Previous work has suggested that VWM supports two qualitatively different states of representation: an active state that interacts with perceptual selection and a passive (or accessory) state that does not. An alternative hypothesis is that VWM supports a single form of representation, with the precision of feature memory controlling whether or not the representation interacts with perceptual selection. The results of three experiments supported the dual-state hypothesis. We established conditions under which participants retained a relatively precise representation of a particular colour. If the colour was immediately task relevant, it reliably recruited attention to matching stimuli. However, if the colour was not immediately task relevant, it failed to interact with perceptual selection. Feature maintenance in VWM is not necessarily equivalent to feature-based attentional selection. PMID:24018723
ERIC Educational Resources Information Center
Olivers, Christian N. L.; Meijer, Frank; Theeuwes, Jan
2006-01-01
In 7 experiments, the authors explored whether visual attention (the ability to select relevant visual information) and visual working memory (the ability to retain relevant visual information) share the same content representations. The presence of singleton distractors interfered more strongly with a visual search task when it was accompanied by…
The Role of Prediction In Perception: Evidence From Interrupted Visual Search
Mereu, Stefania; Zacks, Jeffrey M.; Kurby, Christopher A.; Lleras, Alejandro
2014-01-01
Recent studies of rapid resumption—an observer’s ability to quickly resume a visual search after an interruption—suggest that predictions underlie visual perception. Previous studies showed that when the search display changes unpredictably after the interruption, rapid resumption disappears. This conclusion is at odds with our everyday experience, in which the visual system seems quite efficient despite continuous changes in the visual scene; in the real world, however, changes can typically be anticipated on the basis of previous knowledge. The present study evaluated whether changes to the visual display can be incorporated into perceptual hypotheses if observers are allowed to anticipate such changes. The results strongly suggest that an interrupted visual search can be rapidly resumed even when information in the display has changed after the interruption, so long as participants can anticipate the changes and are aware that they might occur. PMID:24820440
Scene perception and the visual control of travel direction in navigating wood ants
Collett, Thomas S.; Lent, David D.; Graham, Paul
2014-01-01
This review reflects a few of Mike Land's many and varied contributions to visual science. In it, we show for wood ants, as Mike has done for a variety of animals, including readers of this piece, what can be learnt from a detailed analysis of an animal's visually guided eye, head or body movements. In the case of wood ants, close examination of their body movements, as they follow visually guided routes, is starting to reveal how they perceive and respond to their visual world and negotiate a path within it. We describe first some of the mechanisms that underlie the visual control of their paths, emphasizing that vision is not the ant's only sense. In the second part, we discuss how remembered local shape-dependent and global shape-independent features of a visual scene may interact in guiding the ant's path. PMID:24395962
Visual representation of scientific information.
Wong, Bang
2011-02-15
Great technological advances have enabled researchers to generate an enormous amount of data. Data analysis is replacing data generation as the rate-limiting step in scientific research. With this wealth of information, we have an opportunity to understand the molecular causes of human diseases. However, the unprecedented scale, resolution, and variety of data pose new analytical challenges. Visual representation of data offers insights that can lead to new understanding, whether the purpose is analysis or communication. This presentation shows how art, design, and traditional illustration can enable scientific discovery. Examples will be drawn from the Broad Institute's Data Visualization Initiative, aimed at establishing processes for creating informative visualization models.
A test of the orthographic recoding hypothesis
NASA Astrophysics Data System (ADS)
Gaygen, Daniel E.
2003-04-01
The Orthographic Recoding Hypothesis [D. E. Gaygen and P. A. Luce, Percept. Psychophys. 60, 465-483 (1998)] was tested. According to this hypothesis, listeners recognize spoken words heard for the first time by mapping them onto stored representations of the orthographic forms of those words. Listeners have a stable orthographic representation, but no phonological representation, of words that have been read frequently but never heard or spoken. Such may be the case for low-frequency words such as jargon. Three experiments using visually and auditorily presented nonword stimuli tested this hypothesis. The first two experiments were explicit tests of memory (old-new tests) for nonwords presented visually. In the first experiment, the recognition of auditorily presented nonwords was facilitated when they had previously appeared on a visually presented list. The second experiment was similar but included a concurrent articulation task during the visual list presentation, thus preventing covert rehearsal of the nonwords. The results were similar to those of the first experiment. The third experiment was an indirect test of memory (an auditory lexical decision task) for visually presented nonwords. Auditorily presented nonwords were identified as nonwords significantly more slowly if they had previously appeared on the visually presented list accompanied by a concurrent articulation task.
Deep recurrent neural network reveals a hierarchy of process memory during dynamic natural vision.
Shi, Junxing; Wen, Haiguang; Zhang, Yizhen; Han, Kuan; Liu, Zhongming
2018-05-01
The human visual cortex extracts both spatial and temporal visual features to support perception and guide behavior. Deep convolutional neural networks (CNNs) provide a computational framework for modeling cortical representation and organization in spatial visual processing, but they are unable to explain how the brain processes temporal information. To overcome this limitation, we extended a CNN by adding recurrent connections to its layers, allowing spatial representations to be remembered and accumulated over time. The extended model, a recurrent neural network (RNN), embodied a hierarchical and distributed model of process memory as an integral part of visual processing. Unlike the CNN, the RNN learned spatiotemporal features from videos, enabling action recognition. The RNN predicted cortical responses to natural movie stimuli better than the CNN at all visual areas, especially those along the dorsal stream. As a fully observable model of visual processing, the RNN also revealed a cortical hierarchy of temporal receptive windows, the dynamics of process memory, and spatiotemporal representations. These results support the hypothesis of process memory and demonstrate the potential of using RNNs for in-depth computational understanding of dynamic natural vision. © 2018 Wiley Periodicals, Inc.
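The core architectural move described above, adding a recurrent connection so that a feedforward feature stage accumulates spatial features across frames, can be sketched in a few lines. This is a toy stand-in with random weights and a linear feedforward stage, not the authors' trained deep CNN:

```python
import numpy as np

# Toy sketch: a feedforward feature stage (stand-in for one CNN layer)
# gains a recurrent connection, so spatial features are remembered and
# accumulated across video frames. All weights are random stand-ins.
rng = np.random.default_rng(1)

n_pixels, n_units, n_frames = 16, 8, 5
W_ff = 0.1 * rng.standard_normal((n_units, n_pixels))   # feedforward weights
W_rec = 0.1 * rng.standard_normal((n_units, n_units))   # recurrent weights

def run_rnn(frames, W_ff, W_rec):
    """h_t = relu(W_ff @ x_t + W_rec @ h_{t-1}); h carries frame history."""
    h = np.zeros(W_rec.shape[0])
    states = []
    for frame in frames:
        h = np.maximum(0.0, W_ff @ frame + W_rec @ h)
        states.append(h)
    return np.array(states)

video = rng.standard_normal((n_frames, n_pixels))       # fake flattened frames
states = run_rnn(video, W_ff, W_rec)                    # (n_frames, n_units)
# Setting W_rec to zero recovers a purely feedforward (CNN-like) response.
```

The design choice illustrated here is that memory lives in the hidden state `h`, so each layer's response at time t depends on the entire stimulus history, not just the current frame.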
An evaluation of space time cube representation of spatiotemporal patterns.
Kristensson, Per Ola; Dahlbäck, Nils; Anundi, Daniel; Björnstad, Marius; Gillberg, Hanna; Haraldsson, Jonas; Mårtensson, Ingrid; Nordvall, Mathias; Ståhl, Josefine
2009-01-01
Space time cube representation is an information visualization technique in which spatiotemporal data points are mapped into a cube. Information visualization researchers have previously argued that space time cube representation is beneficial in revealing complex spatiotemporal patterns in a data set to users. The argument is based on the fact that both temporal and spatial information are displayed simultaneously to users, an effect difficult to achieve in other representations. However, to our knowledge the actual usefulness of space time cube representation in conveying complex spatiotemporal patterns to users has not been empirically validated. To fill this gap, we report on a between-subjects experiment comparing novice users' error rates and response times when answering a set of questions using either the space time cube or a baseline 2D representation. For some simple questions, the error rates were lower with the baseline representation. For complex questions, where participants needed an overall understanding of the spatiotemporal structure of the data set, the space time cube representation yielded response times that were on average twice as fast, with no difference in error rates compared with the baseline. These results provide an empirical foundation for the hypothesis that space time cube representation benefits users analyzing complex spatiotemporal patterns.
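The mapping at the heart of the technique, spatiotemporal points placed in a cube with space on the base and time on the vertical axis, can be sketched as follows. The event data below are hypothetical, and the plotting step is only indicated in a comment:

```python
import numpy as np

# Minimal sketch of the space time cube mapping: each event (x, y, t) is
# rescaled into a unit cube, with the two spatial coordinates on the base
# and time on the vertical axis. Hypothetical example data.
events = np.array([
    # x (east), y (north), t (seconds since start)
    [2.0, 1.0,   0.0],
    [3.0, 4.0,  60.0],
    [5.0, 2.0, 120.0],
])

def to_cube(points):
    """Rescale each column of (x, y, t) points into [0, 1]."""
    lo = points.min(axis=0)
    span = np.ptp(points, axis=0)            # max - min per column
    span = np.where(span == 0, 1.0, span)    # guard against a constant column
    return (points - lo) / span

cube = to_cube(events)
# A 3D scatter or polyline of cube[:, 0], cube[:, 1], cube[:, 2] (e.g. with
# matplotlib's 3D axes) then shows each trajectory rising through the cube.
```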
Amit, Elinor; Hoeflin, Caitlyn; Hamzah, Nada; Fedorenko, Evelina
2017-01-01
Humans rely on at least two modes of thought: verbal (inner speech) and visual (imagery). Are these modes independent, or does engaging in one entail engaging in the other? To address this question, we performed a behavioral and an fMRI study. In the behavioral experiment, participants received a prompt and were asked to either silently generate a sentence or create a visual image in their mind. They were then asked to judge the vividness of the resulting representation, and of the potentially accompanying representation in the other format. In the fMRI experiment, participants had to recall sentences or images (that they were familiarized with prior to the scanning session) given prompts, or read sentences and view images, in the control, perceptual, condition. An asymmetry was observed between inner speech and visual imagery. In particular, inner speech was engaged to a greater extent during verbal than visual thought, but visual imagery was engaged to a similar extent during both modes of thought. Thus, it appears that people generate more robust verbal representations during deliberate inner speech compared to when their intent is to visualize. However, they generate visual images regardless of whether their intent is to visualize or to think verbally. One possible interpretation of these results is that visual thinking is somehow primary, given the relatively late emergence of verbal abilities during human development and in the evolution of our species. PMID:28323162
Villena-González, Mario; López, Vladimir; Rodríguez, Eugenio
2016-05-15
When attention is oriented toward inner thoughts, as spontaneously occurs during mind wandering, the processing of external information is attenuated. However, the potential effects of a thought's content on sensory attenuation are still unknown. The present study assessed whether the representational format of thoughts, such as visual imagery or inner speech, might differentially affect the sensory processing of external stimuli. We recorded the brain activity of 20 participants (12 women) while they were exposed to a probe visual stimulus in three conditions: executing a task on the visual probe (externally oriented attention), and two conditions involving inward-turned attention, i.e., generating inner speech and performing visual imagery. Event-related potential results showed that the P1 amplitude, related to the sensory response, was significantly attenuated during both tasks involving inward attention compared with the external task. When the two representational formats were compared, the visual imagery condition showed stronger attenuation of sensory processing than the inner speech condition. Alpha power in visual areas was measured as an index of cortical inhibition. Larger alpha amplitude was found when participants engaged in an internal thought than during the external task, with visual imagery showing even more alpha power than the inner speech condition. Our results show, for the first time to our knowledge, that visual attentional processing of external stimuli during self-generated thoughts is differentially affected by the representational format of the ongoing train of thought. Copyright © 2016 Elsevier Inc. All rights reserved.
Redish, A. David
2016-01-01
When rats come to a decision point, they sometimes pause and look back and forth as if deliberating over the choice; at other times, they proceed as if they have already made their decision. In the 1930s, this pause-and-look behaviour was termed ‘vicarious trial and error’ (VTE), with the implication that the rat was ‘thinking about the future’. The discovery in 2007 that the firing of hippocampal place cells gives rise to alternating representations of each of the potential path options in a serial manner during VTE suggested a possible neural mechanism that could underlie the representations of future outcomes. More-recent experiments examining VTE in rats suggest that there are direct parallels to human processes of deliberative decision making, working memory and mental time travel. PMID:26891625
The prefrontal cortex: categories, concepts and cognition.
Miller, Earl K; Freedman, David J; Wallis, Jonathan D
2002-01-01
The ability to generalize behaviour-guiding principles and concepts from experience is key to intelligent, goal-directed behaviour. It allows us to deal efficiently with a complex world and to adapt readily to novel situations. We review evidence that the prefrontal cortex-the cortical area that reaches its greatest elaboration in primates-plays a central part in acquiring and representing this information. The prefrontal cortex receives highly processed information from all major forebrain systems, and neurophysiological studies suggest that it synthesizes this into representations of learned task contingencies, concepts and task rules. In short, the prefrontal cortex seems to underlie our internal representations of the 'rules of the game'. This may provide the necessary foundation for the complex behaviour of primates, in whom this structure is most elaborate. PMID:12217179
Implications on visual apperception: energy, duration, structure and synchronization.
Bókkon, I; Vimal, Ram Lakhan Pandey
2010-07-01
Although primary visual cortex (V1, or striate cortex) activity per se is not sufficient for visual apperception (normal conscious visual experiences and conscious functions such as detection, discrimination, and recognition), the same is also true for extrastriate visual areas (such as V2, V3, V4/V8/VO, V5/MT/MST, IT, and GF). In the absence of V1, visual signals can still reach several extrastriate areas but appear incapable of generating normal conscious visual experiences. It is scarcely emphasized in the scientific literature that conscious perceptions and representations must also meet essential energetic conditions. These energetic conditions are achieved by spatiotemporal networks of dynamic mitochondrial distributions inside neurons. However, the highest density of neocortical neurons devoted to representing the visual field (number of neurons per degree of visual angle) is found in retinotopic V1. This means that the highest mitochondrial (energetic) activity can be achieved in the mitochondrial cytochrome oxidase-rich areas of V1. Thus, V1 bears the highest energy allocation for visual representation. In addition, conscious perceptions also demand structural conditions, an adequate duration of information representation, and synchronized neural processes and/or 'interactive hierarchical structuralism.' For visual apperception, various visual areas are involved depending on context, such as stimulus characteristics including color, form/shape, motion, and other features. Here, we focus primarily on V1, where specific mitochondrial-rich retinotopic structures are found; we also briefly discuss V2, where these structures occur in smaller concentrations. Finally, we point out that residual brain states are not fully reflected in active neural patterns after visual perception: such subliminal residual states are not captured by passive neural recording techniques but require active stimulation to be revealed.
Geometric Representations for Discrete Fourier Transforms
NASA Technical Reports Server (NTRS)
Cambell, C. W.
1986-01-01
Simple geometric representations show the symmetry and periodicity of discrete Fourier transforms (DFTs). They help in visualizing the requirements for storing and manipulating transform values in computations. The representations are useful in any number of dimensions, but particularly in the one-, two-, and three-dimensional cases often encountered in practice.
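The two regularities these representations display, periodicity in the frequency index and conjugate symmetry for real-valued inputs, can be checked numerically. This is a sketch using NumPy's FFT, not the paper's geometric constructions:

```python
import numpy as np

# Periodicity and conjugate symmetry of the DFT. For a real input of
# length N, X[N - k] == conj(X[k]), so only about half the transform
# values need to be stored or computed.
N = 8
rng = np.random.default_rng(0)
x = rng.standard_normal(N)     # real-valued input signal
X = np.fft.fft(x)

# Conjugate symmetry: X[N - k] == conj(X[k]) for k = 1 .. N-1.
for k in range(1, N):
    assert np.allclose(X[N - k], np.conj(X[k]))

# Periodicity: the DFT definition evaluated at k and k + N coincides,
# so frequency indices can be pictured as wrapping around a circle.
def dft_at(x, k):
    n = np.arange(len(x))
    return np.sum(x * np.exp(-2j * np.pi * k * n / len(x)))

assert np.allclose(dft_at(x, 3), dft_at(x, 3 + N))
```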
Negative emotion boosts quality of visual working memory representation.
Xie, Weizhen; Zhang, Weiwei
2016-08-01
Negative emotion impacts a variety of cognitive processes, including working memory (WM). The present study investigated whether negative emotion modulated WM capacity (quantity) or resolution (quality), 2 independent limits on WM storage. In Experiment 1, observers tried to remember several colors over a 1-s delay and then recalled the color of a randomly picked memory item by clicking the best-matching color on a continuous color wheel. On each trial, before the visual WM task, 1 of 3 emotion conditions (negative, neutral, or positive) was induced by having observers rate the valence of an International Affective Picture System image. Visual WM under negative emotion showed enhanced resolution compared with the neutral and positive conditions, whereas the number of retained representations was comparable across the 3 emotion conditions. These effects generalized to closed-contour shapes in Experiment 2. To isolate the locus of these effects, Experiment 3 adopted an iconic memory version of the color recall task by eliminating the 1-s retention interval. No significant change in the quantity or quality of iconic memory was observed, suggesting that the resolution effects in the first 2 experiments were critically dependent on the need to retain memory representations over a short period of time. Taken together, these results suggest that negative emotion selectively boosts visual WM quality, supporting the dissociable nature of the quantitative and qualitative aspects of visual WM representation. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
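For context, the "quality" measure in such continuous-report tasks is derived from the circular error between the reported and target hue on the wheel. A minimal sketch with hypothetical trial data (full analyses typically fit a mixture model, which the abstract does not detail):

```python
import numpy as np

# Circular recall error on a 360-degree color wheel: the basic quantity
# from which resolution (quality) estimates are derived. Example trial
# data below are hypothetical.

def circular_error(response_deg, target_deg):
    """Signed response error wrapped into [-180, 180) degrees."""
    return (response_deg - target_deg + 180.0) % 360.0 - 180.0

targets = np.array([10.0, 200.0, 355.0])
responses = np.array([15.0, 190.0, 5.0])
errors = circular_error(responses, targets)   # -> [5., -10., 10.]

# A narrower error distribution around zero indicates higher resolution
# (better quality); a high proportion of near-uniform errors indicates
# guessing (lower quantity of retained items).
resolution_proxy = errors.std()
```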
Attention biases visual activity in visual short-term memory.
Kuo, Bo-Cheng; Stokes, Mark G; Murray, Alexandra M; Nobre, Anna Christina
2014-07-01
In the current study, we tested whether representations in visual STM (VSTM) can be biased via top-down attentional modulation of visual activity in retinotopically specific locations. We manipulated attention using retrospective cues presented during the retention interval of a VSTM task. Retrospective cues triggered activity in a large-scale network implicated in attentional control and led to retinotopically specific modulation of activity in early visual areas V1-V4. Importantly, shifts of attention during VSTM maintenance were associated with changes in functional connectivity between pFC and retinotopic regions within V4. Our findings provide new insights into top-down control mechanisms that modulate VSTM representations for flexible and goal-directed maintenance of the most relevant memoranda.
Retinotopically specific reorganization of visual cortex for tactile pattern recognition
Cheung, Sing-Hang; Fang, Fang; He, Sheng; Legge, Gordon E.
2009-01-01
Although previous studies have shown that Braille reading and other tactile-discrimination tasks activate the visual cortex of blind and sighted people [1–5], it is not known whether this kind of cross-modal reorganization is influenced by retinotopic organization. We have addressed this question by studying S, a visually impaired adult with the rare ability to read print visually and Braille by touch. S had normal visual development until age six years, and thereafter severe acuity reduction due to corneal opacification, but no evidence of visual-field loss. Functional magnetic resonance imaging (fMRI) revealed that, in S’s early visual areas, tactile information processing activated what would be the foveal representation for normally-sighted individuals, and visual information processing activated what would be the peripheral representation. Control experiments showed that this activation pattern was not due to visual imagery. S’s high-level visual areas which correspond to shape- and object-selective areas in normally-sighted individuals were activated by both visual and tactile stimuli. The retinotopically specific reorganization in early visual areas suggests an efficient redistribution of neural resources in the visual cortex. PMID:19361999
The Anatomical and Functional Organization of the Human Visual Pulvinar
Pinsk, Mark A.; Kastner, Sabine
2015-01-01
The pulvinar is the largest nucleus in the primate thalamus and contains extensive, reciprocal connections with visual cortex. Although the anatomical and functional organization of the pulvinar has been extensively studied in old and new world monkeys, little is known about the organization of the human pulvinar. Using high-resolution functional magnetic resonance imaging at 3 T, we identified two visual field maps within the ventral pulvinar, referred to as vPul1 and vPul2. Both maps contain an inversion of contralateral visual space with the upper visual field represented ventrally and the lower visual field represented dorsally. vPul1 and vPul2 border each other at the vertical meridian and share a representation of foveal space with iso-eccentricity lines extending across areal borders. Additional, coarse representations of contralateral visual space were identified within ventral medial and dorsal lateral portions of the pulvinar. Connectivity analyses on functional and diffusion imaging data revealed a strong distinction in thalamocortical connectivity between the dorsal and ventral pulvinar. The two maps in the ventral pulvinar were most strongly connected with early and extrastriate visual areas. Given the shared eccentricity representation and similarity in cortical connectivity, we propose that these two maps form a distinct visual field map cluster and perform related functions. The dorsal pulvinar was most strongly connected with parietal and frontal areas. The functional and anatomical organization observed within the human pulvinar was similar to the organization of the pulvinar in other primate species. SIGNIFICANCE STATEMENT The anatomical organization and basic response properties of the visual pulvinar have been extensively studied in nonhuman primates. Yet, relatively little is known about the functional and anatomical organization of the human pulvinar. 
Using neuroimaging, we found multiple representations of visual space within the ventral human pulvinar and extensive topographically organized connectivity with visual cortex. This organization is similar to other nonhuman primates and provides additional support that the general organization of the pulvinar is consistent across the primate phylogenetic tree. These results suggest that the human pulvinar, like other primates, is well positioned to regulate corticocortical communication. PMID:26156987
Cebolla, Ana M.; Petieau, Mathieu; Cevallos, Carlos; Leroy, Axelle; Dan, Bernard; Cheron, Guy
2015-01-01
In order to characterize the neural signature of a motor imagery (MI) task, the present study investigates for the first time the oscillation characteristics, including both time-frequency measurements (event-related spectral perturbation and intertrial coherence, ITC), underlying the variations in the temporal measurements (event-related potentials, ERPs) directly related to an MI task. We hypothesize that significant variations in both time-frequency measurements underlie the specific changes in the ERPs directly related to MI. For the MI task, we chose a simple everyday task (throwing a tennis ball) that does not require any particular motor expertise, set within the controlled virtual reality scenario of a tennis court. When compared to the rest condition, a consistent, long-lasting negative fronto-central ERP wave was accompanied by significant changes in both time-frequency measurements, suggesting long-lasting cortical activity reorganization. The ERP wave was characterized by two peaks, at about 300 ms (N300) and 1000 ms (N1000). The N300 component was centrally localized on the scalp and was accompanied by significant phase consistency in the delta brain rhythms in the contralateral central scalp areas. The N1000 component spread wider centrally and was accompanied by a significant power decrease (or event-related desynchronization) in low beta brain rhythms localized in fronto-precentral and parieto-occipital scalp areas, and also by a significant power increase (or event-related synchronization) in theta brain rhythms spreading fronto-centrally. During the transition from N300 to N1000, contralateral alpha (mu) as well as post-central and parietal theta rhythms occurred.
The visual representation of movement formed in the minds of participants might underlie a top-down process from the fronto-central areas which is reflected by the amplitude changes observed in the fronto-central ERPs and by the significant phase synchrony in contralateral fronto-central delta and contralateral central mu to parietal theta presented here. PMID:26648903
ERIC Educational Resources Information Center
Papageorgiou, George; Amariotakis, Vasilios; Spiliotopoulou, Vasiliki
2017-01-01
The main objective of this work is to analyse the visual representations (VRs) of the microcosm depicted in nine Greek secondary chemistry school textbooks of the last three decades in order to construct a systemic network for their main conceptual framework and to evaluate the contribution of each one of the resulting categories to the network.…
ERIC Educational Resources Information Center
Eshach, Haim
2010-01-01
The starting point of the present research is the following question: since we live in an age that makes increasing use of visual representations of all sorts, is not the visual representation a learner constructs a window into his/her understanding of what is or is not being learned? Following this direction of inquiry, the present preliminary…
ERIC Educational Resources Information Center
Savinainen, Antti; Mäkynen, Asko; Nieminen, Pasi; Viiri, Jouni
2017-01-01
This paper presents a research-based teaching-learning sequence (TLS) that focuses on the notion of interaction in teaching Newton's third law (N3 law) which is, as earlier studies have shown, a challenging topic for students to learn. The TLS made systematic use of a visual representation tool--an interaction diagram (ID)--highlighting…
ERIC Educational Resources Information Center
Pelletier, Caroline
2005-01-01
This paper compares the oral and visual representations which 12 to 13-year-old students produced in studying computer games as part of an English and Media course. It presents the arguments for studying multimodal texts as part of a literacy curriculum and then provides an overview of the games course devised by teachers and researchers. The…
ERIC Educational Resources Information Center
Longo, Palma J.
A long-term study was conducted to test the effectiveness of visual thinking networking (VTN), a new generation of knowledge representation strategies, with 56 ninth-grade earth science students. Recent findings about the brain's organization and processing conceptually ground VTN as a new cognitive tool used by learners when making their…
ERIC Educational Resources Information Center
Rhile, Ian J.
2014-01-01
Atomic orbitals are a theme throughout the undergraduate chemistry curriculum, and visualizing them has been a theme in this journal. Contour plots as isosurfaces or contour lines in a plane are the most familiar representations of the hydrogen wave functions. In these representations, a surface of a fixed value of the wave function ? is plotted…
ERIC Educational Resources Information Center
Boonen, Anton J. H.; Reed, Helen C.; Schoonenboom, Judith; Jolles, Jelle
2016-01-01
Non-routine word problem solving is an essential feature of the mathematical development of elementary school students worldwide. Many students experience difficulties in solving these problems due to erroneous problem comprehension. These difficulties could be alleviated by instructing students how to use visual representations that clarify the…
Olivers, Christian N L; Meijer, Frank; Theeuwes, Jan
2006-10-01
In 7 experiments, the authors explored whether visual attention (the ability to select relevant visual information) and visual working memory (the ability to retain relevant visual information) share the same content representations. The presence of singleton distractors interfered more strongly with a visual search task when it was accompanied by an additional memory task. Singleton distractors interfered even more when they were identical or related to the object held in memory, but only when it was difficult to verbalize the memory content. Furthermore, this content-specific interaction occurred for features that were relevant to the memory task but not for irrelevant features of the same object or for once-remembered objects that could be forgotten. Finally, memory-related distractors attracted more eye movements but did not result in longer fixations. The results demonstrate memory-driven attentional capture on the basis of content-specific representations. Copyright 2006 APA.
Direct neural pathways convey distinct visual information to Drosophila mushroom bodies
Vogt, Katrin; Aso, Yoshinori; Hige, Toshihide; Knapek, Stephan; Ichinose, Toshiharu; Friedrich, Anja B; Turner, Glenn C; Rubin, Gerald M; Tanimoto, Hiromu
2016-01-01
Previously, we demonstrated that visual and olfactory associative memories of Drosophila share mushroom body (MB) circuits (Vogt et al., 2014). Unlike for odor representation, the MB circuit for visual information has not been characterized. Here, we show that a small subset of MB Kenyon cells (KCs) selectively responds to visual but not olfactory stimulation. The dendrites of these atypical KCs form a ventral accessory calyx (vAC), distinct from the main calyx that receives olfactory input. We identified two types of visual projection neurons (VPNs) directly connecting the optic lobes and the vAC. Strikingly, these VPNs are differentially required for visual memories of color and brightness. The segregation of visual and olfactory domains in the MB allows independent processing of distinct sensory memories and may be a conserved form of sensory representations among insects. DOI: http://dx.doi.org/10.7554/eLife.14009.001 PMID:27083044
SEEING IS BELIEVING, AND BELIEVING IS SEEING
NASA Astrophysics Data System (ADS)
Dutrow, B. L.
2009-12-01
Geoscience disciplines are filled with visual displays of data. From the first cave drawings to remote imaging of our planet, visual displays of information have been used to understand and interpret our discipline. As practitioners of the art, we build visuals into the core around which we write scholarly articles, teach our students, and make everyday decisions. The effectiveness of visual communication, however, varies greatly. For many visual displays, a significant amount of prior knowledge is needed to understand and interpret the representations; if this is missing, key components of communication fail. One common example is the use of animations to explain high-density and typically complex data. Do animations effectively convey information, simply "wow an audience," or do they confuse the subject by using unfamiliar forms and representations? Prior knowledge shapes the information derived from visuals, and when communicating with non-experts this factor is exacerbated. For example, in an advanced geology course, fractures in a rock are viewed by petroleum engineers as conduits for fluid migration, while geoscience students 'see' the minerals lining the fracture. In contrast, a lay audience might view these images as abstract art. Without specific and direct accompanying verbal or written communication, such an image is viewed radically differently by disparate audiences. Experts and non-experts do not 'see' equivalent images. Each visual must be carefully constructed with its communication task in mind. Enhancing learning and communication at all levels through visual displays of data requires that we teach visual literacy as part of our curricula. As we move from one form of visual representation to another, our mental images are expanded, as is our ability to see and interpret new visual forms, thus promoting life-long learning. Visual literacy is key to communication in our visually rich discipline. What do you see?
Christophel, Thomas B; Allefeld, Carsten; Endisch, Christian; Haynes, John-Dylan
2018-06-01
Traditional views of visual working memory postulate that memorized contents are stored in dorsolateral prefrontal cortex using an adaptive and flexible code. In contrast, recent studies proposed that contents are maintained by posterior brain areas using codes akin to perceptual representations. An important question is whether this reflects a difference in the level of abstraction between posterior and prefrontal representations. Here, we investigated whether neural representations of visual working memory contents are view-independent, as indicated by rotation-invariance. Using functional magnetic resonance imaging and multivariate pattern analyses, we show that when subjects memorize complex shapes, both posterior and frontal brain regions maintain the memorized contents using a rotation-invariant code. Importantly, we found the representations in frontal cortex to be localized to the frontal eye fields rather than dorsolateral prefrontal cortices. Thus, our results give evidence for the view-independent storage of complex shapes in distributed representations across posterior and frontal brain regions.
Visual management of large scale data mining projects.
Shah, I; Hunter, L
2000-01-01
This paper describes a unified framework for visualizing the preparations for, and results of, hundreds of machine learning experiments. These experiments were designed to improve the accuracy of enzyme functional predictions from sequence, and in many cases were successful. Our system provides graphical user interfaces for defining and exploring training datasets and various representational alternatives, for inspecting the hypotheses induced by various types of learning algorithms, for visualizing the global results, and for inspecting in detail results for specific training sets (functions) and examples (proteins). The visualization tools serve as a navigational aid through a large amount of sequence data and induced knowledge. They provided significant help in understanding both the significance and the underlying biological explanations of our successes and failures. Using these visualizations it was possible to efficiently identify weaknesses of the modular sequence representations and induction algorithms which suggest better learning strategies. The context in which our data mining visualization toolkit was developed was the problem of accurately predicting enzyme function from protein sequence data. Previous work demonstrated that approximately 6% of enzyme protein sequences are likely to be assigned incorrect functions on the basis of sequence similarity alone. In order to test the hypothesis that more detailed sequence analysis using machine learning techniques and modular domain representations could address many of these failures, we designed a series of more than 250 experiments using information-theoretic decision tree induction and naive Bayesian learning on local sequence domain representations of problematic enzyme function classes. In more than half of these cases, our methods were able to perfectly discriminate among various possible functions of similar sequences. We developed and tested our visualization techniques on this application.
Modular Representation of Luminance Polarity in the Superficial Layers of Primary Visual Cortex
Smith, Gordon B.; Whitney, David E.; Fitzpatrick, David
2016-01-01
The spatial arrangement of luminance increments (ON) and decrements (OFF) falling on the retina provides a wealth of information used by central visual pathways to construct coherent representations of visual scenes. But how the polarity of luminance change is represented in the activity of cortical circuits remains unclear. Using wide-field epifluorescence and two-photon imaging, we demonstrate a robust modular representation of luminance polarity (ON or OFF) in the superficial layers of ferret primary visual cortex. Polarity-specific domains are found with both uniform changes in luminance and single light/dark edges, and include neurons selective for orientation and direction of motion. The integration of orientation and polarity preference is evident in the selectivity and discrimination capabilities of most layer 2/3 neurons. We conclude that polarity selectivity is an integral feature of layer 2/3 neurons, ensuring that the distinction between light and dark stimuli is available for further processing in downstream extrastriate areas. PMID:26590348
Understanding visualization: a formal approach using category theory and semiotics.
Vickers, Paul; Faith, Joe; Rossiter, Nick
2013-06-01
This paper combines the vocabulary of semiotics and category theory to provide a formal analysis of visualization. It shows how familiar processes of visualization fit the semiotic frameworks of both Saussure and Peirce, and extends these structures using the tools of category theory to provide a general framework for understanding visualization in practice, including: Relationships between systems, data collected from those systems, renderings of those data in the form of representations, the reading of those representations to create visualizations, and the use of those visualizations to create knowledge and understanding of the system under inspection. The resulting framework is validated by demonstrating how familiar information visualization concepts (such as literalness, sensitivity, redundancy, ambiguity, generalizability, and chart junk) arise naturally from it and can be defined formally and precisely. This paper generalizes previous work on the formal characterization of visualization by, inter alia, Ziemkiewicz and Kosara and allows us to formally distinguish properties of the visualization process that previous work does not.
Sensitivity to timing and order in human visual cortex
Singer, Jedediah M.; Madsen, Joseph R.; Anderson, William S.
2014-01-01
Visual recognition takes a small fraction of a second and relies on the cascade of signals along the ventral visual stream. Given the rapid path through multiple processing steps between photoreceptors and higher visual areas, information must progress from stage to stage very quickly. This rapid progression of information suggests that fine temporal details of the neural response may be important to the brain's encoding of visual signals. We investigated how changes in the relative timing of incoming visual stimulation affect the representation of object information by recording intracranial field potentials along the human ventral visual stream while subjects recognized objects whose parts were presented with varying asynchrony. Visual responses along the ventral stream were sensitive to timing differences as small as 17 ms between parts. In particular, there was a strong dependency on the temporal order of stimulus presentation, even at short asynchronies. From these observations we infer that the neural representation of complex information in visual cortex can be modulated by rapid dynamics on scales of tens of milliseconds. PMID:25429116
Representational Distance Learning for Deep Neural Networks
McClure, Patrick; Kriegeskorte, Nikolaus
2016-01-01
Deep neural networks (DNNs) provide useful models of visual representational transformations. We present a method that enables a DNN (student) to learn from the internal representational spaces of a reference model (teacher), which could be another DNN or, in the future, a biological brain. Representational spaces of the student and the teacher are characterized by representational distance matrices (RDMs). We propose representational distance learning (RDL), a stochastic gradient descent method that drives the RDMs of the student to approximate the RDMs of the teacher. We demonstrate that RDL is competitive with other transfer learning techniques for two publicly available benchmark computer vision datasets (MNIST and CIFAR-100), while allowing for architectural differences between student and teacher. By pulling the student's RDMs toward those of the teacher, RDL significantly improved visual classification performance when compared to baseline networks that did not use transfer learning. In the future, RDL may enable combined supervised training of deep neural networks using task constraints (e.g., images and category labels) and constraints from brain-activity measurements, so as to build models that replicate the internal representational spaces of biological brains. PMID:28082889
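As a concrete illustration, the RDM comparison at the heart of RDL can be sketched in a few lines of NumPy. This is a minimal sketch under assumed choices (correlation distance for the RDMs, a squared-difference loss over the off-diagonal entries); the function names are illustrative, and the paper optimizes such a term with stochastic gradient descent alongside the task loss rather than in isolation.

```python
import numpy as np

def rdm(activations):
    """Representational distance matrix: pairwise correlation
    distances between activation patterns for a set of stimuli.
    activations has shape (n_stimuli, n_units)."""
    return 1.0 - np.corrcoef(activations)

def rdl_loss(student_acts, teacher_acts):
    """Auxiliary loss pulling the student's RDM toward the teacher's,
    compared via squared differences of the off-diagonal entries
    (both RDMs are symmetric with zero diagonal)."""
    ds, dt = rdm(student_acts), rdm(teacher_acts)
    iu = np.triu_indices_from(ds, k=1)  # upper triangle, excl. diagonal
    return float(np.mean((ds[iu] - dt[iu]) ** 2))
```

In training, a term like `rdl_loss` would be weighted and added to the classification loss, so gradient descent pulls the student's representational geometry toward the teacher's while still allowing arbitrary architectural differences between the two networks.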
NASA Astrophysics Data System (ADS)
Arevalo, John; Cruz-Roa, Angel; González, Fabio A.
2013-11-01
This paper presents a novel method for basal-cell carcinoma detection, which combines state-of-the-art methods for unsupervised feature learning (UFL) and bag of features (BOF) representation. BOF, which is a form of representation learning, has shown good performance in automatic histopathology image classification. In BOF, patches are usually represented using descriptors such as SIFT and DCT. We propose to use UFL to learn the patch representation itself. This is accomplished by applying a topographic UFL method (T-RICA), which automatically learns visual invariance properties of color, scale and rotation from an image collection. These learned features also reveal the visual properties associated with cancerous and healthy tissues and improve carcinoma detection results by 7% with respect to traditional autoencoders, and 6% with respect to standard DCT representations, obtaining on average 92% F-score and 93% balanced accuracy.
Visual memory transformations in dyslexia.
Barnes, James; Hinkley, Lisa; Masters, Stuart; Boubert, Laura
2007-06-01
Representational Momentum refers to observers' distortion of recognition memory for pictures that imply motion because of an automatic mental process which extrapolates along the implied trajectory of the picture. Neuroimaging evidence suggests that activity in the magnocellular visual pathway is necessary for representational momentum to occur. It has been proposed that individuals with dyslexia have a magnocellular deficit, so it was hypothesised that these individuals would show reduced or absent representational momentum. In this study, 30 adults with dyslexia and 30 age-matched controls were compared on two tasks, one linear and one rotation, which had previously elicited the representational momentum effect. Analysis indicated significant differences in the performance of the two groups, with the dyslexia group having a reduced susceptibility to representational momentum in both linear and rotational directions. The findings highlight that deficits in temporal spatial processing may contribute to the perceptual profile of dyslexia.
Student Visual Communication of Evolution
NASA Astrophysics Data System (ADS)
Oliveira, Alandeom W.; Cook, Kristin
2017-06-01
Despite growing recognition of the importance of visual representations to science education, previous research has given attention mostly to verbal modalities of evolution instruction. Visual aspects of classroom learning of evolution are yet to be systematically examined by science educators. The present study attends to this issue by exploring the types of evolutionary imagery deployed by secondary students. Our visual design analysis revealed that students resorted to two larger categories of images when visually communicating evolution: spatial metaphors (images that provided a spatio-temporal account of human evolution as a metaphorical "walk" across time and space) and symbolic representations ("icons of evolution" such as personal portraits of Charles Darwin that simply evoked evolutionary theory rather than metaphorically conveying its conceptual contents). It is argued that students need opportunities to collaboratively critique evolutionary imagery and to extend their visual perception of evolution beyond dominant images.
Roth, Zvi N.
2016-01-01
Neural responses in visual cortex are governed by a topographic mapping from retinal locations to cortical responses. Moreover, at the voxel population level early visual cortex (EVC) activity enables accurate decoding of stimuli locations. However, in many cases information enabling one to discriminate between locations (i.e., discriminative information) may be less relevant than information regarding the relative location of two objects (i.e., relative information). For example, when planning to grab a cup, determining whether the cup is located at the same retinal location as the hand is hardly relevant, whereas the location of the cup relative to the hand is crucial for performing the action. We have previously used multivariate pattern analysis techniques to measure discriminative location information, and found the highest levels in EVC, in line with other studies. Here we show, using representational similarity analysis, that availability of discriminative information in fMRI activation patterns does not entail availability of relative information. Specifically, we find that relative location information can be reliably extracted from activity patterns in posterior intraparietal sulcus (pIPS), but not from EVC, where we find the spatial representation to be warped. We further show that this variability in relative information levels between regions can be explained by a computational model based on an array of receptive fields. Moreover, when the model's receptive fields are extended to include inhibitory surround regions, the model can account for the spatial warping in EVC. These results demonstrate how size and shape properties of receptive fields in human visual cortex contribute to the transformation of discriminative spatial representations into relative spatial representations along the visual stream. PMID:27242455
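The receptive-field account in this abstract can be illustrated with a toy one-dimensional sketch, assuming Gaussian receptive fields on a dense array and a simple population-vector readout; all names and parameter values here are illustrative, not the authors' actual model, and the optional surround term only gestures at the inhibitory-surround extension the abstract mentions.

```python
import numpy as np

def rf_responses(x, centers, sigma=0.5, surround=0.0):
    """Responses of a 1D array of Gaussian receptive fields to a
    stimulus at position x. With surround > 0, a broader inhibitory
    Gaussian is subtracted (difference of Gaussians)."""
    d2 = (x - centers) ** 2
    resp = np.exp(-d2 / (2.0 * sigma ** 2))
    if surround > 0:
        resp -= surround * np.exp(-d2 / (2.0 * (3.0 * sigma) ** 2))
    return resp

def decode_position(resp, centers):
    """Population-vector readout: response-weighted mean of RF centers."""
    w = np.clip(resp, 0.0, None)  # rectify; surround can drive responses negative
    return float(np.sum(w * centers) / np.sum(w))

centers = np.linspace(-5, 5, 201)  # dense, evenly tiled RF array
est = decode_position(rf_responses(1.2, centers), centers)
```

With a dense, homogeneous array the readout recovers absolute position accurately; heterogeneity in RF size and surround strength across regions is what, in the authors' account, warps relative spatial representations.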
The Role of Higher Level Adaptive Coding Mechanisms in the Development of Face Recognition
ERIC Educational Resources Information Center
Pimperton, Hannah; Pellicano, Elizabeth; Jeffery, Linda; Rhodes, Gillian
2009-01-01
Developmental improvements in face identity recognition ability are widely documented, but the source of children's immaturity in face recognition remains unclear. Differences in the way in which children and adults visually represent faces might underlie immaturities in face recognition. Recent evidence of a face identity aftereffect (FIAE),…
The Generation and Maintenance of Visual Mental Images: Evidence from Image Type and Aging
ERIC Educational Resources Information Center
De Beni, Rossana; Pazzaglia, Francesca; Gardini, Simona
2007-01-01
Imagery is a multi-componential process involving different mental operations. This paper addresses whether separate processes underlie the generation, maintenance and transformation of mental images or whether these cognitive processes rely on the same mental functions. We also examine the influence of age on these mental operations for…
The Influence of Attentional Focus Instructions and Vision on Jump Height Performance
ERIC Educational Resources Information Center
Abdollahipour, Reza; Psotta, Rudolf; Land, William M.
2016-01-01
Purpose: Studies have suggested that the use of visual information may underlie the benefit associated with an external focus of attention. Recent studies exploring this connection have primarily relied on motor tasks that involve manipulation of an object (object projection). The present study examined whether vision influences the effect of…
ERIC Educational Resources Information Center
Jones, M. Gail; Minogue, James; Oppewal, Tom; Cook, Michelle P.; Broadwell, Bethany
2006-01-01
Science instruction is typically highly dependent on visual representations of scientific concepts that are communicated through textbooks, teacher presentations, and computer-based multimedia materials. Little is known about how students with visual impairments access and interpret these types of visually-dependent instructional materials. This…
Visual Hemispheric Specialization: A Computational Theory.
1985-10-31
representations. Presumably, the interpretation of these representations makes use of other modules that are also recruited in language processing.
Robertson, Frances
2013-09-01
This paper examines codes of representation in nineteenth century engineering in Britain in relation to broader visual culture. While engineering was promoted as a rational public enterprise through techniques of spectacular display, engineers who aimed to be taken seriously in the intellectual hierarchies of science had to negotiate suitable techniques for making and using images. These difficulties can be examined in the visual practices that mark the career of engineer David Kirkaldy. Beginning as a bravura naval draughtsman, Kirkaldy later negotiated his status as a serious experimenter in material testing science, changing his style of representation that at first sight seems to be in line with the 'objective' strategy in science of getting nature to represent herself. And although Kirkaldy maintained a range of visual styles to communicate with different audiences, making rhetorical use of several technologies of inscription, from hand drawing to photography, nevertheless, his work does in fact demonstrate new uses of the concept of objectivity in representation when up against the practices of engineering. While these might seem merely pragmatic in comparison to the ethical weight given to the discourse of objective representation in science, in the messy world of collapsing bridges and law suits, virtuous engineers had to develop various forms of visual knowledge as practical science. This was not 'applied science' but a differentiated form of enquiry whose complexities hold as much interest as the better known visual cultures of late nineteenth century science or art. Copyright © 2013 Elsevier Ltd. All rights reserved.
Lin, Zhicheng; He, Sheng
2012-01-01
Object identities (“what”) and their spatial locations (“where”) are processed in distinct pathways in the visual system, raising the question of how the what and where information is integrated. Because of object motions and eye movements, the retina-based representations are unstable, necessitating nonretinotopic representation and integration. A potential mechanism is to code and update objects according to their reference frames (i.e., frame-centered representation and integration). To isolate frame-centered processes, in a frame-to-frame apparent motion configuration, we (a) presented two preceding or trailing objects on the same frame, equidistant from the target on the other frame, to control for object-based (frame-based) effect and space-based effect, and (b) manipulated the target's relative location within its frame to probe frame-centered effect. We show that iconic memory, visual priming, and backward masking depend on objects' relative frame locations, orthogonal of the retinotopic coordinate. These findings not only reveal that iconic memory, visual priming, and backward masking can be nonretinotopic but also demonstrate that these processes are automatically constrained by contextual frames through a frame-centered mechanism. Thus, object representation is robustly and automatically coupled to its reference frame and continuously being updated through a frame-centered, location-specific mechanism. These findings lead to an object cabinet framework, in which objects (“files”) within the reference frame (“cabinet”) are orderly coded relative to the frame. PMID:23104817
Neural dynamics of image representation in the primary visual cortex
Yan, Xiaogang; Khambhati, Ankit; Liu, Lei; Lee, Tai Sing
2013-01-01
Horizontal connections in the primary visual cortex have been hypothesized to play a number of computational roles: association field for contour completion, surface interpolation, surround suppression, and saliency computation. Here, we argue that horizontal connections might also serve a critical role of computing the appropriate codes for image representation. That the early visual cortex or V1 explicitly represents the image we perceive has been a common assumption on computational theories of efficient coding (Olshausen and Field 1996), yet such a framework for understanding the circuitry in V1 has not been seriously entertained in the neurophysiological community. In fact, a number of recent fMRI and neurophysiological studies cast doubt on the neural validity of such an isomorphic representation (Cornelissen et al. 2006, von der Heydt et al. 2003). In this study, we investigated, neurophysiologically, how V1 neurons respond to uniform color surfaces and show that spiking activities of neurons can be decomposed into three components: a bottom-up feedforward input, an articulation of color tuning and a contextual modulation signal that is inversely proportional to the distance away from the bounding contrast border. We demonstrate through computational simulations that the behaviors of a model for image representation are consistent with many aspects of our neural observations. We conclude that the hypothesis of isomorphic representation of images in V1 remains viable and this hypothesis suggests an additional new interpretation of the functional roles of horizontal connections in the primary visual cortex. PMID:22944076
The Role of Eye Movement Driven Attention in Functional Strabismic Amblyopia
2015-01-01
Strabismic amblyopia “blunt vision” is a developmental anomaly that affects binocular vision and results in lowered visual acuity. Strabismus is a term for a misalignment of the visual axes and is usually characterized by impaired ability of the strabismic eye to take up fixation. Such impaired fixation is usually a function of the temporally and spatially impaired binocular eye movements that normally underlie binocular shifts in visual attention. In this review, we discuss how abnormal eye movement function in children with misaligned eyes influences the development of normal binocular visual attention and results in deficits in visual function such as depth perception. We also discuss how eye movement function deficits in adult amblyopia patients can also lead to other abnormalities in visual perception. Finally, we examine how the nonamblyopic eye of an amblyope is also affected in strabismic amblyopia. PMID:25838941
Three-dimensional visual feature representation in the primary visual cortex
Tanaka, Shigeru; Moon, Chan-Hong; Fukuda, Mitsuhiro; Kim, Seong-Gi
2011-01-01
In the cat primary visual cortex, it is accepted that neurons optimally responding to similar stimulus orientations are clustered in a column extending from the superficial to deep layers. The cerebral cortex is, however, folded inside a skull, which makes gyri and fundi. The primary visual area of cats, area 17, is located on the fold of the cortex called the lateral gyrus. These facts raise the question of how to reconcile the tangential arrangement of the orientation columns with the curvature of the gyrus. In the present study, we show a possible configuration of feature representation in the visual cortex using a three-dimensional (3D) self-organization model. We took into account preferred orientation, preferred direction, ocular dominance and retinotopy, assuming isotropic interaction. We performed computer simulation only in the middle layer at the beginning and expanded the range of simulation gradually to other layers, which was found to be a unique method in the present model for obtaining orientation columns spanning all the layers in the flat cortex. Vertical columns of preferred orientations were found in the flat parts of the model cortex. On the other hand, in the curved parts, preferred orientations were represented in wedge-like columns rather than straight columns, and preferred directions were frequently reversed in the deeper layers. Singularities associated with orientation representation appeared as warped lines in the 3D model cortex. Direction reversal appeared on the sheets that were delimited by orientation-singularity lines. These structures emerged from the balance between periodic arrangements of preferred orientations and vertical alignment of the same orientations. Our theoretical predictions about orientation representation were confirmed by multi-slice, high-resolution functional MRI in the cat visual cortex. We obtained a close agreement between theoretical predictions and experimental observations.
The present study casts doubt on the conventional columnar view of orientation representation, although more experimental data are needed. PMID:21724370
Hickey, Clayton; Peelen, Marius V
2017-08-02
Theories of reinforcement learning and approach behavior suggest that reward can increase the perceptual salience of environmental stimuli, ensuring that potential predictors of outcome are noticed in the future. However, outcome commonly follows visual processing of the environment, occurring even when potential reward cues have long disappeared. How can reward feedback retroactively cause now-absent stimuli to become attention-drawing in the future? One possibility is that reward and attention interact to prime lingering visual representations of attended stimuli that sustain through the interval separating stimulus and outcome. Here, we test this idea using multivariate pattern analysis of fMRI data collected from male and female humans. While in the scanner, participants searched for examples of target categories in briefly presented pictures of cityscapes and landscapes. Correct task performance was followed by reward feedback that could randomly have either high or low magnitude. Analysis showed that high-magnitude reward feedback boosted the lingering representation of target categories while reducing the representation of nontarget categories. The magnitude of this effect in each participant predicted the behavioral impact of reward on search performance in subsequent trials. Other analyses show that sensitivity to reward (as expressed in a personality questionnaire and in reactivity to reward feedback in the dopaminergic midbrain) predicted reward-elicited variance in lingering target and nontarget representations. Credit for rewarding outcome thus appears to be assigned to the target representation, causing the visual system to become sensitized for similar objects in the future. SIGNIFICANCE STATEMENT How do reward-predictive visual stimuli become salient and attention-drawing? In the real world, reward cues precede outcome and reward is commonly received long after potential predictors have disappeared.
How can the representation of environmental stimuli be affected by outcome that occurs later in time? Here, we show that reward acts on lingering representations of environmental stimuli that sustain through the interval between stimulus and outcome. Using naturalistic scene stimuli and multivariate pattern analysis of fMRI data, we show that reward boosts the representation of attended objects and reduces the representation of unattended objects. This interaction of attention and reward processing acts to prime vision for stimuli that may serve to predict outcome. Copyright © 2017 the authors 0270-6474/17/377297-08$15.00/0.
Memory as Perception of the Past: Compressed Time in Mind and Brain.
Howard, Marc W
2018-02-01
In the visual system retinal space is compressed such that acuity decreases further from the fovea. Different forms of memory may rely on a compressed representation of time, manifested as decreased accuracy for events that happened further in the past. Neurophysiologically, "time cells" show receptive fields in time. Analogous to the compression of visual space, time cells show less acuity for events further in the past. Behavioral evidence suggests memory can be accessed by scanning a compressed temporal representation, analogous to visual search. This suggests a common computational language for visual attention and memory retrieval. In this view, time functions like a scaffolding that organizes memories in much the same way that retinal space functions like a scaffolding for visual perception. Copyright © 2017 Elsevier Ltd. All rights reserved.
Harley, H E; Roitblat, H L; Nachtigall, P E
1996-04-01
A dolphin performed a 3-alternative matching-to-sample task in different modality conditions (visual/echoic, both vision and echolocation; visual, vision only; echoic, echolocation only). In Experiment 1, training occurred in the dual-modality (visual/echoic) condition. Choice accuracy in tests of all conditions was above chance without further training. In Experiment 2, unfamiliar objects with complementary similarity relations in vision and echolocation were presented in single-modality conditions until accuracy was about 70%. When tested in the visual/echoic condition, accuracy immediately rose (95%), suggesting integration across modalities. In Experiment 3, conditions varied between presentation of sample and alternatives. The dolphin successfully matched familiar objects in the cross-modal conditions. These data suggest that the dolphin has an object-based representational system.
Visual shape perception as Bayesian inference of 3D object-centered shape representations.
Erdogan, Goker; Jacobs, Robert A
2017-11-01
Despite decades of research, little is known about how people visually perceive object shape. We hypothesize that a promising approach to shape perception is provided by a "visual perception as Bayesian inference" framework which augments an emphasis on visual representation with an emphasis on the idea that shape perception is a form of statistical inference. Our hypothesis claims that shape perception of unfamiliar objects can be characterized as statistical inference of 3D shape in an object-centered coordinate system. We describe a computational model based on our theoretical framework, and provide evidence for the model along two lines. First, we show that, counterintuitively, the model accounts for viewpoint-dependency of object recognition, traditionally regarded as evidence against people's use of 3D object-centered shape representations. Second, we report the results of an experiment using a shape similarity task, and present an extensive evaluation of existing models' abilities to account for the experimental data. We find that our shape inference model captures subjects' behaviors better than competing models. Taken as a whole, our experimental and computational results illustrate the promise of our approach and suggest that people's shape representations of unfamiliar objects are probabilistic, 3D, and object-centered. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
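The "shape perception as Bayesian inference" idea described above can be illustrated with a toy posterior over a discrete set of candidate 3D shape hypotheses, scored by how well each explains a noisy observation. This is only a minimal sketch of the inference framing, not the paper's model; the shapes, feature vectors, and Gaussian noise likelihood are all invented for illustration.

```python
import numpy as np

def posterior_over_shapes(observed, renders, sigma=0.5, prior=None):
    """Toy Bayesian shape inference: score each candidate object-centered
    3D shape hypothesis by how well its projected feature vector explains
    the observation, under an assumed Gaussian noise likelihood.
    `renders` maps shape name -> feature vector (illustrative, not real images)."""
    names = list(renders)
    if prior is None:
        prior = {n: 1.0 / len(names) for n in names}  # uniform prior over shapes
    log_post = {}
    for n in names:
        err = np.sum((observed - renders[n]) ** 2)
        # log posterior ∝ log prior + Gaussian log likelihood
        log_post[n] = np.log(prior[n]) - err / (2 * sigma**2)
    # Normalize with log-sum-exp for numerical stability.
    mx = max(log_post.values())
    z = sum(np.exp(v - mx) for v in log_post.values())
    return {n: np.exp(v - mx) / z for n, v in log_post.items()}

# Toy usage: the observation matches the 'cube' hypothesis better than 'sphere'.
renders = {"cube": np.array([1.0, 0.0, 1.0]), "sphere": np.array([0.0, 1.0, 0.0])}
obs = np.array([0.9, 0.1, 1.1])
post = posterior_over_shapes(obs, renders)
print(max(post, key=post.get))  # cube
```

The same scoring scheme extends to viewpoint-dependent predictions by rendering each 3D hypothesis from the assumed viewpoint before comparison.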
Visualization of Morse connection graphs for topologically rich 2D vector fields.
Szymczak, Andrzej; Sipeki, Levente
2013-12-01
Recent advances in vector field topology make it possible to compute multi-scale graph representations of autonomous 2D vector fields in a robust and efficient manner. One of these representations is the Morse Connection Graph (MCG), a directed graph whose nodes correspond to Morse sets, generalizing stationary points and periodic trajectories, and whose arcs correspond to trajectories connecting them. While useful for simple vector fields, the MCG can be hard to comprehend for topologically rich vector fields containing a large number of features. This paper describes a visual representation of the MCG, inspired by previous work on graph visualization. Our approach aims to preserve the spatial relationships between the MCG arcs and nodes and highlight the coherent behavior of connecting trajectories. Using simulations of ocean flow, we show that it can provide useful information on the flow structure. This paper focuses specifically on MCGs computed for piecewise constant (PC) vector fields. In particular, we describe extensions of the PC framework that make it more flexible and better suited for analysis of data on complex shaped domains with a boundary. We also describe a topology simplification scheme that makes our MCG visualizations less ambiguous. Despite the focus on the PC framework, our approach could also be applied to graph representations or topological skeletons computed using different methods.
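The graph structure described above can be sketched as a small directed-graph data type: Morse sets as nodes, connecting trajectories as arcs. The class, node labels, and toy example below are illustrative assumptions, not the paper's actual data structures or data.

```python
# Minimal sketch of a Morse Connection Graph (MCG): a directed graph whose
# nodes are Morse sets (generalizing stationary points and periodic orbits)
# and whose arcs record trajectories flowing between them.
from collections import defaultdict

class MorseConnectionGraph:
    def __init__(self):
        self.nodes = {}               # node id -> kind of Morse set
        self.arcs = defaultdict(set)  # node id -> downstream node ids

    def add_morse_set(self, node_id, kind):
        """kind: e.g. 'source', 'sink', 'saddle', 'periodic-orbit'."""
        self.nodes[node_id] = kind

    def add_connection(self, src, dst):
        """Record a trajectory flowing from Morse set `src` to `dst`."""
        self.arcs[src].add(dst)

    def sinks(self):
        """Morse sets with no outgoing arcs (flow terminates there)."""
        return [n for n in self.nodes if not self.arcs[n]]

# Toy example: a source feeding a saddle whose separatrices reach two sinks.
mcg = MorseConnectionGraph()
mcg.add_morse_set("A", "source")
mcg.add_morse_set("S", "saddle")
mcg.add_morse_set("B", "sink")
mcg.add_morse_set("C", "sink")
mcg.add_connection("A", "S")
mcg.add_connection("S", "B")
mcg.add_connection("S", "C")
print(sorted(mcg.sinks()))  # ['B', 'C']
```

A visualization layer like the one the paper proposes would then place these nodes at the spatial locations of their Morse sets and route the arcs to reflect the connecting trajectories.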
Online Multi-Modal Robust Non-Negative Dictionary Learning for Visual Tracking
Zhang, Xiang; Guan, Naiyang; Tao, Dacheng; Qiu, Xiaogang; Luo, Zhigang
2015-01-01
Dictionary learning is a method of acquiring a collection of atoms for subsequent signal representation. Due to its excellent representation ability, dictionary learning has been widely applied in multimedia and computer vision. However, conventional dictionary learning algorithms fail to deal with multi-modal datasets. In this paper, we propose an online multi-modal robust non-negative dictionary learning (OMRNDL) algorithm to overcome this deficiency. Notably, OMRNDL casts visual tracking as a dictionary learning problem under the particle filter framework and captures the intrinsic knowledge about the target from multiple visual modalities, e.g., pixel intensity and texture information. To this end, OMRNDL adaptively learns an individual dictionary, i.e., template, for each modality from available frames, and then represents new particles over all the learned dictionaries by minimizing the fitting loss of data based on M-estimation. The resultant representation coefficient can be viewed as the common semantic representation of particles across multiple modalities, and can be utilized to track the target. OMRNDL incrementally learns the dictionary and the coefficient of each particle by using multiplicative update rules to respectively guarantee their non-negativity constraints. Experimental results on a popular challenging video benchmark validate the effectiveness of OMRNDL for visual tracking in both quantity and quality. PMID:25961715
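The non-negativity-preserving multiplicative update rules mentioned above follow the standard NMF scheme: each factor is multiplied by a non-negative ratio, so signs never flip. The sketch below shows that rule for a single modality, minimizing the plain Frobenius fitting loss ||X - DH||² — a deliberate simplification of OMRNDL's robust, multi-modal, online objective, with all names and dimensions assumed for illustration.

```python
import numpy as np

def multiplicative_nmf(X, k, iters=200, eps=1e-9, seed=0):
    """Factor a non-negative matrix X (features x samples) as D @ H,
    keeping the dictionary D (templates) and coefficients H non-negative
    via multiplicative update rules. A single-modality, batch sketch of
    the kind of update OMRNDL applies per modality."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    D = rng.random((m, k)) + eps  # dictionary atoms
    H = rng.random((k, n)) + eps  # representation coefficients
    for _ in range(iters):
        # Each update multiplies by a ratio of non-negative terms,
        # so D and H stay non-negative throughout.
        H *= (D.T @ X) / (D.T @ D @ H + eps)
        D *= (X @ H.T) / (D @ H @ H.T + eps)
    return D, H

# Toy usage: approximate a random non-negative matrix with 5 atoms.
X = np.abs(np.random.default_rng(1).random((20, 30)))
D, H = multiplicative_nmf(X, k=5)
```

In the tracking setting described above, columns of X would be particle observations for one modality, and the coefficient matrix H (shared across modalities in OMRNDL) serves as the common semantic representation used to score particles.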
Students' Development of Representational Competence Through the Sense of Touch
NASA Astrophysics Data System (ADS)
Magana, Alejandra J.; Balachandran, Sadhana
2017-06-01
Electromagnetism is an umbrella term encompassing several different concepts, such as electric current, electric fields and forces, and magnetic fields and forces, among other topics. However, a number of past studies have highlighted students' poor conceptual understanding of electromagnetism concepts even after instruction. This study aims to identify novel forms of "hands-on" instruction that can result in representational competence and conceptual gain. Specifically, it examines whether the use of visuohaptic simulations can affect student representations of electromagnetism-related concepts. The guiding question is: How do visuohaptic simulations influence undergraduate students' representations of electric forces? Participants included nine undergraduate students from science, technology, or engineering backgrounds who participated in a think-aloud procedure while interacting with a visuohaptic simulation. The think-aloud procedure was divided into three stages: a prediction stage, a minimally visual haptic stage, and a visually enhanced haptic stage. The results suggest that students accurately characterized and represented the forces felt around particle, line, and ring charges in either the prediction stage, the minimally visual haptic stage, or the visually enhanced haptic stage. Also, some students accurately depicted the three-dimensional nature of the field for each configuration in the two stages that included a tactile mode, with the point charge proving the most challenging configuration.
Critical Visual Literacy: The New Phase of Applied Linguistics in the Era of Mobile Technology
ERIC Educational Resources Information Center
Dos Santos Costa, Giselda; Xavier, Antonio Carlos
2016-01-01
In our society, which is full of images, visual representations and visual experiences of all kinds, there is a paradoxically significant degree of visual illiteracy. Despite the importance of developing specific visual skills, visual literacy is not a priority in school curriculum (Spalter & van Dam, 2008). This work aims at (1) emphasising…
Transformed Neural Pattern Reinstatement during Episodic Memory Retrieval.
Xiao, Xiaoqian; Dong, Qi; Gao, Jiahong; Men, Weiwei; Poldrack, Russell A; Xue, Gui
2017-03-15
Contemporary models of episodic memory posit that remembering involves the reenactment of encoding processes. Although encoding-retrieval similarity has been consistently reported and linked to memory success, the nature of neural pattern reinstatement is poorly understood. Using high-resolution fMRI on human subjects, we obtained clear evidence for item-specific pattern reinstatement in the frontoparietal cortex, even when the encoding-retrieval pairs shared no perceptual similarity. No item-specific pattern reinstatement was found in the ventral visual cortex. Importantly, the brain regions and voxels carrying item-specific representation differed significantly between encoding and retrieval, and the item specificity for encoding-retrieval similarity was smaller than that for encoding or retrieval, suggesting that the representations differ in nature between encoding and retrieval. Moreover, cross-region representational similarity analysis suggests that the encoded representation in the ventral visual cortex was reinstated in the frontoparietal cortex during retrieval. Together, these results suggest that, in addition to reinstatement of the originally encoded pattern in the brain regions that perform encoding processes, retrieval may also involve the reinstatement of a transformed representation of the encoded information. These results emphasize the constructive nature of memory retrieval, which serves important adaptive functions. SIGNIFICANCE STATEMENT Episodic memory enables humans to vividly reexperience past events, yet how this is achieved at the neural level is barely understood. A long-standing hypothesis posits that memory retrieval involves the faithful reinstatement of encoding-related activity. We tested this hypothesis by comparing the neural representations during encoding and retrieval. We found strong pattern reinstatement in the frontoparietal cortex, but not in the ventral visual cortex, that represents visual details. 
Critically, even within the same brain regions, the nature of representation during retrieval was qualitatively different from that during encoding. These results suggest that memory retrieval is not a faithful replay of past event but rather involves additional constructive processes to serve adaptive functions. Copyright © 2017 the authors 0270-6474/17/372986-13$15.00/0.
Exploring the Phase Space of a System of Differential Equations: Different Mathematical Registers
ERIC Educational Resources Information Center
Dana-Picard, Thierry; Kidron, Ivy
2008-01-01
We describe and analyze a situation involving symbolic representation and graphical visualization of the solution of a system of two linear differential equations, using a computer algebra system. Symbolic solution and graphical representation complement each other. Graphical representation helps to understand the behavior of the symbolic…
Think Spatial: The Representation in Mental Rotation Is Nonvisual
ERIC Educational Resources Information Center
Liesefeld, Heinrich R.; Zimmer, Hubert D.
2013-01-01
For mental rotation, introspection, theories, and interpretations of experimental results imply a certain type of mental representation, namely, visual mental images. Characteristics of the rotated representation can be examined by measuring the influence of stimulus characteristics on rotational speed. If the amount of a given type of information…
Marino, Alexandria C.; Mazer, James A.
2016-01-01
During natural vision, saccadic eye movements lead to frequent retinal image changes that result in different neuronal subpopulations representing the same visual feature across fixations. Despite these potentially disruptive changes to the neural representation, our visual percept is remarkably stable. Visual receptive field remapping, characterized as an anticipatory shift in the position of a neuron’s spatial receptive field immediately before saccades, has been proposed as one possible neural substrate for visual stability. Many of the specific properties of remapping, e.g., the exact direction of remapping relative to the saccade vector and the precise mechanisms by which remapping could instantiate stability, remain a matter of debate. Recent studies have also shown that visual attention, like perception itself, can be sustained across saccades, suggesting that the attentional control system can also compensate for eye movements. Classical remapping could have an attentional component, or there could be a distinct attentional analog of visual remapping. At this time we do not yet fully understand how the stability of attentional representations relates to perisaccadic receptive field shifts. In this review, we develop a vocabulary for discussing perisaccadic shifts in receptive field location and perisaccadic shifts of attentional focus, review and synthesize behavioral and neurophysiological studies of perisaccadic perception and perisaccadic attention, and identify open questions that remain to be experimentally addressed. PMID:26903820
Takahata, Keisuke; Saito, Fumie; Muramatsu, Taro; Yamada, Makiko; Shirahase, Joichiro; Tabuchi, Hajime; Suhara, Tetsuya; Mimura, Masaru; Kato, Motoichiro
2014-05-01
Over the last two decades, evidence of enhancement of drawing and painting skills due to focal prefrontal damage has accumulated. It is of special interest that most artworks created by such patients were highly realistic ones, but the mechanism underlying this phenomenon remains to be understood. Our hypothesis is that enhanced tendency of realism was associated with accuracy of visual numerosity representation, which has been shown to be mediated predominantly by right parietal functions. Here, we report a case of left prefrontal stroke, where the patient showed enhancement of artistic skills of realistic painting after the onset of brain damage. We investigated cognitive, functional and esthetic characteristics of the patient's visual artistry and visual numerosity representation. Neuropsychological tests revealed impaired executive function after the stroke. Despite that, the patient's visual artistry related to realism was rather promoted across the onset of brain damage as demonstrated by blind evaluation of the paintings by professional art reviewers. On visual numerical cognition tasks, the patient showed higher performance in comparison with age-matched healthy controls. These results paralleled increased perfusion in the right parietal cortex including the precuneus and intraparietal sulcus. Our data provide new insight into mechanisms underlying change in artistic style due to focal prefrontal lesion. Copyright © 2014 Elsevier Ltd. All rights reserved.
Revisioning fat lesbian subjects in contemporary lesbian periodicals.
Snider, Stefanie
2010-01-01
It is difficult to find a visual representation of any fat individual, let alone a queer woman, that is not denigrating and oppressive in conventional media outlets and contemporary visual culture. But even as the negative imagery of fat individuals has expanded over the past forty years in mainstream distribution channels, fat-positive imagery has come to the fore within many feminist and lesbian publications during this same time frame. This article looks at the strategies of representation taken by three contemporary United States lesbian feminist periodicals in visualizing fat and lesbian women within their pages since the 1980s.
Stephens, Robert P
2011-01-01
Addiction films have been shaped by the internal demands of a commercial medium. Specifically, melodrama, as a genre, has defined the limits of the visual representation of addiction. Similarly, the process of intermedialization has tended to induce a metamorphosis that shapes disparate narratives with diverse goals into a generic filmic form and substantially alters the meanings of the texts. Ultimately, visual representations shape public perceptions of addiction in meaningful ways, privileging a moralistic understanding of drug addiction that makes a complex issue visually uncomplicated by reinforcing "common sense" ideas of moral failure and redemption. Copyright © 2011 Informa Healthcare USA, Inc.
Joint representation of translational and rotational components of optic flow in parietal cortex
Sunkara, Adhira; DeAngelis, Gregory C.; Angelaki, Dora E.
2016-01-01
Terrestrial navigation naturally involves translations within the horizontal plane and eye rotations about a vertical (yaw) axis to track and fixate targets of interest. Neurons in the macaque ventral intraparietal (VIP) area are known to represent heading (the direction of self-translation) from optic flow in a manner that is tolerant to rotational visual cues generated during pursuit eye movements. Previous studies have also reported that eye rotations modulate the response gain of heading tuning curves in VIP neurons. We tested the hypothesis that VIP neurons simultaneously represent both heading and horizontal (yaw) eye rotation velocity by measuring heading tuning curves for a range of rotational velocities of either real or simulated eye movements. Three findings support the hypothesis of a joint representation. First, we show that rotation velocity selectivity based on gain modulations of visual heading tuning is similar to that measured during pure rotations. Second, gain modulations of heading tuning are similar for self-generated eye rotations and visually simulated rotations, indicating that the representation of rotation velocity in VIP is multimodal, driven by both visual and extraretinal signals. Third, we show that roughly one-half of VIP neurons jointly represent heading and rotation velocity in a multiplicatively separable manner. These results provide the first evidence, to our knowledge, for a joint representation of translation direction and rotation velocity in parietal cortex and show that rotation velocity can be represented based on visual cues, even in the absence of efference copy signals. PMID:27095846
Pasqualotto, Achille; Esenkaya, Tayfun
2016-01-01
Visual-to-auditory sensory substitution is used to convey visual information through audition, and it was initially created to compensate for blindness; it consists of software converting the visual images captured by a video-camera into the equivalent auditory images, or "soundscapes". Here, it was used by blindfolded sighted participants to learn the spatial position of simple shapes depicted in images arranged on the floor. Very few studies have used sensory substitution to investigate spatial representation, while it has been widely used to investigate object recognition. Additionally, with sensory substitution we could study the performance of participants actively exploring the environment through audition, rather than passively localizing sound sources. Blindfolded participants egocentrically learnt the position of six images by using sensory substitution and then a judgment of relative direction task (JRD) was used to determine how this scene was represented. This task consists of imagining being in a given location, oriented in a given direction, and pointing towards the required image. Before performing the JRD task, participants explored a map that provided allocentric information about the scene. Although spatial exploration was egocentric, surprisingly we found that performance in the JRD task was better for allocentric perspectives. This suggests that the egocentric representation of the scene was updated. This result is in line with previous studies using visual and somatosensory scenes, thus supporting the notion that different sensory modalities produce equivalent spatial representation(s). Moreover, our results have practical implications to improve training methods with sensory substitution devices (SSD).
The Rise and Fall of Priming: How Visual Exposure Shapes Cortical Representations of Objects
Zago, Laure; Fenske, Mark J.; Aminoff, Elissa; Bar, Moshe
2006-01-01
How does the amount of time for which we see an object influence the nature and content of its cortical representation? To address this question, we varied the duration of initial exposure to visual objects and then measured functional magnetic resonance imaging (fMRI) signal and behavioral performance during a subsequent repeated presentation of these objects. We report a novel ‘rise-and-fall’ pattern relating exposure duration and the corresponding magnitude of fMRI cortical signal. Compared with novel objects, repeated objects elicited maximal cortical response reduction when initially presented for 250 ms. Counter-intuitively, initially seeing an object for a longer duration significantly reduced the magnitude of this effect. This ‘rise-and-fall’ pattern was also evident for the corresponding behavioral priming. To account for these findings, we propose that the earlier interval of an exposure to a visual stimulus results in a fine-tuning of the cortical response, while additional exposure promotes selection of a subset of key features for continued representation. These two independent mechanisms complement each other in shaping object representations with experience. PMID:15716471
Common Neural Representations for Visually Guided Reorientation and Spatial Imagery
Vass, Lindsay K.; Epstein, Russell A.
2017-01-01
Spatial knowledge about an environment can be cued from memory by perception of a visual scene during active navigation or by imagination of the relationships between nonvisible landmarks, such as when providing directions. It is not known whether these different ways of accessing spatial knowledge elicit the same representations in the brain. To address this issue, we scanned participants with fMRI, while they performed a judgment of relative direction (JRD) task that required them to retrieve real-world spatial relationships in response to either pictorial or verbal cues. Multivoxel pattern analyses revealed several brain regions that exhibited representations that were independent of the cues to access spatial memory. Specifically, entorhinal cortex (ERC) in the medial temporal lobe and the retrosplenial complex (RSC) in the medial parietal lobe coded for the heading assumed on a particular trial, whereas the parahippocampal place area (PPA) contained information about the starting location of the JRD. These results demonstrate the existence of spatial representations in RSC, ERC, and PPA that are common to visually guided navigation and spatial imagery. PMID:26759482
Hemifield columns co-opt ocular dominance column structure in human achiasma.
Olman, Cheryl A; Bao, Pinglei; Engel, Stephen A; Grant, Andrea N; Purington, Chris; Qiu, Cheng; Schallmo, Michael-Paul; Tjan, Bosco S
2018-01-01
In the absence of an optic chiasm, visual input to the right eye is represented in primary visual cortex (V1) in the right hemisphere, while visual input to the left eye activates V1 in the left hemisphere. Retinotopic mapping in V1 reveals that in each hemisphere left and right visual hemifield representations are overlaid (Hoffmann et al., 2012). To explain how overlapping hemifield representations in V1 do not impair vision, we tested the hypothesis that visual projections from nasal and temporal retina create interdigitated left and right visual hemifield representations in V1, similar to the ocular dominance columns observed in neurotypical subjects (Victor et al., 2000). We used high-resolution fMRI at 7T to measure the spatial distribution of responses to left- and right-hemifield stimulation in one achiasmic subject. T2-weighted 2D Spin Echo images were acquired at 0.8 mm isotropic resolution. The left eye was occluded. To the right eye, a presentation of flickering checkerboards alternated between the left and right visual fields in a blocked stimulus design. The participant performed a demanding orientation-discrimination task at fixation. A general linear model was used to estimate the preference of voxels in V1 to left- and right-hemifield stimulation. The spatial distribution of voxels with significant preference for each hemifield showed interdigitated clusters which densely packed V1 in the right hemisphere. The spatial distribution of hemifield-preference voxels in the achiasmic subject was stable between two days of testing and comparable in scale to that of human ocular dominance columns. These results are the first in vivo evidence showing that visual hemifield representations interdigitate in achiasmic V1 following a similar developmental course to that of ocular dominance columns in V1 with an intact optic chiasm. Copyright © 2017 Elsevier Inc. All rights reserved.
Limanowski, Jakub; Blankenburg, Felix
2016-03-02
The brain constructs a flexible representation of the body from multisensory information. Previous work on monkeys suggests that the posterior parietal cortex (PPC) and ventral premotor cortex (PMv) represent the position of the upper limbs based on visual and proprioceptive information. Human experiments on the rubber hand illusion implicate similar regions, but since such experiments rely on additional visuo-tactile interactions, they cannot isolate visuo-proprioceptive integration. Here, we independently manipulated the position (palm or back facing) of passive human participants' unseen arm and of a photorealistic virtual 3D arm. Functional magnetic resonance imaging (fMRI) revealed that matching visual and proprioceptive information about arm position engaged the PPC, PMv, and the body-selective extrastriate body area (EBA); activity in the PMv moreover reflected interindividual differences in congruent arm ownership. Further, the PPC, PMv, and EBA increased their coupling with the primary visual cortex during congruent visuo-proprioceptive position information. These results suggest that human PPC, PMv, and EBA evaluate visual and proprioceptive position information and, under sufficient cross-modal congruence, integrate it into a multisensory representation of the upper limb in space. The position of our limbs in space constantly changes, yet the brain manages to represent limb position accurately by combining information from vision and proprioception. Electrophysiological recordings in monkeys have revealed neurons in the posterior parietal and premotor cortices that seem to implement and update such a multisensory limb representation, but this has been difficult to demonstrate in humans. 
Our fMRI experiment shows that human posterior parietal, premotor, and body-selective visual brain areas respond preferentially to a virtual arm seen in a position corresponding to one's unseen hidden arm, while increasing their communication with regions conveying visual information. These brain areas thus likely integrate visual and proprioceptive information into a flexible multisensory body representation. Copyright © 2016 the authors 0270-6474/16/362582-08$15.00/0.
Scheltema, Emma; Reay, Stephen; Piper, Greg
2018-01-01
This practice-led research project explored visual representation through illustrations designed to communicate often complex medical information for different users within Auckland City Hospital, New Zealand. Media and tools were manipulated to affect varying degrees of naturalism or abstraction from reality in the creation of illustrations for a variety of real-life clinical projects, and user feedback on illustration preference was gathered from both medical professionals and patients. While all users preferred the most realistic representations of medical information from the illustrations presented, patients often favoured illustrations that depicted a greater amount of information than professionals suggested was necessary.
Raffone, Antonino; Srinivasan, Narayanan; van Leeuwen, Cees
2014-01-01
Despite the acknowledged relationship between consciousness and attention, theories of the two have mostly been developed separately. Moreover, these theories have independently attempted to explain phenomena in which both are likely to interact, such as the attentional blink (AB) and working memory (WM) consolidation. Here, we make an effort to bridge the gap between, on the one hand, a theory of consciousness based on the notion of global workspace (GW) and, on the other, a synthesis of theories of visual attention. We offer a theory of attention and consciousness (TAC) that provides a unified neurocognitive account of several phenomena associated with visual search, AB and WM consolidation. TAC assumes multiple processing stages between early visual representation and conscious access, and extends the dynamics of the global neuronal workspace model to a visual attentional workspace (VAW). The VAW is controlled by executive routers, higher-order representations of executive operations in the GW, without the need for explicit saliency or priority maps. TAC leads to newly proposed mechanisms for illusory conjunctions, AB, inattentional blindness and WM capacity, and suggests neural correlates of phenomenal consciousness. Finally, the theory reconciles the all-or-none and graded perspectives on conscious representation. PMID:24639586
Filling gaps in visual motion for target capture
Bosco, Gianfranco; Delle Monache, Sergio; Gravano, Silvio; Indovina, Iole; La Scaleia, Barbara; Maffei, Vincenzo; Zago, Myrka; Lacquaniti, Francesco
2015-01-01
A remarkable challenge our brain must face constantly when interacting with the environment is represented by ambiguous and, at times, even missing sensory information. This is particularly compelling for visual information, being the main sensory system we rely upon to gather cues about the external world. It is not uncommon, for example, that objects catching our attention may disappear temporarily from view, occluded by visual obstacles in the foreground. Nevertheless, we are often able to keep our gaze on them throughout the occlusion or even catch them on the fly in the face of the transient lack of visual motion information. This implies that the brain can fill the gaps of missing sensory information by extrapolating the object motion through the occlusion. In recent years, much experimental evidence has been accumulated that both perceptual and motor processes exploit visual motion extrapolation mechanisms. Moreover, neurophysiological and neuroimaging studies have identified brain regions potentially involved in the predictive representation of the occluded target motion. Within this framework, ocular pursuit and manual interceptive behavior have proven to be useful experimental models for investigating visual extrapolation mechanisms. Studies in these fields have pointed out that visual motion extrapolation processes depend on manifold information related to short-term memory representations of the target motion before the occlusion, as well as to longer term representations derived from previous experience with the environment. We will review recent oculomotor and manual interception literature to provide up-to-date views on the neurophysiological underpinnings of visual motion extrapolation. PMID:25755637
Shift in speed selectivity of visual cortical neurons: A neural basis of perceived motion contrast
Li, Chao-Yi; Lei, Jing-Jiang; Yao, Hai-Shan
1999-01-01
The perceived speed of motion in one part of the visual field is influenced by the speed of motion in its surrounding fields. Little is known about the cellular mechanisms causing this phenomenon. Recordings from mammalian visual cortex revealed that speed preference of the cortical cells could be changed by displaying a contrast speed in the field surrounding the cell’s classical receptive field. The neuron’s selectivity shifted to prefer faster speed if the contextual surround motion was set at a relatively lower speed, and vice versa. These specific center–surround interactions may underlie the perceptual enhancement of speed contrast between adjacent fields. PMID:10097161
How Chinese Semantics Capability Improves Interpretation in Visual Communication
ERIC Educational Resources Information Center
Cheng, Chu-Yu; Ou, Yang-Kun; Kin, Ching-Lung
2017-01-01
A visual representation involves delivering messages through visually communicated images. The study assumed that semantic recognition can affect visual interpretation ability, and the results showed that students graduating from a general high school achieved more satisfactory results in semantic recognition and image interpretation tasks than students…
Maynard, Ashley E; Greenfield, Patricia M; Childs, Carla P
2015-02-01
We studied the implications of social change for cognitive development in a Maya community in Chiapas, Mexico, over 43 years. The same procedures were used to collect data in 1969-1970, 1991, and 2012, once in each generation. The goal was to understand the implications of weaving, schooling and participation in a commercial economy for the development of visual pattern representation. In 2012, our participants consisted of 133 boys and girls descended from participants in the prior two generations. Procedures consisted of placing colored sticks in a wooden frame to make striped patterns, some familiar (Zinacantec woven patterns) and some novel (created by the investigators). Following Greenfield (2009), we hypothesised that the development of commerce and the expansion of formal schooling would influence children's representations. Her theory postulates that these factors move human development towards cognitive abstraction and skill in dealing with novelty. Furthermore, the theory posits that whatever sociodemographic variable is changing most rapidly functions as the primary motor for developmental change. From 1969 to 1991, the rapid development of a commercial economy drove visual representation in the hypothesised directions. From 1991 to 2012, the rapid expansion of schooling drove visual representation in the hypothesised directions. © 2015 International Union of Psychological Science.
Gravity Influences the Visual Representation of Object Tilt in Parietal Cortex
Angelaki, Dora E.
2014-01-01
Sensory systems encode the environment in egocentric (e.g., eye, head, or body) reference frames, creating inherently unstable representations that shift and rotate as we move. However, it is widely speculated that the brain transforms these signals into an allocentric, gravity-centered representation of the world that is stable and independent of the observer's spatial pose. Where and how this representation may be achieved is currently unknown. Here we demonstrate that a subpopulation of neurons in the macaque caudal intraparietal area (CIP) visually encodes object tilt in nonegocentric coordinates defined relative to the gravitational vector. Neuronal responses to the tilt of a visually presented planar surface were measured with the monkey in different spatial orientations (upright and rolled left/right ear down) and then compared. This revealed a continuum of representations in which planar tilt was encoded in a gravity-centered reference frame in approximately one-tenth of the comparisons, intermediate reference frames ranging between gravity-centered and egocentric in approximately two-tenths of the comparisons, and in an egocentric reference frame in less than half of the comparisons. Altogether, almost half of the comparisons revealed a shift in the preferred tilt and/or a gain change consistent with encoding object orientation in nonegocentric coordinates. Through neural network modeling, we further show that a purely gravity-centered representation of object tilt can be achieved directly from the population activity of CIP-like units. These results suggest that area CIP may play a key role in creating a stable, allocentric representation of the environment defined relative to an “earth-vertical” direction. PMID:25339732
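The paper's modeling claim — that a gravity-centered representation of tilt can be read out linearly from the activity of mixed-frame, CIP-like units — can be illustrated with a toy simulation. The tuning model, unit count, tuning sharpness, and roll angles below are illustrative assumptions, not the authors' model; only the idea of a continuum of reference frames and a linear population readout comes from the abstract.

```python
import numpy as np

rng = np.random.default_rng(2)

# Population of CIP-like units. Each unit prefers some planar tilt,
# and its tuning shifts with body roll by a unit-specific fraction f:
# f = 0 is purely egocentric, f = 1 is purely gravity-centered, and
# intermediate f mirrors the continuum reported in the paper.
n_units = 200
pref = rng.uniform(0, 2 * np.pi, n_units)
frame = rng.uniform(0, 1, n_units)   # egocentric <-> gravity-centered
kappa = 2.0                          # tuning sharpness (assumed)

def population(tilt_eye, roll):
    # von Mises-like tuning around each unit's preferred tilt.
    return np.exp(kappa * np.cos(tilt_eye + frame * roll - pref))

# Stimuli: tilts presented under three body orientations, loosely
# matching the upright and left/right ear-down conditions.
tilts = np.linspace(0, 2 * np.pi, 40, endpoint=False)
rolls = np.array([0.0, -np.pi / 6, np.pi / 6])
X, Y = [], []
for r in rolls:
    for t in tilts:
        X.append(population(t, r))
        g = t + r                    # gravity-centered tilt
        Y.append([np.sin(g), np.cos(g)])
X, Y = np.array(X), np.array(Y)

# Purely linear readout of the population, fit by least squares.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)
pred = X @ W
err = np.abs(np.angle(np.exp(1j * (np.arctan2(pred[:, 0], pred[:, 1])
                                   - np.arctan2(Y[:, 0], Y[:, 1])))))
print("mean decoding error (rad): %.3f" % err.mean())
```

The readout recovers gravity-centered tilt even though no single unit encodes it exclusively, which is the gist of the population-decoding argument.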
Three-Dimensional Display Of Document Set
Lantrip, David B.; Pennock, Kelly A.; Pottier, Marc C.; Schur, Anne; Thomas, James J.; Wise, James A.
2003-06-24
A method for spatializing text content for enhanced visual browsing and analysis. The invention is applied to large text document corpora such as digital libraries, regulations and procedures, archived reports, and the like. The text content from these sources may be transformed to a spatial representation that preserves informational characteristics from the documents. The three-dimensional representation may then be visually browsed and analyzed in ways that avoid language processing and that reduce the analysts' effort.
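The patent abstract does not specify the spatialization algorithm. A common approach, shown here as a hypothetical sketch rather than the patented method, is to weight a term-document matrix by TF-IDF and project it onto three latent dimensions with a truncated SVD, yielding one browsable 3D coordinate per document.

```python
import numpy as np

# Toy corpus standing in for a large document set.
docs = [
    "digital library archive report",
    "digital library catalog index",
    "safety regulation procedure report",
    "safety procedure compliance regulation",
]

# Term-document count matrix over the corpus vocabulary.
vocab = sorted({w for d in docs for w in d.split()})
counts = np.array([[d.split().count(w) for w in vocab] for d in docs], float)

# TF-IDF weighting: downweight terms appearing in many documents.
df = (counts > 0).sum(axis=0)
tfidf = counts * np.log(len(docs) / df)

# Truncated SVD keeps the 3 strongest latent dimensions, giving a
# 3D coordinate per document that preserves informational structure.
u, s, vt = np.linalg.svd(tfidf, full_matrices=False)
coords = u[:, :3] * s[:3]

print(coords.shape)  # one 3D point per document
```

Documents with similar term profiles land near each other in the resulting space, so browsing proximity stands in for topical similarity without any language processing at view time.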
Three-dimensional display of document set
Lantrip, David B [Oxnard, CA; Pennock, Kelly A [Richland, WA; Pottier, Marc C [Richland, WA; Schur, Anne [Richland, WA; Thomas, James J [Richland, WA; Wise, James A [Richland, WA
2006-09-26
A method for spatializing text content for enhanced visual browsing and analysis. The invention is applied to large text document corpora such as digital libraries, regulations and procedures, archived reports, and the like. The text content from these sources may be transformed to a spatial representation that preserves informational characteristics from the documents. The three-dimensional representation may then be visually browsed and analyzed in ways that avoid language processing and that reduce the analysts' effort.
Three-dimensional display of document set
Lantrip, David B [Oxnard, CA; Pennock, Kelly A [Richland, WA; Pottier, Marc C [Richland, WA; Schur, Anne [Richland, WA; Thomas, James J [Richland, WA; Wise, James A [Richland, WA
2001-10-02
A method for spatializing text content for enhanced visual browsing and analysis. The invention is applied to large text document corpora such as digital libraries, regulations and procedures, archived reports, and the like. The text content from these sources may be transformed to a spatial representation that preserves informational characteristics from the documents. The three-dimensional representation may then be visually browsed and analyzed in ways that avoid language processing and that reduce the analysts' effort.
Three-dimensional display of document set
Lantrip, David B [Oxnard, CA; Pennock, Kelly A [Richland, WA; Pottier, Marc C [Richland, WA; Schur, Anne [Richland, WA; Thomas, James J [Richland, WA; Wise, James A [Richland, WA; York, Jeremy [Bothell, WA
2009-06-30
A method for spatializing text content for enhanced visual browsing and analysis. The invention is applied to large text document corpora such as digital libraries, regulations and procedures, archived reports, and the like. The text content from these sources may be transformed to a spatial representation that preserves informational characteristics from the documents. The three-dimensional representation may then be visually browsed and analyzed in ways that avoid language processing and that reduce the analysts' effort.
[Visual representation of biological structures in teaching material].
Morato, M A; Struchiner, M; Bordoni, E; Ricciardi, R M
1998-01-01
Parameters must be defined for presenting and handling scientific information presented in the form of teaching materials. Through library research and consultations with specialists in the health sciences and in graphic arts and design, this study undertook a comparative description of the first examples of scientific illustrations of anatomy and the evolution of visual representations of knowledge on the cell. The study includes significant examples of illustrations which served as elements of analysis.
The Influence of Similarity on Visual Working Memory Representations
Lin, Po-Han; Luck, Steven J.
2007-01-01
In verbal memory, similarity between items in memory often leads to interference and impaired memory performance. The present study sought to determine whether analogous interference effects would be observed in visual working memory by varying the similarity of the to-be-remembered objects in a color change-detection task. Instead of leading to interference and impaired performance, increased similarity among the items being held in memory led to improved performance. Moreover, when two similar colors were presented along with one dissimilar color, memory performance was better for the similar colors than for the dissimilar color. Similarity produced better performance even when the objects were presented sequentially and even when memory for the first item in the sequence was tested. These findings show that similarity does not lead to interference between representations in visual working memory. Instead, similarity may lead to improved task performance, possibly due to increased stability or precision of the memory representations during maintenance. PMID:19430536
Accessing long-term memory representations during visual change detection.
Beck, Melissa R; van Lamsweerde, Amanda E
2011-04-01
In visual change detection tasks, providing a cue to the change location concurrent with the test image (post-cue) can improve performance, suggesting that, without a cue, not all encoded representations are automatically accessed. Our studies examined the possibility that post-cues can encourage the retrieval of representations stored in long-term memory (LTM). Participants detected changes in images composed of familiar objects. Performance was better when the cue directed attention to the post-change object. Supporting the role of LTM in the cue effect, the effect was similar regardless of whether the cue was presented during the inter-stimulus interval, concurrent with the onset of the test image, or after the onset of the test image. Furthermore, the post-cue effect and LTM performance were similarly influenced by encoding time. These findings demonstrate that monitoring the visual world for changes does not automatically engage LTM retrieval.
Time perception of visual motion is tuned by the motor representation of human actions
Gavazzi, Gioele; Bisio, Ambra; Pozzo, Thierry
2013-01-01
Several studies have shown that the observation of a rapidly moving stimulus dilates our perception of time. However, this effect appears to be at odds with the fact that our interactions both with environment and with each other are temporally accurate. This work exploits this paradox to investigate whether the temporal accuracy of visual motion uses motor representations of actions. To this aim, the stimuli were a dot moving with kinematics belonging or not to the human motor repertoire and displayed at different velocities. Participants had to replicate its duration with two tasks differing in the underlying motor plan. Results show that independently of the task's motor plan, the temporal accuracy and precision depend on the correspondence between the stimulus' kinematics and the observer's motor competencies. Our data suggest that the temporal mechanism of visual motion exploits a temporal visuomotor representation tuned by the motor knowledge of human actions. PMID:23378903
Grammatical Gender and Mental Representation of Object: The Case of Musical Instruments.
Vuksanović, Jasmina; Bjekić, Jovana; Radivojević, Natalija
2015-08-01
A body of research shows that grammatical gender, although an arbitrary category, is viewed as a system with its own meaning. However, the question remains to what extent grammatical gender influences shaping our notions about objects when both verbal and visual information are available. Two experiments were conducted. The results obtained in Experiment 1 showed that grammatical gender as a linguistic property of the pseudo-nouns used as names for musical instruments significantly affects people's representations of these instruments. The purpose of Experiment 2 was to examine how the representation of musical instruments is shaped in the presence of both language and visual information. The results indicate that when linguistic and visual information co-exist, concepts about the selected instruments are formed from all available information from both sources, suggesting that grammatical gender influences the formation of nonverbal concepts but has no privileged status in the matter.
Deep learning of orthographic representations in baboons.
Hannagan, Thomas; Ziegler, Johannes C; Dufau, Stéphane; Fagot, Joël; Grainger, Jonathan
2014-01-01
What is the origin of our ability to learn orthographic knowledge? We use deep convolutional networks to emulate the primate's ventral visual stream and explore the recent finding that baboons can be trained to discriminate English words from nonwords. The networks were exposed to the exact same sequence of stimuli and reinforcement signals as the baboons in the experiment, and learned to map real visual inputs (pixels) of letter strings onto binary word/nonword responses. We show that the networks' highest levels of representations were indeed sensitive to letter combinations as postulated in our previous research. The model also captured the key empirical findings, such as generalization to novel words, along with some intriguing inter-individual differences. The present work shows the merits of deep learning networks that can simulate the whole processing chain all the way from the visual input to the response while allowing researchers to analyze the complex representations that emerge during the learning process.
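The study trained deep convolutional networks on pixel inputs; as a much-simplified stand-in, the sketch below shows how letter-combination (bigram) features alone can support word/nonword discrimination with trial-by-trial feedback, loosely analogous to the reinforcement the baboons received. The word lists and the perceptron are illustrative assumptions, not the paper's stimuli or model.

```python
import numpy as np

words = ["them", "then", "that", "this", "than", "kite", "done", "vast"]
nonwords = ["txhm", "qzpn", "hjqt", "xrls", "bqnz", "ktqx", "dzkq", "vxqt"]

# Bigram indicator features: a crude stand-in for the letter-combination
# detectors that emerged in the networks' highest layers.
bigrams = sorted({s[i:i + 2] for s in words + nonwords
                  for i in range(len(s) - 1)})

def features(s):
    return np.array([any(s[i:i + 2] == b for i in range(len(s) - 1))
                     for b in bigrams], float)

X = np.array([features(s) for s in words + nonwords])
y = np.array([1] * len(words) + [0] * len(nonwords))

# Online perceptron updated after every trial, like the baboons'
# trial-by-trial reinforcement signal.
w = np.zeros(X.shape[1])
b = 0.0
for _ in range(20):
    for xi, yi in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0
        w += (yi - pred) * xi
        b += (yi - pred)

acc = np.mean([(1 if xi @ w + b > 0 else 0) == yi
               for xi, yi in zip(X, y)])
print("training accuracy:", acc)
```

Because English-like strings and the scrambled strings use largely disjoint bigram sets, a linear readout of bigram features separates them, echoing the finding that sensitivity to letter combinations suffices for the task.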
Takeshita, Daisuke; Smeds, Lina; Ala-Laurila, Petri
2017-04-05
Visually guided behaviour at its sensitivity limit relies on single-photon responses originating in a small number of rod photoreceptors. For decades, researchers have debated the neural mechanisms and noise sources that underlie this striking sensitivity. To address this question, we need to understand the constraints arising from the retinal output signals provided by distinct retinal ganglion cell types. It has recently been shown in the primate retina that On and Off parasol ganglion cells, the cell types likely to underlie light detection at the absolute visual threshold, differ fundamentally not only in response polarity, but also in the way they handle single-photon responses originating in rods. The On pathway provides the brain with a thresholded, low-noise readout and the Off pathway with a noisy, linear readout. We outline the mechanistic basis of these different coding strategies and analyse their implications for detecting the weakest light signals. We show that high-fidelity, nonlinear signal processing in the On pathway comes with costs: more single-photon responses are lost and their propagation is delayed compared with the Off pathway. On the other hand, the responses of On ganglion cells allow better intensity discrimination compared with the Off ganglion cell responses near visual threshold. This article is part of the themed issue 'Vision in dim light'. PMID:28193818
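The trade-off between the thresholded (On-like) and linear (Off-like) readouts can be illustrated with a toy simulation of a rod pool. The pool size, noise level, threshold, and flash strength below are illustrative assumptions, not values from the paper; the point is only that thresholding each rod's signal suppresses continuous noise at the cost of discarding some genuine single-photon responses.

```python
import numpy as np

rng = np.random.default_rng(1)

n_rods = 1000     # rods converging onto one ganglion cell (assumed)
noise_sd = 0.25   # continuous rod noise, in units of the mean
                  # single-photon response amplitude (assumed)
threshold = 0.6   # On-pathway nonlinearity per rod (assumed)
n_trials = 2000

def trial(mean_photons):
    # Every rod contributes continuous noise; a Poisson-distributed
    # few also absorb a photon (response amplitude ~ N(1, noise_sd)).
    signals = rng.normal(0.0, noise_sd, n_rods)
    hits = rng.poisson(mean_photons)
    signals[:hits] += 1.0
    linear = signals.sum()                       # Off-like readout
    thresh = signals[signals > threshold].sum()  # On-like readout
    return linear, thresh

# Dark trials (no photons) versus dim flashes (~3 photons in the pool).
dark = np.array([trial(0.0) for _ in range(n_trials)])
flash = np.array([trial(3.0) for _ in range(n_trials)])

print("dark noise, linear readout:      %.2f" % dark[:, 0].std())
print("dark noise, thresholded readout: %.2f" % dark[:, 1].std())
```

The linear readout accumulates noise from every rod in the pool, while the thresholded readout stays quiet in darkness but loses the fraction of single-photon responses whose noisy amplitude falls below the threshold.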
ERIC Educational Resources Information Center
Anderson, Barton L.
2007-01-01
There has been a growing interest in understanding the computations involved in the processes underlying visual segmentation and interpolation in conditions of occlusion. P. J. Kellman, P. Garrigan, T. F. Shipley, and B. P. Keane and M. K. Albert defended the view that identical contour interpolation mechanisms underlie modal and amodal…
Haptic perception and body representation in lateral and medial occipito-temporal cortices.
Costantini, Marcello; Urgesi, Cosimo; Galati, Gaspare; Romani, Gian Luca; Aglioti, Salvatore M
2011-04-01
Although vision is the primary sensory modality that humans and other primates use to identify objects in the environment, we can recognize crucial object features (e.g., shape, size) using the somatic modality. Previous studies have shown that the occipito-temporal areas dedicated to the visual processing of object forms, faces and bodies also show category-selective responses when the preferred stimuli are haptically explored out of view. Visual processing of human bodies engages specific areas in lateral (extrastriate body area, EBA) and medial (fusiform body area, FBA) occipito-temporal cortex. This study aimed at exploring the relative involvement of EBA and FBA in the haptic exploration of body parts. During fMRI scanning, participants were asked to haptically explore either real-size fake body parts or objects. We found a selective activation of right and left EBA, but not of right FBA, while participants haptically explored body parts as compared to real objects. This suggests that EBA may integrate visual body representations with somatosensory information regarding body parts and form a multimodal representation of the body. Furthermore, both left and right EBA showed a comparable level of body selectivity during haptic perception and visual imagery. However, right but not left EBA was more activated during haptic exploration than visual imagery of body parts, ruling out that the response to haptic body exploration was entirely due to the use of visual imagery. Overall, the results point to the existence of different multimodal body representations in the occipito-temporal cortex which are activated during perception and imagery of human body parts. Copyright © 2011 Elsevier Ltd. All rights reserved.
The ventral visual pathway: an expanded neural framework for the processing of object quality.
Kravitz, Dwight J; Saleem, Kadharbatcha S; Baker, Chris I; Ungerleider, Leslie G; Mishkin, Mortimer
2013-01-01
Since the original characterization of the ventral visual pathway, our knowledge of its neuroanatomy, functional properties, and extrinsic targets has grown considerably. Here we synthesize this recent evidence and propose that the ventral pathway is best understood as a recurrent occipitotemporal network containing neural representations of object quality both utilized and constrained by at least six distinct cortical and subcortical systems. Each system serves its own specialized behavioral, cognitive, or affective function, collectively providing the raison d'être for the ventral visual pathway. This expanded framework contrasts with the depiction of the ventral visual pathway as a largely serial staged hierarchy culminating in singular object representations and more parsimoniously incorporates attentional, contextual, and feedback effects. Published by Elsevier Ltd.
Influence of Immersive Human Scale Architectural Representation on Design Judgment
NASA Astrophysics Data System (ADS)
Elder, Rebecca L.
Unrealistic visual representations of architecture within our existing environments have lost all reference to the human senses. As a design tool, visual and auditory stimuli can be utilized to determine humans' perception of design. This experiment renders varying building inputs within different sites, simulated with corresponding immersive visual and audio sensory cues. Introducing audio has been shown to influence the way a person perceives a space, yet most inhabitants rely strictly on their sense of vision to make design judgments. Though not as apparent, users prefer spaces that have a better quality of sound and comfort. Through a series of questions, we can begin to analyze whether a design is fit for both an acoustic and visual environment.
2015-10-01
Developed an overview visualization to help clinicians identify patients whose condition is changing, inserted these indices into the sepsis-specific decision support visualization, and created a sepsis identification visualization tool to help clinicians identify patients headed for septic shock…
Visual and Vestibular Determinants of Perceived Eye-Level
NASA Technical Reports Server (NTRS)
Cohen, Malcolm Martin
2003-01-01
Both gravitational and optical sources of stimulation combine to determine the perceived elevations of visual targets. The ways in which these sources of stimulation combine with one another in operational aeronautical environments are critical for pilots to make accurate judgments of the relative altitudes of other aircraft and of their own altitude relative to the terrain. In a recent study, my colleagues and I required eighteen observers to set visual targets at their apparent horizon while they experienced various levels of Gz in the human centrifuge at NASA-Ames Research Center. The targets were viewed in darkness and also against specific background optical arrays that were oriented at various angles with respect to the vertical; target settings were lowered as Gz was increased; this effect was reduced when the background optical array was visible. Also, target settings were displaced in the direction that the background optical array was pitched. Our results were attributed to the combined influences of otolith-oculomotor mechanisms that underlie the elevator illusion and visual-oculomotor mechanisms (optostatic responses) that underlie the perceptual effects of viewing pitched optical arrays that comprise the background. In this paper, I present a mathematical model that describes the independent and combined effects of Gz intensity and the orientation and structure of background optical arrays; the model predicts quantitative deviations from normal accurate perceptions of target localization under a variety of conditions. Our earlier experimental results and the mathematical model are described in some detail, and the effects of viewing specific optical arrays under various gravitational-inertial conditions encountered in aeronautical environments are discussed.
Remembering the past and imagining the future
Byrne, Patrick; Becker, Suzanna; Burgess, Neil
2009-01-01
The neural mechanisms underlying spatial cognition are modelled, integrating neuronal, systems and behavioural data, and addressing the relationships between long-term memory, short-term memory and imagery, and between egocentric and allocentric and visual and idiothetic representations. Long-term spatial memory is modeled as attractor dynamics within medial-temporal allocentric representations, and short-term memory as egocentric parietal representations driven by perception, retrieval and imagery, and modulated by directed attention. Both encoding and retrieval/imagery require translation between egocentric and allocentric representations, mediated by posterior parietal and retrosplenial areas and utilizing head direction representations in Papez's circuit. Thus the hippocampus effectively indexes information by real or imagined location, while Papez's circuit translates to imagery or from perception according to the direction of view. Modulation of this translation by motor efference allows "spatial updating" of representations, while prefrontal simulated motor efference allows mental exploration. The alternating temporo-parietal flows of information are organized by the theta rhythm. Simulations demonstrate the retrieval and updating of familiar spatial scenes, hemispatial neglect in memory, and the effects on hippocampal place cell firing of lesioned head direction representations and of conflicting visual and idiothetic inputs. PMID:17500630
Parts, Cavities, and Object Representation in Infancy
ERIC Educational Resources Information Center
Hayden, Angela; Bhatt, Ramesh S.; Kangas, Ashley; Zieber, Nicole
2011-01-01
Part representation is not only critical to object perception but also plays a key role in a number of basic visual cognition functions, such as figure-ground segregation, allocation of attention, and memory for shapes. Yet, virtually nothing is known about the development of part representation. If parts are fundamental components of object shape…
Orienting Attention to Sound Object Representations Attenuates Change Deafness
ERIC Educational Resources Information Center
Backer, Kristina C.; Alain, Claude
2012-01-01
According to the object-based account of attention, multiple objects coexist in short-term memory (STM), and we can selectively attend to a particular object of interest. Although there is evidence that attention can be directed to visual object representations, the assumption that attention can be oriented to sound object representations has yet…
ERIC Educational Resources Information Center
Zagumny, Lisa; Richey, Amanda B.
2013-01-01
In this critical discourse analysis of six high-school world geography textbooks, we explore how constructions and representations of North Africa and Southwest Asia have served to reinforce Orientalist discourse in formal curriculum. Visual and written representations in these textbooks were overwhelmingly confounded by a traditional/modern…