Congenital blindness limits allocentric to egocentric switching ability.
Ruggiero, Gennaro; Ruotolo, Francesco; Iachini, Tina
2018-03-01
Many everyday spatial activities require the cooperation or switching between egocentric (subject-to-object) and allocentric (object-to-object) spatial representations. The literature on blind people has reported that the lack of vision (congenital blindness) may limit the capacity to represent allocentric spatial information. However, research has mainly focused on the selective involvement of egocentric or allocentric representations, not on the switching between them. Here we investigated the effect of visual deprivation on the ability to switch between spatial frames of reference. To this aim, congenitally blind (long-term visual deprivation), blindfolded sighted (temporary visual deprivation) and sighted (full visual availability) participants were compared on the Ego-Allo switching task. This task assessed the capacity to verbally judge the relative distances between memorized stimuli in switching (from egocentric-to-allocentric: Ego-Allo; from allocentric-to-egocentric: Allo-Ego) and non-switching (only-egocentric: Ego-Ego; only-allocentric: Allo-Allo) conditions. Results showed a difficulty in congenitally blind participants when switching from allocentric to egocentric representations, but not when the first anchor point was egocentric. In line with previous results, a deficit in processing allocentric representations in non-switching conditions also emerged. These findings suggest that the allocentric deficit in congenital blindness may produce a difficulty in simultaneously maintaining and combining different spatial representations. This deficit alters the capacity to switch between reference frames specifically when the first anchor point is external and not body-centered.
Ekstrom, Arne D.; Arnold, Aiden E. G. F.; Iaria, Giuseppe
2014-01-01
While the widely studied allocentric spatial representation holds a special status in neuroscience research, its exact nature and neural underpinnings continue to be the topic of debate, particularly in humans. Here, based on a review of human behavioral research, we argue that allocentric representations do not provide the kind of map-like, metric representation one might expect based on past theoretical work. Instead, we suggest that almost all tasks used in past studies involve a combination of egocentric and allocentric representation, complicating both the investigation of the cognitive basis of an allocentric representation and the task of identifying a brain region specifically dedicated to it. Indeed, as we discuss in detail, past studies suggest numerous brain regions important to allocentric spatial memory in addition to the hippocampus, including parahippocampal, retrosplenial, and prefrontal cortices. We thus argue that although allocentric computations will often require the hippocampus, particularly those involving extracting details across temporally specific routes, the hippocampus is not necessary for all allocentric computations. We instead suggest that a non-aggregate network process involving multiple interacting brain areas, including hippocampus and extra-hippocampal areas such as parahippocampal, retrosplenial, prefrontal, and parietal cortices, better characterizes the neural basis of spatial representation during navigation. According to this model, an allocentric representation does not emerge from the computations of a single brain region (i.e., hippocampus) nor is it readily decomposable into additive computations performed by separate brain regions. Instead, an allocentric representation emerges from computations partially shared across numerous interacting brain regions. We discuss our non-aggregate network model in light of existing data and provide several key predictions for future experiments. PMID:25346679
Albouy, Geneviève; Fogel, Stuart; Pottiez, Hugo; Nguyen, Vo An; Ray, Laura; Lungu, Ovidiu; Carrier, Julie; Robertson, Edwin; Doyon, Julien
2013-01-01
Motor sequence learning is known to rely on more than a single process. As the skill develops with practice, two different representations of the sequence are formed: a goal representation built under spatial allocentric coordinates and a movement representation mediated through egocentric motor coordinates. This study aimed to explore the influence of daytime sleep (nap) on consolidation of these two representations. Through the manipulation of an explicit finger sequence learning task and a transfer protocol, we show that both allocentric (spatial) and egocentric (motor) representations of the sequence can be isolated after initial training. Our results also demonstrate that nap favors the emergence of offline gains in performance for the allocentric, but not the egocentric representation, even after accounting for fatigue effects. Furthermore, sleep-dependent gains in performance observed for the allocentric representation are correlated with spindle density during non-rapid eye movement (NREM) sleep of the post-training nap. In contrast, performance on the egocentric representation is only maintained, but not improved, regardless of the sleep/wake condition. These results suggest that motor sequence memory acquisition and consolidation involve distinct mechanisms that rely on sleep (and specifically, spindle) or simple passage of time, depending respectively on whether the sequence is performed under allocentric or egocentric coordinates. PMID:23300993
ERIC Educational Resources Information Center
Viczko, Jeremy; Sergeeva, Valya; Ray, Laura B.; Owen, Adrian M.; Fogel, Stuart M.
2018-01-01
Sleep facilitates the consolidation (i.e., enhancement) of simple, explicit (i.e., conscious) motor sequence learning (MSL). MSL can be dissociated into egocentric (i.e., motor) or allocentric (i.e., spatial) frames of reference. The consolidation of the allocentric memory representation is sleep-dependent, whereas the egocentric consolidation…
Ciaramelli, Elisa; Rosenbaum, R Shayna; Solcz, Stephanie; Levine, Brian; Moscovitch, Morris
2010-05-01
The ability to navigate in a familiar environment depends on both an intact mental representation of allocentric spatial information and the integrity of systems supporting complementary egocentric representations. Although the hippocampus has been implicated in learning new allocentric spatial information, converging evidence suggests that the posterior parietal cortex (PPC) might support egocentric representations. To date, however, few studies have examined long-standing egocentric representations of environments learned long ago. Here we tested 7 patients with focal lesions in PPC and 12 normal controls in remote spatial memory tasks, including 2 tasks reportedly reliant on allocentric representations (distance and proximity judgments) and 2 tasks reportedly reliant on egocentric representations (landmark sequencing and route navigation; see Rosenbaum, Ziegler, Winocur, Grady, & Moscovitch, 2004). Patients were unimpaired in distance and proximity judgments. In contrast, they all failed in route navigation, and left-lesioned patients also showed marginally impaired performance in landmark sequencing. Patients' subjective experience associated with navigation was impoverished and disembodied compared with that of the controls. These results suggest that PPC is crucial for accessing remote spatial memories within an egocentric reference frame that enables both navigation and reexperiencing. Additionally, PPC was found to be necessary to implement specific aspects of allocentric navigation with high demands on spontaneous retrieval.
Remembering the past and imagining the future: a neural model of spatial memory and imagery
Byrne, Patrick; Becker, Suzanna; Burgess, Neil
2009-01-01
The neural mechanisms underlying spatial cognition are modelled, integrating neuronal, systems and behavioural data, and addressing the relationships between long-term memory, short-term memory and imagery, and between egocentric and allocentric and visual and idiothetic representations. Long-term spatial memory is modeled as attractor dynamics within medial-temporal allocentric representations, and short-term memory as egocentric parietal representations driven by perception, retrieval and imagery, and modulated by directed attention. Both encoding and retrieval/imagery require translation between egocentric and allocentric representations, mediated by posterior parietal and retrosplenial areas and utilizing head direction representations in Papez’s circuit. Thus the hippocampus effectively indexes information by real or imagined location, while Papez’s circuit translates to imagery or from perception according to the direction of view. Modulation of this translation by motor efference allows “spatial updating” of representations, while prefrontal simulated motor efference allows mental exploration. The alternating temporo-parietal flows of information are organized by the theta rhythm. Simulations demonstrate the retrieval and updating of familiar spatial scenes, hemispatial neglect in memory, and the effects on hippocampal place cell firing of lesioned head direction representations and of conflicting visual and idiothetic inputs. PMID:17500630
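The egocentric-allocentric translation at the heart of this model is, geometrically, a rotation by the current heading plus a translation by the observer's position. The sketch below is purely illustrative and not code from the paper; the 2D simplification, the function names and the frame conventions are assumptions made here for clarity.

```python
import numpy as np

def allo_to_ego(landmark_xy, observer_xy, heading_rad):
    """World-centered (allocentric) landmark position -> body-centered
    (egocentric) coordinates, given the observer's position and heading.
    Assumed convention: heading is counter-clockwise from the world +x axis;
    in the egocentric frame +x is 'ahead' and +y is to the observer's left."""
    dx, dy = np.asarray(landmark_xy, float) - np.asarray(observer_xy, float)
    c, s = np.cos(-heading_rad), np.sin(-heading_rad)
    return np.array([c * dx - s * dy, s * dx + c * dy])

def ego_to_allo(ego_xy, observer_xy, heading_rad):
    """Inverse transform: recover the allocentric position from an
    egocentric vector plus the observer's pose."""
    x, y = ego_xy
    c, s = np.cos(heading_rad), np.sin(heading_rad)
    return np.asarray(observer_xy, float) + np.array([c * x - s * y, s * x + c * y])

# Example: an observer at (2, 0) facing the world +y direction sees a
# landmark at (2, 5) straight ahead at a distance of 5 units.
print(allo_to_ego([2, 5], [2, 0], np.pi / 2))  # -> [5. 0.]
```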
Assessing the mental frame syncing in the elderly: a virtual reality protocol.
Serino, Silvia; Cipresso, Pietro; Gaggioli, Andrea; Riva, Giuseppe
2014-01-01
Decline in spatial memory in the elderly is often underestimated, and it is crucial to fully investigate the cognitive underpinnings of early spatial impairment. A virtual reality-based procedure was developed to assess deficits in "mental frame syncing", namely the cognitive ability that allows effective orientation by synchronizing the allocentric viewpoint-independent representation with the allocentric viewpoint-dependent representation. A pilot study was carried out to evaluate mental frame syncing abilities in a sample of 16 elderly participants. Preliminary results indicated that general cognitive functioning was associated with the ability to synchronize these two allocentric reference frames.
Egocentric-updating during navigation facilitates episodic memory retrieval.
Gomez, Alice; Rousset, Stéphane; Baciu, Monica
2009-11-01
Influential models suggest that spatial processing is essential for episodic memory [O'Keefe, J., & Nadel, L. (1978). The hippocampus as a cognitive map. London: Oxford University Press]. However, although several types of spatial relations exist, such as allocentric (i.e. object-to-object relations), egocentric (i.e. static object-to-self relations) or egocentric updated on navigation information (i.e. self-to-environment relations in a dynamic way), usually only allocentric representations are described as potentially subserving episodic memory [Nadel, L., & Moscovitch, M. (1998). Hippocampal contributions to cortical plasticity. Neuropharmacology, 37(4-5), 431-439]. This study proposes to confront the allocentric representation hypothesis with an egocentric updated with self-motion representation hypothesis. In the present study, we explored retrieval performance in relation to these two types of spatial processing levels during learning. Episodic remembering has been assessed through Remember responses in a recall and in a recognition task, combined with a "Remember-Know-Guess" paradigm [Gardiner, J. M. (2001). Episodic memory and autonoetic consciousness: A first-person approach. Philosophical Transactions of the Royal Society B: Biological Sciences, 356(1413), 1351-1361] to assess the autonoetic level of responses. Our results show that retrieval performance was significantly higher when encoding was performed in the egocentric-updated condition. Although egocentric updated with self-motion and allocentric representations are not mutually exclusive, these results suggest that egocentric updating processing facilitates remember responses more than allocentric processing. The results are discussed according to Burgess and colleagues' model of episodic memory [Burgess, N., Becker, S., King, J. A., & O'Keefe, J. (2001). Memory for events and their spatial context: models and experiments. Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences, 356(1413), 1493-1503].
Serino, Silvia; Morganti, Francesca; Di Stefano, Fabio; Riva, Giuseppe
2015-01-01
Several studies have pointed out that egocentric and allocentric spatial impairments are one of the earliest manifestations of Alzheimer's Disease (AD). It is less clear how a break in the continuous interaction between these two representations may be a crucial marker to detect patients who are at risk of developing dementia. The main objective of this study is to compare the performances of participants suffering from amnestic mild cognitive impairment (aMCI group), patients with AD (AD group) and a control group (CG), using a virtual reality (VR)-based procedure for assessing the abilities in encoding, storing and syncing different spatial representations. In the first task, participants were required to indicate on a real map the position of the object they had memorized, while in the second task they were invited to retrieve its position from an empty version of the same virtual room, starting from a different position. The entire procedure was repeated across three different trials, depending on the object location in the encoding phase. Our findings showed that aMCI patients performed significantly more poorly in the third trial of the first task, showing a deficit in the ability to encode and store an allocentric viewpoint-independent representation. On the other hand, AD patients performed significantly more poorly when compared to the CG in the second task, indicating a specific impairment in storing an allocentric viewpoint-independent representation and then syncing it with the allocentric viewpoint-dependent representation. Furthermore, data suggested that these impairments are not a product of generalized cognitive decline or of general decay in spatial abilities, but instead may reflect a selective deficit in spatial organization. Overall, these findings provide an initial insight into the cognitive underpinnings of amnestic impairment in aMCI and AD patients, exploiting the potential of VR.
As the world turns: short-term human spatial memory in egocentric and allocentric coordinates.
Banta Lavenex, Pamela; Lecci, Sandro; Prêtre, Vincent; Brandner, Catherine; Mazza, Christian; Pasquier, Jérôme; Lavenex, Pierre
2011-05-16
We aimed to determine whether human subjects' reliance on different sources of spatial information encoded in different frames of reference (i.e., egocentric versus allocentric) affects their performance, decision time and memory capacity in a short-term spatial memory task performed in the real world. Subjects were asked to play the Memory game (a.k.a. the Concentration game) without an opponent, in four different conditions that controlled for the subjects' reliance on egocentric and/or allocentric frames of reference for the elaboration of a spatial representation of the image locations enabling maximal efficiency. We report experimental data from young adult men and women, and describe a mathematical model to estimate human short-term spatial memory capacity. We found that short-term spatial memory capacity was greatest when an egocentric spatial frame of reference enabled subjects to encode and remember the image locations. However, when egocentric information was not reliable, short-term spatial memory capacity was greater and decision time shorter when an allocentric representation of the image locations with respect to distant objects in the surrounding environment was available, as compared to when only a spatial representation encoding the relationships between the individual images, independent of the surrounding environment, was available. Our findings thus further demonstrate that changes in viewpoint produced by the movement of images placed in front of a stationary subject are not equivalent to the movement of the subject around stationary images. We discuss possible limitations of classical neuropsychological and virtual reality experiments of spatial memory, which typically restrict the sensory information normally available to human subjects in the real world.
Bisby, James A; King, John A; Brewin, Chris R; Burgess, Neil; Curran, H Valerie
2010-08-01
A dual representation model of intrusive memory proposes that personally experienced events give rise to two types of representation: an image-based, egocentric representation based on sensory-perceptual features; and a more abstract, allocentric representation that incorporates spatiotemporal context. The model proposes that intrusions reflect involuntary reactivation of egocentric representations in the absence of a corresponding allocentric representation. We tested the model by investigating the effect of alcohol on intrusive memories and, concurrently, on egocentric and allocentric spatial memory. With a double-blind independent group design, participants were administered alcohol (.4 or .8 g/kg) or placebo. A virtual environment was used to present objects and test recognition memory from the same viewpoint as presentation (tapping egocentric memory) or a shifted viewpoint (tapping allocentric memory). Participants were also exposed to a trauma video and required to detail intrusive memories for 7 days, after which explicit memory was assessed. There was a selective impairment of shifted-view recognition after the low dose of alcohol, whereas the high dose induced a global impairment in same-view and shifted-view conditions. Alcohol showed a dose-dependent inverted "U"-shaped effect on intrusions, with only the low dose increasing the number of intrusions, replicating previous work. When same-view recognition was intact, decrements in shifted-view recognition were associated with increases in intrusions. The differential effect of alcohol on intrusive memories and on same/shifted-view recognition supports a dual representation model in which intrusions might reflect an imbalance between two types of memory representation. These findings highlight important clinical implications, given alcohol's involvement in real-life trauma.
The role of egocentric and allocentric abilities in Alzheimer's disease: a systematic review.
Serino, Silvia; Cipresso, Pietro; Morganti, Francesca; Riva, Giuseppe
2014-07-01
A great effort has been made to identify crucial cognitive markers that can be used to characterize the cognitive profile of Alzheimer's disease (AD). Because topographical disorientation is one of the earliest clinical manifestations of AD, an increasing number of studies have investigated the spatial deficits in this clinical population. In this systematic review, we specifically focused on experimental studies investigating allocentric and egocentric deficits to understand which spatial cognitive processes are differentially impaired in the different stages of the disease. First, our results highlighted that spatial deficits appear in the earliest stages of the disease. Second, a need for a more ecological assessment of spatial functions will be presented. Third, our analysis suggested that a prevalence of allocentric impairment exists. Specifically, two selected studies underlined that a more specific impairment is found in the translation between the egocentric and allocentric representations. In this perspective, the implications for future research and neurorehabilitative interventions will be discussed.
Chen, Ying; Byrne, Patrick; Crawford, J Douglas
2011-01-01
Allocentric cues can be used to encode locations in visuospatial memory, but it is not known how and when these representations are converted into egocentric commands for behaviour. Here, we tested the influence of different memory intervals on reach performance toward targets defined in either egocentric or allocentric coordinates, and then compared this to performance in a task where subjects were implicitly free to choose when to convert from allocentric to egocentric representations. Reach and eye positions were measured using Optotrak and Eyelink Systems, respectively, in fourteen subjects. Our results confirm that egocentric representations degrade over a delay of several seconds, whereas allocentric representations remained relatively stable over the same time scale. Moreover, when subjects were free to choose, they converted allocentric representations into egocentric representations as soon as possible, despite the apparent cost in reach precision in our experimental paradigm. This suggests that humans convert allocentric representations into egocentric commands at the first opportunity, perhaps to optimize motor noise and movement timing in real-world conditions.
Cues, context, and long-term memory: the role of the retrosplenial cortex in spatial cognition
Miller, Adam M. P.; Vedder, Lindsey C.; Law, L. Matthew; Smith, David M.
2014-01-01
Spatial navigation requires memory representations of landmarks and other navigation cues. The retrosplenial cortex (RSC) is anatomically positioned between limbic areas important for memory formation, such as the hippocampus (HPC) and the anterior thalamus, and cortical regions along the dorsal stream known to contribute importantly to long-term spatial representation, such as the posterior parietal cortex. Damage to the RSC severely impairs allocentric representations of the environment, including the ability to derive navigational information from landmarks. The specific deficits seen in tests of human and rodent navigation suggest that the RSC supports allocentric representation by processing the stable features of the environment and the spatial relationships among them. In addition to spatial cognition, the RSC plays a key role in contextual and episodic memory. The RSC also contributes importantly to the acquisition and consolidation of long-term spatial and contextual memory through its interactions with the HPC. Within this framework, the RSC plays a dual role as part of the feedforward network providing sensory and mnemonic input to the HPC and as a target of the hippocampal-dependent systems consolidation of long-term memory. PMID:25140141
Selective influence of prior allocentric knowledge on the kinesthetic learning of a path.
Lafon, Matthieu; Vidal, Manuel; Berthoz, Alain
2009-04-01
Spatial cognition studies have described two main cognitive strategies involved in the memorization of traveled paths in human navigation. One of these strategies uses the action-based memory (egocentric) of the traveled route or paths, which involves kinesthetic memory, optic flow, and episodic memory, whereas the other strategy privileges a survey memory of cartographic type (allocentric). Most studies have dealt with these two strategies separately, but none has tried to show the interaction between them in spite of the fact that we commonly use a map to imagine our journey and then proceed using egocentric navigation. An interesting question is therefore: how does prior allocentric knowledge of the environment affect the egocentric, purely kinesthetic navigation processes involved in human navigation? We designed an experiment in which blindfolded subjects had first to walk and memorize a path with kinesthetic cues only. They had previously been shown a map of the path, which was either correct or distorted (consistent shrinking or growing). The latter transformations were studied in order to observe what influence a distorted prior knowledge could have on spatial mechanisms. After having completed the first learning travel along the path, they had to perform several spatial tasks during the testing phase: (1) pointing towards the origin and (2) to specific points encountered along the path, (3) a free locomotor reproduction, and (4) a drawing of the memorized path. The results showed that prior cartographic knowledge influences the paths drawn and the spatial inference capacity, whereas neither locomotor reproduction nor spatial updating was disturbed. Our results strongly support the notion that (1) there are two independent neural bases underlying these mechanisms: a map-like representation allowing allocentric spatial inferences, and a kinesthetic memory of self-motion in space; and (2) a common use of, or a switching between, these two strategies is possible. Nevertheless, allocentric representations can emerge from the experience of kinesthetic cues alone.
When Do Objects Become Landmarks? A VR Study of the Effect of Task Relevance on Spatial Memory
Han, Xue; Byrne, Patrick; Kahana, Michael; Becker, Suzanna
2012-01-01
We investigated how objects come to serve as landmarks in spatial memory, and more specifically how they form part of an allocentric cognitive map. Participants performing a virtual driving task incidentally learned the layout of a virtual town and locations of objects in that town. They were subsequently tested on their spatial and recognition memory for the objects. To assess whether the objects were encoded allocentrically we examined pointing consistency across tested viewpoints. In three experiments, we found that spatial memory for objects at navigationally relevant locations was more consistent across tested viewpoints, particularly when participants had more limited experience of the environment. When participants’ attention was focused on the appearance of objects, the navigational relevance effect was eliminated, whereas when their attention was focused on objects’ locations, this effect was enhanced, supporting the hypothesis that when objects are processed in the service of navigation, rather than merely being viewed as objects, they engage qualitatively distinct attentional systems and are incorporated into an allocentric spatial representation. The results are consistent with evidence from the neuroimaging literature that when objects are relevant to navigation, they not only engage the ventral “object processing stream”, but also the dorsal stream and medial temporal lobe memory system classically associated with allocentric spatial memory. PMID:22586455
Egocentric and allocentric representations in auditory cortex
Brimijoin, W. Owen; Bizley, Jennifer K.
2017-01-01
A key function of the brain is to provide a stable representation of an object’s location in the world. In hearing, sound azimuth and elevation are encoded by neurons throughout the auditory system, and auditory cortex is necessary for sound localization. However, the coordinate frame in which neurons represent sound space remains undefined: classical spatial receptive fields in head-fixed subjects can be explained either by sensitivity to sound source location relative to the head (egocentric) or relative to the world (allocentric encoding). This coordinate frame ambiguity can be resolved by studying freely moving subjects; here we recorded spatial receptive fields in the auditory cortex of freely moving ferrets. We found that most spatially tuned neurons represented sound source location relative to the head across changes in head position and direction. In addition, we also recorded a small number of neurons in which sound location was represented in a world-centered coordinate frame. We used measurements of spatial tuning across changes in head position and direction to explore the influence of sound source distance and speed of head movement on auditory cortical activity and spatial tuning. Modulation depth of spatial tuning increased with distance for egocentric but not allocentric units, whereas, for both populations, modulation was stronger at faster movement speeds. Our findings suggest that early auditory cortex primarily represents sound source location relative to ourselves but that a minority of cells can represent sound location in the world independent of our own position. PMID:28617796
Spatial Hyperschematia without Spatial Neglect after Insulo-Thalamic Disconnection
Saj, Arnaud; Wilcke, Juliane C.; Gschwind, Markus; Emond, Héloïse; Assal, Frédéric
2013-01-01
Different spatial representations are not stored as a single multipurpose map in the brain. Right brain-damaged patients can show a distortion, a compression of peripersonal and extrapersonal space. Here we report the case of a patient with a right insulo-thalamic disconnection without spatial neglect. The patient, compared with 10 healthy control subjects, showed a constant and reliable increase of her peripersonal and extrapersonal egocentric space representations - that we named spatial hyperschematia - yet left her allocentric space representations intact. This striking dissociation shows that our interactions with the surrounding world are represented and processed modularly in the human brain, depending on their frame of reference. PMID:24302992
Thaler, Lore; Todd, James T
2009-04-01
Two experiments are reported that were designed to measure the accuracy and reliability of both visually guided hand movements (Exp. 1) and perceptual matching judgments (Exp. 2). The specific procedure for informing subjects of the required response on each trial was manipulated so that some tasks could only be performed using an allocentric representation of the visual target; others could be performed using either an allocentric or hand-centered representation; still others could be performed based on an allocentric, hand-centered or head/eye-centered representation. Both head/eye-centered and hand-centered representations are egocentric because they specify visual coordinates with respect to the subject. The results reveal that accuracy and reliability of both motor and perceptual responses are highest when subjects direct their response towards a visible target location, which allows them to rely on a representation of the target in head/eye-centered coordinates. Systematic changes in averages and standard deviations of responses are observed when subjects cannot direct their response towards a visible target location, but have to represent target distance and direction in either hand-centered or allocentric visual coordinates instead. Subjects' motor and perceptual performance agree quantitatively well. These results strongly suggest that subjects process head/eye-centered representations differently from hand-centered or allocentric representations, but that they process visual information for motor actions and perceptual judgments together.
Pasqualotto, Achille; Esenkaya, Tayfun
2016-01-01
Visual-to-auditory sensory substitution is used to convey visual information through audition, and it was initially created to compensate for blindness; it consists of software converting the visual images captured by a video-camera into the equivalent auditory images, or "soundscapes". Here, it was used by blindfolded sighted participants to learn the spatial position of simple shapes depicted in images arranged on the floor. Very few studies have used sensory substitution to investigate spatial representation, while it has been widely used to investigate object recognition. Additionally, with sensory substitution we could study the performance of participants actively exploring the environment through audition, rather than passively localizing sound sources. Blindfolded participants egocentrically learnt the position of six images by using sensory substitution and then a judgment of relative direction task (JRD) was used to determine how this scene was represented. This task consists of imagining being in a given location, oriented in a given direction, and pointing towards the required image. Before performing the JRD task, participants explored a map that provided allocentric information about the scene. Although spatial exploration was egocentric, surprisingly we found that performance in the JRD task was better for allocentric perspectives. This suggests that the egocentric representation of the scene was updated. This result is in line with previous studies using visual and somatosensory scenes, thus supporting the notion that different sensory modalities produce equivalent spatial representation(s). Moreover, our results have practical implications to improve training methods with sensory substitution devices (SSD).
Allocentric and contra-aligned spatial representations of a town environment in blind people.
Chiesa, Silvia; Schmidt, Susanna; Tinti, Carla; Cornoldi, Cesare
2017-10-01
Evidence concerning the representation of space by blind individuals is still unclear, as sometimes blind people behave like sighted people do, while at other times they present difficulties. A better understanding of blind people's difficulties, especially with reference to the strategies used to form the representation of the environment, may help to enhance knowledge of the consequences of the absence of vision. The present study examined the representation of the locations of landmarks of a real town by using pointing tasks that entailed either allocentric points of reference with mental rotations of different degrees, or contra-aligned representations. Results showed that, in general, people had difficulties when they had to point from a different perspective to aligned landmarks or from the original perspective to contra-aligned landmarks, but this difficulty was particularly evident for the blind. The examination of the strategies adopted to perform the tasks showed that only a small group of blind participants used a survey strategy and that this group had a better performance with respect to people who adopted route or verbal strategies. Implications for the comprehension of the consequences of the absence of visual experience for spatial cognition are discussed, focusing in particular on conceivable interventions.
Serino, Silvia; Pedroli, Elisa; Tuena, Cosimo; De Leo, Gianluca; Stramba-Badiale, Marco; Goulene, Karine; Mariotti, Noemi G; Riva, Giuseppe
2017-01-01
A growing body of evidence suggests that people with Alzheimer's Disease (AD) show compromised spatial abilities. In addition, there exists from the earliest stages of AD a specific impairment in "mental frame syncing," which is the ability to synchronize an allocentric viewpoint-independent representation (including object-to-object information) with an egocentric one by computing the bearing of each relevant "object" in the environment in relation to the stored heading in space (i.e., information about our viewpoint contained in the allocentric viewpoint-dependent representation). The main objective of this development-of-concept trial was to evaluate the efficacy of a novel VR-based training protocol focused on the enhancement of the "mental frame syncing" of the different spatial representations in subjects with AD. We recruited 20 individuals with AD who were randomly assigned to either "VR-based training" or "Control Group." Moreover, eight cognitively healthy elderly individuals were recruited to participate in the VR-based training in order to have a different comparison group. Based on a neuropsychological assessment, our results indicated a significant improvement in long-term spatial memory after the VR-based training for patients with AD; this means that transference of improvements from the VR-based training to more general aspects of spatial cognition was observed. Interestingly, there was also a significant effect of VR-based training on executive functioning for cognitively healthy elderly individuals. In sum, VR could be considered as an advanced embodied tool suitable for treating spatial recall impairments.
Complexity vs. unity in unilateral spatial neglect.
Rode, G; Fourtassi, M; Pagliari, C; Pisella, L; Rossetti, Y
Unilateral spatial neglect constitutes a heterogeneous syndrome characterized by two main entangled components: a contralesional bias of spatial attention orientation; and impaired building and/or exploration of mental representations of space. These two components are present in different subtypes of unilateral spatial neglect (visual, auditory, somatosensory, motor, allocentric, egocentric, personal, representational and productive manifestations). Detailed anatomical and clinical analyses of these conditions and their underlying disorders show the complexity of spatial cognitive deficits and the difficulty of proposing just one explanation. This complexity is in contrast, however, to the widely acknowledged effectiveness of rehabilitation of the various symptoms and subtypes of unilateral spatial neglect, exemplified in the case of prism adaptation. These common effects are reflections of the unity of the physiotherapeutic mechanisms behind the higher brain functions related to multisensory integration and spatial representations, whereas the paradoxical aspects of unilateral spatial neglect emphasize the need for a greater understanding of spatial cognitive disorders.
The World Is Not Flat: Can People Reorient Using Slope?
ERIC Educational Resources Information Center
Nardi, Daniele; Newcombe, Nora S.; Shipley, Thomas F.
2011-01-01
Studies of spatial representation generally focus on flat environments and visual input. However, the world is not flat, and slopes are part of most natural environments. In a series of 4 experiments, we examined whether humans can use a slope as a source of allocentric, directional information for reorientation. A target was hidden in a corner of…
Improvement of Allocentric Spatial Memory Resolution in Children from 2 to 4 Years of Age
ERIC Educational Resources Information Center
Lambert, Farfalla Ribordy; Lavenex, Pierre; Lavenex, Pamela Banta
2015-01-01
Allocentric spatial memory, the memory for locations coded in relation to objects comprising our environment, is a fundamental component of episodic memory and is dependent on the integrity of the hippocampal formation in adulthood. Previous research from different laboratories reported that basic allocentric spatial memory abilities are reliably…
Gravity influences the visual representation of object tilt in parietal cortex.
Rosenberg, Ari; Angelaki, Dora E
2014-10-22
Sensory systems encode the environment in egocentric (e.g., eye, head, or body) reference frames, creating inherently unstable representations that shift and rotate as we move. However, it is widely speculated that the brain transforms these signals into an allocentric, gravity-centered representation of the world that is stable and independent of the observer's spatial pose. Where and how this representation may be achieved is currently unknown. Here we demonstrate that a subpopulation of neurons in the macaque caudal intraparietal area (CIP) visually encodes object tilt in nonegocentric coordinates defined relative to the gravitational vector. Neuronal responses to the tilt of a visually presented planar surface were measured with the monkey in different spatial orientations (upright and rolled left/right ear down) and then compared. This revealed a continuum of representations in which planar tilt was encoded in a gravity-centered reference frame in approximately one-tenth of the comparisons, intermediate reference frames ranging between gravity-centered and egocentric in approximately two-tenths of the comparisons, and in an egocentric reference frame in less than half of the comparisons. Altogether, almost half of the comparisons revealed a shift in the preferred tilt and/or a gain change consistent with encoding object orientation in nonegocentric coordinates. Through neural network modeling, we further show that a purely gravity-centered representation of object tilt can be achieved directly from the population activity of CIP-like units. These results suggest that area CIP may play a key role in creating a stable, allocentric representation of the environment defined relative to an "earth-vertical" direction.
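As a rough illustration of the distinction being tested here (this is not the authors' analysis), for a surface viewed roughly along the line of sight the conversion between an eye/head-centered tilt estimate and a gravity-centered one reduces to adding the observer's roll angle: a purely egocentric neuron shifts its preferred tilt with the roll, whereas a gravity-centered neuron does not. The sign convention and the planar simplification below are assumptions.

```python
def gravity_centered_tilt(egocentric_tilt_deg, body_roll_deg):
    """Re-express a planar surface tilt measured in eye/head-centered
    coordinates in gravity-centered coordinates, treating both as rotations
    about the line of sight (an illustrative simplification)."""
    return (egocentric_tilt_deg + body_roll_deg) % 360.0

# Example: a surface that reads as a 20 deg tilt in eye-centered coordinates,
# seen by an observer rolled 30 deg ear-down, corresponds to a 50 deg tilt
# relative to the earth-vertical under this convention.
print(gravity_centered_tilt(20.0, 30.0))  # -> 50.0
```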
Enhancing Allocentric Spatial Recall in Pre-schoolers through Navigational Training Programme
Boccia, Maddalena; Rosella, Michela; Vecchione, Francesca; Tanzilli, Antonio; Palermo, Liana; D'Amico, Simonetta; Guariglia, Cecilia; Piccardi, Laura
2017-01-01
Unlike for other abilities, children do not receive systematic spatial orientation training at school, even though navigational training during adulthood improves spatial skills. We investigated whether a navigational training programme (NTP) improved spatial orientation skills in pre-schoolers. We administered a 12-week NTP to seventeen 4- to 5-year-old children (training group, TG). The TG children and 17 age-matched children (control group, CG) who underwent standard didactics were tested twice, before (T0) and after (T1) the NTP, using tasks that tap into landmark, route and survey representations. We determined that the TG participants significantly improved their performances in the most demanding navigational task, which is the task that taps into survey representation. This improvement was significantly higher than that observed in the CG, suggesting that the NTP fostered the acquisition of survey representation. Such representation is typically achieved by age seven. This finding suggests that the NTP improves performance on higher-level navigational tasks in pre-schoolers. PMID:29085278
Social place-cells in the bat hippocampus.
Omer, David B; Maimon, Shir R; Las, Liora; Ulanovsky, Nachum
2018-01-12
Social animals have to know the spatial positions of conspecifics. However, it is unknown how the position of others is represented in the brain. We designed a spatial observational-learning task, in which an observer bat mimicked a demonstrator bat while we recorded hippocampal dorsal-CA1 neurons from the observer bat. A neuronal subpopulation represented the position of the other bat, in allocentric coordinates. About half of these "social place-cells" also represented the observer's own position; that is, they were place cells. The representation of the demonstrator bat did not reflect self-movement or trajectory planning by the observer. Some neurons also represented the position of inanimate moving objects; however, their representation differed from the representation of the demonstrator bat. This suggests a role for hippocampal CA1 neurons in social-spatial cognition.
Riva, Giuseppe
2011-01-01
Obesity and eating disorders are usually considered unrelated problems with different causes. However, various studies identify unhealthful weight-control behaviors (fasting, vomiting, or laxative abuse), induced by a negative experience of the body, as the common antecedents of both obesity and eating disorders. But how might negative body image—common to most adolescents, not only to medical patients—be behind the development of obesity and eating disorders? In this paper, I review the “allocentric lock theory” of negative body image as the possible antecedent of both obesity and eating disorders. Evidence from psychology and neuroscience indicates that our bodily experience involves the integration of different sensory inputs within two different reference frames: egocentric (first-person experience) and allocentric (third-person experience). Even though functional relations between these two frames are usually limited, they influence each other during the interaction between long- and short-term memory processes in spatial cognition. If this process is impaired either through exogenous (e.g., stress) or endogenous causes, the egocentric sensory inputs are unable to update the contents of the stored allocentric representation of the body. In other words, these patients are locked in an allocentric (observer view) negative image of their body, which their sensory inputs are no longer able to update even after a demanding diet and a significant weight loss. This article discusses the possible role of virtual reality in addressing this problem within an integrated treatment approach based on the allocentric lock theory. PMID:21527095
Unimanual SNARC Effect: Hand Matters.
Riello, Marianna; Rusconi, Elena
2011-01-01
A structural representation of the hand embedding information about the identity and relative position of fingers is necessary for counting routines. It may also support associations between numbers and allocentric spatial codes that predictably interact with other known numerical spatial representations, such as the mental number line (MNL). In this study, 48 Western participants whose typical counting routine proceeded from thumb-to-little on both hands performed magnitude and parity binary judgments. Response keys were pressed either with the right index and middle fingers or with the left index and middle fingers in separate blocks. 24 participants responded with both hands in prone posture (i.e., palm down) and 24 participants responded with both hands in supine (i.e., palm up) posture. When hands were in prone posture, the counting direction of the left hand conflicted with the direction of the left-right MNL, whereas the counting direction of the right hand was consistent with it. When hands were in supine posture, the opposite was true. If systematic associations existed between relative number magnitude and an allocentric spatial representation of the finger series within each hand, as predicted on the basis of counting habits, interactions would be expected between hand posture and a unimanual version of the spatial-numerical association of response codes (SNARC) effect. Data revealed that with hands in prone posture a unimanual SNARC effect was present for the right hand, and with hands in supine posture a unimanual SNARC effect was present for the left hand. We propose that a posture-invariant body structural representation of the finger series provides a relevant frame of reference, a within-hand directional vector, that is associated with simple number processing. Such a frame of reference can significantly interact with stimulus-response correspondence effects, like the SNARC, that have been typically attributed to the mapping of numbers on a left-to-right mental line.
Haptic spatial matching in near peripersonal space.
Kaas, Amanda L; Mier, Hanneke I van
2006-04-01
Research has shown that haptic spatial matching at intermanual distances over 60 cm is prone to large systematic errors. The error pattern has been explained by the use of reference frames intermediate between egocentric and allocentric coding. This study investigated haptic performance in near peripersonal space, i.e. at intermanual distances of 60 cm and less. Twelve blindfolded participants (six males and six females) were presented with two turn bars at equal distances from the midsagittal plane, 30 or 60 cm apart. Different orientations (vertical/horizontal or oblique) of the left bar had to be matched by adjusting the right bar to either a mirror symmetric (/ \\) or parallel (/ /) position. The mirror symmetry task can in principle be performed accurately in both an egocentric and an allocentric reference frame, whereas the parallel task requires an allocentric representation. Results showed that parallel matching induced large systematic errors which increased with distance. Overall error was significantly smaller in the mirror task. The task difference also held for the vertical orientation at 60 cm distance, even though this orientation required the same response in both tasks, showing a marked effect of task instruction. In addition, men outperformed women on the parallel task. Finally, contrary to our expectations, systematic errors were found in the mirror task, predominantly at 30 cm distance. Based on these findings, we suggest that haptic performance in near peripersonal space might be dominated by different mechanisms than those which come into play at distances over 60 cm. Moreover, our results indicate that both inter-individual differences and task demands affect task performance in haptic spatial matching. Therefore, we conclude that the study of haptic spatial matching in near peripersonal space might reveal important additional constraints for the specification of adequate models of haptic spatial performance.
Riva, Giuseppe
2011-03-01
Obesity and eating disorders are usually considered unrelated problems with different causes. However, various studies identify unhealthful weight-control behaviors (fasting, vomiting, or laxative abuse), induced by a negative experience of the body, as the common antecedents of both obesity and eating disorders. But how might negative body image--common to most adolescents, not only to medical patients--be behind the development of obesity and eating disorders? In this paper, I review the "allocentric lock theory" of negative body image as the possible antecedent of both obesity and eating disorders. Evidence from psychology and neuroscience indicates that our bodily experience involves the integration of different sensory inputs within two different reference frames: egocentric (first-person experience) and allocentric (third-person experience). Even though functional relations between these two frames are usually limited, they influence each other during the interaction between long- and short-term memory processes in spatial cognition. If this process is impaired either through exogenous (e.g., stress) or endogenous causes, the egocentric sensory inputs are unable to update the contents of the stored allocentric representation of the body. In other words, these patients are locked in an allocentric (observer view) negative image of their body, which their sensory inputs are no longer able to update even after a demanding diet and a significant weight loss. This article discusses the possible role of virtual reality in addressing this problem within an integrated treatment approach based on the allocentric lock theory. © 2011 Diabetes Technology Society.
Effect of allocentric landmarks on primate gaze behavior in a cue conflict task.
Li, Jirui; Sajad, Amirsaman; Marino, Robert; Yan, Xiaogang; Sun, Saihong; Wang, Hongying; Crawford, J Douglas
2017-05-01
The relative contributions of egocentric versus allocentric cues on goal-directed behavior have been examined for reaches, but not saccades. Here, we used a cue conflict task to assess the effect of allocentric landmarks on gaze behavior. Two head-unrestrained macaques maintained central fixation while a target flashed in one of eight radial directions, set against a continuously present visual landmark (two horizontal/vertical lines spanning the visual field, intersecting at one of four oblique locations 11° from the target). After a 100-ms delay followed by a 100-ms mask, the landmark was displaced by 8° in one of eight radial directions. After a second delay (300-700 ms), the fixation point extinguished, signaling for a saccade toward the remembered target. When the landmark was stable, saccades showed a significant but small (mean 15%) pull toward the landmark intersection, and endpoint variability was significantly reduced. When the landmark was displaced, gaze endpoints shifted significantly, not toward the landmark, but partially (mean 25%) toward a virtual target displaced like the landmark. The landmark had a larger influence when it was closer to initial fixation, and when it shifted away from the target, especially in saccade direction. These findings suggest that internal representations of gaze targets are weighted between egocentric and allocentric cues, and this weighting is further modulated by specific spatial parameters.
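As a worked illustration of the egocentric/allocentric weighting reported above (not the authors' analysis code), the allocentric weight can be read off as the component of the mean gaze-endpoint shift along the landmark shift, divided by the magnitude of the landmark shift. The numbers below are illustrative only.

```python
# Illustrative only: reading off an allocentric weight from a cue-conflict shift.
# If the landmark (and the virtual, landmark-predicted target) shifts by vector d
# and the mean gaze endpoint shifts by vector s, the allocentric weight is the
# component of s along d divided by |d|.
import numpy as np

def allocentric_weight(endpoint_shift, landmark_shift):
    d = np.asarray(landmark_shift, dtype=float)
    s = np.asarray(endpoint_shift, dtype=float)
    return float(np.dot(s, d) / np.dot(d, d))

# An 8 deg landmark shift with a 2 deg endpoint shift in the same direction
# gives a weight of 0.25, i.e. the ~25% shift reported in the abstract.
print(allocentric_weight([2.0, 0.0], [8.0, 0.0]))   # 0.25
```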
Pointing at targets by children with congenital and transient blindness.
Gaunet, Florence; Ittyerah, Miriam; Rossetti, Yves
2007-04-01
The study investigated pointing at memorized targets in reachable space in congenitally blind (CB) and blindfolded sighted (BS) children (6, 8, 10 and 12 years; ten children in each group). The target locations were presented on a sagittal plane by passive positioning of the left index finger. A go signal for matching the target location with the right index finger was provided 0 or 4 s after demonstration. An age effect was found only for absolute distance errors, and the surface area of pointing was smaller for the CB children. Results indicate that early visual experience and age are not predictive factors for pointing in children. The delay was an important factor at all ages and for both groups, indicating that distinct spatial representations (egocentric and allocentric frames of reference, respectively) underlie immediate and delayed pointing. Therefore, the CB children, like the BS children, are able to use both egocentric and allocentric frames of reference.
van Gerven, Dustin J H; Ferguson, Thomas; Skelton, Ronald W
2016-07-01
Stress and stress hormones are known to influence the function of the hippocampus, a brain structure critical for cognitive-map-based, allocentric spatial navigation. The caudate nucleus, a brain structure critical for stimulus-response-based, egocentric navigation, is not as sensitive to stress. Evidence for this comes from rodent studies, which show that acute stress or stress hormones impair allocentric, but not egocentric, navigation. However, there have been few studies investigating the effect of acute stress on human spatial navigation, and the results of these have been equivocal. To date, no study has investigated whether acute stress can shift human navigational strategy selection between allocentric and egocentric navigation. The present study investigated this question by exposing participants to an acute psychological stressor (the Paced Auditory Serial Addition Task, PASAT) before testing navigational strategy selection in the Dual-Strategy Maze, a modified virtual Morris water maze. In the Dual-Strategy Maze, participants can choose to navigate using a constellation of extra-maze cues (allocentrically) or using a single cue proximal to the goal platform (egocentrically). Surprisingly, PASAT stress biased participants to solve the maze allocentrically significantly more, rather than less, often. These findings have implications for understanding the effects of acute stress on cognitive function in general, and the function of the hippocampus in particular. Copyright © 2016 Elsevier Inc. All rights reserved.
Human place and response learning: navigation strategy selection, pupil size and gaze behavior.
de Condappa, Olivier; Wiener, Jan M
2016-01-01
In this study, we examined the cognitive processes and ocular behavior associated with on-going navigation strategy choice using a route learning paradigm that distinguishes between three different wayfinding strategies: an allocentric place strategy, and the egocentric associative cue and beacon response strategies. Participants approached intersections of a known route from a variety of directions, and were asked to indicate the direction in which the original route continued. Their responses in a subset of these test trials allowed the assessment of strategy choice over the course of six experimental blocks. The behavioral data revealed an initial maladaptive bias for a beacon response strategy, with shifts in favor of the optimal configuration place strategy occurring over the course of the experiment. Response time analysis suggests that the configuration strategy relied on spatial transformations applied to a viewpoint-dependent spatial representation, rather than direct access to an allocentric representation. Furthermore, pupillary measures reflected the employment of place and response strategies throughout the experiment, with increasing use of the more cognitively demanding configuration strategy associated with increases in pupil dilation. During test trials in which known intersections were approached from different directions, visual attention was directed to the landmark encoded during learning as well as the intended movement direction. Interestingly, the encoded landmark did not differ between the three navigation strategies, which is discussed in the context of initial strategy choice and the parallel acquisition of place and response knowledge.
Evidence from Visuomotor Adaptation for Two Partially Independent Visuomotor Systems
ERIC Educational Resources Information Center
Thaler, Lore; Todd, James T.
2010-01-01
Visual information can specify spatial layout with respect to the observer (egocentric) or with respect to an external frame of reference (allocentric). People can use both of these types of visual spatial information to guide their hands. The question arises if movements based on egocentric and movements based on allocentric visual information…
Sparse orthogonal population representation of spatial context in the retrosplenial cortex.
Mao, Dun; Kandler, Steffen; McNaughton, Bruce L; Bonin, Vincent
2017-08-15
Sparse orthogonal coding is a key feature of hippocampal neural activity, which is believed to increase episodic memory capacity and to assist in navigation. Some retrosplenial cortex (RSC) neurons convey distributed spatial and navigational signals, but place-field representations such as observed in the hippocampus have not been reported. Combining cellular Ca2+ imaging in RSC of mice with a head-fixed locomotion assay, we identified a population of RSC neurons, located predominantly in superficial layers, whose ensemble activity closely resembles that of hippocampal CA1 place cells during the same task. Like CA1 place cells, these RSC neurons fire in sequences during movement, and show narrowly tuned firing fields that form a sparse, orthogonal code correlated with location. RSC 'place' cell activity is robust to environmental manipulations, showing partial remapping similar to that observed in CA1. This population code for spatial context may assist the RSC in its role in memory and/or navigation. Neurons in the retrosplenial cortex (RSC) encode spatial and navigational signals. Here the authors use calcium imaging to show that, similar to the hippocampus, RSC neurons also encode place cell-like activity in a sparse orthogonal representation, partially anchored to the allocentric cues on the linear track.
Guderian, Sebastian; Dzieciol, Anna M.; Gadian, David G.; Jentschke, Sebastian; Doeller, Christian F.; Burgess, Neil; Mishkin, Mortimer; Vargha-Khadem, Faraneh
2015-01-01
The extent to which navigational spatial memory depends on hippocampal integrity in humans is not well documented. We investigated allocentric spatial recall using a virtual environment in a group of patients with severe hippocampal damage (SHD), a group of patients with “moderate” hippocampal damage (MHD), and a normal control group. Through four learning blocks with feedback, participants learned the target locations of four different objects in a circular arena. Distal cues were present throughout the experiment to provide orientation. A circular boundary as well as an intra-arena landmark provided spatial reference frames. During a subsequent test phase, recall of all four objects was tested with only the boundary or the landmark being present. Patients with SHD were impaired in both phases of this task. Across groups, performance on both types of spatial recall was highly correlated with memory quotient (MQ), but not with intelligence quotient (IQ), age, or sex. However, both measures of spatial recall separated experimental groups beyond what would be expected based on MQ, a widely used measure of general memory function. Boundary-based and landmark-based spatial recall were both strongly related to bilateral hippocampal volumes, but not to volumes of the thalamus, putamen, pallidum, nucleus accumbens, or caudate nucleus. The results show that boundary-based and landmark-based allocentric spatial recall are similarly impaired in patients with SHD, that both types of recall are impaired beyond that predicted by MQ, and that recall deficits are best explained by a reduction in bilateral hippocampal volumes. SIGNIFICANCE STATEMENT In humans, bilateral hippocampal atrophy can lead to profound impairments in episodic memory. Across species, perhaps the most well-established contribution of the hippocampus to memory is not to episodic memory generally but to allocentric spatial memory. However, the extent to which navigational spatial memory depends on hippocampal integrity in humans is not well documented. We investigated spatial recall using a virtual environment in two groups of patients with hippocampal damage (moderate/severe) and a normal control group. The results showed that patients with severe hippocampal damage are impaired in learning and recalling allocentric spatial information. Furthermore, hippocampal volume reduction impaired allocentric navigation beyond what can be predicted by memory quotient as a widely used measure of general memory function. PMID:26490854
Spatial navigation in young versus older adults
Gazova, Ivana; Laczó, Jan; Rubinova, Eva; Mokrisova, Ivana; Hyncicova, Eva; Andel, Ross; Vyhnalek, Martin; Sheardova, Katerina; Coulson, Elizabeth J.; Hort, Jakub
2013-01-01
Older age is associated with changes in the brain, including the medial temporal lobe, which may result in mild spatial navigation deficits, especially in allocentric navigation. The aim of the study was to characterize the profile of real-space allocentric (world-centered, hippocampus-dependent) and egocentric (body-centered, parietal lobe-dependent) navigation and learning in young vs. older adults, and to assess a possible influence of gender. We recruited healthy participants without cognitive deficits on standard neuropsychological testing, white matter lesions, or pronounced hippocampal atrophy: 24 young participants (18–26 years old) and 44 older participants stratified as participants 60–70 years old (n = 24) and participants 71–84 years old (n = 20). All underwent spatial navigation testing in the real-space human analog of the Morris Water Maze, which has the advantage of assessing separately allocentric and egocentric navigation and learning. Of the eight consecutive trials, trials 2–8 were used to reduce bias by a rebound effect (more dramatic changes in performance between trials 1 and 2 relative to subsequent trials). The participants who were 71–84 years old (p < 0.001), but not those 60–70 years old, showed deficits in allocentric navigation compared to the young participants. There were no differences in egocentric navigation. All three groups showed a spatial learning effect (ps ≤ 0.01). There were no gender differences in spatial navigation and learning. Linear regression limited to older participants showed linear (β = 0.30, p = 0.045) and quadratic (β = 0.30, p = 0.046) effects of age on allocentric navigation. There was no effect of age on egocentric navigation. These results demonstrate that navigation deficits in older age may be limited to allocentric navigation, whereas egocentric navigation and learning may remain preserved. This specific pattern of spatial navigation impairment may help differentiate normal aging from prodromal Alzheimer’s disease. PMID:24391585
Rogers, Jake; Churilov, Leonid; Hannan, Anthony J; Renoir, Thibault
2017-03-01
Using a Matlab classification algorithm, we demonstrate that a highly salient distal cue array is required for significantly increased likelihoods of spatial search strategy selection during Morris water maze spatial learning. We hypothesized that increased spatial search strategy selection during spatial learning would be the key measure demonstrating the formation of an allocentric map to the escape location. Spatial memory, as indicated by quadrant preference for the area of the pool formerly containing the hidden platform, was assessed as the main measure that this allocentric map had formed during spatial learning. Our C57BL/6J wild-type (WT) mice exhibit quadrant preference in the highly salient cue paradigm but not the low, corresponding with a 120% increase in the odds of a spatial search strategy selection during learning. In contrast, quadrant preference remains absent in serotonin 1A receptor (5-HT1AR) knockout (KO) mice, which exhibit impaired search strategy selection during spatial learning. Additionally, we also aimed to assess the impact of the quality of the distal cue array on the spatial learning curves of both latency to platform and path length using mixed-effect regression models and found no significant associations or interactions. In contrast, we demonstrated that the spatial learning curve for search strategy selection was absent during training in the low saliency paradigm. Therefore, we propose that allocentric search strategy selection during spatial learning is the learning parameter in mice that robustly indicates the formation of a cognitive map for the escape goal location. These results also suggest that both latency to platform and path length spatial learning curves do not discriminate between allocentric and egocentric spatial learning and do not reliably predict spatial memory formation. We also show that spatial memory, as indicated by the absolute time in the quadrant formerly containing the hidden platform alone (without reference to the other areas of the pool), was not sensitive to cue saliency or impaired in 5-HT1AR KO mice. Importantly, in the absence of a search strategy analysis, this suggests that to establish that the Morris water maze has worked (i.e. control mice have formed an allocentric map to the escape goal location), a measure of quadrant preference needs to be reported to establish spatial memory formation. This has implications for studies that claim hippocampal functioning is impaired using latency to platform or path length differences within the existing Morris water maze literature. Copyright © 2016 Elsevier Inc. All rights reserved.
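The classification algorithm itself is not described in the abstract; as a hedged sketch of how the reported "120% increase in the odds" of spatial-strategy selection could be quantified, one can fit a logistic regression of the classified strategy on cue salience and exponentiate the coefficient (an odds ratio of about 2.2 corresponds to a 120% increase). The data below are simulated for illustration only.

```python
# Simulated illustration (not the authors' Matlab code): estimating the odds
# ratio for spatial-strategy selection as a function of cue salience with a
# logistic regression; exp(beta) ~ 2.2 corresponds to a ~120% increase in odds.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
high_salience = rng.integers(0, 2, 400)                 # 0 = low cue, 1 = high cue
p_spatial = np.where(high_salience == 1, 0.55, 0.36)    # illustrative probabilities
spatial = rng.binomial(1, p_spatial)                    # 1 = spatial search strategy

X = sm.add_constant(high_salience.astype(float))
fit = sm.Logit(spatial, X).fit(disp=0)
print(np.exp(fit.params[1]))                            # odds ratio for high salience
```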
Manzone, Joseph; Heath, Matthew
2018-04-01
Reaching to a veridical target permits an egocentric spatial code (i.e., absolute limb and target position) to effect fast and effective online trajectory corrections supported via the visuomotor networks of the dorsal visual pathway. In contrast, a response entailing decoupled spatial relations between stimulus and response is thought to be primarily mediated via an allocentric code (i.e., the position of a target relative to another external cue) laid down by the visuoperceptual networks of the ventral visual pathway. Because the ventral stream renders a temporally durable percept, it is thought that an allocentric code does not support a primarily online mode of control, but instead supports a mode wherein a response is evoked largely in advance of movement onset via central planning mechanisms (i.e., offline control). Here, we examined whether reaches defined via ego- and allocentric visual coordinates are supported via distinct control modes (i.e., online versus offline). Participants performed target-directed and allocentric reaches in limb visible and limb-occluded conditions. Notably, in the allocentric task, participants reached to a location that matched the position of a target stimulus relative to a reference stimulus, and to examine online trajectory amendments, we computed the proportion of variance explained (i.e., R2 values) by the spatial position of the limb at 75% of movement time relative to a response's ultimate movement endpoint. Target-directed trials performed with limb vision showed more online corrections and greater endpoint precision than their limb-occluded counterparts, which in turn were associated with performance metrics comparable to allocentric trials performed with and without limb vision. Accordingly, we propose that the absence of ego-motion cues (i.e., limb vision) and/or the specification of a response via an allocentric code renders motor output served via the 'slow' visuoperceptual networks of the ventral visual pathway.
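A minimal sketch, assuming one limb-position sample per trial, of the R2 analysis described above: final endpoints are regressed on the limb position observed at 75% of movement time, and the proportion of explained variance indexes how much of the endpoint was already determined at that point (high R2 suggesting largely offline control, low R2 suggesting continued online correction).

```python
# Minimal sketch: proportion of endpoint variance explained by limb position
# at 75% of movement time, one sample per trial.
import numpy as np

def r_squared_at_75(pos_75, endpoints):
    x = np.asarray(pos_75, dtype=float)
    y = np.asarray(endpoints, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    return 1.0 - residuals.var() / y.var()
```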
Lateralization of Egocentric and Allocentric Spatial Processing after Parietal Brain Lesions
ERIC Educational Resources Information Center
Iachini, Tina; Ruggiero, Gennaro; Conson, Massimiliano; Trojano, Luigi
2009-01-01
The purpose of this paper was to verify whether left and right parietal brain lesions may selectively impair egocentric and allocentric processing of spatial information in near/far spaces. Two Right-Brain-Damaged (RBD), 2 Left-Brain-Damaged (LBD) patients (not affected by neglect or language disturbances) and eight normal controls were submitted…
A Unified Mathematical Framework for Coding Time, Space, and Sequences in the Hippocampal Region
MacDonald, Christopher J.; Tiganj, Zoran; Shankar, Karthik H.; Du, Qian; Hasselmo, Michael E.; Eichenbaum, Howard
2014-01-01
The medial temporal lobe (MTL) is believed to support episodic memory, vivid recollection of a specific event situated in a particular place at a particular time. There is ample neurophysiological evidence that the MTL computes location in allocentric space and more recent evidence that the MTL also codes for time. Space and time represent a similar computational challenge; both are variables that cannot be simply calculated from the immediately available sensory information. We introduce a simple mathematical framework that computes functions of both spatial location and time as special cases of a more general computation. In this framework, experience unfolding in time is encoded via a set of leaky integrators. These leaky integrators encode the Laplace transform of their input. The information contained in the transform can be recovered using an approximation to the inverse Laplace transform. In the temporal domain, the resulting representation reconstructs the temporal history. By integrating movements, the equations give rise to a representation of the path taken to arrive at the present location. By modulating the transform with information about allocentric velocity, the equations code for position of a landmark. Simulated cells show a close correspondence to neurons observed in various regions for all three cases. In the temporal domain, novel secondary analyses of hippocampal time cells verified several qualitative predictions of the model. An integrated representation of spatiotemporal context can be computed by taking conjunctions of these elemental inputs, leading to a correspondence with conjunctive neural representations observed in dorsal CA1. PMID:24672015
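For readers unfamiliar with the framework, the following is a compact sketch (assuming the leaky-integrator and inverse-Laplace formulation the abstract appears to describe; the notation is ours, not taken from the paper) of how a history f(t) is encoded and approximately reconstructed:

```latex
% A bank of leaky integrators with decay rates s encodes the Laplace transform
% of the input history; the Post approximation to the inverse transform
% reconstructs the history at a past lag of tau* = k/s behind the present.
\begin{align}
  \frac{\partial F(s,t)}{\partial t} &= -s\,F(s,t) + f(t)
    \quad\Longrightarrow\quad
    F(s,t) = \int_{-\infty}^{t} e^{-s\,(t-t')}\, f(t')\, dt' , \\
  \tilde{f}(\tau^{*},t) &= \frac{(-1)^{k}}{k!}\, s^{k+1}\,
    \frac{\partial^{k} F(s,t)}{\partial s^{k}}
    \quad\text{evaluated at } s = k/\tau^{*} .
\end{align}
```

Replacing the decay term -s F(s,t) with -s v(t) F(s,t), where v(t) is an allocentric velocity signal, turns the same pair of equations into a code for distance from a landmark rather than elapsed time, which is how the framework treats space and time as special cases of one computation.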
Know thyself: behavioral evidence for a structural representation of the human body.
Rusconi, Elena; Gonzaga, Mirandola; Adriani, Michela; Braun, Christoph; Haggard, Patrick
2009-01-01
Representing one's own body is often viewed as a basic form of self-awareness. However, little is known about structural representations of the body in the brain. We developed an inter-manual version of the classical "in-between" finger gnosis task: participants judged whether the number of untouched fingers between two touched fingers was the same on both hands, or different. We thereby dissociated structural knowledge about fingers, specifying their order and relative position within a hand, from tactile sensory codes. Judgments following stimulation on homologous fingers were consistently more accurate than trials with no or partial homology. Further experiments showed that structural representations are more enduring than purely sensory codes, are used even when number of fingers is irrelevant to the task, and moreover involve an allocentric representation of finger order, independent of hand posture. Our results suggest the existence of an allocentric representation of body structure at higher stages of the somatosensory processing pathway, in addition to primary sensory representation.
Development of Allocentric Spatial Memory Abilities in Children from 18 months to 5 Years of Age
ERIC Educational Resources Information Center
Ribordy, Farfalla; Jabes, Adeline; Lavenex, Pamela Banta; Lavenex, Pierre
2013-01-01
Episodic memories for autobiographical events that happen in unique spatiotemporal contexts are central to defining who we are. Yet, before 2 years of age, children are unable to form or store episodic memories for recall later in life, a phenomenon known as infantile amnesia. Here, we studied the development of allocentric spatial memory, a…
Risk factors for spatial memory impairment in patients with temporal lobe epilepsy.
Amlerova, Jana; Laczo, Jan; Vlcek, Kamil; Javurkova, Alena; Andel, Ross; Marusic, Petr
2013-01-01
At present, the risk factors for world-centered (allocentric) navigation impairment in patients with temporal lobe epilepsy (TLE) are not known. There is some evidence on the importance of the right hippocampus but other clinical features have not been investigated yet. In this study, we used an experimental human equivalent to the Morris water maze to examine spatial navigation performance in patients with drug-refractory unilateral TLE. We included 47 left-hemisphere speech dominant patients (25 right sided; 22 left sided). The aim of our study was to identify clinical and demographic characteristics of TLE patients who performed poorly in allocentric spatial memory tests. Our results demonstrate that poor spatial navigation is significantly associated with younger age at epilepsy onset, longer disease duration, and lower intelligence level. Allocentric navigation in TLE patients was impaired irrespective of epilepsy lateralization. Good and poor navigators did not differ in their age, gender, or preoperative/postoperative status. This study provides evidence on risk factors that increase the likelihood of allocentric navigation impairment in TLE patients. The results indicate that not only temporal lobe dysfunction itself but also low general cognitive abilities may contribute to the navigation impairment. Copyright © 2012 Elsevier Inc. All rights reserved.
Hardt, Oliver; Nadel, Lynn
2009-01-01
Cognitive map theory suggested that exploring an environment and attending to a stimulus should lead to its integration into an allocentric environmental representation. We here report that directed attention in the form of exploration serves to gather information needed to determine an optimal spatial strategy, given task demands and characteristics of the environment. Attended environmental features may integrate into spatial representations if they meet the requirements of the optimal spatial strategy: when learning involves a cognitive mapping strategy, cues with high codability (e.g., concrete objects) will be incorporated into a map, but cues with low codability (e.g., abstract paintings) will not. However, instructions encouraging map learning can lead to the incorporation of cues with low codability. On the other hand, if spatial learning is not map-based, abstract cues can and will be used to encode locations. Since exploration appears to determine what strategy to apply and whether or not to encode a cue, recognition memory for environmental features is independent of whether or not a cue is part of a spatial representation. In fact, when abstract cues were used in a way that was not map-based, or when they were not used for spatial navigation at all, they were nevertheless recognized as familiar. Thus, the relation between exploratory activity on the one hand and spatial strategy and memory on the other appears more complex than initially suggested by cognitive map theory.
Negen, James; Roome, Hannah E; Keenaghan, Samantha; Nardini, Marko
2018-06-01
Spatial memory is an important aspect of adaptive behavior and experience, providing both content and context to the perceptions and memories that we form in everyday life. Young children's abilities in this realm shift from mainly egocentric (self-based) to include allocentric (world-based) codings at around 4 years of age. However, information about the cognitive mechanisms underlying acquisition of these new abilities is still lacking. We examined allocentric spatial recall in 4.5- to 8.5-year-olds, looking for continuity with navigation as previously studied in 2- to 4-year-olds and other species. We specifically predicted an advantage for three-dimensional landmarks over two-dimensional ones and for recalling targets "in the middle" versus elsewhere. However, we did not find compelling evidence for either of these effects, and indeed some analyses even support the opposite of each of these conclusions. There were also no significant interactions with age. These findings highlight the incompleteness of our overall theories of the development of spatial cognition in general and allocentric spatial recall in particular. They also suggest that allocentric spatial recall involves processes that have separate behavioral characteristics from other cognitive systems involved in navigation earlier in life and in other species. Copyright © 2018 Elsevier Inc. All rights reserved.
Out of my real body: cognitive neuroscience meets eating disorders
Riva, Giuseppe
2014-01-01
Clinical psychology is starting to explain eating disorders (ED) as the outcome of the interaction among cognitive, socio-emotional and interpersonal elements. In particular two influential models—the revised cognitive-interpersonal maintenance model and the transdiagnostic cognitive behavioral theory—identified possible key predisposing and maintaining factors. These models, even if very influential and able to provide clear suggestions for therapy, still are not able to provide answers to several critical questions: why do not all the individuals with obsessive compulsive features, anxious avoidance or with a dysfunctional scheme for self-evaluation develop an ED? What is the role of the body experience in the etiology of these disorders? In this paper we suggest that the path to a meaningful answer requires the integration of these models with the recent outcomes of cognitive neuroscience. First, our bodily representations are not just a way to map an external space but the main tool we use to generate meaning, organize our experience, and shape our social identity. In particular, we will argue that our bodily experience evolves over time by integrating six different representations of the body characterized by specific pathologies—body schema (phantom limb), spatial body (unilateral hemi-neglect), active body (alien hand syndrome), personal body (autoscopic phenomena), objectified body (xenomelia) and body image (body dysmorphia). Second, these representations include either schematic (allocentric) or perceptual (egocentric) contents that interact within the working memory of the individual through the alignment between the retrieved contents from long-term memory and the ongoing egocentric contents from perception. In this view EDs may be the outcome of an impairment in the ability of updating a negative body representation stored in autobiographical memory (allocentric) with real-time sensorimotor and proprioceptive data (egocentric). PMID:24834042
Vision and the representation of the surroundings in spatial memory
Tatler, Benjamin W.; Land, Michael F.
2011-01-01
One of the paradoxes of vision is that the world as it appears to us and the image on the retina at any moment are not much like each other. The visual world seems to be extensive and continuous across time. However, the manner in which we sample the visual environment is neither extensive nor continuous. How does the brain reconcile these differences? Here, we consider existing evidence from both static and dynamic viewing paradigms together with the logical requirements of any representational scheme that would be able to support active behaviour. While static scene viewing paradigms favour extensive, but perhaps abstracted, memory representations, dynamic settings suggest sparser and task-selective representation. We suggest that in dynamic settings where movement within extended environments is required to complete a task, the combination of visual input, egocentric and allocentric representations work together to allow efficient behaviour. The egocentric model serves as a coding scheme in which actions can be planned, but also offers a potential means of providing the perceptual stability that we experience. PMID:21242146
Heiz, J; Majerus, S; Barisnikov, K
2017-09-28
This study examined the spontaneous use of allocentric and egocentric frames of reference and their flexible use as a function of instructions. The computerized spatial reference task created by Heiz and Barisnikov (2015) was used. Participants had to choose a frame of reference according to three types of instructions: spontaneous, allocentric and egocentric. The performances of 16 Williams Syndrome participants between 10 and 41 years were compared to those of two control groups (chronological age and non-verbal intellectual ability). The majority of Williams Syndrome participants did not show a preference for a particular frame of reference. When explicitly inviting participants to use an allocentric frame of reference, all three groups showed an increased use of the allocentric frame of reference. At the same time, an important heterogeneity of type of frame of reference used by Williams Syndrome participants was observed. Results demonstrate that despite difficulties in the spontaneous use of allocentric and egocentric frames of reference, some Williams Syndrome participants show flexibility in the use of an allocentric frame of reference when an explicit instruction is provided. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Vann, Seralynne D; Aggleton, John P
2002-02-01
Despite the connections of the retrosplenial cortex strongly suggesting a role in spatial memory, the lesion data to date have been equivocal. Whether subjects are impaired after retrosplenial lesions seems to depend on whether the lesions were aspirative or excitotoxic, with the latter failing to produce an impairment. A shortcoming of previous excitotoxic lesion studies is that they spared the most caudal part of the retrosplenial cortex. The present study thus used rats with extensive neurotoxic lesions of the retrosplenial cortex that encompassed the entire rostrocaudal extent of this region. These rats were consistently impaired on several tests that tax allocentric memory. In contrast, they were unimpaired on an egocentric discrimination task. Although the lesions did not appear to affect object recognition, clear deficits were found for an object-in-place discrimination. The present study not only demonstrates a role for the retrosplenial cortex in allocentric spatial memory, but also explains why previous excitotoxic lesions have failed to detect any deficits.
Effects of age on navigation strategy.
Rodgers, M Kirk; Sindone, Joseph A; Moffat, Scott D
2012-01-01
Age differences in navigation strategies have been demonstrated in animals, with aged animals more likely to prefer an egocentric (route) strategy and younger animals more likely to prefer an allocentric (place) strategy. Using a novel virtual Y-maze strategy assessment (vYSA), the present study demonstrated substantial age differences in strategy preference in humans. Older adults overwhelmingly preferred an egocentric strategy, while younger adults were equally distributed between egocentric and allocentric preference. A preference for allocentric strategy on the Y-maze strategy assessment was found to benefit performance on an independent assessment (virtual Morris water task) only in younger adults. These results establish baseline age differences in spatial strategies and suggest this may impact performance on other spatial navigation assessments. The results are interpreted within the framework of age differences in hippocampal structure and function. Copyright © 2012 Elsevier Inc. All rights reserved.
Ganesh, Shanti; van Schie, Hein T; Cross, Emily S; de Lange, Floris P; Wigboldus, Daniël H J
2015-08-01
Mental imagery of one's body moving through space is important for imagining changing visuospatial perspectives, as well as for determining how we might appear to other people. Previous neuroimaging research has implicated the temporoparietal junction (TPJ) in this process. It is unclear, however, how neural activity in the TPJ relates to the rotation perspectives from which mental spatial transformation (MST) of one's own body can take place, i.e. from an egocentric or an allocentric perspective. It is also unclear whether TPJ involvement in MST is self-specific or whether the TPJ may also be involved in MST of other human bodies. The aim of the current study was to disentangle neural processes involved in egocentric versus allocentric MSTs of human bodies representing self and other. We measured functional brain activity of healthy participants while they performed egocentric and allocentric MSTs in relation to whole-body photographs of themselves and a same-sex stranger. Findings indicated higher blood oxygen level-dependent (BOLD) response in bilateral TPJ during egocentric versus allocentric MST. Moreover, BOLD response in the TPJ during egocentric MST correlated positively with self-report scores indicating how awkward participants felt while viewing whole-body photos of themselves. These findings considerably advance our understanding of TPJ involvement in MST and its interplay with self-awareness. Copyright © 2015 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Bullens, Jessie; Igloi, Kinga; Berthoz, Alain; Postma, Albert; Rondi-Reig, Laure
2010-01-01
Navigation in a complex environment can rely on the use of different spatial strategies. We have focused on the employment of "allocentric" (i.e., encoding interrelationships among environmental cues, movements, and the location of the goal) and "sequential egocentric" (i.e., sequences of body turns associated with specific choice points)…
Katus, Tobias; Müller, Matthias M; Eimer, Martin
2015-01-28
To adaptively guide ongoing behavior, representations in working memory (WM) often have to be modified in line with changing task demands. We used event-related potentials (ERPs) to demonstrate that tactile WM representations are stored in modality-specific cortical regions, that the goal-directed modulation of these representations is mediated through hemispheric-specific activation of somatosensory areas, and that the rehearsal of somatotopic coordinates in memory is accomplished by modality-specific spatial attention mechanisms. Participants encoded two tactile sample stimuli presented simultaneously to the left and right hands, before visual retro-cues indicated which of these stimuli had to be retained to be matched with a subsequent test stimulus on the same hand. Retro-cues triggered a sustained tactile contralateral delay activity component with a scalp topography over somatosensory cortex contralateral to the cued hand. Early somatosensory ERP components to task-irrelevant probe stimuli (that were presented after the retro-cues) and to subsequent test stimuli were enhanced when these stimuli appeared at the currently memorized location relative to other locations on the cued hand, demonstrating that a precise focus of spatial attention was established during the selective maintenance of tactile events in WM. These effects were observed regardless of whether participants performed the matching task with uncrossed or crossed hands, indicating that WM representations in this task were based on somatotopic rather than allocentric spatial coordinates. In conclusion, spatial rehearsal in tactile WM operates within somatotopically organized sensory brain areas that have been recruited for information storage. Copyright © 2015 Katus et al.
The Role of Emotional Landmarks on Topographical Memory.
Palmiero, Massimiliano; Piccardi, Laura
2017-01-01
The investigation of the role of emotional landmarks on human navigation has been almost totally neglected in psychological research. Therefore, the extent to which positive and negative emotional landmarks affect topographical memory as compared to neutral emotional landmarks was explored. Positive, negative and neutral affect-laden images were selected as landmarks from the International Affective Picture System (IAPS) Inventory. The Walking Corsi test (WalCT) was used in order to test landmark-based topographical memory. Participants were instructed to learn and retain an eight-square path encompassing positive, negative or neutral emotional landmarks. Both egocentric and allocentric frames of reference were considered. Egocentric representation encompasses the object's relation to the self and it is generated from sensory data. Allocentric representation expresses a location with respect to an external frame regardless of the self and it is the basis for long-term storage of complex layouts. In particular, three measures of egocentric and allocentric topographical memory were taken into account: (1) the ability to learn the path; (2) the ability to recall the path by walking it five minutes later; (3) the ability to reproduce the path on the outline of the WalCT. Results showed that both positive and negative emotional landmarks equally enhanced the learning of the path as compared to neutral emotional landmarks. In addition, positive emotional landmarks improved the reproduction of the path on the map as compared to negative and neutral emotional landmarks. These results generally show that emotional landmarks enhance egocentric-based topographical memory, whereas positive emotional landmarks seem to be more effective for allocentric-based topographical memory.
Circadian time-place (or time-route) learning in rats with hippocampal lesions.
Cole, Emily; Mistlberger, Ralph E; Merza, Devon; Trigiani, Lianne J; Madularu, Dan; Simundic, Amanda; Mumby, Dave G
2016-12-01
Circadian time-place learning (TPL) is the ability to remember both the place and biological time of day that a significant event occurred (e.g., food availability). This ability requires that a circadian clock provide phase information (a time tag) to cognitive systems involved in linking representations of an event with spatial reference memory. To date, it is unclear which neuronal substrates are critical in this process, but one candidate structure is the hippocampus (HPC). The HPC is essential for normal performance on tasks that require allocentric spatial memory and exhibits circadian rhythms of gene expression that are sensitive to meal timing. Using a novel TPL training procedure and enriched, multidimensional environment, we trained rats to locate a food reward that varied between two locations relative to time of day. After rats acquired the task, they received either HPC or SHAM lesions and were re-tested. Rats with HPC lesions were initially impaired on the task relative to SHAM rats, but re-attained high scores with continued testing. Probe tests revealed that the rats were not using an alternation strategy or relying on light-dark transitions to locate the food reward. We hypothesize that transient disruption and recovery reflect a switch from HPC-dependent allocentric navigation (learning places) to dorsal striatum-dependent egocentric spatial navigation (learning routes to a location). Whatever the navigation strategy, these results demonstrate that the HPC is not required for rats to find food in different locations using circadian phase as a discriminative cue. Copyright © 2016 Elsevier Inc. All rights reserved.
Shapero, Joshua A
2017-07-01
Previous studies have shown that language contributes to humans' ability to orient using landmarks and shapes their use of frames of reference (FoRs) for memory. However, the role of environmental experience in shaping spatial cognition has not been investigated. This study addresses such a possibility by examining the use of FoRs in a nonverbal spatial memory task among residents of an Andean community in Peru. Participants consisted of 97 individuals from Ancash Quechua-speaking households (8-77 years of age) who spoke Quechua and/or Spanish and varied considerably with respect to the extent of their experience in the surrounding landscape. The results demonstrated that environmental experience was the only factor significantly related to the preference for allocentric FoRs. The study thus shows that environmental experience can play a role alongside language in shaping habits of spatial representation, and it suggests a new direction of inquiry into the relationships among language, thought, and experience. Copyright © 2016 Cognitive Science Society, Inc.
Spatial transformation abilities and their relation to later mathematics performance.
Frick, Andrea
2018-04-10
Using a longitudinal approach, this study investigated the relational structure of different spatial transformation skills at kindergarten age, and how these spatial skills relate to children's later mathematics performance. Children were tested at three time points, in kindergarten, first grade, and second grade (N = 119). Exploratory factor analyses revealed two subcomponents of spatial transformation skills: one representing egocentric transformations (mental rotation and spatial scaling), and one representing allocentric transformations (e.g., cross-sectioning, perspective taking). Structural equation modeling suggested that egocentric transformation skills showed their strongest relation to the part of the mathematics test tapping arithmetic operations, whereas allocentric transformations were strongly related to Numeric-Logical and Spatial Functions as well as geometry. The present findings point to a tight connection between early mental transformation skills, particularly the ones requiring a high level of spatial flexibility and a strong sense for spatial magnitudes, and children's mathematics performance at the beginning of their school career.
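As an illustration of the factor-analytic step described above (a sketch only, with simulated scores and hypothetical task names, not the authors' analysis), a two-factor solution should separate egocentric from allocentric transformation tasks:

```python
# Simulated illustration (hypothetical scores and task names, scikit-learn >= 0.24):
# a two-factor solution should separate egocentric transformation tasks
# (mental rotation, spatial scaling) from allocentric ones (cross-sectioning,
# perspective taking), mirroring the structure described in the abstract.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
n = 119
ego, allo = rng.normal(size=n), rng.normal(size=n)        # latent abilities (simulated)
scores = np.column_stack([
    ego  + 0.5 * rng.normal(size=n),    # mental rotation
    ego  + 0.5 * rng.normal(size=n),    # spatial scaling
    allo + 0.5 * rng.normal(size=n),    # cross-sectioning
    allo + 0.5 * rng.normal(size=n),    # perspective taking
])

fa = FactorAnalysis(n_components=2, rotation="varimax").fit(scores)
print(np.round(fa.components_, 2))      # loadings cluster into the two task pairs
```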
Assessing Spatial Learning and Memory in Rodents
Vorhees, Charles V.; Williams, Michael T.
2014-01-01
Maneuvering safely through the environment is central to survival of almost all species. The ability to do this depends on learning and remembering locations. This capacity is encoded in the brain by two systems: one using cues outside the organism (distal cues), allocentric navigation, and one using self-movement, internal cues and nearby proximal cues, egocentric navigation. Allocentric navigation involves the hippocampus, entorhinal cortex, and surrounding structures; in humans this system encodes allocentric, semantic, and episodic memory. This form of memory is assessed in laboratory animals in many ways, but the dominant form of assessment is the Morris water maze (MWM). Egocentric navigation involves the dorsal striatum and connected structures; in humans this system encodes routes and integrated paths and, when overlearned, becomes procedural memory. In this article, several allocentric assessment methods for rodents are reviewed and compared with the MWM. MWM advantages (little training required, no food deprivation, ease of testing, rapid and reliable learning, insensitivity to differences in body weight and appetite, absence of nonperformers, control methods for proximal cue learning, and performance effects) and disadvantages (concern about stress, perhaps not as sensitive for working memory) are discussed. Evidence-based design improvements and testing methods are reviewed for both rats and mice. Experimental factors that apply generally to spatial navigation and to MWM specifically are considered. It is concluded that, on balance, the MWM has more advantages than disadvantages and compares favorably with other allocentric navigation tasks. PMID:25225309
Thaler, Lore; Goodale, Melvyn A.
2011-01-01
Neuropsychological evidence suggests that different brain areas may be involved in movements that are directed at visual targets (e.g., pointing or reaching), and movements that are based on allocentric visual information (e.g., drawing or copying). Here we used fMRI to investigate the neural correlates of these two types of movements in healthy volunteers. Subjects (n = 14) performed right hand movements in either a target-directed task (moving a cursor to a target dot) or an allocentric task (moving a cursor to reproduce the distance and direction between two distal target dots) with or without visual feedback about their hand movement. Movements were monitored with an MR compatible touch panel. A whole brain analysis revealed that movements in allocentric conditions led to an increase in activity in the fundus of the left intra-parietal sulcus (IPS), in posterior IPS, in bilateral dorsal premotor cortex (PMd), and in the lateral occipital complex (LOC). Visual feedback in both target-directed and allocentric conditions led to an increase in activity in area MT+, superior parietal–occipital cortex (SPOC), and posterior IPS (all bilateral). In addition, we found that visual feedback affected brain activity differently in target-directed as compared to allocentric conditions, particularly in the pre-supplementary motor area, PMd, IPS, and parieto-occipital cortex. Our results, in combination with previous findings, suggest that the LOC is essential for allocentric visual coding and that SPOC is involved in visual feedback control. The differences in brain activity between target-directed and allocentric visual feedback conditions may be related to behavioral differences in visual feedback control. Our results advance the understanding of the visual coordinate frame used by the LOC. In addition, because of the nature of the allocentric task, our results have relevance for the understanding of neural substrates of magnitude estimation and vector coding of movements. PMID:21941474
A Computational Model of Spatial Development
NASA Astrophysics Data System (ADS)
Hiraki, Kazuo; Sashima, Akio; Phillips, Steven
Psychological experiments on children's development of spatial knowledge suggest that experience with self-locomotion and visual tracking are important factors. Yet, the mechanism underlying development is unknown. We propose a robot that learns to mentally track a target object (i.e., maintaining a representation of an object's position when outside the field-of-view) as a model for spatial development. Mental tracking is considered as prediction of an object's position given the previous environmental state and motor commands, and the current environment state resulting from movement. Following Jordan & Rumelhart's (1992) forward modeling architecture, the system consists of two components: an inverse model of sensory input to desired motor commands; and a forward model of motor commands to desired sensory input (goals). The robot was tested on the 'three cups' paradigm (where children are required to select the cup containing the hidden object under various movement conditions). Consistent with child development, without the capacity for self-locomotion the robot's errors are self-centered. When given the capacity for self-locomotion, the robot responds allocentrically.
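A minimal sketch, under simplifying assumptions, of what the forward model has to compute for mental tracking: given the object's current position in the robot's body-centred frame and a self-motion command, predict where the object will be after the movement, even when it is out of view. The learned network is replaced here by the exact coordinate update it would need to approximate; all names are hypothetical.

```python
# Hypothetical, simplified sketch of the coordinate update a forward model
# must approximate for mental tracking of an unseen object.
import numpy as np

def forward_model(obj_pos, motor_cmd):
    """obj_pos: (x, y) of the object in the current body-centred frame.
    motor_cmd: (dx, dy, dtheta) = self-translation and rotation of the robot."""
    dx, dy, dtheta = motor_cmd
    x, y = obj_pos[0] - dx, obj_pos[1] - dy          # undo the translation
    c, s = np.cos(-dtheta), np.sin(-dtheta)          # counter-rotate the frame
    return np.array([c * x - s * y, s * x + c * y])

# Mental tracking of a hidden object across two movements: step forward, then
# turn left 90 degrees; the object ends up 0.8 units to the robot's right.
pos = np.array([1.0, 0.0])
for cmd in [(0.2, 0.0, 0.0), (0.0, 0.0, np.pi / 2)]:
    pos = forward_model(pos, cmd)
print(pos)   # approximately [0.0, -0.8]
```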
Rubio, S; Begega, A; Méndez, M; Méndez-López, M; Arias, J L
2012-10-25
The involvement of different brain regions in place- and response-learning was examined using a water cross-maze. Rats were trained to find the goal from the initial arm by turning left at the choice point (egocentric strategy) or by using environmental cues (allocentric strategy). Although different strategies were required, the same maze and learning conditions were used. Using cytochrome oxidase histochemistry as a marker of cellular activity, the function of the 13 diverse cortical and subcortical regions was assessed in rats performing these two tasks. Our results show that allocentric learning depends on the recruitment of a large functional network, which includes the hippocampal CA3, dentate gyrus, medial mammillary nucleus and supramammillary nucleus. Along with the striatum, these last three structures are also related to egocentric spatial learning. The present study provides evidence for the contribution of these regions to spatial navigation and supports a possible functional interaction between the two memory systems, as their structural convergence may facilitate functional cooperation in the behaviours guided by more than one strategy. In summary, it can be argued that spatial learning is based on dynamic functional systems in which the interaction of brain regions is modulated by task requirements. Copyright © 2012 IBRO. Published by Elsevier Ltd. All rights reserved.
Possin, Katherine L; Kim, Hosung; Geschwind, Michael D; Moskowitz, Tacie; Johnson, Erica T; Sha, Sharon J; Apple, Alexandra; Xu, Duan; Miller, Bruce L; Finkbeiner, Steven; Hess, Christopher P; Kramer, Joel H
2017-07-01
Our brains represent spatial information in egocentric (self-based) or allocentric (landmark-based) coordinates. Rodent studies have demonstrated a critical role for the caudate in egocentric navigation and the hippocampus in allocentric navigation. We administered tests of egocentric and allocentric working memory to individuals with premotor Huntington's disease (pmHD), which is associated with early caudate nucleus atrophy, and controls. Each test had 80 trials during which subjects were asked to remember 2 locations over 1-sec delays. The only difference between these otherwise identical tests was that locations could only be coded in self-based or landmark-based coordinates. We applied a multiatlas-based segmentation algorithm and computed point-wise Jacobian determinants to measure regional variations in caudate and hippocampal volumes from 3T MRI. As predicted, the pmHD patients were significantly more impaired on egocentric working memory. Only egocentric accuracy correlated with caudate volumes, specifically the dorsolateral caudate head, right more than left, a region that receives dense efferents from dorsolateral prefrontal cortex. In contrast, only allocentric accuracy correlated with hippocampal volumes, specifically intermediate and posterior regions that connect strongly with parahippocampal and posterior parietal cortices. These results indicate that the distinction between egocentric and allocentric navigation applies to working memory. The dorsolateral caudate is important for egocentric working memory, which can explain the disproportionate impairment in pmHD. Allocentric working memory, in contrast, relies on the hippocampus and is relatively spared in pmHD. Copyright © 2017 Elsevier Ltd. All rights reserved.
Kerbler, Georg M.; Nedelska, Zuzana; Fripp, Jurgen; Laczó, Jan; Vyhnalek, Martin; Lisý, Jiří; Hamlin, Adam S.; Rose, Stephen; Hort, Jakub; Coulson, Elizabeth J.
2015-01-01
The basal forebrain degenerates in Alzheimer’s disease (AD) and this process is believed to contribute to the cognitive decline observed in AD patients. Impairment in spatial navigation is an early feature of the disease but whether basal forebrain dysfunction in AD is responsible for the impaired navigation skills of AD patients is not known. Our objective was to investigate the relationship between basal forebrain volume and performance in real space as well as computer-based navigation paradigms in an elderly cohort comprising cognitively normal controls, subjects with amnestic mild cognitive impairment and those with AD. We also tested whether basal forebrain volume could predict the participants’ ability to perform allocentric- vs. egocentric-based navigation tasks. The basal forebrain volume was calculated from 1.5 T magnetic resonance imaging (MRI) scans, and navigation skills were assessed using the human analog of the Morris water maze employing allocentric, egocentric, and mixed allo/egocentric real space as well as computerized tests. When considering the entire sample, we found that basal forebrain volume correlated with spatial accuracy in allocentric (cued) and mixed allo/egocentric navigation tasks but not the egocentric (uncued) task, demonstrating an important role of the basal forebrain in mediating cue-based spatial navigation capacity. Regression analysis revealed that, although hippocampal volume reflected navigation performance across the entire sample, basal forebrain volume contributed to mixed allo/egocentric navigation performance in the AD group, whereas hippocampal volume did not. This suggests that atrophy of the basal forebrain contributes to aspects of navigation impairment in AD that are independent of hippocampal atrophy. PMID:26441643
Reference frames in allocentric representations are invariant across static and active encoding
Chan, Edgar; Baumann, Oliver; Bellgrove, Mark A.; Mattingley, Jason B.
2013-01-01
An influential model of spatial memory—the so-called reference systems account—proposes that relationships between objects are biased by salient axes (“frames of reference”) provided by environmental cues, such as the geometry of a room. In this study, we sought to examine the extent to which a salient environmental feature influences the formation of spatial memories when learning occurs via a single, static viewpoint and via active navigation, where information has to be integrated across multiple viewpoints. In our study, participants learned the spatial layout of an object array that was arranged with respect to a prominent environmental feature within a virtual arena. Location memory was tested using judgments of relative direction. Experiment 1A employed a design similar to previous studies whereby learning of object-location information occurred from a single, static viewpoint. Consistent with previous studies, spatial judgments were significantly more accurate when made from an orientation that was aligned, as opposed to misaligned, with the salient environmental feature. In Experiment 1B, a fresh group of participants learned the same object-location information through active exploration, which required integration of spatial information over time from a ground-level perspective. As in Experiment 1A, object-location information was organized around the salient environmental cue. Taken together, the findings suggest that the learning condition (static vs. active) does not affect the reference system employed to encode object-location information. Spatial reference systems appear to be a ubiquitous property of spatial representations, and might serve to reduce the cognitive demands of spatial processing. PMID:24009595
Klein, Brennan J; Li, Zhi; Durgin, Frank H
2016-04-01
What is the natural reference frame for seeing large-scale spatial scenes in locomotor action space? Prior studies indicate an asymmetric angular expansion in perceived direction in large-scale environments: Angular elevation relative to the horizon is perceptually exaggerated by a factor of 1.5, whereas azimuthal direction is exaggerated by a factor of about 1.25. Here participants made angular and spatial judgments when upright or on their sides to dissociate egocentric from allocentric reference frames. In Experiment 1, it was found that body orientation did not affect the magnitude of the up-down exaggeration of direction, suggesting that the relevant orientation reference frame for this directional bias is allocentric rather than egocentric. In Experiment 2, the comparison of large-scale horizontal and vertical extents was somewhat affected by viewer orientation, but only to the extent necessitated by the classic (5%) horizontal-vertical illusion (HVI) that is known to be retinotopic. Large-scale vertical extents continued to appear much larger than horizontal ground extents when observers lay sideways. When the visual world was reoriented in Experiment 3, the bias remained tied to the ground-based allocentric reference frame. The allocentric HVI is quantitatively consistent with differential angular exaggerations previously measured for elevation and azimuth in locomotor space. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Proulx, Michael J.; Todorov, Orlin S.; Taylor Aiken, Amanda; de Sousa, Alexandra A.
2016-01-01
Knowing who we are, and where we are, are two fundamental aspects of our physical and mental experience. Although the domains of spatial and social cognition are often studied independently, a few recent areas of scholarship have explored the interactions of place and self. This fits in with increasing evidence for embodied theories of cognition, where mental processes are grounded in action and perception. Who we are might be integrated with where we are, and impact how we move through space. Individuals vary in personality, navigational strategies, and numerous cognitive and social competencies. Here we review the relation between social and spatial spheres of existence in the realms of philosophical considerations, neural and psychological representations, and evolutionary context, and how we might use the built environment to suit who we are, or how it creates who we are. In particular we investigate how two spatial reference frames, egocentric and allocentric, might transcend into the social realm. We then speculate on how environments may interact with spatial cognition. Finally, we suggest how a framework encompassing spatial and social cognition might be taken in consideration by architects and urban planners. PMID:26903893
Wayfinding and Glaucoma: A Virtual Reality Experiment.
Daga, Fábio B; Macagno, Eduardo; Stevenson, Cory; Elhosseiny, Ahmed; Diniz-Filho, Alberto; Boer, Erwin R; Schulze, Jürgen; Medeiros, Felipe A
2017-07-01
Wayfinding, the process of determining and following a route between an origin and a destination, is an integral part of everyday tasks. The purpose of this study was to investigate the impact of glaucomatous visual field loss on wayfinding behavior using an immersive virtual reality (VR) environment. This cross-sectional study included 31 glaucomatous patients and 20 healthy subjects without evidence of overall cognitive impairment. Wayfinding experiments were modeled after the Morris water maze navigation task and conducted in an immersive VR environment. Two rooms were built, varying only in the complexity of the visual scene, in order to promote allocentric-based (room A, with multiple visual cues) versus egocentric-based (room B, with a single visual cue) spatial representations of the environment. Wayfinding tasks in each room consisted of revisiting previously visible targets that subsequently became invisible. For room A, glaucoma patients took an average of 35.0 seconds to complete the wayfinding task, whereas healthy subjects took an average of 24.4 seconds (P = 0.001). For room B, no statistically significant difference was seen in average time to complete the task (26.2 seconds versus 23.4 seconds, respectively; P = 0.514). For room A, each 1-dB worsening in binocular mean sensitivity was associated with a 3.4% (P = 0.001) increase in time to complete the task. Glaucoma patients performed significantly worse on allocentric-based wayfinding tasks conducted in a VR environment, suggesting that visual field loss may affect the construction of spatial cognitive maps relevant to successful wayfinding. VR environments may represent a useful approach for assessing functional vision endpoints for clinical trials of emerging therapies in ophthalmology.
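As a purely illustrative piece of arithmetic (not the study's fitted regression), the reported association can be read as a multiplicative relation between visual field loss and room A task time; the function name and the compounding assumption below are hypothetical.

```python
# Illustrative only: treats the reported 3.4%-per-dB association as compounding
# multiplicatively with the size of the binocular sensitivity deficit.
def expected_task_time(baseline_seconds, db_loss, pct_per_db=0.034):
    return baseline_seconds * (1.0 + pct_per_db) ** db_loss

# With a hypothetical 24.4 s baseline (the healthy-group mean) and a 10-dB
# deficit, the predicted room A time is roughly 24.4 * 1.034**10, i.e. about 34 s.
print(round(expected_task_time(24.4, 10), 1))
```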
Castro, Cibele Canal; Dos Reis-Lunardelli, Eleonora Araujo; Schmidt, Werner J; Coitinho, Adriana Simon; Izquierdo, Iván
2007-11-01
Many studies indicate a dissociation between two forms of orientation: allocentric orientation, in which an organism orients on the basis of cues external to itself, and egocentric spatial orientation (ESO), in which an organism orients on the basis of proprioceptive information. While allocentric orientation is mediated primarily by the hippocampus and its afferent and efferent connections, ESO is mediated by the prefronto-striatal system. Striatal lesions, as well as classical neuroleptics that block dopamine receptors, act through the prefronto-striatal system and impair ESO. The purpose of the present study was to determine the effects of the atypical antipsychotics clozapine, olanzapine and risperidone, which are believed to exert their antipsychotic effects mainly through dopaminergic, cholinergic and serotonergic mechanisms. A delayed two-alternative choice task was used under conditions that required ESO while excluding allocentric spatial orientation. Compared with controls, clozapine- and olanzapine-treated rats made more errors in the delayed alternation than risperidone-treated rats. Motor abilities were not impaired by any of the drugs. Thus, with regard to delayed alternation requiring ESO, clozapine and olanzapine, but not risperidone, affect the prefronto-striatal system in a similar way to classical neuroleptics.
Kudoh, Nobuo
2005-01-01
Walking without vision to previously viewed targets was compared with visual perception of allocentric distance in two experiments. Experimental evidence had shown that physically equal distances in a sagittal plane on the ground were perceptually underestimated as compared with those in a frontoparallel plane, even under full-cue conditions. In spite of this perceptual anisotropy of space, Loomis et al (1992 Journal of Experimental Psychology. Human Perception and Performance 18 906-921) found that subjects could match both types of distances in a blind-walking task. In experiment 1 of the present study, subjects were required to reproduce the extent of allocentric distance between two targets by either walking towards the targets, or by walking in a direction incompatible with the locations of the targets. The latter condition required subjects to derive an accurate allocentric distance from information based on the perceived locations of the two targets. The walked distance in the two conditions was almost identical whether the two targets were presented in depth (depth-presentation condition) or in the frontoparallel plane (width-presentation condition). The results of a perceptual-matching task showed that the depth distances had to be much greater than the width distances in order to be judged to be equal in length (depth compression). In experiment 2, subjects were required to reproduce the extent of allocentric distance from the viewing point by blindly walking in a direction other than toward the targets. The walked distance in the depth-presentation condition was shorter than that in the width-presentation condition. This anisotropy in motor responses, however, was mainly caused by apparent overestimation of length oriented in width, not by depth compression. In addition, the walked distances were much better scaled than those in experiment 1. These results suggest that the perceptual and motor systems share a common representation of the location of targets, whereas a dissociation in allocentric distance exists between the two systems in full-cue conditions.
Locations of serial reach targets are coded in multiple reference frames.
Thompson, Aidan A; Henriques, Denise Y P
2010-12-01
Previous work from our lab, and elsewhere, has demonstrated that remembered target locations are stored and updated in an eye-fixed reference frame. That is, reach errors systematically vary as a function of gaze direction relative to a remembered target location, not only when the target is viewed in the periphery (Bock, 1986, known as the retinal magnification effect), but also when the target has been foveated, and the eyes subsequently move after the target has disappeared but prior to reaching (e.g., Henriques, Klier, Smith, Lowy, & Crawford, 1998; Sorrento & Henriques, 2008; Thompson & Henriques, 2008). These gaze-dependent errors, following intervening eye movements, cannot be explained by representations whose frame is fixed to the head, body or even the world. However, it is unknown whether targets presented sequentially would all be coded relative to gaze (i.e., egocentrically/absolutely), or if they would be coded relative to the previous target (i.e., allocentrically/relatively). It might be expected that the reaching movements to two targets separated by 5° would differ by that distance. But, if gaze were to shift between the first and second reaches, would the movement amplitude between the targets differ? If the target locations are coded allocentrically (i.e., the location of the second target coded relative to the first) then the movement amplitude should be about 5°. But, if the second target is coded egocentrically (i.e., relative to current gaze direction), then the reaches to this target and the distances between the subsequent movements should vary systematically with gaze as described above. We found that requiring an intervening saccade to the opposite side of 2 briefly presented targets between reaches to them resulted in a pattern of reaching error that systematically varied as a function of the distance between current gaze and target, and led to a systematic change in the distance between the sequential reach endpoints as predicted by an egocentric frame anchored to the eye. However, the amount of change in this distance was smaller than predicted by a pure eye-fixed representation, suggesting that relative positions of the targets or allocentric coding was also used in sequential reach planning. The spatial coding and updating of sequential reach target locations seems to rely on a combined weighting of multiple reference frames, with one of them centered on the eye. Copyright © 2010 Elsevier Ltd. All rights reserved.
Beauchet, Olivier; Launay, Cyrille P; Sekhon, Harmehr; Gautier, Jennifer; Chabot, Julia; Levinoff, Elise J; Allali, Gilles
2018-01-01
Assessment of changes in higher-level gait control with aging is important for understanding age-related gait instability and for improving the screening of individuals at risk of falls. Comparing the actual Timed Up and Go test (aTUG) with its imagined version (iTUG) is a simple clinical way to assess age-related changes in gait control. How body position and motor imagery (MI) strategy modulate iTUG performance in normal aging has not yet been evaluated. This study aims 1) to compare the aTUG time with the iTUG time under different body positions (i.e., sitting, standing or supine) in healthy young and middle-aged adults and in older adults, and 2) to examine the associations of body position and MI strategy (i.e., egocentric versus allocentric) with the time needed to complete the iTUG and with the TUG delta time (i.e., the relative difference between aTUG and iTUG), while taking into account participants' clinical characteristics. A total of 60 healthy individuals (30 young and middle-aged participants, 26.6±7.4 years, and 30 older participants, 75.0±4.4 years) were recruited in this cross-sectional study. The iTUG was performed while sitting, standing and supine. The outcomes were the aTUG time, the iTUG time under the three body positions, the TUG delta time, and the MI strategy (i.e., egocentric representation, defined as representing the location of objects in space relative to the body axes of the self, versus allocentric representation, defined as encoding body movement with respect to other objects, the location of the body being defined relative to the location of those objects). Age, sex, height, weight, number of drugs taken daily, level of physical activity and the prevalence of closed eyes while performing the iTUG were recorded. The aTUG time was significantly greater than the iTUG time while sitting and standing (P<0.001), except when older participants were standing. A significant difference was found between the iTUG while sitting or standing and the iTUG while supine (P≤0.002), with longer times in the supine position. Multiple linear regressions confirmed that the supine position was associated with a significantly increased iTUG time (P≤0.04) and a decreased TUG delta time (P≤0.010), regardless of the adjustment. Older participants used allocentric MI while imagining the TUG more frequently than young and middle-aged participants, regardless of body position (P≤0.001). The allocentric MI strategy was associated with a significant decrease in iTUG time (P = 0.037) only when adjusting for age. A significant increase in iTUG time was associated with age (P≤0.026). Imagining the TUG while supine more closely reproduces the conditions of actual TUG performance. Age has a limited effect on iTUG performance but is associated with a shift in MI from egocentric to allocentric representation that decreases iTUG performance and thus increases the discrepancy with the aTUG.
Differential effects of non-informative vision and visual interference on haptic spatial processing
van Rheede, Joram J.; Postma, Albert; Kappers, Astrid M. L.
2008-01-01
The primary purpose of this study was to examine the effects of non-informative vision and visual interference upon haptic spatial processing, which supposedly derives from an interaction between an allocentric and egocentric reference frame. To this end, a haptic parallelity task served as baseline to determine the participant-dependent biasing influence of the egocentric reference frame. As expected, large systematic participant-dependent deviations from veridicality were observed. In the second experiment we probed the effect of non-informative vision on the egocentric bias. Moreover, orienting mechanisms (gazing directions) were studied with respect to the presentation of haptic information in a specific hemispace. Non-informative vision proved to have a beneficial effect on haptic spatial processing. No effect of gazing direction or hemispace was observed. In the third experiment we investigated the effect of simultaneously presented interfering visual information on the haptic bias. Interfering visual information parametrically influenced haptic performance. The interplay of reference frames that subserves haptic spatial processing was found to be related to both the effects of non-informative vision and visual interference. These results suggest that spatial representations are influenced by direct cross-modal interactions; inter-participant differences in the haptic modality resulted in differential effects of the visual modality. PMID:18553074
The oblique effect is both allocentric and egocentric
Mikellidou, Kyriaki; Cicchini, Guido Marco; Thompson, Peter G.; Burr, David C.
2016-01-01
Despite continuous movements of the head, humans maintain a stable representation of the visual world, which seems to remain always upright. The mechanisms behind this stability are largely unknown. To gain some insight on how head tilt affects visual perception, we investigate whether a well-known orientation-dependent visual phenomenon, the oblique effect—superior performance for stimuli at cardinal orientations (0° and 90°) compared with oblique orientations (45°)—is anchored in egocentric or allocentric coordinates. To this aim, we measured orientation discrimination thresholds at various orientations for different head positions both in body upright and in supine positions. We report that, in the body upright position, the oblique effect remains anchored in allocentric coordinates irrespective of head position. When lying supine, gravitational effects in the plane orthogonal to gravity are discounted. Under these conditions, the oblique effect was less marked than when upright, and anchored in egocentric coordinates. The results are well explained by a simple “compulsory fusion” model in which the head-based and the gravity-based signals are combined with different weightings (30% and 70%, respectively), even when this leads to reduced sensitivity in orientation discrimination. PMID:26129862
Sex Differences in a Human Analogue of the Radial Arm Maze: The ''17-Box Maze Test''
ERIC Educational Resources Information Center
Rahman, Q.; Abrahams, S.; Jussab, F.
2005-01-01
This study investigated sex differences in spatial memory using a human analogue of the Radial Arm Maze: a revision of the Nine Box Maze originally developed by Abrahams, Pickering, Polkey, and Morris (1997), called the 17-Box Maze Test herein. The task encourages allocentric spatial processing, dissociates object from spatial memory, and…
ERIC Educational Resources Information Center
De Leonibus, Elvira; Oliverio, Alberto; Mele, Andrea
2005-01-01
There is now accumulating evidence that the striatal complex in its two major components, the dorsal striatum and the nucleus accumbens, contributes to spatial memory. However, the possibility that different striatal subregions might modulate specific aspects of spatial navigation has not been completely elucidated. Therefore, in this study, two…
Allocentric versus Egocentric Spatial Memory in Adults with Autism Spectrum Disorder
ERIC Educational Resources Information Center
Ring, Melanie; Gaigg, Sebastian B.; Altgassen, Mareike; Barr, Peter; Bowler, Dermot M.
2018-01-01
Individuals with autism spectrum disorder (ASD) present difficulties in forming relations among items and context. This capacity for relational binding is also involved in spatial navigation and research on this topic in ASD is scarce and inconclusive. Using a computerised version of the Morris Water Maze task, ASD participants showed particular…
Fluoxetine Restores Spatial Learning but Not Accelerated Forgetting in Mesial Temporal Lobe Epilepsy
ERIC Educational Resources Information Center
Barkas, Lisa; Redhead, Edward; Taylor, Matthew; Shtaya, Anan; Hamilton, Derek A.; Gray, William P.
2012-01-01
Learning and memory dysfunction is the most common neuropsychological effect of mesial temporal lobe epilepsy, and because the underlying neurobiology is poorly understood, there are no pharmacological strategies to help restore memory function in these patients. We have demonstrated impairments in the acquisition of an allocentric spatial task,…
ERIC Educational Resources Information Center
Belmonti, Vittorio; Cioni, Giovanni; Berthoz, Alain
2015-01-01
Navigational and reaching spaces are known to involve different cognitive strategies and brain networks, whose development in humans is still debated. In fact, high-level spatial processing, including allocentric location encoding, is already available to very young children, but navigational strategies are not mature until late childhood. The…
Age-related similarities and differences in monitoring spatial cognition.
Ariel, Robert; Moffat, Scott D
2018-05-01
Spatial cognitive performance is impaired in later adulthood, but it is unclear whether the metacognitive processes involved in monitoring spatial cognitive performance are also compromised. Inaccurate monitoring could affect whether people choose to engage in tasks that require spatial thinking and also the strategies they use in spatial domains such as navigation. The current experiment examined potential age differences in monitoring spatial cognitive performance in a variety of spatial domains including visual-spatial working memory, spatial orientation, spatial visualization, navigation, and place learning. Younger and older adults completed a 2D mental rotation test, 3D mental rotation test, paper folding test, spatial memory span test, two virtual navigation tasks, and a cognitive mapping test. Participants also made metacognitive judgments of performance (confidence judgments, judgments of learning, or navigation time estimates) on each trial for all spatial tasks. Preference for allocentric or egocentric navigation strategies was also measured. Overall, performance was poorer and confidence in performance was lower for older adults than younger adults. In most spatial domains, the absolute and relative accuracy of metacognitive judgments was equivalent for both age groups. However, age differences in monitoring accuracy (specifically relative accuracy) emerged in spatial tasks involving navigation. Confidence in navigating to a target location also mediated age differences in allocentric navigation strategy use. These findings suggest that, with the possible exception of navigation monitoring, the monitoring of spatial cognition may be spared from age-related decline even though spatial cognition itself is impaired in older age.
Visuospatial memory computations during whole-body rotations in roll.
Van Pelt, S; Van Gisbergen, J A M; Medendorp, W P
2005-08-01
We used a memory-saccade task to test whether the location of a target, briefly presented before a whole-body rotation in roll, is stored in egocentric or in allocentric coordinates. To make this distinction, we exploited the fact that subjects, when tilted sideways in darkness, make systematic errors when indicating the direction of gravity (an allocentric task) even though they have a veridical percept of their self-orientation in space. We hypothesized that if spatial memory is coded allocentrically, these distortions affect the coding of remembered targets and their readout after a body rotation. Alternatively, if coding is egocentric, updating for body rotation becomes essential and errors in performance should be related to the amount of intervening rotation. Subjects (n = 6) were tested making saccades to remembered world-fixed targets after passive body tilts. Initial and final tilt angle ranged between 120 degrees CCW and 120 degrees CW. The results showed that subjects made large systematic directional errors in their saccades (up to 90 degrees). These errors did not occur in the absence of intervening body rotation, ruling out a memory degradation effect. Regression analysis showed that the errors were closely related to the amount of subjective allocentric distortion at both the initial and final tilt angle, rather than to the amount of intervening rotation. We conclude that the brain uses an allocentric reference frame, possibly gravity-based, to code visuospatial memories during whole-body tilts. This supports the notion that the brain can define information in multiple frames of reference, depending on sensory inputs and task demands.
Different strategies for spatial updating in yaw and pitch path integration
Goeke, Caspar M.; König, Peter; Gramann, Klaus
2013-01-01
Research in spatial navigation has revealed the existence of discrete strategies defined by the use of distinct reference frames during virtual path integration. The present study investigated the distribution of these navigation strategies as a function of gender, video gaming experience, and self-estimates of spatial navigation abilities in a population of 300 subjects. Participants watched videos of virtual passages through a star-field with one turn in either the horizontal (yaw) or the vertical (pitch) axis. At the end of a passage they selected one out of four homing arrows to indicate the initial starting location. To solve the task, participants could employ two discrete strategies, navigating within either an egocentric or an allocentric reference frame. The majority of valid subjects (232/260) consistently used the same strategy in more than 75% of all trials. With this approach, 33.1% of all participants were classified as Turners (using an egocentric reference frame on both axes) and 46.5% as Non-turners (using an allocentric reference frame on both axes). 9.2% of all participants consistently used an egocentric reference frame in the yaw plane but an allocentric reference frame in the pitch plane (Switchers). Investigating the influence of gender on navigation strategies revealed that females predominantly used the Non-turner strategy, while males used both the Turner and the Non-turner strategy with comparable probabilities. Contrary to expectation, video gaming experience did not influence strategy use. With a sample size about an order of magnitude larger than in typical psychophysical studies, these results demonstrate that most people reliably use one of three possible navigation strategies (Turners, Non-turners, Switchers) for spatial updating, and they provide a sound estimate of how those strategies are distributed within the general population. PMID:23412683
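A minimal sketch of how the strategy groups described above might be assigned, assuming a per-axis version of the >75% consistency rule; the function, the thresholding details, and the handling of unclassified cases are assumptions of this sketch rather than the authors' procedure.

```python
# Hypothetical classification rule: a participant's reference frame on each axis
# is labelled only if it was used in more than 75% of that axis's trials.
def classify(yaw_egocentric_frac, pitch_egocentric_frac, threshold=0.75):
    def axis_label(frac):
        if frac > threshold:
            return "egocentric"
        if (1.0 - frac) > threshold:
            return "allocentric"
        return "inconsistent"

    yaw = axis_label(yaw_egocentric_frac)
    pitch = axis_label(pitch_egocentric_frac)
    if "inconsistent" in (yaw, pitch):
        return "unclassified"
    if yaw == "egocentric" and pitch == "egocentric":
        return "Turner"
    if yaw == "allocentric" and pitch == "allocentric":
        return "Non-turner"
    if yaw == "egocentric" and pitch == "allocentric":
        return "Switcher"
    return "unclassified"  # allocentric yaw / egocentric pitch was not a reported group

print(classify(0.92, 0.08))  # -> "Switcher"
```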
Burger, Tomáš; Lucová, Marcela; Moritz, Regina E.; Oelschläger, Helmut H. A.; Druga, Rastislav; Burda, Hynek; Wiltschko, Wolfgang; Wiltschko, Roswitha; Němec, Pavel
2010-01-01
The neural substrate subserving magnetoreception and magnetic orientation in mammals is largely unknown. Previous experiments have demonstrated that the processing of magnetic sensory information takes place in the superior colliculus. Here, the effects of magnetic field conditions on neuronal activity in the rodent navigation circuit were assessed by quantifying c-Fos expression. Ansell's mole-rats (Fukomys anselli), a mammalian model to study the mechanisms of magnetic compass orientation, were subjected to natural, periodically changing, and shielded magnetic fields while exploring an unfamiliar circular arena. In the undisturbed local geomagnetic field, the exploration of the novel environment and/or nesting behaviour induced c-Fos expression throughout the head direction system and the entorhinal–hippocampal spatial representation system. This induction was significantly suppressed by exposure to periodically changing and/or shielded magnetic fields; discrete decreases in c-Fos were seen in the dorsal tegmental nucleus, the anterodorsal and the laterodorsal thalamic nuclei, the postsubiculum, the retrosplenial and entorhinal cortices, and the hippocampus. Moreover, in inactive animals, magnetic field intensity manipulation suppressed c-Fos expression in the CA1 and CA3 fields of the hippocampus and the dorsal subiculum, but induced expression in the polymorph layer of the dentate gyrus. These findings suggest that key constituents of the rodent navigation circuit contain populations of neurons responsive to magnetic stimuli. Thus, magnetic information may be integrated with multimodal sensory and motor information into a common spatial representation of allocentric space within this circuit. PMID:20219838
Retrograde and anterograde memory following selective damage to the dorsolateral entorhinal cortex.
Gervais, Nicole J; Barrett-Bernstein, Meagan; Sutherland, Robert J; Mumby, Dave G
2014-12-01
Anatomical and electrophysiological evidence suggest the dorsolateral entorhinal cortex (DLEC) is involved in processing spatial information, but there is currently no consensus on whether its functions are necessary for normal spatial learning and memory. The present study examined the effects of excitotoxic lesions of the DLEC on retrograde and anterograde memory on two tests of allocentric spatial learning: a hidden fixed-platform watermaze task, and a novelty-preference-based dry-maze test. Deficits were observed on both tests when training occurred prior to but not following N-methyl-D-aspartate (NMDA) lesions of DLEC, suggesting retrograde memory impairment in the absence of anterograde impairments for the same information. The retrograde memory impairments were temporally-graded; rats that received DLEC lesions 1-3 days following training displayed deficits, while those that received lesions 7-10 days following training performed like a control group that received sham surgery. The deficits were not attenuated by co-infusion of tetrodotoxin, suggesting they are not due to disruption of neural processing in structures efferent to the DLEC, such as the hippocampus. The present findings provide evidence that the DLEC is involved in the consolidation of allocentric spatial information. Copyright © 2014 Elsevier Inc. All rights reserved.
The Alliance Hypothesis for Human Friendship
DeScioli, Peter; Kurzban, Robert
2009-01-01
Background: Exploration of the cognitive systems underlying human friendship will be advanced by identifying the evolved functions these systems perform. Here we propose that human friendship is caused, in part, by cognitive mechanisms designed to assemble support groups for potential conflicts. We use game theory to identify computations about friends that can increase performance in multi-agent conflicts. This analysis suggests that people would benefit from: 1) ranking friends, 2) hiding friend-ranking, and 3) ranking friends according to their own position in partners' rankings. These possible tactics motivate the hypotheses that people possess egocentric and allocentric representations of the social world, that people are motivated to conceal this information, and that egocentric friend-ranking is determined by allocentric representations of partners' friend-rankings (more than others' traits). Methodology/Principal Findings: We report results from three studies that confirm predictions derived from the alliance hypothesis. Our main empirical finding, replicated in three studies, was that people's rankings of their ten closest friends were predicted by their own perceived rank among their partners' other friends. This relationship remained strong after controlling for a variety of factors such as perceived similarity, familiarity, and benefits. Conclusions/Significance: Our results suggest that the alliance hypothesis merits further attention as a candidate explanation for human friendship. PMID:19492066
Development of the Hippocampal Cognitive Map in Pre-weanling Rats
Wills, Tom; Cacucci, Francesca; Burgess, Neil; O’Keefe, John
2011-01-01
Orienting in large-scale space depends on the interaction of environmental experience and pre-configured, possibly innate, constructs. Place, head-direction and grid cells in the hippocampal formation provide allocentric representations of space. Here we show how these cognitive representations emerge and develop as rat pups first begin to explore their environment. Directional, locational and rhythmic organization of firing are present during initial exploration, including adult-like directional firing. The stability and precision of place cell firing continues to develop throughout juvenility. Stable grid cell firing appears later but matures rapidly to adult levels. Our results demonstrate the presence of three neuronal representations of space prior to extensive experience, and show how they develop with age. PMID:20558720
Development of Allocentric Spatial Recall from New Viewpoints in Virtual Reality
ERIC Educational Resources Information Center
Negen, James; Heywood-Everett, Edward; Roome, Hannah E.; Nardini, Marko
2018-01-01
Using landmarks and other scene features to recall locations from new viewpoints is a critical skill in spatial cognition. In an immersive virtual reality task, we asked children 3.5-4.5 years old to remember the location of a target using various cues. On some trials they could use information from their own self-motion. On some trials they could…
Family allocentrism and its relation to adjustment among Chinese and Italian adolescents.
Li, Jian-Bin; Delvecchio, Elisa; Lis, Adriana; Mazzeschi, Claudia
2018-03-21
Family allocentrism is a domain-specific collectivistic attribute referring to the family. This research tested the one-factor structure of the Family Allocentrism Scale (FAS), examined the association between family allocentrism and adjustment outcomes, and compared the factor means and the correlations with adjustment between Chinese and Italian adolescents. To this end, 484 Chinese and 480 Italian adolescents participated in the study by answering a battery of self-report measures. The results confirmed the one-factor structure of the FAS. Family allocentrism was related to a number of adjustment outcomes. More importantly, Chinese adolescents reported more family allocentrism than their Italian counterparts did, but the relations between family allocentrism and adjustment outcomes were equivalent in magnitude between the two samples. Collectively, these findings provide crucial evidence for the psychometric properties of the FAS and shed light on the importance of family allocentrism in promoting positive youth development from a cross-cultural perspective. Copyright © 2018 Elsevier B.V. All rights reserved.
Framing of grid cells within and beyond navigation boundaries
Savelli, Francesco; Luck, JD; Knierim, James J
2017-01-01
Grid cells represent an ideal candidate to investigate the allocentric determinants of the brain’s cognitive map. Most studies of grid cells emphasized the roles of geometric boundaries within the navigational range of the animal. Behaviors such as novel route-taking between local environments indicate the presence of additional inputs from remote cues beyond the navigational borders. To investigate these influences, we recorded grid cells as rats explored an open-field platform in a room with salient, remote cues. The platform was rotated or translated relative to the room frame of reference. Although the local, geometric frame of reference often exerted the strongest control over the grids, the remote cues demonstrated a consistent, sometimes dominant, countervailing influence. Thus, grid cells are controlled by both local geometric boundaries and remote spatial cues, consistent with prior studies of hippocampal place cells and providing a rich representational repertoire to support complex navigational (and perhaps mnemonic) processes. DOI: http://dx.doi.org/10.7554/eLife.21354.001 PMID:28084992
Gravity orientation tuning in macaque anterior thalamus.
Laurens, Jean; Kim, Byounghoon; Dickman, J David; Angelaki, Dora E
2016-12-01
Gravity may provide a ubiquitous allocentric reference to the brain's spatial orientation circuits. Here we describe neurons in the macaque anterior thalamus tuned to pitch and roll orientation relative to gravity, independently of visual landmarks. We show that individual cells exhibit two-dimensional tuning curves, with peak firing rates at a preferred vertical orientation. These results identify a thalamic pathway for gravity cues to influence perception, action and spatial cognition.
Cultural background shapes spatial reference frame proclivity
Goeke, Caspar; Kornpetpanee, Suchada; Köster, Moritz; Fernández-Revelles, Andrés B.; Gramann, Klaus; König, Peter
2015-01-01
Spatial navigation is an essential human skill that is influenced by several factors. The present study investigates how gender, age, and cultural background account for differences in reference frame proclivity and performance in a virtual navigation task. Using an online navigation study, we recorded reaction times, error rates (confusion of turning axis), and reference frame proclivity (egocentric vs. allocentric reference frame) of 1823 participants. Reaction times significantly varied with gender and age, but were only marginally influenced by the cultural background of participants. Error rates were in line with these results and exhibited a significant influence of gender and culture, but not age. Participants’ cultural background significantly influenced reference frame selection; the majority of North-Americans preferred an allocentric strategy, while Latin-Americans preferred an egocentric navigation strategy. European and Asian groups were in between these two extremes. Neither the factor of age nor the factor of gender had a direct impact on participants’ navigation strategies. The strong effects of cultural background on navigation strategies without the influence of gender or age underlines the importance of socialized spatial cognitive processes and argues for socio-economic analysis in studies investigating human navigation. PMID:26073656
The limited effect of coincident orientation on the choice of intrinsic axis.
Li, Jing; Su, Wei
2015-06-01
The allocentric system computes and represents general object-to-object spatial relationships to provide a spatial frame of reference other than the egocentric system. The intrinsic frame-of-reference system theory, which suggests people learn the locations of objects based upon an intrinsic axis, is important in research about the allocentric system. The purpose of the current study was to determine whether the effect of coincident orientation on the choice of intrinsic axis was limited. Two groups of participants (24 men, 24 women; M age = 24 yr., SD = 2) encoded different spatial layouts in which the objects shared a coincident orientation of 315° or 225°, respectively, relative to the learning perspective (0°). The response pattern in a partial-scene-recognition task following learning reflected different strategies for choosing the intrinsic axis under different conditions. Under the 315° object-orientation condition, the objects' coincident orientation was as important as the symmetric axis in the choice of the intrinsic axis. However, participants were more likely to choose the symmetric axis as the intrinsic axis under the 225° object-orientation condition. The results suggest the effect of coincident orientation on the choice of intrinsic axis is limited.
Stepping into a Map: Initial Heading Direction Influences Spatial Memory Flexibility
ERIC Educational Resources Information Center
Gagnon, Stephanie A.; Brunyé, Tad T.; Gardony, Aaron; Noordzij, Matthijs L.; Mahoney, Caroline R.; Taylor, Holly A.
2014-01-01
Learning a novel environment involves integrating first-person perceptual and motoric experiences with developing knowledge about the overall structure of the surroundings. The present experiments provide insights into the parallel development of these egocentric and allocentric memories by intentionally conflicting body- and world-centered frames…
Spatiotopic coding during dynamic head tilt
Turi, Marco; Burr, David C.
2016-01-01
Humans maintain a stable representation of the visual world effortlessly, despite constant movements of the eyes, head, and body, across multiple planes. Whereas visual stability in the face of saccadic eye movements has been intensely researched, fewer studies have investigated retinal image transformations induced by head movements, especially in the frontal plane. Unlike head rotations in the horizontal and sagittal planes, tilting the head in the frontal plane is only partially counteracted by torsional eye movements and consequently induces a distortion of the retinal image to which we seem to be completely oblivious. One possible mechanism aiding perceptual stability is an active reconstruction of a spatiotopic map of the visual world, anchored in allocentric coordinates. To explore this possibility, we measured the positional motion aftereffect (PMAE; the apparent change in position after adaptation to motion) with head tilts of ∼42° between adaptation and test (to dissociate retinal from allocentric coordinates). The aftereffect was shown to have both a retinotopic and spatiotopic component. When tested with unpatterned Gaussian blobs rather than sinusoidal grating stimuli, the retinotopic component was greatly reduced, whereas the spatiotopic component remained. The results suggest that perceptual stability may be maintained at least partially through mechanisms involving spatiotopic coding. NEW & NOTEWORTHY Given that spatiotopic coding could play a key role in maintaining visual stability, we look for evidence of spatiotopic coding after retinal image transformations caused by head tilt. To this end, we measure the strength of the positional motion aftereffect (PMAE; previously shown to be largely spatiotopic after saccades) after large head tilts. We find that, as with eye movements, the spatial selectivity of the PMAE has a large spatiotopic component after head rotation. PMID:27903636
Jo, Yong Sang; Choi, June-Seek
2014-03-01
The medial prefrontal cortex (mPFC) has been suggested to play a crucial role in retrieving detailed contextual information about a previous learning episode in response to a single retrieval cue. However, few studies have investigated the neurochemical mechanisms that mediate the prefrontal retrieval process. In the current study, we examined whether N-methyl-D-aspartate receptors (NMDARs) in the mPFC were necessary for retrieval of a well-learned spatial location on the basis of partial or degraded spatial cues. Rats were initially trained to find a hidden platform in the Morris water maze using four extramaze cues in the surrounding environment. Their retrieval performance was subsequently tested under different cue conditions. Infusions of DL-2-amino-5-phosphonovaleric acid (APV), an NMDAR antagonist, significantly disrupted memory retrieval when three of the original cues were removed. By contrast, APV injections into the mPFC did not affect animals' retrieval performance when the original cues were presented or when three novel landmarks were added alongside the original cues. These results indicate that prefrontal NMDARs are required for memory retrieval when allocentric spatial information is degraded. NMDAR-dependent neurotransmission in the mPFC may facilitate an active retrieval process to reactivate complete contextual representations associated with partial retrieval cues. Copyright © 2013 Elsevier Inc. All rights reserved.
Plank, Markus; Snider, Joseph; Kaestner, Erik; Halgren, Eric; Poizner, Howard
2015-02-01
Using a novel, fully mobile virtual reality paradigm, we investigated the EEG correlates of spatial representations formed during unsupervised exploration. On day 1, subjects implicitly learned the location of 39 objects by exploring a room and popping bubbles that hid the objects. On day 2, they again popped bubbles in the same environment. In most cases, the objects hidden underneath the bubbles were in the same place as on day 1. However, a varying third of them were misplaced in each block. Subjects indicated their certainty that the object was in the same location as the day before. Compared with bubble pops revealing correctly placed objects, bubble pops revealing misplaced objects evoked a decreased negativity starting at 145 ms, with scalp topography consistent with generation in medial parietal cortex. There was also an increased negativity starting at 515 ms to misplaced objects, with scalp topography consistent with generation in inferior temporal cortex. Additionally, misplaced objects elicited an increase in frontal midline theta power. These findings suggest that the successive neurocognitive stages of processing allocentric space may include an initial template matching, integration of the object within its spatial cognitive map, and memory recall, analogous to the processing negativity N400 and theta that support verbal cognitive maps in humans. Copyright © 2015 the American Physiological Society.
Evaluation of a conceptual framework for predicting navigation performance in virtual reality.
Grübel, Jascha; Thrash, Tyler; Hölscher, Christoph; Schinazi, Victor R
2017-01-01
Previous research in spatial cognition has often relied on simple spatial tasks in static environments in order to draw inferences regarding navigation performance. These tasks are typically divided into categories (e.g., egocentric or allocentric) that reflect different two-systems theories. Unfortunately, this two-systems approach has been insufficient for reliably predicting navigation performance in virtual reality (VR). In the present experiment, participants were asked to learn and navigate towards goal locations in a virtual city and then perform eight simple spatial tasks in a separate environment. These eight tasks were organised along four orthogonal dimensions (static/dynamic, perceived/remembered, egocentric/allocentric, and distance/direction). We employed confirmatory and exploratory analyses in order to assess the relationship between navigation performance and performances on these simple tasks. We provide evidence that a dynamic task (i.e., intercepting a moving object) is capable of predicting navigation performance in a familiar virtual environment better than several categories of static tasks. These results have important implications for studies on navigation in VR that tend to over-emphasise the role of spatial memory. Given that our dynamic tasks required efficient interaction with the human interface device (HID), they were more closely aligned with the perceptuomotor processes associated with locomotion than wayfinding. In the future, researchers should consider training participants on HIDs using a dynamic task prior to conducting a navigation experiment. Performances on dynamic tasks should also be assessed in order to avoid confounding skill with an HID and spatial knowledge acquisition.
Preschool children's proto-episodic memory assessed by deferred imitation.
Burns, Patrick; Russell, Charlotte; Russell, James
2015-01-01
In two experiments, both employing deferred imitation, we studied the developmental origins of episodic memory in two- to three-year-old children by adopting a "minimalist" view of episodic memory based on its What-When-Where ("WWW": spatiotemporal plus semantic) content. We argued that the temporal element within spatiotemporal should be the order/simultaneity of the event elements, but that it is not clear whether the spatial content should be egocentric or allocentric. We also argued that episodic recollection should be configural (tending towards all-or-nothing recall of the WWW elements). Our first deferred imitation experiment, using a two-dimensional (2D) display, produced superior-to-chance performance after 2.5 years but no evidence of configural memory. Moreover, performance did not differ from that on a What-What-What control task. Our second deferred imitation study required the children to reproduce actions on an object in a room, thereby affording layout-based spatial cues. In this case, not only was there superior-to-chance performance after 2.5 years but memory was also configural at both ages. We discuss the importance of allocentric spatial cues in episodic recall in early proto-episodic memory and reflect on the possible role of hippocampal development in this process.
Fuss, Theodora; Bleckmann, Horst; Schluessel, Vera
2014-01-01
This study assessed complex spatial learning and memory in two species of shark, the grey bamboo shark (Chiloscyllium griseum) and the coral cat shark (Atelomycterus marmoratus). It was hypothesized that sharks can learn and apply an allocentric orientation strategy. Eight out of ten sharks successfully completed the initial training phase (by locating a fixed goal position in a diamond maze from two possible start points) within 14.9 ± 7.6 sessions and proceeded to seven sets of transfer tests, in which sharks had to perform under altered environmental conditions. Transfer tests revealed that sharks had oriented and solved the tasks visually, using all of the provided environmental cues. Unintentional cueing did not occur. Results correspond to earlier studies on spatial memory and cognitive mapping in other vertebrates. Future experiments should investigate whether sharks possess a cognitive spatial mapping system as has already been found in several teleosts and stingrays. Following the completion of transfer tests, sharks were subjected to ablation of most of the pallium, which compromised their previously acquired place learning abilities. These results indicate that the telencephalon plays a crucial role in the processing of information on place learning and allocentric orientation strategies.
Spatial Updating Strategy Affects the Reference Frame in Path Integration.
He, Qiliang; McNamara, Timothy P
2018-06-01
This study investigated how spatial updating strategies affected the selection of reference frames in path integration. Participants walked an outbound path consisting of three successive waypoints in a featureless environment and then pointed to the first waypoint. We manipulated the alignment of participants' final heading at the end of the outbound path with their initial heading to examine the adopted reference frame. We assumed that the initial heading defined the principal reference direction in an allocentric reference frame. In Experiment 1, participants were instructed to use a configural updating strategy and to monitor the shape of the outbound path while they walked it. Pointing performance was best when the final heading was aligned with the initial heading, indicating the use of an allocentric reference frame. In Experiment 2, participants were instructed to use a continuous updating strategy and to keep track of the location of the first waypoint while walking the outbound path. Pointing performance was equivalent regardless of the alignment between the final and the initial headings, indicating the use of an egocentric reference frame. These results confirmed that people could employ different spatial updating strategies in path integration (Wiener, Berthoz, & Wolbers Experimental Brain Research 208(1) 61-71, 2011), and suggested that these strategies could affect the selection of the reference frame for path integration.
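The distinction this abstract draws between configural and continuous updating can be made concrete with a small illustrative sketch (not the authors' analysis): the configural strategy stores the outbound path in an allocentric frame anchored to the initial heading and computes the homing vector only when pointing is required, whereas the continuous strategy keeps an egocentric vector to the first waypoint updated after every translation and rotation. All path values below are made up for illustration.

```python
import numpy as np

def configural_homing(legs):
    """Configural strategy (illustrative): store the outbound path in an
    allocentric frame anchored to the initial heading, then compute the
    vector back to the first waypoint only when pointing is required."""
    heading = 0.0                      # initial heading defines the reference direction
    pos = np.zeros(2)
    waypoints = [pos.copy()]
    for distance, turn in legs:        # each leg: walk `distance`, then turn left by `turn` (radians)
        pos = pos + distance * np.array([np.cos(heading), np.sin(heading)])
        waypoints.append(pos.copy())
        heading += turn
    return waypoints[0] - pos          # allocentric vector from final position to first waypoint

def continuous_homing(legs):
    """Continuous strategy (illustrative): keep an egocentric vector to the
    first waypoint (x = ahead, y = left) and update it after every
    translation and rotation of the body."""
    target = np.zeros(2)
    for distance, turn in legs:
        target = target - np.array([distance, 0.0])      # walking forward pushes the target backward
        c, s = np.cos(-turn), np.sin(-turn)               # a body rotation re-expresses the vector
        target = np.array([c * target[0] - s * target[1],
                           s * target[0] + c * target[1]])
    return target

# Example outbound path: three legs with two left turns, as in a triangle-completion task.
legs = [(3.0, np.pi / 3), (2.0, np.pi / 2), (2.5, 0.0)]   # (distance, turn in radians)
allo = configural_homing(legs)
ego = continuous_homing(legs)
print(allo, np.linalg.norm(allo))   # homing vector in the allocentric frame
print(ego, np.linalg.norm(ego))     # same homing distance, expressed egocentrically
```

Both routines return vectors of equal length; they differ only in the reference frame in which the target is maintained, which is the property the pointing experiments exploit.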
Visual Spatial Cognition in Neurodegenerative Disease
Possin, Katherine L.
2011-01-01
Visual spatial impairment is often an early symptom of neurodegenerative disease; however, this multi-faceted domain of cognition is not well-assessed by most typical dementia evaluations. Neurodegenerative diseases cause circumscribed atrophy in distinct neural networks, and accordingly, they impact visual spatial cognition in different and characteristic ways. Anatomically-focused visual spatial assessment can assist the clinician in making an early and accurate diagnosis. This article will review the literature on visual spatial cognition in neurodegenerative disease clinical syndromes, and where research is available, by neuropathologic diagnoses. Visual spatial cognition will be organized primarily according to the following schemes: bottom-up / top-down processing, dorsal / ventral stream processing, and egocentric / allocentric frames of reference. PMID:20526954
Mohammadi, Alireza; Kargar, Mahmoud; Hesami, Ehsan
2018-03-01
Spatial disorientation is a hallmark of amnestic mild cognitive impairment (aMCI) and Alzheimer's disease. Our aim was to use virtual reality to determine the allocentric and egocentric memory deficits of subjects with single-domain aMCI (aMCIsd) and multiple-domain aMCI (aMCImd). For this purpose, we introduced an advanced virtual reality navigation task (VRNT) to distinguish these deficits in mild Alzheimer's disease (miAD), aMCIsd, and aMCImd. The VRNT performance of 110 subjects, including 20 with miAD, 30 with pure aMCIsd, 30 with pure aMCImd, and 30 cognitively normal controls was compared. Our newly developed VRNT consists of a virtual neighbourhood (allocentric memory) and virtual maze (egocentric memory). Verbal and visuospatial memory impairments were also examined with Rey Auditory-Verbal Learning Test and Rey-Osterrieth Complex Figure Test, respectively. We found that miAD and aMCImd subjects were impaired in both allocentric and egocentric memory, but aMCIsd subjects performed similarly to the normal controls on both tasks. The miAD, aMCImd, and aMCIsd subjects performed worse on finding the target or required more time in the virtual environment than the aMCImd, aMCIsd, and normal controls, respectively. Our findings indicated the aMCImd and miAD subjects, as well as the aMCIsd subjects, were more impaired in egocentric orientation than allocentric orientation. We concluded that VRNT can distinguish aMCImd subjects, but not aMCIsd subjects, from normal elderly subjects. The VRNT, along with the Rey Auditory-Verbal Learning Test and Rey-Osterrieth Complex Figure Test, can be used as a valid diagnostic tool for properly distinguishing different forms of aMCI. © 2018 Japanese Psychogeriatric Society.
Memory for Complex Visual Objects but Not for Allocentric Locations during the First Year of Life
ERIC Educational Resources Information Center
Dupierrix, Eve; Hillairet de Boisferon, Anne; Barbeau, Emmanuel; Pascalis, Olivier
2015-01-01
Although human infants demonstrate early competence to retain visual information, memory capacities during infancy remain largely undocumented. In three experiments, we used a Visual Paired Comparison (VPC) task to examine abilities to encode identity (Experiment 1) and spatial properties (Experiments 2a and 2b) of unfamiliar complex visual…
ERIC Educational Resources Information Center
Dabul, Amy J.; And Others
1995-01-01
Posits a distinction between cultures motivated by individualistic value systems (idiocentric) and collectivistic value systems (allocentric). Study reveals that Mexican American adolescents describe themselves in more allocentric terms, while Anglo American adolescents choose idiocentric terms. Suggests a correlation between idiocentric values…
Conscious experience and episodic memory: hippocampus at the crossroads.
Behrendt, Ralf-Peter
2013-01-01
If an instance of conscious experience of the seemingly objective world around us could be regarded as a newly formed event memory, much as an instance of mental imagery has the content of a retrieved event memory, and if, therefore, the stream of conscious experience could be seen as evidence for ongoing formation of event memories that are linked into episodic memory sequences, then unitary conscious experience could be defined as a symbolic representation of the pattern of hippocampal neuronal firing that encodes an event memory - a theoretical stance that may shed light into the mind-body and binding problems in consciousness research. Exceedingly detailed symbols that describe patterns of activity rapidly self-organizing, at each cycle of the θ rhythm, in the hippocampus are instances of unitary conscious experience that jointly constitute the stream of consciousness. Integrating object information (derived from the ventral visual stream and orbitofrontal cortex) with contextual emotional information (from the anterior insula) and spatial environmental information (from the dorsal visual stream), the hippocampus rapidly forms event codes that have the informational content of objects embedded in an emotional and spatiotemporally extending context. Event codes, formed in the CA3-dentate network for the purpose of their memorization, are not only contextualized but also allocentric representations, similarly to conscious experiences of events and objects situated in a seemingly objective and observer-independent framework of phenomenal space and time. Conscious perception, creating the spatially and temporally extending world that we perceive around us, is likely to be evolutionarily related to more fleeting and seemingly internal forms of conscious experience, such as autobiographical memory recall, mental imagery, including goal anticipation, and to other forms of externalized conscious experience, namely dreaming and hallucinations; and evidence pointing to an important contribution of the hippocampus to these conscious phenomena will be reviewed.
Orientation and metacognition in virtual space.
Tenbrink, Thora; Salwiczek, Lucie H
2016-05-01
Cognitive scientists increasingly use virtual reality scenarios to address spatial perception, orientation, and navigation. If based on desktops rather than mobile immersive environments, this involves a discrepancy between the physically experienced static position and the visually perceived dynamic scene, leading to cognitive challenges that users of virtual worlds may or may not be aware of. The frequently reported loss of orientation and worse performance in point-to-origin tasks relate to the difficulty of establishing a consistent reference system on an allocentric or egocentric basis. We address the verbalizability of spatial concepts relevant in this regard, along with the conscious strategies reported by participants. Behavioral and verbal data were collected using a perceptually sparse virtual tunnel scenario that has frequently been used to differentiate between humans' preferred reference systems. Surprisingly, the linguistic data we collected relate to reference system verbalizations known from the earlier literature only to a limited extent, but instead reveal complex cognitive mechanisms and strategies. Orientation in desktop virtual reality appears to pose considerable challenges, which participants react to by conceptualizing the task in individual ways that do not systematically relate to the generic concepts of egocentric and allocentric reference frames. (c) 2016 APA, all rights reserved.
An Active System for Visually-Guided Reaching in 3D across Binocular Fixations
2014-01-01
Based on the importance of relative disparity between objects for accurate hand-eye coordination, this paper presents a biological approach inspired by the cortical neural architecture. Accordingly, motor information is coded in egocentric coordinates obtained from an allocentric representation of space (in terms of disparity), which is itself generated from the egocentric representation of the visual information (image coordinates). In this way, the different aspects of visuomotor coordination are integrated: an active vision system, composed of two vergent cameras; a module for 2D binocular disparity estimation based on a local estimation of phase differences performed through a bank of Gabor filters; and a robotic actuator to perform the corresponding tasks (visually-guided reaching). The approach's performance is evaluated through experiments on both simulated and real data. PMID:24672295
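The disparity module described above rests on the phase-difference principle: a binocular disparity shifts the local phase of band-pass (Gabor) filter responses, and dividing that shift by the filter's peak spatial frequency recovers the disparity. The 1D sketch below illustrates the principle only; the stimulus, filter tuning, and variable names are assumptions for demonstration, not the paper's implementation.

```python
import numpy as np

def local_phase(signal, x, freq, sigma):
    """Phase of a complex Gabor (windowed Fourier) coefficient centred at x = 0.
    freq is the filter's peak spatial frequency in cycles/pixel."""
    kernel = np.exp(-(x ** 2) / (2 * sigma ** 2)) * np.exp(-1j * 2 * np.pi * freq * x)
    return np.angle(np.sum(signal * kernel))

# Illustrative stimulus: a grating at the filter's preferred frequency,
# displaced horizontally between the two eyes (positive disparity = rightward shift).
x = np.arange(-32, 33, 1.0)
freq, sigma = 0.05, 8.0                      # assumed filter tuning, not the paper's values
true_disparity = 3.0                         # pixels
left = np.cos(2 * np.pi * freq * x)
right = np.cos(2 * np.pi * freq * (x - true_disparity))

phi_left = local_phase(left, x, freq, sigma)
phi_right = local_phase(right, x, freq, sigma)

# Phase-difference model: disparity ≈ Δφ / (2π · peak frequency).
dphi = np.angle(np.exp(1j * (phi_left - phi_right)))    # wrap to (-pi, pi]
estimated_disparity = dphi / (2 * np.pi * freq)
print(estimated_disparity)                   # ≈ 3.0 for this stimulus
```

Using a grating at the filter's preferred frequency makes the recovery exact; for broadband images the phase model is only approximate and is typically pooled over a filter bank, as the paper describes.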
Brown, Franklin C; Roth, Robert M; Katz, Lynda J
2015-08-30
Attention Deficit Hyperactivity Disorder (ADHD) has often been conceptualized as arising from executive dysfunctions (e.g., inattention, defective inhibition). However, recent studies have suggested that cognitive inefficiency may underlie many ADHD symptoms, as indicated by reaction time and processing speed abnormalities. This study explored whether a non-timed measure of cognitive inefficiency would also be abnormal. A sample of 23 ADHD subjects was compared to 23 controls on a test that included both egocentric and allocentric visual memory subtests. A factor analysis was used to determine which cognitive variables contributed to allocentric visual memory. The ADHD sample performed significantly lower on the allocentric but not egocentric conditions. Allocentric visual memory was not associated with timed, working memory, visual perception, or mental rotation variables. The paper concludes by discussing how these results support a cognitive inefficiency explanation for some ADHD symptoms and by outlining future research directions. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Language and spatial frames of reference in mind and brain.
Gallistel, C R.
2002-08-01
Some language communities routinely use allocentric reference directions (e.g. 'uphill-downhill') where speakers of European languages would use egocentric references ('left-right'). Previous experiments have suggested that the different language groups use different reference frames in non-linguistic tasks involving the recreation of oriented arrays. However, a recent paper argues that manipulating test conditions produces similar effects in monolingual English speakers, and in animals.
la Cour, L. T.; Stone, B. W.; Hopkins, W.; Menzel, C.; Fragaszy, D.
2013-01-01
Perceptuomotor functions that support using hand tools can be examined in other manipulation tasks, such as alignment of objects to surfaces. We examined tufted capuchin monkeys’ and chimpanzees’ performance at aligning objects to surfaces while managing one or two spatial relations to do so. We presented 6 subjects of each species with a single stick to place into a groove, two sticks of equal length to place into two grooves, or two sticks joined as a T to place into a T-shaped groove. Tufted capuchins and chimpanzees performed equivalently on these tasks, aligning the straight stick to within 22.5° of parallel to the groove in approximately half of their attempts to place it, and taking more attempts to place the T stick than two straight sticks. The findings provide strong evidence that tufted capuchins and chimpanzees do not reliably align even one prominent axial feature of an object to a surface, and that managing two concurrent allocentric spatial relations in an alignment problem is significantly more challenging to them than managing two sequential relations. In contrast, humans from two years of age display very different perceptuomotor abilities in a similar task: they align sticks to a groove reliably on each attempt, and they readily manage two allocentric spatial relations concurrently. Limitations in aligning objects and in managing two or more relations at a time significantly constrain how nonhuman primates can use hand tools. PMID:23820935
Rosenbaum, R Shayna; Ziegler, Marilyne; Winocur, Gordon; Grady, Cheryl L; Moscovitch, Morris
2004-01-01
The role of the hippocampus in recent spatial memory has been well documented in patients with damage to this structure, but there is now evidence that the hippocampus may not be needed for the storage and recovery of a spatial layout that was experienced long before injury. Such preservation may rely, instead, on a network of dissociable, extra-hippocampal regions implicated in topographical orientation. Using functional magnetic resonance imaging (fMRI), we investigated this hypothesis in healthy individuals with extensive experience navigating in a large-scale urban environment (downtown Toronto). Participants were scanned as they performed mental navigation tasks that emphasized different types of spatial representations. Tasks included proximity judgments, distance judgments, landmark sequencing, and blocked-route problem-solving. The following regions were engaged to varying degrees depending on the processing demands of each task: retrosplenial cortex, believed to be involved in assigning directional significance to locales within a relatively allocentric framework; medial and posterior parietal cortex, concerned with processing space within egocentric coordinates during imagined movement; and regions of prefrontal cortex, present in tasks heavily dependent on working memory. In a second, event-related experiment, a distinct area of inferotemporal cortex was revealed during identification of familiar landmarks relative to unknown buildings in addition to activation of many of those regions identified in the navigation tasks. This result suggests that familiar landmarks are strongly integrated with the spatial context in which they were experienced. Importantly, right medial temporal lobe activity was observed, its magnitude equivalent across all tasks, though the core of the activated region was in the parahippocampal gyrus, barely touching the hippocampus proper. Copyright 2004 Wiley-Liss, Inc.
Byrne, Patrick A; Crawford, J Douglas
2010-06-01
It is not known how egocentric visual information (location of a target relative to the self) and allocentric visual information (location of a target relative to external landmarks) are integrated to form reach plans. Based on behavioral data from rodents and humans we hypothesized that the degree of stability in visual landmarks would influence the relative weighting. Furthermore, based on numerous cue-combination studies we hypothesized that the reach system would act like a maximum-likelihood estimator (MLE), where the reliability of both cues determines their relative weighting. To predict how these factors might interact we developed an MLE model that weighs egocentric and allocentric information based on their respective reliabilities, and also on an additional stability heuristic. We tested the predictions of this model in 10 human subjects by manipulating landmark stability and reliability (via variable amplitude vibration of the landmarks and variable amplitude gaze shifts) in three reach-to-touch tasks: an egocentric control (reaching without landmarks), an allocentric control (reaching relative to landmarks), and a cue-conflict task (involving a subtle landmark "shift" during the memory interval). Variability from all three experiments was used to derive parameters for the MLE model, which was then used to simulate egocentric-allocentric weighting in the cue-conflict experiment. As predicted by the model, landmark vibration--despite its lack of influence on pointing variability (and thus allocentric reliability) in the control experiment--had a strong influence on egocentric-allocentric weighting. A reduced model without the stability heuristic was unable to reproduce this effect. These results suggest heuristics for extrinsic cue stability are at least as important as reliability for determining cue weighting in memory-guided reaching.
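The reliability-based weighting at the core of such a maximum-likelihood model can be written compactly: each cue is weighted by its inverse variance, so the allocentric weight is w_allo = σ_ego² / (σ_ego² + σ_allo²). The sketch below shows that standard weighting plus a purely illustrative multiplicative term standing in for a stability heuristic; the numerical values and the heuristic's exact form are assumptions, not taken from the study.

```python
import numpy as np

def mle_weight(sigma_ego, sigma_allo):
    """Standard reliability (inverse-variance) weight given to the allocentric cue."""
    return sigma_ego ** 2 / (sigma_ego ** 2 + sigma_allo ** 2)

def combined_estimate(x_ego, x_allo, sigma_ego, sigma_allo, stability=1.0):
    """Weighted reach-target estimate. `stability` in [0, 1] is an illustrative
    stand-in for a landmark-stability heuristic: unstable (e.g., vibrating)
    landmarks are down-weighted beyond what measured variability alone predicts."""
    w_allo = stability * mle_weight(sigma_ego, sigma_allo)
    return w_allo * x_allo + (1.0 - w_allo) * x_ego

# Illustrative cue-conflict trial: landmarks shifted by 2 cm, so the allocentric
# estimate of the remembered target disagrees with the egocentric one.
x_ego, x_allo = 0.0, 2.0            # cm
sigma_ego, sigma_allo = 1.5, 1.0    # assumed per-cue pointing variability (cm)

print(combined_estimate(x_ego, x_allo, sigma_ego, sigma_allo, stability=1.0))  # stable landmarks
print(combined_estimate(x_ego, x_allo, sigma_ego, sigma_allo, stability=0.5))  # vibrating landmarks
```

With stable landmarks the combined estimate is pulled strongly toward the shifted allocentric cue; down-weighting for vibration reduces that pull even though the measured variability is unchanged, which is the qualitative pattern the abstract reports.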
Mohammadi, Alireza; Hesami, Ehsan; Kargar, Mahmoud; Shams, Jamal
2018-04-01
Present evidence suggests that the use of virtual reality has great advantages in evaluating visuospatial navigation and memory for the diagnosis of psychiatric or other neurological disorders. There are a few virtual reality studies on allocentric and egocentric memories in schizophrenia, but studies on both memories in bipolar disorder are lacking. The objective of this study was to compare the performance of allocentric and egocentric memories in patients with schizophrenia and bipolar disorder. For this purpose, an advanced virtual reality navigation task (VRNT) was presented to distinguish the navigational performances of these patients. Twenty subjects with schizophrenia and 20 bipolar disorder patients were compared with 20 healthy-matched controls on the newly developed VRNT consisting of a virtual neighbourhood (allocentric memory) and a virtual maze (egocentric memory). The results demonstrated that schizophrenia patients were significantly impaired on all allocentric, egocentric, visual, and verbal memory tasks compared with patients with bipolar disorder and normal subjects. In contrast, the performance of patients with bipolar disorder was slightly lower than that of control subjects in all these abilities, but no significant differences were observed. It was concluded that allocentric and egocentric navigation deficits are detectable in patients with schizophrenia and bipolar disorder using VRNT, and this task along with RAVLT and ROCFT can be used as a valid clinical tool for distinguishing these patients from normal subjects.
Idiocentrism, allocentrism, psychological well being and suicidal ideation: a cross cultural study.
Zhang, Jie; Norvilitis, Jill M; Ingersoll, Travis Sky
2007-01-01
The present study examined the relationship between idiocentrism, allocentrism, psychological well being (self-esteem, depression, and social support), and suicidal ideation among 283 American college students and 343 Chinese college students. Idiocentrism was correlated with high self-esteem, high depression, and low social support, but the relationships were more likely to be significant for women than for men in both cultures. Allocentrism was primarily related to social support. As predicted, high levels of suicidal ideation were correlated with more idiocentrism, but only for women. Allocentrism was related to lower levels of suicidal ideation in both cultures, but the relationship was small. As suicide prevention may start from suicidal ideation treatment, the treatment of suicidal ideation may have to take into account cultural and personal characteristics, such as idiocentrism.
Belmonti, Vittorio; Cioni, Giovanni; Berthoz, Alain
2015-07-01
Navigational and reaching spaces are known to involve different cognitive strategies and brain networks, whose development in humans is still debated. In fact, high-level spatial processing, including allocentric location encoding, is already available to very young children, but navigational strategies are not mature until late childhood. The Magic Carpet (MC) is a new electronic device translating the traditional Corsi Block-tapping Test (CBT) to navigational space. In this study, the MC and the CBT were used to assess spatial memory for navigation and for reaching, respectively. Our hypothesis was that school-age children would not treat MC stimuli as navigational paths, assimilating them to reaching sequences. Ninety-one healthy children aged 6 to 11 years and 18 adults were enrolled. Overall short-term memory performance (span) on both tests, effects of sequence geometry, and error patterns according to a new classification were studied. Span increased with age on both tests, but relatively more in navigational than in reaching space, particularly in males. Sequence geometry specifically influenced navigation, not reaching. The number of body rotations along the path affected MC performance in children more than in adults, and in women more than in men. Error patterns indicated that navigational sequences were increasingly retained as global paths across development, in contrast to separately stored reaching locations. A sequence of spatial locations can be coded as a navigational path only if a cognitive switch from a reaching mode to a navigation mode occurs. This implies the integration of egocentric and allocentric reference frames, of visual and idiothetic cues, and access to long-term memory. This switch is not yet fulfilled at school age due to immature executive functions. © 2014 John Wiley & Sons Ltd.
Egocentric and Allocentric Localization During Induced Motion
Post, Robert B.; Welch, Robert B.; Whitney, David
2009-01-01
This research examined motor measures of the apparent egocentric location and perceptual measures of the apparent allocentric location of a target that was being seen to undergo induced motion (IM). In Experiments 1 and 3, subjects fixated a stationary dot (IM target) while a rectangular surround stimulus (inducing stimulus) oscillated horizontally. The inducing stimulus motion caused the IM target to appear to move in the opposite direction. In Experiment 1, two dots (flashed targets) were flashed above and below the IM target when the surround had reached its leftmost or rightmost displacement from the subject’s midline. Subjects pointed open loop at either the apparent egocentric location of the IM target or at the bottom of the two flashed targets. On separate trials, subjects made judgments of the Vernier alignment of the IM target with the flashed targets at the endpoints of the surround’s oscillation. The pointing responses were displaced in the direction of the previously seen IM for the IM target and to a lesser degree for the bottom flashed target. However, the allocentric Vernier judgments demonstrated no perceptual displacement of the IM target relative to the flashed targets. Thus, IM results in a dissociation of egocentric location measures from allocentric location measures. In Experiment 2, pointing and Vernier measures were obtained with stationary horizontally displaced surrounds and there was no dissociation of egocentric location measures from allocentric location measures. These results indicate that the Roelofs effect did not produce the pattern of results in Experiment 1. In Experiment 3, pointing and Vernier measures were obtained when the surround was at the midpoint of an oscillation. In this case, egocentric pointing responses were displaced in the direction of surround motion (opposite IM) for the IM target and to a greater degree for the bottom flashed target. However, there was no apparent displacement of the IM target relative to the flashed targets in the allocentric Vernier judgments. Therefore, in Experiment 3 egocentric location measures were again dissociated from allocentric location measures. The results of this experiment also demonstrate that IM does not generate an allocentric displacement illusion analogous to the “flash-lag” effect. PMID:18751688
Ramos, Juan M J
2008-03-18
In previous studies we have suggested that the dorsal hippocampus is involved in spatial consolidation by showing that rats with electrolytic hippocampal lesions exhibit a profound deficit in the retention of an allocentric task 24 days after acquisition. However, in various hippocampal-dependent tasks, several studies have shown an overestimation of the behavioral deficit when electrolytic rather than axon-sparing cytotoxic lesions have been used. For this reason, in this report we compare the effects of electrolytic and neurotoxic lesions of the dorsal hippocampus on spatial retention. Results showed a similar deficit in spatial retention in both groups 24 days after acquisition. Thus, the hippocampus proper, and not fibers of passage or extrahippocampal damage, is directly responsible for the deficit in spatial retention seen in rats with electrolytic lesions.
Pickavance, John; Azmoodeh, Arianne; Wilson, Andrew D
2018-06-01
The stability of coordinated rhythmic movement is primarily affected by the required mean relative phase. In general, symmetrical coordination is more stable than asymmetrical coordination; however, there are two ways to define relative phase and the associated symmetries. The first is in an egocentric frame of reference, with symmetry defined relative to the sagittal plane down the midline of the body. The second is in an allocentric frame of reference, with symmetry defined in terms of the relative direction of motion. Experiments designed to separate these constraints have shown that both egocentric and allocentric constraints contribute to overall coordination stability, with the former typically showing larger effects. However, separating these constraints has meant comparing movements made either in different planes of motion, or by limbs in different postures. In addition, allocentric information about the coordination is either in the form of the actual limb motion, or a transformed, Lissajous feedback display. These factors limit both the comparisons that can be made and the interpretations of these comparisons. The current study examined the effects of egocentric relative phase, allocentric relative phase, and allocentric feedback format on coordination stability in a single task. We found that while all three independently contributed to stability, the egocentric constraint dominated. This supports previous work. We examine the evidence underpinning theoretical explanations for the egocentric constraint, and describe how it may reflect the haptic perception of relative phase. Copyright © 2018 Elsevier B.V. All rights reserved.
Vorhees, Charles V; Schaefer, Tori L; Skelton, Matthew R; Grace, Curtis E; Herring, Nicole R; Williams, Michael T
2009-01-01
During postnatal days (PD) 11-20, (+/-)3,4-methylenedioxymethamphetamine (MDMA) treatment impairs egocentric and allocentric learning, and reduces spontaneous locomotor activity; however, it does not have these effects during PD 1-10. How the learning impairments relate to the stress hyporesponsive period (SHRP) is unknown. To test this association, the preweaning period was subdivided into 5-day periods from PD 1-20. Separate pups within each litter were injected subcutaneously with 0, 10, 15, 20, or 25 mg/kg MDMA x4/day on PD 1-5, 6-10, 11-15, or 16-20, and tested as adults. The 3 highest MDMA dose groups showed reduced locomotor activity during the first 10 min (of 60 min), especially in the PD 1-5 and 6-10 dosing regimens. MDMA groups in all dosing regimens showed impaired allocentric learning in the Morris water maze (on acquisition and reversal, all MDMA groups were affected; on the small platform phase, the 2 high-dose groups were affected). No effects of MDMA were found on anxiety (elevated zero maze), novel object recognition, or egocentric learning (although a nonsignificant trend was observed). The Morris maze results did not support the idea that the SHRP is critical to the effects of MDMA on allocentric learning. However, since no effects on egocentric learning were found, but were apparent after PD 11-20 treatment, the results show that these 2 forms of learning have different exposure-duration sensitivities. 2009 S. Karger AG, Basel.
The world is not flat: can people reorient using slope?
Nardi, Daniele; Newcombe, Nora S; Shipley, Thomas F
2011-03-01
Studies of spatial representation generally focus on flat environments and visual input. However, the world is not flat, and slopes are part of most natural environments. In a series of 4 experiments, we examined whether humans can use a slope as a source of allocentric, directional information for reorientation. A target was hidden in a corner of a square, featureless enclosure tilted at a 5° angle. Finding it required using the vestibular, kinesthetic, and visual cues associated with the slope gradient. In Experiment 1, the overall sample performed above chance, showing that slope is sufficient for reorientation in a real environment. However, a sex difference emerged; men outperformed women by 1.4 SDs because they were more likely to use a slope-based strategy. In Experiment 2, attention was drawn to the slope, and participants were prompted to rely on it to solve the task; however, men still outperformed women, indicating a greater ability to use slope. In Experiment 3, we excluded the possibility that women's disadvantage was due to wearing heeled footwear. In Experiment 4, women required more time than men to identify the uphill direction of the slope gradient; this suggests that, in a bottom-up fashion, a perceptual or attentional difficulty underlies women's disadvantage in the ability to use slope and their decreased reliance on this cue. Overall, a bi-coordinate representation was used to find the goal: The target was encoded primarily with respect to the vertical axis and secondarily with respect to the orthogonal axis of the slope. 2011 APA, all rights reserved
Nahum-Shani, Inbal; Somech, Anit
2015-01-01
We propose and test a framework which suggests that the relationships between leadership styles and Organizational Citizenship Behaviors (OCB) are contingent upon employee cultural-based individual differences. More specifically, we examine whether followers' idiocentrism and allocentrism moderate the relationship between transformational and transactional leadership and followers' OCB. Survey data, collected from a sample of school teachers and their principals from the Israeli kibbutzim and urban sectors, support our hypotheses. We found the relationship between transformational leadership and OCB to be positive to the extent that allocentrism increases, and negative to the extent that idiocentrism increases. We also found the relationship between transactional leadership and OCB to be positive to the extent that idiocentrism increases and negative to the extent that allocentrism increases. Implications of these findings for research and practice are discussed. PMID:26893538
Allocentric information is used for memory-guided reaching in depth: A virtual reality study.
Klinghammer, Mathias; Schütz, Immo; Blohm, Gunnar; Fiehler, Katja
2016-12-01
Previous research has demonstrated that humans use allocentric information when reaching to remembered visual targets, but most of the studies are limited to 2D space. Here, we study allocentric coding of memorized reach targets in 3D virtual reality. In particular, we investigated the use of allocentric information for memory-guided reaching in depth and the role of binocular and monocular (object size) depth cues for coding object locations in 3D space. To this end, we presented a scene with objects on a table which were located at different distances from the observer and served as reach targets or allocentric cues. After free visual exploration of this scene and a short delay the scene reappeared, but with one object missing (=reach target). In addition, the remaining objects were shifted horizontally or in depth. When objects were shifted in depth, we also independently manipulated object size by either magnifying or reducing their size. After the scene vanished, participants reached to the remembered target location on the blank table. Reaching endpoints deviated systematically in the direction of object shifts, similar to our previous results from 2D presentations. This deviation was stronger for object shifts in depth than in the horizontal plane and independent of observer-target-distance. Reaching endpoints systematically varied with changes in object size. Our results suggest that allocentric information is used for coding targets for memory-guided reaching in depth. Thereby, retinal disparity and vergence as well as object size provide important binocular and monocular depth cues. Copyright © 2016 Elsevier Ltd. All rights reserved.
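The binocular cues mentioned here, vergence and retinal disparity, relate to distance through simple viewing geometry: symmetric fixation at distance D with interocular separation a yields a vergence angle γ with D = a / (2·tan(γ/2)), and a small relative disparity δ corresponds to a depth offset of roughly δ·D²/a. The sketch below simply evaluates these textbook approximations with illustrative numbers; it is not the study's analysis.

```python
import numpy as np

def fixation_distance(vergence_deg, interocular=0.063):
    """Viewing distance (m) implied by the vergence angle under symmetric fixation."""
    return interocular / (2 * np.tan(np.radians(vergence_deg) / 2))

def depth_from_disparity(disparity_deg, distance, interocular=0.063):
    """Approximate depth offset (m) of a point relative to fixation, valid for
    small relative disparities: delta_d ≈ disparity * D^2 / a."""
    return np.radians(disparity_deg) * distance ** 2 / interocular

D = fixation_distance(vergence_deg=4.0)     # ≈ 0.90 m for a 4-degree vergence angle
print(D)
print(depth_from_disparity(0.2, D))         # ≈ 0.045 m farther than fixation
```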
Bejanyan, Kathrine; Marshall, Tara C; Ferenczi, Nelli
2015-01-01
In collectivist cultures, families tend to be characterized by respect for parental authority and strong, interdependent ties. Do these aspects of collectivism exert countervailing pressures on mate choices and relationship quality? In the present research, we found that collectivism was associated with greater acceptance of parental influence over mate choice, thereby driving relationship commitment down (Studies 1 and 2), but collectivism was also associated with stronger family ties (referred to as family allocentrism), which drove commitment up (Study 2). Along similar lines, Study 1 found that collectivists' greater acceptance of parental influence on mate choice contributed to their reduced relationship passion, whereas Study 2 found that their greater family allocentrism may have enhanced their passion. Study 2 also revealed that collectivists may have reported a smaller discrepancy between their own preferences for mates high in warmth and trustworthiness and their perception of their parents' preferences for these qualities because of their stronger family allocentrism. However, their higher tolerance of parental influence may have also contributed to a smaller discrepancy in their mate preferences versus their perceptions of their parents' preferences for qualities signifying status and resources. Implications for the roles of collectivism, parental influence, and family allocentrism in relationship quality and mate selection will be discussed.
Pure topographical disorientation in a patient with right occipito-temporal lesion.
Caglio, Marcella; Castelli, Lorys; Cerrato, Paolo; Latini-Corazzini, Luca
2011-01-01
We describe a patient who presented with pure topographical disorientation after a stroke involving the right mesial occipito-temporal cortex. He could not point to external unseen landmarks or draw a map of his city, whereas he could recognize landmarks, judge the distance between pairs of landmarks of the same city, and describe the route between them. He underwent standardized cognitive tests, and six tasks were used to assess route-based and survey-based topographical orientation. This study provides evidence that topographical disorientation can be subdivided into very specific components. The results suggest that one of these components might refer to the processing of an allocentric map separable from the representation of route knowledge.
Gender comparisons in the private, collective, and allocentric selves.
Madson, L; Trafimow, D
2001-08-01
Researchers (e.g., M. B. Brewer & W. Gardner, 1996; H. C. Triandis, D. K. S. Chan, D. P. S. Bhawuk, S. Iwao, & J. P. B. Sinha, 1995) have suggested expansion of the standard model of individualism-collectivism to include people's close personal relationships in addition to their identification with in-groups. There has been considerable discussion of the hypothesis that women are more collective, interdependent, relational, and allocentric than men (e.g., S. E. Cross & L. Madson, 1997; Y. Kashima et al., 1995). In the present study, the authors used the Twenty Statements Test (M. H. Kuhn & T. McPartland, 1954) to examine gender differences in the self-concept by assessing the accessibility of private, collective, and allocentric self-cognitions. The U.S. women described themselves with more allocentric and more collective self-cognitions than did the U.S. men. Discussion focuses on the implications of those data for interpretation of other gender differences as well as for traditional models of individualism-collectivism.
Vainio, Lari; Mustonen, Terhi
2011-02-01
Brain-imaging research has shown that a viewed acting hand is mapped to the observer's hand representation that corresponds with the identity of the hand. In contrast, behavioral research has suggested that rather than representing a seen hand in relation to one's own manual system, it is represented in relation to the midline of an imaginary body. This view was drawn from the finding that the posture of the viewed hand determines how the hand facilitates responses. The present study explored how the identity of a viewed static hand facilitates responses by varying the onset time and the posture of the hand. The results were in line with the view that an observed hand can activate the observer's hand representation that corresponds with the identity of the hand. However, the posture of the hand did not influence these mapping processes. What mattered was the perspective (i.e., egocentric vs. allocentric) from which the hand was viewed. (c) 2010 APA, all rights reserved.
White, David J; Congedo, Marco; Ciorciari, Joseph; Silberstein, Richard B
2012-03-01
Brain oscillatory correlates of spatial navigation were investigated using blind source separation (BSS) and standardized low resolution electromagnetic tomography (sLORETA) analyses of 62-channel EEG recordings. Twenty-five participants were instructed to navigate to distinct landmark buildings in a previously learned virtual reality town environment. Data from periods of navigation between landmarks were subject to BSS analyses to obtain source components. Two of these cortical sources were found to exhibit significant spectral power differences during navigation with respect to a resting eyes open condition and were subject to source localization using sLORETA. These two sources were localized as a right parietal component with gamma activation and a right medial-temporal-parietal component with activation in theta and gamma bandwidths. The parietal gamma activity was thought to reflect visuospatial processing associated with the task. The medial-temporal-parietal activity was thought to be more specific to the navigational processing, representing the integration of ego- and allo-centric representations of space required for successful navigation, suggesting theta and gamma oscillations may have a role in integrating information from parietal and medial-temporal regions. Theta activity on this medial-temporal-parietal source was positively correlated with more efficient navigation performance. Results are discussed in light of the depth and proposed closed field structure of the hippocampus and potential implications for scalp EEG data. The findings of the present study suggest that appropriate BSS methods are ideally suited to minimizing the effects of volume conduction in noninvasive recordings, allowing more accurate exploration of deep brain processes.
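The analysis pipeline sketched in this abstract, blind source separation followed by band-power comparison of the resulting components, can be outlined roughly as follows. FastICA is used purely as a generic stand-in for the unspecified BSS algorithm, the sLORETA localization step is omitted, and the data, sampling rate, and component count are placeholders.

```python
import numpy as np
from scipy.signal import welch
from sklearn.decomposition import FastICA

fs = 250                                    # assumed sampling rate (Hz)
rng = np.random.default_rng(0)
eeg = rng.standard_normal((62, 60 * fs))    # placeholder for 62-channel EEG (channels x samples)

def band_power(component, fs, lo, hi):
    """Mean spectral power of one source component in a frequency band."""
    freqs, psd = welch(component, fs=fs, nperseg=2 * fs)
    return psd[(freqs >= lo) & (freqs < hi)].mean()

# Blind source separation (FastICA as a generic stand-in for the paper's BSS method).
ica = FastICA(n_components=15, random_state=0, max_iter=1000)
sources = ica.fit_transform(eeg.T).T        # shape: (components, samples)

# Compare theta (4-8 Hz) and gamma (30-45 Hz) power per source; in the study this
# would be contrasted between navigation and resting conditions.
for i, s in enumerate(sources):
    theta, gamma = band_power(s, fs, 4, 8), band_power(s, fs, 30, 45)
    print(f"source {i:2d}: theta={theta:.3e}, gamma={gamma:.3e}")
```

In the actual study, sources showing condition differences would then be localized with sLORETA, as described in the abstract.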
Canessa, Nicola; Pantaleo, Giuseppe; Crespi, Chiara; Gorini, Alessandra; Cappa, Stefano F
2014-09-18
We used the "standard" and "switched" social contract versions of the Wason Selection-task to investigate the neural bases of human reasoning about social rules. Both these versions typically elicit the deontically correct answer, i.e. the proper identification of the violations of a conditional obligation. Only in the standard version of the task, however, does this response correspond to the logically correct one. We took advantage of this differential adherence to logical vs. deontical accuracy to test the different predictions of logic rule-based vs. visuospatial accounts of inferential abilities in 14 participants who solved the standard and switched versions of the Selection-task during functional-Magnetic-Resonance-Imaging. Both versions activated the well-known left fronto-parietal network of deductive reasoning. The standard version additionally recruited the medial parietal and right inferior parietal cortex, previously associated with mental imagery and with the adoption of egocentric vs. allocentric spatial reference frames. These results suggest that visuospatial processes encoding one's own subjective experience in social interactions may support and shape the interpretation of deductive arguments and/or the resulting inferences, thus contributing to elicit content effects in human reasoning. Copyright © 2014 Elsevier B.V. All rights reserved.
Is "Object-Centred Neglect" a Homogeneous Entity?
ERIC Educational Resources Information Center
Gainotti, Guido; Ciaraffa, Francesca
2013-01-01
The nature of object-centred (allocentric) neglect and the possibility of dissociating it from egocentric (subject-centred) forms of neglect are controversial. Originally, allocentric neglect was described in patients who reproduced all the elements of a multi-object scene, but left unfinished the left side of one or more of them. More…
The medial prefrontal cortex and memory of cue location in the rat.
Rawson, Tim; O'Kane, Michael; Talk, Andrew
2010-01-01
We developed a single-trial cue-location memory task in which rats experienced an auditory cue while exploring an environment. They then recalled and avoided the sound origination point after the cue was paired with shock in a separate context. Subjects with medial prefrontal cortical (mPFC) lesions made no such avoidance response, but both lesioned and control subjects avoided the cue itself when presented at test. A follow up assessment revealed no spatial learning impairment in either group. These findings suggest that the rodent mPFC is required for incidental learning or recollection of the location at which a discrete cue occurred, but is not required for cue recognition or for allocentric spatial memory. Copyright 2009 Elsevier Inc. All rights reserved.
Klinghammer, Mathias; Blohm, Gunnar; Fiehler, Katja
2017-01-01
Previous research has shown that egocentric and allocentric information is used for coding target locations for memory-guided reaching movements. Especially, task-relevance determines the use of objects as allocentric cues. Here, we investigated the influence of scene configuration and object reliability as a function of task-relevance on allocentric coding for memory-guided reaching. For that purpose, we presented participants images of a naturalistic breakfast scene with five objects on a table and six objects in the background. Six of these objects served as potential reach-targets (= task-relevant objects). Participants explored the scene and after a short delay, a test scene appeared with one of the task-relevant objects missing, indicating the location of the reach target. After the test scene vanished, participants performed a memory-guided reaching movement toward the target location. Besides removing one object from the test scene, we also shifted the remaining task-relevant and/or task-irrelevant objects left- or rightwards either coherently in the same direction or incoherently in opposite directions. By varying object coherence, we manipulated the reliability of task-relevant and task-irrelevant objects in the scene. In order to examine the influence of scene configuration (distributed vs. grouped arrangement of task-relevant objects) on allocentric coding, we compared the present data with our previously published data set (Klinghammer et al., 2015). We found that reaching errors systematically deviated in the direction of object shifts, but only when the objects were task-relevant and their reliability was high. However, this effect was substantially reduced when task-relevant objects were distributed across the scene leading to a larger target-cue distance compared to a grouped configuration. No deviations of reach endpoints were observed in conditions with shifts of only task-irrelevant objects or with low object reliability irrespective of task-relevancy. Moreover, when solely task-relevant objects were shifted incoherently, the variability of reaching endpoints increased compared to coherent shifts of task-relevant objects. Our results suggest that the use of allocentric information for coding targets for memory-guided reaching depends on the scene configuration, in particular the average distance of the reach target to task-relevant objects, and the reliability of task-relevant allocentric information. PMID:28450826
Idiocentrism-Allocentrism and Academics Self-Efficacy for Research in Beijing Universities
ERIC Educational Resources Information Center
Zhao, Jingsong; McCormick, John; Hoekman, Katherine
2008-01-01
Purpose: This article aims to explore how self-efficacy is related to academic research activities and how intra-culturally relevant factors may play a role in self-efficacy in the context of higher education in Beijing. In particular, relationships of self-efficacy for research with research productivity and idiocentrism-allocentrism are to be…
fMRI-activation during drawing a naturalistic or sketchy portrait.
Schaer, K; Jahn, G; Lotze, M
2012-07-15
Neural processes for naturalistic drawing might be discerned into object recognition and analysis, attention processes guiding eye hand interaction, encoding of visual features in an allocentric reference frame, a transfer into the motor command and precise motor guidance with tight sensorimotor feedback. Cerebral representations in a real life paradigm during naturalistic drawing have sparsely been investigated. Using a functional Magnetic Resonance Imaging (fMRI) paradigm we measured 20 naive subjects during drawing a portrait from a frontal face presented as a photograph. Participants were asked to draw the portrait in either a naturalistic or a sketchy characteristic way. Tracing the contours of the face with a pencil or passive viewing of the face served as control conditions. Compared to passive viewing, naturalistic and sketchy drawing recruited predominantly the dorsal visual pathway, somatosensory and motor areas and bilateral BA 44. The right occipital lobe, middle temporal (MT) and the fusiform face area were increasingly active during drawing compared to passive viewing as well. Compared to tracing with a pencil, both drawing tasks increasingly involved the bilateral precuneus together with the cuneus and right inferior temporal lobe. Overall, our study identified cerebral areas characteristic for previously proposed aspects of drawing: face perception and analysis (fusiform gyrus and higher visual areas), encoding and retrieval of locations in an allocentric reference frame (precuneus), and continuous feedback processes during motor output (parietal sulcus, cerebellar hemisphere). Copyright © 2012 Elsevier B.V. All rights reserved.
Sex differences in a human analogue of the Radial Arm Maze: the "17-Box Maze Test".
Rahman, Qazi; Abrahams, Sharon; Jussab, Fardin
2005-08-01
This study investigated sex differences in spatial memory using a human analogue of the Radial Arm Maze: a revision of the Nine Box Maze, referred to herein as the 17-Box Maze Test. The task encourages allocentric spatial processing, dissociates object from spatial memory, and incorporates a within-participants design to provide measures of location and object, working and reference memory. Healthy adult males and females (26 per group) were administered the 17-Box Maze Test, as well as mental rotation and a verbal IQ test. Females made significantly fewer errors on this task than males. However, post hoc analysis revealed that the significant sex difference was specific to object, rather than location, memory measures. These were medium to large effect sizes. The findings raise the issue of task- and component-specific sexual dimorphism in cognitive mapping.
NASA Astrophysics Data System (ADS)
Simonnet, Mathieu; Jacobson, Dan; Vieilledent, Stephane; Tisseau, Jacques
Navigating consists of coordinating egocentric and allocentric spatial frames of reference. Virtual environments have afforded researchers in the spatial community with tools to investigate the learning of space. The issue of the transfer between virtual and real situations is not trivial. A central question is the role of frames of reference in mediating spatial knowledge transfer to external surroundings, as is the effect of different sensory modalities accessed in simulated and real worlds. This challenges the capacity of blind people to use virtual reality to explore a scene without graphics. The present experiment involves a haptic and auditory maritime virtual environment. In triangulation tasks, we measure systematic errors and preliminary results show an ability to learn configurational knowledge and to navigate through it without vision. Subjects appeared to take advantage of getting lost in an egocentric “haptic” view in the virtual environment to improve performances in the real environment.
Lisofsky, Nina; Wiener, Jan; de Condappa, Olivier; Gallinat, Jürgen; Lindenberger, Ulman; Kühn, Simone
2016-10-01
Pregnancy is accompanied by prolonged exposure to high estrogen levels. Animal studies have shown that estrogen influences navigation strategies and, hence, affects navigation performance. High estrogen levels are related to increased use of hippocampal-based allocentric strategies and decreased use of striatal-based egocentric strategies. In humans, associations between hormonal shifts and navigation strategies are less well studied. This study compared 30 peripartal women (mean age 28years) to an age-matched control group on allocentric versus egocentric navigation performance (measured in the last month of pregnancy) and gray matter volume (measured within two months after delivery). None of the women had a previous pregnancy before study participation. Relative to controls, pregnant women performed less well in the egocentric condition of the navigation task, but not the allocentric condition. A whole-brain group comparison revealed smaller left striatal volume (putamen) in the peripartal women. Across the two groups, left striatal volume was associated with superior egocentric over allocentric performance. Limited by the cross-sectional study design, the findings are a first indication that human pregnancy might be accompanied by structural brain changes in navigation-related neural systems and concomitant changes in navigation strategy. Copyright © 2016 Elsevier Inc. All rights reserved.
Why do lesions in the rodent anterior thalamic nuclei cause such severe spatial deficits?
Aggleton, John P.; Nelson, Andrew J.D.
2015-01-01
Lesions of the rodent anterior thalamic nuclei cause severe deficits to multiple spatial learning tasks. Possible explanations for these effects are examined, with particular reference to T-maze alternation. Anterior thalamic lesions not only impair allocentric place learning but also disrupt other spatial processes, including direction learning, path integration, and relative length discriminations, as well as aspects of nonspatial learning, e.g., temporal discriminations. Working memory tasks, such as T-maze alternation, appear particularly sensitive as they combine an array of these spatial and nonspatial demands. This sensitivity partly reflects the different functions supported by individual anterior thalamic nuclei, though it is argued that anterior thalamic lesion effects also arise from covert pathology in sites distal to the thalamus, most critically in the retrosplenial cortex and hippocampus. This two-level account, involving both local and distal lesion effects, explains the range and severity of the spatial deficits following anterior thalamic lesions. These findings highlight how the anterior thalamic nuclei form a key component in a series of interdependent systems that support multiple spatial functions. PMID:25195980
Motor transfer from map ocular exploration to locomotion during spatial navigation from memory.
Demichelis, Alixia; Olivier, Gérard; Berthoz, Alain
2013-02-01
Spatial navigation from memory can rely on two different strategies: a mental simulation of kinesthetic spatial navigation (egocentric route strategy) or visual-spatial memory using a mental map (allocentric survey strategy). We hypothesized that a previously performed "oculomotor navigation" on a map could be used by the brain to perform a locomotor memory task. Participants were instructed to (1) learn a path on a map through a sequence of vertical and horizontal eye movements and (2) walk on the slabs of a "magic carpet" to recall this path. The main results showed that the anisotropy of ocular movements (horizontal ones being more efficient than vertical ones) influenced participants' performance when they changed direction on the central slab of the magic carpet. These data suggest that, to find their way through locomotor space, subjects mentally repeated their past ocular exploration of the map, and this visuo-motor memory was used as a template for the locomotor performance.
Plancher, G; Tirard, A; Gyselinck, V; Nicolas, S; Piolino, P
2012-04-01
Most neuropsychological assessments of episodic memory bear little similarity to the events that patients actually experience as memories in daily life. The first aim of this study was to use a virtual environment to characterize episodic memory profiles in an ecological fashion, which includes memory for central and perceptual details, spatiotemporal contextual elements, and binding. This study included subjects from three different populations: healthy older adults, patients with amnestic mild cognitive impairment (aMCI) and patients with early to moderate Alzheimer's disease (AD). Second, we sought to determine whether environmental factors that can affect encoding (active vs. passive exploration) influence memory performance in pathological aging. Third, we benchmarked the results of our virtual reality episodic memory test against a classical memory test and a subjective daily memory complaint scale. The participants were successively immersed in two virtual environments, the first as the driver of a virtual car (active exploration) and the second as the passenger of that car (passive exploration). Subjects were instructed to encode all elements of the environment as well as the associated spatiotemporal contexts. Following each immersion, we assessed the patient's recall and recognition of central information (i.e., the elements of the environment), contextual information (i.e., temporal, egocentric and allocentric spatial information) and, lastly, the quality of binding. We found that the AD patients' performance was inferior to that of the aMCI patients, and even more so to that of the healthy aged group, in line with the progression of hippocampal atrophy reported in the literature. Spatial allocentric memory assessments were found to be particularly useful for distinguishing aMCI patients from healthy older adults. Active exploration yielded enhanced recall of central and allocentric spatial information, as well as of binding, in all groups, and led aMCI patients to achieve better scores on immediate temporal memory tasks. Finally, the patients' daily memory complaints were more highly correlated with performance on the virtual test than with performance on the classical memory test. Taken together, these results highlight specific cognitive differences between these three populations that may provide additional insight into the early diagnosis and rehabilitation of pathological aging. In particular, neuropsychological studies would benefit from using virtual tests and a multi-component approach to assess episodic memory, and from encouraging active encoding of information in patients suffering from mild or severe age-related memory impairment. The beneficial effect of active encoding on episodic memory in aMCI and early to moderate AD is discussed in the context of relatively preserved frontal and motor brain functions implicated in self-referential effects and procedural abilities. Copyright © 2011 Elsevier Ltd. All rights reserved.
A Pursuit Theory Account for the Perception of Common Motion in Motion Parallax.
Ratzlaff, Michael; Nawrot, Mark
2016-09-01
The visual system uses an extraretinal pursuit eye movement signal to disambiguate the perception of depth from motion parallax. Visual motion in the same direction as the pursuit is perceived nearer in depth, while visual motion in the opposite direction to the pursuit is perceived farther in depth. This explanation of depth sign applies to either an allocentric frame of reference centered on the fixation point or an egocentric frame of reference centered on the observer. A related problem is that of depth order when two stimuli have a common direction of motion. The first psychophysical study determined whether perception of egocentric depth order is adequately explained by a model employing an allocentric framework, especially when the motion parallax stimuli have common rather than divergent motion. A second study determined whether a reversal in perceived depth order, produced by a reduction in pursuit velocity, is also explained by this model employing an allocentric framework. The results show that an allocentric model can explain both the egocentric perception of depth order with common motion and the perceptual depth order reversal created by a reduction in pursuit velocity. We conclude that an egocentric model is not the only explanation for perceived depth order in these common motion conditions. © The Author(s) 2016.
Role of collective self-esteem on youth violence in a collective culture.
Lim, Lena L; Chang, Weining C
2009-02-01
Youth violence involvement has always been the focus of significant research attention. However, as most of the studies on youth violence have been conducted in Western cultures, little is known about the antecedents of violence in the Asian context. Researchers have suggested that collectivism might be the reason for the lower violent crime rates in Asia. Nevertheless, the present study proposes an alternative approach to the relationship between collectivistic orientation and violence: the possibility that allocentrism (collectivist tendency at the individual difference level) might shape the meaning of and the attitudes towards violence; thus not all aspects of a collectivist culture serve as deterrents for violence. Instead of viewing it as a random individual act, violence in a collective cultural context could be seen, under certain circumstances, as a social obligation to one's in-group (especially when one's in-group is supportive of violence) and as an internalization of the norms and values of the culture. Thus, the present study investigates allocentrism and its relation to violence in a highly collectivist Asian culture, Singapore. We further hypothesized that collective self-esteem might serve as the mediator between allocentrism and the values of violence. Using a sample of 149 incarcerated Singaporean male adolescents, results support the proposed theoretical model whereby collective self-esteem was found to mediate between allocentrism and the culture's norms and attitudes towards violence, which eventually lead to physically violent behaviours.
The 5-HT7 receptor in learning and memory. Importance of the hippocampus
Roberts, Amanda J.; Hedlund, Peter B.
2011-01-01
The 5-HT7 receptor is a more recently discovered G-protein-coupled receptor for serotonin. The functions and possible clinical relevance of this receptor are not yet fully understood. The present paper reviews to what extent the use of animal models of learning and memory and other techniques have implicated the 5-HT7 receptor in such processes. The studies have used a combination of pharmacological and genetic tools targeting the receptor to evaluate effects on behavior and cellular mechanisms. In tests such as the Barnes maze, contextual fear conditioning and novel location recognition that involve spatial learning and memory there is a considerable amount of evidence supporting an involvement of the 5-HT7 receptor. Supporting evidence has also been obtained in studies of mRNA expression and cellular signaling as well as in electrophysiological experiments. Especially interesting are the subtle but distinct effects observed in hippocampus-dependent models of place learning where impairments have been described in mice lacking the 5-HT7 receptor or after administration of a selective antagonist. While more work is required, it appears that 5-HT7 receptors are particularly important in allocentric representation processes. In instrumental learning tasks both procognitive effects and impairments in memory have been observed using pharmacological tools targeting the 5-HT7 receptor. In conclusion, the use of pharmacological and genetic tools in animal studies of learning and memory suggest a potentially important role for the 5-HT7 receptor in cognitive processes. PMID:21484935
Serino, Silvia; Morganti, Francesca; Colombo, Desirée; Pedroli, Elisa; Cipresso, Pietro; Riva, Giuseppe
2018-06-01
A growing body of evidence has pointed out that a decline in effectively using spatial reference frames for categorizing information occurs in both normal and pathological aging. Moreover, it is also known that executive deficits primarily characterize the cognitive profile of older individuals. Acknowledging this literature, the current study aimed to disentangle the contribution of the cognitive abilities related to the use of spatial reference frames to executive functioning in both healthy and pathological aging. 48 healthy elderly individuals and 52 elderly individuals suffering from probable Alzheimer's Disease (AD) took part in the study. We exploited the potential of Virtual Reality to specifically measure the ability to retrieve and sync between different spatial reference frames, and then we administered different neuropsychological tests to evaluate executive functions. Our results indicated that allocentric functions contributed significantly to planning abilities, while syncing abilities influenced attentional ones. The findings are discussed in terms of previous literature exploring cognitive deficits in the first phase of AD.
Kelly, Rachel; Mizelle, J C; Wheaton, Lewis A
2015-08-01
Prior work has demonstrated that perspective and handedness of observed actions can affect action understanding differently in right- and left-handed persons, suggesting potential differences in the neural networks underlying action understanding between right- and left-handed individuals. We sought to evaluate potential differences in these neural networks using electroencephalography (EEG). Right- and left-handed participants observed images of tool-use actions from egocentric and allocentric perspectives, with right- and left-handed actors performing the actions. Participants judged the outcome of the observed actions, and response accuracy and latency were recorded. Behaviorally, the highest accuracy and shortest latency were found in the egocentric perspective for right- and left-handed observers. Handedness of the observer also affected accuracy and latency: right-handed observers were faster to respond than left-handed observers, but on average were less accurate. Mu band (8-10 Hz) cortico-cortical coherence analysis indicated that right-handed observers show coherence in the motor-dominant left parietal-premotor network when looking at an egocentric right hand or an allocentric left hand. When looking at a left hand in an egocentric perspective or a right hand in an allocentric perspective, coherence was lateralized to right parietal-premotor areas. In left-handed observers, bilateral parietal-premotor coherence patterns were observed regardless of actor handedness. These findings suggest that the cortical networks involved in understanding action outcomes depend on hand dominance, and notably right-handed participants seem to utilize motor systems based on the limb seen performing the action. The decreased accuracy of right-handed participants on allocentric images could be due to asymmetrical lateralization of action encoding and motoric dominance, which may interfere with translating allocentric limb action outcomes. Further neurophysiological studies will determine the specific processes by which left- and right-handed participants understand actions. Copyright © 2015 Elsevier Ltd. All rights reserved.
Towards a Normalised 3D Geovisualisation: The Viewpoint Management
NASA Astrophysics Data System (ADS)
Neuville, R.; Poux, F.; Hallot, P.; Billen, R.
2016-10-01
This paper deals with viewpoint management in 3D environments, considering an allocentric environment. Recent advances in computer science and the growing number of affordable remote sensors have led to impressive improvements in 3D visualisation. Despite some research relating to the analysis of visual variables used in 3D environments, a true standardisation of 3D representation rules is still lacking. In this paper we study the viewpoint as the first parameter to consider for a normalised visualisation of 3D data. Unlike in a 2D environment, the viewing direction in 3D is not fixed in a top-down direction. A non-optimal camera location means a poor 3D representation in terms of relayed information. Based on this statement, we propose a model, based on the analysis of the computational display pixels, that determines a viewpoint maximising the relayed information for a given kind of query. We developed an OpenGL prototype that works on screen pixels and determines the optimal camera location using a screen-pixel colour algorithm. The viewpoint management constitutes a first step towards a normalised 3D geovisualisation.
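As a rough illustration of the screen-pixel idea described above, the sketch below scores candidate camera poses by how many distinct objects remain visible in an off-screen render and keeps the best pose. The `render_id_buffer` callback and the use of integer object IDs in place of colours are assumptions made for the sake of a self-contained example, not the authors' actual OpenGL implementation.

```python
import numpy as np

def visible_information(id_buffer, background=0):
    """Count how many distinct objects appear in a rendered per-pixel ID buffer
    (a 2D integer array, 0 = background)."""
    ids = np.unique(id_buffer)
    return int(np.sum(ids != background))

def best_viewpoint(candidate_views, render_id_buffer):
    """Pick the camera pose whose rendering shows the most distinct objects.

    `candidate_views` is any iterable of camera poses; `render_id_buffer` is a
    caller-supplied function (e.g. an off-screen OpenGL pass) mapping a pose to
    a per-pixel object-ID array. Both interfaces are hypothetical.
    """
    scored = [(visible_information(render_id_buffer(view)), view)
              for view in candidate_views]
    return max(scored, key=lambda pair: pair[0])[1]

# Toy usage: the second pose reveals three objects instead of one.
fake_renders = {0: np.array([[0, 1], [1, 0]]), 1: np.array([[1, 2], [3, 0]])}
print(best_viewpoint([0, 1], lambda v: fake_renders[v]))  # -> 1
```

In practice the same counting could be performed directly on the colour buffer, with one unique colour per object, which is closer to the colour algorithm mentioned in the abstract.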
Liu, Qing-Ping; He, Wen-Wen; Ding, Hong; Nedelska, Zuzana; Hort, Jakub; Zhang, Bing; Xu, Yun
2016-01-01
Lacunar cerebral infarction (LI) is one of the risk factors for vascular dementia and correlates with progression of cognitive impairment, including of the executive functions. However, little is known about spatial navigation impairment and its underlying microstructural alterations of white matter in patients with LI, with or without mild cognitive impairment (MCI). Our aim was to investigate whether spatial navigation impairment correlated with white matter integrity in LI patients with MCI (LI-MCI). Thirty patients with LI were included in the study and were divided into LI-MCI (n=17) and non-MCI (LI-NonMCI) groups (n=13) according to neuropsychological tests. The microstructural integrity of white matter was assessed by calculating fractional anisotropy (FA) and mean diffusivity (MD) from diffusion tensor imaging (DTI) scans. Spatial navigation accuracy, evaluated separately as egocentric and allocentric, was assessed with a computerized human analogue of the Morris Water Maze, the Amunet test. The LI-MCI group performed worse than the CN and LI-NonMCI groups on the egocentric and delayed spatial navigation subtests. LI-MCI patients have spatial navigation deficits. Microstructural abnormalities in diffuse brain regions, including the hippocampus, uncinate fasciculus and other regions, may contribute to the spatial navigation impairment in LI-MCI patients at follow-up. PMID:27861154
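For reference, FA and MD are standard closed-form functions of the diffusion tensor's three eigenvalues; a minimal sketch of those definitions (independent of the authors' actual processing pipeline) is:

```python
import numpy as np

def dti_scalars(evals):
    """Mean diffusivity (MD) and fractional anisotropy (FA) from the three
    eigenvalues of a diffusion tensor, using the standard definitions."""
    l1, l2, l3 = evals
    md = (l1 + l2 + l3) / 3.0
    num = np.sqrt((l1 - l2) ** 2 + (l2 - l3) ** 2 + (l3 - l1) ** 2)
    den = np.sqrt(2.0 * (l1 ** 2 + l2 ** 2 + l3 ** 2))
    fa = num / den if den > 0 else 0.0
    return md, fa

# Example: a strongly anisotropic tensor (eigenvalues in mm^2/s).
print(dti_scalars(np.array([1.7e-3, 0.3e-3, 0.3e-3])))
```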
Allocentrically implied target locations are updated in an eye-centred reference frame.
Thompson, Aidan A; Glover, Christopher V; Henriques, Denise Y P
2012-04-18
When reaching to remembered target locations following an intervening eye movement, a systematic pattern of error is found, indicating eye-centred updating of visuospatial memory. Here we investigated whether implicit targets, defined only by allocentric visual cues, are also updated in an eye-centred reference frame, as explicit targets are. Participants viewed vertical bars separated by varying distances, and horizontal lines of equivalently varying lengths, implying a "target" location at the midpoint of the stimulus. After determining the implied "target" location from only the allocentric stimuli provided, participants saccaded to an eccentric location and reached to the remembered "target" location. Irrespective of the type of stimulus, reaching errors to these implicit targets are gaze-dependent and do not differ from those found when reaching to remembered explicit targets. Implicit target locations are coded and updated as a function of relative gaze direction with respect to those implied locations, just as explicit targets are, even though no target is specifically represented. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
From Geocentrism to Allocentrism: Teaching the Phases of the Moon in a Digital Full-Dome Planetarium
NASA Astrophysics Data System (ADS)
Chastenay, Pierre
2016-02-01
An increasing number of planetariums worldwide are turning digital, using ultra-fast computers, powerful graphics cards, and high-resolution video projectors to create highly realistic astronomical imagery in real time. This modern technology allows the audience to observe astronomical phenomena from a geocentric as well as an allocentric perspective (the view from space). While the dome creates a sense of immersion, the digital planetarium introduces a new way to teach astronomy, especially for topics that are inherently three-dimensional and for which seeing the phenomenon from different points of view is essential. Like a virtual-reality environment, an immersive digital planetarium helps learners create a more scientifically accurate visualization of astronomical phenomena. In this study, a digital planetarium was used to teach the phases of the Moon to children aged 12 to 14. To fully grasp the lunar phases, one must imagine the spherical Moon (as perceived from space), revolving around the Earth while being illuminated by the Sun, and then reconcile this view with the geocentric perspective. Digital planetariums allow learners to take both an allocentric and a geocentric perspective on the lunar phases. Using a design-experiment approach, we tested an educational scenario in which the lunar phases were taught in an allocentric digital planetarium. Based on qualitative data collected before, during, and after the planetarium intervention, we were able to demonstrate that five out of six participants had a better understanding of the lunar phases after the planetarium session.
Vorhees, Charles V.; Williams, Michael T.
2016-01-01
Advantageous maneuvering through the environment to find food and avoid or escape danger is central to the survival of most animal species. The ability to do so depends on learning and remembering different locations, especially home-base. This capacity is encoded in the brain by two systems: one using cues outside the organism (distal cues) for allocentric navigation, and one using self-movement and internal cues (proximal cues) for egocentric navigation. Whereas allocentric navigation involves the hippocampus, entorhinal cortex, and surrounding structures, egocentric navigation involves the dorsal striatum and connected structures; in humans this system encodes routes and integrated paths and, when over-learned, becomes procedural memory. Allocentric assessment methods have been extensively reviewed elsewhere. The purpose of this paper is to review one specific method for assessing egocentric, route-based navigation in rats: the Cincinnati Water Maze (CWM). The test is an asymmetric multiple-T maze arranged in such a way that rats must learn to find path openings along walls rather than at ends in order to reach the goal. Failing to do this leads to cul-de-sacs and repeated errors. The task may be learned in the light or in the dark, but the dark version, in which distal cues are eliminated, provides the best assessment of egocentric navigation. When used in conjunction with tests of other types of learning, such as allocentric navigation, the CWM provides a balanced approach to assessing the two major forms of navigational learning and memory found in mammals. PMID:27545092
Multiple reference frames in haptic spatial processing
NASA Astrophysics Data System (ADS)
Volčič, R.
2008-08-01
The present thesis focused on haptic spatial processing. In particular, our interest was directed to the perception of spatial relations with the main focus on the perception of orientation. To this end, we studied haptic perception in different tasks, either in isolation or in combination with vision. The parallelity task, where participants have to match the orientations of two spatially separated bars, was used in its two-dimensional and three-dimensional versions in Chapter 2 and Chapter 3, respectively. The influence of non-informative vision and visual interference on performance in the parallelity task was studied in Chapter 4. A different task, the mental rotation task, was introduced in a purely haptic study in Chapter 5 and in a visuo-haptic cross-modal study in Chapter 6. The interaction of multiple reference frames and their influence on haptic spatial processing were the common denominators of these studies. In this thesis we approached the problems of which reference frames play the major role in haptic spatial processing and how the relative roles of distinct reference frames change depending on the available information and the constraints imposed by different tasks. We found that the influence of a reference frame centered on the hand was the major cause of the deviations from veridicality observed in both the two-dimensional and three-dimensional studies. The results were described by a weighted average model, in which the hand-centered egocentric reference frame is supposed to have a biasing influence on the allocentric reference frame. Performance in haptic spatial processing has been shown to depend also on sources of information or processing that are not strictly connected to the task at hand. When non-informative vision was provided, a beneficial effect was observed in the haptic performance. This improvement was interpreted as a shift from the egocentric to the allocentric reference frame. Moreover, interfering visual information presented in the vicinity of the haptic stimuli parametrically modulated the magnitude of the deviations. The influence of the hand-centered reference frame was shown also in the haptic mental rotation task where participants were quicker in judging the parity of objects when these were aligned with respect to the hands than when they were physically aligned. Similarly, in the visuo-haptic cross-modal mental rotation task the parity judgments were influenced by the orientation of the exploring hand with respect to the viewing direction. This effect was shown to be modulated also by an intervening temporal delay that supposedly counteracts the influence of the hand-centered reference frame. We suggest that the hand-centered reference frame is embedded in a hierarchical structure of reference frames where some of these emerge depending on the demands and the circumstances of the surrounding environment and the needs of an active perceiver.
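One minimal reading of the weighted average model mentioned above is that the matched orientation is pulled from the allocentric (veridical) orientation toward a hand-centred egocentric orientation. The sketch below illustrates that idea for axial (180°-periodic) angles; the weight value and the single-number egocentric orientation are illustrative assumptions, not parameters reported in the thesis.

```python
import numpy as np

def predicted_match(allocentric_deg, egocentric_deg, w_ego=0.3):
    """Weighted-average prediction for a haptic parallelity match.

    Orientations are axial (period 180 deg), so the average is taken on the
    doubled angles. `w_ego` is an illustrative weight for the hand-centred
    egocentric frame; the thesis does not report a single fixed value.
    """
    a = np.deg2rad(2 * allocentric_deg)
    e = np.deg2rad(2 * egocentric_deg)
    vec = ((1 - w_ego) * np.array([np.cos(a), np.sin(a)])
           + w_ego * np.array([np.cos(e), np.sin(e)]))
    return float(np.rad2deg(np.arctan2(vec[1], vec[0])) / 2 % 180)

# E.g. reference bar at 90 deg, hand-centred frame pulling towards 50 deg:
print(predicted_match(90.0, 50.0))  # roughly 79 deg, i.e. biased towards the hand
```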
Data Representations for Geographic Information Systems.
ERIC Educational Resources Information Center
Shaffer, Clifford A.
1992-01-01
Surveys the field and literature of geographic information systems (GIS) and spatial data representation as it relates to GIS. Highlights include GIS terms, data types, and operations; vector representations and raster, or grid, representations; spatial indexing; elevation data representations; large spatial databases; and problem areas and future…
Prism adaptation improves ego-centered but not allocentric neglect in early rehabilitation patients.
Gossmann, Anja; Kastrup, Andreas; Kerkhoff, Georg; López-Herrero, Carmen; Hildebrandt, Helmut
2013-01-01
Unilateral neglect due to parieto-temporo-frontal lesions has a negative impact on the success of rehabilitation, and prism adaptation (PA) enhances recovery from neglect. However, it is unclear if this effect holds also in severely impaired patients and/or in the postacute phase of rehabilitation. Moreover, it is not known whether PA affects all aspects of neglect recovery or ego-centered spatial orientation only. Sixteen patients in a postacute stage (on average 36 days after a large right cerebrovascular stroke) were entered into a series of single case design studies with 4 measurements: 2 before and 2 after 1 week of PA treatment. All patients had severe neglect (showing trunk, head, and eye deviation; canceling less than 20% of targets in a visual cancellation test). Lesions were transferred to a standard brain to analyze size and location. Patients improved in cued body orientation and in the cancellation task, that is, in ego-centered neglect. However, none of the measures used to evaluate neglect of left side of objects irrespective of their position on the right or left side of the patient (allocentric neglect) showed an improvement. Treatment effects were not influenced by total lesion size, but lesions including the postcentral cortex were related to smaller recovery gains. PA is helpful in treating severely impaired patients in the postacute phase, but the effect is restricted to ego-centered neglect. Lesions in the postcentral cortex (middle occipito-temporal, middle temporal, and posterior parietal areas) seem to limit the effect of PA.
The Ability of Young Korean Children to Use Spatial Representations
ERIC Educational Resources Information Center
Kim, Minsung; Bednarz, Robert; Kim, Jaeyil
2012-01-01
The National Research Council emphasizes using tools of representation as an essential element of spatial thinking. However, it is debatable at what age the use of spatial representation for spatial thinking skills should begin. This study investigated whether young Korean children possess the potential to understand map-like representation using…
Aging specifically impairs switching to an allocentric navigational strategy
Harris, Mathew A.; Wiener, Jan M.; Wolbers, Thomas
2012-01-01
Navigation abilities decline with age, partly due to deficits in numerous component processes. Impaired switching between these various processes (i.e., switching navigational strategies) is also likely to contribute to age-related navigational impairments. We tested young and old participants on a virtual plus maze task (VPM), expecting older participants to exhibit a specific strategy switching deficit, despite unimpaired learning of allocentric (place) and egocentric (response) strategies following reversals within each strategy. Our initial results suggested that older participants performed worse during place trial blocks but not response trial blocks, as well as in trial blocks following a strategy switch but not those following a reversal. However, we then separated trial blocks by both strategy and change type, revealing that these initial results were due to a more specific deficit in switching to the place strategy. Place reversals and switches to response, as well as response reversals, were unaffected. We argue that this specific “switch-to-place” deficit could account for apparent impairments in both navigational strategy switching and allocentric processing and contributes more generally to age-related decline in navigation. PMID:23125833
Language supports young children’s use of spatial relations to remember locations
Miller, Hilary E.; Patterson, Rebecca; Simmering, Vanessa R.
2016-01-01
Two experiments investigated the role of language in children’s spatial recall performance. In particular, we assessed whether selecting an intrinsic reference frame could be improved through verbal encoding. Selecting an intrinsic reference frame requires remembering locations relative to nearby objects independent of one’s body (egocentric) or distal environmental (allocentric) cues, and does not reliably occur in children under 5 years of age (Nardini, Burgess, Breckenridge, & Atkinson, 2006). The current studies tested the relation between spatial language and 4-year-olds’ selection of an intrinsic reference frame in spatial recall. Experiment 1 showed that providing 4-year-olds with location-descriptive cues during (Exp. 1a) or before (Exp. 1b) the recall task improved performance both overall and specifically on trials relying most on an intrinsic reference frame. Additionally, children’s recall performance was predicted by their verbal descriptions of the task space (Exp. 1a control condition). Non-verbally highlighting relations among objects during the recall task (Exp. 2) supported children’s performance relative to the control condition, but significantly less than the location-descriptive cues. These results suggest that the ability to verbally represent relations is a potential mechanism that could account for developmental changes in the selection of an intrinsic reference frame during spatial recall. PMID:26896902
Effects of visual information regarding allocentric processing in haptic parallelity matching.
Van Mier, Hanneke I
2013-10-01
Research has revealed that haptic perception of parallelity deviates from physical reality. Large and systematic deviations have been found in haptic parallelity matching most likely due to the influence of the hand-centered egocentric reference frame. Providing information that increases the influence of allocentric processing has been shown to improve performance on haptic matching. In this study allocentric processing was stimulated by providing informative vision in haptic matching tasks that were performed using hand- and arm-centered reference frames. Twenty blindfolded participants (ten men, ten women) explored the orientation of a reference bar with the non-dominant hand and subsequently matched (task HP) or mirrored (task HM) its orientation on a test bar with the dominant hand. Visual information was provided by means of informative vision with participants having full view of the test bar, while the reference bar was blocked from their view (task VHP). To decrease the egocentric bias of the hands, participants also performed a visual haptic parallelity drawing task (task VHPD) using an arm-centered reference frame, by drawing the orientation of the reference bar. In all tasks, the distance between and orientation of the bars were manipulated. A significant effect of task was found; performance improved from task HP, to VHP to VHPD, and HM. Significant effects of distance were found in the first three tasks, whereas orientation and gender effects were only significant in tasks HP and VHP. The results showed that stimulating allocentric processing by means of informative vision and reducing the egocentric bias by using an arm-centered reference frame led to most accurate performance on parallelity matching. © 2013 Elsevier B.V. All rights reserved.
Cesa, Gian Luca; Bacchetta, Monica; Castelnuovo, Gianluca; Conti, Sara; Gaggioli, Andrea; Mantovani, Fabrizia; Molinari, Enrico; Cárdenas-López, Georgina; Riva, Giuseppe
2013-01-01
Background Recent research identifies unhealthful weight-control behaviors (fasting, vomiting, or laxative abuse) induced by a negative experience of the body, as the common antecedents of both obesity and eating disorders. In particular, according to the allocentric lock hypothesis, individuals with obesity may be locked to an allocentric (observer view) negative memory of the body that is no longer updated by contrasting egocentric representations driven by perception. In other words, these patients may be locked to an allocentric negative representation of their body that their sensory inputs are no longer able to update even after a demanding diet and a significant weight loss. Objective To test the brief and long-term clinical efficacy of an enhanced cognitive-behavioral therapy including a virtual reality protocol aimed at unlocking the negative memory of the body (ECT) in morbidly obese patients with binge eating disorders (BED) compared with standard cognitive behavior therapy (CBT) and an inpatient multimodal treatment (IP) on weight loss, weight loss maintenance, BED remission, and body satisfaction improvement, including psychonutritional groups, a low-calorie diet (1200 kcal/day), and physical training. Methods 90 obese (BMI>40) female patients with BED upon referral to an obesity rehabilitation center were randomly assigned to conditions (31 to ECT, 30 to CBT, and 29 to IP). Before treatment completion, 24 patients discharged themselves from hospital (4 in ECT, 10 in CBT, and 10 in IP). The remaining 66 inpatients received either 15 sessions of ECT, 15 sessions of CBT, or no additional treatment over a 5-week usual care inpatient regimen (IP). ECT and CBT treatments were administered by 3 licensed psychotherapists, and patients were blinded to conditions. At start, upon completion of the inpatient treatment, and at 1-year follow-up, patients' weight, number of binge eating episodes during the previous month, and body satisfaction were assessed by self-report questionnaires and compared across conditions. 22 patients who received all sessions did not provide follow-up data (9 in ECT, 6 in CBT, and 7 in IP). Results Only ECT was effective at improving weight loss at 1-year follow-up. Conversely, control participants regained on average most of the weight they had lost during the inpatient program. Binge eating episodes decreased to zero during the inpatient program but were reported again in all the three groups at 1-year follow-up. However, a substantial regain was observed only in the group who received the inpatient program alone, while both ECT and CBT were successful in maintaining a low rate of monthly binge eating episodes. Conclusions Despite study limitations, findings support the hypothesis that the integration of a VR-based treatment, aimed at both unlocking the negative memory of the body and at modifying its behavioral and emotional correlates, may improve the long-term outcome of a treatment for obese BED patients. As expected, the VR-based treatment, in comparison with the standard CBT approach, was able to better prevent weight regain but not to better manage binge eating episodes. Trial Registration International Standard Randomized Controlled Trial Number (ISRCTN): 59019572; http://www.controlled-trials.com/ISRCTN59019572 (Archived by WebCite at http://www.webcitation.org/6GxHxAR2G) PMID:23759286
Exploring the Structure of Spatial Representations
Madl, Tamas; Franklin, Stan; Chen, Ke; Trappl, Robert; Montaldi, Daniela
2016-01-01
It has been suggested that the map-like representations that support human spatial memory are fragmented into sub-maps with local reference frames, rather than being unitary and global. However, the principles underlying the structure of these ‘cognitive maps’ are not well understood. We propose that the structure of the representations of navigation space arises from clustering within individual psychological spaces, i.e. from a process that groups together objects that are close in these spaces. Building on the ideas of representational geometry and similarity-based representations in cognitive science, we formulate methods for learning dissimilarity functions (metrics) characterizing participants’ psychological spaces. We show that these learned metrics, together with a probabilistic model of clustering based on the Bayesian cognition paradigm, allow prediction of participants’ cognitive map structures in advance. Apart from insights into spatial representation learning in human cognition, these methods could facilitate novel computational tools capable of using human-like spatial concepts. We also compare several features influencing spatial memory structure, including spatial distance, visual similarity and functional similarity, and report strong correlations between these dimensions and the grouping probability in participants’ spatial representations, providing further support for clustering in spatial memory. PMID:27347681
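To make the clustering idea concrete, the sketch below combines spatial, visual and functional dissimilarity matrices with fixed weights and applies ordinary hierarchical clustering. The weights and the clustering routine are simplified stand-ins: the paper learns participant-specific metrics and uses a Bayesian clustering model rather than this procedure.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def cluster_objects(d_spatial, d_visual, d_functional,
                    weights=(0.6, 0.2, 0.2), cut=1.0):
    """Group objects using a weighted combination of dissimilarity matrices.

    Each matrix is n x n, symmetric, with a zero diagonal. The weights stand
    in for a participant-specific learned metric (illustrative values only).
    Returns a cluster label per object.
    """
    w1, w2, w3 = weights
    d = w1 * np.asarray(d_spatial) + w2 * np.asarray(d_visual) + w3 * np.asarray(d_functional)
    z = linkage(squareform(d, checks=False), method="average")
    return fcluster(z, t=cut, criterion="distance")
```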
Teacher spatial skills are linked to differences in geometry instruction.
Otumfuor, Beryl Ann; Carr, Martha
2017-12-01
Spatial skills have been linked to better performance in mathematics. The purpose of this study was to examine the relationship between teacher spatial skills and their instruction, including teacher content and pedagogical knowledge, use of pictorial representations, and use of gestures during geometry instruction. Fifty-six middle school teachers participated in the study. The teachers were administered spatial measures of mental rotations and spatial visualization. Next, a single geometry class was videotaped. Correlational analyses revealed that spatial skills significantly correlate with teacher's use of representational gestures and content and pedagogical knowledge during instruction of geometry. Spatial skills did not independently correlate with the use of pointing gestures or the use of pictorial representations. However, an interaction term between spatial skills and content and pedagogical knowledge did correlate significantly with the use of pictorial representations. Teacher experience as measured by the number of years of teaching and highest degree did not appear to affect the relationships among the variables with the exception of the relationship between spatial skills and teacher content and pedagogical knowledge. Teachers with better spatial skills are also likely to use representational gestures and to show better content and pedagogical knowledge during instruction. Spatial skills predict pictorial representation use only as a function of content and pedagogical knowledge. © 2017 The British Psychological Society.
Developmental gender differences in children in a virtual spatial memory task.
León, Irene; Cimadevilla, José Manuel; Tascón, Laura
2014-07-01
Behavioral achievements are the product of brain maturation. During postnatal development, the medial temporal lobe completes its maturation, and children acquire new memory abilities. In recent years, virtual reality-based tasks have been introduced in the neuropsychology field to assess different cognitive functions. In this work, desktop virtual reality tasks were combined with classic psychometric tests to assess spatial abilities in 4- to 10-year-old children. Fifty boys and 50 girls aged 4-10 years participated in this study. Spatial reference memory and spatial working memory were assessed using a desktop virtual reality-based task. Other classic psychometric tests were also included in this work (e.g., the Corsi Block Tapping Test, digit tests, 10/36 Spatial Recall Test). In general terms, the 4- and 5-year-old groups showed poorer performance than the older groups. However, 5-year-old children showed basic spatial navigation abilities with little difficulty. In addition, boys outperformed girls in the 6- to 8-year-old groups. Gender differences only emerged in the reference-memory version of the spatial task, whereas both sexes displayed similar performance in the working-memory version. There was general improvement in performance across the different tasks in children older than 5 years. However, the results also suggest that the brain regions involved in allocentric memory are functional even at the age of 5. In addition, the brain structures underlying reference memory mature later in girls than those required for working memory.
Wilkins, Leanne K; Girard, Todd A; Herdman, Katherine A; Christensen, Bruce K; King, Jelena; Kiang, Michael; Bohbot, Veronique D
2017-10-30
Different strategies may be spontaneously adopted to solve most navigation tasks. These strategies are associated with dissociable brain systems. Here, we use brain-imaging and cognitive tasks to test the hypothesis that individuals living with Schizophrenia Spectrum Disorders (SSD) have selective impairment using a hippocampal-dependent spatial navigation strategy. Brain activation and memory performance were examined using functional magnetic resonance imaging (fMRI) during the 4-on-8 virtual maze (4/8VM) task, a human analog of the rodent radial-arm maze that is amenable to both response-based (egocentric or landmark-based) and spatial (allocentric, cognitive mapping) strategies to remember and navigate to target objects. SSD (schizophrenia and schizoaffective disorder) participants who adopted a spatial strategy performed more poorly on the 4/8VM task and had less hippocampal activation than healthy comparison participants using either strategy as well as SSD participants using a response strategy. This study highlights the importance of strategy use in relation to spatial cognitive functioning in SSD. Consistent with a selective-hippocampal dependent deficit in SSD, these results support the further development of protocols to train impaired hippocampal-dependent abilities or harness non-hippocampal dependent intact abilities. Copyright © 2017 Elsevier B.V. All rights reserved.
The Impact of the Brain-Derived Neurotrophic Factor Gene on Trauma and Spatial Processing.
Miller, Jessica K; McDougall, Siné; Thomas, Sarah; Wiener, Jan
2017-11-27
The influence of genes and the environment on the development of Post-Traumatic Stress Disorder (PTSD) continues to motivate neuropsychological research, with one consistent focus being the Brain-Derived Neurotrophic Factor (BDNF) gene, given its impact on the integrity of the hippocampal memory system. Research into human navigation also considers the BDNF gene in relation to hippocampal dependent spatial processing. This speculative paper brings together trauma and spatial processing for the first time and presents exploratory research into their interactions with BDNF. We propose that quantifying the impact of BDNF on trauma and spatial processing is critical and may well explain individual differences in clinical trauma treatment outcomes and in navigation performance. Research has already shown that the BDNF gene influences PTSD severity and prevalence as well as navigation behaviour. However, more data are required to demonstrate the precise hippocampal dependent processing mechanisms behind these influences in different populations and environmental conditions. This paper provides insight from recent studies and calls for further research into the relationship between allocentric processing, trauma processing and BDNF. We argue that research into these neural mechanisms could transform PTSD clinical practice and professional support for individuals in trauma-exposing occupations such as emergency response, law enforcement and the military.
Sex differences in a virtual water maze: an eye tracking and pupillometry study.
Mueller, Sven C; Jackson, Carl P T; Skelton, Ron W
2008-11-21
Sex differences in human spatial navigation are well known. However, the exact strategies that males and females employ in order to navigate successfully around the environment are unclear. While some researchers propose that males prefer environment-centred (allocentric) and females prefer self-centred (egocentric) navigation, these findings have proved difficult to replicate. In the present study we examined eye movements and physiological measures of memory (pupillometry) in order to compare visual scanning of spatial orientation using a human virtual analogue of the Morris Water Maze task. Twelve women and twelve men (average age=24 years) were trained on a visible platform and had to locate an invisible platform over a series of trials. On all but the first trial, participants' eye movements were recorded for 3s and they were asked to orient themselves in the environment. While the behavioural data replicated previous findings of improved spatial performance for males relative to females, distinct sex differences in eye movements were found. Males tended to explore consistently more space early on while females demonstrated initially longer fixation durations and increases in pupil diameter usually associated with memory processing. The eye movement data provides novel insight into differences in navigational strategies between the sexes.
Aggleton, John P; Poirier, Guillaume L; Aggleton, Hugh S; Vann, Seralynne D; Pearce, John M
2009-06-01
The present study used 2 different discrimination tasks designed to isolate distinct components of visuospatial learning: structural learning and geometric learning. Structural learning refers to the ability to learn the precise combination of stimulus identity with stimulus location. Rats with anterior thalamic lesions and fornix lesions were unimpaired on a configural learning task in which the rats learned 3 concurrent mirror-image discriminations (structural learning). Indeed, both lesions led to facilitated learning. In contrast, anterior thalamic lesions impaired the geometric discrimination (e.g., swim to the corner with the short wall to the right of the long wall). Finally, both the fornix and anterior thalamic lesions severely impaired T-maze alternation, a task that taxes an array of spatial strategies including allocentric learning. This pattern of dissociations and double dissociations highlights how distinct classes of spatial learning rely on different systems, even though they may converge on the hippocampus. Consequently, the findings suggest that structural learning is heavily dependent on cortico-hippocampal interactions. In contrast, subcortical inputs (such as those from the anterior thalamus) contribute to geometric learning. Copyright (c) 2009 APA, all rights reserved.
Representation Elements of Spatial Thinking
NASA Astrophysics Data System (ADS)
Fiantika, F. R.
2017-04-01
This paper aims to add a reference point for characterising spatial thinking. There are several definitions of spatial thinking, but it is not easy to define. We can start by discussing the concept and its basis, the forming of representations. Initially, the five senses capture a natural phenomenon and forward it to memory for processing. Abstraction plays a role in processing this information into a concept. There are two types of representation, namely internal representation and external representation. The internal representation is also known as mental representation; this representation is in the human mind. The external representation may include images, auditory and kinesthetic forms, which can be used to describe, explain and communicate the structure, operation and function of an object as well as its relationships. There are two main elements, representation properties and object relationships. These elements play a role in forming a representation.
Han, Yu-Hsuan; Pai, Ming-Chyi; Hong, Chi-Tzong
2011-02-01
The proposed neurological basis of topographical disorientation has recently shifted from a model of navigation that relies on egocentric strategies alone to multiple parallel systems of topographical cognition, including egocentric and allocentric strategies. We explored whether this hypothesis may be applicable to a patient with late-onset blindness. A 72-year-old male with bilateral blindness experienced a sudden inability to navigate after suffering a stroke. Multiple lesions scattered bilaterally throughout the parieto-occipital lobes were found. Deficits in the neural correlates underlying egocentric or allocentric strategies may result in topographical disorientation, even if one appears to be the predominant orientation strategy utilized. Copyright © 2010 Elsevier Ltd. All rights reserved.
Sereno, Anne B.; Lehky, Sidney R.
2011-01-01
Although the representation of space is as fundamental to visual processing as the representation of shape, it has received relatively little attention from neurophysiological investigations. In this study we characterize representations of space within visual cortex, and examine how they differ in a first direct comparison between dorsal and ventral subdivisions of the visual pathways. Neural activities were recorded in anterior inferotemporal cortex (AIT) and lateral intraparietal cortex (LIP) of awake behaving monkeys, structures associated with the ventral and dorsal visual pathways respectively, as a stimulus was presented at different locations within the visual field. In spatially selective cells, we find greater modulation of cell responses in LIP with changes in stimulus position. Further, using a novel population-based statistical approach (namely, multidimensional scaling), we recover the spatial map implicit within activities of neural populations, allowing us to quantitatively compare the geometry of neural space with physical space. We show that a population of spatially selective LIP neurons, despite having large receptive fields, is able to almost perfectly reconstruct stimulus locations within a low-dimensional representation. In contrast, a population of AIT neurons, despite each cell being spatially selective, provide less accurate low-dimensional reconstructions of stimulus locations. They produce instead only a topologically (categorically) correct rendition of space, which nevertheless might be critical for object and scene recognition. Furthermore, we found that the spatial representation recovered from population activity shows greater translation invariance in LIP than in AIT. We suggest that LIP spatial representations may be dimensionally isomorphic with 3D physical space, while in AIT spatial representations may reflect a more categorical representation of space (e.g., “next to” or “above”). PMID:21344010
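The population-based reconstruction described above can be approximated with off-the-shelf multidimensional scaling: build a location-by-location dissimilarity matrix from population response vectors, embed it in two dimensions, and compare the embedding to the physical stimulus layout. The data layout, the 1 - correlation dissimilarity and the Procrustes comparison below are illustrative choices, not necessarily those used in the study.

```python
import numpy as np
from sklearn.manifold import MDS
from scipy.spatial import procrustes

def recover_spatial_map(responses, physical_xy):
    """Recover the stimulus-location geometry implicit in population activity.

    `responses` is (n_neurons, n_locations): trial-averaged firing rates for
    each stimulus position (hypothetical data layout). Dissimilarity between
    two locations is 1 - correlation of their population response vectors.
    """
    r = np.corrcoef(np.asarray(responses).T)   # location-by-location correlations
    dissim = 1.0 - r
    mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
    recovered = mds.fit_transform(dissim)
    # Procrustes alignment: lower disparity = geometry closer to physical space.
    _, _, disparity = procrustes(physical_xy, recovered)
    return recovered, disparity
```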
Squeezing, Striking, and Vocalizing: Is Number Representation Fundamentally Spatial?
ERIC Educational Resources Information Center
Nunez, Rafael; Doan, D.; Nikoulina, Anastasia
2011-01-01
Numbers are fundamental entities in mathematics, but their cognitive bases are unclear. Abundant research points to linear space as a natural grounding for number representation. But, is number representation fundamentally spatial? We disentangle number representation from standard number-to-line reporting methods, and compare numerical…
Barra, Julien; Laou, Laetitia; Poline, Jean-Baptiste; Lebihan, Denis; Berthoz, Alain
2012-01-01
Perspective (route or survey) during the encoding of spatial information can influence recall and navigation performance. In our experiment we investigated a third type of perspective, which is a slanted view. This slanted perspective is a compromise between route and survey perspectives, offering both information about landmarks as in route perspective and geometric information as in survey perspective. We hypothesized that the use of slanted perspective would allow the brain to use either egocentric or allocentric strategies during storage and recall. Twenty-six subjects were scanned (3-Tesla fMRI) during the encoding of a path (40-s navigation movie within a virtual city). They were given the task of encoding a segment of travel in the virtual city and of subsequent shortcut-finding for each perspective: route, slanted and survey. The analysis of the behavioral data revealed that perspective influenced response accuracy, with significantly more correct responses for slanted and survey perspectives than for route perspective. Comparisons of brain activation with route, slanted, and survey perspectives suggested that slanted and survey perspectives share common brain activity in the left lingual and fusiform gyri and lead to very similar behavioral performance. Slanted perspective was also associated with similar activation to route perspective during encoding in the right middle occipital gyrus. Furthermore, slanted perspective induced intermediate patterns of activation (in between route and survey) in some brain areas, such as the right lingual and fusiform gyri. Our results suggest that the slanted perspective may be considered as a hybrid perspective. This result offers the first empirical support for the choice to present the slanted perspective in many navigational aids. PMID:23209583
Krause, Florian; Lindemann, Oliver; Toni, Ivan; Bekkering, Harold
2014-04-01
A dominant hypothesis on how the brain processes numerical size proposes a spatial representation of numbers as positions on a "mental number line." An alternative hypothesis considers numbers as elements of a generalized representation of sensorimotor-related magnitude, which is not obligatorily spatial. Here we show that individuals' relative use of spatial and nonspatial representations has a cerebral counterpart in the structural organization of the posterior parietal cortex. Interindividual variability in the linkage between numbers and spatial responses (faster left responses to small numbers and right responses to large numbers; spatial-numerical association of response codes effect) correlated with variations in gray matter volume around the right precuneus. Conversely, differences in the disposition to link numbers to force production (faster soft responses to small numbers and hard responses to large numbers) were related to gray matter volume in the left angular gyrus. This finding suggests that numerical cognition relies on multiple mental representations of analogue magnitude using different neural implementations that are linked to individual traits.
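One common way to quantify the number-space linkage mentioned here (the spatial-numerical association of response codes, SNARC) is to regress, per participant, the right-minus-left response-time difference on number magnitude; a more negative slope indicates a stronger association of small numbers with the left and large numbers with the right. The sketch below uses made-up response times purely for illustration.

```python
import numpy as np

def snarc_slope(digits, rt_left, rt_right):
    """Individual SNARC index: regress dRT = RT_right - RT_left on digit
    magnitude. `rt_left` / `rt_right` hold mean response times (ms) per digit."""
    d_rt = np.asarray(rt_right, dtype=float) - np.asarray(rt_left, dtype=float)
    slope, _intercept = np.polyfit(np.asarray(digits, dtype=float), d_rt, 1)
    return slope

# Illustrative (made-up) means for digits 1-4 and 6-9.
digits = [1, 2, 3, 4, 6, 7, 8, 9]
print(snarc_slope(digits,
                  rt_left=[480, 485, 490, 495, 505, 510, 515, 520],
                  rt_right=[520, 515, 510, 505, 495, 490, 485, 480]))  # negative slope
```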
Specific to Whose Body? Perspective-Taking and the Spatial Mapping of Valence
Kominsky, Jonathan F.; Casasanto, Daniel
2013-01-01
People tend to associate the abstract concepts of “good” and “bad” with their fluent and disfluent sides of space, as determined by their natural handedness or by experimental manipulation (Casasanto, 2011). Here we investigated influences of spatial perspective taking on the spatialization of “good” and “bad.” In the first experiment, participants indicated where a schematically drawn cartoon character would locate “good” and “bad” stimuli. Right-handers tended to assign “good” to the right and “bad” to the left side of egocentric space when the character shared their spatial perspective, but when the character was rotated 180° this spatial mapping was reversed: good was assigned to the character’s right side, not the participant’s. The tendency to spatialize valence from the character’s perspective was stronger in the second experiment, when participants were shown a full-featured photograph of the character. In a third experiment, most participants not only spatialized “good” and “bad” from the character’s perspective, they also based their judgments on a salient attribute of the character’s body (an injured hand) rather than their own body. Taking another’s spatial perspective encourages people to compute space-valence mappings using an allocentric frame of reference, based on the fluency with which the other person could perform motor actions with their right or left hand. When people reason from their own spatial perspective, their judgments depend, in part, on the specifics of their bodies; when people reason from someone else’s perspective, their judgments may depend on the specifics of the other person’s body, instead. PMID:23717296
Reference frames in virtual spatial navigation are viewpoint dependent
Török, Ágoston; Nguyen, T. Peter; Kolozsvári, Orsolya; Buchanan, Robert J.; Nadasdy, Zoltan
2014-01-01
Spatial navigation in the mammalian brain relies on a cognitive map of the environment. Such cognitive maps enable us, for example, to take the optimal route from a given location to a known target. The formation of these maps is naturally influenced by our perception of the environment, meaning it is dependent on factors such as our viewpoint and choice of reference frame. Yet, it is unknown how these factors influence the construction of cognitive maps. Here, we evaluated how various combinations of viewpoints and reference frames affect subjects' performance when they navigated in a bounded virtual environment without landmarks. We measured both their path length and time efficiency and found that (1) ground perspective was associated with egocentric frame of reference, (2) aerial perspective was associated with allocentric frame of reference, (3) there was no appreciable performance difference between first and third person egocentric viewing positions and (4) while none of these effects were dependent on gender, males tended to perform better in general. Our study provides evidence that there are inherent associations between visual perspectives and cognitive reference frames. This result has implications about the mechanisms of path integration in the human brain and may also inspire designs of virtual reality applications. Lastly, we demonstrated the effective use of a tablet PC and spatial navigation tasks for studying spatial and cognitive aspects of human memory. PMID:25249956
Spatial Representation by Blind and Sighted Children
ERIC Educational Resources Information Center
Millar, Susanna
1976-01-01
Problem studied: How children represent haptic spatial information in memory. Question aimed at: Whether, and if so in what ways, children's spatial representations differ according to the main modality of prior experience. (JH)
NASA Astrophysics Data System (ADS)
Fan, Jiayuan; Tan, Hui Li; Toomik, Maria; Lu, Shijian
2016-10-01
Spatial pyramid matching has demonstrated its power for image recognition tasks by pooling features from spatially increasingly fine sub-regions. Motivated by the concept of feature pooling at multiple pyramid levels, we propose a novel spectral-spatial hyperspectral image classification approach using a superpixel-based spatial pyramid representation. This technique first generates multiple superpixel maps by gradually decreasing the number of superpixels, so that the spatial regions around labelled samples grow progressively larger. Using every superpixel map, a sparse representation of the pixels within every spatial region is then computed through local max pooling. Finally, features learned from training samples are aggregated and used to train a support vector machine (SVM) classifier. The proposed spectral-spatial hyperspectral image classification technique has been evaluated on two public hyperspectral datasets, including the Indian Pines image containing 16 different agricultural scene categories with a 20 m resolution acquired by AVIRIS and the University of Pavia image containing 9 land-use categories with a 1.3 m spatial resolution acquired by the ROSIS-03 sensor. Experimental results show significantly improved performance compared with state-of-the-art works. The major contributions of this proposed technique include (1) a new spectral-spatial classification approach to generate feature representations for hyperspectral images, (2) a complementary yet effective feature pooling approach, i.e. the superpixel-based spatial pyramid representation used to capture spatial correlations, and (3) evaluation on two public hyperspectral image datasets with superior image classification performance.
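The pooling-and-classification stage described above can be sketched roughly as follows, assuming per-pixel sparse codes and a set of superpixel label maps have already been computed; the function names, data shapes, and the linear-kernel SVM are illustrative assumptions rather than the authors' exact pipeline.

```python
import numpy as np
from sklearn.svm import SVC

def pixel_pyramid_feature(codes, superpixel_maps, row, col):
    """Build a spectral-spatial feature for one labelled pixel by max-pooling
    per-pixel sparse codes within the superpixel that contains it, at every
    pyramid level, then concatenating the pooled vectors.

    codes           : (H, W, D) array of per-pixel sparse codes
    superpixel_maps : list of (H, W) integer label maps, one per pyramid level
                      (coarser levels use fewer, larger superpixels)
    """
    feats = []
    for labels in superpixel_maps:
        mask = labels == labels[row, col]       # superpixel containing this pixel
        feats.append(codes[mask].max(axis=0))   # local max pooling of sparse codes
    return np.concatenate(feats)                # length = D * number of levels

def train_classifier(codes, superpixel_maps, train_coords, train_labels):
    """Aggregate pooled features for the labelled training pixels and fit a
    linear-kernel SVM (a stand-in for the classifier used in the paper)."""
    X = np.stack([pixel_pyramid_feature(codes, superpixel_maps, r, c)
                  for r, c in train_coords])
    clf = SVC(kernel="linear")
    clf.fit(X, train_labels)
    return clf
```

At test time the same per-pixel pyramid feature would be computed for unlabelled pixels and passed to `clf.predict`; the sparse-coding and superpixel-segmentation steps are omitted here.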
The role of the right superior temporal gyrus in stimulus-centered spatial processing.
Shah-Basak, Priyanka P; Chen, Peii; Caulfield, Kevin; Medina, Jared; Hamilton, Roy H
2018-05-01
Although emerging neuropsychological evidence supports the involvement of temporal areas, and in particular the right superior temporal gyrus (STG), in allocentric neglect deficits, the role of STG in healthy spatial processing remains elusive. While several functional brain imaging studies have demonstrated involvement of the STG in tasks involving explicit stimulus-centered judgments, prior rTMS studies targeting the right STG did not find the expected neglect-like rightward bias in size judgments using the conventional landmark task. The objective of the current study was to investigate whether disruption of the right STG using inhibitory repetitive transcranial magnetic stimulation (rTMS) could impact stimulus-centered, allocentric spatial processing in healthy individuals. A lateralized version of the landmark task was developed to accentuate the dissociation between viewer-centered and stimulus-centered reference frames. We predicted that inhibiting activity in the right STG would decrease accuracy because of induced rightward bias centered on the line stimulus irrespective of its viewer-centered or egocentric locations. Eleven healthy, right-handed adults underwent the lateralized landmark task. After viewing each stimulus, participants had to judge whether the line was bisected, or whether the left (left-long trials) or the right segment (right-long trials) of the line was longer. Participants repeated the task before (pre-rTMS) and after (post-rTMS) receiving 20 min of 1 Hz rTMS over the right STG, the right supramarginal gyrus (SMG), and the vertex (a control site) during three separate visits. Linear mixed models for binomial data were generated with either accuracy or judgment errors as dependent variables, to compare 1) performance across trial types (bisection, non-bisection), and 2) pre- vs. post-rTMS performance between the vertex and the STG and the vertex and the SMG. Line eccentricity (z = 4.31, p < 0.0001) and line bisection (z = 5.49, p < 0.0001) were significant predictors of accuracy. In the models comparing the effects of rTMS, a significant two-way interaction with STG (z = -3.09, p = 0.002) revealed a decrease in accuracy of 9.5% and an increase in errors of the right-long type by 10.7% on bisection trials, in both left and right viewer-centered locations. No significant changes in leftward errors were found. These findings suggested an induced stimulus-centered rightward bias in our participants after STG stimulation. Notably, accuracy or errors were not influenced by SMG stimulation compared to vertex. In line with our predictions, the findings provide compelling evidence for right STG's involvement in healthy stimulus-centered spatial processing. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Kaiser, Mary Kister; Remington, Roger
1988-01-01
Spatial cognition is the ability to reason about geometric relationships in the real (or a metaphorical) world based on one or more internal representations of those relationships. The study of spatial cognition is concerned with the representation of spatial knowledge, and our ability to manipulate these representations to solve spatial problems. Spatial cognition is utilized most critically when direct perceptual cues are absent or impoverished. Examples are provided of how human spatial cognitive abilities impact on three areas of space station operator performance: orientation, path planning, and data base management. A videotape provides demonstrations of relevant phenomena (e.g., the importance of orientation for recognition of complex, configural forms). The presentation is represented by abstract and overhead visuals only.
Chareyron, Loïc J; Banta Lavenex, Pamela; Amaral, David G; Lavenex, Pierre
2017-12-01
Hippocampal damage in adult humans impairs episodic and semantic memory, whereas hippocampal damage early in life impairs episodic memory but leaves semantic learning relatively preserved. We have previously shown a similar behavioral dissociation in nonhuman primates. Hippocampal lesion in adult monkeys prevents allocentric spatial relational learning, whereas spatial learning persists following neonatal lesion. Here, we quantified the number of cells expressing the immediate-early gene c-fos, a marker of neuronal activity, to characterize the functional organization of the medial temporal lobe memory system following neonatal hippocampal lesion. Ninety minutes before brain collection, three control and four adult monkeys with bilateral neonatal hippocampal lesions explored a novel environment to activate brain structures involved in spatial learning. Three other adult monkeys with neonatal hippocampal lesions remained in their housing quarters. In unlesioned monkeys, we found high levels of c-fos expression in the intermediate and caudal regions of the entorhinal cortex, and in the perirhinal, parahippocampal, and retrosplenial cortices. In lesioned monkeys, spatial exploration induced an increase in c-fos expression in the intermediate field of the entorhinal cortex, the perirhinal, parahippocampal, and retrosplenial cortices, but not in the caudal entorhinal cortex. These findings suggest that different regions of the medial temporal lobe memory system may require different types of interaction with the hippocampus in support of memory. The caudal perirhinal cortex, the parahippocampal cortex, and the retrosplenial cortex may contribute to spatial learning in the absence of functional hippocampal circuits, whereas the caudal entorhinal cortex may require hippocampal output to support spatial learning.
Transformations and representations supporting spatial perspective taking
Yu, Alfred B.; Zacks, Jeffrey M.
2018-01-01
Spatial perspective taking is the ability to reason about spatial relations relative to another’s viewpoint. Here, we propose a mechanistic hypothesis that relates mental representations of one’s viewpoint to the transformations used for spatial perspective taking. We test this hypothesis using a novel behavioral paradigm that assays patterns of response time and variation in those patterns across people. The results support the hypothesis that people maintain a schematic representation of the space around their body, update that representation to take another’s perspective, and thereby to reason about the space around their body. This is a powerful computational mechanism that can support imitation, coordination of behavior, and observational learning. PMID:29545731
3D hierarchical spatial representation and memory of multimodal sensory data
NASA Astrophysics Data System (ADS)
Khosla, Deepak; Dow, Paul A.; Huber, David J.
2009-04-01
This paper describes an efficient method and system for representing, processing and understanding multi-modal sensory data. More specifically, it describes a computational method and system for how to process and remember multiple locations in multimodal sensory space (e.g., visual, auditory, somatosensory, etc.). The multimodal representation and memory are based on a biologically-inspired hierarchy of spatial representations implemented with novel analogues of real representations used in the human brain. The novelty of the work is in the computationally efficient and robust spatial representation of 3D locations in multimodal sensory space as well as an associated working memory for storage and recall of these representations at the desired level for goal-oriented action. We describe (1) a simple and efficient method for human-like hierarchical spatial representations of sensory data and how to associate, integrate and convert between these representations (head-centered coordinate system, body-centered coordinate system, etc.); (2) a robust method for training and learning a mapping of points in multimodal sensory space (e.g., camera-visible object positions, location of auditory sources, etc.) to the above hierarchical spatial representations; and (3) a specification and implementation of a hierarchical spatial working memory based on the above for storage and recall at the desired level for goal-oriented action(s). This work is most useful for any machine or human-machine application that requires processing of multimodal sensory inputs, making sense of them from a spatial perspective (e.g., where the sensory information is coming from with respect to the machine and its parts) and then taking some goal-oriented action based on this spatial understanding. A multi-level spatial representation hierarchy means that heterogeneous sensory inputs (e.g., visual, auditory, somatosensory, etc.) can map onto the hierarchy at different levels. When controlling various machine/robot degrees of freedom, the desired movements and actions can be computed from these different levels in the hierarchy. The most basic embodiment of this machine could be a pan-tilt camera system, an array of microphones, a machine with an arm/hand-like structure, and/or a robot with some or all of the above capabilities. We describe the approach and system, and present preliminary results on a real robotic platform.
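As a rough illustration of one piece of such a hierarchy, the sketch below converts a 3D location between a head-centered and a body-centered frame using a rotation about the vertical axis plus a translation; the frame parameters, angles, and function names are hypothetical placeholders for the richer representations described in the paper.

```python
import numpy as np

def rotation_z(yaw_rad):
    """Rotation matrix about the vertical (z) axis by a yaw angle in radians."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def head_to_body(p_head, head_yaw_rad, head_offset_body):
    """Express a head-centered point in body-centered coordinates.

    p_head           : (3,) point in the head frame
    head_yaw_rad     : yaw of the head relative to the body
    head_offset_body : (3,) position of the head origin in the body frame
    """
    return rotation_z(head_yaw_rad) @ np.asarray(p_head) + np.asarray(head_offset_body)

def body_to_head(p_body, head_yaw_rad, head_offset_body):
    """Inverse mapping: express a body-centered point in the head frame."""
    return rotation_z(-head_yaw_rad) @ (np.asarray(p_body) - np.asarray(head_offset_body))

# Example: a sound source 1 m ahead of the head, with the head turned 30 degrees
# and its origin 0.4 m above the body-frame origin.
p_body = head_to_body([1.0, 0.0, 0.0], np.deg2rad(30), [0.0, 0.0, 0.4])
```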
Dima, Diana C; Perry, Gavin; Singh, Krish D
2018-06-11
In navigating our environment, we rapidly process and extract meaning from visual cues. However, the relationship between visual features and categorical representations in natural scene perception is still not well understood. Here, we used natural scene stimuli from different categories and filtered at different spatial frequencies to address this question in a passive viewing paradigm. Using representational similarity analysis (RSA) and cross-decoding of magnetoencephalography (MEG) data, we show that categorical representations emerge in human visual cortex at ∼180 ms and are linked to spatial frequency processing. Furthermore, dorsal and ventral stream areas reveal temporally and spatially overlapping representations of low and high-level layer activations extracted from a feedforward neural network. Our results suggest that neural patterns from extrastriate visual cortex switch from low-level to categorical representations within 200 ms, highlighting the rapid cascade of processing stages essential in human visual perception. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
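A schematic sketch of the representational similarity analysis (RSA) logic referred to above: a neural representational dissimilarity matrix (RDM) is computed from sensor patterns at each time point and correlated with a model RDM (here a hypothetical two-category model). The data shapes, metrics, and names are placeholders, not the authors' actual MEG pipeline.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rsa_timecourse(patterns, model_rdm_vec):
    """Correlate neural and model representational dissimilarity matrices
    (RDMs) at every time point.

    patterns      : (conditions, sensors, timepoints) trial-averaged MEG data
    model_rdm_vec : condensed (upper-triangular) model RDM
    """
    n_cond, _, n_time = patterns.shape
    rho = np.empty(n_time)
    for t in range(n_time):
        # 'correlation' distance gives 1 - Pearson r between condition patterns
        neural_rdm = pdist(patterns[:, :, t], metric='correlation')
        rho[t] = spearmanr(neural_rdm, model_rdm_vec).correlation
    return rho

# Hypothetical model RDM: 8 conditions from 2 scene categories; same-category
# pairs are coded as similar (0), different-category pairs as dissimilar (1).
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])
model_rdm_vec = pdist(labels[:, None], metric='hamming')
```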
Cerebellar contribution to mental rotation: a cTBS study.
Picazio, Silvia; Oliveri, Massimiliano; Koch, Giacomo; Caltagirone, Carlo; Petrosini, Laura
2013-12-01
A role for the cerebellum in spatial information processing has been proposed even in the absence of physical manipulation, as occurs in mental rotation. The present study was aimed at investigating the specific involvement of the left and right cerebellar hemispheres in two mental rotation tasks. We used continuous theta burst stimulation to downregulate cerebellar hemisphere excitability in healthy adult subjects performing two mental rotation tasks: an Embodied Mental Rotation (EMR) task, entailing an egocentric strategy, and an Abstract Mental Rotation (AMR) task, entailing an allocentric strategy. Following downregulation of the left cerebellar hemisphere, reaction times were slower in both the EMR and AMR tasks in comparison to sham stimulation. Conversely, identical reaction times were obtained in both tasks following right cerebellar hemisphere and sham stimulation. No effect of cerebellar stimulation side was found on response accuracy. The present findings document a specialization of the left cerebellar hemisphere in mental rotation regardless of the kind of stimulus to be rotated.
Embodied Interaction Priority: Other's Body Part Affects Numeral-Space Mappings.
You, Xuqun; Zhang, Yu; Zhu, Rongjuan; Guo, Yu
2018-01-01
Traditionally, the spatial-numerical association of response codes (SNARC) effect has been examined in a two-choice condition, in which a single individual reacts to both even (small) and odd (large) numbers. Few studies have explored the SNARC effect in a social situation. Moreover, many reference frames are involved in the SNARC effect, and it has not yet been investigated which reference frame dominates when two participants perform a go-nogo task together. In the present study, we investigated which reference frame plays the primary role in the SNARC effect when allocentric and egocentric reference frames are consistent or inconsistent in social settings. Furthermore, we explored how two actors interactively co-represent number-space mappings. Results of the two experiments demonstrated that the egocentric reference frame was primarily at work both when the two reference frames were consistent and when they were inconsistent. This shows that body-centered coordinate frames influence number-space mapping in social settings, and that one actor may represent another actor's actions and tasks.
Language, Perception, and the Schematic Representation of Spatial Relations
ERIC Educational Resources Information Center
Amorapanth, Prin; Kranjec, Alexander; Bromberger, Bianca; Lehet, Matthew; Widick, Page; Woods, Adam J.; Kimberg, Daniel Y.; Chatterjee, Anjan
2012-01-01
Schemas are abstract nonverbal representations that parsimoniously depict spatial relations. Despite their ubiquitous use in maps and diagrams, little is known about their neural instantiation. We sought to determine the extent to which schematic representations are neurally distinguished from language on the one hand, and from rich perceptual…
Representation control increases task efficiency in complex graphical representations.
Moritz, Julia; Meyerhoff, Hauke S; Meyer-Dernbecher, Claudia; Schwan, Stephan
2018-01-01
In complex graphical representations, the relevant information for a specific task is often distributed across multiple spatial locations. In such situations, understanding the representation requires internal transformation processes in order to extract the relevant information. However, digital technology enables observers to alter the spatial arrangement of depicted information and therefore to offload the transformation processes. The objective of this study was to investigate the use of such representation control (i.e. the users' option to decide how information should be displayed) when accomplishing an information extraction task, in terms of solution time and accuracy. In the representation control condition, the participants were allowed to reorganize the graphical representation and reduce information density. In the control condition, no interactive features were offered. We observed that participants in the representation control condition solved tasks that required reorganization of the maps faster and more accurately than participants without representation control. The present findings demonstrate how processes of cognitive offloading, spatial contiguity, and information coherence interact in knowledge media intended for broad and diverse groups of recipients.
ERIC Educational Resources Information Center
Gao, Zaifeng; Bentin, Shlomo
2011-01-01
Face perception studies investigated how spatial frequencies (SF) are extracted from retinal display while forming a perceptual representation, or their selective use during task-imposed categorization. Here we focused on the order of encoding low-spatial frequencies (LSF) and high-spatial frequencies (HSF) from perceptual representations into…
Attention modulates spatial priority maps in the human occipital, parietal and frontal cortices
Sprague, Thomas C.; Serences, John T.
2014-01-01
Computational theories propose that attention modulates the topographical landscape of spatial ‘priority’ maps in regions of visual cortex so that the location of an important object is associated with higher activation levels. While single-unit recording studies have demonstrated attention-related increases in the gain of neural responses and changes in the size of spatial receptive fields, the net effect of these modulations on the topography of region-level priority maps has not been investigated. Here, we used fMRI and a multivariate encoding model to reconstruct spatial representations of attended and ignored stimuli using activation patterns across entire visual areas. These reconstructed spatial representations reveal the influence of attention on the amplitude and size of stimulus representations within putative priority maps across the visual hierarchy. Our results suggest that attention increases the amplitude of stimulus representations in these spatial maps, particularly in higher visual areas, but does not substantively change their size. PMID:24212672
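A compact sketch of the general logic of a multivariate (inverted) encoding model of the kind referenced above: spatial "channels" with known tuning are fit to voxel responses by least squares, and the fit is then inverted to reconstruct a spatial profile from new activation patterns. The Gaussian basis, dimensions, and variable names here are illustrative, not the authors' exact model.

```python
import numpy as np

def spatial_channel_basis(positions, centers, width):
    """Design matrix of Gaussian spatial channels: one column per channel,
    one row per stimulus position."""
    d = positions[:, None] - centers[None, :]
    return np.exp(-0.5 * (d / width) ** 2)

def fit_encoding_model(voxels_train, C_train):
    """Estimate channel-to-voxel weights W (channels x voxels) by least
    squares, assuming voxels ≈ C @ W."""
    W, *_ = np.linalg.lstsq(C_train, voxels_train, rcond=None)
    return W

def invert_model(voxels_test, W):
    """Reconstruct channel responses from new voxel patterns:
    C_hat = voxels @ W^T @ (W W^T)^-1."""
    return voxels_test @ W.T @ np.linalg.inv(W @ W.T)

# Illustrative use with simulated data
rng = np.random.default_rng(0)
positions = rng.uniform(-10, 10, size=200)        # stimulus positions (deg)
centers = np.linspace(-10, 10, 8)                 # 8 spatial channels
C = spatial_channel_basis(positions, centers, width=4.0)
true_W = rng.normal(size=(8, 50))                 # 50 simulated voxels
voxels = C @ true_W + 0.1 * rng.normal(size=(200, 50))
W_hat = fit_encoding_model(voxels[:150], C[:150])
C_hat = invert_model(voxels[150:], W_hat)         # reconstructed spatial profiles
```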
Spatial displacement of numbers on a vertical number line in spatial neglect.
Mihulowicz, Urszula; Klein, Elise; Nuerk, Hans-Christoph; Willmes, Klaus; Karnath, Hans-Otto
2015-01-01
Previous studies that investigated the association of numbers and space in humans came to contradictory conclusions about the spatial character of the mental number magnitude representation and about how it may be influenced by unilateral spatial neglect. The present study aimed to disentangle the debated influence of perceptual vs. representational aspects via explicit mapping of numbers onto space by applying the number line estimation paradigm with vertical orientation of stimulus lines. Thirty-five acute right-brain damaged stroke patients (6 with neglect) were asked to place two-digit numbers on vertically oriented lines with 0 marked at the bottom and 100 at the top. In contrast to the expected, nearly linear mapping in the control patient group, patients with spatial neglect overestimated the position of numbers in the lower middle range. The results corroborate spatial characteristics of the number magnitude representation. In neglect patients, this representation seems to be biased towards the ipsilesional side, independent of the physical orientation of the task stimuli.
Insight into others' minds: spatio-temporal representations by intrinsic frame of reference.
Sun, Yanlong; Wang, Hongbin
2014-01-01
Recent research has seen a growing interest in connections between the domains of spatial and social cognition. Much evidence indicates that processes of representing space in distinct frames of reference (FOR) contribute to basic spatial abilities as well as to sophisticated social abilities such as tracking others' intentions and beliefs. It has been argued, however, that belief reasoning in the social domain requires an innately dedicated system and cannot be reduced to low-level encoding of spatial relationships. Here we offer an integrated account advocating the critical roles of spatial representations in an intrinsic frame of reference. By re-examining the results from a spatial task (Tamborello et al., 2012) and a false-belief task (Onishi and Baillargeon, 2005), we argue that spatial and social abilities share a common origin at the level of spatio-temporal association and predictive learning, where multiple FOR-based representations provide the basic building blocks for efficient and flexible partitioning of the environmental statistics. We also discuss neuroscience evidence supporting these mechanisms. We conclude that FOR-based representations may bridge the conceptual as well as the implementation gaps between the burgeoning fields of social and spatial cognition.
Common Neural Representations for Visually Guided Reorientation and Spatial Imagery
Vass, Lindsay K.; Epstein, Russell A.
2017-01-01
Abstract Spatial knowledge about an environment can be cued from memory by perception of a visual scene during active navigation or by imagination of the relationships between nonvisible landmarks, such as when providing directions. It is not known whether these different ways of accessing spatial knowledge elicit the same representations in the brain. To address this issue, we scanned participants with fMRI, while they performed a judgment of relative direction (JRD) task that required them to retrieve real-world spatial relationships in response to either pictorial or verbal cues. Multivoxel pattern analyses revealed several brain regions that exhibited representations that were independent of the cues to access spatial memory. Specifically, entorhinal cortex in the medial temporal lobe and the retrosplenial complex (RSC) in the medial parietal lobe coded for the heading assumed on a particular trial, whereas the parahippocampal place area (PPA) contained information about the starting location of the JRD. These results demonstrate the existence of spatial representations in RSC, ERC, and PPA that are common to visually guided navigation and spatial imagery. PMID:26759482
Using eye movements to explore mental representations of space.
Fourtassi, Maryam; Rode, Gilles; Pisella, Laure
2017-06-01
Visual mental imagery is a cognitive experience characterised by the activation of the mental representation of an object or scene in the absence of the corresponding stimulus. According to the analogical theory, mental representations have a pictorial nature that preserves the spatial characteristics of the environment that is mentally represented. This cognitive experience shares many similarities with the experience of visual perception, including eye movements. The mental visualisation of a scene is accompanied by eye movements that reflect the spatial content of the mental image, and which can mirror the deformations of this mental image with respect to the real image, such as asymmetries or size reduction. The present article offers a concise overview of the main theories explaining the interactions between eye movements and mental representations, with some examples of the studies supporting them. It also aims to explain how ocular-tracking could be a useful tool in exploring the dynamics of spatial mental representations, especially in pathological situations where these representations can be altered, for instance in unilateral spatial neglect. Copyright © 2016 Elsevier Masson SAS. All rights reserved.
Acute administration of THC impairs spatial but not associative memory function in zebrafish.
Ruhl, Tim; Prinz, Nicole; Oellers, Nadine; Seidel, Nathan Ian; Jonas, Annika; Albayram, Onder; Bilkei-Gorzo, Andras; von der Emde, Gerhard
2014-10-01
The present study examined the effect of acute administration of the endocannabinoid receptor CB1 ligand ∆-9-tetrahydrocannabinol (THC) on intracellular signalling in the brain and on retrieval from two different memory systems in the zebrafish (Danio rerio). First, fish were treated with THC and changes in the phosphorylation level of the mitogen-activated protein (MAP) kinases Akt and Erk in the brain were determined 1 h after drug treatment. Next, animals of a second group learned in a two-alternative choice paradigm to discriminate between two colours, whereas a third group solved a spatial cognition task in an open-field maze using an ego-allocentric strategy. After memory acquisition and consolidation, animals were pharmacologically treated using the same treatment regime as the first group and then tested again for memory retrieval. We found enhanced Erk but not Akt phosphorylation, suggesting that THC treatment specifically activated Erk signalling in the zebrafish telencephalon. While the CB1 agonist THC did not affect the behavioural performance of animals in the colour discrimination paradigm, spatial memory was significantly impaired. The effect of THC on spatial learning is probably specific, since neither motor activity nor anxiety-related behaviour was influenced by the drug treatment. This indicates a striking influence of the endocannabinoid system (ECS) on spatial cognition in zebrafish. The results are highly consistent with reports on mammals, demonstrating that the ECS is functionally highly conserved across vertebrate evolution. We further conclude that the zebrafish provides a promising model organism for ongoing research on the ECS.
Durán, América P.; Duffy, James P.; Gaston, Kevin J.
2014-01-01
Agroecosystems have traditionally been considered incompatible with biological conservation goals, and often been excluded from spatial conservation prioritization strategies. The consequences for the representativeness of identified priority areas have been little explored. Here, we evaluate these for biodiversity and carbon storage representation when agricultural land areas are excluded from a spatial prioritization strategy for South America. Comparing different prioritization approaches, we also assess how the spatial overlap of priority areas changes. The exclusion of agricultural lands was detrimental to biodiversity representation, indicating that priority areas for agricultural production overlap with areas of relatively high occurrence of species. By contrast, exclusion of agricultural lands benefits representation of carbon storage within priority areas, as lands of high value for agriculture and carbon storage overlap little. When agricultural lands were included and equally weighted with biodiversity and carbon storage, a balanced representation resulted. Our findings suggest that with appropriate management, South American agroecosystems can significantly contribute to biodiversity conservation. PMID:25143040
Burgess, Jed D; Arnold, Sara L; Fitzgibbon, Bernadette M; Fitzgerald, Paul B; Enticott, Peter G
2013-01-01
Mirror neurons are a class of motor neuron that are active during both the performance and observation of behavior, and have been implicated in interpersonal understanding. There is evidence to suggest that the mirror response is modulated by the perspective from which an action is presented (e.g., egocentric or allocentric). Most human research, however, has only examined this when presenting intransitive actions. Twenty-three healthy adult participants completed a transcranial magnetic stimulation experiment that assessed corticospinal excitability whilst viewing transitive hand gestures from both egocentric (i.e., self) and allocentric (i.e., other) viewpoints. Although action observation was associated with increases in corticospinal excitability (reflecting putative human mirror neuron activity), there was no effect of visual perspective. These findings are discussed in the context of contemporary theories of mirror neuron ontogeny, including models concerning associative learning and evolutionary adaptation.
The Impact of Conflicting Spatial Representations in Airborne Unmanned Aerial System Sensor Control
Geeseman, Joseph W; Patrey, James E; Davy, Caroline; Peditto, Katherine; Zernickow, Christine
2016-02-01
…system (UAS) simulation while riding in the fuselage of an airborne Lockheed P-3 Orion. The P-3 flew a flight profile of intermittent ascending…
NASA Astrophysics Data System (ADS)
Shi, X.; Zhao, C.
2017-12-01
Haze aerosol pollution has been a focal issue in China, and characterizing it is in high demand. With limited observation sites, aerosol properties obtained from a single site are frequently used to represent the haze condition over a large domain, such as tens of kilometers. This can result in high uncertainties in the haze characteristics because of their spatial variation. Using high-spatial-resolution network observations from November 2015 to February 2016 over a city in North China, this study examines the spatial representativeness of ground site observations. A method is first developed to determine the representative area of measurements from a limited number of stations. The key idea of this method is to determine the spatial variability of the concentration of particulate matter with diameters less than 2.5 μm (PM2.5) using a variance function in 2 km x 2 km grid cells. Based on the high-spatial-resolution (0.5 km x 0.5 km) measurements of PM2.5, grid cells in which PM2.5 concentrations are highly correlated and differ little in value are taken as the representative area of the measurements at those cells. Note that the representative area is not exactly a circular region. For the study region and study period, the representative area ranges from 0.25 km2 to 16.25 km2 and varies with location. For the 20 km x 20 km study region, observations from 10 stations would represent well the PM2.5 observations obtained from the current 169 stations at the four-month time scale.
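A simplified sketch of the grid-based idea described in this abstract: grid cells whose PM2.5 time series correlate strongly with, and differ little from, the series at a monitoring station are counted toward that station's representative area. The thresholds, grid size, and function names are assumptions for illustration, not the study's exact variance-function method.

```python
import numpy as np

def representative_area(station_series, grid_series, cell_area_km2=4.0,
                        min_corr=0.9, max_rel_diff=0.1):
    """Estimate the representative area (km^2) of one station.

    station_series : (T,) PM2.5 time series at the station's grid cell
    grid_series    : (N, T) PM2.5 time series for the N candidate grid cells
    cell_area_km2  : area of one grid cell (e.g. 2 km x 2 km = 4 km^2)
    min_corr       : minimum correlation with the station series
    max_rel_diff   : maximum mean relative concentration difference
    """
    n_cells = 0
    for series in grid_series:
        corr = np.corrcoef(station_series, series)[0, 1]
        rel_diff = np.mean(np.abs(series - station_series)) / np.mean(station_series)
        # a cell belongs to the representative area if it tracks the station
        # closely in both correlation and concentration level
        if corr >= min_corr and rel_diff <= max_rel_diff:
            n_cells += 1
    return n_cells * cell_area_km2
```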
The Spatial and the Visual in Mental Spatial Reasoning: An Ill-Posed Distinction
NASA Astrophysics Data System (ADS)
Schultheis, Holger; Bertel, Sven; Barkowsky, Thomas; Seifert, Inessa
It is an ongoing and controversial debate in cognitive science which aspects of knowledge humans process visually and which ones they process spatially. Similarly, artificial intelligence (AI) and cognitive science research, in building computational cognitive systems, tended to use strictly spatial or strictly visual representations. The resulting systems, however, were suboptimal both with respect to computational efficiency and cognitive plausibility. In this paper, we propose that the problems in both research strands stem from a misconception of the visual and the spatial in mental spatial knowledge processing. Instead of viewing the visual and the spatial as two clearly separable categories, they should be conceptualized as the extremes of a continuous dimension of representation. Regarding psychology, a continuous dimension avoids the need to exclusively assign processes and representations to either one of the categories and, thus, facilitates a more unambiguous rating of processes and representations. Regarding AI and cognitive science, the concept of a continuous spatial / visual dimension provides the possibility of representation structures which can vary continuously along the spatial / visual dimension. As a first step in exploiting these potential advantages of the proposed conception we (a) introduce criteria allowing for a non-dichotomic judgment of processes and representations and (b) present an approach towards representation structures that can flexibly vary along the spatial / visual dimension.
Visualising elastic anisotropy: theoretical background and computational implementation
NASA Astrophysics Data System (ADS)
Nordmann, J.; Aßmus, M.; Altenbach, H.
2018-02-01
In this article, we present the technical realisation for visualisations of characteristic parameters of the fourth-order elasticity tensor, which is classified by three-dimensional symmetry groups. Expressions for spatial representations of Young's modulus and bulk modulus as well as plane representations of shear modulus and Poisson's ratio are derived and transferred into a form suitable for computer algebra systems. Additionally, we present approaches for spatial representations of the two latter parameters. These three- and two-dimensional representations are implemented in the software MATLAB (MATrix LABoratory). Exemplary representations of characteristic materials complete the present treatise.
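The spatial Young's modulus surfaces referred to here follow from the standard contraction of the compliance tensor, E(n) = 1 / (n_i n_j n_k n_l S_ijkl). Below is a minimal NumPy sketch of that evaluation; the isotropic test tensor and the spherical sampling grid are illustrative assumptions, and the MATLAB visualisation layer discussed in the article is omitted.

```python
import numpy as np

def isotropic_compliance(E, nu):
    """Full fourth-order compliance tensor S_ijkl of an isotropic solid, built
    from Young's modulus E and Poisson's ratio nu (used here only as a
    convenient test case; an anisotropic tensor can be substituted)."""
    d = np.eye(3)
    return (-(nu / E) * np.einsum('ij,kl->ijkl', d, d)
            + ((1.0 + nu) / (2.0 * E))
            * (np.einsum('ik,jl->ijkl', d, d) + np.einsum('il,jk->ijkl', d, d)))

def directional_youngs_modulus(S, n):
    """Directional Young's modulus E(n) = 1 / (n_i n_j n_k n_l S_ijkl)."""
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    return 1.0 / np.einsum('i,j,k,l,ijkl->', n, n, n, n, S)

# Spherical sampling of E(n) for a 3D surface plot (values only; plotting omitted)
S = isotropic_compliance(E=210e9, nu=0.3)
theta, phi = np.meshgrid(np.linspace(0, np.pi, 60), np.linspace(0, 2 * np.pi, 120))
n = np.stack([np.sin(theta) * np.cos(phi),
              np.sin(theta) * np.sin(phi),
              np.cos(theta)], axis=-1)
E_surface = np.apply_along_axis(lambda v: directional_youngs_modulus(S, v), -1, n)
```

For the isotropic test case, E_surface is constant and equal to the input E, which serves as a quick check that the contraction is implemented correctly.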
Developmental dyscalculia is related to visuo-spatial memory and inhibition impairment☆
Szucs, Denes; Devine, Amy; Soltesz, Fruzsina; Nobes, Alison; Gabriel, Florence
2013-01-01
Developmental dyscalculia is thought to be a specific impairment of mathematics ability. Currently dominant cognitive neuroscience theories of developmental dyscalculia suggest that it originates from the impairment of the magnitude representation of the human brain, residing in the intraparietal sulcus, or from impaired connections between number symbols and the magnitude representation. However, behavioral research offers several alternative theories for developmental dyscalculia and neuro-imaging also suggests that impairments in developmental dyscalculia may be linked to disruptions of other functions of the intraparietal sulcus than the magnitude representation. Strikingly, the magnitude representation theory has never been explicitly contrasted with a range of alternatives in a systematic fashion. Here we have filled this gap by directly contrasting five alternative theories (magnitude representation, working memory, inhibition, attention and spatial processing) of developmental dyscalculia in 9–10-year-old primary school children. Participants were selected from a pool of 1004 children and took part in 16 tests and nine experiments. The dominant features of developmental dyscalculia are visuo-spatial working memory, visuo-spatial short-term memory and inhibitory function (interference suppression) impairment. We hypothesize that inhibition impairment is related to the disruption of central executive memory function. Potential problems of visuo-spatial processing and attentional function in developmental dyscalculia probably depend on short-term memory/working memory and inhibition impairments. The magnitude representation theory of developmental dyscalculia was not supported. PMID:23890692
Cichy, Radoslaw Martin; Khosla, Aditya; Pantazis, Dimitrios; Oliva, Aude
2017-01-01
Human scene recognition is a rapid multistep process evolving over time from single scene image to spatial layout processing. We used multivariate pattern analyses on magnetoencephalography (MEG) data to unravel the time course of this cortical process. Following an early signal for lower-level visual analysis of single scenes at ~100 ms, we found a marker of real-world scene size, i.e. spatial layout processing, at ~250 ms indexing neural representations robust to changes in unrelated scene properties and viewing conditions. For a quantitative model of how scene size representations may arise in the brain, we compared MEG data to a deep neural network model trained on scene classification. Representations of scene size emerged intrinsically in the model, and resolved emerging neural scene size representation. Together our data provide a first description of an electrophysiological signal for layout processing in humans, and suggest that deep neural networks are a promising framework to investigate how spatial layout representations emerge in the human brain. PMID:27039703
NASA Astrophysics Data System (ADS)
Price, Aaron; Lee, Hee-Sun
2010-02-01
We investigated whether and how student performance on three types of spatial cognition tasks differs when worked with two-dimensional or stereoscopic representations. We recruited nineteen middle school students visiting a planetarium in a large Midwestern American city and analyzed their performance on a series of spatial cognition tasks in terms of response accuracy and task completion time. Results show that response accuracy did not differ between the two types of representations while task completion time was significantly greater with the stereoscopic representations. The completion time increased as the number of mental manipulations of 3D objects increased in the tasks. Post-interviews provide evidence that some students continued to think of stereoscopic representations as two-dimensional. Based on cognitive load and cue theories, we interpret that, in the absence of pictorial depth cues, students may need more time to be familiar with stereoscopic representations for optimal performance. In light of these results, we discuss potential uses of stereoscopic representations for science learning.
Milner-Bolotin, Marina; Nashon, Samson Madera
2012-02-01
Science, engineering and mathematics-related disciplines have relied heavily on a researcher's ability to visualize phenomena under study and being able to link and superimpose various abstract and concrete representations including visual, spatial, and temporal. The spatial representations are especially important in all branches of biology (in developmental biology time becomes an important dimension), where 3D and often 4D representations are crucial for understanding the phenomena. By the time biology students get to undergraduate education, they are supposed to have acquired visual-spatial thinking skills, yet it has been documented that very few undergraduates and a small percentage of graduate students have had a chance to develop these skills to a sufficient degree. The current paper discusses the literature that highlights the essence of visual-spatial thinking and the development of visual-spatial literacy, considers the application of the visual-spatial thinking to biology education, and proposes how modern technology can help to promote visual-spatial literacy and higher order thinking among undergraduate students of biology.
Starc, Martina; Anticevic, Alan; Repovš, Grega
2017-05-01
Pupillometry provides an accessible option to track working memory processes with high temporal resolution. Several studies showed that pupil size increases with the number of items held in working memory; however, no study has explored whether pupil size also reflects the quality of working memory representations. To address this question, we used a spatial working memory task to investigate the relationship of pupil size with spatial precision of responses and indicators of reliance on generalized spatial categories. We asked 30 participants (15 female, aged 19-31) to remember the position of targets presented at various locations along a hidden radial grid. After a delay, participants indicated the remembered location with a high-precision joystick providing a parametric measure of trial-to-trial accuracy. We recorded participants' pupil dilations continuously during task performance. Results showed a significant relation between pupil dilation during preparation/early encoding and the precision of responses, possibly reflecting the attentional resources devoted to memory encoding. In contrast, pupil dilation at late maintenance and response predicted larger shifts of responses toward prototypical locations, possibly reflecting larger reliance on categorical representation. On an intraindividual level, smaller pupil dilations during encoding predicted larger dilations during late maintenance and response. On an interindividual level, participants relying more on categorical representation also produced larger precision errors. The results confirm the link between pupil size and the quality of spatial working memory representation. They suggest compensatory strategies of spatial working memory performance-loss of precise spatial representation likely increases reliance on generalized spatial categories. © 2017 Society for Psychophysiological Research.
ERIC Educational Resources Information Center
De Sá Teixeira, Nuno; Oliveira, Armando Mónica
2014-01-01
The spatial memory for the last position occupied by a moving target is usually displaced forward in the direction of motion. Interpreted as a mental analogue of physical momentum, this phenomenon was coined "representational momentum" (RM). As momentum is given by the product of an object's velocity and mass, both these factors came to…
Spatial representation of pitch height: the SMARC effect.
Rusconi, Elena; Kwan, Bonnie; Giordano, Bruno L; Umiltà, Carlo; Butterworth, Brian
2006-03-01
Through the preferential pairing of response positions to pitch, here we show that the internal representation of pitch height is spatial in nature and affects performance, especially in musically trained participants, when response alternatives are either vertically or horizontally aligned. The finding that our cognitive system maps pitch height onto an internal representation of space, which in turn affects motor performance even when this perceptual attribute is irrelevant to the task, extends previous studies on auditory perception and suggests an interesting analogy between music perception and mathematical cognition. Both the basic elements of mathematical cognition (i.e. numbers) and the basic elements of musical cognition (i.e. pitches), appear to be mapped onto a mental spatial representation in a way that affects motor performance.
Draht, Fabian; Zhang, Sijie; Rayan, Abdelrahman; Schönfeld, Fabian; Wiskott, Laurenz; Manahan-Vaughan, Denise
2017-01-01
Spatial encoding in the hippocampus is based on a range of different input sources. To generate spatial representations, reliable sensory cues from the external environment are integrated with idiothetic cues, derived from self-movement, that enable path integration and directional perception. In this study, we examined to what extent idiothetic cues significantly contribute to spatial representations and navigation: we recorded place cells while rodents navigated towards two visually identical chambers in 180° orientation via two different paths in darkness and in the absence of reliable auditory or olfactory cues. Our goal was to generate a conflict between local visual and direction-specific information, and then to assess which strategy was prioritized in different learning phases. We observed that, in the absence of distal cues, place fields are initially controlled by local visual cues that override idiothetic cues, but that with multiple exposures to the paradigm, spaced at intervals of days, idiothetic cues become increasingly implemented in generating an accurate spatial representation. Taken together, these data support that, in the absence of distal cues, local visual cues are prioritized in the generation of context-specific spatial representations through place cells, whereby idiothetic cues are deemed unreliable. With cumulative exposures to the environments, the animal learns to attend to subtle idiothetic cues to resolve the conflict between visual and direction-specific information. PMID:28634444
Visual-Spatial Attention Aids the Maintenance of Object Representations in Visual Working Memory
Williams, Melonie; Pouget, Pierre; Boucher, Leanne; Woodman, Geoffrey F.
2013-01-01
Theories have proposed that the maintenance of object representations in visual working memory is aided by a spatial rehearsal mechanism. In this study, we used two different approaches to test the hypothesis that overt and covert visual-spatial attention mechanisms contribute to the maintenance of object representations in visual working memory. First, we tracked observers’ eye movements while remembering a variable number of objects during change-detection tasks. We observed that during the blank retention interval, participants spontaneously shifted gaze to the locations that the objects had occupied in the memory array. Next, we hypothesized that if attention mechanisms contribute to the maintenance of object representations, then drawing attention away from the object locations during the retention interval would impair object memory during these change-detection tasks. Supporting this prediction, we found that attending to the fixation point in anticipation of a brief probe stimulus during the retention interval reduced change-detection accuracy even on the trials in which no probe occurred. These findings support models of working memory in which visual-spatial selection mechanisms contribute to the maintenance of object representations. PMID:23371773
A study of kindergarten children's spatial representation in a mapping project
NASA Astrophysics Data System (ADS)
Davis, Genevieve A.; Hyun, Eunsook
2005-02-01
This phenomenological study examined kindergarten children's development of spatial representation in a year-long mapping project. Findings and discussion on how children conceptualised and represented physical space are presented in light of theoretical notions advanced by Piaget, van Hiele, and the cognitive science researchers Battista and Clements. Analyses of the processes the children used and of their finished products indicate that children can negotiate meaning for complex systems of geometric concepts when given opportunities to debate, negotiate, reflect, evaluate and seek meaning for representing space. The complexity and "holistic" nature of young children's spatial representation emerged in this study.
An investigation of spatial representation of pitch in individuals with congenital amusia.
Lu, Xuejing; Sun, Yanan; Thompson, William Forde
2017-09-01
Spatial representation of pitch plays a central role in auditory processing. However, it is unknown whether impaired auditory processing is associated with impaired pitch-space mapping. Experiment 1 examined spatial representation of pitch in individuals with congenital amusia using a stimulus-response compatibility (SRC) task. For amusic and non-amusic participants, pitch classification was faster and more accurate when correct responses involved a physical action that was spatially congruent with the pitch height of the stimulus than when it was incongruent. However, this spatial representation of pitch was not as stable in amusic individuals, revealed by slower response times when compared with control individuals. One explanation is that the SRC effect in amusics reflects a linguistic association, requiring additional time to link pitch height and spatial location. To test this possibility, Experiment 2 employed a colour-classification task. Participants judged colour while ignoring a concurrent pitch by pressing one of two response keys positioned vertically to be congruent or incongruent with the pitch. The association between pitch and space was found in both groups, with comparable response times in the two groups, suggesting that amusic individuals are only slower to respond to tasks involving explicit judgments of pitch.
Audio Spatial Representation Around the Body
Aggius-Vella, Elena; Campus, Claudio; Finocchietti, Sara; Gori, Monica
2017-01-01
Studies have found that portions of space around our body are differently coded by our brain. Numerous works have investigated visual and auditory spatial representation, focusing mostly on the spatial representation of stimuli presented at head level, especially in the frontal space. Only a few studies have investigated spatial representation around the entire body and its relationship with motor activity. Moreover, it is still not clear whether the space surrounding us is represented as a unitary dimension or whether it is split up into different portions, differently shaped by our senses and motor activity. To clarify these points, we investigated audio localization of dynamic and static sounds at different body levels. In order to understand the role of a motor action in auditory space representation, we asked subjects to localize sounds by pointing with the hand or the foot, or by giving a verbal answer. We found that auditory localization differed depending on the body part considered. Moreover, a different pattern of responses was observed when subjects localized sounds with an action than when they gave verbal responses. These results suggest that the audio space around our body is split into various spatial portions, which are perceived differently: front, back, around the chest, and around the foot, suggesting that these four areas could be differently modulated by our senses and our actions. PMID:29249999
Profile of biology prospective teachers’ representation on plant anatomy learning
NASA Astrophysics Data System (ADS)
Ermayanti; Susanti, R.; Anwar, Y.
2018-04-01
This study aims to assess students’ representation ability in understanding the structure and function of plant tissues in a plant anatomy course. Thirty students of the Biology Education Department of Sriwijaya University were involved in this study. Data on representation ability were collected using tests and observation. The instruments had been validated by expert judgment. Test scores were used to represent students’ ability in four categories: 2D-image, 3D-image, spatial, and verbal representations. The results show that students’ representation ability is still low: 2D-image (40.0), 3D-image (25.0), spatial (20.0), and verbal representation (45.0). Based on the results of this study, it is suggested that instructional strategies be developed for the plant anatomy course.
Medendorp, W. P.
2015-01-01
It is known that the brain uses multiple reference frames to code spatial information, including eye-centered and body-centered frames. When we move our body in space, these internal representations are no longer in register with external space, unless they are actively updated. Whether the brain updates multiple spatial representations in parallel, or whether it restricts its updating mechanisms to a single reference frame from which other representations are constructed, remains an open question. We developed an optimal integration model to simulate the updating of visual space across body motion in multiple or single reference frames. To test this model, we designed an experiment in which participants had to remember the location of a briefly presented target while being translated sideways. The behavioral responses were in agreement with a model that uses a combination of eye- and body-centered representations, weighted according to the reliability in which the target location is stored and updated in each reference frame. Our findings suggest that the brain simultaneously updates multiple spatial representations across body motion. Because both representations are kept in sync, they can be optimally combined to provide a more precise estimate of visual locations in space than based on single-frame updating mechanisms. PMID:26490289
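The optimal integration described above amounts to inverse-variance weighting of the eye-centered and body-centered estimates: the more reliable frame gets the larger weight. A minimal sketch of that computation is given below (Python); the example values and the function name are illustrative, not taken from the study.

```python
# A minimal sketch of reliability-weighted integration of two updated target
# estimates (eye-centered and body-centered), assuming independent Gaussian
# noise; the variances below are illustrative, not fitted values from the study.
def combine_estimates(x_eye, var_eye, x_body, var_body):
    """Inverse-variance weighting: the more reliable frame gets more weight."""
    w_eye = (1.0 / var_eye) / (1.0 / var_eye + 1.0 / var_body)
    x_hat = w_eye * x_eye + (1.0 - w_eye) * x_body
    var_hat = 1.0 / (1.0 / var_eye + 1.0 / var_body)
    return x_hat, var_hat

# Example: after a sideways translation, the eye-centered estimate is noisier,
# so the combined estimate is pulled toward the body-centered one.
print(combine_estimates(x_eye=4.0, var_eye=2.0, x_body=3.0, var_body=0.5))
```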
Developmental dyscalculia is related to visuo-spatial memory and inhibition impairment.
Szucs, Denes; Devine, Amy; Soltesz, Fruzsina; Nobes, Alison; Gabriel, Florence
2013-01-01
Developmental dyscalculia is thought to be a specific impairment of mathematics ability. Currently dominant cognitive neuroscience theories of developmental dyscalculia suggest that it originates from the impairment of the magnitude representation of the human brain, residing in the intraparietal sulcus, or from impaired connections between number symbols and the magnitude representation. However, behavioral research offers several alternative theories for developmental dyscalculia and neuro-imaging also suggests that impairments in developmental dyscalculia may be linked to disruptions of functions of the intraparietal sulcus other than the magnitude representation. Strikingly, the magnitude representation theory has never been explicitly contrasted with a range of alternatives in a systematic fashion. Here we have filled this gap by directly contrasting five alternative theories (magnitude representation, working memory, inhibition, attention and spatial processing) of developmental dyscalculia in 9-10-year-old primary school children. Participants were selected from a pool of 1004 children and took part in 16 tests and nine experiments. The dominant features of developmental dyscalculia are visuo-spatial working memory, visuo-spatial short-term memory and inhibitory function (interference suppression) impairment. We hypothesize that inhibition impairment is related to the disruption of central executive memory function. Potential problems of visuo-spatial processing and attentional function in developmental dyscalculia probably depend on short-term memory/working memory and inhibition impairments. The magnitude representation theory of developmental dyscalculia was not supported. Copyright © 2013 The Authors. Published by Elsevier Ltd. All rights reserved.
Kozhevnikov, Maria; Dhond, Rupali P.
2012-01-01
Most research on three-dimensional (3D) visual-spatial processing has been conducted using traditional non-immersive 2D displays. Here we investigated how individuals generate and transform mental images within 3D immersive (3DI) virtual environments, in which the viewers perceive themselves as being surrounded by a 3D world. In Experiment 1, we compared participants’ performance on the Shepard and Metzler (1971) mental rotation (MR) task across the following three types of visual presentation environments; traditional 2D non-immersive (2DNI), 3D non-immersive (3DNI – anaglyphic glasses), and 3DI (head mounted display with position and head orientation tracking). In Experiment 2, we examined how the use of different backgrounds affected MR processes within the 3DI environment. In Experiment 3, we compared electroencephalogram data recorded while participants were mentally rotating visual-spatial images presented in 3DI vs. 2DNI environments. Overall, the findings of the three experiments suggest that visual-spatial processing is different in immersive and non-immersive environments, and that immersive environments may require different image encoding and transformation strategies than the two other non-immersive environments. Specifically, in a non-immersive environment, participants may utilize a scene-based frame of reference and allocentric encoding whereas immersive environments may encourage the use of a viewer-centered frame of reference and egocentric encoding. These findings also suggest that MR performed in laboratory conditions using a traditional 2D computer screen may not reflect spatial processing as it would occur in the real world. PMID:22908003
Image Quality Assessment Using the Joint Spatial/Spatial-Frequency Representation
NASA Astrophysics Data System (ADS)
Beghdadi, Azeddine; Iordache, Răzvan
2006-12-01
This paper demonstrates the usefulness of spatial/spatial-frequency representations in image quality assessment by introducing a new image dissimilarity measure based on the 2D Wigner-Ville distribution (WVD). The properties of the 2D WVD are briefly reviewed, and the important issue of choosing the analytic image is emphasized. The WVD-based measure is shown to be correlated with subjective human evaluation, which is a first step towards an image quality assessor built on this principle.
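For readers unfamiliar with joint spatial/spatial-frequency representations, the sketch below computes a 1D discrete pseudo-Wigner-Ville distribution in the style of standard time-frequency toolboxes. The paper itself works with the 2D WVD of analytic images, which is not reproduced here, so this is only an illustration of the underlying idea; the function name and normalization are assumptions.

```python
# A toy sketch of the 1D discrete pseudo-Wigner-Ville distribution: for each
# position n, the FFT of the lag product x[n+m] * conj(x[n-m]) gives the local
# frequency content. This is illustrative only; the paper uses the 2D WVD.
import numpy as np

def pseudo_wvd(x):
    """Rows: position n; columns: frequency bins of the lag transform."""
    x = np.asarray(x, dtype=complex)
    n_pts = len(x)
    wvd = np.zeros((n_pts, n_pts), dtype=float)
    for n in range(n_pts):
        tau_max = min(n, n_pts - 1 - n)            # lags that stay in bounds
        kernel = np.zeros(n_pts, dtype=complex)
        for m in range(-tau_max, tau_max + 1):
            kernel[m % n_pts] = x[n + m] * np.conj(x[n - m])
        wvd[n] = np.real(np.fft.fft(kernel))
    return wvd

# A chirp concentrates its energy along a diagonal in the position-frequency plane.
t = np.arange(128)
signal = np.exp(1j * 2 * np.pi * (0.05 + 0.001 * t) * t)
tfr = pseudo_wvd(signal)
```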
Behavioral and Neural Representations of Spatial Directions across Words, Schemas, and Images.
Weisberg, Steven M; Marchette, Steven A; Chatterjee, Anjan
2018-05-23
Modern spatial navigation requires fluency with multiple representational formats, including visual scenes, signs, and words. These formats convey different information. Visual scenes are rich and specific but contain extraneous details. Arrows, as an example of signs, are schematic representations in which the extraneous details are eliminated, but analog spatial properties are preserved. Words eliminate all spatial information and convey spatial directions in a purely abstract form. How does the human brain compute spatial directions within and across these formats? To investigate this question, we conducted two experiments on men and women: a behavioral study that was preregistered and a neuroimaging study using multivoxel pattern analysis of fMRI data to uncover similarities and differences among representational formats. Participants in the behavioral study viewed spatial directions presented as images, schemas, or words (e.g., "left"), and responded to each trial, indicating whether the spatial direction was the same or different as the one viewed previously. They responded more quickly to schemas and words than images, despite the visual complexity of stimuli being matched. Participants in the fMRI study performed the same task but responded only to occasional catch trials. Spatial directions in images were decodable in the intraparietal sulcus bilaterally but were not in schemas and words. Spatial directions were also decodable between all three formats. These results suggest that intraparietal sulcus plays a role in calculating spatial directions in visual scenes, but this neural circuitry may be bypassed when the spatial directions are presented as schemas or words. SIGNIFICANCE STATEMENT Human navigators encounter spatial directions in various formats: words ("turn left"), schematic signs (an arrow showing a left turn), and visual scenes (a road turning left). The brain must transform these spatial directions into a plan for action. Here, we investigate similarities and differences between neural representations of these formats. We found that bilateral intraparietal sulci represent spatial directions in visual scenes and across the three formats. We also found that participants respond quickest to schemas, then words, then images, suggesting that spatial directions in abstract formats are easier to interpret than concrete formats. These results support a model of spatial direction interpretation in which spatial directions are either computed for real world action or computed for efficient visual comparison. Copyright © 2018 the authors 0270-6474/18/384996-12$15.00/0.
Krüger, Markus; Jahn, Georg
2015-01-01
Children as young as 3 years can remember an object’s location within an arrangement and can retrieve it from a novel viewpoint (Nardini et al., 2006). However, this ability is impaired if the arrangement is rotated to compensate for the novel viewpoint, or if the arrangement is rotated and the children stand still. There are two dominant explanations for this phenomenon: self-motion induces an automatic spatial updating process which is beneficial if children move around the arrangement, but misleading if the children’s movement is matched by the arrangement, and not activated if children stand still and only the arrangement is moved (see spatial updating; Simons and Wang, 1998). Another explanation concerns reference frames: spatial representations might depend on peripheral spatial relations concerning the surrounding room instead of on proximal relations within the arrangement, even if these proximal relations are sufficient or more informative. To evaluate these possibilities, we rotated children (N = 120) aged between 3 and 6 years together with an occluded arrangement. When the arrangement was misaligned with the surrounding room, 3- and 4-year-olds’ spatial memory was impaired and 5-year-olds’ was slightly impaired, suggesting that they relied on peripheral references of the surrounding room for retrieval. In contrast, 6-year-olds’ spatial representation seemed robust against misalignment, indicating a successful integration of spatial representations. PMID:26617537
Tackling the 2nd V: Big Data, Variety and the Need for Representation Consistency
NASA Astrophysics Data System (ADS)
Clune, T.; Kuo, K. S.
2016-12-01
While Big Data technologies are transforming our ability to analyze ever larger volumes of Earth science data, practical constraints continue to limit our ability to compare data across datasets from different sources in an efficient and robust manner. Within a single data collection, invariants such as file format, grid type, and spatial resolution greatly simplify many types of analysis (often implicitly). However, when analysis combines data across multiple data collections, researchers are generally required to implement data transformations (i.e., "data preparation") to provide appropriate invariants. These transformations include changing file formats, ingesting into a database, and/or regridding to a common spatial representation, and they can either be performed once, statically, or each time the data is accessed. At the very least, this process is inefficient from the perspective of the community as each team selects its own representation and privately implements the appropriate transformations. No doubt there are disadvantages to any "universal" representation, but we posit that major benefits would be obtained if a suitably flexible spatial representation could be standardized along with tools for transforming to/from that representation. We regard this as part of the historic trend in data publishing. Early datasets used ad hoc formats and lacked metadata. As better tools evolved, published data began to use standardized formats (e.g., HDF and netCDF) with attached metadata. We propose that the modern need to perform analysis across data sets should drive a new generation of tools that support a standardized spatial representation. More specifically, we propose the hierarchical triangular mesh (HTM) as a suitable "generic" representation that permits standard transformations to/from native representations in use today, as well as tools to convert/regrid existing datasets onto that representation.
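The sketch below illustrates the trixel-indexing idea behind a hierarchical triangular mesh: points on the sphere are assigned to recursively subdivided spherical triangles, so nearby points typically share a common index prefix, which is what makes such a representation convenient for cross-dataset spatial joins. The face layout, index format, and function names are illustrative assumptions, not the official HTM conventions or the authors' toolchain.

```python
# A simplified sketch of HTM-style trixel indexing: recursive subdivision of
# eight octahedral spherical triangles. Illustrative only.
import numpy as np

def _inside(p, tri, eps=1e-12):
    """True if unit vector p lies in the spherical triangle tri (CCW vertices)."""
    a, b, c = tri
    return (np.dot(np.cross(a, b), p) >= -eps and
            np.dot(np.cross(b, c), p) >= -eps and
            np.dot(np.cross(c, a), p) >= -eps)

def _children(tri):
    """Split a spherical triangle into four using normalized edge midpoints."""
    a, b, c = tri
    w0 = (b + c) / np.linalg.norm(b + c)
    w1 = (a + c) / np.linalg.norm(a + c)
    w2 = (a + b) / np.linalg.norm(a + b)
    return [(a, w2, w1), (b, w0, w2), (c, w1, w0), (w0, w1, w2)]

def htm_index(lat_deg, lon_deg, depth=8):
    """Return an (illustrative) trixel path for a lat/lon point."""
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    p = np.array([np.cos(lat) * np.cos(lon),
                  np.cos(lat) * np.sin(lon),
                  np.sin(lat)])
    # Eight octahedral root faces, oriented counter-clockwise seen from outside.
    faces = []
    for sx in (1, -1):
        for sy in (1, -1):
            for sz in (1, -1):
                a = np.array([sx, 0.0, 0.0])
                b = np.array([0.0, sy, 0.0])
                c = np.array([0.0, 0.0, sz])
                faces.append((a, b, c) if sx * sy * sz > 0 else (b, a, c))
    root = next(i for i, f in enumerate(faces) if _inside(p, f))
    path, tri = [root], faces[root]
    for _ in range(depth):
        for i, child in enumerate(_children(tri)):
            if _inside(p, child):
                path.append(i)
                tri = child
                break
    return path

# Nearby points typically share a long common trixel prefix.
print(htm_index(48.85, 2.35), htm_index(48.86, 2.36))
```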
Golani, Ilan
2012-06-01
In this review I focus on how three methodological principles advocated by Philip Teitelbaum influenced my work to this day: that similar principles of organization should be looked for in ontogeny and recovery of function; that the order of emergence of behavioral components provides a view on the organization of that behavior; and that the components of behavior should be exhibited by the animal itself in relatively pure form. I start by showing how these principles influenced our common work on the developmental dynamics of rodent egocentric space, and then proceed to describe how these principles affected my work with Yoav Benjamini and others on the developmental dynamics of rodent allocentric space. We analyze issues traditionally addressed by physiological psychologists with methods borrowed from ethology, EW (Eshkol-Wachman) movement notation, dynamical systems and exploratory data analysis. Then we show how the natural origins of axes embodied by the behavior of the organism itself are used by us as the origins of axes for the measurement of the developmental moment-by-moment dynamics of behavior. Using this methodology we expose similar principles of organization across situations, species and preparations, provide a developmental view on the organization of behavior, expose the natural components of behavior in relatively pure form, and reveal how low-level primitives generate higher-level constructs. Advances in tracking technology should allow us to study how movements in egocentric and allocentric spaces interlace. Tracking of multi-limb coordination, progress in online recording of neural activity in freely moving animals, and the unprecedented accumulation of genetically engineered mouse preparations make the behavioral ground plan exposed in this review essential for a systematic study of the brain/behavior interface. Copyright © 2012 Elsevier B.V. All rights reserved.
van der Kamp, John; de Wit, Matthieu M; Masters, Rich S W
2012-04-01
We investigated whether the control of movement of the left hand is more likely to involve the use of allocentric information than movements performed with the right hand. Previous studies (Gonzalez et al. in J Neurophys 95:3496-3501, 2006; De Grave et al. in Exp Br Res 193:421-427, 2009) have reported contradictory findings in this respect. In the present study, right-handed participants (N = 12) and left-handed participants (N = 12) made right- and left-handed grasps to foveated objects and peripheral, non-foveated objects that were located in the right or left visual hemifield and embedded within a Müller-Lyer illusion. They were also asked to judge the size of the object by matching their hand aperture to its length. Hand apertures did not show significant differences in illusory bias as a function of hand used, handedness or visual hemifield. However, the illusory effect was significantly larger for perception than for action, and for the non-foveated compared to foveated objects. No significant illusory biases were found for reach movement times. These findings are consistent with the two-visual-systems model, which holds that the use of allocentric information is more prominent in perception than in movement control. We propose that the increased involvement of allocentric information in movements toward peripheral, non-foveated objects may be a consequence of more awkward, less automatized grasps of non-foveated than foveated objects. The current study does not support the conjecture that the control of left-handed and right-handed grasps is predicated on different sources of information.
Ten Brink, Antonia F; Matthijs Biesbroek, J; Oort, Quirien; Visser-Meily, Johanna M A; Nijboer, Tanja C W
2018-06-22
Visuospatial neglect can occur in peripersonal and extrapersonal space. The dorsal visual pathway is hypothesized to be associated with peripersonal, and the ventral pathway with extrapersonal neglect. We aimed to evaluate neural substrates of peripersonal versus extrapersonal neglect, separately for egocentric and allocentric frames of reference. This was a retrospective study, including stroke patients admitted for inpatient rehabilitation. Approximately 1 month post-stroke onset, computerized cancellation (egocentric) and bisection tasks (egocentric and allocentric) were administered at 30 cm and 120 cm. We collected CT or MRI scans and performed voxel-based lesion-symptom mapping for the cancellation, and subtraction analyses for the line bisection task. We included 98 patients for the cancellation and 129 for the bisection analyses. The right parahippocampal gyrus, hippocampus, and thalamus were associated with egocentric peripersonal neglect as measured with cancellation. These areas were also associated with extrapersonal neglect, together with the right superior parietal lobule, angular gyrus, supramarginal gyrus, lateral occipital cortex, planum temporale and superior temporal gyrus. Lesions in the right parietal, temporal and frontal areas were associated with both peripersonal and extrapersonal egocentric neglect as measured with bisection. For allocentric neglect no clear pattern of associated brain regions was observed. We found right hemispheric anatomical correlates for peripersonal and extrapersonal neglect. However, no brain areas were uniquely associated with peripersonal neglect, meaning we could not conclusively verify the ventral/dorsal hypothesis. Several areas were uniquely associated with egocentric extrapersonal neglect, suggesting that these brain areas can be specifically involved in extrapersonal, but not in peripersonal, attention processes. Copyright © 2018. Published by Elsevier B.V.
Cortical dynamics of three-dimensional figure-ground perception of two-dimensional pictures.
Grossberg, S
1997-07-01
This article develops the FACADE theory of 3-dimensional (3-D) vision and figure-ground separation to explain data concerning how 2-dimensional pictures give rise to 3-D percepts of occluding and occluded objects. The model describes how geometrical and contrastive properties of a picture can either cooperate or compete when forming the boundaries and surface representation that subserve conscious percepts. Spatially long-range cooperation and spatially short-range competition work together to separate the boundaries of occluding figures from their occluded neighbors. This boundary ownership process is sensitive to image T junctions at which occluded figures contact occluding figures. These boundaries control the filling-in of color within multiple depth-sensitive surface representations. Feedback between surface and boundary representations strengthens consistent boundaries while inhibiting inconsistent ones. Both the boundary and the surface representations of occluded objects may be amodally completed, while the surface representations of unoccluded objects become visible through modal completion. Functional roles for conscious modal and amodal representations in object recognition, spatial attention, and reaching behaviors are discussed. Model interactions are interpreted in terms of visual, temporal, and parietal cortices.
Martin Cichy, Radoslaw; Khosla, Aditya; Pantazis, Dimitrios; Oliva, Aude
2017-06-01
Human scene recognition is a rapid multistep process evolving over time from single scene image to spatial layout processing. We used multivariate pattern analyses on magnetoencephalography (MEG) data to unravel the time course of this cortical process. Following an early signal for lower-level visual analysis of single scenes at ~100 ms, we found a marker of real-world scene size, i.e. spatial layout processing, at ~250 ms, indexing neural representations robust to changes in unrelated scene properties and viewing conditions. For a quantitative model of how scene size representations may arise in the brain, we compared MEG data to a deep neural network model trained on scene classification. Representations of scene size emerged intrinsically in the model, and resolved the emerging neural scene size representation. Together our data provide a first description of an electrophysiological signal for layout processing in humans, and suggest that deep neural networks are a promising framework to investigate how spatial layout representations emerge in the human brain. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Thinking Egyptian: Active Models for Understanding Spatial Representation.
ERIC Educational Resources Information Center
Schiferl, Ellen
This paper highlights how introductory textbooks on Egyptian art inhibit understanding by reinforcing student preconceptions, and demonstrates another approach to discussing space with a classroom exercise and software. The alternative approach, an active model for spatial representation, introduced here was developed by adapting classroom…
Describing a Robot's Workspace Using a Sequence of Views from a Moving Camera.
Hong, T H; Shneier, M O
1985-06-01
This correspondence describes a method of building and maintaining a spatial representation for the workspace of a robot, using a sensor that moves about in the world. From the known camera position at which an image is obtained and two-dimensional silhouettes extracted from the image, a series of cones is projected to describe the possible positions of the objects in the space. When an object is seen from several viewpoints, the intersections of the cones constrain the position and size of the object. After several views have been processed, the representation of the object begins to resemble its true shape. At all times, the spatial representation contains the best guess at the true situation in the world, with uncertainties in position and shape explicitly represented. An octree is used as the data structure for the representation. It not only provides a relatively compact representation, but also allows fast access to information and enables large parts of the workspace to be ignored. The purpose of constructing this representation is not so much to recognize objects as to describe the volumes in the workspace that are occupied and those that are empty. This enables trajectory planning to be carried out, and also provides a means of spatially indexing objects without needing to represent the objects at an extremely fine resolution. The spatial representation is one part of a more complex representation of the workspace used by the sensory system of a robot manipulator in understanding its environment.
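A minimal sketch of the octree-carving idea described above follows: each node is EMPTY, FULL, or MIXED, and each new view can only carve away volume that its silhouette cone proves empty, so the occupied region shrinks toward the intersection of the cones. The class and method names are illustrative, and the actual cone geometry from the correspondence is left to a caller-supplied test.

```python
# A toy octree occupancy sketch: nodes start FULL (unknown space is assumed
# occupied) and successive views carve out provably empty cubes.
EMPTY, FULL, MIXED = 0, 1, 2

class OctreeNode:
    def __init__(self, center, half, state=FULL):
        self.center, self.half, self.state = center, half, state
        self.children = None

    def _subdivide(self):
        cx, cy, cz = self.center
        h = self.half / 2.0
        self.children = [OctreeNode((cx + dx * h, cy + dy * h, cz + dz * h), h, FULL)
                         for dx in (-1, 1) for dy in (-1, 1) for dz in (-1, 1)]
        self.state = MIXED

    def carve(self, outside_cone, depth):
        """Remove volume that the current view proves empty.

        outside_cone(center, half) is a hypothetical, caller-supplied test that
        returns True only when the whole cube lies outside the view's
        silhouette cone.
        """
        if self.state == EMPTY:
            return
        if outside_cone(self.center, self.half):
            self.state, self.children = EMPTY, None
            return
        if depth == 0:
            return                      # conservative: keep the cube occupied
        if self.children is None:
            self._subdivide()
        for child in self.children:
            child.carve(outside_cone, depth - 1)
        if all(c.state == EMPTY for c in self.children):
            self.state, self.children = EMPTY, None
```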
Multi-step routes of capuchin monkeys in a laser pointer traveling salesman task.
Howard, Allison M; Fragaszy, Dorothy M
2014-09-01
Prior studies have claimed that nonhuman primates plan their routes multiple steps in advance. However, a recent reexamination of multi-step route planning in nonhuman primates indicated that there is no evidence for planning more than one step ahead. We tested multi-step route planning in capuchin monkeys using a pointing device to "travel" to distal targets while stationary. This device enabled us to determine whether capuchins distinguish the spatial relationship between goals and themselves and spatial relationships between goals and the laser dot, allocentrically. In Experiment 1, two subjects were presented with identical food items in Near-Far (one item nearer to subject) and Equidistant (both items equidistant from subject) conditions with a laser dot visible between the items. Subjects moved the laser dot to the items using a joystick. In the Near-Far condition, one subject demonstrated a bias for items closest to self but the other subject chose efficiently. In the second experiment, subjects retrieved three food items in similar Near-Far and Equidistant arrangements. Both subjects preferred food items nearest the laser dot and showed no evidence of multi-step route planning. We conclude that these capuchins do not make choices on the basis of multi-step look ahead strategies. © 2014 Wiley Periodicals, Inc.
Integrating spatially explicit representations of landscape perceptions into land change research
Dorning, Monica; Van Berkel, Derek B.; Semmens, Darius J.
2017-01-01
Purpose of Review: Human perceptions of the landscape can influence land-use and land-management decisions. Recognizing the diversity of landscape perceptions across space and time is essential to understanding land change processes and emergent landscape patterns. We summarize the role of landscape perceptions in the land change process, demonstrate advances in quantifying and mapping landscape perceptions, and describe how these spatially explicit techniques have and may benefit land change research. Recent Findings: Mapping landscape perceptions is becoming increasingly common, particularly in research focused on quantifying ecosystem services provision. Spatial representations of landscape perceptions, often measured in terms of landscape values and functions, provide an avenue for matching social and environmental data in land change studies. Integrating these data can provide new insights into land change processes, contribute to landscape planning strategies, and guide the design and implementation of land change models. Summary: Challenges remain in creating spatial representations of human perceptions. Maps must be accompanied by descriptions of whose perceptions are being represented and the validity and uncertainty of those representations across space. With these considerations, rapid advancements in mapping landscape perceptions hold great promise for improving representation of human dimensions in landscape ecology and land change research.
Allocentric-heading recall and its relation to self-reported sense-of-direction.
Sholl, M Jeanne; Kenny, Ryan J; DellaPorta, Katherine A
2006-05-01
A sense of direction (SOD) computes the body's facing direction relative to a reference frame grounded in the environment. The authors report on three experiments in which they used a heading-recall task to tap the functioning of a SOD system and then correlated task performance with self-reported SOD as a convergent test of the task's construct validity. On each heading-recall trial, the participant judged the photographer's allocentric heading when photographing a pictured outdoor scene. Participants were tested over the full range of SOD ratings in Experiment 1, and in Experiments 2 and 3 heading-recall at the SOD extremes was tested. In all experiments, there was wide variability in heading-recall accuracy that covaried with self-rated SOD. Parametric manipulation of various task parameters revealed some likely functional properties of the SOD system. The results support the psychological reality of a SOD system and further indicate that there are large individual differences in the efficacy with which the system functions.
Effects of vision on head-putter coordination in golf.
Gonzalez, David Antonio; Kegel, Stefan; Ishikura, Tadao; Lee, Tim
2012-07-01
Low-skill golfers coordinate the movements of their head and putter with an allocentric, isodirectional coupling, which is opposite to the allocentric, antidirectional coordination pattern used by experts (Lee, Ishikura, Kegel, Gonzalez, & Passmore, 2008). The present study investigated the effects of four vision conditions (full vision, no vision, target focus, and ball focus) on head-putter coupling in low-skill golfers. Performance in the absence of vision resulted in a high level of isodirectional coupling that was similar to the full vision condition. However, when instructed to focus on the target during the putt, or focus on the ball through a restricted viewing angle, low-skill golfers significantly decoupled the head-putter coordination pattern. However, outcome measures demonstrated that target focus resulted in poorer performance compared with the other visual conditions, thereby providing overall support for use of a ball focus strategy to enhance coordination and outcome performance. Focus of attention and reduced visual tracking were hypothesized as potential reasons for the decoupling.
Linguistic and Perceptual Mapping in Spatial Representations: An Attentional Account.
Valdés-Conroy, Berenice; Hinojosa, José A; Román, Francisco J; Romero-Ferreiro, Verónica
2018-03-01
Building on evidence for embodied representations, we investigated whether Spanish spatial terms map onto the NEAR/FAR perceptual division of space. Using a long horizontal display, we measured congruency effects during the processing of spatial terms presented in NEAR or FAR space. Across three experiments, we manipulated the task demands in order to investigate the role of endogenous attention in linguistic and perceptual space mapping. We predicted congruency effects only when spatial properties were relevant for the task (reaching estimation task, Experiment 1) but not when attention was allocated to other features (lexical decision, Experiment 2; and color, Experiment 3). Results showed faster responses for words presented in Near-space in all experiments. Consistent with our hypothesis, congruency effects were observed only when a reaching estimate was requested. Our results add important evidence for the role of top-down processing in congruency effects from embodied representations of spatial terms. Copyright © 2017 Cognitive Science Society, Inc.
Two spatial memories are not better than one: evidence of exclusivity in memory for object location.
Baguley, Thom; Lansdale, Mark W; Lines, Lorna K; Parkin, Jennifer K
2006-05-01
This paper studies the dynamics of attempting to access two spatial memories simultaneously and its implications for the accuracy of recall. Experiment 1 demonstrates in a range of conditions that two cues pointing to different experiences of the same object location produce little or no higher recall than that observed with a single cue. Experiment 2 confirms this finding in a within-subject design where both cues have previously elicited recall. Experiment 3 shows that these findings are only consistent with a model in which two representations of the same object location are mutually exclusive at both encoding and retrieval, and inconsistent with models that assume information from both representations is available. We propose that these representations quantify directionally specific judgments of location relative to specific anchor points in the stimulus; a format that precludes the parallel processing of like representations. Finally, we consider the apparent paradox of how such representations might contribute to the acquisition of spatial knowledge from multiple experiences of the same stimuli.
Representations and processes of human spatial competence.
Gunzelmann, Glenn; Lyon, Don R
2011-10-01
This article presents an approach to understanding human spatial competence that focuses on the representations and processes of spatial cognition and how they are integrated with cognition more generally. The foundational theoretical argument for this research is that spatial information processing is central to cognition more generally, in the sense that it is brought to bear ubiquitously to improve the adaptivity and effectiveness of perception, cognitive processing, and motor action. We describe research spanning multiple levels of complexity to understand both the detailed mechanisms of spatial cognition, and how they are utilized in complex, naturalistic tasks. In the process, we discuss the critical role of cognitive architectures in developing a consistent account that spans this breadth, and we note some areas in which the current version of a popular architecture, ACT-R, may need to be augmented. Finally, we suggest a framework for understanding the representations and processes of spatial competence and their role in human cognition generally. Copyright © 2011 Cognitive Science Society, Inc.
Analysis of students’ spatial thinking in geometry: 3D object into 2D representation
NASA Astrophysics Data System (ADS)
Fiantika, F. R.; Maknun, C. L.; Budayasa, I. K.; Lukito, A.
2018-05-01
The aim of this study is to examine the spatial thinking process of students in transforming a 3-dimensional (3D) object into a 2-dimensional (2D) representation. Spatial thinking is helpful in using maps, planning routes, designing floor plans, and creating art. Students can engage geometric ideas by using concrete models and drawing. Spatial thinking in this study is identified through geometrical problems of transforming a 3-dimensional object into a 2-dimensional object image. The problem was solved by the subjects and analyzed with reference to predetermined spatial thinking indicators. Two representative elementary school subjects were chosen based on mathematical ability and visual learning style. An explorative description through a qualitative approach was used in this study. The results of this study are: 1) the boy and the girl subject produced different representations of spatial thinking, and 2) each subject had their own way of finding the fastest way to draw a cube net.
Invariant Visual Object and Face Recognition: Neural and Computational Bases, and a Model, VisNet
Rolls, Edmund T.
2012-01-01
Neurophysiological evidence for invariant representations of objects and faces in the primate inferior temporal visual cortex is described. Then a computational approach to how invariant representations are formed in the brain is described that builds on the neurophysiology. A feature hierarchy model in which invariant representations can be built by self-organizing learning based on the temporal and spatial statistics of the visual input produced by objects as they transform in the world is described. VisNet can use temporal continuity in an associative synaptic learning rule with a short-term memory trace, and/or it can use spatial continuity in continuous spatial transformation learning which does not require a temporal trace. The model of visual processing in the ventral cortical stream can build representations of objects that are invariant with respect to translation, view, size, and also lighting. The model has been extended to provide an account of invariant representations in the dorsal visual system of the global motion produced by objects such as looming, rotation, and object-based movement. The model has been extended to incorporate top-down feedback connections to model the control of attention by biased competition in, for example, spatial and object search tasks. The approach has also been extended to account for how the visual system can select single objects in complex visual scenes, and how multiple objects can be represented in a scene. The approach has also been extended to provide, with an additional layer, for the development of representations of spatial scenes of the type found in the hippocampus. PMID:22723777
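The short-term memory trace rule mentioned above can be written compactly. The sketch below shows one commonly cited form of the update, in which the postsynaptic trace mixes the current response with its recent history so that temporally adjacent transforms of the same object strengthen the same synapses. The parameter names (eta, alpha) and the weight normalization are assumptions chosen for illustration, not VisNet's published implementation.

```python
# A toy illustration of a VisNet-style trace learning rule:
#   y_trace_t = (1 - eta) * y_t + eta * y_trace_{t-1}
#   dw        = alpha * outer(y_trace_t, x_t), followed by row normalization.
import numpy as np

def trace_rule_update(w, x, y, y_trace, eta=0.8, alpha=0.01):
    """One Hebbian step using the short-term memory trace of the output."""
    y_trace = (1.0 - eta) * y + eta * y_trace
    w = w + alpha * np.outer(y_trace, x)
    w = w / (np.linalg.norm(w, axis=1, keepdims=True) + 1e-12)  # keep weights bounded
    return w, y_trace

# Usage: present successive transforms x_t of one object, carrying the trace along.
rng = np.random.default_rng(0)
w = rng.random((10, 64))
y_trace = np.zeros(10)
for _ in range(5):                      # five transforms of the same object
    x = rng.random(64)
    y = w @ x                           # simple linear responses for the sketch
    w, y_trace = trace_rule_update(w, x, y, y_trace)
```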
Moran, Mika R; Eizenberg, Efrat; Plaut, Pnina
2017-06-06
The literature on environmental walkability to date has mainly focused on walking and related health outcomes. While previous studies suggest associations between walking and spatial knowledge, the associations between environmental walkability and spatial knowledge is yet to be explored. The current study addresses this lacuna in research by exploring children's mental representations of their home-school (h-s) route, vis.
Topological Schemas of Memory Spaces
Babichev, Andrey; Dabaghian, Yuri A.
2018-01-01
The hippocampal cognitive map—a neuronal representation of the spatial environment—has been widely discussed in the computational neuroscience literature for decades. However, more recent studies point out that the hippocampus plays a major role in producing yet another cognitive framework—the memory space—that incorporates not only spatial, but also non-spatial memories. Unlike the cognitive maps, the memory spaces, broadly understood as “networks of interconnections among the representations of events,” have not yet been studied from a theoretical perspective. Here we propose a mathematical approach that allows modeling memory spaces constructively, as epiphenomena of neuronal spiking activity, and thus to interlink several important notions of cognitive neurophysiology. First, we suggest that memory spaces have a topological nature—a hypothesis that allows treating both spatial and non-spatial aspects of hippocampal function on equal footing. We then model the hippocampal memory spaces in different environments and demonstrate that the resulting constructions naturally incorporate the corresponding cognitive maps and provide a wider context for interpreting spatial information. Lastly, we propose a formal description of the memory consolidation process that connects memory spaces to Morris' cognitive schemas, i.e., heuristic representations of the acquired memories used to explain the dynamics of learning and memory consolidation in a given environment. The proposed approach allows evaluating these constructs as the most compact representations of the memory space's structure. PMID:29740306
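As a toy illustration of how such a structure might be built constructively from spiking activity, the sketch below treats cells that fire together within a time bin as a simplex; the union of these simplices then approximates the topology of the represented space. The bin width and the one-maximal-simplex-per-bin construction are simplifying assumptions for illustration, not the authors' model.

```python
# A toy "coactivity complex" sketch: co-firing cells within a time bin define
# a simplex. Bin width and construction are illustrative assumptions.
import numpy as np

def coactivity_simplices(spike_times, t_max, bin_width=0.25):
    """spike_times: dict cell_id -> array of spike times (seconds)."""
    edges = np.arange(0.0, t_max + bin_width, bin_width)
    simplices = set()
    for lo, hi in zip(edges[:-1], edges[1:]):
        active = frozenset(c for c, ts in spike_times.items()
                           if np.any((ts >= lo) & (ts < hi)))
        if len(active) >= 2:
            simplices.add(active)       # one maximal simplex per time bin
    return simplices

# Example: cells 0 and 1 co-fire twice, cell 2 fires alone and adds nothing.
spikes = {0: np.array([0.10, 0.30, 1.10]),
          1: np.array([0.12, 0.29]),
          2: np.array([2.00])}
print(coactivity_simplices(spikes, t_max=3.0))
```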
Models as Feedback: Developing Representational Competence in Chemistry
ERIC Educational Resources Information Center
Padalkar, Shamin; Hegarty, Mary
2015-01-01
Spatial information in science is often expressed through representations such as diagrams and models. Learning the strengths and limitations of these representations and how to relate them are important aspects of developing scientific understanding, referred to as "representational competence." Diagram translation is particularly…
The Differential Role of Verbal and Spatial Working Memory in the Neural Basis of Arithmetic
Demir, Özlem Ece; Prado, Jérôme; Booth, James R.
2014-01-01
We examine the relations of verbal and spatial WM ability to the neural bases of arithmetic in school-age children. We independently localize brain regions subserving verbal versus spatial representations. For multiplication, higher verbal WM ability is associated with greater recruitment of the left temporal cortex, identified by the verbal localizer. For multiplication and subtraction, higher spatial WM ability is associated with greater recruitment of right parietal cortex, identified by the spatial localizer. Depending on their WM ability, children engage different neural systems that manipulate different representations to solve arithmetic problems. PMID:25144257
Milgram, Norton W; Head, E; Muggenburg, B; Holowachuk, D; Murphey, H; Estrada, J; Ikeda-Douglas, C J; Zicker, S C; Cotman, C W
2002-10-01
The landmark discrimination learning test can be used to assess the ability to utilize allocentric spatial information to locate targets. The present experiments examined the role of various factors on performance of a landmark discrimination learning task in beagle dogs. Experiments 1 and 2 looked at the effects of age and food composition. Experiments 3 and 4 were aimed at characterizing the cognitive strategies used in performance on this task and in long-term retention. Cognitively equivalent groups of old and young dogs were placed into either a test group maintained on food enriched with a broad spectrum of antioxidants and mitochondrial cofactors, or a control group maintained on a complete and balanced food formulated for adult dogs. Following a wash-in period, the dogs were tested on a series of problems, in which reward was obtained when the animal responded selectively to the object closest to a thin wooden block, which served as a landmark. In Experiment 1, dogs were first trained to respond to a landmark placed directly on top of a coaster, landmark 0 (L0). In the next phase of testing, the landmark was moved at successively greater distances (1, 4 or 10 cm) away from the reward object. Learning varied as a function of age group, food group, and task. The young dogs learned all of the tasks more quickly than the old dogs. The aged dogs on the enriched food learned L0 significantly more rapidly than aged dogs on the control food. A higher proportion of dogs on the enriched food learned the task when the distance was increased to 1 cm. Experiment 2 showed that accuracy decreased with increased distance between the reward object and landmark, and this effect was greater in old animals. Experiment 3 showed stability of performance, despite using a novel landmark and new locations, indicating that dogs learned the landmark concept. Experiment 4 found that age impaired long-term retention of the landmark task. These results indicate that allocentric spatial learning is impaired in an age-dependent manner in dogs, and that age also affects performance when the distance between the landmark and target is increased. In addition, these results both support a role of oxidative damage in the development of age-associated cognitive dysfunction and indicate that short-term administration of a food enriched with supplemental antioxidants and mitochondrial cofactors can partially reverse the deleterious effects of aging on cognition.
Orienting attention to locations in internal representations.
Griffin, Ivan C; Nobre, Anna C
2003-11-15
Three experiments investigated whether it is possible to orient selective spatial attention to internal representations held in working memory in a similar fashion to orienting to perceptual stimuli. In the first experiment, subjects were either cued to orient to a spatial location before a stimulus array was presented (pre-cue), cued to orient to a spatial location in working memory after the array was presented (retro-cue), or given no cueing information (neutral cue). The stimulus array consisted of four differently colored crosses, one in each quadrant. At the end of a trial, a colored cross (probe) was presented centrally, and subjects responded according to whether it had occurred in the array. There were equivalent patterns of behavioral costs and benefits of cueing for both pre-cues and retro-cues. A follow-up experiment used a peripheral probe stimulus requiring a decision about whether its color matched that of the item presented at the same location in the array. Replication of the behavioral costs and benefits of pre-cues and retro-cues in this experiment ruled out changes in response criteria as the only explanation for the effects. The third experiment used event-related potentials (ERPs) to compare the neural processes involved in orienting attention to a spatial location in an external versus an internal spatial representation. In this task, subjects responded according to whether a central probe stimulus occurred at the cued location in the array. There were both similarities and differences between ERPs to spatial cues orienting toward an external perceptual representation versus an internal spatial representation. Lateralized early posterior and later frontal negativities were observed for both pre- and retro-cues. Retro-cues also showed additional neural processes to be involved in orienting to an internal representation, including early effects over frontal electrodes.
The Process of Probability Problem Solving: Use of External Visual Representations
ERIC Educational Resources Information Center
Zahner, Doris; Corter, James E.
2010-01-01
We investigate the role of external inscriptions, particularly those of a spatial or visual nature, in the solution of probability word problems. We define a taxonomy of external visual representations used in probability problem solving that includes "pictures," "spatial reorganization of the given information," "outcome listings," "contingency…
Spatial Representation of Pitch Height: The SMARC Effect
ERIC Educational Resources Information Center
Rusconi, Elena; Kwan, Bonnie; Giordano, Bruno L.; Umilta, Carlo; Butterworth, Brian
2006-01-01
Through the preferential pairing of response positions to pitch, here we show that the internal representation of pitch height is spatial in nature and affects performance, especially in musically trained participants, when response alternatives are either vertically or horizontally aligned. The finding that our cognitive system maps pitch height…
Spatial Representation in Blind Children. 3: Effects of Individual Differences.
ERIC Educational Resources Information Center
Fletcher, Janet F.
1981-01-01
Data from a study of spatial representation in blind children were subjected to two stepwise regression analyses to determine the relationships between several subject related variables and responses to "map" (cognitive map) and "route" (sequential memory) questions about the position of furniture in a recently explored room. (Author/SBH)
Landscape Interpretation with Augmented Reality and Maps to Improve Spatial Orientation Skill
ERIC Educational Resources Information Center
Carbonell Carrera, Carlos; Bermejo Asensio, Luis A.
2017-01-01
Landscape interpretation is needed for navigating and determining an orientation: with traditional cartography, interpreting 3D topographic information from 2D landform representations to get self-location requires spatial orientation skill. Augmented reality technology allows a new way to interact with 3D landscape representation and thereby…
NASA Astrophysics Data System (ADS)
Ďuračiová, Renata; Rášová, Alexandra; Lieskovský, Tibor
2017-12-01
When combining spatial data from various sources, it is often important to determine the similarity or identity of spatial objects. Besides the differences in geometry, representations of spatial objects are inevitably more or less uncertain. Fuzzy set theory can be used both to model the uncertainty of spatial objects and to determine the identity, similarity, and inclusion of two sets as fuzzy identity, fuzzy similarity, and fuzzy inclusion. In this paper, we propose to use fuzzy measures to determine the similarity or identity of two uncertain spatial object representations in geographic information systems. Labelling the spatial objects by the degree of their similarity or inclusion measure makes the process of their identification more efficient. It reduces the need for manual control. This leads to a simpler process of updating spatial datasets from external data sources. We use this approach to get an accurate and correct representation of historical streams, which is derived from a contemporary digital elevation model, i.e. we identify the segments that are similar to the streams depicted on historical maps.
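As a concrete illustration, the sketch below computes a fuzzy similarity and a fuzzy inclusion measure between two uncertain spatial objects represented as membership grids with values in [0, 1]. The min/max (Zadeh) operators and the ratio-style measures are one common choice and may differ from the exact measures used in the paper.

```python
# A minimal sketch of fuzzy similarity and fuzzy inclusion between two
# uncertain spatial objects given as membership grids (values in [0, 1]).
import numpy as np

def fuzzy_similarity(a, b, eps=1e-12):
    """Jaccard-style similarity: |A intersect B| / |A union B| using min/max."""
    return np.minimum(a, b).sum() / (np.maximum(a, b).sum() + eps)

def fuzzy_inclusion(a, b, eps=1e-12):
    """Degree to which A is included in B: |A intersect B| / |A|."""
    return np.minimum(a, b).sum() / (a.sum() + eps)

# Example: two fuzzy buffers around slightly different stream centrelines.
a = np.array([[0.0, 0.4, 1.0, 0.4],
              [0.0, 0.3, 0.9, 0.3]])
b = np.array([[0.2, 0.8, 0.8, 0.2],
              [0.1, 0.7, 0.7, 0.1]])
print(fuzzy_similarity(a, b), fuzzy_inclusion(a, b))
```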
Recruitment of Hispanic Students into MIS Curricula
ERIC Educational Resources Information Center
McHaney, Roger; Martin, Dawne
2007-01-01
This paper provides several suggestions for Hispanic student recruitment and retention in MIS or other business curricula. Cultural considerations like allocentrism and familialism are discussed along with the situation at K-State. It is believed that the recruitment and retention of Hispanic students can be influenced positively by considering…
Spatial versus Tree Representations of Proximity Data.
ERIC Educational Resources Information Center
Pruzansky, Sandra; And Others
1982-01-01
Two-dimensional euclidean planes and additive trees are two of the most common representations of proximity data for multidimensional scaling. Guidelines for comparing these representations and discovering properties that could help identify which representation is more appropriate for a given data set are presented. (Author/JKS)
Spatially variant morphological restoration and skeleton representation.
Bouaynaya, Nidhal; Charif-Chefchaouni, Mohammed; Schonfeld, Dan
2006-11-01
The theory of spatially variant (SV) mathematical morphology is used to extend and analyze two important image processing applications: morphological image restoration and skeleton representation of binary images. For morphological image restoration, we propose the SV alternating sequential filters and SV median filters. We establish the relation of SV median filters to the basic SV morphological operators (i.e., SV erosions and SV dilations). For skeleton representation, we present a general framework for the SV morphological skeleton representation of binary images. We study the properties of the SV morphological skeleton representation and derive conditions for its invertibility. We also develop an algorithm for the implementation of the SV morphological skeleton representation of binary images. The latter algorithm is based on the optimal construction of the SV structuring element mapping designed to minimize the cardinality of the SV morphological skeleton representation. Experimental results show the dramatic improvement in the performance of the SV morphological restoration and SV morphological skeleton representation algorithms in comparison to their translation-invariant counterparts.
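A small sketch of spatially variant binary erosion follows: unlike translation-invariant erosion, the structuring element (here a disc whose radius varies with position) is a function of the pixel location. The radius mapping and function names are arbitrary illustrations, not the operators or the optimal structuring-element construction defined in the paper.

```python
# A toy sketch of spatially variant (SV) binary erosion with a
# position-dependent disc structuring element. Illustrative only.
import numpy as np

def sv_erosion(image, radius_map):
    """Binary SV erosion: a pixel stays 1 only if every pixel within its
    position-dependent disc radius is 1."""
    h, w = image.shape
    out = np.zeros_like(image)
    yy, xx = np.mgrid[0:h, 0:w]
    for i in range(h):
        for j in range(w):
            r = radius_map[i, j]
            disc = (yy - i) ** 2 + (xx - j) ** 2 <= r ** 2
            out[i, j] = 1 if image[disc].all() else 0
    return out

# Example: erode more aggressively toward the right edge of the image.
img = np.ones((32, 32), dtype=int)
img[10:14, 10:14] = 0
radii = np.tile(np.linspace(1, 4, 32), (32, 1))   # radius grows with column index
eroded = sv_erosion(img, radii)
```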
Subject-level differences in reported locations of cutaneous tactile and nociceptive stimuli
Steenbergen, Peter; Buitenweg, Jan R.; Trojan, Jörg; Klaassen, Bart; Veltink, Peter H.
2012-01-01
Recent theoretical advances on the topic of body representations have raised the question of whether spatial perception of touch and nociception involve the same representations. Various authors have established that subjective localizations of touch and nociception are displaced in a systematic manner. The relation between veridical stimulus locations and localizations can be described in the form of a perceptual map; these maps differ between subjects. Recently, evidence was found for a common set of body representations to underlie spatial perception of touch and slow and fast pain, which receive information from modality-specific primary representations. There are neurophysiological clues that the various cutaneous senses may not share the same primary representation. If this is the case, then differences in primary representations between touch and nociception may cause subject-dependent differences in perceptual maps of these modalities. We studied localization of tactile and nociceptive sensations on the forearm using electrocutaneous stimulation. The perceptual maps of these modalities differed at the group level. When assessed for individual subjects, the differences in localization varied in nature between subjects. The agreement of perceptual maps of the two modalities was moderate. These findings are consistent with a common internal body representation underlying spatial perception of touch and nociception. The subject-level differences suggest that in addition to these representations other aspects, possibly differences in primary representation and/or the influence of stimulus parameters, lead to differences in perceptual maps in individuals. PMID:23226126
Commonalities between Perception and Cognition
Tacca, Michela C.
2011-01-01
Perception and cognition are highly interrelated. Given the influence that these systems exert on one another, it is important to explain how perceptual representations and cognitive representations interact. In this paper, I analyze the similarities between visual perceptual representations and cognitive representations in terms of their structural properties and content. Specifically, I argue that the spatial structure underlying visual object representation displays systematicity – a property that is considered to be characteristic of propositional cognitive representations. To this end, I propose a logical characterization of visual feature binding as described by Treisman's Feature Integration Theory and argue that systematicity is not only a property of language-like representations, but also of spatially organized visual representations. Furthermore, I argue that if systematicity is taken to be a criterion to distinguish between conceptual and non-conceptual representations, then visual representations that display systematicity might count as an early type of conceptual representation. Showing these analogies between visual perception and cognition is an important step toward understanding the interface between the two systems. The ideas presented here might also set the stage for new empirical studies that directly compare binding (and other relational operations) in visual perception and higher cognition. PMID:22144974
Clark, Benjamin J; Harvey, Ryan E
2016-09-01
The anterior and lateral thalamus has long been considered to play an important role in spatial and mnemonic cognitive functions; however, it remains unclear whether each region makes a unique contribution to spatial information processing. We begin by reviewing evidence from anatomical studies and electrophysiological recordings which suggest that at least one of the functions of the anterior thalamus is to guide spatial orientation in relation to a global or distal spatial framework, while the lateral thalamus serves to guide behavior in relation to a local or proximal framework. We conclude by reviewing experimental work using targeted manipulations (lesion or neuronal silencing) of thalamic nuclei during spatial behavior and single-unit recordings from neuronal representations of space. Our summary of this literature suggests that although the evidence strongly supports a working model of spatial information processing involving the anterior thalamus, research regarding the role of the lateral thalamus is limited and requires further attention. We therefore identify a number of major gaps in this research and suggest avenues of future study that could potentially solidify our understanding of the relative roles of anterior and lateral thalamic regions in spatial representation and memory. Copyright © 2016 Elsevier Inc. All rights reserved.
De Sá Teixeira, Nuno Alexandre
2014-12-01
Given its conspicuous nature, gravity has been acknowledged by several research lines as a prime factor in structuring the spatial perception of one's environment. One such line of enquiry has focused on errors in spatial localization aimed at the vanishing location of moving objects - it has been systematically reported that humans mislocalize spatial positions forward, in the direction of motion (representational momentum) and downward in the direction of gravity (representational gravity). Moreover, spatial localization errors were found to evolve dynamically with time in a pattern congruent with an anticipated trajectory (representational trajectory). The present study attempts to ascertain the degree to which vestibular information plays a role in these phenomena. Human observers performed a spatial localization task while tilted to varying degrees and referring to the vanishing locations of targets moving along several directions. A Fourier decomposition of the obtained spatial localization errors revealed that although spatial errors were increased "downward" mainly along the body's longitudinal axis (idiotropic dominance), the degree of misalignment between the latter and physical gravity modulated the time course of the localization responses. This pattern is surmised to reflect increased uncertainty about the internal model when faced with conflicting cues regarding the perceived "downward" direction.
Auclair, Laurent; Jambaqué, Isabelle
2015-01-01
This study addresses the relation between lexico-semantic body knowledge (i.e., body semantics) and spatial body representation (i.e., structural body representation) by analyzing naming performances as a function of body structural topography. One hundred and forty-one children ranging from 5 years 2 months to 10 years 5 months old were asked to provide a lexical label for isolated body part pictures. We compared the children's naming performances according to the location of the body parts (body parts vs. head features and also upper vs. lower limbs) or to their involvement in motor skills (distal segments, joints, and broader body parts). The results showed that the children's naming performance was better for facial body parts than for other body parts. Furthermore, it was found that the naming of body parts was better for body parts related to action. These findings suggest that the development of a spatial body representation shapes the elaboration of semantic body representation processing. Moreover, this influence was not limited to younger children. In our discussion of these results, we focus on the important role of action in the development of body representations and semantic organization.
In (or outside of) your neck of the woods: laterality in spatial body representation
Hach, Sylvia; Schütz-Bosbach, Simone
2014-01-01
Besides language, space is to date one of the most widely recognized lateralized systems. For example, it has been shown that even mental representations of space and the spatial representation of abstract concepts display lateralized characteristics. For the most part, this body of literature describes space as distal, that is, as something outside of the observer or actor. What has been strangely absent from the literature on the whole, and specifically from the spatial literature until recently, is the most proximal space imaginable – the body. In this review, we will summarize three strands of literature showing laterality in body representations. First, evidence of hemispheric asymmetries in body space in health and, second, in body space in disease will be examined. Third, studies pointing to differential contributions of the right and left hemisphere to illusory body (space) perception will be summarized. Together these studies show hemispheric asymmetries to be evident in body representations at the level of simple somatosensory and proprioceptive representations. We propose a novel working hypothesis, whereby neural systems dedicated to processing action-oriented information about one's own body space may ontogenetically serve as a template for the perception of the external world. PMID:24600421
The Role of Visual Experience on the Representation and Updating of Novel Haptic Scenes
ERIC Educational Resources Information Center
Pasqualotto, Achille; Newell, Fiona N.
2007-01-01
We investigated the role of visual experience on the spatial representation and updating of haptic scenes by comparing recognition performance across sighted, congenitally and late blind participants. We first established that spatial updating occurs in sighted individuals to haptic scenes of novel objects. All participants were required to…
The U.S. EPA’s National Aquatic Resource Surveys (NARS) require a consistent spatial representation of the resource target populations being monitored (i.e., rivers and streams, lakes, coastal waters, and wetlands). A sample frame is the GIS representation of this target popula...
Qualitative Differences in the Representation of Spatial Relations for Different Object Classes
ERIC Educational Resources Information Center
Cooper, Eric E.; Brooks, Brian E.
2004-01-01
Two experiments investigated whether the representations used for animal, produce, and object recognition code spatial relations in a similar manner. Experiment 1 tested the effects of planar rotation on the recognition of animals and nonanimal objects. Response times for recognizing animals followed an inverted U-shaped function, whereas those…
Effects of Spatial Cueing on Representational Momentum
ERIC Educational Resources Information Center
Hubbard, Timothy L.; Kumar, Anuradha Mohan; Carp, Charlotte L.
2009-01-01
Effects of a spatial cue on representational momentum were examined. If a cue was present during or after target motion and indicated the location at which the target would vanish or had vanished, forward displacement of that target decreased. The decrease in forward displacement was larger when cues were present after target motion than when cues…
Gurunathan, Rajalakshmi; Van Emden, Bernard; Panchanathan, Sethuraman; Kumar, Sudhir
2004-01-01
Background: Modern developmental biology relies heavily on the analysis of embryonic gene expression patterns. Investigators manually inspect hundreds or thousands of expression patterns to identify those that are spatially similar and to ultimately infer potential gene interactions. However, the rapid accumulation of gene expression pattern data over the last two decades, facilitated by high-throughput techniques, has produced a need for the development of efficient approaches for direct comparison of images, rather than their textual descriptions, to identify spatially similar expression patterns. Results: The effectiveness of the Binary Feature Vector (BFV) and Invariant Moment Vector (IMV) based digital representations of the gene expression patterns in finding biologically meaningful patterns was compared for a small (226 images) and a large (1819 images) dataset. For each dataset, an ordered list of images, with respect to a query image, was generated to identify overlapping and similar gene expression patterns, in a manner comparable to what a developmental biologist might do. The results showed that the BFV representation consistently outperforms the IMV representation in finding biologically meaningful matches when spatial overlap of the gene expression pattern and the genes involved are considered. Furthermore, we explored the value of conducting image-content based searches in a dataset where individual expression components (or domains) of multi-domain expression patterns were also included separately. We found that this technique improves performance of both IMV and BFV based searches. Conclusions: We conclude that the BFV representation consistently produces a more extensive and better list of biologically useful patterns than the IMV representation. The high quality of results obtained scales well as the search database becomes larger, which encourages efforts to build automated image query and retrieval systems for spatial gene expression patterns. PMID:15603586
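As a hedged sketch of the general idea behind a binary-feature-vector search (not the implementation evaluated in the paper), each image can be reduced to a coarse binary grid and candidate images ranked by their spatial overlap with the query. The grid size, threshold, similarity measure, and function names below are illustrative assumptions.

```python
import numpy as np

def binary_feature_vector(image, grid=(16, 16), threshold=0.5):
    """Coarse binary feature vector: 1 where the mean intensity in a grid cell
    exceeds the threshold, 0 elsewhere (a simplified stand-in for the BFV idea)."""
    h, w = image.shape
    gh, gw = grid
    bfv = np.zeros(gh * gw, dtype=np.uint8)
    for i in range(gh):
        for j in range(gw):
            cell = image[i * h // gh:(i + 1) * h // gh,
                         j * w // gw:(j + 1) * w // gw]
            bfv[i * gw + j] = cell.mean() > threshold
    return bfv

def jaccard(a, b):
    """Spatial-overlap score between two binary vectors."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def rank_database(query_img, database_imgs):
    """Return database indices ordered by decreasing overlap with the query,
    mimicking the ordered image lists described in the abstract."""
    q = binary_feature_vector(query_img)
    scores = [jaccard(q, binary_feature_vector(img)) for img in database_imgs]
    return np.argsort(scores)[::-1]
```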
Lambrey, Simon; Berthoz, Alain
2007-09-01
Numerous data in the literature provide evidence for gender differences in spatial orientation. In particular, it has been suggested that spatial representations of large-scale environments are more accurate in terms of metric information in men than in women but are richer in landmark information in women than in men. One explanatory hypothesis is that men and women differ in terms of navigational processes they used in daily life. The present study investigated this hypothesis by distinguishing two navigational processes: spatial updating by self-motion and landmark-based orientation. Subjects were asked to perform a pointing task in three experimental conditions, which differed in terms of reliability of the external landmarks that could be used. Two groups of subjects were distinguished, a mobile group and an immobile group, in which spatial updating of environmental locations did not have the same degree of importance for the correct performance of the pointing task. We found that men readily relied on an internal egocentric representation of where landmarks were expected to be in order to perform the pointing task, a representation that could be updated during self-motion (spatial updating). In contrast, women seemed to take their bearings more readily on the basis of the stable landmarks of the external world. We suggest that this gender difference in spatial orientation is not due to differences in information processing abilities but rather due to the differences in higher level strategies.
Neural representation of objects in space: a dual coding account.
Humphreys, G W
1998-01-01
I present evidence on the nature of object coding in the brain and discuss the implications of this coding for models of visual selective attention. Neuropsychological studies of task-based constraints on: (i) visual neglect; and (ii) reading and counting, reveal the existence of parallel forms of spatial representation for objects: within-object representations, where elements are coded as parts of objects, and between-object representations, where elements are coded as independent objects. Aside from these spatial codes for objects, however, the coding of visual space is limited. We are extremely poor at remembering small spatial displacements across eye movements, indicating (at best) impoverished coding of spatial position per se. Also, effects of element separation on spatial extinction can be eliminated by filling the space with an occluding object, indicating that spatial effects on visual selection are moderated by object coding. Overall, there are separate limits on visual processing reflecting: (i) the competition to code parts within objects; (ii) the small number of independent objects that can be coded in parallel; and (iii) task-based selection of whether within- or between-object codes determine behaviour. Between-object coding may be linked to the dorsal visual system while parallel coding of parts within objects takes place in the ventral system, although there may additionally be some dorsal involvement either when attention must be shifted within objects or when explicit spatial coding of parts is necessary for object identification. PMID:9770227
Drummond, Leslie; Shomstein, Sarah
2013-01-01
The relative contributions of objects (i.e., object-based representations) and underlying space (i.e., space-based representations) to attentional prioritization and selection remain unclear. In most experimental circumstances the two representations overlap, so their respective contributions cannot be evaluated. Here, a dynamic version of the two-rectangle paradigm allowed for a successful de-coupling of spatial and object representations. Space-based (cued spatial location), cued end of the object, and object-based (locations within the cued object) effects were sampled at several timepoints following the cue, with high or low certainty as to target location. In the high-uncertainty condition, spatial benefits prevailed throughout most of the timecourse, as evidenced by facilitatory and inhibitory effects. Additionally, the cued end of the object, rather than the whole object, received the attentional benefit. When target location was predictable (low-uncertainty manipulation), only probabilities guided selection (i.e., as evidenced by a benefit for the statistically biased location). These results suggest that with high spatial uncertainty, all available information present within the stimulus display is used for the purposes of attentional selection (e.g., spatial locations, cued end of the object), albeit to varying degrees and at different time points. However, as certainty increases, only spatial certainty guides selection (i.e., object ends and whole objects are filtered out). Taken together, these results further elucidate the contributing roles of space- and object-based representations in attentional guidance. PMID:24367302
Struiksma, Marijn E.; Noordzij, Matthijs L.; Neggers, Sebastiaan F. W.; Bosker, Wendy M.; Postma, Albert
2011-01-01
Neuropsychological and imaging studies have shown that the left supramarginal gyrus (SMG) is specifically involved in processing spatial terms (e.g. above, left of), which locate places and objects in the world. The current fMRI study focused on the nature and specificity of representing spatial language in the left SMG by combining behavioral and neuronal activation data in blind and sighted individuals. Data from the blind provide an elegant way to test the supramodal representation hypothesis, i.e. abstract codes representing spatial relations yielding no activation differences between blind and sighted. Indeed, the left SMG was activated during spatial language processing in both blind and sighted individuals implying a supramodal representation of spatial and other dimensional relations which does not require visual experience to develop. However, in the absence of vision functional reorganization of the visual cortex is known to take place. An important consideration with respect to our finding is the amount of functional reorganization during language processing in our blind participants. Therefore, the participants also performed a verb generation task. We observed that only in the blind occipital areas were activated during covert language generation. Additionally, in the first task there was functional reorganization observed for processing language with a high linguistic load. As the visual cortex was not specifically active for spatial contents in the first task, and no reorganization was observed in the SMG, the latter finding further supports the notion that the left SMG is the main node for a supramodal representation of verbal spatial relations. PMID:21935391
Retrieving Enduring Spatial Representations after Disorientation
Li, Xiaoou; Mou, Weimin; McNamara, Timothy P.
2012-01-01
Four experiments tested whether there are enduring spatial representations of objects’ locations in memory. Previous studies have shown that under certain conditions the internal consistency of pointing to objects using memory is disrupted by disorientation. This disorientation effect has been attributed to an absence of or to imprecise enduring spatial representations of objects’ locations. Experiment 1 replicated the standard disorientation effect. Participants learned locations of objects in an irregular layout and then pointed to objects after physically turning to face an object and after disorientation. The expected disorientation was observed. In Experiment 2, after disorientation, participants were asked to imagine they were facing the original learning direction and then physically turned to adopt the test orientation. In Experiment 3, after disorientation, participants turned to adopt the test orientation and then were informed of the original viewing direction by the experimenter. A disorientation effect was not observed in Experiment 2 or 3. In Experiment 4, after disorientation, participants turned to face the test orientation but were not told the original learning orientation. As in Experiment 1, a disorientation effect was observed. These results suggest that there are enduring spatial representations of objects’ locations specified in terms of a spatial reference direction parallel to the learning view, and that the disorientation effect is caused by uncertainty in recovering the spatial reference direction relative to the testing orientation following disorientation. PMID:22682765
Professional mathematicians differ from controls in their spatial-numerical associations.
Cipora, Krzysztof; Hohol, Mateusz; Nuerk, Hans-Christoph; Willmes, Klaus; Brożek, Bartosz; Kucharzyk, Bartłomiej; Nęcka, Edward
2016-07-01
While mathematically impaired individuals have been shown to have deficits in all kinds of basic numerical representations, among them spatial-numerical associations, little is known about individuals with exceptionally high math expertise. They might have a more abstract magnitude representation or more flexible spatial associations, so that no automatic left/small and right/large spatial-numerical association is elicited. To pursue this question, we examined the Spatial Numerical Association of Response Codes (SNARC) effect in professional mathematicians, who were compared to two control groups: professionals who use advanced math in their work but are not mathematicians (mostly engineers), and matched controls. Contrary to both control groups, mathematicians did not reveal a SNARC effect. The group differences could not be accounted for by differences in mean response speed, response variance, intelligence, or a general tendency not to show spatial-numerical associations. We propose that professional mathematicians possess more abstract and/or spatially very flexible numerical representations and therefore exhibit no, or a largely reduced, default left-to-right spatial-numerical orientation as indexed by the SNARC effect, but we also discuss other possible accounts. We argue that this comparison with professional mathematicians also tells us about the nature of spatial-numerical associations in persons with much less mathematical expertise or knowledge.
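For readers unfamiliar with the measure, the SNARC effect is conventionally quantified with a per-participant regression of the right-hand minus left-hand response-time difference on number magnitude, a reliably negative slope indicating the default left/small, right/large association. The sketch below shows that standard analysis on invented numbers; it is not the authors' exact analysis pipeline.

```python
import numpy as np

def snarc_slope(digits, rt_left, rt_right):
    """Per-participant SNARC slope: regress (right-key RT - left-key RT) on
    number magnitude; a negative slope is the classic SNARC signature.

    digits   : magnitudes presented (e.g. 1..9 excluding 5)
    rt_left  : mean RT per digit for left-key responses (ms)
    rt_right : mean RT per digit for right-key responses (ms)
    """
    drt = np.asarray(rt_right, dtype=float) - np.asarray(rt_left, dtype=float)
    slope, _ = np.polyfit(np.asarray(digits, dtype=float), drt, 1)
    return slope

# Hypothetical participant: small numbers faster with the left key, large with the right.
digits = np.array([1, 2, 3, 4, 6, 7, 8, 9])
rt_left = np.array([520, 525, 530, 535, 545, 550, 555, 560])
rt_right = np.array([560, 555, 550, 545, 535, 530, 525, 520])
print(snarc_slope(digits, rt_left, rt_right))  # negative -> SNARC-like pattern
```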
Sharp wave ripples during learning stabilize hippocampal spatial map
Roux, Lisa; Hu, Bo; Eichler, Ronny; Stark, Eran; Buzsáki, György
2017-01-01
Cognitive representation of the environment requires a stable hippocampal map but the mechanisms maintaining map representation are unknown. Because sharp wave-ripples (SPW-R) orchestrate both retrospective and prospective spatial information, we hypothesized that disrupting neuronal activity during SPW-Rs affects spatial representation. Mice learned daily a new set of three goal locations on a multi-well maze. We used closed-loop SPW-R detection at goal locations to trigger optogenetic silencing of a subset of CA1 pyramidal neurons. Control place cells (non-silenced or silenced outside SPW-Rs) largely maintained the location of their place fields after learning and showed increased spatial information content. In contrast, the place fields of SPW-R-silenced place cells remapped, and their spatial information remained unaltered. SPW-R silencing did not impact the firing rates or the proportions of place cells. These results suggest that interference with SPW-R-associated activity during learning prevents the stabilization and refinement of the hippocampal map. PMID:28394323
Louwerse, Max M; Benesh, Nick
2012-01-01
Spatial mental representations can be derived from linguistic and non-linguistic sources of information. This study tested whether these representations could be formed from statistical linguistic frequencies of city names, and to what extent participants differed in their performance when they estimated spatial locations from language or maps. In a computational linguistic study, we demonstrated that co-occurrences of cities in Tolkien's Lord of the Rings trilogy and The Hobbit predicted the authentic longitude and latitude of those cities in Middle Earth. In a human study, we showed that human spatial estimates of the location of cities were very similar regardless of whether participants read Tolkien's texts or memorized a map of Middle Earth. However, text-based location estimates obtained from statistical linguistic frequencies better predicted the human text-based estimates than the human map-based estimates. These findings suggest that language encodes spatial structure of cities, and that human cognitive map representations can come from implicit statistical linguistic patterns, from explicit non-linguistic perceptual information, or from both. Copyright © 2012 Cognitive Science Society, Inc.
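A hedged sketch of the general approach follows: count how often pairs of city names co-occur within a text window, convert the counts to dissimilarities, and scale them into a two-dimensional layout. The tokenization, window size, count-to-distance transform, and function names are illustrative assumptions rather than the statistical procedure used in the study.

```python
import numpy as np
from sklearn.manifold import MDS

def cooccurrence_matrix(tokens, cities, window=50):
    """Count how often each pair of city names occurs within `window` tokens."""
    idx = {c: i for i, c in enumerate(cities)}
    positions = {c: [i for i, t in enumerate(tokens) if t == c] for c in cities}
    counts = np.zeros((len(cities), len(cities)))
    for a in cities:
        for b in cities:
            if a >= b:
                continue
            n = sum(abs(pa - pb) <= window
                    for pa in positions[a] for pb in positions[b])
            counts[idx[a], idx[b]] = counts[idx[b], idx[a]] = n
    return counts

def estimate_layout(tokens, cities, window=50):
    """Turn co-occurrence counts into a 2-D layout: frequent co-mention -> small distance."""
    counts = cooccurrence_matrix(tokens, cities, window)
    dist = 1.0 / (1.0 + counts)
    np.fill_diagonal(dist, 0.0)
    mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
    return mds.fit_transform(dist)
```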
The Parietal Cortex in Sensemaking: The Dissociation of Multiple Types of Spatial Information
Sun, Yanlong; Wang, Hongbin
2013-01-01
According to the data-frame theory, sensemaking is a macrocognitive process in which people try to make sense of or explain their observations by processing a number of explanatory structures called frames until the observations and frames become congruent. During the sensemaking process, the parietal cortex has been implicated in various cognitive tasks for the functions related to spatial and temporal information processing, mathematical thinking, and spatial attention. In particular, the parietal cortex plays important roles by extracting multiple representations of magnitudes at the early stages of perceptual analysis. By a series of neural network simulations, we demonstrate that the dissociation of different types of spatial information can start early with a rather similar structure (i.e., sensitivity on a common metric), but accurate representations require specific goal-directed top-down controls due to the interference in selective attention. Our results suggest that the roles of the parietal cortex rely on the hierarchical organization of multiple spatial representations and their interactions. The dissociation and interference between different types of spatial information are essentially the result of the competition at different levels of abstraction. PMID:23710165
NASA Astrophysics Data System (ADS)
Nijssen, Bart; Clark, Martyn; Mizukami, Naoki; Chegwidden, Oriana
2016-04-01
Most existing hydrological models use a fixed representation of landscape structure. For example, high-resolution, spatially-distributed models may use grid cells that exchange moisture through the saturated subsurface or may divide the landscape into hydrologic response units that only exchange moisture through surface channels. Alternatively, many regional models represent the landscape through coarse elements that do not model any moisture exchange between these model elements. These spatial organizations are often represented at a low level in the model code and its data structures, which makes it difficult to evaluate different landscape representations using the same hydrological model. Instead, such experimentation requires the use of multiple, different hydrological models, which in turn complicates the analysis, because differences in model outcomes are no longer constrained by differing spatial representations. This inflexibility in the representation of landscape structure also limits a model's capability for scaling local processes to regional outcomes. In this study, we used the Structure for Unifying Multiple Modeling Alternatives (SUMMA) to evaluate different model spatial configurations to represent landscape structure and to evaluate scaling behavior. SUMMA can represent the moisture exchange between arbitrarily shaped landscape elements in a number of different ways, while using the same model parameterizations for vertical fluxes. This allows us to isolate the effects of changes in landscape representations on modeled hydrological fluxes and states. We examine the effects of spatial configuration in Reynolds Creek, Idaho, USA, which is a research watershed with gaged areas from 1-20 km². We then use the same modeling system to evaluate scaling behavior in simulated hydrological fluxes in the Columbia River Basin, Pacific Northwest, USA. This basin drains more than 500,000 km² and includes the Reynolds Creek Watershed.
ERIC Educational Resources Information Center
Jian, Yu-Cin; Wu, Chao-Jung
2015-01-01
We investigated strategies used by readers when reading a science article with a diagram and assessed whether semantic and spatial representations were constructed while reading the diagram. Seventy-one undergraduate participants read a scientific article while tracking their eye movements and then completed a reading comprehension test. Our…
ERIC Educational Resources Information Center
Demir, Özlem Ece; Prado, Jérôme; Booth, James R.
2015-01-01
We examined the relation of parental socioeconomic status (SES) to the neural bases of subtraction in school-age children (9- to 12-year-olds). We independently localized brain regions subserving verbal versus visuo-spatial representations to determine whether the parental SES-related differences in children's reliance on these neural…
Audio Motor Training at the Foot Level Improves Space Representation
Aggius-Vella, Elena; Campus, Claudio; Finocchietti, Sara; Gori, Monica
2017-01-01
Spatial representation is developed thanks to the integration of visual signals with the other senses. It has been shown that the lack of vision compromises the development of some spatial representations. In this study we tested the effect of a new rehabilitation device called ABBI (Audio Bracelet for Blind Interaction) to improve space representation. ABBI produces audio feedback linked to body movement. Previous studies from our group showed that this device improves the representation of space around the upper part of the body in early blind adults. Here we evaluate whether the audio-motor feedback produced by ABBI can also improve audio spatial representation of sighted individuals in the space around the legs. Forty-five blindfolded sighted subjects participated in the study, subdivided into three experimental groups. An audio space localization (front-back discrimination) task was performed twice by all groups of subjects, before and after different kinds of training. One group (experimental) performed audio-motor training with the ABBI device placed on the foot. Another group (control) performed free motor activity without audio feedback associated with body movement. The third group (control) passively listened to the ABBI sound moved at foot level by the experimenter without producing any body movement. Results showed that only the experimental group, which performed the training with the audio-motor feedback, showed an improvement in accuracy for sound discrimination. No improvement was observed for the two control groups. These findings suggest that audio-motor training with ABBI also improves audio space perception in the space around the legs in sighted individuals. This result provides important input for the rehabilitation of space representations in the lower part of the body. PMID:29326564
Unbounding the mental number line—new evidence on children's spatial representation of numbers
Link, Tanja; Huber, Stefan; Nuerk, Hans-Christoph; Moeller, Korbinian
2014-01-01
Number line estimation (i.e., indicating the position of a given number on a physical line) is a standard assessment of children's spatial representation of number magnitude. Importantly, there is an ongoing debate about how far the bounded task version with start and endpoint given (e.g., 0 and 100) might induce specific estimation strategies and thus may not allow for unbiased inferences on the underlying representation. Recently, a new unbounded version of the task was suggested with only the start point and a unit fixed (e.g., the distance from 0 to 1). In adults this task provided a less biased index of the spatial representation of number magnitude. Yet, so far no data from children are available for the unbounded number line estimation task. Therefore, we conducted a cross-sectional study on primary school children performing both the bounded and the unbounded versions of the task. We observed clear evidence for systematic strategic influences (i.e., the consideration of reference points) in the bounded number line estimation task for children older than grade two, whereas there were no such indications for the unbounded version for any one of the age groups. In summary, the current data corroborate that the unbounded number line estimation task is a valuable tool for assessing children's spatial representation of number magnitude in a systematic and unbiased manner. Yet, similar results for the bounded and the unbounded version of the task for first- and second-graders may indicate that both versions of the task might assess the same underlying representation for relatively younger children, at least in number ranges familiar to the children assessed. This is of particular importance for inferences about the nature and development of children's magnitude representation. PMID:24478734
Using graph approach for managing connectivity in integrative landscape modelling
NASA Astrophysics Data System (ADS)
Rabotin, Michael; Fabre, Jean-Christophe; Libres, Aline; Lagacherie, Philippe; Crevoisier, David; Moussa, Roger
2013-04-01
In cultivated landscapes, many landscape elements such as field boundaries, ditches or banks strongly impact water flows and mass and energy fluxes. At the watershed scale, these impacts are strongly conditioned by the connectivity of these landscape elements. An accurate representation of these elements and of their complex spatial arrangements is therefore of great importance for modelling and predicting these impacts. We developed, in the framework of the OpenFLUID platform (Software Environment for Modelling Fluxes in Landscapes), a digital landscape representation that takes into account the spatial variability and connectivity of diverse landscape elements through the application of graph theory concepts. The proposed landscape representation considers spatial units connected together to represent flux exchanges or any other information exchanges. Each spatial unit of the landscape is represented as a node of a graph and relations between units as graph connections. The connections are of two types - parent-child connections and up/downstream connections - which allows OpenFLUID to handle hierarchical graphs. Connections can also carry information, and the graph can evolve during simulation (modification of connections or elements). This graph approach allows greater genericity in landscape representation, supports the management of complex connections, and facilitates the development of new landscape representation algorithms. Graph management is fully operational in OpenFLUID for developers and modelers, and several graph tools are available, such as graph traversal algorithms and graph displays. The graph representation can be managed (i) manually by the user (for example in simple catchments) through XML-based files in an easily editable and readable format, or (ii) by using methods of the OpenFLUID-landr library, an OpenFLUID library relying on common open-source spatial libraries (ogr vector, geos topologic vector and gdal raster libraries). The OpenFLUID-landr library has been developed (i) to be usable without GIS expert skills (common GIS formats can be read and simplified spatial management is provided), (ii) to make it easy to develop rules of landscape discretization and graph creation adapted to the requirements of spatialized models, and (iii) to allow model developers to manage dynamic and complex spatial topology. Graph management in OpenFLUID is illustrated with (i) examples of hydrological modelling of complex farmed landscapes and (ii) the new implementation of the Geo-MHYDAS tool, based on the OpenFLUID-landr library, which discretizes a landscape and creates the graph structure required by the MHYDAS model.
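A minimal sketch of the kind of graph structure described above, written with networkx rather than the OpenFLUID API: spatial units are nodes and the two connection types (parent-child and up/downstream) are labeled edges. All unit names, attributes, and the helper function are invented for illustration.

```python
import networkx as nx

landscape = nx.DiGraph()

# Spatial units as nodes, each carrying its own attributes.
landscape.add_node("field_1", kind="field", area_ha=2.3)
landscape.add_node("ditch_1", kind="ditch", length_m=140.0)
landscape.add_node("subcatchment_A", kind="subcatchment")

# Up/downstream connections carry flux (or other information) exchanges.
landscape.add_edge("field_1", "ditch_1", relation="downstream")

# Parent-child connections express the spatial hierarchy.
landscape.add_edge("subcatchment_A", "field_1", relation="parent-child")
landscape.add_edge("subcatchment_A", "ditch_1", relation="parent-child")

def downstream_of(graph, unit):
    """Follow only up/downstream edges, ignoring the hierarchical ones."""
    return [v for _, v, d in graph.out_edges(unit, data=True)
            if d["relation"] == "downstream"]

print(downstream_of(landscape, "field_1"))  # ['ditch_1']
```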
ERIC Educational Resources Information Center
Frischen, Alexandra; Loach, Daniel; Tipper, Steven P.
2009-01-01
Selective attention is usually considered an egocentric mechanism, biasing sensory information based on its behavioural relevance to oneself. This study provides evidence for an equivalent allocentric mechanism that allows passive observers to selectively attend to information from the perspective of another person. In a negative priming task,…
Children's Use of Allocentric Cues in Visually- and Memory-Guided Reach Space
ERIC Educational Resources Information Center
Cordova, Alberto; Gabbard, Carl
2012-01-01
Theory suggests that the vision-for-perception and vision-for-action processing streams operate under very different temporal constraints (Glover, 2004; Goodale, Jackobson, & Keillor, 1994; Graham, Bradshaw, & Davis, 1998; Hu, Eagleson, & Goodale, 1999). With the present study, children and young adults were asked to estimate how far a cued target…
Development of Visuospatial Attention in Typically Developing Children
Ickx, Gaétan; Bleyenheuft, Yannick; Hatem, Samar M.
2017-01-01
The aim of the present study is to investigate the development of visuospatial attention in typically developing children and to propose reference values for children for the following six visuospatial attention tests: star cancellation, Ogden figure, reading test, line bisection, proprioceptive pointing and visuo-proprioceptive pointing. Data of 159 children attending primary or secondary school in the Fédération Wallonie Bruxelles (Belgium) were analyzed. Results showed that the children's performance on star cancellation, Ogden figure and reading test improved until the age of 13 years, whereas their performance on proprioceptive pointing, visuo-proprioceptive pointing and line bisection was stable with increasing age. These results suggest that different types of visuospatial attention tasks do not follow the same developmental trajectories. This dissociation is strengthened by the lack of correlation observed between tests assessing egocentric and allocentric visuospatial attention, except for the star cancellation test (egocentric) and the Ogden figure copy (ego- and allocentric). Reference values are proposed that may be useful to examine children with clinical disorders of visuospatial attention. PMID:29270138
Autonomy and interdependence: beliefs of Brazilian mothers from state capitals and small towns.
Vieira, Mauro Luis; Seidl-de-Moura, Maria Lucia; Macarini, Samira Mafioletti; Martins, Gabriela Dal Forno; Lordelo, Eulina da Rocha; Tokumaru, Rosana Suemi; Oliva, Angela Donate
2010-11-01
This study aimed to investigate characteristics of Brazilian mothers' beliefs system, in the dimensions of autonomy and interdependence. A group of 600 women, half from state capitals and half from small towns, participated in the study. They were individually interviewed with Scales of Allocentrism, Beliefs about Parental Practices and Socialization Goals. Paired and Independent samples t tests and Multivariate GLM were performed. The results indicate that although mothers from both contexts value autonomy, mothers inhabiting small towns considered the relational dimension as the most important; whereas mothers inhabiting capitals valued equally both dimensions, either in their beliefs about practices or in the socialization goals for their children. Mothers from small towns have a higher mean score for allocentrism than mothers living in capitals. Thus, place of residence proved to be a relevant variable in the modulation of maternal beliefs. Educational level was not a significant factor in the variables considered and with this group of mothers. The study results are discussed in terms of their contribution to the understanding of the complex relationship between dimensions of autonomy and interdependence in mothers' beliefs system.
Embedded Data Representations.
Willett, Wesley; Jansen, Yvonne; Dragicevic, Pierre
2017-01-01
We introduce embedded data representations, the use of visual and physical representations of data that are deeply integrated with the physical spaces, objects, and entities to which the data refers. Technologies like lightweight wireless displays, mixed reality hardware, and autonomous vehicles are making it increasingly easier to display data in-context. While researchers and artists have already begun to create embedded data representations, the benefits, trade-offs, and even the language necessary to describe and compare these approaches remain unexplored. In this paper, we formalize the notion of physical data referents - the real-world entities and spaces to which data corresponds - and examine the relationship between referents and the visual and physical representations of their data. We differentiate situated representations, which display data in proximity to data referents, and embedded representations, which display data so that it spatially coincides with data referents. Drawing on examples from visualization, ubiquitous computing, and art, we explore the role of spatial indirection, scale, and interaction for embedded representations. We also examine the tradeoffs between non-situated, situated, and embedded data displays, including both visualizations and physicalizations. Based on our observations, we identify a variety of design challenges for embedded data representation, and suggest opportunities for future research and applications.
Theoretical foundations of spatially-variant mathematical morphology part ii: gray-level images.
Bouaynaya, Nidhal; Schonfeld, Dan
2008-05-01
In this paper, we develop a spatially-variant (SV) mathematical morphology theory for gray-level signals and images in the Euclidean space. The proposed theory preserves the geometrical concept of the structuring function, which provides the foundation of classical morphology and is essential in signal and image processing applications. We define the basic SV gray-level morphological operators (i.e., SV gray-level erosion, dilation, opening, and closing) and investigate their properties. We demonstrate the ubiquity of SV gray-level morphological systems by deriving a kernel representation for a large class of systems, called V-systems, in terms of the basic SV gray-level morphological operators. A V-system is defined to be a gray-level operator, which is invariant under gray-level (vertical) translations. Particular attention is focused on the class of SV flat gray-level operators. The kernel representation for increasing V-systems is a generalization of Maragos' kernel representation for increasing and translation-invariant function-processing systems. A representation of V-systems in terms of their kernel elements is established for increasing and upper-semi-continuous V-systems. This representation unifies a large class of spatially-variant linear and non-linear systems under the same mathematical framework. Finally, simulation results show the potential power of the general theory of gray-level spatially-variant mathematical morphology in several image analysis and computer vision applications.
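The sketch below illustrates SV gray-level erosion with a per-pixel structuring function, the basic operator from which the kernel representations discussed above are built; the `sf_map` interface and the border-adaptive example are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def sv_gray_erosion(f, sf_map):
    """Spatially variant gray-level erosion: at each pixel, take the minimum of
    the signal over the support of its local structuring function, minus the
    corresponding weights.

    f      : 2-D float array
    sf_map : function (r, c) -> dict {(dr, dc): weight} giving the structuring
             function attached to pixel (r, c)
    """
    rows, cols = f.shape
    out = np.empty_like(f)
    for r in range(rows):
        for c in range(cols):
            vals = [f[r + dr, c + dc] - w
                    for (dr, dc), w in sf_map(r, c).items()
                    if 0 <= r + dr < rows and 0 <= c + dc < cols]
            out[r, c] = min(vals) if vals else f[r, c]
    return out

def border_adaptive_sf(r, c, shape=(64, 64)):
    """Flat structuring function whose window widens near the image border."""
    k = 1 + int(min(r, c, shape[0] - 1 - r, shape[1] - 1 - c) < 8)
    return {(dr, dc): 0.0 for dr in range(-k, k + 1) for dc in range(-k, k + 1)}

eroded = sv_gray_erosion(np.random.rand(64, 64), border_adaptive_sf)
```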
Fuhrman, Orly; Boroditsky, Lera
2010-11-01
Across cultures people construct spatial representations of time. However, the particular spatial layouts created to represent time may differ across cultures. This paper examines whether people automatically access and use culturally specific spatial representations when reasoning about time. In Experiment 1, we asked Hebrew and English speakers to arrange pictures depicting temporal sequences of natural events, and to point to the hypothesized location of events relative to a reference point. In both tasks, English speakers (who read left to right) arranged temporal sequences to progress from left to right, whereas Hebrew speakers (who read right to left) arranged them from right to left, replicating previous work. In Experiments 2 and 3, we asked the participants to make rapid temporal order judgments about pairs of pictures presented one after the other (i.e., to decide whether the second picture showed a conceptually earlier or later time-point of an event than the first picture). Participants made responses using two adjacent keyboard keys. English speakers were faster to make "earlier" judgments when the "earlier" response needed to be made with the left response key than with the right response key. Hebrew speakers showed exactly the reverse pattern. Asking participants to use a space-time mapping inconsistent with the one suggested by writing direction in their language created interference, suggesting that participants were automatically creating writing-direction consistent spatial representations in the course of their normal temporal reasoning. It appears that people automatically access culturally specific spatial representations when making temporal judgments even in nonlinguistic tasks. Copyright © 2010 Cognitive Science Society, Inc.
Human short-term spatial memory: precision predicts capacity.
Banta Lavenex, Pamela; Boujon, Valérie; Ndarugendamwo, Angélique; Lavenex, Pierre
2015-03-01
Here, we aimed to determine the capacity of human short-term memory for allocentric spatial information in a real-world setting. Young adults were tested on their ability to learn, on a trial-unique basis, and remember over a 1-min interval the location(s) of 1, 3, 5, or 7 illuminating pads, among 23 pads distributed in a 4m×4m arena surrounded by curtains on three sides. Participants had to walk to and touch the pads with their foot to illuminate the goal locations. In contrast to the predictions from classical slot models of working memory capacity limited to a fixed number of items, i.e., Miller's magical number 7 or Cowan's magical number 4, we found that the number of visited locations to find the goals was consistently about 1.6 times the number of goals, whereas the number of correct choices before erring and the number of errorless trials varied with memory load even when memory load was below the hypothetical memory capacity. In contrast to resource models of visual working memory, we found no evidence that memory resources were evenly distributed among unlimited numbers of items to be remembered. Instead, we found that memory for even one individual location was imprecise, and that memory performance for one location could be used to predict memory performance for multiple locations. Our findings are consistent with a theoretical model suggesting that the precision of the memory for individual locations might determine the capacity of human short-term memory for spatial information. Copyright © 2015 Elsevier Inc. All rights reserved.
Spatial Data Structures for Robotic Vehicle Route Planning
1988-12-01
goal will be realized in an intelligent Spatial Data Structure Development System (SDSDS) intended for use by Terrain Analysis applications ... from the user the details of representation and to permit the infrastructure itself to decide which representations will be most efficient or effective ... to intelligently predict performance of algorithmic sequences and thereby optimize the application (within the accuracy of the prediction models).
Balanced Cortical Microcircuitry for Spatial Working Memory Based on Corrective Feedback Control
2014-01-01
A hallmark of working memory is the ability to maintain graded representations of both the spatial location and amplitude of a memorized stimulus. Previous work has identified a neural correlate of spatial working memory in the persistent maintenance of spatially specific patterns of neural activity. How such activity is maintained by neocortical circuits remains unknown. Traditional models of working memory maintain analog representations of either the spatial location or the amplitude of a stimulus, but not both. Furthermore, although most previous models require local excitation and lateral inhibition to maintain spatially localized persistent activity stably, the substrate for lateral inhibitory feedback pathways is unclear. Here, we suggest an alternative model for spatial working memory that is capable of maintaining analog representations of both the spatial location and amplitude of a stimulus, and that does not rely on long-range feedback inhibition. The model consists of a functionally columnar network of recurrently connected excitatory and inhibitory neural populations. When excitation and inhibition are balanced in strength but offset in time, drifts in activity trigger spatially specific negative feedback that corrects memory decay. The resulting networks can temporally integrate inputs at any spatial location, are robust against many commonly considered perturbations in network parameters, and, when implemented in a spiking model, generate irregular neural firing characteristic of that observed experimentally during persistent activity. This work suggests balanced excitatory–inhibitory memory circuits implementing corrective negative feedback as a substrate for spatial working memory. PMID:24828633
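To make the corrective-feedback idea concrete, the toy linear rate model below stores the amplitude of a brief input in a single functional column: recurrent excitation and inhibition are matched in strength but the excitatory synapse is slower, so drift in the stored activity produces a lagging net feedback that opposes the drift. The equations, parameter values, and single-column simplification are illustrative assumptions, not the published model; a spatial version would replicate such columns across locations.

```python
# Toy single-column linear rate model: excitation and inhibition balanced in
# strength but offset in time (slow recurrent excitation, fast inhibition).
tau_r, tau_s = 10.0, 100.0            # rate and slow excitatory-synapse time constants (ms)
W_ee, W_ei, W_ie = 60.0, 30.0, 2.0    # balanced in strength: W_ee == W_ei * W_ie

dt, T = 0.1, 3000.0                   # time step and simulated delay period (ms)
r_e = r_i = s_slow = 0.0
trace = []

for k in range(int(T / dt)):
    stim = 1.0 if k * dt < 100.0 else 0.0                      # brief input to be stored
    ds = (-s_slow + r_e) / tau_s                               # excitation lags (slow synapse)
    dre = (-r_e + W_ee * s_slow - W_ei * r_i + stim) / tau_r   # balanced recurrent drive
    dri = (-r_i + W_ie * r_e) / tau_r                          # inhibition tracks r_e quickly
    s_slow += dt * ds
    r_e += dt * dre
    r_i += dt * dri
    trace.append(r_e)

# Because drift in r_e is opposed by the lagging balanced feedback, the stored
# activity decays over seconds rather than over the 10 ms leak time constant.
print(round(trace[2000], 4), round(trace[-1], 4))
```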
Aoki, Hirofumi; Ohno, Ryuzo; Yamaguchi, Takao
2005-01-01
In a virtual weightless environment, subjects' orientation skills were studied to examine what kinds of cognitive errors people make when they move through the interior space of virtual space stations and what kind of visual information effectively decreases those errors. Subjects wearing a head-mounted display moved from one end to the other of space station-like routes constructed of rectangular and cubical modules, and performed Pointing and Modeling tasks. In Experiment 1, configurations of the routes were varied with such variables as the number of bends, the number of embedding planes, and the number of planes with respect to the body posture. The results indicated that spatial orientation ability was related to these variables and that orientational errors were explained by two causes. One was that the place, the direction, and the sequence of turns were recalled incorrectly. The other was that subjects did not recognize the rotation of the frame of reference, especially when they turned in the pitch direction rather than in yaw. In Experiment 2, the effect of interior design was examined by testing three design settings. Wall colors that indicated the allocentric frame of reference and different interior designs for vertical and horizontal modules were effective; however, there was a limit to their effectiveness in complicated configurations. © 2005 Published by Elsevier Ltd.
Dissociation of spatial memory systems in Williams syndrome.
Bostelmann, Mathilde; Fragnière, Emilie; Costanzo, Floriana; Di Vara, Silvia; Menghini, Deny; Vicari, Stefano; Lavenex, Pierre; Lavenex, Pamela Banta
2017-11-01
Williams syndrome (WS), a genetic deletion syndrome, is characterized by severe visuospatial deficits affecting performance on both tabletop spatial tasks and on tasks which assess orientation and navigation. Nevertheless, previous studies of WS spatial capacities have ignored the fact that two different spatial memory systems are believed to contribute parallel spatial representations supporting navigation. The place learning system depends on the hippocampal formation and creates flexible relational representations of the environment, also known as cognitive maps. The spatial response learning system depends on the striatum and creates fixed stimulus-response representations, also known as habits. Indeed, no study assessing WS spatial competence has used tasks which selectively target these two spatial memory systems. Here, we report that individuals with WS exhibit a dissociation in their spatial abilities subserved by these two memory systems. As compared to typically developing (TD) children in the same mental age range, place learning performance was impaired in individuals with WS. In contrast, their spatial response learning performance was facilitated. Our findings in individuals with WS and TD children suggest that place learning and response learning interact competitively to control the behavioral strategies normally used to support human spatial navigation. Our findings further suggest that the neural pathways supporting place learning may be affected by the genetic deletion that characterizes WS, whereas those supporting response learning may be relatively preserved. The dissociation observed between these two spatial memory systems provides a coherent theoretical framework to characterize the spatial abilities of individuals with WS, and may lead to the development of new learning strategies based on their facilitated response learning abilities. © 2017 Wiley Periodicals, Inc.
Fini, C; Brass, M; Committeri, G
2015-01-01
Space perception depends on our motion potentialities and our intended actions are affected by space perception. Research on peripersonal space (the space in reaching distance) shows that we perceive an object as being closer when we (Witt, Proffitt, & Epstein, 2005; Witt & Proffitt, 2008) or another actor (Costantini, Ambrosini, Sinigaglia, & Gallese, 2011; Bloesch, Davoli, Roth, Brockmole, & Abrams, 2012) can interact with it. Similarly, an object only triggers specific movements when it is placed in our peripersonal space (Costantini, Ambrosini, Tieri, Sinigaglia, & Committeri, 2010) or in the other's peripersonal space (Costantini, Committeri, & Sinigaglia, 2011; Cardellicchio, Sinigaglia, & Costantini, 2013). Moreover, also the extrapersonal space (the space outside reaching distance) seems to be perceived in relation to our movement capabilities: the more effort it takes to cover a distance, the greater we perceive the distance to be (Proffitt, Stefanucci, Banton, & Epstein, 2003; Sugovic & Witt, 2013). However, not much is known about the influence of the other's movement potentialities on our extrapersonal space perception. Three experiments were carried out investigating the categorization of distance in extrapersonal space using human or non-human allocentric reference frames (RF). Subjects were asked to judge the distance ("Near" or "Far") of a target object (a beach umbrella) placed at progressively increasing or decreasing distances until a change from near to far or vice versa was reported. In the first experiment we found a significant "Near space extension" when the allocentric RF was a human virtual agent instead of a static, inanimate object. In the second experiment we tested whether the "Near space extension" depended on the anatomical structure of the RF or its movement potentialities by adding a wooden dummy. The "Near space extension" was only observed for the human agent but not for the dummy. Finally, to rule out the possibility that the effect was simply due to a line-of-sight mechanism (visual perspective taking) we compared the human agent free to move with the same agent tied to a pole with a rope, thus reducing movement potentialities while maintaining equal visual accessibility. The "Near space extension" disappeared when this manipulation was introduced, showing that movement potentialities are the relevant factor for such an effect. Our results demonstrate for the first time that during allocentric distance judgments within extrapersonal space, we implicitly process the movement potentialities of the RF. A target object is perceived as being closer when the allocentric RF is a human with available movement potentialities, suggesting a mechanism of social scaling of extrapersonal space processing. Copyright © 2014. Published by Elsevier B.V.
Spatial representations elicit dual-coding effects in mental imagery.
Verges, Michelle; Duffy, Sean
2009-08-01
Spatial aspects of words are associated with their canonical locations in the real world. Yet little research has tested whether spatial associations denoted in language comprehension generalize to their corresponding images. We directly tested the spatial aspects of mental imagery in picture and word processing (Experiment 1). We also tested whether spatial representations of motion words produce similar perceptual-interference effects as demonstrated by object words (Experiment 2). Findings revealed that words denoting an upward spatial location produced slower responses to targets appearing at the top of the display, whereas words denoting a downward spatial location produced slower responses to targets appearing at the bottom of the display. Perceptual-interference effects did not obtain for pictures or for words lacking a spatial relation. These findings provide greater empirical support for the perceptual-symbols system theory (Barsalou, 1999, 2008). Copyright © 2009 Cognitive Science Society, Inc.
Literacy shapes thought: the case of event representation in different cultures
Dobel, Christian; Enriquez-Geppert, Stefanie; Zwitserlood, Pienie; Bölte, Jens
2013-01-01
There has been a lively debate whether conceptual representations of actions or scenes follow a left-to-right spatial transient when participants depict such events or scenes. It was even suggested that conceptualizing the agent on the left side represents a universal. We review the current literature with an emphasis on event representation and on cross-cultural studies. While there is quite some evidence for spatial bias for representations of events and scenes in diverse cultures, their extent and direction depend on task demands, one's native language, and importantly, on reading and writing direction. Whether transients arise only in subject-verb-object languages, due to their linear sentential position of event participants, is still an open issue. We investigated a group of illiterate speakers of Yucatec Maya, a language with a predominant verb-object-subject structure. They were compared to illiterate native speakers of Spanish. Neither group displayed a spatial transient. Given the current literature, we argue that learning to read and write has a strong impact on representations of actions and scenes. Thus, while it is still under debate whether language shapes thought, there is firm evidence that literacy does. PMID:24795665
Asymmetric coding of categorical spatial relations in both language and vision.
Roth, J C; Franconeri, S L
2012-01-01
Describing certain types of spatial relationships between a pair of objects requires that the objects are assigned different "roles" in the relation, e.g., "A is above B" is different than "B is above A." This asymmetric representation places one object in the "target" or "figure" role and the other in the "reference" or "ground" role. Here we provide evidence that this asymmetry may be present not just in spatial language, but also in perceptual representations. More specifically, we describe a model of visual spatial relationship judgment where the designation of the target object within such a spatial relationship is guided by the location of the "spotlight" of attention. To demonstrate the existence of this perceptual asymmetry, we cued attention to one object within a pair by briefly previewing it, and showed that participants were faster to verify the depicted relation when that object was the linguistic target. Experiment 1 demonstrated this effect for left-right relations, and Experiment 2 for above-below relations. These results join several other types of demonstrations in suggesting that perceptual representations of some spatial relations may be asymmetrically coded, and further suggest that the location of selective attention may serve as the mechanism that guides this asymmetry.
Spatial representations in blind people: the role of strategies and mobility skills.
Schmidt, Susanna; Tinti, Carla; Fantino, Micaela; Mammarella, Irene C; Cornoldi, Cesare
2013-01-01
The role of vision in the construction of spatial representations has been the object of numerous studies and heated debate. The core question of whether visual experience is necessary to form spatial representations has found different, often contradictory answers. The present paper examines mental images generated from verbal descriptions of spatial environments. Previous evidence had shown that blind individuals have difficulty remembering information about spatial environments. By testing a group of congenitally blind people, we replicated this result and found that it is also present when the overall mental model of the environment is assessed. This was not always the case, however; the difficulty appeared to correlate with some blind participants' lower use of a mental imagery strategy and preference for a verbal rehearsal strategy, which was adopted particularly by blind people with more limited mobility skills. The more independent blind people who used a mental imagery strategy performed as well as sighted participants, suggesting that the difficulty blind people may have in processing spatial descriptions is not due to the absence of vision per se, but could be the consequence of both their use of less efficient verbal strategies and their poor mobility skills. Copyright © 2012 Elsevier B.V. All rights reserved.
2017-01-01
Selective visual attention enables organisms to enhance the representation of behaviorally relevant stimuli by altering the encoding properties of single receptive fields (RFs). Yet we know little about how the attentional modulations of single RFs contribute to the encoding of an entire visual scene. Addressing this issue requires (1) measuring a group of RFs that tile a continuous portion of visual space, (2) constructing a population-level measurement of spatial representations based on these RFs, and (3) linking how different types of RF attentional modulations change the population-level representation. To accomplish these aims, we used fMRI to characterize the responses of thousands of voxels in retinotopically organized human cortex. First, we found that the response modulations of voxel RFs (vRFs) depend on the spatial relationship between the RF center and the visual location of the attended target. Second, we used two analyses to assess the spatial encoding quality of a population of voxels. We found that attention increased fine spatial discriminability and representational fidelity near the attended target. Third, we linked these findings by manipulating the observed vRF attentional modulations and recomputing our measures of the fidelity of population codes. Surprisingly, we discovered that attentional enhancements of population-level representations largely depend on position shifts of vRFs, rather than changes in size or gain. Our data suggest that position shifts of single RFs are a principal mechanism by which attention enhances population-level representations in visual cortex. SIGNIFICANCE STATEMENT Although changes in the gain and size of RFs have dominated our view of how attention modulates visual information codes, such hypotheses have largely relied on the extrapolation of single-cell responses to population responses. Here we use fMRI to relate changes in single voxel receptive fields (vRFs) to changes in population-level representations. We find that vRF position shifts contribute more to population-level enhancements of visual information than changes in vRF size or gain. This finding suggests that position shifts are a principal mechanism by which spatial attention enhances population codes for relevant visual information. This poses challenges for labeled line theories of information processing, suggesting that downstream regions likely rely on distributed inputs rather than single neuron-to-neuron mappings. PMID:28242794
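To make the population-level logic concrete, the following is a minimal toy simulation, not the authors' analysis code: the one-dimensional Gaussian voxel receptive fields, the noise model, and all parameter values are assumptions, and Fisher information is used here as a generic stand-in for a population discriminability measure, comparing attention-like position shifts against a uniform gain change.

```python
# Toy simulation (not the authors' analysis code): 1-D Gaussian voxel receptive
# fields; spatial discriminability summarized by Fisher information,
# FI(s) = sum_i f_i'(s)^2 / noise variance. All parameter values are assumptions.
import numpy as np

rng = np.random.default_rng(0)
space = np.linspace(-10, 10, 401)            # stimulus positions (deg)
centers = rng.uniform(-10, 10, 200)          # baseline vRF centers
sigma, noise_sd, target = 2.0, 0.2, 0.0      # vRF width, response noise, attended location

def responses(centers, gain=1.0):
    # rows: voxels, columns: stimulus positions
    return gain * np.exp(-(space[None, :] - centers[:, None]) ** 2 / (2 * sigma ** 2))

def fisher_info(R):
    dR = np.gradient(R, space, axis=1)       # numerical derivative of each tuning curve
    return (dR ** 2).sum(axis=0) / noise_sd ** 2

baseline = fisher_info(responses(centers))
shifted = fisher_info(responses(centers - 0.3 * (centers - target)))  # centers pulled toward target
gain_up = fisher_info(responses(centers, gain=1.3))                   # uniform gain increase

near = np.abs(space - target) < 2.0
print("mean discriminability near target: baseline %.0f, shifts %.0f, gain %.0f"
      % (baseline[near].mean(), shifted[near].mean(), gain_up[near].mean()))
```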
The link between mental rotation ability and basic numerical representations
Thompson, Jacqueline M.; Nuerk, Hans-Christoph; Moeller, Korbinian; Cohen Kadosh, Roi
2013-01-01
Mental rotation and number representation have both been studied widely, but although mental rotation has been linked to higher-level mathematical skills, to date it has not been shown whether mental rotation ability is linked to the most basic mental representation and processing of numbers. To investigate the possible connection between mental rotation abilities and numerical representation, 43 participants completed four tasks: 1) a standard pen-and-paper mental rotation task; 2) a multi-digit number magnitude comparison task assessing the compatibility effect, which indicates separate processing of decade and unit digits; 3) a number-line mapping task, which measures precision of number magnitude representation; and 4) a random number generation task, which yields measures both of executive control and of spatial number representations. Results show that mental rotation ability correlated significantly with both size of the compatibility effect and with number mapping accuracy, but not with any measures from the random number generation task. Together, these results suggest that higher mental rotation abilities are linked to more developed number representation, and also provide further evidence for the connection between spatial and numerical abilities. PMID:23933002
The relation between body semantics and spatial body representations.
van Elk, Michiel; Blanke, Olaf
2011-11-01
The present study addressed the relation between body semantics (i.e. semantic knowledge about the human body) and spatial body representations, by presenting participants with word pairs, one below the other, referring to body parts. The spatial position of the word pairs could be congruent (e.g. EYE / MOUTH) or incongruent (MOUTH / EYE) with respect to the spatial position of the words' referents. In addition, the spatial distance between the words' referents was varied, resulting in word pairs referring to body parts that are close (e.g. EYE / MOUTH) or far in space (e.g. EYE / FOOT). A spatial congruency effect was observed when subjects made an iconicity judgment (Experiments 2 and 3) but not when making a semantic relatedness judgment (Experiment 1). In addition, when making a semantic relatedness judgment (Experiment 1) reaction times increased with increased distance between the body parts but when making an iconicity judgment (Experiments 2 and 3) reaction times decreased with increased distance. These findings suggest that the processing of body-semantics results in the activation of a detailed visuo-spatial body representation that is modulated by the specific task requirements. We discuss these new data with respect to theories of embodied cognition and body semantics. Copyright © 2011 Elsevier B.V. All rights reserved.
The vestibular system: a spatial reference for bodily self-consciousness
Pfeiffer, Christian; Serino, Andrea; Blanke, Olaf
2014-01-01
Self-consciousness is the remarkable human experience of being a subject: the “I”. Self-consciousness is typically bound to a body, and particularly to the spatial dimensions of the body, as well as to its location and displacement in the gravitational field. Because the vestibular system encodes head position and movement in three-dimensional space, vestibular cortical processing likely contributes to spatial aspects of bodily self-consciousness. We review here recent data showing vestibular effects on first-person perspective (the feeling from where “I” experience the world) and self-location (the feeling where “I” am located in space). We compare these findings to data showing vestibular effects on mental spatial transformation, self-motion perception, and body representation showing vestibular contributions to various spatial representations of the body with respect to the external world. Finally, we discuss the role for four posterior brain regions that process vestibular and other multisensory signals to encode spatial aspects of bodily self-consciousness: temporoparietal junction, parietoinsular vestibular cortex, ventral intraparietal region, and medial superior temporal region. We propose that vestibular processing in these cortical regions is critical in linking multisensory signals from the body (personal and peripersonal space) with external (extrapersonal) space. Therefore, the vestibular system plays a critical role for neural representations of spatial aspects of bodily self-consciousness. PMID:24860446
Moscovitch, Morris; Rosenbaum, R Shayna; Gilboa, Asaf; Addis, Donna Rose; Westmacott, Robyn; Grady, Cheryl; McAndrews, Mary Pat; Levine, Brian; Black, Sandra; Winocur, Gordon; Nadel, Lynn
2005-01-01
We review lesion and neuroimaging evidence on the role of the hippocampus, and other structures, in retention and retrieval of recent and remote memories. We examine episodic, semantic and spatial memory, and show that important distinctions exist among different types of these memories and the structures that mediate them. We argue that retention and retrieval of detailed, vivid autobiographical memories depend on the hippocampal system no matter how long ago they were acquired. Semantic memories, on the other hand, benefit from hippocampal contribution for some time before they can be retrieved independently of the hippocampus. Even semantic memories, however, can have episodic elements associated with them that continue to depend on the hippocampus. Likewise, we distinguish between experientially detailed spatial memories (akin to episodic memory) and more schematic memories (akin to semantic memory) that are sufficient for navigation but not for re-experiencing the environment in which they were acquired. Like their episodic and semantic counterparts, the former type of spatial memory is dependent on the hippocampus no matter how long ago it was acquired, whereas the latter can survive independently of the hippocampus and is represented in extra-hippocampal structures. In short, the evidence reviewed suggests strongly that the function of the hippocampus (and possibly that of related limbic structures) is to help encode, retain, and retrieve experiences, no matter how long ago the events comprising the experience occurred, and no matter whether the memories are episodic or spatial. We conclude that the evidence favours a multiple trace theory (MTT) of memory over two other models: (1) traditional consolidation models which posit that the hippocampus is a time-limited memory structure for all forms of memory; and (2) versions of cognitive map theory which posit that the hippocampus is needed for representing all forms of allocentric space in memory. PMID:16011544
Opfer, John E; Thompson, Clarissa A; Furlong, Ellen E
2010-09-01
Numeric magnitudes often bias adults' spatial performance. Partly because the direction of this bias (left-to-right versus right-to-left) is culture-specific, it has been assumed that the orientation of spatial-numeric associations is a late development, tied to reading practice or schooling. Challenging this assumption, we found that preschoolers expected numbers to be ordered from left-to-right when they searched for objects in numbered containers, when they counted, and (to a lesser extent) when they added and subtracted. Further, preschoolers who lacked these biases demonstrated more immature, logarithmic representations of numeric value than preschoolers who exhibited the directional bias, suggesting that spatial-numeric associations aid magnitude representations for symbols denoting increasingly large numbers.
Body-Specific Representations of Spatial Location
ERIC Educational Resources Information Center
Brunye, Tad T.; Gardony, Aaron; Mahoney, Caroline R.; Taylor, Holly A.
2012-01-01
The body specificity hypothesis (Casasanto, 2009) posits that the way in which people interact with the world affects their mental representation of information. For instance, right- versus left-handedness affects the mental representation of affective valence, with right-handers categorically associating good with rightward areas and bad with…
From Geocentrism to Allocentrism: Teaching the Phases of the Moon in a Digital Full-Dome Planetarium
ERIC Educational Resources Information Center
Chastenay, Pierre
2016-01-01
An increasing number of planetariums worldwide are turning digital, using ultra-fast computers, powerful graphic cards, and high-resolution video projectors to create highly realistic astronomical imagery in real time. This modern technology makes it so that the audience can observe astronomical phenomena from a geocentric as well as an…
Grasping the Muller-Lyer Illusion: The Contributions of Vision for Perception in Action
ERIC Educational Resources Information Center
van Doorn, Hemke; van der Kamp, John; Savelsbergh, Geert J. P.
2007-01-01
The present study examines the contributions of vision for perception processes in action. To this end, the influence of allocentric information on different action components (i.e., the selection of an appropriate mode of action, the pre-planning and online control of movement kinematics) is assessed. Participants (n = 10) were presented with a…
Belopolsky, Artem V; Theeuwes, Jan
2009-10-01
The present study systematically examined the role of attention in maintenance of spatial representations in working memory as proposed by the attention-based rehearsal hypothesis [Awh, E., Jonides, J., & Reuter-Lorenz, P. A. (1998). Rehearsal in spatial working memory. Journal of Experimental Psychology--Human Perception and Performance, 24(3), 780-790]. Three main issues were examined. First, Experiments 1-3 demonstrated that inhibition and not facilitation of visual processing is often observed at the memorized location during the retention interval. This inhibition was caused by keeping a location in memory and not by the exogenous nature of the memory cue. Second, Experiment 4 showed that inhibition of the memorized location does not lead to any significant impairment in memory accuracy. Finally, Experiment 5 connected current results to the previous findings and demonstrated facilitation of processing at the memorized location. Importantly, facilitation of processing did not lead to more accurate memory performance. The present results challenge the functional role of attention in maintenance of spatial working memory representations.
Biologically-inspired robust and adaptive multi-sensor fusion and active control
NASA Astrophysics Data System (ADS)
Khosla, Deepak; Dow, Paul A.; Huber, David J.
2009-04-01
In this paper, we describe a method and system for robust and efficient goal-oriented active control of a machine (e.g., robot) based on processing, hierarchical spatial understanding, representation and memory of multimodal sensory inputs. This work assumes that a high-level plan or goal is known a priori or is provided by an operator interface, which translates into an overall perceptual processing strategy for the machine. Its analogy to the human brain is the download of plans and decisions from the pre-frontal cortex into various perceptual working memories as a perceptual plan that then guides the sensory data collection and processing. For example, a goal might be to look for specific colored objects in a scene while also looking for specific sound sources. This paper combines three key ideas and methods into a single closed-loop active control system. (1) Use a high-level plan or goal to determine and prioritize spatial locations or waypoints (targets) in multimodal sensory space; (2) collect/store information about these spatial locations at the appropriate hierarchy and representation in a spatial working memory. This includes invariant learning of these spatial representations and how to convert between them; and (3) execute actions based on ordered retrieval of these spatial locations from hierarchical spatial working memory and using the "right" level of representation that can efficiently translate into motor actions. In its most specific form, the active control is described for a vision system (such as a pan-tilt-zoom camera system mounted on a robotic head and neck unit) which finds and then fixates on high saliency visual objects. We also describe the approach where the goal is to turn towards and sequentially foveate on salient multimodal cues that include both visual and auditory inputs.
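As a rough illustration of the closed-loop idea, here is a hedged Python sketch of the three steps (goal-weighted prioritization of multimodal waypoints, storage in a spatial working memory, ordered retrieval into motor commands); the data structures, field names, goal weights, and detections are invented for this example and are not the system described in the paper.

```python
# Rough sketch (illustrative assumptions only, not the system in the paper):
# goal-weighted prioritization of multimodal waypoints, storage in a simple
# spatial working memory, and ordered retrieval into (mock) motor commands.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Waypoint:
    priority: float                              # heapq pops the smallest value first
    location: tuple = field(compare=False)       # (pan_deg, tilt_deg)
    modality: str = field(compare=False)         # "visual" or "auditory"

def prioritize(detections, goal_weights):
    """Turn raw multimodal detections into a goal-weighted waypoint queue."""
    queue = []
    for location, modality, salience in detections:
        # negate so that the most goal-relevant waypoint is retrieved first
        heapq.heappush(queue, Waypoint(-salience * goal_weights[modality], location, modality))
    return queue

def fixate(location):
    print("pan/tilt command ->", location)       # stand-in for a real motor interface

# hypothetical goal: look for colored objects while also listening for sound sources
goal_weights = {"visual": 1.0, "auditory": 0.6}
detections = [((10, 5), "visual", 0.9), ((-30, 0), "auditory", 0.8), ((2, -4), "visual", 0.4)]

working_memory = {}                              # location -> what was observed there
queue = prioritize(detections, goal_weights)
while queue:
    wp = heapq.heappop(queue)
    fixate(wp.location)
    working_memory[wp.location] = {"modality": wp.modality, "visited": True}
```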
Spatial Patterns in Alternative States and Thresholds: A Missing Link for Management of Landscapes?
USDA-ARS?s Scientific Manuscript database
The detection of threshold dynamics (and other dynamics of interest) would benefit from explicit representations of spatial patterns of disturbance, spatial dependence in responses to disturbance, and the spatial structure of feedbacks in the design of monitoring and management strategies. Spatially...
Balanced cortical microcircuitry for spatial working memory based on corrective feedback control.
Lim, Sukbin; Goldman, Mark S
2014-05-14
A hallmark of working memory is the ability to maintain graded representations of both the spatial location and amplitude of a memorized stimulus. Previous work has identified a neural correlate of spatial working memory in the persistent maintenance of spatially specific patterns of neural activity. How such activity is maintained by neocortical circuits remains unknown. Traditional models of working memory maintain analog representations of either the spatial location or the amplitude of a stimulus, but not both. Furthermore, although most previous models require local excitation and lateral inhibition to maintain spatially localized persistent activity stably, the substrate for lateral inhibitory feedback pathways is unclear. Here, we suggest an alternative model for spatial working memory that is capable of maintaining analog representations of both the spatial location and amplitude of a stimulus, and that does not rely on long-range feedback inhibition. The model consists of a functionally columnar network of recurrently connected excitatory and inhibitory neural populations. When excitation and inhibition are balanced in strength but offset in time, drifts in activity trigger spatially specific negative feedback that corrects memory decay. The resulting networks can temporally integrate inputs at any spatial location, are robust against many commonly considered perturbations in network parameters, and, when implemented in a spiking model, generate irregular neural firing characteristic of that observed experimentally during persistent activity. This work suggests balanced excitatory-inhibitory memory circuits implementing corrective negative feedback as a substrate for spatial working memory. Copyright © 2014 the authors 0270-6474/14/346790-17$15.00/0.
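A minimal rate-model sketch of the core mechanism can be written in a few lines: excitatory and inhibitory feedback of equal strength are routed through synaptic filters with different time constants, so their difference approximates a negative-derivative (corrective) feedback term that slows memory decay. The single-unit simplification and all parameter values below are assumptions for illustration, not the published spiking network.

```python
# Toy single-column rate model (illustrative assumptions only): balanced E and I
# feedback of strength w, filtered with different synaptic time constants, whose
# difference approximates corrective negative-derivative feedback.
import numpy as np

dt, T = 1e-4, 3.0                     # 0.1 ms steps, 3 s of simulated time
t = np.arange(0.0, T, dt)
tau_m = 10e-3                         # intrinsic rate time constant (10 ms)
tau_e, tau_i = 100e-3, 10e-3          # slow excitatory vs fast inhibitory filtering
stim = np.where((t > 0.2) & (t < 0.4), 1.0, 0.0)   # transient stimulus to remember

def simulate(w):
    r = s_e = s_i = 0.0
    trace = np.empty_like(t)
    for k, inp in enumerate(stim):
        feedback = w * (s_e - s_i)                 # balanced but temporally offset E-I
        r += dt / tau_m * (-r + feedback + inp)
        s_e += dt / tau_e * (-s_e + r)             # slow excitatory synaptic variable
        s_i += dt / tau_i * (-s_i + r)             # fast inhibitory synaptic variable
        trace[k] = r
    return trace

with_fb, without_fb = simulate(50.0), simulate(0.0)
retained = lambda x: x[t > 2.4][0] / x[t > 0.41][0]   # activity kept ~2 s after offset
print("fraction retained 2 s after stimulus offset: with feedback %.2f, without %.1e"
      % (retained(with_fb), retained(without_fb)))
```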
Negrón-Oyarzo, Ignacio; Espinosa, Nelson; Aguilar, Marcelo; Fuenzalida, Marco; Aboitiz, Francisco; Fuentealba, Pablo
2018-06-18
Learning the location of relevant places in the environment is crucial for survival. Such capacity is supported by a distributed network comprising the prefrontal cortex and hippocampus, yet it is not fully understood how these structures cooperate during spatial reference memory formation. Hence, we examined neural activity in the prefrontal-hippocampal circuit in mice during acquisition of spatial reference memory. We found that interregional oscillatory coupling increased with learning, specifically in the slow-gamma frequency (20 to 40 Hz) band during spatial navigation. In addition, mice used both spatial and nonspatial strategies to navigate and solve the task, yet prefrontal neuronal spiking and oscillatory phase coupling were selectively enhanced in the spatial navigation strategy. Lastly, a representation of the behavioral goal emerged in prefrontal spiking patterns exclusively in the spatial navigation strategy. These results suggest that reference memory formation is supported by enhanced cortical connectivity and evolving prefrontal spiking representations of behavioral goals.
The role of memory representation in the vigilance decrement.
Caggiano, Daniel M; Parasuraman, Raja
2004-10-01
Working memory load is critically important for the overall level of performance on vigilance tasks. However, its role in a key aspect of vigilance-sensitivity decrement over time-is unclear. We used a dual-task procedure in which either a spatial or a nonspatial working memory task was performed simultaneously with a spatial vigilance task for 20 min. Sensitivity in the vigilance task declined over time when the concurrent task involved spatial working memory. In contrast, there was no sensitivity decrement with a nonspatial working memory task. The results provide the first evidence of a specific role for working memory representation in vigilance decrement. The findings are also consistent with a multiple resource theory in which separate resources for memory representation and cognitive control operations are differentially susceptible to depletion over time, depending on the demands of the task at hand.
Think Spatial: The Representation in Mental Rotation Is Nonvisual
ERIC Educational Resources Information Center
Liesefeld, Heinrich R.; Zimmer, Hubert D.
2013-01-01
For mental rotation, introspection, theories, and interpretations of experimental results imply a certain type of mental representation, namely, visual mental images. Characteristics of the rotated representation can be examined by measuring the influence of stimulus characteristics on rotational speed. If the amount of a given type of information…
Higgins, Nathan C; McLaughlin, Susan A; Rinne, Teemu; Stecker, G Christopher
2017-09-05
Few auditory functions are as important or as universal as the capacity for auditory spatial awareness (e.g., sound localization). That ability relies on sensitivity to acoustical cues-particularly interaural time and level differences (ITD and ILD)-that correlate with sound-source locations. Under nonspatial listening conditions, cortical sensitivity to ITD and ILD takes the form of broad contralaterally dominated response functions. It is unknown, however, whether that sensitivity reflects representations of the specific physical cues or a higher-order representation of auditory space (i.e., integrated cue processing), nor is it known whether responses to spatial cues are modulated by active spatial listening. To investigate, sensitivity to parametrically varied ITD or ILD cues was measured using fMRI during spatial and nonspatial listening tasks. Task type varied across blocks where targets were presented in one of three dimensions: auditory location, pitch, or visual brightness. Task effects were localized primarily to lateral posterior superior temporal gyrus (pSTG) and modulated binaural-cue response functions differently in the two hemispheres. Active spatial listening (location tasks) enhanced both contralateral and ipsilateral responses in the right hemisphere but maintained or enhanced contralateral dominance in the left hemisphere. Two observations suggest integrated processing of ITD and ILD. First, overlapping regions in medial pSTG exhibited significant sensitivity to both cues. Second, successful classification of multivoxel patterns was observed for both cue types and-critically-for cross-cue classification. Together, these results suggest a higher-order representation of auditory space in the human auditory cortex that at least partly integrates the specific underlying cues.
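The cross-cue classification logic can be illustrated with a small synthetic sketch (not the authors' pipeline; the fake data, the decoder choice, and the built-in shared "laterality" axis are all assumptions): a decoder trained on ITD-defined patterns is tested on ILD-defined patterns, and above-chance transfer is the signature of an integrated spatial code.

```python
# Synthetic illustration of cross-cue MVPA (not the authors' pipeline): the
# shared "laterality" axis built into the fake data is what cross-cue transfer
# would detect in real voxel patterns.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_voxels = 80, 50
labels = rng.integers(0, 2, n_trials)          # 0 = left, 1 = right sound location
shared_axis = rng.normal(size=n_voxels)        # hypothetical cue-invariant spatial code

def patterns(cue_axis_scale):
    cue_axis = cue_axis_scale * rng.normal(size=n_voxels)     # cue-specific component
    signal = np.outer(2 * labels - 1, shared_axis + cue_axis)
    return signal + rng.normal(scale=4.0, size=(n_trials, n_voxels))

itd_patterns, ild_patterns = patterns(0.5), patterns(0.5)

clf = LogisticRegression(max_iter=1000)
within = cross_val_score(clf, itd_patterns, labels, cv=5).mean()
cross = clf.fit(itd_patterns, labels).score(ild_patterns, labels)   # train ITD, test ILD
print(f"within-cue accuracy {within:.2f}, cross-cue accuracy {cross:.2f} (chance 0.50)")
```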
Meneghetti, Chiara; Muffato, Veronica; Varotto, Diego; De Beni, Rossana
2017-03-01
Previous studies found mental representations of route descriptions north-up oriented when egocentric experience (given by the protagonist's initial view) was congruent with the global reference system. This study examines: (a) the development and maintenance of representations derived from descriptions when the egocentric and global reference systems are congruent or incongruent; and (b) how spatial abilities modulate these representations. Sixty participants (in two groups of 30) heard route descriptions of a protagonist's moves starting from the bottom of a layout and headed mainly northwards (SN description) in one group, and headed south from the top (NS description, the egocentric view facing in the opposite direction to the canonical north) in the other. Description recall was tested with map drawing (after hearing the description a first and second time; i.e. Time 1 and 2) and South-North (SN) or North-South (NS) pointing tasks; and spatial objective tasks were administered. The results showed that: (a) the drawings were more rotated in NS than in SN descriptions, and performed better at Time 2 than at Time 1 for both types of description; SN pointing was more accurate than NS pointing for the SN description, while SN and NS pointing accuracy did not differ for the NS description; (b) spatial (rotation) abilities were related to recall accuracy for both types of description, but were more so for the NS ones. Overall, our results showed that the way in which spatial information is conveyed (with/without congruence between the egocentric and global reference systems) and spatial abilities influence the development and maintenance of mental representations.
ERIC Educational Resources Information Center
Fendler, Lynn; Smeyers, Paul
2015-01-01
Debates in science seem to depend on referential language-games, but in other senses they do not. This article addresses non-representational theory. It is a branch of newer approaches to cultural geography that strive to get a handle on spatial relationships not by representing them, but rather by presenting them. In this case, present connotes…
Think spatial: the representation in mental rotation is nonvisual.
Liesefeld, Heinrich R; Zimmer, Hubert D
2013-01-01
For mental rotation, introspection, theories, and interpretations of experimental results imply a certain type of mental representation, namely, visual mental images. Characteristics of the rotated representation can be examined by measuring the influence of stimulus characteristics on rotational speed. If the amount of a given type of information influences rotational speed, one can infer that it was contained in the rotated representation. In Experiment 1, rotational speed of university students (10 men, 11 women) was found to be influenced exclusively by the amount of represented orientation-dependent spatial-relational information but not by orientation-independent spatial-relational information, visual complexity, or the number of stimulus parts. As information in mental-rotation tasks is initially presented visually, this finding implies that at some point during each trial, orientation-dependent information is extracted from visual information. Searching for more direct evidence for this extraction, we recorded the EEG of another sample of university students (12 men, 12 women) during mental rotation of the same stimuli. In an early time window, the observed working memory load-dependent slow potentials were sensitive to the stimuli's visual complexity. Later, in contrast, slow potentials were sensitive to the amount of orientation-dependent information only. We conclude that only orientation-dependent information is contained in the rotated representation. (PsycINFO Database Record (c) 2013 APA, all rights reserved).
Deployment of spatial attention towards locations in memory representations. An EEG study.
Leszczyński, Marcin; Wykowska, Agnieszka; Perez-Osorio, Jairo; Müller, Hermann J
2013-01-01
Recalling information from visual short-term memory (VSTM) involves the same neural mechanisms as attending to an actually perceived scene. In particular, retrieval from VSTM has been associated with orienting of visual attention towards a location within a spatially-organized memory representation. However, an open question concerns whether spatial attention is also recruited during VSTM retrieval even when performing the task does not require access to spatial coordinates of items in the memorized scene. The present study combined a visual search task with a modified, delayed central probe protocol, together with EEG analysis, to answer this question. We found a temporal contralateral negativity (TCN) elicited by a centrally presented go-signal which was spatially uninformative and featurally unrelated to the search target and informed participants only about a response key that they had to press to indicate a prepared target-present vs. -absent decision. This lateralization during VSTM retrieval (TCN) provides strong evidence of a shift of attention towards the target location in the memory representation, which occurred despite the fact that the present task required no spatial (or featural) information from the search to be encoded, maintained, and retrieved to produce the correct response and that the go-signal did not itself specify any information relating to the location and defining feature of the target.
Extended Maptree: a Representation of Fine-Grained Topology and Spatial Hierarchy of Bim
NASA Astrophysics Data System (ADS)
Wu, Y.; Shang, J.; Hu, X.; Zhou, Z.
2017-09-01
Spatial queries play significant roles in exchanging Building Information Modeling (BIM) data and integrating BIM with indoor spatial information. However, topological operators implemented for BIM spatial queries are limited to qualitative relations (e.g. touching, intersecting). To overcome this limitation, we propose an extended maptree model to represent the fine-grained topology and spatial hierarchy of indoor spaces. The model is based on a maptree which consists of combinatorial maps and an adjacency tree. Topological relations (e.g., adjacency, incidence, and covering) derived from BIM are represented explicitly and formally by extended maptrees, which can facilitate the spatial queries of BIM. To construct an extended maptree, we first use a solid model represented by vertical extrusion and boundary representation to generate the isolated 3-cells of combinatorial maps. Then, the spatial relationships defined in IFC are used to sew them together. Furthermore, the incremental edges of extended maptrees are labeled as removed 2-cells. Based on this, we can merge adjacent 3-cells according to the spatial hierarchy of IFC.
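A greatly simplified stand-in for the data structure (illustrative only; the real model uses combinatorial maps with darts and sewing operations, which are not reproduced here) can capture the three ingredients the abstract describes: 3-cells bounded by 2-cells, an adjacency and hierarchy record, and merging of adjacent 3-cells with the shared 2-cell labeled as removed.

```python
# Greatly simplified stand-in (not the authors' data model): volumes (3-cells)
# linked by shared faces (2-cells), a containment tree for the spatial
# hierarchy, and a merge step that labels the shared face as removed.
class ExtendedMaptreeSketch:
    def __init__(self):
        self.cells = {}             # room id -> set of bounding face ids
        self.adjacency = {}         # frozenset({room_a, room_b}) -> shared face id
        self.hierarchy = {}         # child id -> parent id (storeys, zones, ...)
        self.removed_faces = set()  # 2-cells labeled as removed after merging

    def add_cell(self, cell_id, faces, parent=None):
        self.cells[cell_id] = set(faces)
        if parent is not None:
            self.hierarchy[cell_id] = parent

    def sew(self, a, b, shared_face):
        # record that two 3-cells share a 2-cell (adjacency derived from IFC relations)
        self.adjacency[frozenset((a, b))] = shared_face

    def merge(self, a, b, merged_id):
        face = self.adjacency.pop(frozenset((a, b)))
        self.removed_faces.add(face)
        self.cells[merged_id] = (self.cells.pop(a) | self.cells.pop(b)) - {face}

    def adjacent(self, cell_id):
        return [next(iter(pair - {cell_id})) for pair in self.adjacency if cell_id in pair]

# toy example: two rooms on one storey sharing a wall face
m = ExtendedMaptreeSketch()
m.add_cell("room_101", ["f1", "f2", "f_shared"], parent="storey_1")
m.add_cell("room_102", ["f3", "f4", "f_shared"], parent="storey_1")
m.sew("room_101", "room_102", "f_shared")
print(m.adjacent("room_101"))              # -> ['room_102']
m.merge("room_101", "room_102", "zone_A")
print(m.removed_faces, m.cells["zone_A"])
```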
Comparing Tactile Maps and Haptic Digital Representations of a Maritime Environment
ERIC Educational Resources Information Center
Simonnet, Mathieu; Vieilledent, Steephane; Jacobson, R. Daniel; Tisseau, Jacques
2011-01-01
A map exploration and representation exercise was conducted with participants who were totally blind. Representations of maritime environments were presented either with a tactile map or with a digital haptic virtual map. We assessed the knowledge of spatial configurations using a triangulation technique. The results revealed that both types of…
DEM generation from contours and a low-resolution DEM
NASA Astrophysics Data System (ADS)
Li, Xinghua; Shen, Huanfeng; Feng, Ruitao; Li, Jie; Zhang, Liangpei
2017-12-01
A digital elevation model (DEM) is a virtual representation of topography, where the terrain is established by the three-dimensional co-ordinates. In the framework of sparse representation, this paper investigates DEM generation from contours. Since contours are usually sparsely distributed and closely related in space, sparse spatial regularization (SSR) is enforced on them. In order to make up for the lack of spatial information, another lower spatial resolution DEM from the same geographical area is introduced. In this way, the sparse representation implements the spatial constraints in the contours and extracts the complementary information from the auxiliary DEM. Furthermore, the proposed method integrates the advantage of the unbiased estimation of kriging. For brevity, the proposed method is called the kriging and sparse spatial regularization (KSSR) method. The performance of the proposed KSSR method is demonstrated by experiments in Shuttle Radar Topography Mission (SRTM) 30 m DEM and Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) 30 m global digital elevation model (GDEM) generation from the corresponding contours and a 90 m DEM. The experiments confirm that the proposed KSSR method outperforms the traditional kriging and SSR methods, and it can be successfully used for DEM generation from contours.
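As a loose illustration of the fusion idea (not the published KSSR algorithm; a quadratic smoothness penalty merely stands in for the paper's kriging and sparse spatial regularization, and every value below is an assumption), a fine DEM can be estimated by jointly penalizing disagreement with sparse contour elevations, deviation from the upsampled coarse DEM, and surface roughness.

```python
# Loose illustration only (not the published KSSR method): fuse sparse contour
# elevations with an upsampled coarse DEM under a quadratic roughness penalty.
import numpy as np

rng = np.random.default_rng(2)
n = 32
yy, xx = np.mgrid[0:n, 0:n]
truth = 50 + 30 * np.sin(xx / 8.0) + 20 * np.cos(yy / 10.0)      # synthetic terrain

contour_mask = (np.round(truth) % 10 == 0)                       # pixels lying on contour lines
contours = np.where(contour_mask, truth, 0.0)
coarse = truth[::4, ::4] + rng.normal(scale=2.0, size=(8, 8))    # noisy low-resolution DEM
coarse_up = np.kron(coarse, np.ones((4, 4)))                     # crude nearest-neighbour upsampling

def laplacian(D):
    return (np.roll(D, 1, 0) + np.roll(D, -1, 0) +
            np.roll(D, 1, 1) + np.roll(D, -1, 1) - 4 * D)

D = coarse_up.copy()
lam_dem, lam_smooth, step = 0.1, 0.5, 0.01
for _ in range(3000):                                            # plain gradient descent
    grad = (2 * contour_mask * (D - contours)                    # honor contour elevations
            + 2 * lam_dem * (D - coarse_up)                      # stay near the coarse DEM
            + 2 * lam_smooth * laplacian(laplacian(D)))          # penalize roughness
    D -= step * grad

rmse = lambda A: float(np.sqrt(((A - truth) ** 2).mean()))
print("RMSE vs truth: upsampled coarse %.2f, fused estimate %.2f" % (rmse(coarse_up), rmse(D)))
```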
Nicholls, Alastair P; Melia, Anne; Farmer, Eric W; Shaw, Gareth; Milne, Tracey; Stedmon, Alex; Sharples, Sarah; Cox, Gemma
2007-07-01
At present, air traffic controllers (ATCOs) exercise strict control over routing authority for aircraft movement in airspace. The onset of a free flight environment, however, may well result in a dramatic change to airspace jurisdictions, with aircraft movements for the large part being governed by aircrew, not ATCOs. The present study examined the impact of such changes on spatial memory for recent and non-recent locations of aircraft represented on a visual display. The experiment contrasted present conditions, in which permission for manoeuvres is granted by ATCOs, with potential free flight conditions, in which aircrew undertake deviations without explicit approval from ATCOs. Results indicated that the ATCO role adopted by participants impacted differently on short-term and long-term spatial representations of aircraft manoeuvres. Although informing participants of impending deviations has beneficial effects on spatial representations in the short term, long-term representations of spatial events are affected deleteriously by the presentation of subsequent information pertaining to other aircraft. This study suggests strongly that recognition of the perceptual and cognitive consequences of changing to a free flight environment is crucial if air safety is not to be jeopardized.
ERIC Educational Resources Information Center
Taylor, Roger S.; Grundstrom, Erika D.
2011-01-01
Given that astronomy heavily relies on visual representations it is especially likely for individuals to assume that instructional materials, such as visual representations of the Earth-Moon system (EMS), would be relatively accurate. However, in our research, we found that images in middle-school textbooks and educational webpages were commonly…
ERIC Educational Resources Information Center
Kastens, Kim A.; Pistolesi, Linda; Passow, Michael J.
2014-01-01
Research has shown that spatial thinking is important in science in general, and in Earth Science in particular, and that performance on spatially demanding tasks can be fostered through instruction. Because spatial thinking is rarely taught explicitly in the U.S. education system, improving spatial thinking may be "low-hanging fruit" as…
Auditory Spatial Attention Representations in the Human Cerebral Cortex
Kong, Lingqiang; Michalka, Samantha W.; Rosen, Maya L.; Sheremata, Summer L.; Swisher, Jascha D.; Shinn-Cunningham, Barbara G.; Somers, David C.
2014-01-01
Auditory spatial attention serves important functions in auditory source separation and selection. Although auditory spatial attention mechanisms have been generally investigated, the neural substrates encoding spatial information acted on by attention have not been identified in the human neocortex. We performed functional magnetic resonance imaging experiments to identify cortical regions that support auditory spatial attention and to test 2 hypotheses regarding the coding of auditory spatial attention: 1) auditory spatial attention might recruit the visuospatial maps of the intraparietal sulcus (IPS) to create multimodal spatial attention maps; 2) auditory spatial information might be encoded without explicit cortical maps. We mapped visuotopic IPS regions in individual subjects and measured auditory spatial attention effects within these regions of interest. Contrary to the multimodal map hypothesis, we observed that auditory spatial attentional modulations spared the visuotopic maps of IPS; the parietal regions activated by auditory attention lacked map structure. However, multivoxel pattern analysis revealed that the superior temporal gyrus and the supramarginal gyrus contained significant information about the direction of spatial attention. These findings support the hypothesis that auditory spatial information is coded without a cortical map representation. Our findings suggest that audiospatial and visuospatial attention utilize distinctly different spatial coding schemes. PMID:23180753
Augmenting cognitive architectures to support diagrammatic imagination.
Chandrasekaran, Balakrishnan; Banerjee, Bonny; Kurup, Unmesh; Lele, Omkar
2011-10-01
Diagrams are a form of spatial representation that supports reasoning and problem solving. Even when diagrams are external, not to mention when there are no external representations, problem solving often calls for internal representations, that is, representations in cognition, of diagrammatic elements and internal perceptions on them. General cognitive architectures--Soar and ACT-R, to name the most prominent--do not have representations and operations to support diagrammatic reasoning. In this article, we examine some requirements for such internal representations and processes in cognitive architectures. We discuss the degree to which DRS, our earlier proposal for such an internal representation for diagrams, meets these requirements. In DRS, the diagrams are not raw images, but a composition of objects that can be individuated and thus symbolized, while, unlike traditional symbols, the referent of the symbol is an object that retains its perceptual essence, namely, its spatiality. This duality provides a way to resolve what anti-imagists thought was a contradiction in mental imagery: the compositionality of mental images that seemed to be unique to symbol systems, and their support of a perceptual experience of images and some types of perception on them. We briefly review the use of DRS to augment Soar and ACT-R with a diagrammatic representation component. We identify issues for further research. Copyright © 2011 Cognitive Science Society, Inc.
Caldwell-Harris, Catherine L; Ayçiçegi, Ayse
2006-09-01
Because humans need both autonomy and interdependence, persons with either an extreme collectivist orientation (allocentrics) or extreme individualist values (idiocentrics) may be at risk for possession of some features of psychopathology. Is an extreme personality style a risk factor primarily when it conflicts with the values of the surrounding society? Individualism-collectivism scenarios and a battery of clinical and personality scales were administered to nonclinical samples of college students in Boston and Istanbul. For students residing in a highly individualistic society (Boston), collectivism scores were positively correlated with depression, social anxiety, obsessive-compulsive disorder and dependent personality. Individualism scores, particularly horizontal individualism, were negatively correlated with these same scales. A different pattern was obtained for students residing in a collectivist culture, Istanbul. Here individualism (and especially horizontal individualism) was positively correlated with scales for paranoid, schizoid, narcissistic, borderline and antisocial personality disorder. Collectivism (particularly vertical collectivism) was associated with low report of symptoms on these scales. These results indicate that having a personality style which conflicts with the values of society is associated with psychiatric symptoms. Having an orientation inconsistent with societal values may thus be a risk factor for poor mental health.
Three-Dimensional Display Of Document Set
Lantrip, David B.; Pennock, Kelly A.; Pottier, Marc C.; Schur, Anne; Thomas, James J.; Wise, James A.
2003-06-24
A method for spatializing text content for enhanced visual browsing and analysis. The invention is applied to large text document corpora such as digital libraries, regulations and procedures, archived reports, and the like. The text content from these sources may be transformed to a spatial representation that preserves informational characteristics from the documents. The three-dimensional representation may then be visually browsed and analyzed in ways that avoid language processing and that reduce the analysts' effort.
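One plausible minimal reading of such text spatialization (an assumption-laden sketch, not the patented method) is to map documents to three coordinates so that textual similarity becomes spatial proximity, for example via TF-IDF features reduced to three components.

```python
# Minimal sketch (not the patented method): embed documents in three dimensions
# so that textual similarity becomes spatial proximity, via TF-IDF + SVD.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

docs = [
    "safety procedures for reactor shutdown and maintenance",
    "maintenance schedule and shutdown checklist for the reactor",
    "annual budget report and financial projections",
    "financial audit of the annual budget",
]

X = TfidfVectorizer(stop_words="english").fit_transform(docs)
coords = TruncatedSVD(n_components=3, random_state=0).fit_transform(X)
for doc, (x, y, z) in zip(docs, coords):
    print(f"({x:+.2f}, {y:+.2f}, {z:+.2f})  {doc[:45]}")
```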
Three-dimensional display of document set
Lantrip, David B [Oxnard, CA; Pennock, Kelly A [Richland, WA; Pottier, Marc C [Richland, WA; Schur, Anne [Richland, WA; Thomas, James J [Richland, WA; Wise, James A [Richland, WA
2006-09-26
A method for spatializing text content for enhanced visual browsing and analysis. The invention is applied to large text document corpora such as digital libraries, regulations and procedures, archived reports, and the like. The text content from these sources may be transformed to a spatial representation that preserves informational characteristics from the documents. The three-dimensional representation may then be visually browsed and analyzed in ways that avoid language processing and that reduce the analysts' effort.
Three-dimensional display of document set
Lantrip, David B [Oxnard, CA; Pennock, Kelly A [Richland, WA; Pottier, Marc C [Richland, WA; Schur, Anne [Richland, WA; Thomas, James J [Richland, WA; Wise, James A [Richland, WA
2001-10-02
A method for spatializing text content for enhanced visual browsing and analysis. The invention is applied to large text document corpora such as digital libraries, regulations and procedures, archived reports, and the like. The text content from these sources may be transformed to a spatial representation that preserves informational characteristics from the documents. The three-dimensional representation may then be visually browsed and analyzed in ways that avoid language processing and that reduce the analysts' effort.
Three-dimensional display of document set
Lantrip, David B [Oxnard, CA; Pennock, Kelly A [Richland, WA; Pottier, Marc C [Richland, WA; Schur, Anne [Richland, WA; Thomas, James J [Richland, WA; Wise, James A [Richland, WA; York, Jeremy [Bothell, WA
2009-06-30
A method for spatializing text content for enhanced visual browsing and analysis. The invention is applied to large text document corpora such as digital libraries, regulations and procedures, archived reports, and the like. The text content from these sources may be transformed to a spatial representation that preserves informational characteristics from the documents. The three-dimensional representation may then be visually browsed and analyzed in ways that avoid language processing and that reduce the analysts' effort.
The impact of conflicting spatial representations in airborne unmanned aerial system sensor control
2016-02-01
Their methodology, however, was limited: participants were only seated in a forward-configured seat in a civilian aircraft and only rudimentary...a starboard seat, facing towards the center of the aircraft, great discord between these spatial representations and their relevant sensory inputs...configuration provided space for three participants to be run at a time in three different seating orientations: forward, backward, and center of the aircraft
Aesthetic issues in spatial composition: representational fit and the role of semantic context.
Sammartino, Jonathan; Palmer, Stephen E
2012-01-01
Previous research on aesthetic preference for spatial compositions has shown robust, systematic preferences for object locations within frames and for object perspectives. In the present experiment, we show that these preferences can be dramatically altered by changing the contextual meaning of an image through pairing it with different titles, as predicted by a theoretical account in terms of "representational fit". People prefer standard (default) compositions with a neutral title that merely describes the content of the picture (eg side-view of a plane with the title "Flying") but nonstandard compositions when they "fit" a title with compatible spatial implications (eg rear-view of a plane with the title "Departing"). The results are discussed in terms of their implications for theories based on representational fit versus perceptual and conceptual fluency and with their implications for classic aesthetic accounts in terms of preference for novelty through violating expectations.
Rauscher, Larissa; Kohn, Juliane; Käser, Tanja; Mayer, Verena; Kucian, Karin; McCaskey, Ursina; Esser, Günter; von Aster, Michael
2016-01-01
Calcularis is a computer-based training program which focuses on basic numerical skills, spatial representation of numbers and arithmetic operations. The program includes a user model allowing flexible adaptation to the child's individual knowledge and learning profile. The study design to evaluate the training comprises three conditions (Calcularis group, waiting control group, spelling training group). One hundred and thirty-eight children from second to fifth grade participated in the study. Training duration comprised a minimum of 24 training sessions of 20 min within a time period of 6-8 weeks. Compared to the group without training (waiting control group) and the group with an alternative training (spelling training group), the children of the Calcularis group demonstrated a higher benefit in subtraction and number line estimation with medium to large effect sizes. Therefore, Calcularis can be used effectively to support children in arithmetic performance and spatial number representation.
Environmental boundaries as a mechanism for correcting and anchoring spatial maps
2016-01-01
Ubiquitous throughout the animal kingdom, path integration-based navigation allows an animal to take a circuitous route out from a home base and, using only self-motion cues, calculate a direct vector back. Despite variation in an animal's running speed and direction, medial entorhinal grid cells fire in repeating place-specific locations, pointing to the medial entorhinal circuit as a potential neural substrate for path integration-based spatial navigation. Supporting this idea, grid cells appear to provide an environment-independent metric representation of the animal's location in space and preserve their periodic firing structure even in complete darkness. However, a series of recent experiments indicate that spatially responsive medial entorhinal neurons depend on environmental cues in a more complex manner than previously proposed. While multiple types of landmarks may influence entorhinal spatial codes, environmental boundaries have emerged as salient landmarks that both correct error in entorhinal grid cells and bind internal spatial representations to the geometry of the external spatial world. The influence of boundaries on error correction and grid symmetry points to medial entorhinal border cells, which fire at a high rate only near environmental boundaries, as a potential neural substrate for landmark-driven control of spatial codes. The influence of border cells on other entorhinal cell populations, such as grid cells, could depend on plasticity, raising the possibility that experience plays a critical role in determining how external cues influence internal spatial representations. PMID:26563618
The development of spatial behaviour and the hippocampal neural representation of space
Wills, Thomas J.; Muessig, Laurenz; Cacucci, Francesca
2014-01-01
The role of the hippocampal formation in spatial cognition is thought to be supported by distinct classes of neurons whose firing is tuned to an organism's position and orientation in space. In this article, we review recent research focused on how and when this neural representation of space emerges during development: each class of spatially tuned neurons appears at a different age, and matures at a different rate, but all the main spatial responses tested so far are present by three weeks of age in the rat. We also summarize the development of spatial behaviour in the rat, describing how active exploration of space emerges during the third week of life, the first evidence of learning in formal tests of hippocampus-dependent spatial cognition is observed in the fourth week, whereas fully adult-like spatial cognitive abilities require another few weeks to be achieved. We argue that the development of spatially tuned neurons needs to be considered within the context of the development of spatial behaviour in order to achieve an integrated understanding of the emergence of hippocampal function and spatial cognition. PMID:24366148
Effects of spatial training on transitive inference performance in humans and rhesus monkeys
Gazes, Regina Paxton; Lazareva, Olga F.; Bergene, Clara N.; Hampton, Robert R.
2015-01-01
It is often suggested that transitive inference (TI; if A>B and B>C then A>C) involves mentally representing overlapping pairs of stimuli in a spatial series. However, there is little direct evidence to unequivocally determine the role of spatial representation in TI. We tested whether humans and rhesus monkeys use spatial representations in TI by training them to organize seven images in a vertical spatial array. Then, we presented subjects with a TI task using these same images. The implied TI order was either congruent or incongruent with the order of the trained spatial array. Humans in the congruent condition learned premise pairs more quickly, and were faster and more accurate in critical probe tests, suggesting that the spatial arrangement of images learned during spatial training influenced subsequent TI performance. Monkeys first trained in the congruent condition also showed higher test trial accuracy when the spatial and inferred orders were congruent. These results directly support the hypothesis that humans solve TI problems by spatial organization, and suggest that this cognitive mechanism for inference may have ancient evolutionary roots. PMID:25546105
Subliminal access to abstract face representations does not rely on attention.
Harry, Bronson; Davis, Chris; Kim, Jeesun
2012-03-01
The present study used masked repetition priming to examine whether face representations can be accessed without attention. Two experiments using a face recognition task (fame judgement) presented masked repetition and control primes in spatially unattended locations prior to target onset. Experiment 1 (n=20) used the same images as primes and as targets and Experiment 2 (n=17) used different images of the same individual as primes and targets. Repetition priming was observed across both experiments regardless of whether spatial attention was cued to the location of the prime. Priming occurred for both famous and non-famous targets in Experiment 1 but was only reliable for famous targets in Experiment 2, suggesting that priming in Experiment 1 indexed access to view-specific representations whereas priming in Experiment 2 indexed access to view-invariant, abstract representations. Overall, the results indicate that subliminal access to abstract face representations does not rely on attention. Copyright © 2011 Elsevier Inc. All rights reserved.
Differentiating Spatial Memory from Spatial Transformations
ERIC Educational Resources Information Center
Street, Whitney N.; Wang, Ranxiao Frances
2014-01-01
The perspective-taking task is one of the most common paradigms used to study the nature of spatial memory, and better performance for certain orientations is generally interpreted as evidence of spatial representations using these reference directions. However, performance advantages can also result from the relative ease in certain…
Prut, L; Prenosil, G; Willadt, S; Vogt, K; Fritschy, J-M; Crestani, F
2010-07-01
The memory for location of objects, which binds information about objects to discrete positions or spatial contexts of occurrence, is a form of episodic memory particularly sensitive to hippocampal damage. Its early decline is symptomatic of dementia in the elderly. Substances that selectively reduce alpha5-GABA(A) receptor function are currently being developed as potential cognition enhancers for Alzheimer's disease and other dementias, consistent with genetic studies implicating these receptors, which are highly expressed in the hippocampus, in learning performance. Here we explored the consequences of reduced GABA(A)alpha5-subunit contents, as occurring in alpha5(H105R) knock-in mice, on the memory for location of objects. This required the behavioral characterization of alpha5(H105R) and wild-type animals in various tasks examining learning and memory retrieval strategies for objects, locations, contexts and their combinations. In mutants, decreased amounts of alpha5-subunits and retained long-term potentiation in the hippocampus were confirmed. They exhibited hyperactivity with conserved circadian rhythm in familiar actimeters, and normal exploration and emotional reactivity in novel places, allocentric spatial guidance, and motor pattern learning acquisition, inhibition and flexibility in T- and eight-arm mazes. Processing of object, position and context memories and object-guided response learning were spared. A genotype difference in object-in-place memory retrieval and in encoding and response-learning strategies for object-location combinations manifested as a bias, in the mutants, favoring object-based recognition and guidance strategies over spatial processing of objects. These findings identify in alpha5(H105R) mice a behavioral-cognitive phenotype affecting basal locomotion and the memory for location of objects indicative of hippocampal dysfunction resulting from moderately decreased alpha5-subunit contents.
Johnson, Jeffrey S; Spencer, John P
2016-05-01
Studies examining the relationship between spatial attention and spatial working memory (SWM) have shown that discrimination responses are faster for targets appearing at locations that are being maintained in SWM, and that location memory is impaired when attention is withdrawn during the delay. These observations support the proposal that sustained attention is required for successful retention in SWM: If attention is withdrawn, memory representations are likely to fail, increasing errors. In the present study, this proposal was reexamined in light of a neural-process model of SWM. On the basis of the model's functioning, we propose an alternative explanation for the observed decline in SWM performance when a secondary task is performed during retention: SWM representations drift systematically toward the location of targets appearing during the delay. To test this explanation, participants completed a color discrimination task during the delay interval of a spatial-recall task. In the critical shifting-attention condition, the color stimulus could appear either toward or away from the midline reference axis, relative to the memorized location. We hypothesized that if shifting attention during the delay leads to the failure of SWM representations, there should be an increase in the variance of recall errors, but no change in directional errors, regardless of the direction of the shift. Conversely, if shifting attention induces drift of SWM representations-as predicted by the model-systematic changes in the patterns of spatial-recall errors should occur that would depend on the direction of the shift. The results were consistent with the latter possibility-recall errors were biased toward the locations of discrimination targets appearing during the delay.
Kim, Steve M; Ganguli, Surya; Frank, Loren M
2012-08-22
Hippocampal place cells convey spatial information through a combination of spatially selective firing and theta phase precession. The way in which this information influences regions like the subiculum that receive input from the hippocampus remains unclear. The subiculum receives direct inputs from area CA1 of the hippocampus and sends divergent output projections to many other parts of the brain, so we examined the firing patterns of rat subicular neurons. We found a substantial transformation in the subicular code for space from sparse to dense firing rate representations along a proximal-distal anatomical gradient: neurons in the proximal subiculum are more similar to canonical, sparsely firing hippocampal place cells, whereas neurons in the distal subiculum have higher firing rates and more distributed spatial firing patterns. Using information theory, we found that the more distributed spatial representation in the subiculum carries, on average, more information about spatial location and context than the sparse spatial representation in CA1. Remarkably, despite the disparate firing rate properties of subicular neurons, we found that neurons at all proximal-distal locations exhibit robust theta phase precession, with similar spiking oscillation frequencies as neurons in area CA1. Our findings suggest that the subiculum is specialized to compress sparse hippocampal spatial codes into highly informative distributed codes suitable for efficient communication to other brain regions. Moreover, despite this substantial compression, the subiculum maintains finer scale temporal properties that may allow it to participate in oscillatory phase coding and spike timing-dependent plasticity in coordination with other regions of the hippocampal circuit.
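The information-theoretic comparison of sparse and distributed codes can be illustrated with the standard Skaggs-style spatial-information measure computed from a firing-rate map and an occupancy distribution. Whether a sparse or a dense code carries more information depends on the measure (per spike versus per second) and on the population; the sketch below only shows the computation, with synthetic rate maps rather than the paper's data.

```python
import numpy as np

def spatial_information(rate_map, occupancy):
    """Skaggs-style spatial information of a firing-rate map.
    Returns (bits per spike, bits per second)."""
    p = occupancy / occupancy.sum()
    r = np.asarray(rate_map, float)
    r_bar = (p * r).sum()
    valid = r > 0
    per_spike = np.sum(p[valid] * (r[valid] / r_bar) * np.log2(r[valid] / r_bar))
    return per_spike, per_spike * r_bar   # bits/s = bits/spike * mean rate

occupancy = np.ones(100)                                    # uniform occupancy, 100 spatial bins
sparse = np.zeros(100); sparse[40:45] = 10.0                # sparse, CA1-like place field
dense = 4.0 + 3.0 * np.sin(np.linspace(0, 6 * np.pi, 100))  # high-rate, distributed field
for name, rm in [("sparse", sparse), ("dense", dense)]:
    bps, bpsec = spatial_information(rm, occupancy)
    print(f"{name}: {bps:.2f} bits/spike, {bpsec:.2f} bits/s")
```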
Noise Analysis of Spatial Phase Coding in Analog Acoustooptic Processors
NASA Technical Reports Server (NTRS)
Gary, Charles K.; Lum, Henry, Jr. (Technical Monitor)
1994-01-01
Optical beams can carry information in their amplitude and phase; however, optical analog numerical calculators such as an optical matrix processor use incoherent light to achieve linear operation. Thus, the phase information is lost and only the magnitude can be used. This limits such processors to the representation of positive real numbers. Many systems have been devised to overcome this deficit through the use of digital number representations, but they all operate at a greatly reduced efficiency in contrast to analog systems. The most widely accepted method to achieve sign coding in analog optical systems has been the use of an offset for the zero level. Unfortunately, this results in increased noise sensitivity for small numbers. In this paper, we examine the use of spatially coherent sign coding in acoustooptical processors, a method first developed for digital calculations by D. V. Tigin. This coding technique uses spatial coherence for the representation of signed numbers, while temporal incoherence allows for linear analog processing of the optical information. We show how spatial phase coding reduces noise sensitivity for signed analog calculations.
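The claim that an offset ("bias") zero level increases noise sensitivity for small numbers can be made concrete with a toy calculation: with a bias, a signed value x in [-1, 1] is carried as the intensity (x + 1)/2, so even a value near zero rides on a large pedestal. Assuming, purely for illustration, shot-noise-like detection noise whose standard deviation grows with intensity, the relative error for small |x| is much larger under bias coding than when the magnitude is carried directly and the sign separately (as in phase coding). The numbers below are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
k = 0.02                              # assumed shot-noise-like scale: std grows with sqrt(intensity)
x = np.array([0.02, 0.1, 0.5, 0.9])   # signed values to represent
n = 100000

def noisy(intensity):
    """Detected intensity with noise whose std scales with sqrt(intensity)."""
    return intensity + rng.normal(0, k * np.sqrt(intensity), size=(n, intensity.size))

# Offset (bias) coding: even x ~ 0 rides on a pedestal of 0.5
x_hat_bias = 2 * noisy((x + 1) / 2) - 1
# Signed (phase-like) coding: magnitude carried directly, sign carried separately
x_hat_sign = np.sign(x) * noisy(np.abs(x))

for i, xi in enumerate(x):
    rb = np.abs(x_hat_bias[:, i] - xi).mean() / abs(xi)
    rs = np.abs(x_hat_sign[:, i] - xi).mean() / abs(xi)
    print(f"x={xi:4.2f}  bias rel. error {rb:6.3f}   signed rel. error {rs:6.3f}")
```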
Play in two societies: pervasiveness of process, specificity of structure.
Bornstein, M H; Haynes, O M; Pascual, L; Painter, K M; Galperín, C
1999-01-01
The present study compared Argentine (N = 39) and U.S. (N = 43) children and their mothers on exploratory, symbolic, and social play and interaction when children were 20 months of age. Patterns of cultural similarity and difference emerged. In both cultures, boys engaged in more exploratory play than girls, and girls engaged in more symbolic play than boys; mothers of boys engaged in more exploratory play than mothers of girls, and mothers of girls engaged in more symbolic play than mothers of boys. Moreover, in both cultures, individual variation in children's exploratory and symbolic play was specifically associated with individual variation in mothers' exploratory and symbolic play, respectively. Between cultures, U.S. children and their mothers engaged in more exploratory play, whereas Argentine children and their mothers engaged in more symbolic play. Moreover, Argentine mothers exceeded U.S. mothers in social play and verbal praise of their children. During an early period of mental and social growth, general developmental processes in play may be pervasive, but dyadic and cultural structures are apparently specific. Overall, Argentine and U.S. dyads utilized different modes of exploration, representation, and interaction--emphasizing "other-directed" acts of pretense versus "functional" and "combinatorial" exploration, for example--and these individual and dyadic allocentric versus idiocentric stresses accord with larger cultural concerns of collectivism versus individualism in the two societies.
Virtual Human Analogs to Rodent Spatial Pattern Separation and Completion Memory Tasks
ERIC Educational Resources Information Center
Paleja, Meera; Girard, Todd A.; Christensen, Bruce K.
2011-01-01
Spatial pattern separation (SPS) and spatial pattern completion (SPC) have played an increasingly important role in computational and rodent literatures as processes underlying associative memory. SPS and SPC are complementary processes, allowing the formation of unique representations and the reconstruction of complete spatial environments based…
Hacked Landscapes: Tensions, Borders, and Positionality in Spatial Literacy
ERIC Educational Resources Information Center
Schmidt, Sandra J.
2017-01-01
By focusing on critical geographies, landscape, and spatial literacy, this article evaluates a semester-long spatial justice project conducted in a preservice teacher education program. The analysis recognizes the limitations of reading the products literally as a means of comprehending spatial representation. It expands the analysis by hacking…
The functional architecture of the ventral temporal cortex and its role in categorization
Grill-Spector, Kalanit; Weiner, Kevin S.
2014-01-01
Visual categorization is thought to occur in the human ventral temporal cortex (VTC), but how this categorization is achieved is still largely unknown. In this Review, we consider the computations and representations that are necessary for categorization and examine how the microanatomical and macroanatomical layout of the VTC might optimize them to achieve rapid and flexible visual categorization. We propose that efficient categorization is achieved by organizing representations in a nested spatial hierarchy in the VTC. This spatial hierarchy serves as a neural infrastructure for the representational hierarchy of visual information in the VTC and thereby enables flexible access to category information at several levels of abstraction. PMID:24962370
Orienting numbers in mental space: horizontal organization trumps vertical.
Holmes, Kevin J; Lourenco, Stella F
2012-01-01
While research on the spatial representation of number has provided substantial evidence for a horizontally oriented mental number line, recent studies suggest vertical organization as well. Directly comparing the relative strength of horizontal and vertical organization, however, we found no evidence of spontaneous vertical orientation (upward or downward), and horizontal trumped vertical when pitted against each other (Experiment 1). Only when numbers were conceptualized as magnitudes (as opposed to nonmagnitude ordinal sequences) did reliable vertical organization emerge, with upward orientation preferred (Experiment 2). Altogether, these findings suggest that horizontal representations predominate, and that vertical representations, when elicited, may be relatively inflexible. Implications for spatial organization beyond number, and its ontogenetic basis, are discussed.
The Fractions SNARC Revisited: Processing Fractions on a Consistent Mental Number Line.
Toomarian, Elizabeth Y; Hubbard, Edward M
2017-07-12
The ability to understand fractions is key to establishing a solid foundation in mathematics, yet children and adults struggle to comprehend them. Previous studies have suggested that these struggles emerge because people fail to process fraction magnitude holistically on the mental number line (MNL), focusing instead on fraction components (Bonato et al. 2007). Subsequent studies have produced evidence for default holistic processing (Meert et al., 2009; 2010), but examined only magnitude processing, not spatial representations. We explored the spatial representations of fractions on the MNL in a series of three experiments: Experiment 1 replicated Bonato et al. (2007); 30 naïve undergraduates compared unit fractions (1/1-1/9) to 1/5, resulting in a reverse SNARC effect. Experiment 2 countered potential strategic biases induced by the limited set of fractions used by Bonato et al. by expanding the stimulus set to include all irreducible, single-digit proper fractions, and asked participants to compare them against 1/2. We observed a classic SNARC effect, completely reversing the pattern from Experiment 1. Together, Experiments 1 and 2 demonstrate that stimulus properties dramatically impact spatial representations of fractions. In Experiment 3, we demonstrated within-subjects reliability of the SNARC effect across both a fractions and whole number comparison task. Our results suggest that adults can indeed process fraction magnitudes holistically, and that their spatial representations occur on a consistent MNL for both whole numbers and fractions.
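SNARC effects of the kind reported here are conventionally quantified by regressing, for each participant, the right-hand-minus-left-hand response-time difference (dRT) onto stimulus magnitude; a negative slope indicates the classic small-left/large-right association and a positive slope the reverse. The sketch below shows that generic analysis with made-up single-participant means; it is not the authors' analysis code.

```python
import numpy as np

def snarc_slope(magnitudes, rt_left, rt_right):
    """Least-squares slope of dRT = RT(right) - RT(left) regressed on magnitude.
    Negative slope = classic SNARC; positive slope = reverse SNARC."""
    drt = np.asarray(rt_right, float) - np.asarray(rt_left, float)
    slope, _intercept = np.polyfit(np.asarray(magnitudes, float), drt, 1)
    return slope

# Made-up mean RTs for fractions compared against 1/2 (not data from the experiments)
mags     = np.array([1/9, 1/7, 2/7, 3/8, 5/8, 3/4, 7/8, 8/9])
rt_left  = np.array([520, 522, 528, 531, 545, 549, 552, 556])
rt_right = np.array([560, 556, 549, 545, 531, 527, 523, 520])
print(f"SNARC slope: {snarc_slope(mags, rt_left, rt_right):.1f} ms per unit magnitude")
```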
Population Coding of Visual Space: Modeling
Lehky, Sidney R.; Sereno, Anne B.
2011-01-01
We examine how the representation of space is affected by receptive field (RF) characteristics of the encoding population. Spatial responses were defined by overlapping Gaussian RFs. These responses were analyzed using multidimensional scaling to extract the representation of global space implicit in population activity. Spatial representations were based purely on firing rates, which were not labeled with RF characteristics (tuning curve peak location, for example), differentiating this approach from many other population coding models. Because responses were unlabeled, this model represents space using intrinsic coding, extracting relative positions amongst stimuli, rather than extrinsic coding where known RF characteristics provide a reference frame for extracting absolute positions. Two parameters were particularly important: RF diameter and RF dispersion, where dispersion indicates how broadly RF centers are spread out from the fovea. For large RFs, the model was able to form metrically accurate representations of physical space on low-dimensional manifolds embedded within the high-dimensional neural population response space, suggesting that in some cases the neural representation of space may be dimensionally isomorphic with 3D physical space. Smaller RF sizes degraded and distorted the spatial representation, with the smallest RF sizes (present in early visual areas) being unable to recover even a topologically consistent rendition of space on low-dimensional manifolds. Finally, although positional invariance of stimulus responses has long been associated with large RFs in object recognition models, we found RF dispersion rather than RF diameter to be the critical parameter. In fact, at a population level, the modeling suggests that higher ventral stream areas with highly restricted RF dispersion would be unable to achieve positionally-invariant representations beyond this narrow region around fixation. PMID:21344012
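The general approach described, unlabeled responses from overlapping Gaussian receptive fields analyzed with multidimensional scaling, can be sketched compactly: simulate population responses to stimuli at known locations, compute pairwise distances between response vectors, and apply classical MDS to recover the relative spatial configuration (intrinsic coding). The parameter values below (RF width, dispersion, counts) are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(2)
n_neurons, rf_sigma, dispersion = 200, 3.0, 10.0
centers = rng.normal(0, dispersion, size=(n_neurons, 2))   # RF centers spread about the fovea

stimuli = np.array([[x, y] for x in (-5, 0, 5) for y in (-5, 0, 5)], float)

# Population response: unlabeled firing rates from overlapping Gaussian RFs
d2 = ((stimuli[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
responses = np.exp(-d2 / (2 * rf_sigma ** 2))

# Classical MDS on pairwise distances between population response vectors
D = np.linalg.norm(responses[:, None, :] - responses[None, :, :], axis=-1)
J = np.eye(len(D)) - np.ones_like(D) / len(D)
B = -0.5 * J @ (D ** 2) @ J
eigval, eigvec = np.linalg.eigh(B)
recovered = eigvec[:, -2:] * np.sqrt(eigval[-2:])          # 2-D embedding of the 9 stimuli
print(np.round(recovered, 2))   # relative positions, recovered up to rotation/reflection
```

Shrinking rf_sigma or dispersion in this sketch degrades the recovered configuration, which is the qualitative pattern the abstract describes for small receptive fields and restricted dispersion.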
Getting the Big Picture: Development of Spatial Scaling Abilities
ERIC Educational Resources Information Center
Frick, Andrea; Newcombe, Nora S.
2012-01-01
Spatial scaling is an integral aspect of many spatial tasks that involve symbol-to-referent correspondences (e.g., map reading, drawing). In this study, we asked 3-6-year-olds and adults to locate objects in a two-dimensional spatial layout using information from a second spatial representation (map). We examined how scaling factor and reference…
NASA Astrophysics Data System (ADS)
Voisin, Nathalie; Hejazi, Mohamad I.; Leung, L. Ruby; Liu, Lu; Huang, Maoyi; Li, Hong-Yi; Tesfa, Teklu
2017-05-01
Realistic representations of sectoral water withdrawals and consumptive demands and their allocation to surface and groundwater sources are important for improving modeling of the integrated water cycle. To inform future model development, we enhance the representation of water management in a regional Earth system (ES) model with a spatially distributed allocation of sectoral water demands simulated by a regional integrated assessment (IA) model to surface and groundwater systems. The integrated modeling framework (IA-ES) is evaluated by analyzing the simulated regulated flow and sectoral supply deficit in major hydrologic regions of the conterminous U.S., an approach that differs from ES studies that examine water storage variations. Decreases in historical supply deficit are used as metrics to evaluate IA-ES model improvement in representing the complex sectoral human activities for assessing future adaptation and mitigation strategies. We also assess the spatial changes in both regulated flow and unmet demands, for irrigation and nonirrigation sectors, resulting from the individual and combined additions of groundwater and return flow modules. Results show that groundwater use has a pronounced regional and sectoral effect by reducing water supply deficit. The effects of sectoral return flow exhibit a clear east-west contrast in the hydrologic patterns, so the return flow component combined with the IA sectoral demands is a major driver for spatial redistribution of water resources and water deficits in the U.S. Our analysis highlights the need for spatially distributed sectoral representation of water management to capture the regional differences in interbasin redistribution of water resources and deficits.
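The supply-deficit metric used here can be illustrated with a minimal accounting sketch: each sector's demand is first met from available surface water, the remainder is drawn from groundwater up to a pumping limit, and whatever is still unmet is the deficit. All quantities and the allocation rule below are placeholders for illustration, not the IA-ES model's actual scheme.

```python
def allocate(demands, surface_supply, groundwater_cap):
    """Toy sectoral allocation: meet demands from surface water first,
    then groundwater up to a cap; return per-sector deficits."""
    deficits = {}
    for sector, demand in demands.items():
        from_surface = min(demand, surface_supply)
        surface_supply -= from_surface
        from_ground = min(demand - from_surface, groundwater_cap)
        groundwater_cap -= from_ground
        deficits[sector] = demand - from_surface - from_ground
    return deficits

# Hypothetical monthly volumes (km^3) for one hydrologic region
demands = {"irrigation": 5.0, "municipal": 1.2, "thermoelectric": 0.8}
print(allocate(demands, surface_supply=4.5, groundwater_cap=1.0))
# Raising groundwater_cap shrinks the irrigation deficit, mirroring the study's qualitative finding.
```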
Number-space mapping in human infants.
de Hevia, Maria Dolores; Spelke, Elizabeth S
2010-05-01
Mature representations of number are built on a core system of numerical representation that connects to spatial representations in the form of a mental number line. The core number system is functional in early infancy, but little is known about the origins of the mapping of numbers onto space. In this article, we show that preverbal infants transfer the discrimination of an ordered series of numerosities to the discrimination of an ordered series of line lengths. Moreover, infants construct relationships between numbers and line lengths when they are habituated to unordered pairings that vary positively, but not when they are habituated to unordered pairings that vary inversely. These findings provide evidence that a predisposition to relate representations of numerical magnitude to spatial length develops early in life. A central foundation of mathematics, science, and technology therefore emerges prior to experience with language, symbol systems, or measurement devices.
Jadhav, Shantanu P.; Rothschild, Gideon; Roumis, Demetris K.; Frank, Loren M.
2016-01-01
Interactions between the hippocampus and prefrontal cortex (PFC) are critical for learning and memory. Hippocampal activity during awake sharp wave ripple (SWR) events is important for spatial learning, and hippocampal SWR activity often represents past or potential future experiences. Whether or how this reactivation engages the PFC, and how reactivation might interact with ongoing patterns of PFC activity remains unclear. We recorded hippocampal CA1 and PFC activity in animals learning spatial tasks and found that many PFC cells showed spiking modulation during SWRs. Unlike in CA1, SWR-related activity in PFC comprised both excitation and inhibition of distinct populations. Within individual SWRs, excitation activated PFC cells with representations related to the concurrently reactivated hippocampal representation, while inhibition suppressed PFC cells with unrelated representations. Thus, awake SWRs mark times of strong coordination between hippocampus and PFC that reflects structured reactivation of representations related to ongoing experience. PMID:26971950
Fazl, Arash; Grossberg, Stephen; Mingolla, Ennio
2009-02-01
How does the brain learn to recognize an object from multiple viewpoints while scanning a scene with eye movements? How does the brain avoid the problem of erroneously classifying parts of different objects together? How are attention and eye movements intelligently coordinated to facilitate object learning? A neural model provides a unified mechanistic explanation of how spatial and object attention work together to search a scene and learn what is in it. The ARTSCAN model predicts how an object's surface representation generates a form-fitting distribution of spatial attention, or "attentional shroud". All surface representations dynamically compete for spatial attention to form a shroud. The winning shroud persists during active scanning of the object. The shroud maintains sustained activity of an emerging view-invariant category representation while multiple view-specific category representations are learned and are linked through associative learning to the view-invariant object category. The shroud also helps to restrict scanning eye movements to salient features on the attended object. Object attention plays a role in controlling and stabilizing the learning of view-specific object categories. Spatial attention hereby coordinates the deployment of object attention during object category learning. Shroud collapse releases a reset signal that inhibits the active view-invariant category in the What cortical processing stream. Then a new shroud, corresponding to a different object, forms in the Where cortical processing stream, and search using attention shifts and eye movements continues to learn new objects throughout a scene. The model mechanistically clarifies basic properties of attention shifts (engage, move, disengage) and inhibition of return. It simulates human reaction time data about object-based spatial attention shifts, and learns with 98.1% accuracy and a compression of 430 on a letter database whose letters vary in size, position, and orientation. The model provides a powerful framework for unifying many data about spatial and object attention, and their interactions during perception, cognition, and action.
Non-spatial neglect for the mental number line.
van Dijck, Jean-Philippe; Gevers, Wim; Lafosse, Christophe; Doricchi, Fabrizio; Fias, Wim
2011-07-01
Several psychophysical investigations, expanding the classical introspective observations by Galton, have suggested that the mental representation of numbers takes the form of a number line along which magnitude is positioned in ascending order according to reading habits, i.e. from left to right in Western cultures. In keeping with the evidence, pathological rightward deviations in the bisection of number intervals due to right brain damage are generally interpreted as originating from a purely spatial-attentional deficit in the processing of the left side of number intervals. However, consistent double dissociations between defective processing of the left side of physical and mental number space have called into question the universality of this interpretation. Recent evidence suggests a link between rightward deviations in number space and defective memory for both spatial and non-spatial sequences of items. Here we describe the case of a left brain-damaged patient exhibiting right-sided neglect for extrapersonal and representational space, and left-sided neglect on the mental number line. Accurate neuropsychological examination revealed that the apparent left-sided neglect in the bisection of number intervals had a purely non-spatial origin and was based on mnemonic difficulties for the initial items of verbal sequences presented visually at an identical spatial position. These findings show that effective position-based verbal working memory might be crucial for numerical tasks that are usually considered to involve purely spatial representation of numerical magnitudes. Copyright © 2011 Elsevier Ltd. All rights reserved.
De Sá Teixeira, Nuno
2016-01-01
Visual memory for the spatial location where a moving target vanishes has been found to be systematically displaced downward in the direction of gravity. Moreover, it was recently reported that the magnitude of the downward error increases steadily with increasing retention intervals imposed after object’s offset and before observers are allowed to perform the spatial localization task, in a pattern where the remembered vanishing location drifts downward as if following a falling trajectory. This outcome was taken to reflect the dynamics of a representational model of earth’s gravity. The present study aims to establish the spatial and temporal features of this downward drift by taking into account the dynamics of the motor response. The obtained results show that the memory for the last location of the target drifts downward with time, thus replicating previous results. Moreover, the time taken for completion of the behavioural localization movements seems to add to the imposed retention intervals in determining the temporal frame during which the visual memory is updated. Overall, it is reported that the representation of spatial location drifts downward by about 3 pixels for each two-fold increase of time until response. The outcomes are discussed in relation to a predictive internal model of gravity which outputs an on-line spatial update of remembered objects’ location. PMID:26910260
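The reported rate, roughly 3 pixels of additional downward displacement for each two-fold increase of time until response, amounts to a logarithmic drift law. A one-line version of that relation is sketched below; the reference latency t0 at which the drift is taken to be zero is an assumption for illustration, not a value given in the abstract.

```python
import numpy as np

def downward_drift(t_ms, t0_ms=300.0, px_per_doubling=3.0):
    """Downward memory displacement implied by '~3 px per two-fold increase of
    time until response'; t0_ms is an assumed reference latency, not from the paper."""
    return px_per_doubling * np.log2(t_ms / t0_ms)

for t in (300, 600, 1200, 2400):
    print(f"{t} ms -> {downward_drift(t):.1f} px")   # 0, 3, 6, 9 px
```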
Distributed encoding of spatial and object categories in primate hippocampal microcircuits
Opris, Ioan; Santos, Lucas M.; Gerhardt, Greg A.; Song, Dong; Berger, Theodore W.; Hampson, Robert E.; Deadwyler, Sam A.
2015-01-01
The primate hippocampus plays critical roles in the encoding, representation, categorization and retrieval of cognitive information. Such cognitive abilities may use the transformational input-output properties of hippocampal laminar microcircuitry to generate spatial representations and to categorize features of objects, images, and their numeric characteristics. Four nonhuman primates were trained in a delayed-match-to-sample (DMS) task while multi-neuron activity was simultaneously recorded from the CA1 and CA3 hippocampal cell fields. The results show differential encoding of spatial location and categorization of images presented as relevant stimuli in the task. Individual hippocampal cells encoded visual stimuli only on specific types of trials in which retention of either the Sample image or the spatial position of the Sample image, indicated at the beginning of the trial, was required. Consistent with such encoding, it was shown that patterned microstimulation applied during Sample image presentation facilitated selection of either Sample image spatial locations or image types during the Match phase of the task. These findings support the existence of specific codes for spatial and numeric object representations in the primate hippocampus that can be engaged on differentially signaled trials. Moreover, the transformational properties of hippocampal microcircuitry, together with patterned microstimulation, support the practical relevance of this approach for the cognitive enhancement and rehabilitation needed for memory neuroprosthetics. PMID:26500473
Local spatial frequency analysis for computer vision
NASA Technical Reports Server (NTRS)
Krumm, John; Shafer, Steven A.
1990-01-01
A sense of vision is a prerequisite for a robot to function in an unstructured environment. However, real-world scenes contain many interacting phenomena that lead to complex images which are difficult to interpret automatically. Typical computer vision research proceeds by analyzing various effects in isolation (e.g., shading, texture, stereo, defocus), usually on images devoid of realistic complicating factors. This leads to specialized algorithms which fail on real-world images. Part of this failure is due to the dichotomy of useful representations for these phenomena. Some effects are best described in the spatial domain, while others are more naturally expressed in frequency. In order to resolve this dichotomy, we present the combined space/frequency representation which, for each point in an image, shows the spatial frequencies at that point. Within this common representation, we develop a set of simple, natural theories describing phenomena such as texture, shape, aliasing and lens parameters. We show these theories lead to algorithms for shape from texture and for dealiasing image data. The space/frequency representation should be a key aid in untangling the complex interaction of phenomena in images, allowing automatic understanding of real-world scenes.
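A combined space/frequency representation of the kind described is, in its simplest form, a windowed Fourier transform: at each image point, the spectrum of a small neighborhood gives the local spatial frequencies. The sketch below estimates the dominant local frequency along a synthetic chirp texture; it is a generic illustration, not the authors' system.

```python
import numpy as np

def dominant_frequency(signal_patch):
    """Dominant spatial frequency of a 1-D patch, estimated from a
    Hann-windowed FFT, ignoring the DC component."""
    n = len(signal_patch)
    spectrum = np.abs(np.fft.rfft(signal_patch * np.hanning(n)))
    return np.argmax(spectrum[1:]) + 1, n  # (peak bin, patch length); freq = bin / n

# Synthetic 1-D texture whose frequency increases to the right (a chirp)
x = np.linspace(0, 1, 512)
texture = np.sin(2 * np.pi * (5 * x + 60 * x ** 2))

for center in (64, 256, 448):                  # sample the local spectrum at three positions
    patch = texture[center - 32:center + 32]
    k, n = dominant_frequency(patch)
    print(f"position {center}: ~{k / n:.3f} cycles/sample")
```

The estimated frequency rises with position, which is exactly the kind of spatially varying frequency content a pure spatial or pure frequency representation would miss.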
Attention reduces spatial uncertainty in human ventral temporal cortex.
Kay, Kendrick N; Weiner, Kevin S; Grill-Spector, Kalanit
2015-03-02
Ventral temporal cortex (VTC) is the latest stage of the ventral "what" visual pathway, which is thought to code the identity of a stimulus regardless of its position or size [1, 2]. Surprisingly, recent studies show that position information can be decoded from VTC [3-5]. However, the computational mechanisms by which spatial information is encoded in VTC are unknown. Furthermore, how attention influences spatial representations in human VTC is also unknown because the effect of attention on spatial representations has only been examined in the dorsal "where" visual pathway [6-10]. Here, we fill these significant gaps in knowledge using an approach that combines functional magnetic resonance imaging and sophisticated computational methods. We first develop a population receptive field (pRF) model [11, 12] of spatial responses in human VTC. Consisting of spatial summation followed by a compressive nonlinearity, this model accurately predicts responses of individual voxels to stimuli at any position and size, explains how spatial information is encoded, and reveals a functional hierarchy in VTC. We then manipulate attention and use our model to decipher the effects of attention. We find that attention to the stimulus systematically and selectively modulates responses in VTC, but not early visual areas. Locally, attention increases eccentricity, size, and gain of individual pRFs, thereby increasing position tolerance. However, globally, these effects reduce uncertainty regarding stimulus location and actually increase position sensitivity of distributed responses across VTC. These results demonstrate that attention actively shapes and enhances spatial representations in the ventral visual pathway. Copyright © 2015 Elsevier Ltd. All rights reserved.
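The pRF model described, spatial summation followed by a compressive nonlinearity, can be written as r = gain * (sum of the stimulus-weighted 2-D Gaussian) ** n with n < 1. A minimal forward-model sketch follows; the specific parameter values and bar stimuli are illustrative, not fitted values from the study.

```python
import numpy as np

def css_prf_response(stim, xs, ys, x0, y0, sigma, n=0.5, gain=1.0):
    """Compressive spatial summation pRF: r = gain * (sum(stim * gaussian))**n.
    stim is a binary aperture image on the (xs, ys) grid (degrees of visual angle)."""
    gauss = np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2 * sigma ** 2))
    return gain * (np.sum(stim * gauss)) ** n

# 2-deg-wide bar apertures presented at different horizontal positions
xs, ys = np.meshgrid(np.linspace(-10, 10, 101), np.linspace(-10, 10, 101))
for bar_center in (-6, -2, 2, 6):
    stim = (np.abs(xs - bar_center) < 1).astype(float)
    r = css_prf_response(stim, xs, ys, x0=2.0, y0=0.0, sigma=3.0, n=0.5)
    print(f"bar at {bar_center:+d} deg -> predicted response {r:.2f}")
```

The compressive exponent (n < 1) flattens the response across positions and sizes, which is how the model captures the position tolerance discussed in the abstract; increasing eccentricity, size, or gain of such pRFs models the attentional modulations reported here.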
Deconstructing Visual Scenes in Cortex: Gradients of Object and Spatial Layout Information
Kravitz, Dwight J.; Baker, Chris I.
2013-01-01
Real-world visual scenes are complex, cluttered, and heterogeneous stimuli engaging scene- and object-selective cortical regions including parahippocampal place area (PPA), retrosplenial complex (RSC), and lateral occipital complex (LOC). To understand the unique contribution of each region to distributed scene representations, we generated predictions based on a neuroanatomical framework adapted from monkey and tested them using minimal scenes in which we independently manipulated both spatial layout (open, closed, and gradient) and object content (furniture, e.g., bed, dresser). Commensurate with its strong connectivity with posterior parietal cortex, RSC evidenced strong spatial layout information but no object information, and its response was not even modulated by object presence. In contrast, LOC, which lies within the ventral visual pathway, contained strong object information but no background information. Finally, PPA, which is connected with both the dorsal and the ventral visual pathway, showed information about both objects and spatial backgrounds and was sensitive to the presence or absence of either. These results suggest that 1) LOC, PPA, and RSC have distinct representations, emphasizing different aspects of scenes, 2) the specific representations in each region are predictable from their patterns of connectivity, and 3) PPA combines both spatial layout and object information as predicted by connectivity. PMID:22473894
Demir, Özlem Ece; Prado, Jérôme; Booth, James R.
2015-01-01
We examined the relation of parental socioeconomic status (SES) to the neural bases of subtraction in school-age children (9- to 12-year-olds). We independently localized brain regions subserving verbal versus visuo-spatial representations to determine whether the parental SES-related differences in children’s reliance on these neural representations vary as a function of math skill. At higher SES levels, higher skill was associated with greater recruitment of the left temporal cortex, identified by the verbal localizer. At lower SES levels, higher skill was associated with greater recruitment of right parietal cortex, identified by the visuo-spatial localizer. This suggests that depending on parental SES, children engage different neural systems to solve subtraction problems. Furthermore, SES was related to the activation in the left temporal and frontal cortex during the independent verbal localizer task, but it was not related to activation during the independent visuo-spatial localizer task. Differences in activation during the verbal localizer task in turn were related to differences in activation during the subtraction task in right parietal cortex. The relation was stronger at lower SES levels. This result suggests that SES-related differences in the visuo-spatial regions during subtraction might be based in SES-related verbal differences. PMID:25664675
Roth, Zvi N
2016-01-01
Neural responses in visual cortex are governed by a topographic mapping from retinal locations to cortical responses. Moreover, at the voxel population level early visual cortex (EVC) activity enables accurate decoding of stimuli locations. However, in many cases information enabling one to discriminate between locations (i.e., discriminative information) may be less relevant than information regarding the relative location of two objects (i.e., relative information). For example, when planning to grab a cup, determining whether the cup is located at the same retinal location as the hand is hardly relevant, whereas the location of the cup relative to the hand is crucial for performing the action. We have previously used multivariate pattern analysis techniques to measure discriminative location information, and found the highest levels in EVC, in line with other studies. Here we show, using representational similarity analysis, that availability of discriminative information in fMRI activation patterns does not entail availability of relative information. Specifically, we find that relative location information can be reliably extracted from activity patterns in posterior intraparietal sulcus (pIPS), but not from EVC, where we find the spatial representation to be warped. We further show that this variability in relative information levels between regions can be explained by a computational model based on an array of receptive fields. Moreover, when the model's receptive fields are extended to include inhibitory surround regions, the model can account for the spatial warping in EVC. These results demonstrate how size and shape properties of receptive fields in human visual cortex contribute to the transformation of discriminative spatial representations into relative spatial representations along the visual stream.
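The receptive-field-array account can be sketched in one dimension: build population response vectors from an array of receptive fields, with or without an inhibitory surround (difference of Gaussians), and examine how the distance between response patterns grows with physical separation, which is the representational-similarity geometry the abstract analyzes. The field sizes, surround weight, and neuron count below are illustrative assumptions, not the model's fitted parameters.

```python
import numpy as np

rng = np.random.default_rng(3)
centers = rng.uniform(-10, 10, size=(300, 1))      # 1-D array of RF centers

def population(stim_pos, surround=False):
    """Population response to a point stimulus; optional inhibitory surround."""
    d2 = (stim_pos - centers) ** 2
    resp = np.exp(-d2 / (2 * 1.5 ** 2))
    if surround:                                    # difference-of-Gaussians field
        resp = resp - 0.6 * np.exp(-d2 / (2 * 4.0 ** 2))
    return resp.ravel()

def pattern_distance(a, b, surround=False):
    return np.linalg.norm(population(a, surround) - population(b, surround))

# Does pattern distance grow faithfully with physical separation?
for sep in (1, 2, 4, 8):
    plain = pattern_distance(0.0, float(sep), surround=False)
    dog = pattern_distance(0.0, float(sep), surround=True)
    print(f"separation {sep}: plain {plain:.2f}   center-surround {dog:.2f}")
```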
One Spatial Map or Many? Spatial Coding of Connected Environments
ERIC Educational Resources Information Center
Han, Xue; Becker, Suzanna
2014-01-01
We investigated how humans encode large-scale spatial environments using a virtual taxi game. We hypothesized that if 2 connected neighborhoods are explored jointly, people will form a single integrated spatial representation of the town. However, if the neighborhoods are first learned separately and later observed to be connected, people will…
NASA Astrophysics Data System (ADS)
Geng, Guannan; Zhang, Qiang; Martin, Randall V.; Lin, Jintai; Huo, Hong; Zheng, Bo; Wang, Siwen; He, Kebin
2017-03-01
Spatial proxies used in bottom-up emission inventories to derive the spatial distributions of emissions are usually empirical and involve additional levels of uncertainty. Although uncertainties in current emission inventories have been discussed extensively, uncertainties resulting from improper spatial proxies have rarely been evaluated. In this work, we investigate the impact of spatial proxies on the representation of gridded emissions by comparing six gridded NOx emission datasets over China developed from the same emission totals but different spatial proxies. GEOS-Chem-modeled tropospheric NO2 vertical columns simulated from different gridded emission inventories are compared with satellite-based columns. The results show that differences between modeled and satellite-based NO2 vertical columns are sensitive to the spatial proxies used in the gridded emission inventories. The total population density is less suitable for allocating NOx emissions than nighttime light data because population density tends to allocate more emissions to rural areas. Determining the exact locations of large emission sources could significantly strengthen the correlation between modeled and observed NO2 vertical columns. Using vehicle population and an updated road network for the on-road transport sector could substantially enhance urban emissions and improve the model performance. When further applying industrial gross domestic product (IGDP) values for the industrial sector, modeled NO2 vertical columns could better capture pollution hotspots in urban areas and exhibit the best performance of the six cases compared to satellite-based NO2 vertical columns (slope = 1.01 and R² = 0.85). This analysis provides a framework for using satellite observations to inform bottom-up inventory development. In the future, more effort should be devoted to the representation of spatial proxies to improve spatial patterns in bottom-up emission inventories.
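Spatial proxies work by distributing a regional emission total across grid cells in proportion to an ancillary field (population, nighttime lights, road density, IGDP). A minimal version of that disaggregation step is shown below; the grid, the proxy values, and the regional total are made up for illustration.

```python
import numpy as np

def allocate_emissions(regional_total, proxy):
    """Distribute a regional emission total over grid cells in proportion
    to a non-negative spatial proxy (e.g., nighttime lights or IGDP)."""
    proxy = np.asarray(proxy, float)
    weights = proxy / proxy.sum()
    return regional_total * weights

# Hypothetical 1x5 strip of grid cells: urban core, suburbs, rural cells
population      = np.array([900, 400, 300, 250, 150])
nighttime_light = np.array([80,  35,  10,   3,   1])
total_nox = 120.0  # Gg/yr for the region (made-up number)
print("population proxy :", np.round(allocate_emissions(total_nox, population), 1))
print("night-light proxy:", np.round(allocate_emissions(total_nox, nighttime_light), 1))
# Relative to the night-light proxy, the population proxy shifts more of the total
# into the rural cells, illustrating the bias discussed in the abstract.
```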
ERIC Educational Resources Information Center
Sarno, Emilia
2012-01-01
This contribution explains the connection between spatial intelligence and spatial competences by indicating how the former is the cognitive matrix of abilities necessary to move in space as well as to represent it. Two principal factors are involved in spatial intelligence: orientation and representation. Both are based on a close…
Kühn, S; Gleich, T; Lorenz, R C; Lindenberger, U; Gallinat, J
2014-02-01
Video gaming is a highly pervasive activity, imposing a multitude of complex cognitive and motor demands. Gaming can be seen as an intense training of several skills. The cerebral structural plasticity induced by such training has not been investigated so far. Comparing a control group with a video gaming training group that was trained for 2 months for at least 30 min per day with a platformer game, we found a significant gray matter (GM) increase in right hippocampal formation (HC), right dorsolateral prefrontal cortex (DLPFC) and bilateral cerebellum in the training group. The HC increase correlated with changes from egocentric to allocentric navigation strategy. GM increases in HC and DLPFC correlated with participants' desire for video gaming, evidence suggesting a predictive role of desire in volume change. Video game training augments GM in brain areas crucial for spatial navigation, strategic planning, working memory and motor performance, along with evidence for behavioral changes in navigation strategy. The presented video game training could therefore be used to counteract known risk factors for mental disease such as smaller hippocampus and prefrontal cortex volume in, for example, post-traumatic stress disorder, schizophrenia and neurodegenerative disease.
Toward a visuospatial developmental account of sequence-space synesthesia
Price, Mark C.; Pearson, David G.
2013-01-01
Sequence-space synesthetes experience some sequences (e.g., numbers, calendar units) as arranged in spatial forms, i.e., spatial patterns in their mind's eye or even outside their body. Various explanations have been offered for this phenomenon. Here we argue that these spatial forms are continuous with varieties of non-synesthetic visuospatial imagery and share their central characteristics. This includes their dynamic and elaborative nature, their involuntary feel, and consistency over time. Drawing from literatures on mental imagery and working memory, we suggest how the initial acquisition and subsequent elaboration of spatial forms could be accounted for in terms of the known developmental trajectory of visuospatial representations. This extends from the formation of image-based representations of verbal material in childhood to the later maturation of dynamic control of imagery. Individual differences in the development of visuospatial style also account for variation in the character of spatial forms, e.g., in terms of distinctions such as visual versus spatial imagery, or ego-centric versus object-based transformations. PMID:24187538
Tse, Chi-Shing; Kurby, Christopher A.; Du, Feng
2010-01-01
We examined the effect of spatial iconicity (a perceptual simulation of canonical locations of objects) and word-order frequency on language processing and episodic memory of orientation. Participants made speeded relatedness judgments to pairs of words presented in locations typical to their real world arrangements (e.g., ceiling on top and floor on bottom). They then engaged in a surprise orientation recognition task for the word pairs. We replicated Louwerse’s finding (2008) that word-order frequency has a stronger effect on semantic relatedness judgments than spatial iconicity. This is consistent with recent suggestions that linguistic representations have a stronger impact on immediate decisions about verbal materials than perceptual simulations. In contrast, spatial iconicity enhanced episodic memory of orientation to a greater extent than word-order frequency did. This new finding indicates that perceptual simulations have an important role in episodic memory. Results are discussed with respect to theories of perceptual representation and linguistic processing. PMID:19742388
Combining Multiple Forms Of Visual Information To Specify Contact Relations In Spatial Layout
NASA Astrophysics Data System (ADS)
Sedgwick, Hal A.
1990-03-01
An expert system, called Layout2, has been described, which models a subset of available visual information for spatial layout. The system is used to examine detailed interactions between multiple, partially redundant forms of information in an environment-centered geometrical model of an environment obeying certain rather general constraints. This paper discusses the extension of Layout2 to include generalized contact relations between surfaces. In an environment-centered model, the representation of viewer-centered distance is replaced by the representation of environmental location. This location information is propagated through the representation of the environment by a network of contact relations between contiguous surfaces. Perspective information interacts with other forms of information to specify these contact relations. The experimental study of human perception of contact relations in extended spatial layouts is also discussed. Differences between human results and Layout2 results reveal limitations in the human ability to register available information; they also point to the existence of certain forms of information not yet formalized in Layout2.
Sutton, Jennifer E; Buset, Melanie; Keller, Mikayla
2014-01-01
A number of careers involve tasks that place demands on spatial cognition, but it is still unclear how and whether skills acquired in such applied experiences transfer to other spatial tasks. The current study investigated the association between pilot training and the ability to form a mental survey representation, or cognitive map, of a novel, ground-based, virtual environment. Undergraduate students who were engaged in general aviation pilot training and controls matched to the pilots on gender and video game usage freely explored a virtual town. Subsequently, participants performed a direction estimation task that tested the accuracy of their cognitive map representation of the town. In addition, participants completed the Object Perspective Test and rated their spatial abilities. Pilots were significantly more accurate than controls at estimating directions but did not differ from controls on the Object Perspective Test. Locations in the town were visited at a similar rate by the two groups, indicating that controls' relatively lower accuracy was not due to failure to fully explore the town. Pilots' superior performance is likely due to better online cognitive processing during exploration, suggesting the spatial updating they engage in during flight transfers to a non-aviation context.
An analysis of spatial representativeness of air temperature monitoring stations
NASA Astrophysics Data System (ADS)
Liu, Suhua; Su, Hongbo; Tian, Jing; Wang, Weizhen
2018-05-01
Surface air temperature is an essential variable for monitoring the atmosphere, and it is generally acquired at meteorological stations that can provide information about only a small area within a radius of r meters (the r-neighborhood) of the station; this r is called the representable radius. In studies on a local scale, ground-based observations of surface air temperatures obtained from scattered stations are usually interpolated using a variety of methods without ascertaining their effectiveness. Thus, it is necessary to evaluate the spatial representativeness of ground-based observations of surface air temperature before conducting studies on a local scale. The present study used remote sensing data to estimate the spatial distribution of surface air temperature using the advection-energy balance for air temperature (ADEBAT) model. Two target stations in the study area were selected to conduct an analysis of spatial representativeness. The results showed that one station (AWS 7) had a representable radius of about 400 m with a possible error of less than 1 K, while the other station (AWS 16) had a representable radius of about 250 m.
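Given a gridded air-temperature field, the representable radius can be estimated as the largest radius around the station within which all grid cells differ from the station cell by less than a tolerance (1 K in the study). The sketch below implements that definition generically; the temperature field, grid spacing, and station location are synthetic, not the study's ADEBAT output.

```python
import numpy as np

def representable_radius(temp_grid, station_rc, cell_size_m, tol_k=1.0):
    """Largest radius (m) for which all grid cells within that radius differ
    from the station cell by less than tol_k kelvin."""
    rows, cols = np.indices(temp_grid.shape)
    dist = np.hypot(rows - station_rc[0], cols - station_rc[1]) * cell_size_m
    diff = np.abs(temp_grid - temp_grid[station_rc])
    bad = dist[diff >= tol_k]
    return dist.max() if bad.size == 0 else bad.min()

# Synthetic 30 m resolution field with a gentle east-west gradient (~1 K per km)
x = np.linspace(0, 3, 100)
temp = 295.0 + np.tile(x, (100, 1)) + np.random.default_rng(4).normal(0, 0.05, (100, 100))
print(f"representable radius ~ {representable_radius(temp, (50, 50), 30.0):.0f} m")
```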
Towler, John; Kelly, Maria; Eimer, Martin
2016-06-01
The capacity of visual working memory for faces is extremely limited, but the reasons for these limitations remain unknown. We employed event-related brain potential measures to demonstrate that individual faces have to be focally attended in order to be maintained in working memory, and that attention is allocated to only a single face at a time. When 2 faces have to be memorized simultaneously in a face identity-matching task, the focus of spatial attention during encoding predicts which of these faces can be successfully maintained in working memory and matched to a subsequent test face. We also show that memory representations of attended faces are maintained in a position-dependent fashion. These findings demonstrate that the limited capacity of face memory is directly linked to capacity limits of spatial attention during the encoding and maintenance of individual face representations. We suggest that the capacity and distribution of selective spatial attention is a dynamic resource that constrains the capacity and fidelity of working memory for faces. © The Author 2015. Published by Oxford University Press. All rights reserved.
Guerra, Ernesto; Knoeferle, Pia
2018-01-01
Existing evidence has shown a processing advantage (or facilitation) when representations derived from a non-linguistic context (spatial proximity depicted by gambling cards moving together) match the semantic content of an ensuing sentence. A match, inspired by conceptual metaphors such as 'similarity is closeness', would, for instance, involve cards moving closer together while the ensuing sentence relates similarity between abstract concepts such as 'war' and 'battle'. However, other studies have reported a disadvantage (or interference) for congruence between the semantic content of a sentence and representations of spatial distance derived from this sort of non-linguistic context. In the present article, we investigate the cognitive mechanisms underlying the interaction between the representations of spatial distance and sentence processing. In two eye-tracking experiments, we tested the predictions of a mechanism that considers the competition, activation, and decay of visually and linguistically derived representations as key aspects in determining the qualitative pattern and time course of that interaction. Critical trials presented two playing cards, each showing a written abstract noun; the cards turned around, obscuring the nouns, and moved either farther apart or closer together. Participants then read a sentence expressing either semantic similarity or difference between these two nouns. When instructed to attend to the nouns on the cards (Experiment 1), participants' total reading times revealed interference between spatial distance (e.g., closeness) and semantic relations (similarity) as soon as the sentence explicitly conveyed similarity. But when instructed to attend to the cards (Experiment 2), cards approaching (vs. moving apart) elicited first interference (when similarity was implicit) and then facilitation (when similarity was made explicit) during sentence reading. We discuss these findings in the context of a competition mechanism of interference and facilitation effects.
ERIC Educational Resources Information Center
Notebaert, Wim; Gevers, Wim; Verguts, Tom; Fias, Wim
2006-01-01
In 4 experiments, the authors investigated the reversal of spatial congruency effects when participants concurrently practiced incompatible mapping rules (J. G. Marble & R. W. Proctor, 2000). The authors observed an effect of an explicit spatially incompatible mapping rule on the way numerical information was associated with spatial responses. The…
Monocular Patching May Induce Ipsilateral “Where” Spatial Bias
Chen, Peii; Erdahl, Lillian; Barrett, Anna M.
2009-01-01
Spatial bias is an asymmetry of perception and/or representation of spatial information (“where” bias) or of spatially directed actions (“aiming” bias). A monocular patch may induce contralateral “where” spatial bias (the Sprague effect; Sprague (1966) Science, 153, 1544–1547). However, an ipsilateral patch-induced spatial bias may be observed if visual occlusion results in top-down, compensatory re-allocation of spatial perceptual or representational resources toward the region of visual deprivation. Tactile distraction from a monocular patch may also contribute to an ipsilateral bias. To examine these hypotheses, neurologically normal adults bisected horizontal lines at baseline without a patch, while wearing a monocular patch, and while wearing tactile-only and visual-only monocular occlusion. We fractionated “where” and “aiming” spatial bias components using a video apparatus to reverse visual feedback for half of the test trials. The results support monocular patch-induced ipsilateral “where” spatial errors, which are not consistent with the Sprague effect. Further, the present findings suggest that the ipsilateral bias may be driven primarily by visual deprivation, consistent with compensatory “where” resource re-allocation. PMID:19100274
NASA Astrophysics Data System (ADS)
Boschetti, Fabio; Thouret, Valerie; Nedelec, Philippe; Chen, Huilin; Gerbig, Christoph
2015-04-01
Airborne platforms have their main strength in the ability to collect mixing ratio and meteorological data at different heights across a vertical profile, allowing an insight into the internal structure of the atmosphere. However, rental airborne platforms are usually expensive, limiting the number of flights that can be afforded and hence the amount of data that can be collected. To avoid this disadvantage, the MOZAIC/IAGOS (Measurements of Ozone and water vapor by Airbus In-service airCraft/In-service Aircraft for a Global Observing System) program makes use of commercial airliners, providing data on a regular basis. It is therefore considered an important tool in atmospheric investigations. However, due to the nature of said platforms, MOZAIC/IAGOS's profiles are located near international airports, which are usually significant emission sources, and are in most cases close to major urban settlements, characterized by higher anthropogenic emissions compared to rural areas. When running transport models at finite resolution, these local emissions can heavily affect measurements, resulting in biases in model/observation mismatch. Model/observation mismatch can include different aspects in both horizontal and vertical direction, for example spatial and temporal resolution of the modeled fluxes, or poorly represented convective transport or turbulent mixing in the boundary layer. In the framework of the IGAS (IAGOS for GMES Atmospheric Service) project, whose aim is to improve connections between data collected by MOZAIC/IAGOS and the Copernicus Atmospheric Service, the present study is focused on the effect of the spatial resolution of emission fluxes, referred to here as representation error. To investigate this, the Lagrangian transport model STILT (Stochastic Time Inverted Lagrangian Transport) was coupled with the EDGAR (Emission Database for Global Atmospheric Research) version-4.3 emission inventory at the European regional scale. EDGAR's simulated fluxes for CO, CO2 and CH4, with a spatial resolution of 10x10 km for the time frame 2006-2011, were aggregated into progressively coarser grid cells in order to evaluate the representation error at different spatial scales. The dependence of the representation error on wind direction and month of the year was evaluated for different locations in the European domain, for both the random and the bias component. The representation error was then validated against the model-data mismatch derived from the comparison of MACC (Monitoring Atmospheric Composition and Climate) reanalysis with IAGOS observations for CO to investigate its suitability for modeling applications. We found that the random and bias components of the representation error show a similar pattern dependent on wind direction. In addition, we found a clear linear relationship between the representation error and the model-data mismatch for both (random and bias) components, indicating that about 50% of the model-data mismatch is related to the representation error. This suggests that the representation error derived using STILT provides useful information for better understanding the causes of model-data mismatch.
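To make the aggregation step concrete, the sketch below block-averages a fine emission grid to coarser resolutions and uses the difference between the fine-grid and coarse-grid flux at a receptor cell as a crude stand-in for the representation error. It is an illustration only, not STILT or EDGAR code; grid values and names are synthetic assumptions.

```python
# Illustrative sketch of the aggregation idea only (not STILT or EDGAR code):
# block-average a fine-resolution emission grid to coarser cells and compare
# the flux seen at a receptor cell on the two grids. Names are hypothetical.
import numpy as np

def coarsen(flux, factor):
    """Block-average a 2-D flux grid by an integer factor (shape must divide evenly)."""
    ny, nx = flux.shape
    return flux.reshape(ny // factor, factor, nx // factor, factor).mean(axis=(1, 3))

def representation_error(flux, receptor_rc, factor):
    """Fine-grid flux at the receptor cell minus the coarse-grid flux covering it."""
    coarse = coarsen(flux, factor)
    r, c = receptor_rc
    return flux[r, c] - coarse[r // factor, c // factor]

# Synthetic example: a mostly rural grid with one strong urban source near the receptor.
rng = np.random.default_rng(0)
flux = rng.lognormal(mean=0.0, sigma=0.3, size=(120, 120))
flux[60, 60] += 50.0                      # airport / urban hotspot
for factor in (2, 4, 8):                  # 20, 40, 80 km if the fine grid is 10 km
    print(factor, representation_error(flux, receptor_rc=(60, 60), factor=factor))
```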
Changing Race Relations in Organizations: A Comparison of Theories.
1985-03-01
collective term, is used to characterize individuals whose behavior is strongly influenced by how it will affect others. In contrast, idiocentric is the...term for individuals who give more weight to how their behavior will affect themselves rather than others. Triandis (1983) refers to an allocentric...and applying them to the affect , cognitions, and behavior of investigators as well as of respondents. It means bringing organization theory to the
Modeling Mental Spatial Reasoning about Cardinal Directions
ERIC Educational Resources Information Center
Schultheis, Holger; Bertel, Sven; Barkowsky, Thomas
2014-01-01
This article presents research into human mental spatial reasoning with orientation knowledge. In particular, we look at reasoning problems about cardinal directions that possess multiple valid solutions (i.e., are spatially underdetermined), at human preferences for some of these solutions, and at representational and procedural factors that lead…
Making Space for Spatial Proportions
ERIC Educational Resources Information Center
Matthews, Percival G.; Hubbard, Edward M.
2017-01-01
The three target articles presented in this special issue converged on an emerging theme: the importance of spatial proportional reasoning. They suggest that the ability to map between symbolic fractions (like 1/5) and nonsymbolic, spatial representations of their sizes or "magnitudes" may be especially important for building robust…
ERIC Educational Resources Information Center
Noordzij, Matthijs L.; Zuidhoek, Sander; Postma, Albert
2006-01-01
The purpose of the present study is twofold: the first objective is to evaluate the importance of visual experience for the ability to form a spatial representation (spatial mental model) of fairly elaborate spatial descriptions. Secondly, we examine whether blind people exhibit the same preferences (i.e. level of performance on spatial tasks) as…
Yang, Su; Shi, Shixiong; Hu, Xiaobing; Wang, Minjie
2015-01-01
Spatial-temporal correlations among the data play an important role in traffic flow prediction. Correspondingly, traffic modeling and prediction based on big data analytics emerges due to the city-scale interactions among traffic flows. A new methodology based on sparse representation is proposed to reveal the spatial-temporal dependencies among traffic flows so as to simplify the correlations among traffic data for the prediction task at a given sensor. Three important findings are observed in the experiments: (1) Only traffic flows immediately prior to the present time affect the formation of current traffic flows, which implies the possibility to reduce the traditional high-order predictors into an 1-order model. (2) The spatial context relevant to a given prediction task is more complex than what is assumed to exist locally and can spread out to the whole city. (3) The spatial context varies with the target sensor undergoing prediction and enlarges with the increment of time lag for prediction. Because the scope of human mobility is subject to travel time, identifying the varying spatial context against time lag is crucial for prediction. Since sparse representation can capture the varying spatial context to adapt to the prediction task, it outperforms the traditional methods the inputs of which are confined as the data from a fixed number of nearby sensors. As the spatial-temporal context for any prediction task is fully detected from the traffic data in an automated manner, where no additional information regarding network topology is needed, it has good scalability to be applicable to large-scale networks. PMID:26496370
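The sparse spatial-context idea can be illustrated with an off-the-shelf sparse linear model: predict the flow at one target sensor from the previous-time-step readings of every sensor and let the sparsity pattern select the relevant context. The sketch below uses scikit-learn's Lasso as a stand-in for the paper's sparse-representation solver, on synthetic data with hypothetical names.

```python
# Sketch of the general idea, with scikit-learn's Lasso standing in for the paper's
# sparse-representation solver: predict flow at one target sensor from the
# previous-time-step readings of all sensors (a 1st-order predictor), letting the
# sparsity pattern pick out the relevant city-wide spatial context.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n_steps, n_sensors, target = 500, 40, 7

# Synthetic flows: the target sensor at time t depends on a few sensors at t-1.
flows = rng.normal(size=(n_steps, n_sensors))
flows[1:, target] = (0.6 * flows[:-1, 3] + 0.3 * flows[:-1, 25]
                     + 0.1 * rng.normal(size=n_steps - 1))

X = flows[:-1, :]          # all sensors at time t-1
y = flows[1:, target]      # target sensor at time t

model = Lasso(alpha=0.05).fit(X, y)
context = np.flatnonzero(model.coef_)      # sensors retained by the sparse solution
print("selected spatial context:", context)
```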
Simultaneous Spectral-Spatial Feature Selection and Extraction for Hyperspectral Images.
Zhang, Lefei; Zhang, Qian; Du, Bo; Huang, Xin; Tang, Yuan Yan; Tao, Dacheng
2018-01-01
In hyperspectral remote sensing data mining, it is important to take into account both spectral and spatial information, such as the spectral signature, texture features, and morphological properties, to improve performance, e.g., image classification accuracy. From a feature representation point of view, a natural approach to handle this situation is to concatenate the spectral and spatial features into a single high-dimensional vector and then apply a dimension reduction technique directly to that concatenated vector before feeding it into the subsequent classifier. However, multiple features from various domains have different physical meanings and statistical properties, so such concatenation does not efficiently exploit the complementary properties among the different features, which would otherwise help boost feature discriminability. Furthermore, it is also difficult to interpret the transformed results of the concatenated vector. Consequently, finding a physically meaningful consensus low-dimensional representation of the original multiple features is still a challenging task. To address these issues, we propose a novel feature learning framework, the simultaneous spectral-spatial feature selection and extraction algorithm, for spectral-spatial feature representation and classification of hyperspectral images. Specifically, the proposed method learns a latent low-dimensional subspace by projecting the spectral-spatial features into a common feature space in which the complementary information is effectively exploited and, simultaneously, only the most significant original features are transformed. Encouraging experimental results on three publicly available hyperspectral remote sensing datasets confirm that the proposed method is effective and efficient.
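For contrast, the naive baseline the abstract argues against can be sketched in a few lines: stack the spectral and spatial features into one high-dimensional vector, reduce it with PCA, and classify. This is not the proposed simultaneous selection-and-extraction algorithm; the feature arrays below are synthetic placeholders.

```python
# Sketch of the naive baseline the abstract argues against (not the proposed
# algorithm): concatenate spectral and spatial features, reduce the stacked
# vector with PCA, and classify. Feature arrays are synthetic placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_pixels = 1000
spectral = rng.normal(size=(n_pixels, 200))   # e.g., 200 spectral bands
spatial = rng.normal(size=(n_pixels, 60))     # e.g., texture / morphological features
labels = rng.integers(0, 5, size=n_pixels)

stacked = np.hstack([spectral, spatial])      # single high-dimensional vector per pixel
baseline = make_pipeline(PCA(n_components=30), SVC())
baseline.fit(stacked, labels)
print(baseline.score(stacked, labels))        # training accuracy of the baseline
```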
Some practicable applications of quadtree data structures/representation in astronomy
NASA Technical Reports Server (NTRS)
Pasztor, L.
1992-01-01
Development of quadtree as hierarchical data structuring technique for representing spatial data (like points, regions, surfaces, lines, curves, volumes, etc.) has been motivated to a large extent by storage requirements of images, maps, and other multidimensional (spatially structured) data. For many spatial algorithms, time-efficiency of quadtrees in terms of execution may be as important as their space-efficiency concerning storage conditions. Briefly, the quadtree is a class of hierarchical data structures which is based on the recursive partition of a square region into quadrants and sub-quadrants until a predefined limit. Beyond the wide applicability of quadtrees in image processing, spatial information analysis, and building digital databases (processes becoming ordinary for the astronomical community), there may be numerous further applications in astronomy. Some of these practicable applications based on quadtree representation of astronomical data are presented and suggested for further considerations. Examples are shown for use of point as well as region quadtrees. Statistics of different leaf and non-leaf nodes (homogeneous and heterogeneous sub-quadrants respectively) at different levels may provide useful information on spatial structure of astronomical data in question. By altering the principle guiding the decomposition process, different types of spatial data may be focused on. Finally, a sampling method based on quadtree representation of an image is proposed which may prove to be efficient in the elaboration of sampling strategy in a region where observations were carried out previously either with different resolution or/and in different bands.
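A minimal region-quadtree decomposition of the kind described here can be sketched as follows: a square image is split recursively into quadrants until each block is homogeneous within a tolerance, and leaf counts per level give the node statistics the abstract mentions. The homogeneity criterion and names are illustrative assumptions, not taken from the paper.

```python
# Minimal region-quadtree sketch: recursively split a square image into quadrants
# until each block is homogeneous within a tolerance, then count leaves per level.
import numpy as np

def build_quadtree(img, tol=0.0, level=0, leaves=None):
    """Return a list of (level, row, col, size) leaf blocks of a square 2**n image."""
    if leaves is None:
        leaves = []
    def split(r, c, size, level):
        block = img[r:r + size, c:c + size]
        if size == 1 or block.max() - block.min() <= tol:   # homogeneous -> leaf
            leaves.append((level, r, c, size))
            return
        half = size // 2
        for dr in (0, half):                                  # recurse into 4 quadrants
            for dc in (0, half):
                split(r + dr, c + dc, half, level + 1)
    split(0, 0, img.shape[0], level)
    return leaves

# Example: a 16x16 "sky image" that is empty except for one bright 4x4 source.
img = np.zeros((16, 16))
img[4:8, 4:8] = 1.0
leaves = build_quadtree(img)
levels, counts = np.unique([lvl for lvl, *_ in leaves], return_counts=True)
print(dict(zip(levels.tolist(), counts.tolist())))            # leaf count per level
```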
Galati, Alexia; Avraamides, Marios N.
2013-01-01
Research on spatial perspective-taking often focuses on the cognitive processes of isolated individuals as they adopt or maintain imagined perspectives. Collaborative studies of spatial perspective-taking typically examine speakers' linguistic choices, while overlooking their underlying processes and representations. We review evidence from two collaborative experiments that examine the contribution of social and representational cues to spatial perspective choices in both language and the organization of spatial memory. Across experiments, speakers organized their memory representations according to the convergence of various cues. When layouts were randomly configured and did not afford intrinsic cues, speakers encoded their partner's viewpoint in memory, if available, but did not use it as an organizing direction. On the other hand, when the layout afforded an intrinsic structure, speakers organized their spatial memories according to the person-centered perspective reinforced by the layout's structure. Similarly, in descriptions, speakers considered multiple cues whether available a priori or at the interaction. They used partner-centered expressions more frequently (e.g., “to your right”) when the partner's viewpoint was misaligned by a small offset or coincided with the layout's structure. Conversely, they used egocentric expressions more frequently when their own viewpoint coincided with the intrinsic structure or when the partner was misaligned by a computationally difficult, oblique offset. Based on these findings we advocate for a framework for flexible perspective-taking: people weigh multiple cues (including social ones) to make attributions about the relative difficulty of perspective-taking for each partner, and adapt behavior to minimize their collective effort. This framework is not specialized for spatial reasoning but instead emerges from the same principles and memory-dependent processes that govern perspective-taking in non-spatial tasks. PMID:24133432
Teacher Spatial Skills Are Linked to Differences in Geometry Instruction
ERIC Educational Resources Information Center
Otumfuor, Beryl Ann; Carr, Martha
2017-01-01
Background: Spatial skills have been linked to better performance in mathematics. Aim The purpose of this study was to examine the relationship between teacher spatial skills and their instruction, including teacher content and pedagogical knowledge, use of pictorial representations, and use of gestures during geometry instruction. Sample:…
Spatial allocation of forest recreation value
Kenneth A. Baerenklau; Armando Gonzalez-Caban; Catrina Paez; Edgard Chavez
2009-01-01
Non-market valuation methods and geographic information systems are useful planning and management tools for public land managers. Recent attention has been given to investigation and demonstration of methods for combining these tools to provide spatially-explicit representations of non-market value. Most of these efforts have focused on spatial allocation of...
A Principal Components Analysis of Dynamic Spatial Memory Biases
ERIC Educational Resources Information Center
Motes, Michael A.; Hubbard, Timothy L.; Courtney, Jon R.; Rypma, Bart
2008-01-01
Research has shown that spatial memory for moving targets is often biased in the direction of implied momentum and implied gravity, suggesting that representations of the subjective experiences of these physical principles contribute to such biases. The present study examined the association between these spatial memory biases. Observers viewed…
Development of Working Memory for Verbal-Spatial Associations
ERIC Educational Resources Information Center
Cowan, Nelson; Saults, J. Scott; Morey, Candice C.
2006-01-01
Verbal-to-spatial associations in working memory may index a core capacity for abstract information limited in the amount concurrently retained. However, what look like associative, abstract representations could instead reflect verbal and spatial codes held separately and then used in parallel. We investigated this issue in two experiments on…
Benchmark solutions for the galactic heavy-ion transport equations with energy and spatial coupling
NASA Technical Reports Server (NTRS)
Ganapol, Barry D.; Townsend, Lawrence W.; Lamkin, Stanley L.; Wilson, John W.
1991-01-01
Nontrivial benchmark solutions are developed for the galactic heavy ion transport equations in the straight-ahead approximation with energy and spatial coupling. Analytical representations of the ion fluxes are obtained for a variety of sources with the assumption that the nuclear interaction parameters are energy independent. The method utilizes an analytical Laplace transform inversion to yield a closed-form representation that is computationally efficient. The flux profiles are then used to predict ion dose profiles, which are important for shield design studies.
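The abstract does not write out the transport equations it benchmarks; for orientation, the straight-ahead, continuous-slowing-down form commonly used in this literature is sketched below. The notation is assumed and may differ from the paper's.

```latex
% Straight-ahead, continuous-slowing-down form of the heavy-ion transport equation
% as commonly written in this literature (notation assumed, not copied from the
% paper): \phi_j = flux of ion species j, S_j = stopping power, \sigma_j = total
% macroscopic cross section, \sigma_{jk} = production of species j from species k.
\left[\frac{\partial}{\partial x}
      - \frac{\partial}{\partial E}\,S_j(E)
      + \sigma_j\right]\phi_j(x,E)
  = \sum_{k>j}\sigma_{jk}\,\phi_k(x,E)
```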
Number games, magnitude representation, and basic number skills in preschoolers.
Whyte, Jemma Catherine; Bull, Rebecca
2008-03-01
The effect of 3 intervention board games (linear number, linear color, and nonlinear number) on young children's (mean age = 3.8 years) counting abilities, number naming, magnitude comprehension, accuracy in number-to-position estimation tasks, and best-fit numerical magnitude representations was examined. Pre- and posttest performance was compared following four 25-min intervention sessions. The linear number board game significantly improved children's performance in all posttest measures and facilitated a shift from a logarithmic to a linear representation of numerical magnitude, emphasizing the importance of spatial cues in estimation. Exposure to the number card games involving nonsymbolic magnitude judgments and association of symbolic and nonsymbolic quantities, but without any linear spatial cues, improved some aspects of children's basic number skills but not numerical estimation precision.
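The logarithmic-to-linear shift reported here is usually quantified by fitting both a linear and a logarithmic function to each child's number-to-position estimates and comparing fit quality; a hedged sketch with synthetic data follows (not the study's analysis code).

```python
# Sketch of how a shift from logarithmic to linear magnitude representation is
# typically quantified in number-to-position tasks: fit linear and logarithmic
# curves to the estimates and compare variance explained. Data are synthetic.
import numpy as np

numbers = np.arange(1, 11)                      # 1-10 number line
estimates = 10 * np.log(numbers) / np.log(10)   # a "logarithmic" responder

def r_squared(y, y_hat):
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

lin_fit = np.polyval(np.polyfit(numbers, estimates, 1), numbers)
log_fit = np.polyval(np.polyfit(np.log(numbers), estimates, 1), np.log(numbers))
print("linear R^2:", round(r_squared(estimates, lin_fit), 3),
      "log R^2:", round(r_squared(estimates, log_fit), 3))
```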
Is the Classroom Obsolete in the Twenty-First Century?
ERIC Educational Resources Information Center
Benade, Leon
2017-01-01
Lefebvre's triadic conception of "spatial practice, representations of space and representational spaces" provides the theoretical framework of this article, which recognises a productive relationship between space and social relations. Its writing stems from a current and ongoing qualitative study of innovative teaching and learning…
Mapping Children--Mapping Space.
ERIC Educational Resources Information Center
Pick, Herbert L., Jr.
Research is underway concerning the way the perception, conception, and representation of spatial layout develops. Three concepts are important here--space itself, frame of reference, and cognitive map. Cognitive map refers to a form of representation of the behavioral space, not paired associate or serial response learning. Other criteria…
Update on "What" and "Where" in Spatial Language: A New Division of Labor for Spatial Terms.
Landau, Barbara
2017-03-01
In this article, I revisit Landau and Jackendoff's () paper, "What and where in spatial language and spatial cognition," proposing a friendly amendment and reformulation. The original paper emphasized the distinct geometries that are engaged when objects are represented as members of object kinds (named by count nouns), versus when they are represented as figure and ground in spatial expressions (i.e., play the role of arguments of spatial prepositions). We provided empirical and theoretical arguments for the link between these distinct representations in spatial language and their accompanying nonlinguistic neural representations, emphasizing the "what" and "where" systems of the visual system. In the present paper, I propose a second division of labor between two classes of spatial prepositions in English that appear to be quite distinct. One class includes prepositions such as in and on, whose core meanings engage force-dynamic, functional relationships between objects, with geometry only a marginal player. The second class includes prepositions such as above/below and right/left, whose core meanings engage geometry, with force-dynamic relationships a passing or irrelevant variable. The insight that objects' force-dynamic relationships matter to spatial terms' uses is not new; but thinking of these terms as a distinct set within spatial language has theoretical and empirical consequences that are new. I propose three such consequences, rooted in the fact that geometric knowledge is highly constrained and early-emerging in life, while force-dynamic knowledge of objects and their interactions is relatively unconstrained and needs to be learned piecemeal over a lengthy timeline. First, the two classes will engage different learning problems, with different developmental trajectories for both first and second language learners; second, the classes will naturally lead to different degrees of cross-linguistic variation; and third, they may be rooted in different neural representations. Copyright © 2016 Cognitive Science Society, Inc.
NASA Astrophysics Data System (ADS)
Drap, P.; Papini, O.; Pruno, E.; Nucciotti, M.; Vannini, G.
2017-02-01
The paper presents some reflections concerning an interdisciplinary project between Medieval Archaeologists from the University of Florence (Italy) and ICT researchers from CNRS LSIS of Marseille (France), aiming towards a connection between 3D spatial representation and archaeological knowledge. It is well known that Laser Scanner, Photogrammetry and Computer Vision are very attractive tools for archaeologists, although the integration of representation of space and representation of archaeological time has not yet found a methodological standard of reference. We try to develop an integrated system for archaeological 3D survey and all other types of archaeological data and knowledge through integrating observable (material) and non-graphic (interpretive) data. Survey plays a central role, since it is both a metric representation of the archaeological site and, to a wider extent, an interpretation of it (being also a common basis for communication between the 2 teams). More specifically, 3D survey is crucial, allowing archaeologists to connect actual spatial assets to the stratigraphic formation processes (i.e. to the archaeological time) and to translate spatial observations into historical interpretation of the site. We propose a common formalism for describing photogrammetrical survey and archaeological knowledge stemming from ontologies: Indeed, ontologies are fully used to model and store 3D data and archaeological knowledge. We equip this formalism with a qualitative representation of time. Stratigraphic analyses (both of excavated deposits and of upstanding structures) are closely related to E. C. Harris's theory of "Stratigraphic Unit" ("US" from now on). Every US is connected to the others by geometric, topological and, eventually, temporal links, and is recorded by the 3D photogrammetric survey. However, the limitations of the Harris Matrix approach lead us to use another representation formalism for stratigraphic relationships, namely Qualitative Constraint Networks (QCN), successfully used in the domain of knowledge representation and reasoning in artificial intelligence for representing temporal relations.
Diffeomorphism Group Representations in Relativistic Quantum Field Theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldin, Gerald A.; Sharp, David H.
We explore the role played by the diffeomorphism group and its unitary representations in relativistic quantum field theory. From the quantum kinematics of particles described by representations of the diffeomorphism group of a space-like surface in an inertial reference frame, we reconstruct the local relativistic neutral scalar field in the Fock representation. An explicit expression for the free Hamiltonian is obtained in terms of the Lie algebra generators (mass and momentum densities). We suggest that this approach can be generalized to fields whose quanta are spatially extended objects.
DNA methylation regulates neurophysiological spatial representation in memory formation
Roth, Eric D.; Roth, Tania L.; Money, Kelli M.; SenGupta, Sonda; Eason, Dawn E.; Sweatt, J. David
2015-01-01
Epigenetic mechanisms including altered DNA methylation are critical for altered gene transcription subserving synaptic plasticity and the retention of learned behavior. Here we tested the idea that one role for activity-dependent altered DNA methylation is stabilization of cognition-associated hippocampal place cell firing in response to novel place learning. We observed that a behavioral protocol (spatial exploration of a novel environment) known to induce hippocampal place cell remapping resulted in alterations of hippocampal Bdnf DNA methylation. Further studies using neurophysiological in vivo single unit recordings revealed that pharmacological manipulations of DNA methylation decreased long-term but not short-term place field stability. Together our data highlight a role for DNA methylation in regulating neurophysiological spatial representation and memory formation. PMID:25960947
Spatial, Temporal and Spectral Satellite Image Fusion via Sparse Representation
NASA Astrophysics Data System (ADS)
Song, Huihui
Remote sensing provides good measurements for monitoring and further analyzing the climate change, dynamics of ecosystem, and human activities in global or regional scales. Over the past two decades, the number of launched satellite sensors has been increasing with the development of aerospace technologies and the growing requirements on remote sensing data in a vast amount of application fields. However, a key technological challenge confronting these sensors is that they tradeoff between spatial resolution and other properties, including temporal resolution, spectral resolution, swath width, etc., due to the limitations of hardware technology and budget constraints. To increase the spatial resolution of data with other good properties, one possible cost-effective solution is to explore data integration methods that can fuse multi-resolution data from multiple sensors, thereby enhancing the application capabilities of available remote sensing data. In this thesis, we propose to fuse the spatial resolution with temporal resolution and spectral resolution, respectively, based on sparse representation theory. Taking the study case of Landsat ETM+ (with spatial resolution of 30m and temporal resolution of 16 days) and MODIS (with spatial resolution of 250m ~ 1km and daily temporal resolution) reflectance, we propose two spatial-temporal fusion methods to combine the fine spatial information of Landsat image and the daily temporal resolution of MODIS image. Motivated by that the images from these two sensors are comparable on corresponding bands, we propose to link their spatial information on available Landsat- MODIS image pair (captured on prior date) and then predict the Landsat image from the MODIS counterpart on prediction date. To well-learn the spatial details from the prior images, we use a redundant dictionary to extract the basic representation atoms for both Landsat and MODIS images based on sparse representation. Under the scenario of two prior Landsat-MODIS image pairs, we build the corresponding relationship between the difference images of MODIS and ETM+ by training a low- and high-resolution dictionary pair from the given prior image pairs. In the second scenario, i.e., only one Landsat- MODIS image pair being available, we directly correlate MODIS and ETM+ data through an image degradation model. Then, the fusion stage is achieved by super-resolving the MODIS image combining the high-pass modulation in a two-layer fusion framework. Remarkably, the proposed spatial-temporal fusion methods form a unified framework for blending remote sensing images with phenology change or land-cover-type change. Based on the proposed spatial-temporal fusion models, we propose to monitor the land use/land cover changes in Shenzhen, China. As a fast-growing city, Shenzhen faces the problem of detecting the rapid changes for both rational city planning and sustainable development. However, the cloudy and rainy weather in region Shenzhen located makes the capturing circle of high-quality satellite images longer than their normal revisit periods. Spatial-temporal fusion methods are capable to tackle this problem by improving the spatial resolution of images with coarse spatial resolution but frequent temporal coverage, thereby making the detection of rapid changes possible. On two Landsat-MODIS datasets with annual and monthly changes, respectively, we apply the proposed spatial-temporal fusion methods to the task of multiple change detection. 
Afterward, we propose a novel spatial and spectral fusion method for satellite multispectral and hyperspectral (or high-spectral) images based on dictionary-pair learning and sparse non-negative matrix factorization. By combining the spectral information from hyperspectral image, which is characterized by low spatial resolution but high spectral resolution and abbreviated as LSHS, and the spatial information from multispectral image, which is featured by high spatial resolution but low spectral resolution and abbreviated as HSLS, this method aims to generate the fused data with both high spatial and high spectral resolutions. Motivated by the observation that each hyperspectral pixel can be represented by a linear combination of a few endmembers, this method first extracts the spectral bases of LSHS and HSLS images by making full use of the rich spectral information in LSHS data. The spectral bases of these two categories data then formulate a dictionary-pair due to their correspondence in representing each pixel spectra of LSHS data and HSLS data, respectively. Subsequently, the LSHS image is spatially unmixed by representing the HSLS image with respect to the corresponding learned dictionary to derive its representation coefficients. Combining the spectral bases of LSHS data and the representation coefficients of HSLS data, we finally derive the fused data characterized by the spectral resolution of LSHS data and the spatial resolution of HSLS data.
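The final fusion step described above can be sketched under simplifying assumptions: endmember spectra are learned from the LSHS image by non-negative factorization, per-pixel abundances are estimated from the HSLS image against spectrally degraded endmembers, and the two are recombined. The spectral response matrix and all variable names below are assumptions for illustration, not the thesis' implementation.

```python
# Sketch of the spatial-spectral fusion idea only (not the thesis' algorithm):
# endmember spectra come from the low-spatial/high-spectral (LSHS) image, per-pixel
# abundances from the high-spatial/low-spectral (HSLS) image, and the two are
# combined. The spectral response matrix R is assumed known.
import numpy as np
from scipy.optimize import nnls
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
n_hs_bands, n_ms_bands, n_endmembers = 100, 4, 5
lshs_pixels = rng.random((500, n_hs_bands))          # coarse-resolution hyperspectral pixels
hsls_pixels = rng.random((4000, n_ms_bands))         # fine-resolution multispectral pixels
R = rng.random((n_ms_bands, n_hs_bands))             # band-averaging (spectral response) matrix

# 1) Spectral basis (endmembers) from the LSHS data via non-negative factorization.
nmf = NMF(n_components=n_endmembers, max_iter=500, init="nndsvda")
nmf.fit(lshs_pixels)
endmembers = nmf.components_                         # (n_endmembers, n_hs_bands)

# 2) Abundances for each HSLS pixel against the spectrally degraded endmembers.
degraded = endmembers @ R.T                          # (n_endmembers, n_ms_bands)
abundances = np.array([nnls(degraded.T, p)[0] for p in hsls_pixels])

# 3) Fused cube: high spatial resolution of HSLS, full spectral resolution of LSHS.
fused = abundances @ endmembers                      # (4000, n_hs_bands)
print(fused.shape)
```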
Interaction Between Spatial and Feature Attention in Posterior Parietal Cortex
Ibos, Guilhem; Freedman, David J.
2016-01-01
Lateral intraparietal (LIP) neurons encode a vast array of sensory and cognitive variables. Recently, we proposed that the flexibility of feature representations in LIP reflect the bottom-up integration of sensory signals, modulated by feature-based attention (FBA), from upstream feature-selective cortical neurons. Moreover, LIP activity is also strongly modulated by the position of space-based attention (SBA). However, the mechanisms by which SBA and FBA interact to facilitate the representation of task-relevant spatial and non-spatial features in LIP remain unclear. We recorded from LIP neurons during performance of a task which required monkeys to detect specific conjunctions of color, motion-direction, and stimulus position. Here we show that FBA and SBA potentiate each other’s effect in a manner consistent with attention gating the flow of visual information along the cortical visual pathway. Our results suggest that linear bottom-up integrative mechanisms allow LIP neurons to emphasize task-relevant spatial and non-spatial features. PMID:27499082
The Spatial Distribution of Attention within and across Objects
Hollingworth, Andrew; Maxcey-Richard, Ashleigh M.; Vecera, Shaun P.
2011-01-01
Attention operates to select both spatial locations and perceptual objects. However, the specific mechanism by which attention is oriented to objects is not well understood. We examined the means by which object structure constrains the distribution of spatial attention (i.e., a “grouped array”). Using a modified version of the Egly et al. object cuing task, we systematically manipulated within-object distance and object boundaries. Four major findings are reported: 1) spatial attention forms a gradient across the attended object; 2) object boundaries limit the distribution of this gradient, with the spread of attention constrained by a boundary; 3) boundaries within an object operate similarly to across-object boundaries: we observed object-based effects across a discontinuity within a single object, without the demand to divide or switch attention between discrete object representations; and 4) the gradient of spatial attention across an object directly modulates perceptual sensitivity, implicating a relatively early locus for the grouped array representation. PMID:21728455
A map of abstract relational knowledge in the human hippocampal-entorhinal cortex.
Garvert, Mona M; Dolan, Raymond J; Behrens, Timothy Ej
2017-04-27
The hippocampal-entorhinal system encodes a map of space that guides spatial navigation. Goal-directed behaviour outside of spatial navigation similarly requires a representation of abstract forms of relational knowledge. This information relies on the same neural system, but it is not known whether the organisational principles governing continuous maps may extend to the implicit encoding of discrete, non-spatial graphs. Here, we show that the human hippocampal-entorhinal system can represent relationships between objects using a metric that depends on associative strength. We reconstruct a map-like knowledge structure directly from a hippocampal-entorhinal functional magnetic resonance imaging adaptation signal in a situation where relationships are non-spatial rather than spatial, discrete rather than continuous, and unavailable to conscious awareness. Notably, the measure that best predicted a behavioural signature of implicit knowledge and blood oxygen level-dependent adaptation was a weighted sum of future states, akin to the successor representation that has been proposed to account for place and grid-cell firing patterns.
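The "weighted sum of future states, akin to the successor representation" has a standard closed form for a fixed transition structure, M = (I - gamma*T)^-1; the short sketch below computes it for a toy graph of associations. This is textbook material offered for orientation, not the paper's analysis code.

```python
# Minimal sketch of the successor representation (SR) referenced in the abstract,
# using its standard closed form M = (I - gamma*T)^-1 for a fixed transition
# matrix T; this is textbook material, not the paper's analysis code.
import numpy as np

# A small 4-state graph: transition probabilities between "objects"/states.
T = np.array([[0.0, 0.7, 0.3, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.2, 0.3, 0.0, 0.5],
              [0.0, 0.0, 1.0, 0.0]])
gamma = 0.85                                  # discount over future states

M = np.linalg.inv(np.eye(4) - gamma * T)      # M[i, j]: discounted expected visits to j from i
print(np.round(M, 2))
```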
The roles of categorical and coordinate spatial relations in recognizing buildings.
Palermo, Liana; Piccardi, Laura; Nori, Raffaella; Giusberti, Fiorella; Guariglia, Cecilia
2012-11-01
Categorical spatial information is considered more useful for recognizing objects, and coordinate spatial information for guiding actions--for example, during navigation or grasping. In contrast with this assumption, we hypothesized that buildings, unlike other categories of objects, require both categorical and coordinate spatial information in order to be recognized. This hypothesis arose from evidence that right-brain-damaged patients have deficits in both coordinate judgments and recognition of buildings and from the fact that buildings are very useful for guiding navigation in urban environments. To test this hypothesis, we assessed 210 healthy college students while they performed four different tasks that required categorical and coordinate judgments and the recognition of common objects and buildings. Our results showed that both categorical and coordinate spatial representations are necessary to recognize a building, whereas only categorical representations are necessary to recognize an object. We discuss our data in view of a recent neural framework for visuospatial processing, suggesting that recognizing buildings may specifically activate the parieto-medial-temporal pathway.
Interaction between Spatial and Feature Attention in Posterior Parietal Cortex.
Ibos, Guilhem; Freedman, David J
2016-08-17
Lateral intraparietal (LIP) neurons encode a vast array of sensory and cognitive variables. Recently, we proposed that the flexibility of feature representations in LIP reflect the bottom-up integration of sensory signals, modulated by feature-based attention (FBA), from upstream feature-selective cortical neurons. Moreover, LIP activity is also strongly modulated by the position of space-based attention (SBA). However, the mechanisms by which SBA and FBA interact to facilitate the representation of task-relevant spatial and non-spatial features in LIP remain unclear. We recorded from LIP neurons during performance of a task that required monkeys to detect specific conjunctions of color, motion direction, and stimulus position. Here we show that FBA and SBA potentiate each other's effect in a manner consistent with attention gating the flow of visual information along the cortical visual pathway. Our results suggest that linear bottom-up integrative mechanisms allow LIP neurons to emphasize task-relevant spatial and non-spatial features. Copyright © 2016 Elsevier Inc. All rights reserved.
Zhao, Hui; Chen, Chuansheng; Zhang, Hongchuan; Zhou, Xinlin; Mei, Leilei; Chen, Chunhui; Chen, Lan; Cao, Zhongyu; Dong, Qi
2012-01-01
Using an artificial-number learning paradigm and the ERP technique, the present study investigated neural mechanisms involved in the learning of magnitude and spatial order. 54 college students were divided into 2 groups matched in age, gender, and school major. One group was asked to learn the associations between magnitude (dot patterns) and the meaningless Gibson symbols, and the other group learned the associations between spatial order (horizontal positions on the screen) and the same set of symbols. Results revealed differentiated neural mechanisms underlying the learning processes of symbolic magnitude and spatial order. Compared to magnitude learning, spatial-order learning showed a later and reversed distance effect. Furthermore, an analysis of the order-priming effect showed that order was not inherent to the learning of magnitude. Results of this study showed a dissociation between magnitude and order, which supports the numerosity code hypothesis of mental representations of magnitude. PMID:23185363
Agarwal, Sri Mahavir; Shivakumar, Venkataram; Kalmady, Sunil V; Danivas, Vijay; Amaresha, Anekal C; Bose, Anushree; Narayanaswamy, Janardhanan C; Amorim, Michel-Ange; Venkatasubramanian, Ganesan
2017-08-31
Perspective-taking ability is an essential spatial faculty that is of much interest in both health and neuropsychiatric disorders. There is limited data on the neural correlates of perspective taking in the context of a realistic three-dimensional environment. We report the results of a pilot study exploring the same in eight healthy volunteers. Subjects underwent two runs of an experiment in a 3 Tesla magnetic resonance imaging (MRI) scanner, involving alternate blocks of a first-person perspective based allocentric object location memory task (OLMT), a third-person perspective based egocentric visual perspective taking task (VPRT), and a table task (TT) that served as a control. Difference in blood oxygen level dependent response during task performance was analyzed using Statistical Parametric Mapping software, version 12. Activations were considered significant if they survived family-wise error correction at the cluster level using a height threshold of p < 0.001, uncorrected at the voxel level. A significant difference in accuracy and reaction time based on task type was found. Subjects had significantly lower accuracy in VPRT compared to TT. Accuracy in the two active tasks was not significantly different. Subjects took significantly longer in the VPRT in comparison to TT. Reaction time in the two active tasks was not significantly different. Functional MRI revealed significantly higher activation in the bilateral visual cortex and left temporoparietal junction (TPJ) in VPRT compared to OLMT. The results underscore the importance of TPJ in egocentric manipulation in healthy controls in the context of reality-based spatial tasks.
Sahan, Muhammet Ikbal; Verguts, Tom; Boehler, Carsten Nicolas; Pourtois, Gilles; Fias, Wim
2016-08-01
Selective attention is not limited to information that is physically present in the external world, but can also operate on mental representations in the internal world. However, it is not known whether the mechanisms of attentional selection operate in similar fashions in physical and mental space. We studied the spatial distributions of attention for items in physical and mental space by comparing how successfully distractors were rejected at varying distances from the attended location. The results indicated very similar distribution characteristics of spatial attention in physical and mental space. Specifically, we found that performance monotonically improved with increasing distractor distance relative to the attended location, suggesting that distractor confusability is particularly pronounced for nearby distractors, relative to distractors farther away. The present findings suggest that mental representations preserve their spatial configuration in working memory, and that similar mechanistic principles underlie selective attention in physical and in mental space.
Some Components of Geometric Knowledge of Future Elementary School Teachers
ERIC Educational Resources Information Center
Debrenti, Edith
2016-01-01
Geometric experience, spatial representation, spatial visualization, understanding the world around us, and developing the ability of spatial reasoning are fundamental aims in the teaching of mathematics. (Freudenthal, 1972) Learning is a process which involves advancing from level to level. In primary school the focus is on the first two levels…
The lasting memory enhancements of retrospective attention
Reaves, Sarah; Strunk, Jonathan; Phillips, Shekinah; Verhaeghen, Paul; Duarte, Audrey
2016-01-01
Behavioral research has shown that spatial cues that orient attention toward task relevant items being maintained in visual short-term memory (VSTM) enhance item memory accuracy. However, it is unknown if these retrospective attentional cues (“retro-cues”) enhance memory beyond typical short-term memory delays. It is also unknown whether retro-cues affect the spatial information associated with VSTM representations. Emerging evidence suggests that processes that affect short-term memory maintenance may also affect long-term memory (LTM) but little work has investigated the role of attention in LTM. In the current event-related potential (ERP) study, we investigated the duration of retrospective attention effects and the impact of retrospective attention manipulations on VSTM representations. Results revealed that retro-cueing improved both VSTM and LTM memory accuracy and that posterior maximal ERPs observed during VSTM maintenance predicted subsequent LTM performance. N2pc ERPs associated with attentional selection were attenuated by retro-cueing suggesting that retrospective attention may disrupt maintenance of spatial configural information in VSTM. Collectively, these findings suggest that retrospective attention can alter the structure of memory representations, which impacts memory performance beyond short-term memory delays. PMID:27038756
The HTM Spatial Pooler-A Neocortical Algorithm for Online Sparse Distributed Coding.
Cui, Yuwei; Ahmad, Subutai; Hawkins, Jeff
2017-01-01
Hierarchical temporal memory (HTM) provides a theoretical framework that models several key computational principles of the neocortex. In this paper, we analyze an important component of HTM, the HTM spatial pooler (SP). The SP models how neurons learn feedforward connections and form efficient representations of the input. It converts arbitrary binary input patterns into sparse distributed representations (SDRs) using a combination of competitive Hebbian learning rules and homeostatic excitability control. We describe a number of key properties of the SP, including fast adaptation to changing input statistics, improved noise robustness through learning, efficient use of cells, and robustness to cell death. In order to quantify these properties we develop a set of metrics that can be directly computed from the SP outputs. We show how the properties are met using these metrics and targeted artificial simulations. We then demonstrate the value of the SP in a complete end-to-end real-world HTM system. We discuss the relationship with neuroscience and previous studies of sparse coding. The HTM spatial pooler represents a neurally inspired algorithm for learning sparse representations from noisy data streams in an online fashion.
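The SP's core loop as described here (feedforward overlap, competitive inhibition, Hebbian permanence updates, and homeostatic boosting) can be sketched compactly; the version below simplifies local inhibition to a global k-winners-take-all and uses illustrative parameters, so it is not Numenta's reference implementation.

```python
# Highly simplified sketch of the HTM spatial pooler's core loop (global rather
# than local inhibition, illustrative parameters; not Numenta's reference code).
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_columns, n_active = 256, 128, 8          # ~6% sparsity in the output SDR
perm = rng.random((n_columns, n_inputs)) * 0.3       # synapse permanences in [0, 0.3)
boost = np.ones(n_columns)                           # homeostatic excitability factors
CONNECTED, P_INC, P_DEC = 0.2, 0.05, 0.02

def spatial_pooler_step(x):
    """x: binary input vector -> indices of the winning (active) columns."""
    connected = (perm >= CONNECTED).astype(float)     # only strong synapses count
    overlap = boost * (connected @ x)                 # feedforward overlap per column
    winners = np.argsort(overlap)[-n_active:]         # k-winners-take-all inhibition
    # Hebbian learning on winning columns: reinforce synapses to active inputs,
    # weaken synapses to inactive inputs.
    perm[winners] += np.where(x > 0, P_INC, -P_DEC)
    np.clip(perm, 0.0, 1.0, out=perm)
    # Homeostasis: columns that rarely win get a slowly growing boost.
    active = np.zeros(n_columns)
    active[winners] = 1.0
    boost[:] = boost * 0.99 + 0.05 * (1.0 - active)
    return winners

x = (rng.random(n_inputs) < 0.1).astype(float)        # a sparse binary input pattern
print(sorted(spatial_pooler_step(x)))
```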
Orienting attention to locations in mental representations
Astle, Duncan Edward; Summerfield, Jennifer; Griffin, Ivan; Nobre, Anna Christina
2014-01-01
Many cognitive processes depend on our ability to hold information in mind, often well beyond the offset of the original sensory input. The capacity of this ‘visual short-term memory’ (VSTM) is limited to around three to four items. Recent research has demonstrated that the content of VSTM can be modulated by top-down attentional biases. This has been demonstrated using retrodictive spatial cues, termed ‘retro-cues’, which orient participants’ attention to spatial locations within VSTM. In the current paper, we tested whether the use of these cues is modulated by memory load and cue delay. There are a number of important conclusions: i) top-down biases can operate upon very brief iconic traces as well as older VSTM representations (Experiment 1); ii) when operating within capacity, subjects use the cue to prioritize where they initiate their memory search, rather than to discard un-cued items (Experiments 2 and 3); iii) when capacity is exceeded there is little benefit to top-down biasing relative to a neutral condition, however, unattended items are lost, with there being a substantial cost of invalid spatial cueing (Experiment 3); iv) these costs and benefits of orienting spatial attention differ across iconic memory and VSTM representations when VSTM capacity is exceeded (Experiment 4). PMID:21972046
A tesselated probabilistic representation for spatial robot perception and navigation
NASA Technical Reports Server (NTRS)
Elfes, Alberto
1989-01-01
The ability to recover robust spatial descriptions from sensory information and to efficiently utilize these descriptions in appropriate planning and problem-solving activities are crucial requirements for the development of more powerful robotic systems. Traditional approaches to sensor interpretation, with their emphasis on geometric models, are of limited use for autonomous mobile robots operating in and exploring unknown and unstructured environments. Here, researchers present a new approach to robot perception that addresses such scenarios using a probabilistic tesselated representation of spatial information called the Occupancy Grid. The Occupancy Grid is a multi-dimensional random field that maintains stochastic estimates of the occupancy state of each cell in the grid. The cell estimates are obtained by interpreting incoming range readings using probabilistic models that capture the uncertainty in the spatial information provided by the sensor. A Bayesian estimation procedure allows the incremental updating of the map using readings taken from several sensors over multiple points of view. An overview of the Occupancy Grid framework is given, and its application to a number of problems in mobile robot mapping and navigation are illustrated. It is argued that a number of robotic problem-solving activities can be performed directly on the Occupancy Grid representation. Some parallels are drawn between operations on Occupancy Grids and related image processing operations.
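The cell-level Bayesian update behind an occupancy grid is commonly implemented in log-odds form; a minimal hedged sketch follows, with a toy inverse sensor model standing in for the probabilistic range models described in the abstract.

```python
# Minimal occupancy-grid update sketch: each cell holds the log-odds of being
# occupied and is updated with an inverse sensor model for every new reading.
# The toy sensor model below stands in for the paper's probabilistic range model.
import numpy as np

grid = np.zeros((50, 50))                     # log-odds; 0.0 == probability 0.5

def log_odds(p):
    return np.log(p / (1.0 - p))

def update_cell(grid, cell, p_occupied_given_reading):
    """Bayesian update of one cell: add the log-odds of the inverse sensor model."""
    grid[cell] += log_odds(p_occupied_given_reading)

def probability(grid):
    return 1.0 - 1.0 / (1.0 + np.exp(grid))   # convert log-odds back to probability

# Two range readings suggest cell (10, 12) is occupied and cell (10, 5) is free.
for _ in range(2):
    update_cell(grid, (10, 12), 0.8)          # reading consistent with an obstacle
    update_cell(grid, (10, 5), 0.2)           # reading consistent with free space
print(probability(grid)[10, 12], probability(grid)[10, 5])
```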
Reasoning with inaccurate spatial knowledge. [for Planetary Rover
NASA Technical Reports Server (NTRS)
Doshi, Rajkumar S.; White, James E.; Lam, Raymond; Atkinson, David J.
1988-01-01
This paper describes work in progress on spatial planning for a semiautonomous mobile robot vehicle. The overall objective is to design a semiautonomous rover to plan routes in unknown, natural terrains. The approach to spatial planning involves deduction of common-sense spatial knowledge using geographical information, natural terrain representations, and assimilation of new and possibly conflicting terrain information. This report describes the ongoing research and implementation.
Reliability-Weighted Integration of Audiovisual Signals Can Be Modulated by Top-down Attention
Noppeney, Uta
2018-01-01
Behaviorally, it is well established that human observers integrate signals near-optimally weighted in proportion to their reliabilities as predicted by maximum likelihood estimation. Yet, despite abundant behavioral evidence, it is unclear how the human brain accomplishes this feat. In a spatial ventriloquist paradigm, participants were presented with auditory, visual, and audiovisual signals and reported the location of the auditory or the visual signal. Combining psychophysics, multivariate functional MRI (fMRI) decoding, and models of maximum likelihood estimation (MLE), we characterized the computational operations underlying audiovisual integration at distinct cortical levels. We estimated observers’ behavioral weights by fitting psychometric functions to participants’ localization responses. Likewise, we estimated the neural weights by fitting neurometric functions to spatial locations decoded from regional fMRI activation patterns. Our results demonstrate that low-level auditory and visual areas encode predominantly the spatial location of the signal component of a region’s preferred auditory (or visual) modality. By contrast, intraparietal sulcus forms spatial representations by integrating auditory and visual signals weighted by their reliabilities. Critically, the neural and behavioral weights and the variance of the spatial representations depended not only on the sensory reliabilities as predicted by the MLE model but also on participants’ modality-specific attention and report (i.e., visual vs. auditory). These results suggest that audiovisual integration is not exclusively determined by bottom-up sensory reliabilities. Instead, modality-specific attention and report can flexibly modulate how intraparietal sulcus integrates sensory signals into spatial representations to guide behavioral responses (e.g., localization and orienting). PMID:29527567
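The maximum-likelihood integration rule the study tests (cues weighted by their inverse variances, with a fused variance lower than either cue alone) can be written in a few lines; the numeric example below is for orientation only and is not the paper's analysis pipeline.

```python
# The maximum-likelihood (reliability-weighted) integration rule, as a small
# numeric example (not the paper's analysis pipeline).
import numpy as np

def mle_fusion(x_a, sigma_a, x_v, sigma_v):
    """Fuse an auditory and a visual location estimate by inverse-variance weighting."""
    r_a, r_v = 1.0 / sigma_a**2, 1.0 / sigma_v**2     # reliabilities
    w_a, w_v = r_a / (r_a + r_v), r_v / (r_a + r_v)   # weights sum to 1
    fused = w_a * x_a + w_v * x_v
    fused_sigma = np.sqrt(1.0 / (r_a + r_v))          # never larger than either cue alone
    return fused, fused_sigma

# Vision more reliable than audition: the fused estimate is pulled toward vision.
print(mle_fusion(x_a=10.0, sigma_a=4.0, x_v=2.0, sigma_v=1.0))
```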
Loomis, Jack M; Klatzky, Roberta L; McHugh, Brendan; Giudice, Nicholas A
2012-08-01
Spatial working memory can maintain representations from vision, hearing, and touch, representations referred to here as spatial images. The present experiment addressed whether spatial images from vision and hearing that are simultaneously present within working memory retain modality-specific tags or are amodal. Observers were presented with short sequences of targets varying in angular direction, with the targets in a given sequence being all auditory, all visual, or a sequential mixture of the two. On two thirds of the trials, one of the locations was repeated, and observers had to respond as quickly as possible when detecting this repetition. Ancillary detection and localization tasks confirmed that the visual and auditory targets were perceptually comparable. Response latencies in the working memory task showed small but reliable costs in performance on trials involving a sequential mixture of auditory and visual targets, as compared with trials of pure vision or pure audition. These deficits were statistically reliable only for trials on which the modalities of the matching location switched from the penultimate to the final target in the sequence, indicating a switching cost. The switching cost for the pair in immediate succession means that the spatial images representing the target locations retain features of the visual or auditory representations from which they were derived. However, there was no reliable evidence of a performance cost for mixed modalities in the matching pair when the second of the two did not immediately follow the first, suggesting that more enduring spatial images in working memory may be amodal.
Medical Image Retrieval Using Multi-Texton Assignment.
Tang, Qiling; Yang, Jirong; Xia, Xianfu
2018-02-01
In this paper, we present a multi-texton representation method for medical image retrieval, which utilizes a locality constraint to encode each filter bank response within its local coordinate system, consisting of the k nearest neighbors in the texton dictionary, and subsequently employs the spatial pyramid matching technique to implement the feature vector representation. Compared with the traditional nearest-neighbor assignment followed by texton histogram statistics, our strategy reduces quantization errors in the mapping process and adds information about the spatial layout of texton distributions, thus increasing the descriptive power of the image representation. We investigate the effects of different parameters on system performance in order to choose appropriate values for our datasets, and carry out experiments on the IRMA-2009 medical collection and a mammographic patch dataset. Extensive experimental results demonstrate that the proposed method has superior performance.
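As a rough illustration of locality-constrained encoding against a texton dictionary, the sketch below assigns each filter-bank response to its k nearest textons and solves a small constrained least-squares problem for the weights. It follows the generic locality-constrained linear coding recipe under assumed parameter names (`k`, `lam`); the paper's exact formulation, dictionary size, and spatial pyramid step are not reproduced.

```python
import numpy as np

def llc_encode(x, dictionary, k=5, lam=1e-4):
    """Simplified locality-constrained coding of one filter-bank response
    x (d,) against a texton dictionary (n_textons, d). Only the k nearest
    textons receive non-zero weights; the weights sum to one.
    Generic LLC-style sketch, not the paper's exact formulation.
    """
    dists = np.linalg.norm(dictionary - x, axis=1)
    idx = np.argsort(dists)[:k]                   # k nearest textons
    B = dictionary[idx]                           # (k, d) local basis
    z = B - x                                     # shift to the origin
    C = z @ z.T                                   # local covariance
    C += lam * np.trace(C) * np.eye(k)            # regularize
    w = np.linalg.solve(C, np.ones(k))
    w /= w.sum()                                  # enforce sum-to-one
    code = np.zeros(len(dictionary))
    code[idx] = w
    return code

# Example: encode a random 8-D response with a 32-texton dictionary.
rng = np.random.default_rng(0)
code = llc_encode(rng.normal(size=8), rng.normal(size=(32, 8)))
```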
State-Based Delay Representation and Its Transfer from a Game of Pong to Reaching and Tracking
Leib, Raz; Pressman, Assaf; Simo, Lucia S.; Karniel, Amir
2017-01-01
Abstract To accurately estimate the state of the body, the nervous system needs to account for delays between signals from different sensory modalities. To investigate how such delays may be represented in the sensorimotor system, we asked human participants to play a virtual pong game in which the movement of the virtual paddle was delayed with respect to their hand movement. We tested the representation of this new mapping between the hand and the delayed paddle by examining transfer of adaptation to blind reaching and blind tracking tasks. These blind tasks enabled us to capture the representation in feedforward mechanisms of movement control. A Time Representation of the delay is an estimation of the actual time lag between hand and paddle movements. A State Representation is a representation of delay using current state variables: the distance between the paddle and the ball originating from the delay may be considered as a spatial shift; the low sensitivity in the response of the paddle may be interpreted as a minifying gain; and the lag may be attributed to a mechanical resistance that influences the paddle's movement. We found that the effects of prolonged exposure to the delayed feedback transferred to blind reaching and tracking tasks and caused participants to exhibit hypermetric movements. These results, together with simulations of our representation models, suggest that delay is not represented based on time, but rather as a spatial gain change in visuomotor mapping. PMID:29379875
The Koslowski-Sahlmann representation: quantum configuration space
NASA Astrophysics Data System (ADS)
Campiglia, Miguel; Varadarajan, Madhavan
2014-09-01
The Koslowski-Sahlmann (KS) representation is a generalization of the representation underlying the discrete spatial geometry of loop quantum gravity (LQG), to accommodate states labelled by smooth spatial geometries. As shown recently, the KS representation supports, in addition to the action of the holonomy and flux operators, the action of operators which are the quantum counterparts of certain connection dependent functions known as ‘background exponentials’. Here we show that the KS representation displays the following properties which are the exact counterparts of LQG ones: (i) the abelian * algebra of SU(2) holonomies and ‘U(1)’ background exponentials can be completed to a C* algebra, (ii) the space of semianalytic SU(2) connections is topologically dense in the spectrum of this algebra, (iii) there exists a measure on this spectrum for which the KS Hilbert space is realized as the space of square integrable functions on the spectrum, (iv) the spectrum admits a characterization as a projective limit of finite numbers of copies of SU(2) and U(1), (v) the algebra underlying the KS representation is constructed from cylindrical functions and their derivations in exactly the same way as the LQG (holonomy-flux) algebra except that the KS cylindrical functions depend on the holonomies and the background exponentials, this extra dependence being responsible for the differences between the KS and LQG algebras. While these results are obtained for compact spaces, they are expected to be of use for the construction of the KS representation in the asymptotically flat case.
Bag of Lines (BoL) for Improved Aerial Scene Representation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sridharan, Harini; Cheriyadat, Anil M.
2014-09-22
Feature representation is a key step in automated visual content interpretation. In this letter, we present a robust feature representation technique, referred to as bag of lines (BoL), for high-resolution aerial scenes. The proposed technique involves extracting and compactly representing low-level line primitives from the scene. The compact scene representation is generated by counting the different types of lines representing various linear structures in the scene. Through extensive experiments, we show that the proposed scene representation is invariant to scale changes and scene conditions and can discriminate urban scene categories accurately. We compare the BoL representation with the popular scale-invariant feature transform (SIFT) and Gabor wavelets for their classification and clustering performance on an aerial scene database consisting of images acquired by sensors with different spatial resolutions. The proposed BoL representation outperforms the SIFT- and Gabor-based representations.
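The core of the BoL idea, counting extracted line primitives into a compact histogram, can be sketched as follows. The function assumes line segments have already been detected (endpoint coordinates as input) and uses illustrative orientation and length bins; the actual line detector and bin scheme used in the letter may differ.

```python
import numpy as np

def bag_of_lines(segments, n_orient_bins=8, length_edges=(0, 20, 50, 100, np.inf)):
    """Build a coarse bag-of-lines histogram from extracted line segments.
    `segments` is an array of (x1, y1, x2, y2) endpoints; the bin scheme
    here is illustrative, not the one used in the letter.
    """
    segments = np.asarray(segments, dtype=float)
    dx = segments[:, 2] - segments[:, 0]
    dy = segments[:, 3] - segments[:, 1]
    lengths = np.hypot(dx, dy)
    orient = np.mod(np.arctan2(dy, dx), np.pi)        # orientation in [0, pi)
    o_bin = np.minimum((orient / np.pi * n_orient_bins).astype(int),
                       n_orient_bins - 1)
    l_bin = np.digitize(lengths, length_edges[1:-1])  # length category
    hist = np.zeros((n_orient_bins, len(length_edges) - 1))
    np.add.at(hist, (o_bin, l_bin), 1)                # count lines per bin
    hist /= max(hist.sum(), 1)                        # normalize counts
    return hist.ravel()

# Example with two hand-made segments: one short horizontal, one long diagonal.
print(bag_of_lines([(0, 0, 10, 0), (0, 0, 60, 60)]))
```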
NASA Technical Reports Server (NTRS)
Peuquet, Donna J.
1987-01-01
A new approach to building geographic data models that is based on the fundamental characteristics of the data is presented. An overall theoretical framework for representing geographic data is proposed. An example of utilizing this framework in a Geographic Information System (GIS) context by combining artificial intelligence techniques with recent developments in spatial data processing techniques is given. Elements of data representation discussed include hierarchical structure, separation of locational and conceptual views, and the ability to store knowledge at variable levels of completeness and precision.
Neonatal Atlas Construction Using Sparse Representation
Shi, Feng; Wang, Li; Wu, Guorong; Li, Gang; Gilmore, John H.; Lin, Weili; Shen, Dinggang
2014-01-01
Atlas construction generally includes first an image registration step to normalize all images into a common space and then an atlas building step to fuse the information from all the aligned images. Although numerous atlas construction studies have been performed to improve the accuracy of the image registration step, an unweighted or simply weighted average is often used in the atlas building step. In this article, we propose a novel patch-based sparse representation method for atlas construction after all images have been registered into the common space. By taking advantage of local sparse representation, more anatomical details can be recovered in the built atlas. To make the anatomical structures spatially smooth in the atlas, anatomical feature constraints on the group structure of representations, together with the overlapping of neighboring patches, are imposed to ensure anatomical consistency between neighboring patches. The proposed method has been applied to 73 neonatal MR images with poor spatial resolution and low tissue contrast, for constructing a neonatal brain atlas with sharp anatomical details. Experimental results demonstrate that the proposed method can significantly enhance the quality of the constructed atlas by discovering more anatomical details, especially in the highly convoluted cortical regions. The resulting atlas demonstrates superior performance when applied to spatially normalizing three different neonatal datasets, compared with other state-of-the-art neonatal brain atlases. PMID:24638883
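A heavily simplified sketch of patch-based sparse fusion is given below: corresponding patches from the aligned images form a dictionary, and a fused patch is reconstructed from a sparse (L1-regularized) combination of them. The group-structure constraints and the neighboring-patch overlap handling described in the article are omitted, and the `lam` value is an arbitrary placeholder.

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_patch_fusion(patches, lam=0.05):
    """Fuse corresponding patches from aligned subject images into one
    atlas patch via sparse representation. `patches` is (n_subjects, p),
    one vectorized patch per subject. The mean patch is re-expressed as
    a sparse combination of subject patches; the article's group-structure
    and overlap constraints are not reproduced in this sketch.
    """
    D = np.asarray(patches, dtype=float).T          # dictionary (p, n)
    target = D.mean(axis=1)                         # simple target patch
    model = Lasso(alpha=lam, fit_intercept=False, max_iter=5000)
    model.fit(D, target)                            # sparse coding step
    alpha = model.coef_                             # sparse weights
    return D @ alpha                                # fused atlas patch
```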
Mental map and spatial thinking
NASA Astrophysics Data System (ADS)
Vanzella Castellar, Sonia Maria; Cristiane Strina Juliasz, Paula
2018-05-01
Spatial thinking is a central concept in our research at the Faculty of Education of the University of São Paulo (FE-USP). Cartography is fundamental to this kind of thinking, because it contributes to the development of the representation of space. Spatial representations include drawings - mental maps - as well as maps, charts, aerial photos, satellite images, graphics and diagrams. To think spatially - including geographical contents and concepts and their representations - also means to reason, defined by the skills the individual develops to understand the structure and function of a space and to describe its organization and relation to other spaces. The aim of this paper is to analyze the role of mental maps in the development of the concepts of city and landscape - structuring concepts for school geography. The purpose is to analyze how students in Geography and Pedagogy - future teachers - and young children in Early Childhood Education think, feel, and appropriate these concepts. The analysis indicates the importance of developing mental maps in activities with pedagogy and geography students, so that they understand that students at school can be producers of maps. Cartography is a language and allows the student to develop spatial and temporal relationships and notions such as orientation, distance and location, learning the concepts of geographical science. Mental maps present the basic features of a location, such as its conditions - the features verified in one place - and its connections, that is, how this place connects to other places.
Auditory spatial representations of the world are compressed in blind humans.
Kolarik, Andrew J; Pardhan, Shahina; Cirstea, Silvia; Moore, Brian C J
2017-02-01
Compared to sighted listeners, blind listeners often display enhanced auditory spatial abilities such as localization in azimuth. However, less is known about whether blind humans can accurately judge distance in extrapersonal space using auditory cues alone. Using virtualization techniques, we show that auditory spatial representations of the world beyond the peripersonal space of blind listeners are compressed compared to those for normally sighted controls. Blind participants overestimated the distance to nearby sources and underestimated the distance to remote sound sources, in both reverberant and anechoic environments, and for speech, music, and noise signals. Functions relating judged and actual virtual distance were well fitted by compressive power functions, indicating that the absence of visual information regarding the distance of sound sources may prevent accurate calibration of the distance information provided by auditory signals.
Matsumoto, Yuji; Takaki, Yasuhiro
2014-06-15
Horizontally scanning holography can enlarge both screen size and viewing zone angle. A microelectromechanical-system spatial light modulator, which can generate only binary images, is used to generate hologram patterns. Thus, techniques to improve gray-scale representation in reconstructed images should be developed. In this study, the error diffusion technique was used for the binarization of holograms. When the Floyd-Steinberg error diffusion coefficients were used, gray-scale representation was improved. However, the linearity in the gray-scale representation was not satisfactory. We proposed the use of a correction table and showed that the linearity was greatly improved.
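For reference, standard Floyd-Steinberg error diffusion (the baseline binarization mentioned in the abstract) pushes each pixel's quantization error onto its unprocessed neighbors with the familiar 7/16, 3/16, 5/16, 1/16 weights. The sketch below is a plain implementation of that baseline; the proposed correction table for improving gray-scale linearity is not reproduced here.

```python
import numpy as np

def floyd_steinberg_binarize(hologram):
    """Binarize a gray-scale hologram pattern with Floyd-Steinberg error
    diffusion. Input: 2-D array with values in [0, 1]. Output: 0/1 array.
    """
    img = np.asarray(hologram, dtype=float).copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            img[y, x] = new
            err = old - new                      # quantization error
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16    # right neighbor
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16  # lower-left
                img[y + 1, x] += err * 5 / 16          # below
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16  # lower-right
    return img
```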
Does Changing the Reference Frame Affect Infant Categorization of the Spatial Relation BETWEEN?
ERIC Educational Resources Information Center
Quinn, Paul C.; Doran, Matthew M.; Papafragou, Anna
2011-01-01
Past research has shown that variation in the target objects depicting a given spatial relation disrupts the formation of a category representation for that relation. In the current research, we asked whether changing the orientation of the referent frame depicting the spatial relation would also disrupt the formation of a category representation…
Brain Activation during Spatial Updating and Attentive Tracking of Moving Targets
ERIC Educational Resources Information Center
Jahn, Georg; Wendt, Julia; Lotze, Martin; Papenmeier, Frank; Huff, Markus
2012-01-01
Keeping aware of the locations of objects while one is moving requires the updating of spatial representations. As long as the objects are visible, attentional tracking is sufficient, but knowing where objects out of view went in relation to one's own body involves an updating of spatial working memory. Here, multiple object tracking was employed…
ERIC Educational Resources Information Center
Patro, Katarzyna; Fischer, Ursula; Nuerk, Hans-Christoph; Cress, Ulrike
2016-01-01
Spatial processing of numbers has emerged as one of the basic properties of humans' mathematical thinking. However, how and when number-space relations develop is a highly contested issue. One dominant view has been that a link between numbers and left/right spatial directions is constructed based on directional experience associated with reading…
Lateral Entorhinal Cortex Lesions Impair Local Spatial Frameworks
Kuruvilla, Maneesh V.; Ainge, James A.
2017-01-01
A prominent theory in the neurobiology of memory processing is that episodic memory is supported by contextually gated spatial representations in the hippocampus formed by combining spatial information from medial entorhinal cortex (MEC) with non-spatial information from lateral entorhinal cortex (LEC). However, there is a growing body of evidence from lesion and single-unit recording studies in rodents suggesting that LEC might have a role in encoding space, particularly the current and previous locations of objects within the local environment. Landmarks, both local and global, have been shown to control the spatial representations hypothesized to underlie cognitive maps. Consequently, it has recently been suggested that information processing within this network might be organized with reference to spatial scale with LEC and MEC providing information about local and global spatial frameworks respectively. In the present study, we trained animals to search for food using either a local or global spatial framework. Animals were re-tested on both tasks after receiving excitotoxic lesions of either the MEC or LEC. LEC lesioned animals were impaired in their ability to learn a local spatial framework task. LEC lesioned animals were also impaired on an object recognition (OR) task involving multiple local features but unimpaired at recognizing a single familiar object. Together, this suggests that LEC is involved in associating features of the local environment. However, neither LEC nor MEC lesions impaired performance on the global spatial framework task. PMID:28567006
Spatial Data Transfer Standard (SDTS), part 3 : ISO 8211 encoding
DOT National Transportation Integrated Search
1997-11-20
The ISO 8211 encoding provides a representation of a Spatial Data Transfer Standard (SDTS) file set in a standardized method enabling the file set to be exported to or imported from different media by general purpose ISO 8211 software.
Müller-Lyer figures influence the online reorganization of visually guided grasping movements.
Heath, Matthew; Rival, Christina; Neely, Kristina; Krigolson, Olav
2006-03-01
In advance of grasping a visual object embedded within fins-in and fins-out Müller-Lyer (ML) configurations, participants formulated a premovement grip aperture (GA) based on the size of a neutral preview object. Preview objects were smaller, veridical, or larger than the size of the to-be-grasped target object. As a result, premovement GA associated with the small and large preview objects required significant online reorganization to appropriately grasp the target object. We reasoned that such a manipulation would provide an opportunity to examine the extent to which the visuomotor system engages egocentric and/or allocentric visual cues for the online, feedback-based control of action. It was found that the online reorganization of GA was reliably influenced by the ML figures (i.e., from 20 to 80% of movement time), regardless of the size of the preview object, although the small and large preview objects elicited more robust illusory effects than the veridical preview object. These results counter the view that online grasping control is mediated by absolute visual information computed with respect to the observer (e.g., Glover in Behav Brain Sci 27:3-78, 2004; Milner and Goodale in The visual brain in action 1995). Instead, the impact of the ML figures suggests a level of interaction between egocentric and allocentric visual cues in online action control.
Turgut, Nergiz; Miranda, Marcela; Kastrup, Andreas; Eling, Paul; Hildebrandt, Helmut
2018-06-01
Visuospatial neglect is a disabling syndrome resulting in impaired activities of daily living and in longer durations of inpatient rehabilitation. Effective interventions to remediate neglect are still needed. The combination of tDCS and an optokinetic task might qualify as a treatment method. A total of 32 post-acute patients with left (n = 20) or right-sided neglect were assigned to an intervention group or a control group (n = 16 each). The intervention group received eight sessions of 1.5-2.0 mA parietal transcranial direct current stimulation (tDCS) during the performance of an optokinetic task, distributed over two weeks. Additionally, they received standard therapy for five hours per day. The control group received only the standard therapy. Patients were examined twice before (with 3-4 days between examinations) and twice after treatment (5-6 days between examinations). Compared to the control group, and controlling for spontaneous remission, the intervention group improved on spontaneous body orientation and the Clock Drawing Test. Intragroup comparisons showed broad improvements on egocentric but not on allocentric symptoms only for the intervention group. A short additional application of tDCS during an optokinetic task led to improvements of severe neglect compared to a standard neurological early rehabilitation treatment. Improvements seem to concern primarily egocentric rather than allocentric neglect.
Spatial-Operator Algebra For Robotic Manipulators
NASA Technical Reports Server (NTRS)
Rodriguez, Guillermo; Kreutz, Kenneth K.; Milman, Mark H.
1991-01-01
Report discusses spatial-operator algebra developed in recent studies of mathematical modeling, control, and design of trajectories of robotic manipulators. Provides succinct representation of mathematically complicated interactions among multiple joints and links of manipulator, thereby relieving analyst of most of tedium of detailed algebraic manipulations. Presents analytical formulation of spatial-operator algebra, describes some specific applications, summarizes current research, and discusses implementation of spatial-operator algebra in the Ada programming language.
Relationship among Environmental Pointing Accuracy, Mental Rotation, Sex, and Hormones
ERIC Educational Resources Information Center
Bell, Scott; Saucier, Deborah
2004-01-01
Humans rely on internal representations to solve a variety of spatial problems including navigation. Navigation employs specific information to compose a representation of space that is distinct from that obtained through static bird's-eye or horizontal perspectives. The ability to point to on-route locations, off-route locations, and the route…
Sex Differences in the Spatial Representation of Number
ERIC Educational Resources Information Center
Bull, Rebecca; Cleland, Alexandra A.; Mitchell, Thomas
2013-01-01
There is a large body of accumulated evidence from behavioral and neuroimaging studies regarding how and where in the brain we represent basic numerical information. A number of these studies have considered how numerical representations may differ between individuals according to their age or level of mathematical ability, but one issue rarely…
Bridging the Gap: Possible Roles and Contributions of Representational Momentum
ERIC Educational Resources Information Center
Hubbard, Timothy L.
2006-01-01
Memory for the position of a moving target is often displaced in the direction of anticipated motion, and this has been referred to as "representational momentum". Such displacement might aid spatial localization by bridging the gap between perception and action, and might reflect a second-order isomorphism between subjective consequences of…
ERIC Educational Resources Information Center
Bergey, Bradley W.; Cromley, Jennifer G.; Newcombe, Nora S.
2015-01-01
There is growing evidence that targeted instruction can improve diagram comprehension, yet one of the skills identified in the diagram comprehension literature--coordinating multiple representations--has rarely been directly taught to students and tested as a classroom intervention. We created a Coordinating Multiple Representation (CMR)…
Graphic Display of Linguistic Information in English as a Foreign Language Reading
ERIC Educational Resources Information Center
Suzuki, Akio; Sato, Takeshi; Awazu, Shunji
2008-01-01
Two studies investigated the advantage and instructional effectiveness of the spatial graphic representation of an English sentence with coordinators over a linear sentential representation in English as a foreign language (EFL) reading settings. Experiment 1, Study 1, examined whether readers studying EFL could better comprehend the sentence--in…
First-Graders' Spatial-Mathematical Reasoning about Plane and Solid Shapes and Their Representations
ERIC Educational Resources Information Center
Hallowell, David A.; Okamoto, Yukari; Romo, Laura F.; La Joy, Jonna R.
2015-01-01
The primary goal of the study was to explore first-grade children's reasoning about plane and solid shapes across various kinds of geometric representations. Children were individually interviewed while completing a shape-matching task developed for this study. This task required children to compose and decompose geometric figures to identify…
The Role of Gesture in Supporting Mental Representations: The Case of Mental Abacus Arithmetic
ERIC Educational Resources Information Center
Brooks, Neon B.; Barner, David; Frank, Michael; Goldin-Meadow, Susan
2018-01-01
People frequently gesture when problem-solving, particularly on tasks that require spatial transformation. Gesture often facilitates task performance by interacting with internal mental representations, but how this process works is not well understood. We investigated this question by exploring the case of mental abacus (MA), a technique in which…
State-wide monitoring based on probability survey designs requires a spatially explicit representation of all streams and rivers of interest within a state, i.e., a sample frame. The sample frame should be the best available map representation of the resource. Many stream progr...
From innervation density to tactile acuity: 1. Spatial representation.
Brown, Paul B; Koerber, H Richard; Millecchia, Ronald
2004-06-11
We tested the hypothesis that the population receptive field representation (a superposition of the excitatory receptive field areas of cells responding to a tactile stimulus) provides spatial information sufficient to mediate one measure of static tactile acuity. In psychophysical tests, two-point discrimination thresholds on the hindlimbs of adult cats varied as a function of stimulus location and orientation, as they do in humans. A statistical model of the excitatory low threshold mechanoreceptive fields of spinocervical, postsynaptic dorsal column and spinothalamic tract neurons was used to simulate the population receptive field representations in this neural population of the one- and two-point stimuli used in the psychophysical experiments. The simulated and observed thresholds were highly correlated. Simulated and observed thresholds' relations to physiological and anatomical variables such as stimulus location and orientation, receptive field size and shape, map scale, and innervation density were strikingly similar. Simulated and observed threshold variations with receptive field size and map scale obeyed simple relationships predicted by the signal detection model, and were statistically indistinguishable from each other. The population receptive field representation therefore contains information sufficient for this discrimination.
Grid-cell representations in mental simulation
Bellmund, Jacob LS; Deuker, Lorena; Navarro Schröder, Tobias; Doeller, Christian F
2016-01-01
Anticipating the future is a key motif of the brain, possibly supported by mental simulation of upcoming events. Rodent single-cell recordings suggest the ability of spatially tuned cells to represent subsequent locations. Grid-like representations have been observed in the human entorhinal cortex during virtual and imagined navigation. However, hitherto it remains unknown if grid-like representations contribute to mental simulation in the absence of imagined movement. Participants imagined directions between building locations in a large-scale virtual-reality city while undergoing fMRI without re-exposure to the environment. Using multi-voxel pattern analysis, we provide evidence for representations of absolute imagined direction at a resolution of 30° in the parahippocampal gyrus, consistent with the head-direction system. Furthermore, we capitalize on the six-fold rotational symmetry of grid-cell firing to demonstrate a 60° periodic pattern-similarity structure in the entorhinal cortex. Our findings imply a role of the entorhinal grid-system in mental simulation and future thinking beyond spatial navigation. DOI: http://dx.doi.org/10.7554/eLife.17089.001 PMID:27572056
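A common way to quantify the six-fold rotational symmetry referred to here is to regress a signal (for example, pairwise pattern similarity) onto sine and cosine regressors with 60° periodicity and take the amplitude of the fitted modulation. The sketch below shows that generic regression; it is not the published analysis pipeline, and the variable names are assumptions.

```python
import numpy as np

def hexadirectional_modulation(similarity, delta_deg):
    """Test for a 60-degree periodic structure in pattern similarity.
    `similarity` holds pairwise similarities between activation patterns;
    `delta_deg` holds the corresponding angular differences between
    imagined directions. Returns the amplitude of the six-fold component.
    Generic regression sketch, not the study's pipeline.
    """
    delta = np.deg2rad(np.asarray(delta_deg, dtype=float))
    X = np.column_stack([np.cos(6 * delta), np.sin(6 * delta),
                         np.ones_like(delta)])        # six-fold regressors
    y = np.asarray(similarity, dtype=float)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.hypot(beta[0], beta[1])                 # modulation amplitude
```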
Langley, Keith; Anderson, Stephen J
2010-08-06
To represent the local orientation and energy of a 1-D image signal, many models of early visual processing employ bandpass quadrature filters, formed by combining the original signal with its Hilbert transform. However, representations capable of estimating an image signal's 2-D phase have been largely ignored. Here, we consider 2-D phase representations using a method based upon the Riesz transform. For spatial images there exist two Riesz transformed signals and one original signal from which orientation, phase and energy may be represented as a vector in 3-D signal space. We show that these image properties may be represented by a Singular Value Decomposition (SVD) of the higher-order derivatives of the original and the Riesz transformed signals. We further show that the expected responses of even and odd symmetric filters from the Riesz transform may be represented by a single signal autocorrelation function, which is beneficial in simplifying Bayesian computations for spatial orientation. Importantly, the Riesz transform allows one to weight linearly across orientation using both symmetric and asymmetric filters to account for some perceptual phase distortions observed in image signals - notably one's perception of edge structure within plaid patterns whose component gratings are either equal or unequal in contrast. Finally, exploiting the benefits that arise from the Riesz definition of local energy as a scalar quantity, we demonstrate the utility of Riesz signal representations in estimating the spatial orientation of second-order image signals. We conclude that the Riesz transform may be employed as a general tool for 2-D visual pattern recognition by its virtue of representing phase, orientation and energy as orthogonal signal quantities.
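In the Fourier domain the two Riesz components have the transfer functions -iu/|u| and -iv/|u|, from which local orientation, phase and energy can be read off as orthogonal quantities. The sketch below implements that basic monogenic-signal construction with NumPy FFTs; the band-pass pre-filtering and the SVD of higher-order derivatives discussed in the abstract are omitted.

```python
import numpy as np

def riesz_features(image):
    """Compute the two Riesz-transformed signals of a 2-D image and derive
    local orientation, phase and energy. FFT-based sketch; a practical
    implementation would band-pass filter the image first.
    """
    f = np.asarray(image, dtype=float)
    F = np.fft.fft2(f)
    u = np.fft.fftfreq(f.shape[0])[:, None]
    v = np.fft.fftfreq(f.shape[1])[None, :]
    radius = np.sqrt(u**2 + v**2)
    radius[0, 0] = 1.0                        # avoid division by zero at DC
    R1 = np.real(np.fft.ifft2(-1j * u / radius * F))   # first Riesz component
    R2 = np.real(np.fft.ifft2(-1j * v / radius * F))   # second Riesz component
    orientation = np.arctan2(R2, R1)                   # local orientation
    energy = np.sqrt(f**2 + R1**2 + R2**2)             # local energy (scalar)
    phase = np.arctan2(np.hypot(R1, R2), f)            # local phase
    return orientation, phase, energy
```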
Building Bridges to Spatial Reasoning
ERIC Educational Resources Information Center
Shumway, Jessica F.
2013-01-01
Spatial reasoning, which involves "building and manipulating mental representations of two-and three-dimensional objects and perceiving an object from different perspectives" is a critical aspect of geometric thinking and reasoning. Through building, drawing, and analyzing two-and three-dimensional shapes, students develop a foundation…
Diverse Region-Based CNN for Hyperspectral Image Classification.
Zhang, Mengmeng; Li, Wei; Du, Qian
2018-06-01
Convolutional neural network (CNN) is of great interest in machine learning and has demonstrated excellent performance in hyperspectral image classification. In this paper, we propose a classification framework, called diverse region-based CNN, which can encode semantic context-aware representation to obtain promising features. By merging a diverse set of discriminative appearance factors, the resulting CNN-based representation exhibits spatial-spectral context sensitivity that is essential for accurate pixel classification. The proposed method, exploiting diverse region-based inputs to learn contextual interactional features, is expected to have more discriminative power. The joint representation containing rich spectral and spatial information is then fed to a fully connected network, and the label of each pixel vector is predicted by a softmax layer. Experimental results with widely used hyperspectral image data sets demonstrate that the proposed method can surpass other conventional deep learning-based classifiers and other state-of-the-art classifiers.
The spatial representation of power in children.
Lu, Lifeng; Schubert, Thomas W; Zhu, Lei
2017-11-01
Previous evidence demonstrates that power is mentally represented as vertical space by adults. However, little is known about how power is mentally represented in children. The current research examines such representations. The influence of vertical information (motor cues) was tested in both an explicit power evaluation task (judge whether labels refer to powerless or powerful groups) and an incidental task (judge whether labels refer to people or animals). The results showed that when power was explicitly evaluated, vertical motor responses interfered with responding in children and adults, i.e., they responded to words representing powerful groups faster with the up than the down cursor key (and vice versa for powerless groups). However, this interference effect disappeared in the incidental task in children. The findings suggest that children have developed a spatial representation of power before they have been taught power-space associations formally, but that they do not judge power spontaneously.
Updating representations of learned scenes.
Finlay, Cory A; Motes, Michael A; Kozhevnikov, Maria
2007-05-01
Two experiments were designed to compare scene recognition reaction time (RT) and accuracy patterns following observer versus scene movement. In Experiment 1, participants memorized a scene from a single perspective. Then, either the scene was rotated or the participants moved (0-360 degrees in 36-degree increments) around the scene, and participants judged whether the objects' positions had changed. Regardless of whether the scene was rotated or the observer moved, RT increased with greater angular distance between judged and encoded views. In Experiment 2, we varied the delay (0, 6, or 12 s) between scene encoding and locomotion. Regardless of the delay, however, accuracy decreased and RT increased with angular distance. Thus, our data show that observer movement does not necessarily update representations of spatial layouts and raise questions about the effects of duration limitations and encoding points of view on the automatic spatial updating of representations of scenes.
Anosognosia as motivated unawareness: the 'defence' hypothesis revisited.
Turnbull, Oliver H; Fotopoulou, Aikaterini; Solms, Mark
2014-12-01
Anosognosia for hemiplegia has seen a century of almost continuous research, yet a definitive understanding of its mechanism remains elusive. Essentially, anosognosic patients hold quasi-delusional beliefs about their paralysed limbs, in spite of all the contrary evidence, repeated questioning, and logical argument. We review a range of findings suggesting that emotion and motivation play an important role in anosognosia. We conclude that anosognosia involves (amongst other things) a process of psychological defence. This conclusion stems from a wide variety of clinical and experimental investigations, including data on implicit awareness of deficit, fluctuations in awareness over time, and dramatic effects upon awareness of psychological interventions such as psychotherapy, reframing of the emotional consequences of the paralysis, and first versus third person perspectival manipulations. In addition, we review and refute the (eight) arguments historically raised against the 'defence' hypothesis, including the claim that a defence-based account cannot explain the lateralised nature of the disorder. We argue that damage to a well-established right-lateralised emotion regulation system, with links to psychological processes that appear to underpin allocentric spatial cognition, plays a key role in anosognosia (at least in some patients). We conclude with a discussion of implications for clinical practice.
Converging Modalities Ground Abstract Categories: The Case of Politics
Farias, Ana Rita; Garrido, Margarida V.; Semin, Gün R.
2013-01-01
Three studies are reported examining the grounding of abstract concepts across two modalities (visual and auditory) and their symbolic representation. A comparison of the outcomes across these studies reveals that the symbolic representation of political concepts and their visual and auditory modalities is convergent. In other words, the spatial relationships between specific instances of the political categories are highly overlapping across the symbolic, visual and auditory modalities. These findings suggest that abstract categories display redundancy across modal and amodal representations, and are multimodal. PMID:23593360
Allnutt, Thomas F.; McClanahan, Timothy R.; Andréfouët, Serge; Baker, Merrill; Lagabrielle, Erwann; McClennen, Caleb; Rakotomanjaka, Andry J. M.; Tianarisoa, Tantely F.; Watson, Reg; Kremen, Claire
2012-01-01
The Government of Madagascar plans to increase marine protected area coverage by over one million hectares. To assist this process, we compare four methods for marine spatial planning of Madagascar's west coast. Input data for each method was drawn from the same variables: fishing pressure, exposure to climate change, and biodiversity (habitats, species distributions, biological richness, and biodiversity value). The first method compares visual color classifications of primary variables, the second uses binary combinations of these variables to produce a categorical classification of management actions, the third is a target-based optimization using Marxan, and the fourth is conservation ranking with Zonation. We present results from each method, and compare the latter three approaches for spatial coverage, biodiversity representation, fishing cost and persistence probability. All results included large areas in the north, central, and southern parts of western Madagascar. Achieving 30% representation targets with Marxan required twice the fish catch loss than the categorical method. The categorical classification and Zonation do not consider targets for conservation features. However, when we reduced Marxan targets to 16.3%, matching the representation level of the “strict protection” class of the categorical result, the methods show similar catch losses. The management category portfolio has complete coverage, and presents several management recommendations including strict protection. Zonation produces rapid conservation rankings across large, diverse datasets. Marxan is useful for identifying strict protected areas that meet representation targets, and minimize exposure probabilities for conservation features at low economic cost. We show that methods based on Zonation and a simple combination of variables can produce results comparable to Marxan for species representation and catch losses, demonstrating the value of comparing alternative approaches during initial stages of the planning process. Choosing an appropriate approach ultimately depends on scientific and political factors including representation targets, likelihood of adoption, and persistence goals. PMID:22359534
On parts and holes: the spatial structure of the human body.
Donnelly, Maureen
2004-01-01
Spatial representation and reasoning is a central component of medical informatics. The spatial concepts most often used in medicine are not the quantitative, point-based concepts of classical geometry, but rather qualitative relations among extended objects such as body parts. A mereotopology is a formal theory of qualitative spatial relations, such as parthood and connection. This paper considers how an extension of mereotopology which includes also location relations can be used to represent and reason about the spatial structure of the human body.
The neural bases of spatial frequency processing during scene perception
Kauffmann, Louise; Ramanoël, Stephen; Peyrin, Carole
2014-01-01
Theories on visual perception agree that scenes are processed in terms of spatial frequencies. Low spatial frequencies (LSF) carry coarse information whereas high spatial frequencies (HSF) carry fine details of the scene. However, how and where spatial frequencies are processed within the brain remain unresolved questions. The present review addresses these issues and aims to identify the cerebral regions differentially involved in low and high spatial frequency processing, and to clarify their attributes during scene perception. Results from a number of behavioral and neuroimaging studies suggest that spatial frequency processing is lateralized in both hemispheres, with the right and left hemispheres predominantly involved in the categorization of LSF and HSF scenes, respectively. There is also evidence that spatial frequency processing is retinotopically mapped in the visual cortex. HSF scenes (as opposed to LSF) activate occipital areas in relation to foveal representations, while categorization of LSF scenes (as opposed to HSF) activates occipital areas in relation to more peripheral representations. Concomitantly, a number of studies have demonstrated that LSF information may reach high-order areas rapidly, allowing an initial coarse parsing of the visual scene, which could then be sent back through feedback into the occipito-temporal cortex to guide finer HSF-based analysis. Finally, the review addresses spatial frequency processing within scene-selective regions of the occipito-temporal cortex. PMID:24847226
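LSF and HSF versions of a scene of the kind discussed in this review are typically produced by low- and high-pass filtering in the Fourier domain. The sketch below uses a Gaussian filter with an illustrative cutoff (in cycles per image); the cutoff value is an assumption, not one drawn from the reviewed studies.

```python
import numpy as np

def split_spatial_frequencies(scene, cutoff_cycles=8):
    """Split a gray-scale scene into low (LSF) and high (HSF) spatial
    frequency versions with a Gaussian filter in the Fourier domain.
    `cutoff_cycles` is an illustrative cutoff in cycles per image.
    """
    img = np.asarray(scene, dtype=float)
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    y = np.arange(h) - h // 2
    x = np.arange(w) - w // 2
    radius = np.sqrt(y[:, None]**2 + x[None, :]**2)    # cycles per image
    lowpass = np.exp(-(radius / cutoff_cycles)**2 / 2)  # Gaussian low-pass
    lsf = np.real(np.fft.ifft2(np.fft.ifftshift(F * lowpass)))
    hsf = np.real(np.fft.ifft2(np.fft.ifftshift(F * (1 - lowpass))))
    return lsf, hsf
```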
NASA Astrophysics Data System (ADS)
Setyarini, M.; Liliasari, Kadarohman, Asep; Martoprawiro, Muhamad A.
2016-02-01
This study aims at describing (1) students' level comprehension; (2) factors causing difficulties to 3D comprehend molecule representation and its interconversion on chirality. Data was collected using multiple-choice test consisting of eight questions. The participants were required to give answers along with their reasoning. The test was developed based on the indicators of concept comprehension. The study was conducted to 161 college students enrolled in stereochemistry topic in the odd semester (2014/2015) from two LPTK (teacher training institutes) in Bandar Lampung and Gorontalo, and one public university in Bandung. The result indicates that college students' level of comprehension towards 3D molecule representations and its inter-conversion was 5% on high level, 22 % on the moderate level, and 73 % on the low level. The dominant factors identified as the cause of difficulties to comprehend 3D molecule representation and its interconversion were (i) the lack of spatial awareness, (ii) violation of absolute configuration determination rules, (iii) imprecise placement of observers, (iv) the lack of rotation operation, and (v) the lack of understanding of correlation between the representations. This study recommends that learning show more rigorous spatial awareness training tasks accompanied using dynamic visualization media of molecules associated. Also students learned using static molecular models can help them overcome their difficulties encountered.
The Role of the Oculomotor System in Updating Visual-Spatial Working Memory across Saccades.
Boon, Paul J; Belopolsky, Artem V; Theeuwes, Jan
2016-01-01
Visual-spatial working memory (VSWM) helps us to maintain and manipulate visual information in the absence of sensory input. It has been proposed that VSWM is an emergent property of the oculomotor system. In the present study we investigated the role of the oculomotor system in updating of spatial working memory representations across saccades. Participants had to maintain a location in memory while making a saccade to a different location. During the saccade the target was displaced, which went unnoticed by the participants. After executing the saccade, participants had to indicate the memorized location. If memory updating fully relies on cancellation driven by extraretinal oculomotor signals, the displacement should have no effect on the perceived location of the memorized stimulus. However, if postsaccadic retinal information about the location of the saccade target is used, the perceived location will be shifted according to the target displacement. As it has been suggested that maintenance of accurate spatial representations across saccades is especially important for action control, we used different ways of reporting the location held in memory; a match-to-sample task, a mouse click or by making another saccade. The results showed a small systematic target displacement bias in all response modalities. Parametric manipulation of the distance between the to-be-memorized stimulus and saccade target revealed that target displacement bias increased over time and changed its spatial profile from being initially centered on locations around the saccade target to becoming spatially global. Taken together results suggest that we neither rely exclusively on extraretinal nor on retinal information in updating working memory representations across saccades. The relative contribution of retinal signals is not fixed but depends on both the time available to integrate these signals as well as the distance between the saccade target and the remembered location.
Auditory peripersonal space in humans.
Farnè, Alessandro; Làdavas, Elisabetta
2002-10-01
In the present study we report neuropsychological evidence of the existence of an auditory peripersonal space representation around the head in humans and its characteristics. In a group of right brain-damaged patients with tactile extinction, we found that a sound delivered near the ipsilesional side of the head (20 cm) strongly extinguished a tactile stimulus delivered to the contralesional side of the head (cross-modal auditory-tactile extinction). By contrast, when an auditory stimulus was presented far from the head (70 cm), cross-modal extinction was dramatically reduced. This spatially specific cross-modal extinction was most consistently found (i.e., both in the front and back spaces) when a complex sound was presented, like a white noise burst. Pure tones produced spatially specific cross-modal extinction when presented in the back space, but not in the front space. In addition, the most severe cross-modal extinction emerged when sounds came from behind the head, thus showing that the back space is more sensitive than the front space to the sensory interaction of auditory-tactile inputs. Finally, when cross-modal effects were investigated by reversing the spatial arrangement of cross-modal stimuli (i.e., touch on the right and sound on the left), we found that an ipsilesional tactile stimulus, although inducing a small amount of cross-modal tactile-auditory extinction, did not produce any spatial-specific effect. Therefore, the selective aspects of cross-modal interaction found near the head cannot be explained by a competition between a damaged left spatial representation and an intact right spatial representation. Thus, consistent with neurophysiological evidence from monkeys, our findings strongly support the existence, in humans, of an integrated cross-modal system coding auditory and tactile stimuli near the body, that is, in the peripersonal space.
NASA Astrophysics Data System (ADS)
Mateo, Cherry May R.; Yamazaki, Dai; Kim, Hyungjun; Champathong, Adisorn; Vaze, Jai; Oki, Taikan
2017-10-01
Global-scale river models (GRMs) are core tools for providing consistent estimates of global flood hazard, especially in data-scarce regions. Due to former limitations in computational power and input datasets, most GRMs have been developed to use simplified representations of flow physics and run at coarse spatial resolutions. With increasing computational power and improved datasets, the application of GRMs to finer resolutions is becoming a reality. To support development in this direction, the suitability of GRMs for application to finer resolutions needs to be assessed. This study investigates the impacts of spatial resolution and flow connectivity representation on the predictive capability of a GRM, CaMa-Flood, in simulating the 2011 extreme flood in Thailand. Analyses show that when single downstream connectivity (SDC) is assumed, simulation results deteriorate with finer spatial resolution; Nash-Sutcliffe efficiency coefficients decreased by more than 50 % between simulation results at 10 km resolution and 1 km resolution. When multiple downstream connectivity (MDC) is represented, simulation results slightly improve with finer spatial resolution. The SDC simulations result in excessive backflows on very flat floodplains due to the restrictive flow directions at finer resolutions. MDC channels attenuated these effects by maintaining flow connectivity and flow capacity between floodplains in varying spatial resolutions. While a regional-scale flood was chosen as a test case, these findings should be universal and may have significant impacts on large- to global-scale simulations, especially in regions where mega deltas exist. These results demonstrate that a GRM can be used for higher resolution simulations of large-scale floods, provided that MDC in rivers and floodplains is adequately represented in the model structure.
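The Nash-Sutcliffe efficiency used to compare the simulations is a standard skill score: NSE = 1 - sum((sim - obs)^2) / sum((obs - mean(obs))^2), equal to 1 for a perfect simulation and below 0 when the simulation is worse than simply predicting the observed mean. A minimal implementation (variable names are illustrative):

```python
import numpy as np

def nash_sutcliffe(simulated, observed):
    """Nash-Sutcliffe efficiency (NSE) of a simulated series against
    observations. 1 = perfect; 0 = no better than the observed mean.
    """
    sim = np.asarray(simulated, dtype=float)
    obs = np.asarray(observed, dtype=float)
    return 1.0 - np.sum((sim - obs)**2) / np.sum((obs - obs.mean())**2)

# Example: a slightly noisy simulation of an observed hydrograph.
obs = np.array([100.0, 250.0, 400.0, 300.0, 150.0])
print(nash_sutcliffe(obs + np.array([5.0, -10.0, 20.0, -15.0, 5.0]), obs))
```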
Changes In The Heating Degree-days In Norway Due Toglobal Warming
NASA Astrophysics Data System (ADS)
Skaugen, T. E.; Tveito, O. E.; Hanssen-Bauer, I.
A continuous spatial representation of temperature improves the possibility to produce maps of temperature-dependent variables. A temperature scenario for the period 2021-2050 is obtained for Norway from the Max-Planck-Institute's AOGCM, the GSDIO ECHAM4/OPYC3 integration. This is done by an 'empirical downscaling method', which involves the use of empirical links between large-scale fields and local variables to deduce estimates of the local variables. The analysis is carried out at forty-six sites in Norway. Spatial representation of the anomalies of temperature in the scenario period compared to the normal period (1961-1990) is obtained with the use of spatial interpolation in a GIS. The temperature scenario indicates that we will have a warmer climate in Norway in the future, especially during the winter season. The heating degree-days (HDD) are defined as the accumulated Celsius degrees between the daily mean temperature and a threshold temperature. For Scandinavian countries, this threshold temperature is 17 degrees Celsius. The HDD is found to be a good estimate of accumulated cold. It is therefore a useful index for heating energy consumption within the heating season, and thus for power production planning. As a consequence of the increasing temperatures, the length of the heating season and the HDD within this season will decrease in Norway in the future. The heating season and the HDD are calculated at grid level with the use of a GIS. The spatial representation of the heating season and the HDD can then easily be plotted. Local information on the variables being analysed can be extracted from the spatial grid in a GIS, and the variables are thus prepared for further spatial analysis. They may also be used as input to decision-making systems.
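Under the definition given in the abstract, heating degree-days accumulate the shortfall of the daily mean temperature below the 17 °C threshold. The sketch below shows one common operationalization (summing only days below the threshold); it is an illustration, not the exact procedure used in the study.

```python
def heating_degree_days(daily_mean_temps, threshold=17.0):
    """Accumulated heating degree-days: sum of (threshold - T_mean) over
    days on which the daily mean temperature falls below the threshold
    (17 degrees Celsius for Scandinavian countries, per the abstract).
    """
    return sum(max(0.0, threshold - t) for t in daily_mean_temps)

# Example: a cold spell; the day at 20 C contributes nothing.
print(heating_degree_days([-5.0, 2.0, 10.0, 18.0, 20.0]))
```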
Testing a Dynamic Field Account of Interactions between Spatial Attention and Spatial Working Memory
Johnson, Jeffrey S.; Spencer, John P.
2016-01-01
Studies examining the relationship between spatial attention and spatial working memory (SWM) have shown that discrimination responses are faster for targets appearing at locations that are being maintained in SWM, and that location memory is impaired when attention is withdrawn during the delay. These observations support the proposal that sustained attention is required for successful retention in SWM: if attention is withdrawn, memory representations are likely to fail, increasing errors. In the present study, this proposal is reexamined in light of a neural process model of SWM. On the basis of the model's functioning, we propose an alternative explanation for the observed decline in SWM performance when a secondary task is performed during retention: SWM representations drift systematically toward the location of targets appearing during the delay. To test this explanation, participants completed a color-discrimination task during the delay interval of a spatial recall task. In the critical shifting attention condition, the color stimulus could appear either toward or away from the memorized location relative to a midline reference axis. We hypothesized that if shifting attention during the delay leads to the failure of SWM representations, there should be an increase in the variance of recall errors but no change in directional error, regardless of the direction of the shift. Conversely, if shifting attention induces drift of SWM representations—as predicted by the model—there should be systematic changes in the pattern of spatial recall errors depending on the direction of the shift. Results were consistent with the latter possibility—recall errors were biased toward the location of discrimination targets appearing during the delay. PMID:26810574
A map of abstract relational knowledge in the human hippocampal–entorhinal cortex
Garvert, Mona M; Dolan, Raymond J; Behrens, Timothy EJ
2017-01-01
The hippocampal–entorhinal system encodes a map of space that guides spatial navigation. Goal-directed behaviour outside of spatial navigation similarly requires a representation of abstract forms of relational knowledge. This information relies on the same neural system, but it is not known whether the organisational principles governing continuous maps may extend to the implicit encoding of discrete, non-spatial graphs. Here, we show that the human hippocampal–entorhinal system can represent relationships between objects using a metric that depends on associative strength. We reconstruct a map-like knowledge structure directly from a hippocampal–entorhinal functional magnetic resonance imaging adaptation signal in a situation where relationships are non-spatial rather than spatial, discrete rather than continuous, and unavailable to conscious awareness. Notably, the measure that best predicted a behavioural signature of implicit knowledge and blood oxygen level-dependent adaptation was a weighted sum of future states, akin to the successor representation that has been proposed to account for place and grid-cell firing patterns. DOI: http://dx.doi.org/10.7554/eLife.17086.001 PMID:28448253
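The "weighted sum of future states" referred to here is the successor representation, M = sum over t of gamma^t T^t = (I - gamma T)^{-1} for a transition matrix T and discount factor gamma. The sketch below computes it in closed form; the discount value and the example graph are illustrative assumptions, not parameters from the study.

```python
import numpy as np

def successor_representation(transition_matrix, gamma=0.85):
    """Successor representation M = sum_t gamma^t T^t = (I - gamma*T)^-1.
    T is a row-stochastic state-to-state transition matrix; gamma is a
    discount factor (the value here is illustrative).
    """
    T = np.asarray(transition_matrix, dtype=float)
    return np.linalg.inv(np.eye(len(T)) - gamma * T)

# Example: SR of a 4-state ring graph with uniform transitions.
ring = np.roll(np.eye(4), 1, axis=1) * 0.5 + np.roll(np.eye(4), -1, axis=1) * 0.5
print(successor_representation(ring))
```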
Intercepting a sound without vision
Vercillo, Tiziana; Tonelli, Alessia; Gori, Monica
2017-01-01
Visual information is extremely important to generate internal spatial representations. In the auditory modality, the absence of visual cues during early infancy does not preclude the development of some spatial strategies. However, specific spatial abilities might nevertheless be impaired. In the current study, we investigated the effect of early visual deprivation on the ability to localize static and moving auditory stimuli by comparing sighted and early blind individuals' performance in different spatial tasks. We also examined perceptual stability in the two groups of participants by matching localization accuracy in a static and a dynamic head condition that involved rotational head movements. Sighted participants accurately localized static and moving sounds. Their localization ability remained unchanged after rotational movements of the head. Conversely, blind participants showed a leftward bias during the localization of static sounds and a small bias for moving sounds. Moreover, head movements induced a significant bias in the direction of head motion during the localization of moving sounds. These results suggest that internal spatial representations might be body-centered in blind individuals and that in sighted people the availability of visual cues during early infancy may affect sensory-motor interactions. PMID:28481939
Neural Models of Spatial Orientation in Novel Environments
1994-01-01
tool use, the problem of self-organizing body-centered spatial representations for movement planning and spatial orientation, and the problem of...meeting of the American Association for the Advancement of Science, Boston, February, 1993. 23. Grossberg, S., annual Linnaeus Lecture, Uppsala...Congress on Neural Networks entitled 'A self-organizing neural network for learning a body-centered invariant representation of 3-D target'
The Role of Cognitive Flexibility in the Spatial Representation of Children's Drawings
ERIC Educational Resources Information Center
Ebersbach, Mirjam; Hagedorn, Helena
2011-01-01
Representing the spatial appearance of objects and scenes in drawings is a difficult task for young children in particular. In the present study, the relationship between spatial drawing and cognitive flexibility was investigated. Seven- to 11-year-olds (N = 60) were asked to copy a three-dimensional model in a drawing. The use of depth cues as an…
Space in the brain: how the hippocampal formation supports spatial cognition
Hartley, Tom; Lever, Colin; Burgess, Neil; O'Keefe, John
2014-01-01
Over the past four decades, research has revealed that cells in the hippocampal formation provide an exquisitely detailed representation of an animal's current location and heading. These findings have provided the foundations for a growing understanding of the mechanisms of spatial cognition in mammals, including humans. We describe the key properties of the major categories of spatial cells: place cells, head direction cells, grid cells and boundary cells, each of which has a characteristic firing pattern that encodes spatial parameters relating to the animal's current position and orientation. These properties also include the theta oscillation, which appears to play a functional role in the representation and processing of spatial information. Reviewing recent work, we identify some themes of current research and introduce approaches to computational modelling that have helped to bridge the different levels of description at which these mechanisms have been investigated. These range from the level of molecular biology and genetics to the behaviour and brain activity of entire organisms. We argue that the neuroscience of spatial cognition is emerging as an exceptionally integrative field which provides an ideal test-bed for theories linking neural coding, learning, memory and cognition. PMID:24366125
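As a concrete illustration of the "characteristic firing pattern" of one of these cell types, a grid cell's rate map is often idealized as the rectified sum of three plane waves oriented 60° apart. The sketch below uses this standard idealization with assumed spacing and phase values that are not taken from the review.

```python
# A standard idealization (not from the review itself): a grid cell's firing
# rate map modeled as the thresholded sum of three plane waves 60 degrees apart.
import numpy as np

spacing = 0.5                   # assumed grid spacing in metres
phase = np.array([0.1, 0.2])    # assumed spatial phase offset (metres)
k = 4 * np.pi / (np.sqrt(3) * spacing)   # wave number for the chosen spacing

xs = np.linspace(0, 2, 200)
X, Y = np.meshgrid(xs, xs)
pos = np.stack([X - phase[0], Y - phase[1]], axis=-1)

rate = np.zeros_like(X)
for theta in (0, np.pi / 3, 2 * np.pi / 3):   # three orientations 60 degrees apart
    direction = np.array([np.cos(theta), np.sin(theta)])
    rate += np.cos(k * pos @ direction)

rate = np.maximum(rate, 0)   # rectify, leaving hexagonally arranged firing fields
print(rate.shape, rate.max())
```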
Heiser, Laura M; Berman, Rebecca A; Saunders, Richard C; Colby, Carol L
2005-11-01
With each eye movement, a new image impinges on the retina, yet we do not notice any shift in visual perception. This perceptual stability indicates that the brain must be able to update visual representations to take our eye movements into account. Neurons in the lateral intraparietal area (LIP) update visual representations when the eyes move. The circuitry that supports these updated representations remains unknown, however. In this experiment, we asked whether the forebrain commissures are necessary for updating in area LIP when stimulus representations must be updated from one visual hemifield to the other. We addressed this question by recording from LIP neurons in split-brain monkeys during two conditions: stimulus traces were updated either across or within hemifields. Our expectation was that across-hemifield updating activity in LIP would be reduced or abolished after transection of the forebrain commissures. Our principal finding is that LIP neurons can update stimulus traces from one hemifield to the other even in the absence of the forebrain commissures. This finding provides the first evidence that representations in parietal cortex can be updated without the use of direct cortico-cortical links. The second main finding is that updating activity in LIP is modified in the split-brain monkey: across-hemifield signals are reduced in magnitude and delayed in onset compared with within-hemifield signals, which indicates that the pathways for across-hemifield updating are less effective in the absence of the forebrain commissures. Together these findings reveal a dynamic circuit that contributes to updating spatial representations.
Motor and linguistic linking of space and time in the cerebellum.
Oliveri, Massimiliano; Bonnì, Sonia; Turriziani, Patrizia; Koch, Giacomo; Lo Gerfo, Emanuele; Torriero, Sara; Vicario, Carmelo Mario; Petrosini, Laura; Caltagirone, Carlo
2009-11-20
Recent literature has documented spatial-temporal interactions in the human brain. The aim of the present study was to verify whether the representation of past and future is also mapped onto spatial representations, and whether the cerebellum may be a neural substrate for linking space and time in the linguistic domain. We asked whether processing of a verb's tense is influenced by the space in which the response takes place and by the semantics of the verb. Responses to the past tense were facilitated in the left space, while responses to the future tense were facilitated in the right space. Repetitive transcranial magnetic stimulation (rTMS) of the right cerebellum selectively slowed responses to the future tense of action verbs; rTMS of both cerebellar hemispheres decreased the accuracy of responses to the past tense in the left space and the future tense in the right space for non-verbs, and to the future tense in the right space for state verbs. The results suggest that the representation of past and future is mapped onto spatial formats and that motor action could be the link between the spatial and temporal dimensions. Right cerebellar and left motor brain networks could be part of the prospective brain, whose primary function is to use past experiences to anticipate future events. Both cerebellar hemispheres could play a role in establishing the grammatical rules for verb conjugation.
Local motion adaptation enhances the representation of spatial structure at EMD arrays
Lindemann, Jens P.; Egelhaaf, Martin
2017-01-01
Neuronal representation and extraction of spatial information are essential for behavioral control. For flying insects, a plausible way to gain spatial information is to exploit distance-dependent optic flow that is generated during translational self-motion. Optic flow is computed by arrays of local motion detectors retinotopically arranged in the second neuropile layer of the insect visual system. These motion detectors have adaptive response characteristics, i.e. their responses to motion with a constant or only slowly changing velocity decrease, while their sensitivity to rapid velocity changes is maintained or even increases. We analyzed by a modeling approach how motion adaptation affects signal representation at the output of arrays of motion detectors during simulated flight in artificial and natural 3D environments. We focused on translational flight, because spatial information is only contained in the optic flow induced by translational locomotion. Indeed, flies, bees and other insects segregate their flight into relatively long intersaccadic translational flight sections interspersed with brief and rapid saccadic turns, presumably to maximize periods of translation (80% of the flight). With a novel adaptive model of the insect visual motion pathway we could show that the motion detector responses to background structures of cluttered environments are largely attenuated as a consequence of motion adaptation, while responses to foreground objects stay constant or even increase. This conclusion even holds under the dynamic flight conditions of insects. PMID:29281631
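A minimal sketch, not the authors' model, of the kind of computation described above: an array of correlation-type elementary motion detectors whose output is divisively normalized by a slow estimate of its own recent magnitude, so that responses to sustained constant-velocity motion decay while responses to rapid velocity changes are preserved. Time constants and the stimulus are illustrative assumptions.

```python
# Minimal sketch (not the authors' model) of an array of correlation-type
# elementary motion detectors (EMDs) with a simple adaptive gain that
# attenuates responses to sustained, constant-velocity motion.
import numpy as np

def lowpass(signal, tau, dt=1.0):
    """First-order low-pass filter applied along the time axis (axis 0)."""
    out = np.zeros_like(signal)
    alpha = dt / (tau + dt)
    for t in range(1, signal.shape[0]):
        out[t] = out[t - 1] + alpha * (signal[t] - out[t - 1])
    return out

def emd_array(luminance, tau_delay=10.0, tau_adapt=200.0):
    """luminance: (time, n_photoreceptors) array. Returns adapted EMD outputs
    of shape (time, n_photoreceptors - 1)."""
    delayed = lowpass(luminance, tau_delay)
    # Correlate the delayed signal of one photoreceptor with the undelayed
    # signal of its neighbour, in both directions, and subtract (opponent output).
    raw = delayed[:, :-1] * luminance[:, 1:] - luminance[:, :-1] * delayed[:, 1:]
    # Adaptive gain: divide by a slow estimate of recent response magnitude,
    # so sustained motion is attenuated while rapid changes remain visible.
    slow = lowpass(np.abs(raw), tau_adapt)
    return raw / (1.0 + slow)

# Example stimulus: a sine-wave grating drifting at constant velocity.
t = np.arange(2000)[:, None]
x = np.arange(40)[None, :]
stimulus = np.sin(2 * np.pi * (0.1 * x - 0.01 * t))
response = emd_array(stimulus)
print(response.shape, response[:100].mean(), response[-100:].mean())
```

The early and late response means printed at the end illustrate the adaptation effect: the array's output to the unchanging drift declines over time.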
3-D vision and figure-ground separation by visual cortex.
Grossberg, S
1994-01-01
A neural network theory of three-dimensional (3-D) vision, called FACADE theory, is described. The theory proposes a solution of the classical figure-ground problem for biological vision. It does so by suggesting how boundary representations and surface representations are formed within a boundary contour system (BCS) and a feature contour system (FCS). The BCS and FCS interact reciprocally to form 3-D boundary and surface representations that are mutually consistent. Their interactions generate 3-D percepts wherein occluding and occluded object parts are separated, completed, and grouped. The theory clarifies how preattentive processes of 3-D perception and figure-ground separation interact reciprocally with attentive processes of spatial localization, object recognition, and visual search. A new theory of stereopsis is proposed that predicts how cells sensitive to multiple spatial frequencies, disparities, and orientations are combined by context-sensitive filtering, competition, and cooperation to form coherent BCS boundary segmentations. Several factors contribute to figure-ground pop-out, including: boundary contrast between spatially contiguous boundaries, whether due to scenic differences in luminance, color, spatial frequency, or disparity; partially ordered interactions from larger spatial scales and disparities to smaller scales and disparities; and surface filling-in restricted to regions surrounded by a connected boundary. Phenomena such as 3-D pop-out from a 2-D picture, Da Vinci stereopsis, 3-D neon color spreading, completion of partially occluded objects, and figure-ground reversals are analyzed. The BCS and FCS subsystems model aspects of how the two parvocellular cortical processing streams that join the lateral geniculate nucleus to prestriate cortical area V4 interact to generate a multiplexed representation of Form-And-Color-And-DEpth, or FACADE, within area V4. Area V4 is suggested to support figure-ground separation and to interact with cortical mechanisms of spatial attention, attentive object learning, and visual search. Adaptive resonance theory (ART) mechanisms model aspects of how prestriate visual cortex interacts reciprocally with a visual object recognition system in inferotemporal (IT) cortex for purposes of attentive object learning and categorization. Object attention mechanisms of the What cortical processing stream through IT cortex are distinguished from spatial attention mechanisms of the Where cortical processing stream through parietal cortex. Parvocellular BCS and FCS signals interact with the model What stream. Parvocellular FCS and magnocellular motion BCS signals interact with the model Where stream.
NASA Astrophysics Data System (ADS)
Hangouët, J.-F.
2015-08-01
The many facets encompassed by an expression such as "quality of spatial data" can be considered a specific domain of reality worthy of formal description, i.e. of ontological abstraction. Various ontologies for data quality elements have already been proposed in the literature. Today, the system of quality elements is most commonly used and discussed according to the configuration set out in the "data dictionary for data quality" of the international standard ISO 19157. Our contribution proposes an alternative view, founded on a perspective that focuses on the specificity of spatial data as a product: the representation perspective, in which data in the computer are meant to depict things of the geographic world and to be interpreted as such. The resulting ontology introduces new elements, whose usefulness is illustrated with orthoimagery examples.
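As a rough illustration of the contrast drawn above, the sketch below encodes the familiar ISO 19157 quality element categories as simple records and adds a hypothetical representation-centred element for an orthoimagery case. The concrete names and fields are assumptions made for illustration only; they are not taken from the paper or from the standard's data dictionary.

```python
# Illustrative sketch only: ISO 19157-style quality element categories encoded
# as data classes, contrasted with a hypothetical "representation-centred"
# element in the spirit of the abstract. Field and instance names are assumptions.
from dataclasses import dataclass

@dataclass
class QualityElement:
    name: str
    measure: str            # what is measured
    evaluation_scope: str   # e.g. dataset, feature type, feature

# Commonly cited ISO 19157 element categories (listed here for illustration).
iso_19157_elements = [
    QualityElement("completeness", "excess or absence of items", "dataset"),
    QualityElement("logical consistency", "adherence to logical rules", "dataset"),
    QualityElement("positional accuracy", "closeness of coordinate values", "feature"),
    QualityElement("thematic accuracy", "correctness of attributes and classes", "feature"),
    QualityElement("temporal quality", "quality of temporal attributes", "feature"),
]

# A hypothetical representation-centred element: quality assessed by how well
# the data depict the geographic things they stand for (e.g. for orthoimagery,
# how faithfully the image shows what was on the ground at capture time).
@dataclass
class RepresentationElement(QualityElement):
    represented_phenomenon: str = "unspecified"
    interpretation_rule: str = "unspecified"

ortho_example = RepresentationElement(
    name="depiction fidelity",
    measure="agreement between image content and depicted ground features",
    evaluation_scope="orthoimage tile",
    represented_phenomenon="land cover visible at capture time",
)
print(ortho_example)
```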