Teijeiro, E J; Macías, R J; Morales, J M; Guerra, E; López, G; Alvarez, L M; Fernández, F; Maragoto, C; Seijo, F; Alvarez, E
The Neurosurgical Deep Recording System (NDRS), running on a personal computer, takes the place of complex electronic equipment for recording and processing deep cerebral electrical activity as a guide in stereotaxic functional neurosurgery. It also broadens the possibilities for presenting information in direct graphic form, with automatic management and enough flexibility to implement different analyses. This paper describes the automatic simultaneous graphic representation in three nearly orthogonal planes available in the new 5.1 version of NDRS, intended to facilitate the analysis of anatomophysiological correlation when localizing deep brain structures during minimal-access surgery. The new version can automatically show the spatial behaviour of signals recorded along the electrode's path through the brain, superimposed simultaneously on sagittal, coronal and axial sections of an anatomical brain atlas, after automatically adjusting the scale to the dimensions of each individual patient's brain. The same information can also be shown in a three-dimensional representation of the intersecting planes themselves. The NDRS system has been used successfully in Spain and Cuba in over 300 functional neurosurgery operations. The new version further facilitates the analysis of spatial anatomophysiological correlation for the localization of brain structures, and has contributed to increasing the precision and safety of surgical target selection in the treatment of Parkinson's disease and other movement disorders.
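The per-patient scale adjustment described above amounts to a proportional rescaling of the recorded electrode trajectory into atlas space before it is overlaid on the sagittal, coronal and axial sections. The following is a minimal sketch of that idea only; the coordinates, brain dimensions and function name are hypothetical and are not taken from NDRS.

```python
import numpy as np

def scale_to_atlas(track_mm, patient_dims_mm, atlas_dims_mm):
    """Proportionally rescale an electrode track (N x 3, in mm) so that the
    patient's brain dimensions match the atlas dimensions, axis by axis."""
    track_mm = np.asarray(track_mm, dtype=float)
    factors = np.asarray(atlas_dims_mm, dtype=float) / np.asarray(patient_dims_mm, dtype=float)
    return track_mm * factors

# Hypothetical electrode path sampled along the trajectory (x, y, z in mm)
track = np.column_stack([np.full(50, 12.0),
                         np.linspace(30.0, -20.0, 50),
                         np.linspace(60.0, 10.0, 50)])
atlas_track = scale_to_atlas(track,
                             patient_dims_mm=(140.0, 170.0, 120.0),
                             atlas_dims_mm=(136.0, 172.0, 118.0))

# The rescaled track can then be overlaid on sagittal (y-z), coronal (x-z)
# and axial (x-y) atlas sections by dropping the orthogonal coordinate.
sagittal_yz = atlas_track[:, [1, 2]]
```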
Medendorp, W. P.
2015-01-01
It is known that the brain uses multiple reference frames to code spatial information, including eye-centered and body-centered frames. When we move our body in space, these internal representations are no longer in register with external space, unless they are actively updated. Whether the brain updates multiple spatial representations in parallel, or whether it restricts its updating mechanisms to a single reference frame from which other representations are constructed, remains an open question. We developed an optimal integration model to simulate the updating of visual space across body motion in multiple or single reference frames. To test this model, we designed an experiment in which participants had to remember the location of a briefly presented target while being translated sideways. The behavioral responses were in agreement with a model that uses a combination of eye- and body-centered representations, weighted according to the reliability with which the target location is stored and updated in each reference frame. Our findings suggest that the brain simultaneously updates multiple spatial representations across body motion. Because both representations are kept in sync, they can be optimally combined to provide a more precise estimate of visual locations in space than single-frame updating mechanisms would allow. PMID:26490289
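The reliability-weighted combination described in this abstract is standard inverse-variance (maximum-likelihood) cue integration. A minimal numpy sketch follows; the locations and variances are invented for illustration and are not values from the study.

```python
import numpy as np

def integrate(est_eye, var_eye, est_body, var_body):
    """Optimally combine two estimates of the same target location,
    weighting each by its reliability (inverse variance)."""
    w_eye = (1.0 / var_eye) / (1.0 / var_eye + 1.0 / var_body)
    w_body = 1.0 - w_eye
    combined = w_eye * est_eye + w_body * est_body
    combined_var = 1.0 / (1.0 / var_eye + 1.0 / var_body)  # never larger than either input variance
    return combined, combined_var

# Illustrative numbers: after updating across a body translation the
# eye-centered estimate is assumed noisier than the body-centered one.
loc, var = integrate(est_eye=2.0, var_eye=4.0, est_body=3.0, var_body=1.0)
print(loc, var)   # 2.8, 0.8 -> dominated by the more reliable frame
```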
[Cognitive/affective processes, social interaction and social structure].
Cicourel, Aaron V
2012-01-01
Research on the brain and structural analysis overlap but have most often developed along independent lines. Here we consider biological mechanisms and environmental pressures for survival as simultaneously creating a gradual intersection of these various registers and changes in collaborative social interaction and communicative skills. We consider how the ways humans have learned to characterize their brain life often depend on unexamined "representational redescriptions" that facilitate the depiction of practices.
Wehbe, Leila; Murphy, Brian; Talukdar, Partha; Fyshe, Alona; Ramdas, Aaditya; Mitchell, Tom
2014-01-01
Story understanding involves many perceptual and cognitive subprocesses, from perceiving individual words, to parsing sentences, to understanding the relationships among the story characters. We present an integrated computational model of reading that incorporates these and additional subprocesses, simultaneously discovering their fMRI signatures. Our model predicts the fMRI activity associated with reading arbitrary text passages, well enough to distinguish which of two story segments is being read with 74% accuracy. This approach is the first to simultaneously track diverse reading subprocesses during complex story processing and predict the detailed neural representation of diverse story features, ranging from visual word properties to the mention of different story characters and different actions they perform. We construct brain representation maps that replicate many results from a wide range of classical studies that each focus on one aspect of language processing, and offer new insights into which types of information are processed by the different areas involved in language processing. Additionally, this approach is promising for studying individual differences: it can be used to create single-subject maps that may potentially be used to measure reading comprehension and diagnose reading disorders.
Representational Similarity Analysis – Connecting the Branches of Systems Neuroscience
Kriegeskorte, Nikolaus; Mur, Marieke; Bandettini, Peter
2008-01-01
A fundamental challenge for systems neuroscience is to quantitatively relate its three major branches of research: brain-activity measurement, behavioral measurement, and computational modeling. Using measured brain-activity patterns to evaluate computational network models is complicated by the need to define the correspondence between the units of the model and the channels of the brain-activity data, e.g., single-cell recordings or voxels from functional magnetic resonance imaging (fMRI). Similar correspondence problems complicate relating activity patterns between different modalities of brain-activity measurement (e.g., fMRI and invasive or scalp electrophysiology), and between subjects and species. In order to bridge these divides, we suggest abstracting from the activity patterns themselves and computing representational dissimilarity matrices (RDMs), which characterize the information carried by a given representation in a brain or model. Building on a rich psychological and mathematical literature on similarity analysis, we propose a new experimental and data-analytical framework called representational similarity analysis (RSA), in which multi-channel measures of neural activity are quantitatively related to each other and to computational theory and behavior by comparing RDMs. We demonstrate RSA by relating representations of visual objects as measured with fMRI in early visual cortex and the fusiform face area to computational models spanning a wide range of complexities. The RDMs are simultaneously related via second-level application of multidimensional scaling and tested using randomization and bootstrap techniques. We discuss the broad potential of RSA, including novel approaches to experimental design, and argue that these ideas, which have deep roots in psychology and neuroscience, will allow the integrated quantitative analysis of data from all three branches, thus contributing to a more unified systems neuroscience. PMID:19104670
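A minimal sketch of the core RSA computation, under the assumption of correlation-distance RDMs and Spearman comparison of their upper triangles (other dissimilarity measures and comparison statistics are possible); the data below are random and purely illustrative.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

def rdm(patterns):
    """Representational dissimilarity matrix: one row per condition, one
    column per measurement channel (voxel, electrode, model unit).
    Dissimilarity is 1 - Pearson correlation between activity patterns."""
    return squareform(pdist(patterns, metric="correlation"))

def compare_rdms(rdm_a, rdm_b):
    """Relate two representations by rank-correlating the upper triangles
    of their RDMs; no channel-to-channel correspondence is required."""
    iu = np.triu_indices_from(rdm_a, k=1)
    return spearmanr(rdm_a[iu], rdm_b[iu]).correlation

# Toy data: 20 conditions measured in a brain region (100 voxels)
# and in a model (50 units); only the RDMs need to be comparable.
rng = np.random.default_rng(0)
brain = rng.normal(size=(20, 100))
model = rng.normal(size=(20, 50))
print(compare_rdms(rdm(brain), rdm(model)))
```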
Multiple foci of spatial attention in multimodal working memory.
Katus, Tobias; Eimer, Martin
2016-11-15
The maintenance of sensory information in working memory (WM) is mediated by the attentional activation of stimulus representations that are stored in perceptual brain regions. Using event-related potentials (ERPs), we measured tactile and visual contralateral delay activity (tCDA/CDA components) in a bimodal WM task to concurrently track the attention-based maintenance of information stored in anatomically segregated (somatosensory and visual) brain areas. Participants received tactile and visual sample stimuli on both sides, and in different blocks, memorized these samples on the same side or on opposite sides. After a retention delay, memory was unpredictably tested for touch or vision. In the same side blocks, tCDA and CDA components simultaneously emerged over the same hemisphere, contralateral to the memorized tactile/visual sample set. In opposite side blocks, these two components emerged over different hemispheres, but had the same sizes and onset latencies as in the same side condition. Our results reveal distinct foci of tactile and visual spatial attention that were concurrently maintained on task-relevant stimulus representations in WM. The independence of spatially-specific biasing mechanisms for tactile and visual WM content suggests that multimodal information is stored in distributed perceptual brain areas that are activated through modality-specific processes that can operate simultaneously and largely independently of each other. Copyright © 2016 Elsevier Inc. All rights reserved.
Lohse, Christian; Bassett, Danielle S; Lim, Kelvin O; Carlson, Jean M
2014-10-01
Human brain anatomy and function display a combination of modular and hierarchical organization, suggesting the importance of both cohesive structures and variable resolutions in the facilitation of healthy cognitive processes. However, tools to simultaneously probe these features of brain architecture require further development. We propose and apply a set of methods to extract cohesive structures in network representations of brain connectivity using multi-resolution techniques. We employ a combination of soft thresholding, windowed thresholding, and resolution in community detection, that enable us to identify and isolate structures associated with different weights. One such mesoscale structure is bipartivity, which quantifies the extent to which the brain is divided into two partitions with high connectivity between partitions and low connectivity within partitions. A second, complementary mesoscale structure is modularity, which quantifies the extent to which the brain is divided into multiple communities with strong connectivity within each community and weak connectivity between communities. Our methods lead to multi-resolution curves of these network diagnostics over a range of spatial, geometric, and structural scales. For statistical comparison, we contrast our results with those obtained for several benchmark null models. Our work demonstrates that multi-resolution diagnostic curves capture complex organizational profiles in weighted graphs. We apply these methods to the identification of resolution-specific characteristics of healthy weighted graph architecture and altered connectivity profiles in psychiatric disease.
Integrating robotic action with biologic perception: A brain-machine symbiosis theory
NASA Astrophysics Data System (ADS)
Mahmoudi, Babak
In patients with motor disability the natural cyclic flow of information between the brain and the external environment is disrupted by their limb impairment. Brain-Machine Interfaces (BMIs) aim to provide new communication channels between the brain and environment by direct translation of the brain's internal states into actions. For enabling the user in a wide range of daily life activities, the challenge is designing neural decoders that autonomously adapt to different tasks, environments, and to changes in the pattern of neural activity. In this dissertation, a novel decoding framework for BMIs is developed in which a computational agent autonomously learns how to translate neural states into action based on maximization of a measure of the goal shared between the user and the agent. Since the agent and the brain share the same goal, a symbiotic relationship between them will evolve; this decoding paradigm is therefore called a Brain-Machine Symbiosis (BMS) framework. A decoding agent was implemented within the BMS framework based on the Actor-Critic method of Reinforcement Learning. The role of the Actor as a neural decoder was to find a mapping between the neural representation of motor states in the primary motor cortex (MI) and robot actions in order to solve reaching tasks. The Actor learned the optimal control policy using an evaluative feedback that was estimated by the Critic directly from the user's neural activity in the Nucleus Accumbens (NAcc). Through a series of computational neuroscience studies in a cohort of rats it was demonstrated that NAcc could provide a useful evaluative feedback by predicting the increase or decrease in the probability of earning reward based on the environmental conditions. Using a closed-loop BMI simulator, it was demonstrated that the Actor-Critic decoding architecture was able to adapt to different tasks as well as to changes in the pattern of neural activity. The custom design of a dual micro-wire array enabled simultaneous implantation in MI and NAcc for the development of a full closed-loop system. The Actor-Critic decoding architecture was able to solve the brain-controlled reaching task using a robotic arm by capturing the interdependency between the simultaneous action representation in MI and reward expectation in NAcc.
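The Actor-Critic arrangement can be illustrated with a toy tabular sketch in which the actor maps a discretized "neural state" to one of a few robot actions and the critic turns a reward stand-in (playing the role of the NAcc-derived evaluative feedback) into a teaching signal. This is not the dissertation's implementation; the states, actions, and reward rule are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n_states, n_actions = 8, 4                 # discretized MI states, robot actions
actor = np.zeros((n_states, n_actions))    # action preferences
critic = np.zeros(n_states)                # state values learned from evaluative feedback
alpha_actor, alpha_critic = 0.1, 0.2

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

for trial in range(5000):
    s = rng.integers(n_states)                        # observed neural state
    a = rng.choice(n_actions, p=softmax(actor[s]))    # actor selects a robot action
    # Stand-in for the NAcc-derived evaluative feedback: reward when the action
    # matches the (hypothetical) intended movement for that state.
    r = 1.0 if a == s % n_actions else 0.0
    td_error = r - critic[s]                          # evaluative teaching signal
    critic[s] += alpha_critic * td_error              # critic update
    actor[s, a] += alpha_actor * td_error             # actor reinforces or suppresses the action

print(np.argmax(actor, axis=1))                       # learned state -> action mapping
```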
Hosseinbor, A. Pasha; Chung, Moo K.; Koay, Cheng Guan; Schaefer, Stacey M.; van Reekum, Carien M.; Schmitz, Lara Peschke; Sutterer, Matt; Alexander, Andrew L.; Davidson, Richard J.
2015-01-01
Image-based parcellation of the brain often leads to multiple disconnected anatomical structures, which pose significant challenges for analyses of morphological shape. Existing shape models, such as the widely used spherical harmonic (SPHARM) representation, assume topological invariance, so they are unable to simultaneously parameterize multiple disjoint structures. In such a situation, SPHARM has to be applied separately to each individual structure. We present a novel surface parameterization technique using 4D hyperspherical harmonics to represent multiple disjoint objects as a single analytic function, terming it HyperSPHARM. The underlying idea behind HyperSPHARM is to stereographically project an entire collection of disjoint 3D objects onto the 4D hypersphere and subsequently parameterize them simultaneously with the 4D hyperspherical harmonics. Hence, HyperSPHARM allows for a holistic treatment of multiple disjoint objects, unlike SPHARM. In an imaging dataset of healthy adult human brains, we apply HyperSPHARM to the hippocampi and amygdalae. The HyperSPHARM representations are employed as a data smoothing technique, while the HyperSPHARM coefficients are utilized in a support vector machine setting for object classification. HyperSPHARM yields nearly identical results to SPHARM, as shown in the paper. Its key advantage over SPHARM is computational: HyperSPHARM possesses greater computational efficiency because it can parameterize multiple disjoint structures using far fewer basis functions, and the stereographic projection obviates SPHARM's burdensome surface flattening. In addition, HyperSPHARM can handle any type of topology, unlike SPHARM, whose analysis is confined to topologically invariant structures. PMID:25828650
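The stereographic step that HyperSPHARM relies on can be sketched compactly: points from several disjoint 3D structures are mapped together onto the unit hypersphere in 4D, where they can then be expanded in hyperspherical harmonics. The sketch below shows only that projection, with made-up coordinates; it is not the authors' code.

```python
import numpy as np

def stereographic_to_hypersphere(points_3d):
    """Inverse stereographic projection: map points in R^3 onto the unit
    3-sphere embedded in R^4 (projection pole opposite the last coordinate)."""
    p = np.asarray(points_3d, dtype=float)
    r2 = np.sum(p ** 2, axis=1, keepdims=True)
    xyz = 2.0 * p / (r2 + 1.0)
    w = (r2 - 1.0) / (r2 + 1.0)
    return np.hstack([xyz, w])        # every row has unit norm

# Two disjoint "structures" (e.g. left and right hippocampal surface vertices)
# are projected together and could then be expanded in 4D hyperspherical harmonics.
rng = np.random.default_rng(0)
struct_a = rng.normal(size=(100, 3)) + np.array([25.0, 0.0, 0.0])
struct_b = rng.normal(size=(100, 3)) + np.array([-25.0, 0.0, 0.0])
on_sphere = stereographic_to_hypersphere(np.vstack([struct_a, struct_b]))
print(np.allclose(np.linalg.norm(on_sphere, axis=1), 1.0))   # True
```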
Emotional arousal amplifies the effects of biased competition in the brain
Lee, Tae-Ho; Sakaki, Michiko; Cheng, Ruth; Velasco, Ricardo
2014-01-01
The arousal-biased competition model predicts that arousal increases the gain on neural competition between stimulus representations. Thus, the model predicts that arousal simultaneously enhances processing of salient stimuli and impairs processing of relatively less-salient stimuli. We tested this model with a simple dot-probe task. On each trial, participants were simultaneously exposed to one face image as a salient cue stimulus and one place image as a non-salient stimulus. A border around the face cue location further increased its bottom-up saliency. Before these visual stimuli were shown, one of two tones played: one that predicted a shock (increasing arousal) or one that did not. An arousal-by-saliency interaction in category-specific brain regions (fusiform face area for salient faces and parahippocampal place area for non-salient places) indicated that brain activation associated with processing the salient stimulus was enhanced under arousal whereas activation associated with processing the non-salient stimulus was suppressed under arousal. This is the first functional magnetic resonance imaging study to demonstrate that arousal can enhance information processing for prioritized stimuli while simultaneously impairing processing of non-prioritized stimuli. Thus, it goes beyond previous research to show that arousal does not uniformly enhance perceptual processing, but instead does so selectively in ways that optimize attention to highly salient stimuli. PMID:24532703
Konvalinka, Ivana; Roepstorff, Andreas
2012-01-01
Measuring brain activity simultaneously from two people interacting is intuitively appealing if one is interested in putative neural markers of social interaction. However, given the complex nature of interactions, it has proven difficult to carry out two-person brain imaging experiments in a methodologically feasible and conceptually relevant way. Only a small number of recent studies have put this into practice, using fMRI, EEG, or NIRS. Here, we review two main two-brain methodological approaches, each with two conceptual strategies. The first group has employed two-brain fMRI recordings, studying (1) turn-based interactions on the order of seconds, or (2) pseudo-interactive scenarios, where only one person is scanned at a time, investigating the flow of information between brains. The second group of studies has recorded dual EEG/NIRS from two people interacting, in (1) face-to-face turn-based interactions, investigating functional connectivity between theory-of-mind regions of interacting partners, or in (2) continuous mutual interactions on millisecond timescales, to measure coupling between the activity in one person's brain and the activity in the other's brain. We discuss the questions these approaches have addressed, and consider scenarios when simultaneous two-brain recordings are needed. Furthermore, we suggest that (1) quantification of inter-personal neural effects via measures of emergence, and (2) multivariate decoding models that generalize source-specific features of interaction, may provide novel tools to study brains in interaction. This may allow for a better understanding of social cognition as both representation and participation. PMID:22837744
Duration estimates within a modality are integrated sub-optimally
Cai, Ming Bo; Eagleman, David M.
2015-01-01
Perceived duration can be influenced by various properties of sensory stimuli. For example, visual stimuli of higher temporal frequency are perceived to last longer than those of lower temporal frequency. How does the brain form a representation of duration when each of two simultaneously presented stimuli influences perceived duration in a different way? To answer this question, we investigated the perceived duration of a pair of dynamic visual stimuli of different temporal frequencies in comparison to that of a single visual stimulus of either low or high temporal frequency. We found that the duration representation of simultaneously occurring visual stimuli is best described by weighting the estimates of duration based on each individual stimulus. However, the weighting performance deviates from the prediction of statistically optimal integration. In addition, we provided a Bayesian account to explain a difference in the apparent sensitivity of the psychometric curves introduced by the order in which the two stimuli are displayed in a two-alternative forced-choice task. PMID:26321965
Representational geometry: integrating cognition, computation, and the brain
Kriegeskorte, Nikolaus; Kievit, Rogier A.
2013-01-01
The cognitive concept of representation plays a key role in theories of brain information processing. However, linking neuronal activity to representational content and cognitive theory remains challenging. Recent studies have characterized the representational geometry of neural population codes by means of representational distance matrices, enabling researchers to compare representations across stages of processing and to test cognitive and computational theories. Representational geometry provides a useful intermediate level of description, capturing both the information represented in a neuronal population code and the format in which it is represented. We review recent insights gained with this approach in perception, memory, cognition, and action. Analyses of representational geometry can compare representations between models and the brain, and promise to explain brain computation as transformation of representational similarity structure. PMID:23876494
Representational similarity of social and valence information in the medial pFC.
Chavez, Robert S; Heatherton, Todd F
2015-01-01
The human brain is remarkably adept at integrating complex information to form unified psychological representations of agents, objects, and events in the environment. Two domains in which this ability is particularly salient are the processing of social and valence information and are supported by common cortical areas in the medial pFC (MPFC). Because social information is often embedded within valenced emotional contexts, it is possible that activation patterns within the MPFC may represent both of these types of cognitive processes when presented simultaneously. The current study tested this possibility by employing a large-scale automated meta-analysis tool, together with multivoxel pattern analysis to investigate the representational similarity of social and valence information in the MPFC during fMRI. Using a representational similarity analysis, we found a high degree of representational similarity both within social dimensions and within valence dimensions, but not across them (e.g., positive social information was highly dissimilar to negative nonsocial information), in a ventral portion of the MPFC. These results were significantly correlated with a behaviorally measured similarity structure of the same stimuli, suggesting that a psychologically meaningful representation of social and valence information is reflected by multivoxel activation patterns in the ventral MPFC.
Decoding multiple sound categories in the human temporal cortex using high resolution fMRI.
Zhang, Fengqing; Wang, Ji-Ping; Kim, Jieun; Parrish, Todd; Wong, Patrick C M
2015-01-01
Perception of sound categories is an important aspect of auditory perception. The extent to which the brain's representation of sound categories is encoded in specialized subregions or distributed across the auditory cortex remains unclear. Recent studies using multivariate pattern analysis (MVPA) of brain activations have provided important insights into how the brain decodes perceptual information. In the large existing literature on brain decoding using MVPA methods, relatively few studies have been conducted on multi-class categorization in the auditory domain. Here, we investigated the representation and processing of auditory categories within the human temporal cortex using high resolution fMRI and MVPA methods. More importantly, we considered decoding multiple sound categories simultaneously through multi-class support vector machine-recursive feature elimination (MSVM-RFE) as our MVPA tool. Results show that for all classifications the model MSVM-RFE was able to learn the functional relation between the multiple sound categories and the corresponding evoked spatial patterns and classify the unlabeled sound-evoked patterns significantly above chance. This indicates the feasibility of decoding multiple sound categories not only within but across subjects. However, the across-subject variation affects classification performance more than the within-subject variation, as the across-subject analysis has significantly lower classification accuracies. Sound category-selective brain maps were identified based on multi-class classification and revealed distributed patterns of brain activity in the superior temporal gyrus and the middle temporal gyrus. This is in accordance with previous studies, indicating that information in the spatially distributed patterns may reflect a more abstract perceptual level of representation of sound categories. Further, we show that the across-subject classification performance can be significantly improved by averaging the fMRI images over items, because the irrelevant variations between different items of the same sound category are reduced and in turn the proportion of signals relevant to sound categorization increases.
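A minimal scikit-learn sketch of the general idea of multi-class SVM classification with recursive feature elimination on simulated "voxel" patterns; it is not the MSVM-RFE implementation used in the study, and the data, feature counts, and parameters are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_trials, n_voxels, n_categories = 120, 500, 4
y = np.repeat(np.arange(n_categories), n_trials // n_categories)
X = rng.normal(size=(n_trials, n_voxels))
X[:, :20] += y[:, None] * 0.8   # only the first 20 "voxels" carry category information

# Recursive feature elimination wrapped around a linear multi-class SVM:
# repeatedly discard the voxels with the smallest weights, then classify.
clf = make_pipeline(
    RFE(LinearSVC(C=1.0, max_iter=5000), n_features_to_select=50, step=0.1),
    LinearSVC(C=1.0, max_iter=5000),
)
acc = cross_val_score(clf, X, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f} (chance = {1 / n_categories:.2f})")
```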
Drawing Connections Across Conceptually Related Visual Representations in Science
NASA Astrophysics Data System (ADS)
Hansen, Janice
This dissertation explored beliefs about learning from multiple related visual representations in science and compared those beliefs to learning outcomes. Three research questions were explored: 1) What beliefs do pre-service teachers, non-educators, and children have about learning from visual representations? 2) What format of presenting those representations is most effective for learning? 3) Can children's ability to process conceptually related science diagrams be enhanced with added support? Three groups of participants, 89 pre-service teachers, 211 adult non-educators, and 385 middle school children, were surveyed about whether they felt related visual representations presented serially or simultaneously would lead to better learning outcomes. Two experiments, one with adults and one with child participants, explored the validity of these beliefs. Pre-service teachers did not endorse either serial or simultaneous related visual representations for their own learning. They were, however, significantly more likely to indicate that children would learn better from serially presented diagrams. In direct contrast to the educators, middle school students believed they would learn better from related visual representations presented simultaneously. Experimental data indicated that the beliefs adult non-educators held about their own learning needs matched learning outcomes. These participants endorsed simultaneous presentation of related diagrams for their own learning, and a comparison of learning from related diagrams presented simultaneously with learning from the same diagrams presented serially indicated that those in the simultaneous condition were able to create more complex mental models. A second experiment compared children's learning from related diagrams across four randomly assigned conditions: serial, simultaneous, simultaneous with signaling, and simultaneous with structure mapping support. Providing middle school students with simultaneous related diagrams plus support for structure mapping led to a lessened reliance on surface features and a better understanding of the science concepts presented. These findings suggest that presenting diagrams serially in an effort to reduce cognitive load may not be preferable for learning if making connections across representations, and by extension across science concepts, is desired. Instead, providing simultaneous diagrams with structure mapping support may result in greater attention to the salient relationships between related visual representations as well as between the representations and the science concepts they depict.
[The brain and its representations in early modern Europe].
Mandressi, Rafael
2011-01-01
The history of the representations of the brain is broadly the history of the brain itself, since the observations and ideas that concern it are closely linked and even depend on each other. These representations are images, but they are also material objects produced by manipulating, cutting and fixing the brain, as well as the descriptions of these objects. The interpretations, structured by the representations, ultimately organize the knowledge.
Khaligh-Razavi, Seyed-Mahdi; Cichy, Radoslaw Martin; Pantazis, Dimitrios; Oliva, Aude
2018-06-07
Animacy and real-world size are properties that describe any object and thus bring basic order into our perception of the visual world. Here, we investigated how the human brain processes real-world size and animacy. For this, we applied representational similarity analysis to fMRI and MEG data to yield a view of brain activity with high spatial and temporal resolution, respectively. Analysis of the fMRI data revealed that a distributed and partly overlapping set of cortical regions extending from occipital to ventral and medial temporal cortex represented animacy and real-world size. Within this set, parahippocampal cortex stood out as the region representing animacy and size more strongly than most other regions. Further analysis of the detailed representational format revealed differences among regions involved in processing animacy. Analysis of the MEG data revealed overlapping temporal dynamics of animacy and real-world size processing starting at around 150 msec and provided the first neuromagnetic signature of real-world object size processing. Finally, to investigate the neural dynamics of size and animacy processing simultaneously in space and time, we combined MEG and fMRI with a novel extension of MEG-fMRI fusion by representational similarity. This analysis revealed partly overlapping and distributed spatiotemporal dynamics, with parahippocampal cortex singled out as a region that represented size and animacy persistently when other regions did not. Furthermore, the analysis highlighted the role of early visual cortex in representing real-world size. A control analysis revealed that the neural dynamics of processing animacy and size were distinct from the neural dynamics of processing low-level visual features. Together, our results provide a detailed spatiotemporal view of animacy and size processing in the human brain.
Brain-Machine Interface Enables Bimanual Arm Movements in Monkeys
Ifft, Peter J.; Shokur, Solaiman; Li, Zheng; Lebedev, Mikhail A.; Nicolelis, Miguel A. L.
2014-01-01
Brain-machine interfaces (BMIs) are artificial systems that aim to restore sensation and movement to severely paralyzed patients. However, previous BMIs enabled only single arm functionality, and control of bimanual movements was a major challenge. Here, we developed and tested a bimanual BMI that enabled rhesus monkeys to control two avatar arms simultaneously. The bimanual BMI was based on the extracellular activity of 374–497 neurons recorded from several frontal and parietal cortical areas of both cerebral hemispheres. Cortical activity was transformed into movements of the two arms with a decoding algorithm called a 5th order unscented Kalman filter (UKF). The UKF is well-suited for BMI decoding because it accounts for both characteristics of reaching movements and their representation by cortical neurons. The UKF was trained either during a manual task performed with two joysticks or by having the monkeys passively observe the movements of avatar arms. Most cortical neurons changed their modulation patterns when both arms were engaged simultaneously. Representing the two arms jointly in a single UKF decoder resulted in improved decoding performance compared with using separate decoders for each arm. As the animals’ performance in bimanual BMI control improved over time, we observed widespread plasticity in frontal and parietal cortical areas. Neuronal representation of the avatar and reach targets was enhanced with learning, whereas pairwise correlations between neurons initially increased and then decreased. These results suggest that cortical networks may assimilate the two avatar arms through BMI control. PMID:24197735
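The joint-versus-separate decoder comparison can be illustrated with a much simpler model than the paper's 5th-order unscented Kalman filter. The sketch below fits one ridge decoder for both arms and two single-arm decoders on synthetic population activity; in this purely linear toy the two approaches come out similar, whereas the paper reports a benefit for the joint representation with real neurons. All names and numbers are illustrative.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_samples, n_neurons = 2000, 200
kinematics = rng.normal(size=(n_samples, 4))      # [x_left, y_left, x_right, y_right]
tuning = rng.normal(size=(4, n_neurons))
# Many simulated neurons are modulated by both arms at once
rates = kinematics @ tuning + 0.5 * rng.normal(size=(n_samples, n_neurons))

def r2(pred, true):
    return 1 - np.sum((pred - true) ** 2) / np.sum((true - true.mean(0)) ** 2)

# Joint decoder: one model maps population activity to both arms at once
joint_pred = cross_val_predict(Ridge(alpha=1.0), rates, kinematics, cv=5)

# Separate decoders: one model per arm, each ignorant of the other arm
left_pred = cross_val_predict(Ridge(alpha=1.0), rates, kinematics[:, :2], cv=5)
right_pred = cross_val_predict(Ridge(alpha=1.0), rates, kinematics[:, 2:], cv=5)
separate_pred = np.hstack([left_pred, right_pred])

print("joint    R^2:", round(r2(joint_pred, kinematics), 3))
print("separate R^2:", round(r2(separate_pred, kinematics), 3))
```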
Two forms of touch perception in the human brain.
Spitoni, Grazia Fernanda; Galati, Gaspare; Antonucci, Gabriella; Haggard, Patrick; Pizzamiglio, Luigi
2010-12-01
We compared the judgment of distance between two simultaneous tactile stimuli applied to different body parts, with judgment of intensity of skin contact of the very same stimulation. Results on normal subjects showed that both tasks bilaterally activate parietal and frontal areas. However, the evaluation of distances on the body surface selectively activated the angular gyrus and the temporo-parieto-occipital junction in the right hemisphere. The different involvement of the brain areas in the two tactile tasks is interpreted as the need for using a Mental Body Representation (MBR) in the distance task, while the judgment of the intensity of skin deflection can be performed without the mediation of the MBR. The present study suggests that the cognitive processes underlying the two tasks are supported by partially different brain networks. In particular, our results show that metric spatial evaluation is lateralized to the right hemisphere.
Phase-amplitude coupling supports phase coding in human ECoG
Watrous, Andrew J; Deuker, Lorena; Fell, Juergen; Axmacher, Nikolai
2015-01-01
Prior studies have shown that high-frequency activity (HFA) is modulated by the phase of low-frequency activity. This phenomenon of phase-amplitude coupling (PAC) is often interpreted as reflecting phase coding of neural representations, although evidence for this link is still lacking in humans. Here, we show that PAC indeed supports phase-dependent stimulus representations for categories. Six patients with medication-resistant epilepsy viewed images of faces, tools, houses, and scenes during simultaneous acquisition of intracranial recordings. Analyzing 167 electrodes, we observed PAC at 43% of electrodes. Further inspection of PAC revealed that category specific HFA modulations occurred at different phases and frequencies of the underlying low-frequency rhythm, permitting decoding of categorical information using the phase at which HFA events occurred. These results provide evidence for categorical phase-coded neural representations and are the first to show that PAC coincides with phase-dependent coding in the human brain. DOI: http://dx.doi.org/10.7554/eLife.07886.001 PMID:26308582
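Phase-amplitude coupling of the kind analyzed here is commonly quantified with a mean-vector-length modulation index computed from Hilbert-derived phase and amplitude. A minimal sketch on a synthetic signal follows; the frequency bands and the normalization are illustrative choices, not the authors' exact pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def modulation_index(x, fs, phase_band=(4, 8), amp_band=(70, 150)):
    """Mean-vector-length estimate of phase-amplitude coupling: how strongly
    high-frequency amplitude is modulated by low-frequency phase."""
    phase = np.angle(hilbert(bandpass(x, *phase_band, fs)))
    amp = np.abs(hilbert(bandpass(x, *amp_band, fs)))
    return np.abs(np.mean(amp * np.exp(1j * phase))) / np.mean(amp)

# Synthetic 10 s signal at 1000 Hz in which 100 Hz amplitude is locked
# to the 6 Hz phase, plus noise -- PAC should be clearly above zero.
fs = 1000
t = np.arange(0, 10, 1 / fs)
slow = np.sin(2 * np.pi * 6 * t)
signal = slow + (1 + slow) * 0.3 * np.sin(2 * np.pi * 100 * t) + 0.5 * np.random.randn(t.size)
print(modulation_index(signal, fs))
```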
Unique semantic space in the brain of each beholder predicts perceived similarity
Charest, Ian; Kievit, Rogier A.; Schmitz, Taylor W.; Deca, Diana; Kriegeskorte, Nikolaus
2014-01-01
The unique way in which each of us perceives the world must arise from our brain representations. If brain imaging could reveal an individual’s unique mental representation, it could help us understand the biological substrate of our individual experiential worlds in mental health and disease. However, imaging studies of object vision have focused on commonalities between individuals rather than individual differences and on category averages rather than representations of particular objects. Here we investigate the individually unique component of brain representations of particular objects with functional MRI (fMRI). Subjects were presented with unfamiliar and personally meaningful object images while we measured their brain activity on two separate days. We characterized the representational geometry by the dissimilarity matrix of activity patterns elicited by particular object images. The representational geometry remained stable across scanning days and was unique in each individual in early visual cortex and human inferior temporal cortex (hIT). The hIT representation predicted perceived similarity as reflected in dissimilarity judgments. Importantly, hIT predicted the individually unique component of the judgments when the objects were personally meaningful. Our results suggest that hIT brain representational idiosyncrasies accessible to fMRI are expressed in an individual's perceptual judgments. The unique way each of us perceives the world thus might reflect the individually unique representation in high-level visual areas. PMID:25246586
Richter, J
2000-09-01
The investigation of Lenin's brain by the German neurobiologist Oskar Vogt from Berlin and his Russian collaborators in Moscow is one of the most exciting and simultaneously oddest chapters in the history of medicine. With its bizarre claim of being able to detect the material substrate of genius, it provoked unrealistic expectations in the public as much as it provoked strong criticism from the scientific community of brain researchers. The present paper briefly surveys the history of collecting and measuring the brains of famous persons in general, and in particular the historical, political and social circumstances of the investigation of Lenin's brain. In this connection the epistemological and technical prerequisites of architectonic brain research, and its means of topographically representing complex histo-anatomical and physiological differences in the brain cortex, are briefly discussed. The opening of Russian archives after the socio-economic turn of 1991 brought to light new background facts on Lenin's pathobiography; together with sources from German archives, a rather extensive reconstruction of the historical events between Lenin's death in 1924 and the final report of the Moscow Brain Research Institute (Institut Mozga) to the Politburo of the Russian Communist Party (Bolsheviks) in 1936 is now possible.
Li, Yuanqing; Wang, Fangyi; Chen, Yongbin; Cichocki, Andrzej; Sejnowski, Terrence
2017-09-25
At cocktail parties, our brains often simultaneously receive visual and auditory information. Although the cocktail party problem has been widely investigated under auditory-only settings, the effects of audiovisual inputs have not. This study explored the effects of audiovisual inputs in a simulated cocktail party. In our fMRI experiment, each congruent audiovisual stimulus was a synthesis of 2 facial movie clips, each of which could be classified into 1 of 2 emotion categories (crying and laughing). Visual-only (faces) and auditory-only stimuli (voices) were created by extracting the visual and auditory contents from the synthesized audiovisual stimuli. Subjects were instructed to selectively attend to 1 of the 2 objects contained in each stimulus and to judge its emotion category in the visual-only, auditory-only, and audiovisual conditions. The neural representations of the emotion features were assessed by calculating decoding accuracy and brain pattern-related reproducibility index based on the fMRI data. We compared the audiovisual condition with the visual-only and auditory-only conditions and found that audiovisual inputs enhanced the neural representations of emotion features of the attended objects instead of the unattended objects. This enhancement might partially explain the benefits of audiovisual inputs for the brain to solve the cocktail party problem. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Katus, Tobias; Müller, Matthias M; Eimer, Martin
2015-01-28
To adaptively guide ongoing behavior, representations in working memory (WM) often have to be modified in line with changing task demands. We used event-related potentials (ERPs) to demonstrate that tactile WM representations are stored in modality-specific cortical regions, that the goal-directed modulation of these representations is mediated through hemispheric-specific activation of somatosensory areas, and that the rehearsal of somatotopic coordinates in memory is accomplished by modality-specific spatial attention mechanisms. Participants encoded two tactile sample stimuli presented simultaneously to the left and right hands, before visual retro-cues indicated which of these stimuli had to be retained to be matched with a subsequent test stimulus on the same hand. Retro-cues triggered a sustained tactile contralateral delay activity component with a scalp topography over somatosensory cortex contralateral to the cued hand. Early somatosensory ERP components to task-irrelevant probe stimuli (that were presented after the retro-cues) and to subsequent test stimuli were enhanced when these stimuli appeared at the currently memorized location relative to other locations on the cued hand, demonstrating that a precise focus of spatial attention was established during the selective maintenance of tactile events in WM. These effects were observed regardless of whether participants performed the matching task with uncrossed or crossed hands, indicating that WM representations in this task were based on somatotopic rather than allocentric spatial coordinates. In conclusion, spatial rehearsal in tactile WM operates within somatotopically organized sensory brain areas that have been recruited for information storage. Copyright © 2015 Katus et al.
A Multiobjective Sparse Feature Learning Model for Deep Neural Networks.
Gong, Maoguo; Liu, Jia; Li, Hao; Cai, Qing; Su, Linzhi
2015-12-01
Hierarchical deep neural networks are currently popular learning models for imitating the hierarchical architecture of the human brain. Single-layer feature extractors are the bricks used to build deep networks. Sparse feature learning models are popular models that can learn useful representations, but most of them need a user-defined constant to control the sparsity of the representations. In this paper, we propose a multiobjective sparse feature learning model based on the autoencoder. The parameters of the model are learnt by simultaneously optimizing two objectives, reconstruction error and the sparsity of the hidden units, to find a reasonable compromise between them automatically. We design a multiobjective induced learning procedure for this model based on a multiobjective evolutionary algorithm. In the experiments, we demonstrate that the learning procedure is effective, and that the proposed multiobjective model can learn useful sparse features.
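A minimal numpy sketch of the two competing objectives named in the abstract, reconstruction error and hidden-unit sparsity, evaluated for a small population of randomly initialized tied-weight autoencoders; the multiobjective evolutionary search itself is omitted, and the architecture and sparsity measure are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 64))            # toy data: 256 samples, 64 inputs
n_hidden = 32

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def objectives(W, X):
    """Return (reconstruction error, sparsity) for a tied-weight autoencoder."""
    H = sigmoid(X @ W)                    # hidden activations
    X_hat = H @ W.T                       # linear decoder with tied weights
    recon = np.mean((X - X_hat) ** 2)     # objective 1: reconstruction error
    sparsity = np.mean(H)                 # objective 2: mean hidden activation (lower = sparser)
    return recon, sparsity

# Evaluate a random population of candidate solutions; a multiobjective
# evolutionary algorithm would select and vary these to approximate the
# Pareto front between the two objectives instead of fixing a sparsity weight.
population = [rng.normal(scale=0.1, size=(64, n_hidden)) for _ in range(20)]
scores = np.array([objectives(W, X) for W in population])

# Non-dominated (Pareto-optimal) candidates among the population
pareto = [i for i, s in enumerate(scores)
          if not any((o <= s).all() and (o < s).any() for o in scores)]
print("Pareto-optimal candidates:", pareto)
```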
Category representations in the brain are both discretely localized and widely distributed.
Shehzad, Zarrar; McCarthy, Gregory
2018-06-01
Whether category information is discretely localized or represented widely in the brain remains a contentious issue. Initial functional MRI studies supported the localizationist perspective that category information is represented in discrete brain regions. More recent fMRI studies using machine learning pattern classification techniques provide evidence for widespread distributed representations. However, these latter studies have not typically accounted for shared information. Here, we find strong support for distributed representations when brain regions are considered separately. However, localized representations are revealed by using analytical methods that separate unique from shared information among brain regions. The distributed nature of shared information and the localized nature of unique information suggest that brain connectivity may encourage spreading of information but category-specific computations are carried out in distinct domain-specific regions. NEW & NOTEWORTHY Whether visual category information is localized in unique domain-specific brain regions or distributed in many domain-general brain regions is hotly contested. We resolve this debate by using multivariate analyses to parse functional MRI signals from different brain regions into unique and shared variance. Our findings support elements of both models and show information is initially localized and then shared among other regions leading to distributed representations being observed.
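Separating unique from shared information among regions can be illustrated with a simple commonality-style variance partition: the variance a region explains uniquely is the drop in cross-validated R² when that region is removed from the full model. The sketch below uses simulated data and is not the authors' analysis.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 400
shared_signal = rng.normal(size=(n, 1))
region_a = np.hstack([shared_signal + 0.5 * rng.normal(size=(n, 1)),
                      rng.normal(size=(n, 4))])          # carries shared + unique information
region_b = np.hstack([shared_signal + 0.5 * rng.normal(size=(n, 1)),
                      rng.normal(size=(n, 4))])          # carries only the shared information
y = shared_signal[:, 0] + region_a[:, 1] + 0.5 * rng.normal(size=n)  # category-related variable

def cv_r2(X, y):
    return cross_val_score(LinearRegression(), X, y, cv=5, scoring="r2").mean()

r2_a = cv_r2(region_a, y)
r2_b = cv_r2(region_b, y)
r2_ab = cv_r2(np.hstack([region_a, region_b]), y)

unique_a = r2_ab - r2_b          # information only region A provides
unique_b = r2_ab - r2_a          # information only region B provides
shared = r2_a + r2_b - r2_ab     # information redundantly available in both regions
print(unique_a, unique_b, shared)
```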
Deep and Structured Robust Information Theoretic Learning for Image Analysis.
Deng, Yue; Bao, Feng; Deng, Xuesong; Wang, Ruiping; Kong, Youyong; Dai, Qionghai
2016-07-07
This paper presents a robust information theoretic (RIT) model to reduce the uncertainties, i.e., missing and noisy labels, in general discriminative data representation tasks. The fundamental pursuit of our model is to simultaneously learn a transformation function and a discriminative classifier that maximize the mutual information of the data and their labels in the latent space. In this general paradigm, we discuss three types of RIT implementation: linear subspace embedding, deep transformation, and structured sparse learning. In practice, the RIT and deep RIT models are applied to the image categorization task, and their performance is verified on various benchmark datasets. The structured sparse RIT is further applied to a medical image analysis task, brain MRI segmentation, which allows group-level feature selection on the brain tissues.
Dinkel, Philipp Johannes; Willmes, Klaus; Krinzinger, Helga; Konrad, Kerstin; Koten Jr, Jan Willem
2013-01-01
fMRI studies are mostly based on a group-study approach, either analyzing one group or comparing multiple groups, or on approaches that correlate brain activation with clinically relevant criteria or behavioral measures. In this study we investigate the potential of fMRI techniques focusing on individual differences in brain activation within a test-retest reliability context. We employ a single-case analysis approach, which contrasts dyscalculic children with a control group of typically developing children. In a second step, support-vector machine and cluster analysis techniques served to investigate similarities in multivariate brain activation patterns. Children were confronted with a non-symbolic number comparison and a non-symbolic exact calculation task during fMRI acquisition. Conventional second-level group comparison analysis only showed small differences around the angular gyrus bilaterally and the left parieto-occipital sulcus. Analyses based on single-case statistical procedures revealed that developmental dyscalculia is characterized by individual differences predominantly in visual processing areas. Dyscalculic children seemed to compensate for relative under-activation in the primary visual cortex through an upregulation in higher visual areas. However, overlap in deviant activation was low for the dyscalculic children, indicating that developmental dyscalculia is a disorder characterized by heterogeneous brain activation differences. Using support vector machine analysis and cluster analysis, we tried to group dyscalculic and typically developing children according to brain activation. Fronto-parietal systems seem to qualify for a distinction between the two groups. However, this was only effective when reliable brain activations from both tasks were employed simultaneously. Results suggest that deficits in number representation in the visual-parietal cortex are compensated for through finger-related aspects of number representation in fronto-parietal cortex. We conclude that dyscalculic children show large individual differences in brain activation patterns. Nonetheless, the majority of dyscalculic children can be differentiated from controls employing brain activation patterns when appropriate methods are used. PMID:24349547
Encoding probabilistic brain atlases using Bayesian inference.
Van Leemput, Koen
2009-06-01
This paper addresses the problem of creating probabilistic brain atlases from manually labeled training data. Probabilistic atlases are typically constructed by counting the relative frequency of occurrence of labels in corresponding locations across the training images. However, such an "averaging" approach generalizes poorly to unseen cases when the number of training images is limited, and provides no principled way of aligning the training datasets using deformable registration. In this paper, we generalize the generative image model implicitly underlying standard "average" atlases, using mesh-based representations endowed with an explicit deformation model. Bayesian inference is used to infer the optimal model parameters from the training data, leading to a simultaneous group-wise registration and atlas estimation scheme that encompasses standard averaging as a special case. We also use Bayesian inference to compare alternative atlas models in light of the training data, and show how this leads to a data compression problem that is intuitive to interpret and computationally feasible. Using this technique, we automatically determine the optimal amount of spatial blurring, the best deformation field flexibility, and the most compact mesh representation. We demonstrate, using 2-D training datasets, that the resulting models are better at capturing the structure in the training data than conventional probabilistic atlases. We also present experiments of the proposed atlas construction technique in 3-D, and show the resulting atlases' potential in fully-automated, pulse sequence-adaptive segmentation of 36 neuroanatomical structures in brain MRI scans.
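The conventional "averaging" atlas that this paper generalizes can be written in a few lines: label probabilities are relative frequencies across aligned training label maps, optionally blurred. In the sketch below the amount of blurring is a free parameter, whereas the paper selects it by Bayesian model comparison; the data are toy volumes, not real segmentations.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def frequency_atlas(label_maps, n_labels, smooth_sigma=None):
    """Conventional "average" atlas: at each voxel, the probability of a label
    is its relative frequency across the aligned training label maps.
    Optional Gaussian blurring trades spatial detail for generalization."""
    maps = np.stack(label_maps)                                   # (n_subjects, X, Y, Z)
    atlas = np.stack([(maps == k).mean(axis=0) for k in range(n_labels)], axis=-1)
    if smooth_sigma is not None:
        atlas = np.stack([gaussian_filter(atlas[..., k], smooth_sigma)
                          for k in range(n_labels)], axis=-1)
        atlas /= atlas.sum(axis=-1, keepdims=True)                # renormalize to probabilities
    return atlas                                                  # (X, Y, Z, n_labels)

# Toy example: 5 "subjects", 3 labels, a 16^3 volume
rng = np.random.default_rng(0)
train = [rng.integers(0, 3, size=(16, 16, 16)) for _ in range(5)]
atlas = frequency_atlas(train, n_labels=3, smooth_sigma=1.0)
print(atlas.shape, np.allclose(atlas.sum(axis=-1), 1.0))          # (16, 16, 16, 3) True
```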
Jolij, Jacob; Scholte, H Steven; van Gaal, Simon; Hodgson, Timothy L; Lamme, Victor A F
2011-12-01
Humans largely guide their behavior by their visual representation of the world. Recent studies have shown that visual information can trigger behavior within 150 msec, suggesting that visually guided responses to external events, in fact, precede conscious awareness of those events. However, is such a view correct? By using a texture discrimination task, we show that the brain relies on long-latency visual processing in order to guide perceptual decisions. Decreasing stimulus saliency leads to selective changes in long-latency visually evoked potential components reflecting scene segmentation. These latency changes are accompanied by almost equal changes in simple RTs and points of subjective simultaneity. Furthermore, we find a strong correlation between individual RTs and the latencies of scene segmentation related components in the visually evoked potentials, showing that the processes underlying these late brain potentials are critical in triggering a response. However, using the same texture stimuli in an antisaccade task, we found that reflexive, but erroneous, prosaccades, but not antisaccades, can be triggered by earlier visual processes. In other words: The brain can act quickly, but decides late. Differences between our study and earlier findings suggesting that action precedes conscious awareness can be explained by assuming that task demands determine whether a fast and unconscious, or a slower and conscious, representation is used to initiate a visually guided response.
The duality of temporal encoding – the intrinsic and extrinsic representation of time
Golan, Ronen; Zakay, Dan
2015-01-01
While time is well acknowledged for having a fundamental part in our perception, questions on how it is represented are still matters of great debate. One of the main issues in question is whether time is represented intrinsically at the neural level, or whether it is represented within dedicated brain regions. We used an fMRI block design to test whether we can impose covert encoding of the temporal features of face and natural-scene stimuli within category-selective neural populations by exposing subjects to four types of temporal variance, ranging from 0% up to 50% variance. We found a gradual increase in neural activation associated with the gradual increase in temporal variance within category-selective areas. A second-level analysis showed the same pattern of activations within known brain regions associated with time representation, such as the cerebellum, the caudate, and the thalamus. We concluded that temporal features are integral to perception and are simultaneously represented within category-selective regions and globally within dedicated regions. Our second conclusion, drawn from our covert procedure, is that time encoding, at its basic level, is an automated process that does not require attention to be allocated toward the temporal features, nor does it require dedicated resources. PMID:26379604
Gallistel, C R
2017-12-01
The representation of discrete and continuous quantities appears to be ancient and pervasive in animal brains. Because numbers are the natural carriers of these representations, we may discover that in brains, it's numbers all the way down.
Vahdat, Shahabeddin; Lungu, Ovidiu; Cohen-Adad, Julien; Marchand-Pauvert, Veronique; Benali, Habib; Doyon, Julien
2015-06-01
The spinal cord participates in the execution of skilled movements by translating high-level cerebral motor representations into musculotopic commands. Yet, the extent to which motor skill acquisition relies on intrinsic spinal cord processes remains unknown. To date, attempts to address this question were limited by difficulties in separating spinal local effects from supraspinal influences through traditional electrophysiological and neuroimaging methods. Here, for the first time, we provide evidence for local learning-induced plasticity in intact human spinal cord through simultaneous functional magnetic resonance imaging of the brain and spinal cord during motor sequence learning. Specifically, we show learning-related modulation of activity in the C6-C8 spinal region, which is independent from that of related supraspinal sensorimotor structures. Moreover, a brain-spinal cord functional connectivity analysis demonstrates that the initial linear relationship between the spinal cord and sensorimotor cortex gradually fades away over the course of motor sequence learning, while the connectivity between spinal activity and cerebellum gains strength. These data suggest that the spinal cord not only constitutes an active functional component of the human motor learning network but also contributes distinctively from the brain to the learning process. The present findings open new avenues for rehabilitation of patients with spinal cord injuries, as they demonstrate that this part of the central nervous system is much more plastic than assumed before. Yet, the neurophysiological mechanisms underlying this intrinsic functional plasticity in the spinal cord warrant further investigations.
Anderson, Andrew James; Bruni, Elia; Lopopolo, Alessandro; Poesio, Massimo; Baroni, Marco
2015-10-15
Embodiment theory predicts that mental imagery of object words recruits neural circuits involved in object perception. The degree of visual imagery present in routine thought and how it is encoded in the brain is largely unknown. We test whether fMRI activity patterns elicited by participants reading objects' names include embodied visual-object representations, and whether we can decode the representations using novel computational image-based semantic models. We first apply the image models in conjunction with text-based semantic models to test predictions of visual-specificity of semantic representations in different brain regions. Representational similarity analysis confirms that fMRI structure within ventral-temporal and lateral-occipital regions correlates most strongly with the image models and conversely text models correlate better with posterior-parietal/lateral-temporal/inferior-frontal regions. We use an unsupervised decoding algorithm that exploits commonalities in representational similarity structure found within both image model and brain data sets to classify embodied visual representations with high accuracy (8/10) and then extend it to exploit model combinations to robustly decode different brain regions in parallel. By capturing latent visual-semantic structure our models provide a route into analyzing neural representations derived from past perceptual experience rather than stimulus-driven brain activity. Our results also verify the benefit of combining multimodal data to model human-like semantic representations. Copyright © 2015 Elsevier Inc. All rights reserved.
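The representational similarity analysis used here can be illustrated with a minimal, self-contained sketch: build a representational dissimilarity matrix (RDM) from brain patterns and from a model's feature vectors for the same items, then rank-correlate the two. The toy dimensions and random data below are assumptions, not the authors' stimuli or models.

    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    def rdm(patterns):
        """Items x features matrix -> condensed representational dissimilarity vector."""
        return pdist(patterns, metric='correlation')

    rng = np.random.default_rng(1)
    brain = rng.normal(size=(10, 200))        # 10 object words x 200 voxels (toy)
    image_model = rng.normal(size=(10, 50))   # the same 10 words in a 50-d image-model space (toy)
    rho, p = spearmanr(rdm(brain), rdm(image_model))
    print(f"model-brain RDM correlation: rho={rho:.2f}, p={p:.3f}")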
Functional brain networks reconstruction using group sparsity-regularized learning.
Zhao, Qinghua; Li, Will X Y; Jiang, Xi; Lv, Jinglei; Lu, Jianfeng; Liu, Tianming
2018-06-01
Investigating functional brain networks and patterns using sparse representation of fMRI data has received significant interest in the neuroimaging community. It has been reported that sparse representation is effective in reconstructing concurrent and interactive functional brain networks. To date, however, most data-driven network reconstruction approaches rarely take anatomical structures, the substrate of brain function, into consideration. Furthermore, it has rarely been explored whether structured sparse representation with anatomical guidance could facilitate functional network reconstruction. To address this problem, in this paper we propose to reconstruct brain networks using structure-guided group sparse regression (S2GSR), in which 116 anatomical regions from the AAL template are employed as prior knowledge to guide the network reconstruction when performing sparse representation of whole-brain fMRI data. Specifically, we extract fMRI signals in standard space aligned with the AAL template. Then, by learning a global over-complete dictionary and using the learned dictionary as a set of features (regressors), the group-structured regression employs anatomical structures as group information to regress whole-brain signals. Finally, the matrix of decomposition coefficients is mapped back to the brain volume to represent functional brain networks and patterns. We use the publicly available Human Connectome Project (HCP) Q1 dataset as the test bed, and the experimental results indicate that the proposed anatomically guided structured sparse representation is effective in reconstructing concurrent functional brain networks.
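As a rough illustration of the group-sparse regression idea (anatomical regions acting as groups over dictionary atoms), the following proximal-gradient group-lasso sketch penalizes the l2 norm of each group of coefficients. It uses random toy data rather than HCP fMRI signals or a learned dictionary, and it is not the authors' S2GSR implementation.

    import numpy as np

    def group_lasso(X, y, groups, lam, n_iter=500):
        """Proximal gradient for 0.5*||y - Xw||^2 + lam * sum_g ||w_g||_2."""
        w = np.zeros(X.shape[1])
        lr = 1.0 / np.linalg.norm(X, 2) ** 2          # step size from the Lipschitz constant
        for _ in range(n_iter):
            z = w - lr * X.T @ (X @ w - y)            # gradient step on the squared loss
            for g in groups:                          # block soft-thresholding per group
                norm = np.linalg.norm(z[g])
                z[g] = 0.0 if norm == 0 else max(0.0, 1 - lr * lam / norm) * z[g]
            w = z
        return w

    rng = np.random.default_rng(0)
    X = rng.normal(size=(120, 30))                    # 120 time points x 30 dictionary atoms (toy)
    groups = [np.arange(i, i + 5) for i in range(0, 30, 5)]   # 6 "anatomical" groups of 5 atoms
    w_true = np.zeros(30)
    w_true[groups[2]] = rng.normal(size=5)            # only one group truly contributes
    y = X @ w_true + 0.1 * rng.normal(size=120)
    w_hat = group_lasso(X, y, groups, lam=5.0)
    print([round(float(np.linalg.norm(w_hat[g])), 2) for g in groups])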
Differences in the emergent coding properties of cortical and striatal ensembles
Ma, L.; Hyman, J.M.; Lindsay, A.J.; Phillips, A.G.; Seamans, J.K.
2016-01-01
The function of a given brain region is often defined by the coding properties of its individual neurons, yet how this information is combined at the ensemble level is an equally important consideration. In the present study, multiple neurons from the anterior cingulate cortex (ACC) and the dorsal striatum (DS) were recorded simultaneously as rats performed different sequences of the same three actions. Sequence and lever decoding was remarkably similar on a per-neuron basis in the two regions. At the ensemble level, sequence-specific representations in the DS appeared synchronously but transiently along with the representation of lever location, while these two streams of information appeared independently and asynchronously in the ACC. As a result the ACC achieved superior ensemble decoding accuracy overall. Thus, the manner in which information was combined across neurons in an ensemble determined the functional separation of the ACC and DS on this task. PMID:24974796
Arithmetic Memory Is Modality Specific.
Myers, Timothy; Szücs, Dénes
2015-01-01
With regard to numerical cognition and working memory, it is an open question whether numbers are stored in and retrieved from a central abstract representation or from separate notation-specific representations. This study seeks to help answer this question by utilizing the numeral modality effect (NME) in three experiments to explore how numbers are processed by the human brain. The participants were presented with numbers (1-9) as either Arabic digits or written number words (Arabic digits and dot matrices in Experiment 2) as the first (S1) and second (S2) stimuli. The participants' task was to add the first two stimuli together and verify whether the answer (S3), presented simultaneously with S2, was correct. We hypothesized that if reaction time (RT) at S2/S3 depends on the modality of S1, then numbers are retrieved from modality-specific memory stores. Indeed, RT depended on the modality of S1 whenever S2 was an Arabic digit, which argues against the concept of numbers being stored in and retrieved from a central, abstract representation.
Kuo, Po-Chih; Chen, Yong-Sheng; Chen, Li-Fen
2018-05-01
The main challenge in decoding neural representations lies in linking neural activity to representational content or abstract concepts. The transformation from a neural-based to a low-dimensional representation may hold the key to encoding perceptual processes in the human brain. In this study, we developed a novel model by which to represent two changeable features of faces: face viewpoint and gaze direction. These features are embedded in spatiotemporal brain activity derived from magnetoencephalographic data. Our decoding results demonstrate that face viewpoint and gaze direction can be represented by manifold structures constructed from brain responses in the bilateral occipital face area and right superior temporal sulcus, respectively. Our results also show that the superposition of brain activity in the manifold space reveals the viewpoints of faces as well as directions of gazes as perceived by the subject. The proposed manifold representation model provides a novel opportunity to gain further insight into the processing of information in the human brain. © 2018 Wiley Periodicals, Inc.
Ekstrom, Arne D.; Arnold, Aiden E. G. F.; Iaria, Giuseppe
2014-01-01
While the widely studied allocentric spatial representation holds a special status in neuroscience research, its exact nature and neural underpinnings continue to be the topic of debate, particularly in humans. Here, based on a review of human behavioral research, we argue that allocentric representations do not provide the kind of map-like, metric representation one might expect based on past theoretical work. Instead, we suggest that almost all tasks used in past studies involve a combination of egocentric and allocentric representation, complicating both the investigation of the cognitive basis of an allocentric representation and the task of identifying a brain region specifically dedicated to it. Indeed, as we discuss in detail, past studies suggest numerous brain regions important to allocentric spatial memory in addition to the hippocampus, including parahippocampal, retrosplenial, and prefrontal cortices. We thus argue that although allocentric computations will often require the hippocampus, particularly those involving extracting details across temporally specific routes, the hippocampus is not necessary for all allocentric computations. We instead suggest that a non-aggregate network process involving multiple interacting brain areas, including hippocampus and extra-hippocampal areas such as parahippocampal, retrosplenial, prefrontal, and parietal cortices, better characterizes the neural basis of spatial representation during navigation. According to this model, an allocentric representation does not emerge from the computations of a single brain region (i.e., hippocampus) nor is it readily decomposable into additive computations performed by separate brain regions. Instead, an allocentric representation emerges from computations partially shared across numerous interacting brain regions. We discuss our non-aggregate network model in light of existing data and provide several key predictions for future experiments. PMID:25346679
Zioga, Polina; Pollick, Frank; Ma, Minhua; Chapman, Paul; Stefanov, Kristian
2018-01-01
The fields of neural prosthetic technologies and Brain-Computer Interfaces (BCIs) have witnessed an unprecedented development in the past 15 years, bringing together theories and methods from different scientific fields, digital media, and the arts. In particular, artists have been amongst the pioneers of the design of relevant applications since their emergence in the 1960s, pushing the boundaries of applications in real-life contexts. With new research and advancements and, since 2007, new low-cost commercial-grade wireless devices, there is an increasing number of computer games, interactive installations, and performances that involve the use of these interfaces, combining scientific and creative methodologies. The vast majority of these works use the brain activity of a single participant. However, earlier as well as recent examples involve the simultaneous interaction of more than one participant or performer through Electroencephalography (EEG)-based multi-brain BCIs. In this frame, we discuss and evaluate "Enheduanna-A Manifesto of Falling," a live brain-computer cinema performance that enables for the first time the simultaneous real-time multi-brain interaction of more than two participants, including a performer and members of the audience, using a passive EEG-based BCI system in the context of a mixed-media performance. The performance was realised as a neuroscientific study conducted in a real-life setting. The raw EEG data of seven participants, one performer and two different members of the audience for each performance, were simultaneously recorded during three live events. The results reveal that the majority of the participants were able to successfully identify whether their brain activity was interacting with the live video projections or not. A correlation was found between their answers to the questionnaires, the elements of the performance that they identified as most special, and the audience's indicators of attention and emotional engagement. Also, the results obtained from the performer's data analysis are consistent with the recall of working memory representations and an increase in cognitive load. These results thus demonstrate the effectiveness of the interaction design, as well as the influence of the directing strategy, dramaturgy and narrative structure on the audience's perception, cognitive state, and engagement.
Christophel, Thomas B; Allefeld, Carsten; Endisch, Christian; Haynes, John-Dylan
2018-06-01
Traditional views of visual working memory postulate that memorized contents are stored in dorsolateral prefrontal cortex using an adaptive and flexible code. In contrast, recent studies proposed that contents are maintained by posterior brain areas using codes akin to perceptual representations. An important question is whether this reflects a difference in the level of abstraction between posterior and prefrontal representations. Here, we investigated whether neural representations of visual working memory contents are view-independent, as indicated by rotation-invariance. Using functional magnetic resonance imaging and multivariate pattern analyses, we show that when subjects memorize complex shapes, both posterior and frontal brain regions maintain the memorized contents using a rotation-invariant code. Importantly, we found the representations in frontal cortex to be localized to the frontal eye fields rather than dorsolateral prefrontal cortices. Thus, our results give evidence for the view-independent storage of complex shapes in distributed representations across posterior and frontal brain regions.
Statistical Analyses of Brain Surfaces Using Gaussian Random Fields on 2-D Manifolds
Staib, Lawrence H.; Xu, Dongrong; Zhu, Hongtu; Peterson, Bradley S.
2008-01-01
Interest in the morphometric analysis of the brain and its subregions has recently intensified because growth or degeneration of the brain in health or illness affects not only the volume but also the shape of cortical and subcortical brain regions, and new image processing techniques permit detection of small and highly localized perturbations in shape or localized volume, with remarkable precision. An appropriate statistical representation of the shape of a brain region is essential, however, for detecting, localizing, and interpreting variability in its surface contour and for identifying differences in volume of the underlying tissue that produce that variability across individuals and groups of individuals. Our statistical representation of the shape of a brain region is defined by a reference region for that region and by a Gaussian random field (GRF) that is defined across the entire surface of the region. We first select a reference region from a set of segmented brain images of healthy individuals. The GRF is then estimated as the signed Euclidean distances between points on the surface of the reference region and the corresponding points on the corresponding region in images of brains that have been coregistered to the reference. Correspondences between points on these surfaces are defined through deformations of each region of a brain into the coordinate space of the reference region using the principles of fluid dynamics. The warped, coregistered region of each subject is then unwarped into its native space, simultaneously bringing into that space the map of corresponding points that was established when the surfaces of the subject and reference regions were tightly coregistered. The proposed statistical description of the shape of surface contours makes no assumptions, other than smoothness, about the shape of the region or its GRF. The description also allows for the detection and localization of statistically significant differences in the shapes of the surfaces across groups of subjects at both a fine and coarse scale. We demonstrate the effectiveness of these statistical methods by applying them to study differences in shape of the amygdala and hippocampus in a large sample of normal subjects and in subjects with attention deficit/hyperactivity disorder (ADHD). PMID:17243583
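A highly simplified sketch of the surface representation described above: signed per-vertex displacements of coregistered subject surfaces from a reference surface (signed here by the reference normals), followed by a vertex-wise group comparison. The toy surfaces, normals and group effect are assumptions, and the sketch omits the fluid-dynamic registration and the Gaussian random field machinery of the paper.

    import numpy as np
    from scipy.stats import ttest_ind

    def signed_distances(ref_vertices, ref_normals, subj_vertices):
        """Displacement of each coregistered subject vertex from the reference,
        signed by the outward reference normal (positive = outward)."""
        return np.einsum('ij,ij->i', subj_vertices - ref_vertices, ref_normals)

    rng = np.random.default_rng(2)
    V = 500
    ref = rng.normal(size=(V, 3))                                  # toy reference surface vertices
    normals = ref / np.linalg.norm(ref, axis=1, keepdims=True)     # toy outward normals
    grp_a = np.array([signed_distances(ref, normals, ref + 0.05 * rng.normal(size=(V, 3)))
                      for _ in range(20)])
    grp_b = np.array([signed_distances(ref, normals, ref + 0.05 * rng.normal(size=(V, 3)) + 0.02 * normals)
                      for _ in range(20)])                         # group B shifted slightly outward
    t, p = ttest_ind(grp_a, grp_b, axis=0)                         # vertex-wise group comparison
    print(int((p < 0.05).sum()), "of", V, "vertices nominally significant")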
Dresp-Langley, Birgitta
2011-01-01
Scientific studies have shown that non-conscious stimuli and representations influence information processing during conscious experience. In the light of such evidence, questions arise about potential functional links between non-conscious brain representations and conscious experience. This article discusses a neural model capable of explaining how statistical learning mechanisms in dedicated resonant circuits could generate specific temporal activity traces of non-conscious representations in the brain. It then explains how reentrant signaling, top-down matching, and statistical coincidence of such activity traces may lead to the progressive consolidation of temporal patterns that constitute the neural signatures of conscious experience in networks extending across large distances, beyond functionally specialized brain regions. PMID:24962683
Roy, Asim
2017-01-01
The debate about representation in the brain and the nature of the cognitive system has been going on for decades now. This paper examines the neurophysiological evidence, primarily from single-cell recordings, to get a better perspective on both issues. After an initial review of some basic concepts, the paper reviews the data from single-cell recordings, in cortical columns and from category-selective and multisensory neurons. In neuroscience, columns in the neocortex (cortical columns) are understood to be a basic functional/computational unit. The paper reviews the fundamental discoveries about the columnar organization and finds that it reveals a massively parallel search mechanism. This columnar organization could be the most extensive neurophysiological evidence for the widespread use of localist representation in the brain. The paper also reviews studies of category-selective cells. The evidence for category-selective cells reveals that localist representation is also used to encode complex abstract concepts at the highest levels of processing in the brain. A third major issue is the nature of the cognitive system in the brain and whether there is a form that is purely abstract and encoded by single cells. To provide evidence for a single-cell-based purely abstract cognitive system, the paper reviews some of the findings related to multisensory cells. It appears that there is widespread usage of multisensory cells in the brain in the same areas where sensory processing takes place. In addition, there is evidence for abstract modality-invariant cells at higher levels of cortical processing. Overall, this reveals the existence of a purely abstract cognitive system in the brain. The paper also argues that since there is no evidence for dense distributed representation and since sparse representation is actually used to encode memories, there is actually no evidence for distributed representation in the brain. Overall, it appears that, at an abstract level, the brain is a massively parallel, distributed computing system that is symbolic. The paper also explains how grounded cognition and other theories of the brain are fully compatible with localist representation and a purely abstract cognitive system.
Intermodal Attention Shifts in Multimodal Working Memory.
Katus, Tobias; Grubert, Anna; Eimer, Martin
2017-04-01
Attention maintains task-relevant information in working memory (WM) in an active state. We investigated whether the attention-based maintenance of stimulus representations that were encoded through different modalities is flexibly controlled by top-down mechanisms that depend on behavioral goals. Distinct components of the ERP reflect the maintenance of tactile and visual information in WM. We concurrently measured tactile (tCDA) and visual contralateral delay activity (CDA) to track the attentional activation of tactile and visual information during multimodal WM. Participants simultaneously received tactile and visual sample stimuli on the left and right sides and memorized all stimuli on one task-relevant side. After 500 msec, an auditory retrocue indicated whether the sample set's tactile or visual content had to be compared with a subsequent test stimulus set. tCDA and CDA components that emerged simultaneously during the encoding phase were consistently reduced after retrocues that marked the corresponding (tactile or visual) modality as task-irrelevant. The absolute size of cue-dependent modulations was similar for the tCDA/CDA components and did not depend on the number of tactile/visual stimuli that were initially encoded into WM. Our results suggest that modality-specific maintenance processes in sensory brain regions are flexibly modulated by top-down influences that optimize multimodal WM representations for behavioral goals.
Decoding the Semantic Content of Natural Movies from Human Brain Activity
Huth, Alexander G.; Lee, Tyler; Nishimoto, Shinji; Bilenko, Natalia Y.; Vu, An T.; Gallant, Jack L.
2016-01-01
One crucial test for any quantitative model of the brain is to show that the model can be used to accurately decode information from evoked brain activity. Several recent neuroimaging studies have decoded the structure or semantic content of static visual images from human brain activity. Here we present a decoding algorithm that makes it possible to decode detailed information about the object and action categories present in natural movies from human brain activity signals measured by functional MRI. Decoding is accomplished using a hierarchical logistic regression (HLR) model that is based on labels that were manually assigned from the WordNet semantic taxonomy. This model makes it possible to simultaneously decode information about both specific and general categories, while respecting the relationships between them. Our results show that we can decode the presence of many object and action categories from averaged blood-oxygen level-dependent (BOLD) responses with a high degree of accuracy (area under the ROC curve > 0.9). Furthermore, we used this framework to test whether semantic relationships defined in the WordNet taxonomy are represented the same way in the human brain. This analysis showed that hierarchical relationships between general categories and atypical examples, such as organism and plant, did not seem to be reflected in representations measured by BOLD fMRI. PMID:27781035
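A stripped-down stand-in for the decoding step (not the hierarchical WordNet-aware model of the paper): a cross-validated logistic-regression decoder for the presence of a single category, scored with the area under the ROC curve. The voxel counts, injected signal and labels below are synthetic assumptions.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_predict
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(3)
    X = rng.normal(size=(240, 1000))             # 240 movie time points x 1000 voxels (toy)
    y = rng.integers(0, 2, size=240)             # presence/absence of one category (toy labels)
    X[y == 1] += 0.2                             # inject a weak signal so decoding is possible
    clf = LogisticRegression(C=1.0, max_iter=1000)
    proba = cross_val_predict(clf, X, y, cv=5, method='predict_proba')[:, 1]
    print("AUC:", round(roc_auc_score(y, proba), 2))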
A Neurosemantic Theory of Concrete Noun Representation Based on the Underlying Brain Codes
Just, Marcel Adam; Cherkassky, Vladimir L.; Aryal, Sandesh; Mitchell, Tom M.
2010-01-01
This article describes the discovery of a set of biologically-driven semantic dimensions underlying the neural representation of concrete nouns, and then demonstrates how a resulting theory of noun representation can be used to identify simple thoughts through their fMRI patterns. We use factor analysis of fMRI brain imaging data to reveal the biological representation of individual concrete nouns like apple, in the absence of any pictorial stimuli. From this analysis emerge three main semantic factors underpinning the neural representation of nouns naming physical objects, which we label manipulation, shelter, and eating. Each factor is neurally represented in 3–4 different brain locations that correspond to a cortical network that co-activates in non-linguistic tasks, such as tool use pantomime for the manipulation factor. Several converging methods, such as the use of behavioral ratings of word meaning and text corpus characteristics, provide independent evidence of the centrality of these factors to the representations. The factors are then used with machine learning classifier techniques to show that the fMRI-measured brain representation of an individual concrete noun like apple can be identified with good accuracy from among 60 candidate words, using only the fMRI activity in the 16 locations associated with these factors. To further demonstrate the generativity of the proposed account, a theory-based model is developed to predict the brain activation patterns for words to which the algorithm has not been previously exposed. The methods, findings, and theory constitute a new approach of using brain activity for understanding how object concepts are represented in the mind. PMID:20084104
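The factor-analysis step can be sketched with scikit-learn as follows: latent dimensions are extracted from a nouns-by-voxels activity matrix, yielding per-noun factor scores and per-voxel loadings. The matrix here is random and three factors are simply assumed; this is not the authors' data or preprocessing.

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(4)
    activity = rng.normal(size=(60, 500))        # 60 concrete nouns x 500 stable voxels (toy)
    fa = FactorAnalysis(n_components=3, random_state=0)
    scores = fa.fit_transform(activity)          # per-noun scores on 3 latent dimensions
    print(scores.shape, fa.components_.shape)    # (60, 3) noun scores, (3, 500) voxel loadings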
Brain Representations of Basic Physics Concepts
NASA Astrophysics Data System (ADS)
Just, Marcel Adam
2017-09-01
The findings concerning physics concepts build on the remarkable new ability to determine the neural signature (or activation pattern) corresponding to an individual concept using fMRI brain imaging. Moreover, the neural signatures can be decomposed into meaningful underlying dimensions, identifying the individual, interpretable components of the neural representation of a concept. The investigation of physics concept representations reveals how relatively recent physics concepts (formalized only in the last few centuries) are stored in the millennia-old information system of the human brain.
Switch-Independent Task Representations in Frontal and Parietal Cortex.
Loose, Lasse S; Wisniewski, David; Rusconi, Marco; Goschke, Thomas; Haynes, John-Dylan
2017-08-16
Alternating between two tasks is effortful and impairs performance. Previous fMRI studies have found increased activity in frontoparietal cortex when task switching is required. One possibility is that the additional control demands for switch trials are met by strengthening task representations in the human brain. Alternatively, on switch trials, the residual representation of the previous task might impede the buildup of a neural task representation. This would predict weaker task representations on switch trials, thus also explaining the performance costs. To test this, male and female participants were cued to perform one of two similar tasks, with the task being repeated or switched between successive trials. Multivoxel pattern analysis was used to test which regions encode the tasks and whether this encoding differs between switch and repeat trials. As expected, we found information about task representations in frontal and parietal cortex, but there was no difference in the decoding accuracy of task-related information between switch and repeat trials. Using cross-classification, we found that the frontoparietal cortex encodes tasks using a generalizable spatial pattern in switch and repeat trials. Therefore, task representations in frontal and parietal cortex are largely switch independent. We found no evidence that neural information about task representations in these regions can explain behavioral costs usually associated with task switching. SIGNIFICANCE STATEMENT Alternating between two tasks is effortful and slows down performance. One possible explanation is that the representations in the human brain need time to build up and are thus weaker on switch trials, explaining performance costs. Alternatively, task representations might even be enhanced to overcome the previous task. Here, we used a combination of fMRI and a brain classifier to test whether the additional control demands under switching conditions lead to an increased or decreased strength of task representations in frontoparietal brain regions. We found that task representations are not modulated significantly by switching processes and generalize across switching conditions. Therefore, task representations in the human brain cannot account for the performance costs associated with alternating between tasks. Copyright © 2017 the authors 0270-6474/17/378033-10$15.00/0.
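The cross-classification logic (train a task decoder on one trial type, test it on the other) can be sketched as below. The voxel counts, the injected task signal and the classifier choice are assumptions made for illustration only.

    import numpy as np
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(5)

    def make_trials(n_trials, effect=0.3):
        X = rng.normal(size=(n_trials, 300))     # trials x voxels in one frontoparietal ROI (toy)
        y = rng.integers(0, 2, size=n_trials)    # task A vs task B
        X[y == 1] += effect                      # a task signal shared across trial types
        return X, y

    X_repeat, y_repeat = make_trials(100)        # repeat trials
    X_switch, y_switch = make_trials(100)        # switch trials
    clf = LinearSVC(C=0.1, max_iter=5000).fit(X_repeat, y_repeat)
    print("train on repeat, test on switch:", clf.score(X_switch, y_switch))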
A Cross-Talk between Brain-Damage Patients and Infants on Action and Language
ERIC Educational Resources Information Center
Papeo, Liuba; Hochmann, Jean-Remy
2012-01-01
Sensorimotor representations in the brain encode the sensory and motor aspects of one's own bodily activity. It is highly debated whether sensorimotor representations are the core basis for the representation of action-related knowledge and, in particular, action words, such as verbs. In this review, we will address this question by bringing to…
Towler, John; Kelly, Maria; Eimer, Martin
2016-06-01
The capacity of visual working memory for faces is extremely limited, but the reasons for these limitations remain unknown. We employed event-related brain potential measures to demonstrate that individual faces have to be focally attended in order to be maintained in working memory, and that attention is allocated to only a single face at a time. When 2 faces have to be memorized simultaneously in a face identity-matching task, the focus of spatial attention during encoding predicts which of these faces can be successfully maintained in working memory and matched to a subsequent test face. We also show that memory representations of attended faces are maintained in a position-dependent fashion. These findings demonstrate that the limited capacity of face memory is directly linked to capacity limits of spatial attention during the encoding and maintenance of individual face representations. We suggest that the capacity and distribution of selective spatial attention is a dynamic resource that constrains the capacity and fidelity of working memory for faces. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Leikin, Mark; Waisman, Ilana; Shaul, Shelley; Leikin, Roza
2014-03-01
This paper presents a small part of a larger interdisciplinary study that investigates brain activity (using event related potential methodology) of male adolescents when solving mathematical problems of different types. The study design links mathematics education research with neurocognitive studies. In this paper we performed a comparative analysis of brain activity associated with the translation from visual to symbolic representations of mathematical objects in algebra and geometry. Algebraic tasks require translation from graphical to symbolic representation of a function, whereas tasks in geometry require translation from a drawing of a geometric figure to a symbolic representation of its property. The findings demonstrate that electrical activity associated with the performance of geometrical tasks is stronger than that associated with solving algebraic tasks. Additionally, we found different scalp topography of the brain activity associated with algebraic and geometric tasks. Based on these results, we argue that problem solving in algebra and geometry is associated with different patterns of brain activity.
Kusano, Toshiki; Kurashige, Hiroki; Nambu, Isao; Moriguchi, Yoshiya; Hanakawa, Takashi; Wada, Yasuhiro; Osu, Rieko
2015-08-01
It has been suggested that resting-state brain activity reflects task-induced brain activity patterns. In this study, we examined whether neural representations of specific movements can be observed in the resting-state brain activity patterns of motor areas. First, we defined two regions of interest (ROIs) to examine brain activity associated with two different behavioral tasks. Using multi-voxel pattern analysis with regularized logistic regression, we designed a decoder to detect voxel-level neural representations corresponding to the tasks in each ROI. Next, we applied the decoder to resting-state brain activity. We found that the decoder discriminated resting-state neural activity with accuracy comparable to that associated with task-induced neural activity. The distribution of learned weighted parameters for each ROI was similar for resting-state and task-induced activities. Large weighted parameters were mainly located on conjunctive areas. Moreover, the accuracy of detection was higher than that for a decoder whose weights were randomly shuffled, indicating that the resting-state brain activity includes multi-voxel patterns similar to the neural representation for the tasks. Therefore, these results suggest that the neural representation of resting-state brain activity is more finely organized and more complex than conventionally considered.
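The core manipulation, training a regularized logistic-regression decoder on task fMRI and then applying it to resting-state volumes, can be sketched as follows; the data are random placeholders rather than the authors' recordings, and the ROI size and regularization settings are assumptions.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(6)
    X_task = rng.normal(size=(200, 400))         # task volumes x ROI voxels (toy)
    y_task = rng.integers(0, 2, size=200)        # which of two movements was performed
    X_task[y_task == 1] += 0.3                   # injected movement-specific pattern
    decoder = LogisticRegression(penalty='l2', C=1.0, max_iter=1000).fit(X_task, y_task)

    X_rest = rng.normal(size=(300, 400))         # resting-state volumes, no labels
    rest_labels = decoder.predict(X_rest)        # look for task-like patterns at rest
    print("fraction of rest volumes labelled as movement 1:", rest_labels.mean())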
Cortical Reorganization in Dual Innervation by Single Peripheral Nerve.
Zheng, Mou-Xiong; Shen, Yun-Dong; Hua, Xu-Yun; Hou, Ao-Lin; Zhu, Yi; Xu, Wen-Dong
2017-09-21
Functional recovery after peripheral nerve injury and repair is related to cortical reorganization. However, the mechanism by which one donor nerve innervates dual targets is largely unknown. Here, we investigated the cortical reorganization that occurs when the phrenic nerve simultaneously innervates the diaphragm and biceps. Rats with total brachial plexus (C5-T1) injury were repaired by phrenic nerve-musculocutaneous nerve transfer with end-to-side (n = 15) or end-to-end (n = 15) neurorrhaphy. Brachial plexus avulsion (n = 5) and sham surgery (n = 5) rats were included as controls. Behavioral observation, electromyography, and histologic studies were used to confirm peripheral nerve reinnervation. Cortical representations of the diaphragm and reinnervated biceps were studied with intracortical microstimulation techniques before and at months 0.5, 3, 5, 7, and 10 after surgery. At month 0.5 after complete brachial plexus injury, the motor representation of the injured forelimb had disappeared. The diaphragm representation was preserved in the "end-to-side" group but absent in the "end-to-end" group. Rhythmic contraction of the biceps appeared in both the "end-to-end" and "end-to-side" groups, and the biceps representation reappeared in the original biceps and diaphragm areas at months 3 and 5. At month 10, it was located entirely in the original biceps area in the "end-to-end" group, whereas part of the biceps representation remained in the original diaphragm area in the "end-to-side" group. Destroying the contralateral motor cortex did not eliminate respiration-related contraction of the biceps. The brain thus tends to shift the biceps representation from the original diaphragm area back to the original biceps area following phrenic nerve transfer, while the original diaphragm area partly retains the reinnervated biceps representation after end-to-side transfer. Copyright © 2017 by the Congress of Neurological Surgeons
Cortico-muscular coherence on artifact corrected EEG-EMG data recorded with a MRI scanner.
Muthuraman, M; Galka, A; Hong, V N; Heute, U; Deuschl, G; Raethjen, J
2013-01-01
Simultaneous recording of the electroencephalogram (EEG) and electromyogram (EMG) with magnetic resonance imaging (MRI) offers great potential for studying human brain activity with high temporal and spatial resolution. However, due to the MRI, the recorded signals are contaminated with artifacts, and correcting these artifacts is essential before the signals can be used for further spectral analysis. Cortico-muscular coherence can reveal the cortical representation of the peripheral muscle signal in particular motor tasks, e.g. finger movements. Artifact correction of these signals was performed with two different algorithms, the Brain Vision Analyzer (BVA) and the Matlab FMRIB plug-in for EEGLAB. The Welch periodogram method was used to estimate the cortico-muscular coherence. Our analysis revealed coherence at a frequency of 5 Hz on the contralateral side of the brain. The entropy of the calculated coherence was estimated to characterize the distribution of coherence across the scalp. The significance of the paper is to identify the optimal algorithm for rectifying the MR artifacts and, as a first step, to use both the EEG and EMG signals in conjunction with MRI for further studies.
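Welch-based cortico-muscular coherence of the kind estimated here is available directly in SciPy; the sketch below applies it to synthetic EEG and EMG channels that share a 5 Hz component. The sampling rate, segment length and noise levels are arbitrary assumptions.

    import numpy as np
    from scipy.signal import coherence

    fs = 1000.0                                        # sampling rate in Hz (assumed)
    t = np.arange(0, 60, 1 / fs)
    rng = np.random.default_rng(7)
    drive = np.sin(2 * np.pi * 5 * t)                  # shared 5 Hz component
    eeg = drive + rng.normal(scale=2.0, size=t.size)   # artifact-corrected EEG channel (toy)
    emg = drive + rng.normal(scale=2.0, size=t.size)   # EMG channel (toy)
    f, Cxy = coherence(eeg, emg, fs=fs, nperseg=2048)  # Welch-style magnitude-squared coherence
    print("peak coherence at %.1f Hz: %.2f" % (f[np.argmax(Cxy)], Cxy.max()))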
Sensitivity to musical structure in the human brain
McDermott, Josh H.; Norman-Haignere, Sam; Kanwisher, Nancy
2012-01-01
Evidence from brain-damaged patients suggests that regions in the temporal lobes, distinct from those engaged in lower-level auditory analysis, process the pitch and rhythmic structure in music. In contrast, neuroimaging studies targeting the representation of music structure have primarily implicated regions in the inferior frontal cortices. Combining individual-subject fMRI analyses with a scrambling method that manipulated musical structure, we provide evidence of brain regions sensitive to musical structure bilaterally in the temporal lobes, thus reconciling the neuroimaging and patient findings. We further show that these regions are sensitive to the scrambling of both pitch and rhythmic structure but are insensitive to high-level linguistic structure. Our results suggest the existence of brain regions with representations of musical structure that are distinct from high-level linguistic representations and lower-level acoustic representations. These regions provide targets for future research investigating possible neural specialization for music or its associated mental processes. PMID:23019005
Emerging Object Representations in the Visual System Predict Reaction Times for Categorization
Ritchie, J. Brendan; Tovar, David A.; Carlson, Thomas A.
2015-01-01
Recognizing an object takes just a fraction of a second, less than the blink of an eye. Applying multivariate pattern analysis, or “brain decoding”, methods to magnetoencephalography (MEG) data has allowed researchers to characterize, at high temporal resolution, the emerging representation of object categories that underlie our capacity for rapid recognition. Shortly after stimulus onset, object exemplars cluster by category in a high-dimensional activation space in the brain. In this emerging activation space, the decodability of exemplar category varies over time, reflecting the brain’s transformation of visual inputs into coherent category representations. How do these emerging representations relate to categorization behavior? Recently it has been proposed that the distance of an exemplar representation from a categorical boundary in an activation space is critical for perceptual decision-making, and that reaction times should therefore correlate with distance from the boundary. The predictions of this distance hypothesis have been borne out in human inferior temporal cortex (IT), an area of the brain crucial for the representation of object categories. When viewed in the context of a time-varying neural signal, the optimal time to “read out” category information is when category representations in the brain are most decodable. Here, we show that the distance from a decision boundary through activation space, as measured using MEG decoding methods, correlates with reaction times for visual categorization during the period of peak decodability. Our results suggest that the brain begins to read out information about exemplar category at the optimal time for use in choice behaviour, and support the hypothesis that the structure of the representation for objects in the visual system is partially constitutive of the decision process in recognition. PMID:26107634
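The distance hypothesis tested here lends itself to a small simulation: fit a linear classifier to category-labelled patterns, take each exemplar's distance from the decision boundary, and correlate it with (simulated) reaction times. Everything below, including the negative distance-RT relationship, is synthetic and only illustrates the analysis logic.

    import numpy as np
    from sklearn.svm import LinearSVC
    from scipy.stats import pearsonr

    rng = np.random.default_rng(8)
    X = rng.normal(size=(200, 100))              # exemplars x MEG features at peak decodability (toy)
    y = rng.integers(0, 2, size=200)             # two object categories
    X[y == 1] += 0.4                             # category signal
    clf = LinearSVC(C=1.0, max_iter=5000).fit(X, y)
    dist = np.abs(clf.decision_function(X))      # distance from the category boundary
    rt = 600 - 80 * dist + rng.normal(scale=30, size=200)   # simulated RTs, faster far from boundary
    r, _ = pearsonr(dist, rt)
    print(f"distance-RT correlation: r = {r:.2f}")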
Conceptual knowledge representation: A cross-section of current research.
Rogers, Timothy T; Wolmetz, Michael
2016-01-01
How is conceptual knowledge encoded in the brain? This special issue of Cognitive Neuropsychology takes stock of current efforts to answer this question through a variety of methods and perspectives. Across this work, three questions recur, each fundamental to knowledge representation in the mind and brain. First, what are the elements of conceptual representation? Second, to what extent are conceptual representations embodied in sensory and motor systems? Third, how are conceptual representations shaped by context, especially linguistic context? In this introductory article we provide relevant background on these themes and introduce how they are addressed by our contributing authors.
Representational Distance Learning for Deep Neural Networks
McClure, Patrick; Kriegeskorte, Nikolaus
2016-01-01
Deep neural networks (DNNs) provide useful models of visual representational transformations. We present a method that enables a DNN (student) to learn from the internal representational spaces of a reference model (teacher), which could be another DNN or, in the future, a biological brain. Representational spaces of the student and the teacher are characterized by representational distance matrices (RDMs). We propose representational distance learning (RDL), a stochastic gradient descent method that drives the RDMs of the student to approximate the RDMs of the teacher. We demonstrate that RDL is competitive with other transfer learning techniques for two publicly available benchmark computer vision datasets (MNIST and CIFAR-100), while allowing for architectural differences between student and teacher. By pulling the student's RDMs toward those of the teacher, RDL significantly improved visual classification performance when compared to baseline networks that did not use transfer learning. In the future, RDL may enable combined supervised training of deep neural networks using task constraints (e.g., images and category labels) and constraints from brain-activity measurements, so as to build models that replicate the internal representational spaces of biological brains. PMID:28082889
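The objective that RDL adds to training can be written compactly: compute the RDM of a student layer and of the corresponding teacher layer for the same mini-batch, and penalize their squared difference. The sketch below only evaluates that auxiliary loss on random activations (layer widths and batch size are assumptions); in practice it would be combined with the task loss and minimized by stochastic gradient descent in an autodiff framework.

    import numpy as np
    from scipy.spatial.distance import pdist

    def rdm(acts):
        """Mini-batch of stimuli x units -> condensed representational distance matrix."""
        return pdist(acts, metric='correlation')

    def rdl_loss(student_acts, teacher_acts):
        """Mean squared difference between student and teacher RDMs for the same stimuli."""
        return float(np.mean((rdm(student_acts) - rdm(teacher_acts)) ** 2))

    rng = np.random.default_rng(9)
    teacher = rng.normal(size=(32, 256))   # teacher-layer activations for a batch of 32 images (toy)
    student = rng.normal(size=(32, 128))   # student layer may have a different width
    print("RDL auxiliary loss:", round(rdl_loss(student, teacher), 4))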
Cichy, Radoslaw Martin; Khosla, Aditya; Pantazis, Dimitrios; Oliva, Aude
2017-01-01
Human scene recognition is a rapid multistep process evolving over time from single scene image to spatial layout processing. We used multivariate pattern analyses on magnetoencephalography (MEG) data to unravel the time course of this cortical process. Following an early signal for lower-level visual analysis of single scenes at ~100 ms, we found a marker of real-world scene size, i.e. spatial layout processing, at ~250 ms indexing neural representations robust to changes in unrelated scene properties and viewing conditions. For a quantitative model of how scene size representations may arise in the brain, we compared MEG data to a deep neural network model trained on scene classification. Representations of scene size emerged intrinsically in the model, and resolved emerging neural scene size representation. Together our data provide a first description of an electrophysiological signal for layout processing in humans, and suggest that deep neural networks are a promising framework to investigate how spatial layout representations emerge in the human brain. PMID:27039703
Development of common neural representations for distinct numerical problems
Chang, Ting-Ting; Rosenberg-Lee, Miriam; Metcalfe, Arron W. S.; Chen, Tianwen; Menon, Vinod
2015-01-01
How the brain develops representations for abstract cognitive problems is a major unaddressed question in neuroscience. Here we tackle this fundamental question using arithmetic problem solving, a cognitive domain important for the development of mathematical reasoning. We first examined whether adults demonstrate common neural representations for addition and subtraction problems, two complementary arithmetic operations that manipulate the same quantities. We then examined how the common neural representations for the two problem types change with development. Whole-brain multivoxel representational similarity (MRS) analysis was conducted to examine common coding of addition and subtraction problems in children and adults. We found that adults exhibited significant levels of MRS between the two problem types, not only in the intra-parietal sulcus (IPS) region of the posterior parietal cortex (PPC), but also in ventral temporal-occipital, anterior temporal and dorsolateral prefrontal cortices. Relative to adults, children showed significantly reduced levels of MRS in these same regions. In contrast, no brain areas showed significantly greater MRS between problem types in children. Our findings provide novel evidence that the emergence of arithmetic problem solving skills from childhood to adulthood is characterized by maturation of common neural representations between distinct numerical operations, and involve distributed brain regions important for representing and manipulating numerical quantity. More broadly, our findings demonstrate that representational analysis provides a powerful approach for uncovering fundamental mechanisms by which children develop proficiencies that are a hallmark of human cognition. PMID:26160287
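Within a single region, the multivoxel representational similarity measure at the heart of this analysis reduces to a correlation between the activation patterns evoked by the two operations; the sketch below shows that computation on synthetic beta patterns (the whole-brain searchlight aspect and the child-adult comparison are omitted).

    import numpy as np

    rng = np.random.default_rng(10)
    shared = rng.normal(size=150)                          # common quantity-related component (toy)
    addition = shared + 0.5 * rng.normal(size=150)         # voxel pattern for addition problems
    subtraction = shared + 0.5 * rng.normal(size=150)      # voxel pattern for subtraction problems
    similarity = np.corrcoef(addition, subtraction)[0, 1]  # cross-operation pattern similarity
    print("cross-operation similarity:", round(similarity, 2))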
A denoising algorithm for CT image using low-rank sparse coding
NASA Astrophysics Data System (ADS)
Lei, Yang; Xu, Dong; Zhou, Zhengyang; Wang, Tonghe; Dong, Xue; Liu, Tian; Dhabaan, Anees; Curran, Walter J.; Yang, Xiaofeng
2018-03-01
We propose a CT image denoising method based on low-rank sparse coding. The proposed method constructs an adaptive dictionary of image patches and estimates the sparse coding regularization parameters using a Bayesian interpretation. A low-rank approximation approach is used to simultaneously construct the dictionary and achieve sparse representation by clustering similar image patches. A variable-splitting scheme and a quadratic optimization are used to reconstruct the CT image from the obtained sparse coefficients. We tested this denoising technique using phantom, brain and abdominal CT images. The experimental results showed that the proposed method delivers state-of-the-art denoising performance, both in terms of objective criteria and visual quality.
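A generic patch-based sparse-coding denoiser (scikit-learn's standard dictionary learning, without the paper's low-rank clustering or Bayesian parameter estimation) conveys the overall pipeline: extract patches, learn a dictionary, sparse-code each patch, and reassemble the image. The toy "CT slice" and all parameter values below are assumptions.

    import numpy as np
    from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d
    from sklearn.decomposition import MiniBatchDictionaryLearning

    rng = np.random.default_rng(11)
    clean = np.kron(rng.random((8, 8)), np.ones((8, 8)))        # 64x64 piecewise-constant "slice"
    noisy = clean + 0.1 * rng.normal(size=clean.shape)

    patches = extract_patches_2d(noisy, (6, 6))
    flat = patches.reshape(len(patches), -1)
    mean = flat.mean(axis=1, keepdims=True)                     # remove each patch's DC component
    dico = MiniBatchDictionaryLearning(n_components=40, alpha=1.0, random_state=0)
    codes = dico.fit_transform(flat - mean)                     # sparse coefficients per patch
    denoised_patches = (codes @ dico.components_ + mean).reshape(patches.shape)
    denoised = reconstruct_from_patches_2d(denoised_patches, noisy.shape)
    print("residual std before/after:", round(np.std(noisy - clean), 3), round(np.std(denoised - clean), 3))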
Body representation in patients after vascular brain injuries.
Razmus, Magdalena
2017-11-01
The neuropsychological literature suggests that body representation is a multidimensional concept consisting of various types of representations. Previous studies have demonstrated dissociations between three types of body representation specified by the kind of data and processes involved, i.e. body schema, body structural description, and body semantics. The aim of this study was to describe the state of body representation in patients after vascular brain injuries and to provide evidence for the different types of body representation. The question of correlations between body representation deficits and neuropsychological dysfunctions was also investigated. Fifty patients after stroke and 50 control individuals participated in the study. They were examined with tasks probing the dynamic representation of body part positions, the topological body map, and lexical and semantic knowledge about the body. Data analysis showed that vascular brain injuries result in deficits of body representation, which may co-occur with cognitive dysfunctions; the latter, however, are a possible risk factor for body representation deficits rather than a sufficient or necessary condition for them. The study suggests that types of body representation may be separated on the basis not only of their content but also of their relation to the self. Principal component analysis revealed three factors, which explained over 66% of the variance in the results. The factors, which may be interpreted as types or dimensions of the mental model of the body, represent different degrees of connection with the self. The results indicate a further way of classifying types of body representation, which should be verified in future research.
Holding Multiple Items in Short Term Memory: A Neural Mechanism
Rolls, Edmund T.; Dempere-Marco, Laura; Deco, Gustavo
2013-01-01
Human short term memory has a capacity of several items maintained simultaneously. We show how the number of short term memory representations that an attractor network modeling a cortical local network can simultaneously maintain active is increased by using synaptic facilitation of the type found in the prefrontal cortex. We have been able to maintain 9 short term memories active simultaneously in integrate-and-fire simulations where the proportion of neurons in each population, the sparseness, is 0.1, and have confirmed the stability of such a system with mean field analyses. Without synaptic facilitation the system can maintain many fewer memories active in the same network. The system operates because of the effectively increased synaptic strengths formed by the synaptic facilitation just for those pools to which the cue is applied, and then maintenance of this synaptic facilitation in just those pools when the cue is removed by the continuing neuronal firing in those pools. The findings have implications for understanding how several items can be maintained simultaneously in short term memory, how this may be relevant to the implementation of language in the brain, and suggest new approaches to understanding and treating the decline in short term memory that can occur with normal aging. PMID:23613789
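For readers unfamiliar with the mechanism, the toy simulation below illustrates Tsodyks-Markram-style synaptic facilitation in a single recurrent pool: a brief cue raises the facilitation variable, which boosts the effective recurrent weight enough to sustain firing after the cue is removed. This is a deliberately simplified rate model with made-up parameters, not the authors' integrate-and-fire network.

```python
# Toy rate-model sketch of short-term synaptic facilitation in one recurrent pool.
# A brief cue drives the pool; facilitation (u) raises the effective recurrent
# weight (u * w) above the level needed to sustain firing after cue offset.
# All parameters are invented for illustration.
import numpy as np

dt, T = 1e-3, 3.0                         # time step and duration (s)
steps = int(T / dt)
tau_r, tau_f = 0.02, 1.5                  # rate and facilitation time constants (s)
U, w = 0.2, 4.0                           # baseline utilization, recurrent weight

r, u = 0.0, U                             # firing rate (normalized) and facilitation
rates = np.zeros(steps)

for i in range(steps):
    cue = 1.0 if 0.5 < i * dt < 1.0 else 0.0       # external cue applied for 0.5 s
    du = (U - u) / tau_f + U * (1.0 - u) * r       # facilitation grows with activity
    dr = (-r + np.tanh(u * w * r + cue)) / tau_r   # saturating recurrent dynamics
    u += dt * du
    r += dt * dr
    rates[i] = r

print("rate at cue offset: %.2f   rate 1 s after offset: %.2f"
      % (rates[int(1.0 / dt) - 1], rates[int(2.0 / dt)]))
# Setting w = 2.0 (so u * w stays below 1) makes the activity decay after the cue,
# i.e. there is no persistent short-term memory without sufficiently facilitated recurrence.
```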
The “Visual Shock” of Francis Bacon: an essay in neuroesthetics
Zeki, Semir; Ishizu, Tomohiro
2013-01-01
In this paper we discuss the work of Francis Bacon in the context of his declared aim of giving a “visual shock.” We explore what this means in terms of brain activity and what insights into the brain's visual perceptive system his work gives. We do so especially with reference to the representation of faces and bodies in the human visual brain. We discuss the evidence that shows that both these categories of stimuli have a very privileged status in visual perception, compared to the perception of other stimuli, including man-made artifacts such as houses, chairs, and cars. We show that viewing stimuli that depart significantly from a normal representation of faces and bodies entails a significant difference in the pattern of brain activation. We argue that Bacon succeeded in delivering his “visual shock” because he subverted the normal neural representation of faces and bodies, without at the same time subverting the representation of man-made artifacts. PMID:24339812
The "Visual Shock" of Francis Bacon: an essay in neuroesthetics.
Zeki, Semir; Ishizu, Tomohiro
2013-01-01
In this paper we discuss the work of Francis Bacon in the context of his declared aim of giving a "visual shock."We explore what this means in terms of brain activity and what insights into the brain's visual perceptive system his work gives. We do so especially with reference to the representation of faces and bodies in the human visual brain. We discuss the evidence that shows that both these categories of stimuli have a very privileged status in visual perception, compared to the perception of other stimuli, including man-made artifacts such as houses, chairs, and cars. We show that viewing stimuli that depart significantly from a normal representation of faces and bodies entails a significant difference in the pattern of brain activation. We argue that Bacon succeeded in delivering his "visual shock" because he subverted the normal neural representation of faces and bodies, without at the same time subverting the representation of man-made artifacts.
Martin Cichy, Radoslaw; Khosla, Aditya; Pantazis, Dimitrios; Oliva, Aude
2017-06-01
Human scene recognition is a rapid multistep process evolving over time from single scene image analysis to spatial layout processing. We used multivariate pattern analyses on magnetoencephalography (MEG) data to unravel the time course of this cortical process. Following an early signal for lower-level visual analysis of single scenes at ~100ms, we found a marker of real-world scene size, i.e. spatial layout processing, at ~250ms, indexing neural representations robust to changes in unrelated scene properties and viewing conditions. For a quantitative model of how scene size representations may arise in the brain, we compared MEG data to a deep neural network model trained on scene classification. Representations of scene size emerged intrinsically in the model and paralleled the emerging neural scene size representations. Together, our data provide a first description of an electrophysiological signal for layout processing in humans, and suggest that deep neural networks are a promising framework to investigate how spatial layout representations emerge in the human brain. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
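A generic time-resolved decoding sketch of the kind described (a linear classifier trained and cross-validated at each time point of an epochs array) is shown below; the data, labels and dimensions are simulated placeholders rather than the authors' MEG pipeline.

```python
# Generic time-resolved decoding sketch: a linear classifier is trained and
# cross-validated at every time point of an epochs array
# (n_trials x n_sensors x n_times). All data below are simulated placeholders.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, n_sensors, n_times = 200, 64, 120
X = rng.standard_normal((n_trials, n_sensors, n_times))
y = rng.integers(0, 2, n_trials)              # e.g. small vs. large real-world scene size
X[y == 1, :10, 40:80] += 0.5                  # inject a condition difference in a late window

clf = make_pipeline(StandardScaler(), LinearSVC())
accuracy = np.array([cross_val_score(clf, X[:, :, t], y, cv=5).mean()
                     for t in range(n_times)])
print("peak decoding accuracy %.2f at time sample %d" % (accuracy.max(), accuracy.argmax()))
```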
Decoding the dynamic representation of musical pitch from human brain activity.
Sankaran, N; Thompson, W F; Carlile, S; Carlson, T A
2018-01-16
In music, the perception of pitch is governed largely by its tonal function given the preceding harmonic structure of the music. While behavioral research has advanced our understanding of the perceptual representation of musical pitch, relatively little is known about its representational structure in the brain. Using Magnetoencephalography (MEG), we recorded evoked neural responses to different tones presented within a tonal context. Multivariate Pattern Analysis (MVPA) was applied to "decode" the stimulus that listeners heard based on the underlying neural activity. We then characterized the structure of the brain's representation using decoding accuracy as a proxy for representational distance, and compared this structure to several well established perceptual and acoustic models. The observed neural representation was best accounted for by a model based on the Standard Tonal Hierarchy, whereby differences in the neural encoding of musical pitches correspond to their differences in perceived stability. By confirming that perceptual differences honor those in the underlying neuronal population coding, our results provide a crucial link in understanding the cognitive foundations of musical pitch across psychological and neural domains.
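The model-comparison step described here, using decoding accuracy as a representational distance and correlating it with a model's predicted distances, can be sketched as follows; the stability scores and accuracy matrix are fabricated solely to illustrate the computation.

```python
# Sketch of comparing a neural representational structure (pairwise decoding
# accuracies used as distances) with a model of perceived tonal stability.
# All numbers are fabricated purely to illustrate the computation.
import numpy as np
from scipy.spatial.distance import squareform
from scipy.stats import spearmanr

n_tones = 12
rng = np.random.default_rng(1)

# Model RDM: distance between tones = difference in a hypothetical stability score.
stability = rng.random(n_tones)
model_rdm = np.abs(stability[:, None] - stability[None, :])

# "Neural" RDM: pairwise decoding accuracy (0.5 = indistinguishable, 1.0 = fully
# distinct), simulated here as the model structure plus noise.
neural_rdm = 0.5 + 0.5 * model_rdm / model_rdm.max() \
             + 0.05 * rng.standard_normal((n_tones, n_tones))
neural_rdm = (neural_rdm + neural_rdm.T) / 2
np.fill_diagonal(neural_rdm, 0.0)

# Rank-correlate the off-diagonal entries of the two RDMs.
rho, p = spearmanr(squareform(neural_rdm, checks=False),
                   squareform(model_rdm, checks=False))
print(f"model-neural correspondence: Spearman rho = {rho:.2f}, p = {p:.3g}")
```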
Spatial Hyperschematia without Spatial Neglect after Insulo-Thalamic Disconnection
Saj, Arnaud; Wilcke, Juliane C.; Gschwind, Markus; Emond, Héloïse; Assal, Frédéric
2013-01-01
Different spatial representations are not stored as a single multipurpose map in the brain. Right brain-damaged patients can show a distortion, a compression of peripersonal and extrapersonal space. Here we report the case of a patient with a right insulo-thalamic disconnection without spatial neglect. The patient, compared with 10 healthy control subjects, showed a constant and reliable increase of her peripersonal and extrapersonal egocentric space representations - that we named spatial hyperschematia - yet left her allocentric space representations intact. This striking dissociation shows that our interactions with the surrounding world are represented and processed modularly in the human brain, depending on their frame of reference. PMID:24302992
O'Connor, Cliodhna; Joffe, Helene
2013-11-01
The public profile of neurodevelopmental research has expanded in recent years. This paper applies social representations theory to explore how early brain development was represented in the UK print media in the first decade of the 21st century. A thematic analysis was performed on 505 newspaper articles published between 2000 and 2010 that discussed early brain development. Media coverage centred around concern with 'protecting' the prenatal brain (identifying threats to foetal neurodevelopment), 'feeding' the infant brain (indicating the patterns of nutrition that enhance brain development) and 'loving' the young child's brain (elucidating the developmental significance of emotionally nurturing family environments). The media focused almost exclusively on the role of parental action in promoting optimal neurodevelopment, rarely acknowledging wider structural, cultural or political means of supporting child development. The significance of parental care was intensified by deterministic interpretations of critical periods, which implied that inappropriate parental input would produce profound and enduring neurobiological impairments. Neurodevelopmental research was also used to promulgate normative judgements concerning the acceptability of certain gender roles and family contexts. The paper argues that media representations of neurodevelopment stress parental responsibility for shaping a child's future while relegating the contributions of genetic or wider societal factors, and examines the consequences of these representations for society and family life. Copyright © 2012 Elsevier Ltd. All rights reserved.
An Integrated Self-Aware Cognitive Architecture
2008-03-01
human-like cognitive growth. Our approach is inspired by studies of the human brain-mind: in particular, by theoretical models of representations of...agency in the higher associative human brain areas. This feature (a theory of mind including representations of one’s self) allows the system to...self-aware cognition that we believe is necessary for human-like cognitive growth.
A computational model of the human visual cortex
NASA Astrophysics Data System (ADS)
Albus, James S.
2008-04-01
The brain is first and foremost a control system that is capable of building an internal representation of the external world, and using this representation to make decisions, set goals and priorities, formulate plans, and control behavior with intent to achieve its goals. The computational model proposed here assumes that this internal representation resides in arrays of cortical columns. More specifically, it models each cortical hypercolumn together with its underlying thalamic nuclei as a Fundamental Computational Unit (FCU) consisting of a frame-like data structure (containing attributes and pointers) plus the computational processes and mechanisms required to maintain it. In sensory-processing areas of the brain, FCUs enable segmentation, grouping, and classification. Pointers stored in FCU frames link pixels and signals to objects and events in situations and episodes that are overlaid with meaning and emotional values. In behavior-generating areas of the brain, FCUs make decisions, set goals and priorities, generate plans, and control behavior. Pointers are used to define rules, grammars, procedures, plans, and behaviors. It is suggested that it may be possible to reverse engineer the human brain at the FCU level of fidelity using next-generation massively parallel computer hardware and software. Key Words: computational modeling, human cortex, brain modeling, reverse engineering the brain, image processing, perception, segmentation, knowledge representation
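A minimal sketch of the frame-like data structure attributed to an FCU (attributes plus pointers linking frames into part, group and plan hierarchies) might look like the following; the field names and example frames are hypothetical illustrations, not part of the published model.

```python
# Hypothetical sketch of the frame-like data structure described for an FCU:
# named attributes plus pointers linking frames to other frames.
# Field names and example frames are invented for illustration.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class FCUFrame:
    name: str
    attributes: Dict[str, float] = field(default_factory=dict)          # e.g. orientation, value
    pointers: Dict[str, List["FCUFrame"]] = field(default_factory=dict) # links to other frames

    def link(self, relation: str, other: "FCUFrame") -> None:
        """Add a pointer of the given relation type to another frame."""
        self.pointers.setdefault(relation, []).append(other)

# Example: a signal-level frame grouped into an object frame within a situation frame.
edge = FCUFrame("edge_segment", attributes={"orientation_deg": 45.0})
cup = FCUFrame("cup", attributes={"emotional_value": 0.1})
scene = FCUFrame("kitchen_scene")
cup.link("has_part", edge)
scene.link("contains", cup)
print(scene.pointers["contains"][0].name)   # -> "cup"
```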
Selective entrainment of brain oscillations drives auditory perceptual organization.
Costa-Faidella, Jordi; Sussman, Elyse S; Escera, Carles
2017-10-01
Perceptual sound organization supports our ability to make sense of the complex acoustic environment, to understand speech and to enjoy music. However, the neuronal mechanisms underlying the subjective experience of perceiving univocal auditory patterns that can be listened to, despite hearing all sounds in a scene, are poorly understood. We hereby investigated the manner in which competing sound organizations are simultaneously represented by specific brain activity patterns and the way attention and task demands prime the internal model generating the current percept. Using a selective attention task on ambiguous auditory stimulation coupled with EEG recordings, we found that the phase of low-frequency oscillatory activity dynamically tracks multiple sound organizations concurrently. However, whereas the representation of ignored sound patterns is circumscribed to auditory regions, large-scale oscillatory entrainment in auditory, sensory-motor and executive-control network areas reflects the active perceptual organization, thereby giving rise to the subjective experience of a unitary percept. Copyright © 2017 Elsevier Inc. All rights reserved.
MR Vascular Fingerprinting in Stroke and Brain Tumors Models
NASA Astrophysics Data System (ADS)
Lemasson, B.; Pannetier, N.; Coquery, N.; Boisserand, Ligia S. B.; Collomb, Nora; Schuff, N.; Moseley, M.; Zaharchuk, G.; Barbier, E. L.; Christen, T.
2016-11-01
In this study, we evaluated an MRI fingerprinting approach (MRvF) designed to provide high-resolution parametric maps of the microvascular architecture (i.e., blood volume fraction, vessel diameter) and function (blood oxygenation) simultaneously. The method was tested in rats (n = 115), divided into 3 models: brain tumors (9L, C6, F98), permanent stroke, and a control group of healthy animals. We showed that fingerprinting can robustly distinguish between healthy and pathological brain tissues with different behaviors in tumor and stroke models. In particular, fingerprinting revealed that C6 and F98 glioma models have similar signatures while 9L presents a distinct evolution. We also showed that it is possible to improve the results of MRvF and obtain supplemental information by changing the numerical representation of the vascular network. Finally, good agreement was found between MRvF and conventional MR approaches in healthy tissues and in the C6, F98, and permanent stroke models. For the 9L glioma model, fingerprinting showed blood oxygenation measurements that contradict results obtained with a quantitative BOLD approach. In conclusion, MR vascular fingerprinting seems to be an efficient technique to study microvascular properties in vivo. Multiple technical improvements are feasible and might improve diagnosis and management of brain diseases.
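The core matching step common to fingerprinting methods, assigning each measured signal evolution the parameters of the best-correlated dictionary entry, can be sketched as below; the signal model and parameter grid are fabricated stand-ins, not the vascular simulation used in the study.

```python
# Generic fingerprinting-style matching sketch: each dictionary entry is a simulated
# signal evolution for one parameter combination, and a measured voxel signal is
# assigned the parameters of the most correlated entry. The signal model below is
# a fabricated stand-in, not a vascular MR simulation.
import numpy as np

n_timepoints = 60
t = np.linspace(0.0, 1.0, n_timepoints)

# Fabricated parameter grid: blood volume fraction x vessel radius (micrometres).
bvf_grid = np.linspace(0.01, 0.10, 20)
radius_grid = np.linspace(2.0, 20.0, 25)
params = np.array([(b, r) for b in bvf_grid for r in radius_grid])

def simulate(bvf, radius):
    """Toy signal evolution standing in for a proper microvascular simulation."""
    return np.exp(-t * 50.0 * bvf) * np.cos(2.0 * np.pi * t / (radius / 5.0))

dictionary = np.array([simulate(b, r) for b, r in params])
dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)

def match(signal):
    """Return the (bvf, radius) of the dictionary entry best matching the signal."""
    return params[np.argmax(dictionary @ (signal / np.linalg.norm(signal)))]

# Example: a noisy signal generated from known parameters is approximately recovered.
rng = np.random.default_rng(0)
measured = simulate(0.05, 10.0) + 0.05 * rng.standard_normal(n_timepoints)
print("matched parameters:", match(measured))
```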
Connectivity in the human brain dissociates entropy and complexity of auditory inputs☆
Nastase, Samuel A.; Iacovella, Vittorio; Davis, Ben; Hasson, Uri
2015-01-01
Complex systems are described according to two central dimensions: (a) the randomness of their output, quantified via entropy; and (b) their complexity, which reflects the organization of a system's generators. Whereas some approaches hold that complexity can be reduced to uncertainty or entropy, an axiom of complexity science is that signals with very high or very low entropy are generated by relatively non-complex systems, while complex systems typically generate outputs with entropy peaking between these two extremes. In understanding their environment, individuals would benefit from coding for both input entropy and complexity; entropy indexes uncertainty and can inform probabilistic coding strategies, whereas complexity reflects a concise and abstract representation of the underlying environmental configuration, which can serve independent purposes, e.g., as a template for generalization and rapid comparisons between environments. Using functional neuroimaging, we demonstrate that, in response to passively processed auditory inputs, functional integration patterns in the human brain track both the entropy and complexity of the auditory signal. Connectivity between several brain regions scaled monotonically with input entropy, suggesting sensitivity to uncertainty, whereas connectivity between other regions tracked entropy in a convex manner consistent with sensitivity to input complexity. These findings suggest that the human brain simultaneously tracks the uncertainty of sensory data and effectively models their environmental generators. PMID:25536493
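As a concrete illustration of the two stimulus dimensions the abstract distinguishes, the sketch below computes the Shannon entropy of a discretized signal and a crude Lempel-Ziv phrase count (an incompressibility proxy sometimes used alongside entropy); these generic estimators are assumptions for illustration, not the measures used in the study.

```python
# Generic estimators for the two signal dimensions the abstract distinguishes:
# Shannon entropy of a discretized signal (randomness) and an LZ78-style phrase
# count (a rough incompressibility proxy). Both are illustrative stand-ins, not
# the stimulus measures used in the study.
import numpy as np

def shannon_entropy(x, n_bins=8):
    """Entropy in bits of a signal after amplitude binning."""
    counts, _ = np.histogram(x, bins=n_bins)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

def lz_phrase_count(x, n_bins=2):
    """Number of phrases in an LZ78-style parse of the binned signal."""
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
    symbols = np.digitize(x, edges)
    phrases, current = set(), ""
    for s in symbols:
        current += str(s)
        if current not in phrases:
            phrases.add(current)
            current = ""
    return len(phrases) + (1 if current else 0)

rng = np.random.default_rng(0)
noise = rng.standard_normal(2000)                          # unpredictable signal
tone = np.sin(np.linspace(0, 40 * np.pi, 2000))            # highly regular signal
print("entropy (bits):", round(shannon_entropy(noise), 2), round(shannon_entropy(tone), 2))
print("LZ phrases    :", lz_phrase_count(noise), lz_phrase_count(tone))
```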
Dykstra, Andrew R.; Halgren, Eric; Thesen, Thomas; Carlson, Chad E.; Doyle, Werner; Madsen, Joseph R.; Eskandar, Emad N.; Cash, Sydney S.
2011-01-01
The auditory system must constantly decompose the complex mixture of sound arriving at the ear into perceptually independent streams constituting accurate representations of individual sources in the acoustic environment. How the brain accomplishes this task is not well understood. The present study combined a classic behavioral paradigm with direct cortical recordings from neurosurgical patients with epilepsy in order to further describe the neural correlates of auditory streaming. Participants listened to sequences of pure tones alternating in frequency and indicated whether they heard one or two “streams.” The intracranial EEG was simultaneously recorded from sub-dural electrodes placed over temporal, frontal, and parietal cortex. Like healthy subjects, patients heard one stream when the frequency separation between tones was small and two when it was large. Robust evoked-potential correlates of frequency separation were observed over widespread brain areas. Waveform morphology was highly variable across individual electrode sites both within and across gross brain regions. Surprisingly, few evoked-potential correlates of perceptual organization were observed after controlling for physical stimulus differences. The results indicate that the cortical areas engaged during the streaming task are more complex and widespread than has been demonstrated by previous work, and that, by-and-large, correlates of bistability during streaming are probably located on a spatial scale not assessed – or in a brain area not examined – by the present study. PMID:21886615
Electrocorticographic representations of segmental features in continuous speech
Lotte, Fabien; Brumberg, Jonathan S.; Brunner, Peter; Gunduz, Aysegul; Ritaccio, Anthony L.; Guan, Cuntai; Schalk, Gerwin
2015-01-01
Acoustic speech output results from coordinated articulation of dozens of muscles, bones and cartilages of the vocal mechanism. While we commonly take the fluency and speed of our speech productions for granted, the neural mechanisms facilitating the requisite muscular control are not completely understood. Previous neuroimaging and electrophysiology studies of speech sensorimotor control have typically concentrated on speech sounds (i.e., phonemes, syllables and words) in isolation; sentence-length investigations have largely been used to inform coincident linguistic processing. In this study, we examined the neural representations of segmental features (place and manner of articulation, and voicing status) in the context of fluent, continuous speech production. We used recordings from the cortical surface [electrocorticography (ECoG)] to simultaneously evaluate the spatial topography and temporal dynamics of the neural correlates of speech articulation that may mediate the generation of hypothesized gestural or articulatory scores. We found that the representation of place of articulation involved broad networks of brain regions during all phases of speech production: preparation, execution and monitoring. In contrast, manner of articulation and voicing status were dominated by auditory cortical responses after speech had been initiated. These results provide new insight into the articulatory and auditory processes underlying speech production in terms of their motor requirements and acoustic correlates. PMID:25759647
Oscillatory Activity in the Infant Brain and the Representation of Small Numbers
Leung, Sumie; Mareschal, Denis; Rowsell, Renee; Simpson, David; Iaria, Leon; Grbic, Amanda; Kaufman, Jordy
2016-01-01
Gamma-band oscillatory activity (GBA) is an established neural signature of sustained occluded object representation in infants and adults. However, it is not yet known whether the magnitude of GBA in the infant brain reflects the quantity of occluded items held in memory. To examine this, we compared GBA of 6–8 month-old infants during occlusion periods after the representation of two objects vs. that of one object. We found that maintaining a representation of two objects during occlusion resulted in significantly greater GBA relative to maintaining a single object. Further, this enhancement was located in the right occipital region, which is consistent with previous object representation research in adults and infants. We conclude that enhanced GBA reflects neural processes underlying infants’ representation of small numbers. PMID:26903821
Visual brain activity patterns classification with simultaneous EEG-fMRI: A multimodal approach.
Ahmad, Rana Fayyaz; Malik, Aamir Saeed; Kamel, Nidal; Reza, Faruque; Amin, Hafeez Ullah; Hussain, Muhammad
2017-01-01
Classification of visual information from brain activity data is a challenging task. Many studies reported in the literature are based on brain activity patterns using either fMRI or EEG/MEG alone. EEG and fMRI are considered two complementary neuroimaging modalities in terms of the temporal and spatial resolution with which they map brain activity. To obtain high spatial and temporal resolution at the same time, simultaneous EEG-fMRI is a promising approach. In this article, we propose a new method based on simultaneous EEG-fMRI data and a machine learning approach to classify visual brain activity patterns. We acquired EEG-fMRI data simultaneously from ten healthy human participants while showing them visual stimuli. A data fusion approach is used to merge the EEG and fMRI data, and a machine learning classifier is used for classification. Results showed that superior classification performance was achieved with simultaneous EEG-fMRI data compared with EEG or fMRI data alone, and that the multimodal approach improved classification accuracy relative to other approaches reported in the literature. The proposed simultaneous EEG-fMRI approach can thus help predict or fully decode visual brain activity patterns.
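A generic feature-level fusion sketch in the spirit of the approach described, concatenating per-trial EEG and fMRI feature matrices and cross-validating a classifier, is given below; the data are simulated and the fusion scheme is an assumption, not the authors' exact method.

```python
# Generic feature-level fusion sketch: concatenate per-trial EEG and fMRI feature
# matrices and evaluate a cross-validated classifier. All data are simulated
# placeholders; the fusion scheme is illustrative, not the authors' exact method.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_trials = 120
eeg_feats = rng.standard_normal((n_trials, 300))     # e.g. band power per channel
fmri_feats = rng.standard_normal((n_trials, 500))    # e.g. beta estimates per ROI/voxel
y = rng.integers(0, 2, n_trials)                     # two visual stimulus conditions
eeg_feats[y == 1, :20] += 0.4                        # inject condition information
fmri_feats[y == 1, :30] += 0.4

def cv_accuracy(*feature_sets):
    X = np.hstack(feature_sets)
    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    return cross_val_score(clf, X, y, cv=5).mean()

print("EEG only :", round(cv_accuracy(eeg_feats), 2))
print("fMRI only:", round(cv_accuracy(fmri_feats), 2))
print("fused    :", round(cv_accuracy(eeg_feats, fmri_feats), 2))
```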
Degradation of stored movement representations in the Parkinsonian brain and the impact of levodopa.
D'Andrea, Jolyn N A; Haffenden, Angela M; Furtado, Sarah; Suchowersky, Oksana; Goodyear, Bradley G
2013-06-01
Parkinson's disease (PD) results from the depletion of dopamine and other neurotransmitters within the basal ganglia, and is typically characterized by motor impairment (e.g., bradykinesia) and difficulty initiating voluntary movements. Difficulty initiating a movement may result from a deficit in accessing or executing a stored representation of the movement, or having to create a new representation each time a movement is required. To date, it is unclear which may be responsible for movement initiation impairments observed in PD. In this study, we used functional magnetic resonance imaging and a task in which participants passively viewed familiar and unfamiliar graspable objects, with no confounding motor task component. Our results show that the brains of PD patients implicitly analyze familiar graspable objects as if the brain has little or no motor experience with the objects. This was observed as a lack of differential activity within brain regions associated with stored movement representations for familiar objects relative to unfamiliar objects, as well as significantly greater activity for familiar objects when off levodopa relative to on medication. Symptom severity modulated this activity difference within the basal ganglia. Levodopa appears to normalize brain activity, but its effect may be one of attenuation of brain hyperactivity within the basal ganglia network, which is responsible for controlling motor behavior and the integration of visuomotor information. Overall, this study demonstrates that difficulty initiating voluntary movements experienced by PD patients may be the result of degradation in stored representations responsible for the movement. Copyright © 2013 Elsevier Ltd. All rights reserved.
Physically Based Modeling and Simulation with Dynamic Spherical Volumetric Simplex Splines
Tan, Yunhao; Hua, Jing; Qin, Hong
2009-01-01
In this paper, we present a novel computational modeling and simulation framework based on dynamic spherical volumetric simplex splines. The framework can handle the modeling and simulation of genus-zero objects with real physical properties. In this framework, we first develop an accurate and efficient algorithm to reconstruct the high-fidelity digital model of a real-world object with spherical volumetric simplex splines which can represent with accuracy geometric, material, and other properties of the object simultaneously. With the tight coupling of Lagrangian mechanics, the dynamic volumetric simplex splines representing the object can accurately simulate its physical behavior because it can unify the geometric and material properties in the simulation. The visualization can be directly computed from the object’s geometric or physical representation based on the dynamic spherical volumetric simplex splines during simulation without interpolation or resampling. We have applied the framework for biomechanic simulation of brain deformations, such as brain shifting during the surgery and brain injury under blunt impact. We have compared our simulation results with the ground truth obtained through intra-operative magnetic resonance imaging and the real biomechanic experiments. The evaluations demonstrate the excellent performance of our new technique. PMID:20161636
An EEG Finger-Print of fMRI deep regional activation.
Meir-Hasson, Yehudit; Kinreich, Sivan; Podlipsky, Ilana; Hendler, Talma; Intrator, Nathan
2014-11-15
This work introduces a general framework for producing an EEG Finger-Print (EFP) which can be used to predict specific brain activity as measured by fMRI at a given deep region. This new approach allows for improved EEG spatial resolution based on simultaneous fMRI activity measurements. Advanced signal processing and machine learning methods were applied to EEG data acquired simultaneously with fMRI during relaxation training guided by on-line continuous feedback on a changing alpha/theta EEG measure. We focused on demonstrating improved EEG prediction of activation in sub-cortical regions such as the amygdala. Our analysis shows that a ridge regression model based on a time/frequency representation of EEG data from a single electrode can predict amygdala-related activity significantly better than traditional theta/alpha activity sampled from the best electrode and, in about one third of cases, significantly better than a linear combination of frequencies with a pre-defined delay. The far-reaching goal of our approach is to be able to reduce the need for fMRI scanning for probing specific sub-cortical regions such as the amygdala as the basis for brain-training procedures. On the other hand, activity in those regions can be characterized with higher temporal resolution than is obtained by fMRI alone, thus revealing additional information about their processing mode. Copyright © 2013 Elsevier Inc. All rights reserved.
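The regression idea can be sketched as ridge regression from a single-electrode time/frequency representation to a deep-region time course, as below; the spectrogram, target signal and hyperparameters are simulated assumptions, not the published EFP model.

```python
# Ridge-regression sketch: predict a deep-region time course from a single-channel
# EEG time/frequency representation. The spectrogram and the target "amygdala"
# signal are simulated placeholders, not the published EFP model.
import numpy as np
from scipy.signal import spectrogram
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
fs = 250                                               # EEG sampling rate (Hz)
eeg = rng.standard_normal(fs * 600)                    # 10 minutes of one channel
freqs, times, Sxx = spectrogram(eeg, fs=fs, nperseg=2 * fs)

X = np.log(Sxx.T + 1e-12)                              # windows x frequency features
w_true = rng.standard_normal(X.shape[1]) * (freqs < 12)  # pretend low frequencies matter
y = X @ w_true + 0.5 * rng.standard_normal(X.shape[0])   # stand-in deep-region signal

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = RidgeCV(alphas=np.logspace(-2, 4, 13)).fit(X_tr, y_tr)
print("held-out R^2: %.2f" % model.score(X_te, y_te))
```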
Huet, Magali; Dany, Lionel; Apostolidis, Thémistoklis
2016-04-01
The aim of our research is to highlight the role of social representations of the traumatic brain-injured person in the adjustments made by caregivers in building and maintaining quality of care. Twenty-three semi-structured interviews were conducted with nursing assistants and medico-psychological assistants, working in a long-term care facility. The interviews were the subject of a thematic content analysis. The analysis shows the role of representations of the traumatic brain-injured person in the way caregivers explain behaviours and situations and in the orientation of their professional practices. In explaining the inexplicable, caregivers establish a more human relationship through individualized care.
Flexible Coding of Visual Working Memory Representations during Distraction.
Lorenc, Elizabeth S; Sreenivasan, Kartik K; Nee, Derek E; Vandenbroucke, Annelinde R E; D'Esposito, Mark
2018-06-06
Visual working memory (VWM) recruits a broad network of brain regions, including prefrontal, parietal, and visual cortices. Recent evidence supports a "sensory recruitment" model of VWM, whereby precise visual details are maintained in the same stimulus-selective regions responsible for perception. A key question in evaluating the sensory recruitment model is how VWM representations persist through distracting visual input, given that the early visual areas that putatively represent VWM content are susceptible to interference from visual stimulation. To address this question, we used a functional magnetic resonance imaging inverted encoding model approach to quantitatively assess the effect of distractors on VWM representations in early visual cortex and the intraparietal sulcus (IPS), another region previously implicated in the storage of VWM information. This approach allowed us to reconstruct VWM representations for orientation, both before and after visual interference, and to examine whether oriented distractors systematically biased these representations. In our human participants (both male and female), we found that orientation information was maintained simultaneously in early visual areas and IPS in anticipation of possible distraction, and these representations persisted in the absence of distraction. Importantly, early visual representations were susceptible to interference; VWM orientations reconstructed from visual cortex were significantly biased toward distractors, corresponding to a small attractive bias in behavior. In contrast, IPS representations did not show such a bias. These results provide quantitative insight into the effect of interference on VWM representations, and they suggest a dynamic tradeoff between visual and parietal regions that allows flexible adaptation to task demands in service of VWM. SIGNIFICANCE STATEMENT Despite considerable evidence that stimulus-selective visual regions maintain precise visual information in working memory, it remains unclear how these representations persist through subsequent input. Here, we used quantitative model-based fMRI analyses to reconstruct the contents of working memory and examine the effects of distracting input. Although representations in the early visual areas were systematically biased by distractors, those in the intraparietal sulcus appeared distractor-resistant. In contrast, early visual representations were most reliable in the absence of distraction. These results demonstrate the dynamic, adaptive nature of visual working memory processes, and provide quantitative insight into the ways in which representations can be affected by interference. Further, they suggest that current models of working memory should be revised to incorporate this flexibility. Copyright © 2018 the authors 0270-6474/18/385267-10$15.00/0.
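A minimal inverted-encoding-model sketch for orientation, estimating channel-to-voxel weights on training data and inverting them on test data, is shown below; the basis set, noise levels and decoding rule are illustrative assumptions rather than the authors' implementation.

```python
# Minimal inverted-encoding-model sketch for orientation. Voxel responses are
# modeled as weighted sums of orientation-tuned channels; weights are estimated
# on training trials and inverted on test trials. Data and basis are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_chan, n_vox, n_train, n_test = 8, 200, 160, 40
centers = np.arange(0.0, 180.0, 180.0 / n_chan)        # channel preferred orientations

def channel_responses(orientations):
    """Raised-cosine basis functions over the 0-180 deg orientation space."""
    d = (orientations[:, None] - centers[None, :] + 90.0) % 180.0 - 90.0
    return np.cos(np.deg2rad(d)) ** 5

# Simulate voxels as random mixtures of channel responses plus noise.
mixing = rng.standard_normal((n_chan, n_vox))
ori_train = rng.uniform(0, 180, n_train)
ori_test = rng.uniform(0, 180, n_test)
B_train = channel_responses(ori_train) @ mixing + 0.5 * rng.standard_normal((n_train, n_vox))
B_test = channel_responses(ori_test) @ mixing + 0.5 * rng.standard_normal((n_test, n_vox))

# Step 1: estimate channel-to-voxel weights W by least squares (B = C @ W).
W = np.linalg.lstsq(channel_responses(ori_train), B_train, rcond=None)[0]

# Step 2: invert W on test data to recover channel responses, then read out the
# orientation as the preferred orientation of the strongest channel.
C_hat = B_test @ np.linalg.pinv(W)
decoded = centers[C_hat.argmax(axis=1)]
error = np.abs((decoded - ori_test + 90.0) % 180.0 - 90.0)
print("median absolute decoding error: %.1f deg" % np.median(error))
```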
Koster-Hale, Jorie; Bedny, Marina; Saxe, Rebecca
2014-01-01
Blind people's inferences about how other people see provide a window into fundamental questions about the human capacity to think about one another's thoughts. By working with blind individuals, we can ask both what kinds of representations people form about others’ minds, and how much these representations depend on the observer having had similar mental states themselves. Thinking about others’ mental states depends on a specific group of brain regions, including the right temporo-parietal junction (RTPJ). We investigated the representations of others’ mental states in these brain regions, using multivoxel pattern analyses (MVPA). We found that, first, in the RTPJ of sighted adults, the pattern of neural response distinguished the source of the mental state (did the protagonist see or hear something?) but not the valence (did the protagonist feel good or bad?). Second, these neural representations were preserved in congenitally blind adults. These results suggest that the temporo-parietal junction contains explicit, abstract representations of features of others’ mental states, including the perceptual source. The persistence of these representations in congenitally blind adults, who have no first-person experience with sight, provides evidence that these representations emerge even in the absence of first-person perceptual experiences. PMID:24960530
Heiser, Laura M; Berman, Rebecca A; Saunders, Richard C; Colby, Carol L
2005-11-01
With each eye movement, a new image impinges on the retina, yet we do not notice any shift in visual perception. This perceptual stability indicates that the brain must be able to update visual representations to take our eye movements into account. Neurons in the lateral intraparietal area (LIP) update visual representations when the eyes move. The circuitry that supports these updated representations remains unknown, however. In this experiment, we asked whether the forebrain commissures are necessary for updating in area LIP when stimulus representations must be updated from one visual hemifield to the other. We addressed this question by recording from LIP neurons in split-brain monkeys during two conditions: stimulus traces were updated either across or within hemifields. Our expectation was that across-hemifield updating activity in LIP would be reduced or abolished after transection of the forebrain commissures. Our principal finding is that LIP neurons can update stimulus traces from one hemifield to the other even in the absence of the forebrain commissures. This finding provides the first evidence that representations in parietal cortex can be updated without the use of direct cortico-cortical links. The second main finding is that updating activity in LIP is modified in the split-brain monkey: across-hemifield signals are reduced in magnitude and delayed in onset compared with within-hemifield signals, which indicates that the pathways for across-hemifield updating are less effective in the absence of the forebrain commissures. Together these findings reveal a dynamic circuit that contributes to updating spatial representations.
Inter-area correlations in the ventral visual pathway reflect feature integration
Freeman, Jeremy; Donner, Tobias H.; Heeger, David J.
2011-01-01
During object perception, the brain integrates simple features into representations of complex objects. A perceptual phenomenon known as visual crowding selectively interferes with this process. Here, we use crowding to characterize a neural correlate of feature integration. Cortical activity was measured with functional magnetic resonance imaging, simultaneously in multiple areas of the ventral visual pathway (V1–V4 and the visual word form area, VWFA, which responds preferentially to familiar letters), while human subjects viewed crowded and uncrowded letters. Temporal correlations between cortical areas were lower for crowded letters than for uncrowded letters, especially between V1 and VWFA. These differences in correlation were retinotopically specific, and persisted when attention was diverted from the letters. But correlation differences were not evident when we substituted the letters with grating patches that were not crowded under our stimulus conditions. We conclude that inter-area correlations reflect feature integration and are disrupted by crowding. We propose that crowding may perturb the transformations between neural representations along the ventral pathway that underlie the integration of features into objects. PMID:21521832
Basic level category structure emerges gradually across human ventral visual cortex.
Iordan, Marius Cătălin; Greene, Michelle R; Beck, Diane M; Fei-Fei, Li
2015-07-01
Objects can be simultaneously categorized at multiple levels of specificity ranging from very broad ("natural object") to very distinct ("Mr. Woof"), with a mid-level of generality (basic level: "dog") often providing the most cognitively useful distinction between categories. It is unknown, however, how this hierarchical representation is achieved in the brain. Using multivoxel pattern analyses, we examined how well each taxonomic level (superordinate, basic, and subordinate) of real-world object categories is represented across occipitotemporal cortex. We found that, although in early visual cortex objects are best represented at the subordinate level (an effect mostly driven by low-level feature overlap between objects in the same category), this advantage diminishes compared to the basic level as we move up the visual hierarchy, disappearing in object-selective regions of occipitotemporal cortex. This pattern stems from a combined increase in within-category similarity (category cohesion) and between-category dissimilarity (category distinctiveness) of neural activity patterns at the basic level, relative to both subordinate and superordinate levels, suggesting that successive visual areas may be optimizing basic level representations.
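The two quantities contrasted in the abstract, category cohesion and distinctiveness, can be computed from multivoxel patterns roughly as follows; the patterns here are simulated, and the correlation-based definitions are one common choice rather than necessarily the authors' exact metric.

```python
# Sketch of the two quantities contrasted in the abstract: category cohesion
# (mean pattern correlation within a category) and distinctiveness (1 minus the
# mean correlation between categories), computed on simulated voxel patterns.
import numpy as np

rng = np.random.default_rng(0)
n_exemplars, n_vox = 20, 150
prototypes = rng.standard_normal((2, n_vox))            # two basic-level categories
patterns = {c: prototypes[c] + 0.8 * rng.standard_normal((n_exemplars, n_vox))
            for c in (0, 1)}

def mean_cross_corr(A, B, exclude_diagonal=False):
    """Mean Pearson correlation between every row of A and every row of B."""
    R = np.corrcoef(np.vstack([A, B]))[:len(A), len(A):]
    if exclude_diagonal:                                 # drop self-correlations
        R = R[~np.eye(len(A), dtype=bool)]
    return float(R.mean())

cohesion = np.mean([mean_cross_corr(patterns[c], patterns[c], exclude_diagonal=True)
                    for c in (0, 1)])
distinctiveness = 1.0 - mean_cross_corr(patterns[0], patterns[1])
print(f"cohesion = {cohesion:.2f}, distinctiveness = {distinctiveness:.2f}")
```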
Task alters category representations in prefrontal but not high-level visual cortex.
Bugatus, Lior; Weiner, Kevin S; Grill-Spector, Kalanit
2017-07-15
A central question in neuroscience is how cognitive tasks affect category representations across the human brain. Regions in lateral occipito-temporal cortex (LOTC), ventral temporal cortex (VTC), and ventro-lateral prefrontal cortex (VLPFC) constitute the extended "what" pathway, which is considered instrumental for visual category processing. However, it is unknown (1) whether distributed responses across LOTC, VTC, and VLPFC explicitly represent category, task, or some combination of both, and (2) in what way representations across these subdivisions of the extended 'what' pathway may differ. To fill these gaps in knowledge, we scanned 12 participants using fMRI to test the effect of category and task on distributed responses across LOTC, VTC, and VLPFC. Results reveal that task and category modulate responses in high-level visual regions as well as in prefrontal cortex. However, we found fundamentally different types of representations across the brain. Distributed responses in high-level visual regions are more strongly driven by category than task, and exhibit task-independent category representations. In contrast, distributed responses in prefrontal cortex are more strongly driven by task than category, and contain task-dependent category representations. Together, these findings of differential representations across the brain support a new idea that LOTC and VTC maintain stable category representations allowing efficient processing of visual information, while prefrontal cortex contains flexible representations in which category information may emerge only when relevant to the task. Copyright © 2017 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Goodwin, Amanda P.; Gilbert, Jennifer K.; Cho, Sun-Joo; Kearns, Devin M.
2014-01-01
The current study models reader, item, and word contributions to the lexical representations of 39 morphologically complex words for 172 middle school students using a crossed random-effects item response model with multiple outcomes. We report 3 findings. First, results suggest that lexical representations can be characterized by separate but…
Sparse representation of whole-brain fMRI signals for identification of functional networks.
Lv, Jinglei; Jiang, Xi; Li, Xiang; Zhu, Dajiang; Chen, Hanbo; Zhang, Tuo; Zhang, Shu; Hu, Xintao; Han, Junwei; Huang, Heng; Zhang, Jing; Guo, Lei; Liu, Tianming
2015-02-01
There have been several recent studies that used sparse representation for fMRI signal analysis and activation detection based on the assumption that each voxel's fMRI signal is linearly composed of sparse components. Previous studies have employed sparse coding to model functional networks in various modalities and scales. These prior contributions inspired the exploration of whether/how sparse representation can be used to identify functional networks in a voxel-wise way and on the whole brain scale. This paper presents a novel, alternative methodology of identifying multiple functional networks via sparse representation of whole-brain task-based fMRI signals. Our basic idea is that all fMRI signals within the whole brain of one subject are aggregated into a big data matrix, which is then factorized into an over-complete dictionary basis matrix and a reference weight matrix via an effective online dictionary learning algorithm. Our extensive experimental results have shown that this novel methodology can uncover multiple functional networks that can be well characterized and interpreted in spatial, temporal and frequency domains based on current brain science knowledge. Importantly, these well-characterized functional network components are quite reproducible in different brains. In general, our methods offer a novel, effective and unified solution to multiple fMRI data analysis tasks including activation detection, de-activation detection, and functional network identification. Copyright © 2014 Elsevier B.V. All rights reserved.
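A compact sketch of the factorization described, arranging all voxel time series into one matrix and factoring it into a temporal dictionary and sparse spatial coefficient maps with online dictionary learning, is given below; the data and dimensions are simulated, and scikit-learn's learner stands in for the authors' own algorithm.

```python
# Compact sketch of the factorization described: all voxel time series form a
# matrix S (time x voxels), factored into a temporal dictionary D and sparse
# spatial coefficient maps A. Simulated data; scikit-learn's online dictionary
# learner stands in for the authors' own algorithm.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
n_time, n_vox, n_networks = 200, 5000, 3

# Toy data: three "networks", each a time course active in a subset of voxels.
timecourses = rng.standard_normal((n_time, n_networks))
maps = np.zeros((n_networks, n_vox))
for k in range(n_networks):
    maps[k, rng.choice(n_vox, 400, replace=False)] = 1.0
S = timecourses @ maps + 0.5 * rng.standard_normal((n_time, n_vox))

# Each voxel's time series is one training sample; the learned atoms are temporal
# components and the sparse codes are the corresponding spatial maps.
learner = MiniBatchDictionaryLearning(n_components=10, alpha=1.0,
                                      batch_size=256, random_state=0)
A = learner.fit_transform(S.T)            # (n_vox, n_components) spatial coefficients
D = learner.components_.T                 # (n_time, n_components) temporal dictionary

# Check how well the simulated network maps are recovered by some component.
cross = np.corrcoef(np.vstack([maps, A.T]))[:n_networks, n_networks:]
print("best |correlation| per simulated network:", np.nanmax(np.abs(cross), axis=1).round(2))
```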
Relating brain signal variability to knowledge representation.
Heisz, Jennifer J; Shedden, Judith M; McIntosh, Anthony R
2012-11-15
We assessed the hypothesis that brain signal variability is a reflection of functional network reconfiguration during memory processing. In the present experiments, we use multiscale entropy to capture the variability of human electroencephalogram (EEG) while manipulating the knowledge representation associated with faces stored in memory. Across two experiments, we observed increased variability as a function of greater knowledge representation. In Experiment 1, individuals with greater familiarity for a group of famous faces displayed more brain signal variability. In Experiment 2, brain signal variability increased with learning after multiple experimental exposures to previously unfamiliar faces. The results demonstrate that variability increases with face familiarity; cognitive processes during the perception of familiar stimuli may engage a broader network of regions, which manifests as higher complexity/variability in spatial and temporal domains. In addition, effects of repetition suppression on brain signal variability were observed, and the pattern of results is consistent with a selectivity model of neural adaptation. Crown Copyright © 2012. Published by Elsevier Inc. All rights reserved.
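Multiscale entropy as used here combines coarse-graining with sample entropy; a brute-force sketch is shown below, with parameter values (m = 2, r = 0.2 SD) that are common defaults rather than necessarily those of the study.

```python
# Brute-force multiscale-entropy sketch: coarse-grain the signal at several scales
# and compute sample entropy at each. Parameters m = 2 and r = 0.2 SD are common
# defaults, not necessarily those used in the study.
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy of a 1-D signal; tolerance r is a fraction of the signal SD."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()

    def match_count(length):
        # All templates of the given length, compared pairwise (Chebyshev distance).
        templates = np.array([x[i:i + length] for i in range(len(x) - length)])
        dists = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        return (dists <= tol).sum() - len(templates)   # exclude self-matches

    B, A = match_count(m), match_count(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def multiscale_entropy(x, scales=(1, 2, 4, 8)):
    values = []
    for s in scales:
        n = (len(x) // s) * s
        coarse = np.asarray(x[:n]).reshape(-1, s).mean(axis=1)   # coarse-graining
        values.append(sample_entropy(coarse))
    return values

rng = np.random.default_rng(0)
print("white noise:", np.round(multiscale_entropy(rng.standard_normal(1000)), 2))
print("sine wave  :", np.round(multiscale_entropy(np.sin(np.linspace(0, 60 * np.pi, 1000))), 2))
```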
A Neural Mechanism of Social Categorization.
Stolier, Ryan M; Freeman, Jonathan B
2017-06-07
Humans readily sort one another into multiple social categories from mere facial features. However, the facial features used to do so are not always clear-cut because they can be associated with opponent categories (e.g., feminine male face). Recently, computational models and behavioral studies have provided indirect evidence that categorizing such faces is accomplished through dynamic competition between parallel, coactivated social categories that resolve into a stable categorical percept. Using a novel paradigm combining fMRI with real-time hand tracking, the present study examined how the brain translates diverse social cues into categorical percepts. Participants (male and female) categorized faces varying in gender and racial typicality. When categorizing atypical faces, participants' hand movements were simultaneously attracted toward the unselected category response, indexing the degree to which such faces activated the opposite category in parallel. Multivoxel pattern analyses (MVPAs) provided evidence that such social category coactivation manifested in neural patterns of the right fusiform cortex. The extent to which the hand was simultaneously attracted to the opposite gender or race category response option corresponded to increased neural pattern similarity with the average pattern associated with that category, which in turn associated with stronger engagement of the dorsal anterior cingulate cortex. The findings point to a model of social categorization in which occasionally conflicting facial features are resolved through competition between coactivated ventral-temporal cortical representations with the assistance of conflict-monitoring regions. More broadly, the results offer a promising multimodal paradigm to investigate the neural basis of "hidden", temporarily active representations in the service of a broad range of cognitive processes. SIGNIFICANCE STATEMENT Individuals readily sort one another into social categories (e.g., sex, race), which have important consequences for a variety of interpersonal behaviors. However, individuals routinely encounter faces that contain diverse features associated with multiple categories (e.g., feminine male face). Using a novel paradigm combining neuroimaging with hand tracking, the present research sought to address how the brain comes to arrive at stable social categorizations from multiple social cues. The results provide evidence that opponent social categories coactivate in face-processing regions, which compete and may resolve into an eventual stable categorization with the assistance of conflict-monitoring regions. Therefore, the findings provide a neural mechanism through which the brain may translate inherently diverse social cues into coherent categorizations of other people. Copyright © 2017 the authors 0270-6474/17/375711-11$15.00/0.
Zhang, Li; Xin, Ziqiang; Feng, Tingyong; Chen, Yinghe; Szűcs, Denes
2018-03-01
Recent studies have highlighted the fact that some tasks used to study symbolic number representations are confounded by judgments about physical similarity. Here, we investigated whether the contribution of physical similarity and numerical representation differed in the often-used symbolic same-different, numerical comparison, physical comparison, and priming tasks. Experiment 1 showed that subjective physical similarity was the best predictor of participants' performance in the same-different task, regardless of simultaneous or sequential presentation. Furthermore, the contribution of subjective physical similarity was larger in a simultaneous presentation than in a sequential presentation. Experiment 2 showed that only numerical representation was involved in numerical comparison. Experiment 3 showed that both subjective physical similarity and numerical representation contributed to participants' physical comparison performance. Finally, only numerical representation contributed to participants' performance in a priming task as revealed by Experiment 4. Taken together, the contribution of physical similarity and numerical representation depends on task demands. Performance primarily seems to rely on numerical properties in tasks that require explicit quantitative comparison judgments (physical or numerical), while physical stimulus properties exert an effect in the same-different task.
Tuning the developing brain to social signals of emotions
Leppänen, Jukka M.; Nelson, Charles A.
2010-01-01
Humans in diverse cultures develop a similar capacity to recognize the emotional signals of different facial expressions. This capacity is mediated by a brain network that involves emotion-related brain circuits and higher-level visual representation areas. Recent studies suggest that the key components of this network begin to emerge early in life. The studies also suggest that initial biases in emotion-related brain circuits and the early coupling of these circuits and cortical perceptual areas provide a foundation for a rapid acquisition of representations of those facial features that denote specific emotions. PMID:19050711
Simultaneous fast measurement of circuit dynamics at multiple sites across the mammalian brain
Kim, Christina K; Yang, Samuel J; Pichamoorthy, Nandini; Young, Noah P; Kauvar, Isaac; Jennings, Joshua H; Lerner, Talia N; Berndt, Andre; Lee, Soo Yeun; Ramakrishnan, Charu; Davidson, Thomas J; Inoue, Masatoshi; Bito, Haruhiko; Deisseroth, Karl
2017-01-01
Real-time activity measurements from multiple specific cell populations and projections are likely to be important for understanding the brain as a dynamical system. Here we developed frame-projected independent-fiber photometry (FIP), which we used to record fluorescence activity signals from many brain regions simultaneously in freely behaving mice. We explored the versatility of the FIP microscope by quantifying real-time activity relationships among many brain regions during social behavior, simultaneously recording activity along multiple axonal pathways during sensory experience, performing simultaneous two-color activity recording, and applying optical perturbation tuned to elicit dynamics that match naturally occurring patterns observed during behavior. PMID:26878381
Action-outcome learning and prediction shape the window of simultaneity of audiovisual outcomes.
Desantis, Andrea; Haggard, Patrick
2016-08-01
To form a coherent representation of the objects around us, the brain must group the different sensory features composing these objects. Here, we investigated whether actions contribute in this grouping process. In particular, we assessed whether action-outcome learning and prediction contribute to audiovisual temporal binding. Participants were presented with two audiovisual pairs: one pair was triggered by a left action, and the other by a right action. In a later test phase, the audio and visual components of these pairs were presented at different onset times. Participants judged whether they were simultaneous or not. To assess the role of action-outcome prediction on audiovisual simultaneity, each action triggered either the same audiovisual pair as in the learning phase ('predicted' pair), or the pair that had previously been associated with the other action ('unpredicted' pair). We found the time window within which auditory and visual events appeared simultaneous increased for predicted compared to unpredicted pairs. However, no change in audiovisual simultaneity was observed when audiovisual pairs followed visual cues, rather than voluntary actions. This suggests that only action-outcome learning promotes temporal grouping of audio and visual effects. In a second experiment we observed that changes in audiovisual simultaneity do not only depend on our ability to predict what outcomes our actions generate, but also on learning the delay between the action and the multisensory outcome. When participants learned that the delay between action and audiovisual pair was variable, the window of audiovisual simultaneity for predicted pairs increased, relative to a fixed action-outcome pair delay. This suggests that participants learn action-based predictions of audiovisual outcome, and adapt their temporal perception of outcome events based on such predictions. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Marshall, Peter J.; Meltzoff, Andrew N.
2015-01-01
Researchers have examined representations of the body in the adult brain, but relatively little attention has been paid to ontogenetic aspects of neural body maps in human infants. Novel applications of methods for recording brain activity in infants are delineating cortical body maps in the first months of life. Body maps may facilitate infants’ registration of similarities between self and other—an ability that is foundational to developing social cognition. Alterations in interpersonal aspects of body representations might also contribute to social deficits in certain neurodevelopmental disorders. PMID:26231760
General methodology for simultaneous representation and discrimination of multiple object classes
NASA Astrophysics Data System (ADS)
Talukder, Ashit; Casasent, David P.
1998-03-01
We present a new general method for linear and nonlinear feature extraction for simultaneous representation and classification. We call this approach the maximum representation and discrimination feature (MRDF) method. We develop a novel nonlinear eigenfeature extraction technique to represent data with closed-form solutions and use it to derive a nonlinear MRDF algorithm. Results of the MRDF method on synthetic databases are shown and compared with results from standard Fukunaga-Koontz transform and Fisher discriminant function methods. The method is also applied to an automated product inspection problem and to classification and pose estimation of two similar objects under 3D aspect angle variations.
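As a rough illustration of the discriminant baseline named in this abstract (the Fisher discriminant, not the MRDF algorithm itself, whose details are not given here), the following Python sketch compresses synthetic two-class data with PCA and then applies a linear discriminant; all dimensions and data are invented for illustration.

    # Minimal sketch of the Fisher discriminant baseline mentioned above
    # (not the MRDF algorithm itself); data and dimensions are made up.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0.0, 1.0, (100, 20)),      # class 0 samples
                   rng.normal(1.5, 1.0, (100, 20))])     # class 1 samples
    y = np.repeat([0, 1], 100)

    # Representation step: compress the data with PCA (a stand-in for the
    # representation objective of MRDF).
    X_rep = PCA(n_components=5).fit_transform(X)

    # Discrimination step: Fisher/linear discriminant on the compressed data.
    lda = LinearDiscriminantAnalysis().fit(X_rep, y)
    print("training accuracy:", lda.score(X_rep, y))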
Orlov, Tanya; Zohary, Ehud
2018-01-17
We typically recognize visual objects using the spatial layout of their parts, which are present simultaneously on the retina. Therefore, shape extraction is based on integration of the relevant retinal information over space. The lateral occipital complex (LOC) can represent shape faithfully in such conditions. However, integration over time is sometimes required to determine object shape. To study shape extraction through temporal integration of successive partial shape views, we presented human participants (both men and women) with artificial shapes that moved behind a narrow vertical or horizontal slit. Only a tiny fraction of the shape was visible at any instant at the same retinal location. However, observers perceived a coherent whole shape instead of a jumbled pattern. Using fMRI and multivoxel pattern analysis, we searched for brain regions that encode temporally integrated shape identity. We further required that the representation of shape should be invariant to changes in the slit orientation. We show that slit-invariant shape information is most accurate in the LOC. Importantly, the slit-invariant shape representations matched the conventional whole-shape representations assessed during full-image runs. Moreover, when the same slit-dependent shape slivers were shuffled, thereby preventing their spatiotemporal integration, slit-invariant shape information was reduced dramatically. The slit-invariant representation of the various shapes also mirrored the structure of shape perceptual space as assessed by perceptual similarity judgment tests. Therefore, the LOC is likely to mediate temporal integration of slit-dependent shape views, generating a slit-invariant whole-shape percept. These findings provide strong evidence for a global encoding of shape in the LOC regardless of integration processes required to generate the shape percept. SIGNIFICANCE STATEMENT Visual objects are recognized through spatial integration of features available simultaneously on the retina. The lateral occipital complex (LOC) represents shape faithfully in such conditions even if the object is partially occluded. However, shape must sometimes be reconstructed over both space and time. Such is the case in anorthoscopic perception, when an object is moving behind a narrow slit. In this scenario, spatial information is limited at any moment so the whole-shape percept can only be inferred by integration of successive shape views over time. We find that LOC carries shape-specific information recovered using such temporal integration processes. The shape representation is invariant to slit orientation and is similar to that evoked by a fully viewed image. Existing models of object recognition lack such capabilities. Copyright © 2018 the authors 0270-6474/18/380659-20$15.00/0.
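The cross-classification logic described here (train a shape decoder on patterns from one slit orientation, test it on the other) can be sketched as follows; the voxel patterns are simulated stand-ins, not the study's fMRI data, and the classifier choice is an assumption.

    # Hedged sketch of slit-invariant cross-decoding: train on one slit
    # orientation, test on the other. Patterns are synthetic.
    import numpy as np
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(1)
    n_voxels, n_trials, n_shapes = 200, 60, 4
    shape_templates = rng.normal(0, 1, (n_shapes, n_voxels))

    def simulate_runs(noise=1.0):
        labels = rng.integers(0, n_shapes, n_trials)
        patterns = shape_templates[labels] + rng.normal(0, noise, (n_trials, n_voxels))
        return patterns, labels

    X_vert, y_vert = simulate_runs()   # vertical-slit runs
    X_horz, y_horz = simulate_runs()   # horizontal-slit runs

    clf = LinearSVC(dual=False).fit(X_vert, y_vert)
    print("cross-orientation decoding accuracy:", clf.score(X_horz, y_horz))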
'Where' and 'what' in the whisker sensorimotor system.
Diamond, Mathew E; von Heimendahl, Moritz; Knutsen, Per Magne; Kleinfeld, David; Ahissar, Ehud
2008-08-01
In the visual system of primates, different neuronal pathways are specialized for processing information about the spatial coordinates of objects and their identity - that is, 'where' and 'what'. By contrast, rats and other nocturnal animals build up a neuronal representation of 'where' and 'what' by seeking out and palpating objects with their whiskers. We present recent evidence about how the brain constructs a representation of the surrounding world through whisker-mediated sense of touch. While considerable knowledge exists about the representation of the physical properties of stimuli - like texture, shape and position - we know little about how the brain represents their meaning. Future research may elucidate this and show how the transformation of one representation to another is achieved.
Simultaneous Real-Time Monitoring of Multiple Cortical Systems
Gupta, Disha; Hill, N. Jeremy; Brunner, Peter; Gunduz, Aysegul; Ritaccio, Anthony L.; Schalk, Gerwin
2014-01-01
Objective Real-time monitoring of the brain is potentially valuable for performance monitoring, communication, training or rehabilitation. In natural situations, the brain performs a complex mix of various sensory, motor, or cognitive functions. Thus, real-time brain monitoring would be most valuable if (a) it could decode information from multiple brain systems simultaneously, and (b) this decoding of each brain system were robust to variations in the activity of other (unrelated) brain systems. Previous studies showed that it is possible to decode some information from different brain systems in retrospect and/or in isolation. In our study, we set out to determine whether it is possible to simultaneously decode important information about a user from different brain systems in real time, and to evaluate the impact of concurrent activity in different brain systems on decoding performance. Approach We study these questions using electrocorticographic (ECoG) signals recorded in humans. We first document procedures for generating stable decoding models given little training data, and then report their use for offline and for real-time decoding from 12 subjects (6 for offline parameter optimization, 6 for online experimentation). The subjects engage in tasks that involve movement intention, movement execution and auditory functions, separately, and then simultaneously. Main results Our real-time results demonstrate that our system can identify intention and movement periods in single trials with an accuracy of 80.4% and 86.8%, respectively (where 50% would be expected by chance). Simultaneously, the decoding of the power envelope of an auditory stimulus resulted in an average correlation coefficient of 0.37 between the actual and decoded power envelope. These decoders were trained separately and executed simultaneously in real time. Significance This study yielded the first demonstration that it is possible to decode simultaneously the functional activity of multiple independent brain systems. Our comparison of univariate and multivariate decoding strategies, and our analysis of the influence of their decoding parameters, provides benchmarks and guidelines for future research on this topic. PMID:25080161
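A minimal sketch of the envelope-decoding evaluation reported above, assuming simulated band-power features and a ridge-regression decoder rather than the authors' actual pipeline; the correlation it prints is purely illustrative.

    # Decode a continuous auditory power envelope from simulated channel
    # features and report the Pearson correlation between actual and
    # decoded envelopes (the evaluation measure quoted in the abstract).
    import numpy as np
    from sklearn.linear_model import Ridge
    from scipy.stats import pearsonr

    rng = np.random.default_rng(2)
    n_samples, n_channels = 2000, 32
    features = rng.normal(0, 1, (n_samples, n_channels))   # e.g. high-gamma power
    weights = rng.normal(0, 1, n_channels)
    envelope = features @ weights + rng.normal(0, 3, n_samples)  # auditory envelope

    split = n_samples // 2
    decoder = Ridge(alpha=1.0).fit(features[:split], envelope[:split])
    decoded = decoder.predict(features[split:])
    r, _ = pearsonr(envelope[split:], decoded)
    print("envelope decoding correlation:", round(r, 2))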
Finding Imaging Patterns of Structural Covariance via Non-Negative Matrix Factorization
Sotiras, Aristeidis; Resnick, Susan M.; Davatzikos, Christos
2015-01-01
In this paper, we investigate the use of Non-Negative Matrix Factorization (NNMF) for the analysis of structural neuroimaging data. The goal is to identify the brain regions that co-vary across individuals in a consistent way, hence potentially being part of underlying brain networks or otherwise influenced by underlying common mechanisms such as genetics and pathologies. NNMF offers a directly data-driven way of extracting relatively localized co-varying structural regions, thereby transcending limitations of Principal Component Analysis (PCA), Independent Component Analysis (ICA) and other related methods that tend to produce dispersed components of positive and negative loadings. In particular, leveraging upon the well known ability of NNMF to produce parts-based representations of image data, we derive decompositions that partition the brain into regions that vary in consistent ways across individuals. Importantly, these decompositions achieve dimensionality reduction via highly interpretable ways and generalize well to new data as shown via split-sample experiments. We empirically validate NNMF in two data sets: i) a Diffusion Tensor (DT) mouse brain development study, and ii) a structural Magnetic Resonance (sMR) study of human brain aging. We demonstrate the ability of NNMF to produce sparse parts-based representations of the data at various resolutions. These representations seem to follow what we know about the underlying functional organization of the brain and also capture some pathological processes. Moreover, we show that these low dimensional representations favorably compare to descriptions obtained with more commonly used matrix factorization methods like PCA and ICA. PMID:25497684
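The core decomposition step can be illustrated with scikit-learn's NMF on a made-up subjects-by-voxels matrix; the extracted components then play the role of the parts-based covarying regions described above. This is a sketch under invented dimensions, not the authors' implementation.

    # Factor a non-negative subjects-by-voxels matrix into subject loadings
    # and spatial components (parts-based patterns). Data are illustrative.
    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(3)
    n_subjects, n_voxels, n_components = 50, 5000, 10
    X = rng.random((n_subjects, n_voxels))        # e.g. gray-matter density maps

    nmf = NMF(n_components=n_components, init='nndsvda', max_iter=500)
    loadings = nmf.fit_transform(X)               # subject-wise coefficients
    components = nmf.components_                  # parts-based spatial patterns
    print(loadings.shape, components.shape)       # (50, 10) (10, 5000)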
On characterizing population commonalities and subject variations in brain networks.
Ghanbari, Yasser; Bloy, Luke; Tunc, Birkan; Shankar, Varsha; Roberts, Timothy P L; Edgar, J Christopher; Schultz, Robert T; Verma, Ragini
2017-05-01
Brain networks based on resting state connectivity as well as inter-regional anatomical pathways obtained using diffusion imaging have provided insight into pathology and development. Such work has underscored the need for methods that can extract sub-networks that can accurately capture the connectivity patterns of the underlying population while simultaneously describing the variation of sub-networks at the subject level. We have designed a multi-layer graph clustering method that extracts clusters of nodes, called 'network hubs', which display higher levels of connectivity within the cluster than to the rest of the brain. The method determines an atlas of network hubs that describes the population, as well as weights that characterize subject-wise variation in terms of within- and between-hub connectivity. This lowers the dimensionality of brain networks, thereby providing a representation amenable to statistical analyses. The applicability of the proposed technique is demonstrated by extracting an atlas of network hubs for a population of typically developing controls (TDCs) as well as children with autism spectrum disorder (ASD), and using the structural and functional networks of a population to determine the subject-level variation of these hubs and their inter-connectivity. These hubs are then used to compare ASD and TDCs. Our method is generalizable to any population whose connectivity (structural or functional) can be captured via non-negative network graphs. Copyright © 2015 Elsevier B.V. All rights reserved.
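The authors' multi-layer clustering algorithm is not reproduced here, but the general idea of extracting clusters of densely inter-connected nodes from a weighted connectivity graph can be sketched with standard community detection in networkx; the connectivity matrix below is random and the method is a stand-in, not the paper's.

    # Extract clusters of strongly inter-connected regions from a weighted
    # connectivity graph using modularity-based community detection.
    import numpy as np
    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    rng = np.random.default_rng(4)
    n_regions = 30
    W = rng.random((n_regions, n_regions))
    W = np.triu(W, 1) + np.triu(W, 1).T            # symmetric, non-negative weights

    G = nx.from_numpy_array(W)
    communities = greedy_modularity_communities(G, weight='weight')
    for i, c in enumerate(communities):
        print(f"cluster {i}: regions {sorted(c)}")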
Connectivity in the human brain dissociates entropy and complexity of auditory inputs.
Nastase, Samuel A; Iacovella, Vittorio; Davis, Ben; Hasson, Uri
2015-03-01
Complex systems are described according to two central dimensions: (a) the randomness of their output, quantified via entropy; and (b) their complexity, which reflects the organization of a system's generators. Whereas some approaches hold that complexity can be reduced to uncertainty or entropy, an axiom of complexity science is that signals with very high or very low entropy are generated by relatively non-complex systems, while complex systems typically generate outputs with entropy peaking between these two extremes. In understanding their environment, individuals would benefit from coding for both input entropy and complexity; entropy indexes uncertainty and can inform probabilistic coding strategies, whereas complexity reflects a concise and abstract representation of the underlying environmental configuration, which can serve independent purposes, e.g., as a template for generalization and rapid comparisons between environments. Using functional neuroimaging, we demonstrate that, in response to passively processed auditory inputs, functional integration patterns in the human brain track both the entropy and complexity of the auditory signal. Connectivity between several brain regions scaled monotonically with input entropy, suggesting sensitivity to uncertainty, whereas connectivity between other regions tracked entropy in a convex manner consistent with sensitivity to input complexity. These findings suggest that the human brain simultaneously tracks the uncertainty of sensory data and effectively models their environmental generators. Copyright © 2014. Published by Elsevier Inc.
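The entropy dimension referred to above can be made concrete with a small sketch that computes the Shannon entropy of a discrete auditory token sequence; the study's complexity measure is a separate construct and is not shown here.

    # Shannon entropy of the distribution of tokens in a discrete sequence.
    import numpy as np
    from collections import Counter

    def shannon_entropy(sequence):
        counts = np.array(list(Counter(sequence).values()), dtype=float)
        p = counts / counts.sum()
        return -np.sum(p * np.log2(p))

    low_entropy_seq  = "AAAAABAAAAABAAAA"        # highly regular input
    high_entropy_seq = "ABCDGAFEBDCGFAEB"        # near-uniform input
    print(shannon_entropy(low_entropy_seq), shannon_entropy(high_entropy_seq))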
Homological scaffolds of brain functional networks
Petri, G.; Expert, P.; Turkheimer, F.; Carhart-Harris, R.; Nutt, D.; Hellyer, P. J.; Vaccarino, F.
2014-01-01
Networks, as efficient representations of complex systems, have appealed to scientists for a long time and now permeate many areas of science, including neuroimaging (Bullmore and Sporns 2009 Nat. Rev. Neurosci. 10, 186–198. (doi:10.1038/nrn2618)). Traditionally, the structure of complex networks has been studied through their statistical properties and metrics concerned with node and link properties, e.g. degree-distribution, node centrality and modularity. Here, we study the characteristics of functional brain networks at the mesoscopic level from a novel perspective that highlights the role of inhomogeneities in the fabric of functional connections. This can be done by focusing on the features of a set of topological objects—homological cycles—associated with the weighted functional network. We leverage the detected topological information to define the homological scaffolds, a new set of objects designed to represent compactly the homological features of the correlation network and simultaneously make their homological properties amenable to networks theoretical methods. As a proof of principle, we apply these tools to compare resting-state functional brain activity in 15 healthy volunteers after intravenous infusion of placebo and psilocybin—the main psychoactive component of magic mushrooms. The results show that the homological structure of the brain's functional patterns undergoes a dramatic change post-psilocybin, characterized by the appearance of many transient structures of low stability and of a small number of persistent ones that are not observed in the case of placebo. PMID:25401177
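A hedged sketch of the first step behind such an analysis, assuming the third-party ripser package is available (this is not the authors' pipeline): convert a functional-connectivity matrix into a distance matrix and compute persistent homology; construction of the scaffolds themselves is omitted.

    # Correlation matrix -> distance matrix -> H1 persistence diagram.
    import numpy as np
    from ripser import ripser   # assumption: ripser.py is installed

    rng = np.random.default_rng(5)
    ts = rng.normal(0, 1, (200, 20))             # time points x brain regions
    corr = np.corrcoef(ts.T)                     # functional connectivity
    dist = np.sqrt(2.0 * (1.0 - corr))           # correlation -> distance
    np.fill_diagonal(dist, 0.0)

    dgms = ripser(dist, distance_matrix=True, maxdim=1)['dgms']
    print("number of H1 cycles:", len(dgms[1]))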
Signal Sampling for Efficient Sparse Representation of Resting State FMRI Data
Ge, Bao; Makkie, Milad; Wang, Jin; Zhao, Shijie; Jiang, Xi; Li, Xiang; Lv, Jinglei; Zhang, Shu; Zhang, Wei; Han, Junwei; Guo, Lei; Liu, Tianming
2015-01-01
As brain imaging data such as fMRI grow explosively in size, they provide us with unprecedented and abundant information about the brain. Reducing the size of fMRI data without losing much information is therefore an increasingly pressing issue. Recent studies have addressed this with dictionary learning and sparse representation methods; however, their computational complexity is still high, which hampers the wider application of sparse representation to large-scale fMRI datasets. To address this problem effectively, this work proposes to represent the resting state fMRI (rs-fMRI) signals of a whole brain via a statistical sampling based sparse representation. First, we sampled the whole brain’s signals via different sampling methods; the sampled signals were then aggregated into an input data matrix to learn a dictionary; finally, this dictionary was used to sparsely represent the whole brain’s signals and identify the resting state networks. Comparative experiments demonstrate that the proposed signal sampling framework speeds up the reconstruction of concurrent brain networks by a factor of ten without losing much information. Experiments on the 1000 Functional Connectomes Project further demonstrate its effectiveness and superiority. PMID:26646924
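The sample-then-sparse-code idea can be sketched with scikit-learn's dictionary learning on random stand-in signals; the sampling fraction and dictionary size below are illustrative assumptions, not the paper's settings.

    # Learn the dictionary on a random subsample of voxel time series, then
    # sparse-code every voxel with it. Signals are random placeholders.
    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning

    rng = np.random.default_rng(6)
    n_voxels, n_timepoints, n_atoms = 20000, 150, 50
    signals = rng.normal(0, 1, (n_voxels, n_timepoints))

    sample_idx = rng.choice(n_voxels, size=2000, replace=False)   # ~10% sampling
    dico = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0,
                                       transform_algorithm='omp')
    dico.fit(signals[sample_idx])                 # learn dictionary on the sample
    codes = dico.transform(signals)               # sparse codes for all voxels
    print(codes.shape)                            # (20000, 50)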
Cognitive architecture of perceptual organization: from neurons to gnosons.
van der Helm, Peter A
2012-02-01
What, if anything, is cognitive architecture and how is it implemented in neural architecture? Focusing on perceptual organization, this question is addressed by way of a pluralist approach which, supported by metatheoretical considerations, combines complementary insights from representational, connectionist, and dynamic systems approaches to cognition. This pluralist approach starts from a representationally inspired model which implements the intertwined but functionally distinguishable subprocesses of feedforward feature encoding, horizontal feature binding, and recurrent feature selection. As sustained by a review of neuroscientific evidence, these are the subprocesses that are believed to take place in the visual hierarchy in the brain. Furthermore, the model employs a special form of processing, called transparallel processing, whose neural signature is proposed to be gamma-band synchronization in transient horizontal neural assemblies. In neuroscience, such assemblies are believed to mediate binding of similar features. Their formal counterparts in the model are special input-dependent distributed representations, called hyperstrings, which allow many similar features to be processed in a transparallel fashion, that is, simultaneously as if only one feature were concerned. This form of processing does justice to both the high combinatorial capacity and the high speed of the perceptual organization process. A naturally following proposal is that those temporarily synchronized neural assemblies are "gnosons", that is, constituents of flexible self-organizing cognitive architecture in between the relatively rigid level of neurons and the still elusive level of consciousness.
Where are my hands? Influence of limb posture on tactile extinction.
Auclair, Laurent; Barra, Julien; Raibaut, Patrick
2012-05-01
Tactile localization on the skin involves both a somatotopic and a postural schema (body-schema) representation. The present study determines the extent to which body posture influences tactile perception in right-brain-damaged patients. In a first set of experiments, patients were asked to detect single tactile stimulation delivered to their left or right hands or to both hands simultaneously (double stimulation) in different arm postures. Only patients who had no difficulty localizing single and double tactile stimulations when their hands were placed in anatomic position were tested. Participants' hands were crossed, one over the other, and the tactile stimuli were delivered either to the hand (beyond the crossing point, Experiment 1) or to the forearm (before the crossing point, Experiment 2). In Experiment 3, the left hand was placed in the right hemispace and the right hand in the left hemispace without crossing over (opposite condition). In a second set of experiments, patients were asked to detect stimulation delivered to the forefinger. The fingers were crossed, one over the other at the level of the middle phalanx, and stimuli were delivered either beyond or before the crossing point. In all experimental conditions, control participants performed at ceiling. We observed a left-hand tactile extinction on double stimulation in the crossed condition. These results suggest that tactile stimuli can be encoded based on multiple specific body-part representations rather than on an integrated body-schema representation.
Gratton, Gabriele
2018-03-01
Here, I propose a view of the architecture of the human information processing system, and of how it can be adapted to changing task demands (which is the hallmark of cognitive control). This view is informed by an interpretation of brain activity as reflecting the excitability level of neural representations, encoding not only stimuli and temporal contexts, but also action plans and task goals. The proposed cognitive architecture includes three types of circuits: open circuits, involved in feed-forward processing such as that connecting stimuli with responses and characterized by brief, transient brain activity; and two types of closed circuits, positive feedback circuits (characterized by sustained, high-frequency oscillatory activity), which help select and maintain representations, and negative feedback circuits (characterized by brief, low-frequency oscillatory bursts), which are instead associated with changes in representations. Feed-forward activity is primarily responsible for the spread of activation along the information processing system. Oscillatory activity, instead, controls this spread. Sustained oscillatory activity due to both local cortical circuits (gamma) and longer corticothalamic circuits (alpha and beta) allows for the selection of individuated representations. Through the interaction of these circuits, it also allows for the preservation of representations across different temporal spans (sensory and working memory) and their spread across the brain. In contrast, brief bursts of oscillatory activity, generated by novel and/or conflicting information, lead to the interruption of sustained oscillatory activity and promote the generation of new representations. I discuss how this framework can account for a number of psychological and behavioral phenomena. © 2017 Society for Psychophysiological Research.
Schmidt, Timo Torsten; Blankenburg, Felix
2018-05-31
Working memory (WM) studies have been essential for ascertaining how the brain flexibly handles mentally represented information in the absence of sensory stimulation. Most studies on the memory of sensory stimulus features have focused, however, on the visual domain. Here, we report a human WM study in the tactile modality where participants had to memorize the spatial layout of patterned Braille-like stimuli presented to the index finger. We used a whole-brain searchlight approach in combination with multi-voxel pattern analysis (MVPA) to investigate tactile WM representations without a priori assumptions about which brain regions code tactospatial information. Our analysis revealed that posterior and parietal cortices, as well as premotor regions, retained information across the twelve-second delay phase. Interestingly, parts of this brain network were previously shown to also contain information of visuospatial WM. Also, by specifically testing somatosensory regions for WM representations, we observed content-specific activation patterns in primary somatosensory cortex (SI). Our findings demonstrate that tactile WM depends on a distributed network of brain regions in analogy to the representation of visuospatial information. Copyright © 2018. Published by Elsevier Inc.
Groen, Iris Ia; Greene, Michelle R; Baldassano, Christopher; Fei-Fei, Li; Beck, Diane M; Baker, Chris I
2018-03-07
Inherent correlations between visual and semantic features in real-world scenes make it difficult to determine how different scene properties contribute to neural representations. Here, we assessed the contributions of multiple properties to scene representation by partitioning the variance explained in human behavioral and brain measurements by three feature models whose inter-correlations were minimized a priori through stimulus preselection. Behavioral assessments of scene similarity reflected unique contributions from a functional feature model indicating potential actions in scenes as well as high-level visual features from a deep neural network (DNN). In contrast, similarity of cortical responses in scene-selective areas was uniquely explained by mid- and high-level DNN features only, while an object label model did not contribute uniquely to either domain. The striking dissociation between functional and DNN features in their contribution to behavioral and brain representations of scenes indicates that scene-selective cortex represents only a subset of behaviorally relevant scene information.
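A toy version of the variance-partitioning logic, assuming random placeholder feature matrices for the functional, DNN and object-label models: the unique contribution of one model is estimated as the drop in cross-validated R^2 when that model is left out of the full regression.

    # Variance partitioning via nested cross-validated regressions.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(7)
    n_scenes = 60
    functional = rng.normal(0, 1, (n_scenes, 5))
    dnn        = rng.normal(0, 1, (n_scenes, 8))
    objects    = rng.normal(0, 1, (n_scenes, 6))
    behavior   = functional @ rng.normal(0, 1, 5) + dnn @ rng.normal(0, 1, 8) \
                 + rng.normal(0, 2, n_scenes)     # similarity measure to explain

    def cv_r2(X, y):
        return cross_val_score(LinearRegression(), X, y, cv=5, scoring='r2').mean()

    full = cv_r2(np.hstack([functional, dnn, objects]), behavior)
    unique_functional = full - cv_r2(np.hstack([dnn, objects]), behavior)
    print("unique variance of functional model:", round(unique_functional, 3))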
Kozunov, Vladimir; Nikolaeva, Anastasia; Stroganova, Tatiana A.
2018-01-01
The brain mechanisms that integrate the separate features of sensory input into a meaningful percept depend upon the prior experience of interaction with the object and differ between categories of objects. Recent studies using representational similarity analysis (RSA) have characterized either the spatial patterns of brain activity for different categories of objects or described how category structure in neuronal representations emerges in time, but never simultaneously. Here we applied a novel, region-based, multivariate pattern classification approach in combination with RSA to magnetoencephalography data to extract activity associated with qualitatively distinct processing stages of visual perception. We asked participants to name what they see whilst viewing bitonal visual stimuli of two categories predominantly shaped by either value-dependent or sensorimotor experience, namely faces and tools, and meaningless images. We aimed to disambiguate the spatiotemporal patterns of brain activity between the meaningful categories and determine which differences in their processing were attributable to either perceptual categorization per se, or later-stage mentalizing-related processes. We have extracted three stages of cortical activity corresponding to low-level processing, category-specific feature binding, and supra-categorical processing. All face-specific spatiotemporal patterns were associated with bilateral activation of ventral occipito-temporal areas during the feature binding stage at 140–170 ms. The tool-specific activity was found both within the categorization stage and in a later period not thought to be associated with binding processes. The tool-specific binding-related activity was detected within a 210–220 ms window and was located to the intraparietal sulcus of the left hemisphere. Brain activity common for both meaningful categories started at 250 ms and included widely distributed assemblies within parietal, temporal, and prefrontal regions. Furthermore, we hypothesized and tested whether activity within face and tool-specific binding-related patterns would demonstrate oppositely acting effects following procedural perceptual learning. We found that activity in the ventral, face-specific network increased following the stimuli repetition. In contrast, tool processing in the dorsal network adapted by reducing its activity over the repetition period. Altogether, we have demonstrated that activity associated with visual processing of faces and tools during the categorization stage differ in processing timing, brain areas involved, and in their dynamics underlying stimuli learning. PMID:29379426
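The RSA step common to studies like this one can be sketched in a few lines: build representational dissimilarity matrices (RDMs) from activity patterns and compare a neural RDM with a model RDM using Spearman correlation. The patterns below are simulated, not MEG data, and the region-based classification stage is not shown.

    # Condensed RDMs from activity patterns, compared with Spearman's rho.
    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    rng = np.random.default_rng(8)
    n_stimuli, n_sensors = 24, 100
    patterns = rng.normal(0, 1, (n_stimuli, n_sensors))   # one pattern per stimulus

    neural_rdm = pdist(patterns, metric='correlation')     # condensed neural RDM
    model_rdm = pdist(rng.normal(0, 1, (n_stimuli, 5)), metric='euclidean')

    rho, p = spearmanr(neural_rdm, model_rdm)
    print("model-to-brain RDM correlation:", round(rho, 3))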
Spinal cord injury affects the interplay between visual and sensorimotor representations of the body
Ionta, Silvio; Villiger, Michael; Jutzeler, Catherine R; Freund, Patrick; Curt, Armin; Gassert, Roger
2016-01-01
The brain integrates multiple sensory inputs, including somatosensory and visual inputs, to produce a representation of the body. Spinal cord injury (SCI) interrupts the communication between brain and body and the effects of this deafferentation on body representation are poorly understood. We investigated whether the relative weight of somatosensory and visual frames of reference for body representation is altered in individuals with incomplete or complete SCI (affecting lower limbs’ somatosensation), with respect to controls. To study the influence of afferent somatosensory information on body representation, participants verbally judged the laterality of rotated images of feet, hands, and whole-bodies (mental rotation task) in two different postures (participants’ body parts were hidden from view). We found that (i) complete SCI disrupts the influence of postural changes on the representation of the deafferented body parts (feet, but not hands) and (ii) regardless of posture, whole-body representation progressively deteriorates proportionally to SCI completeness. These results demonstrate that the cortical representation of the body is dynamic, responsive, and adaptable to contingent conditions, in that the role of somatosensation is altered and partially compensated with a change in the relative weight of somatosensory versus visual bodily representations. PMID:26842303
Keller, Peter E.; Novembre, Giacomo; Hove, Michael J.
2014-01-01
Human interaction often requires simultaneous precision and flexibility in the coordination of rhythmic behaviour between individuals engaged in joint activity, for example, playing a musical duet or dancing with a partner. This review article addresses the psychological processes and brain mechanisms that enable such rhythmic interpersonal coordination. First, an overview is given of research on the cognitive-motor processes that enable individuals to represent joint action goals and to anticipate, attend and adapt to other's actions in real time. Second, the neurophysiological mechanisms that underpin rhythmic interpersonal coordination are sought in studies of sensorimotor and cognitive processes that play a role in the representation and integration of self- and other-related actions within and between individuals' brains. Finally, relationships between social–psychological factors and rhythmic interpersonal coordination are considered from two perspectives, one concerning how social-cognitive tendencies (e.g. empathy) affect coordination, and the other concerning how coordination affects interpersonal affiliation, trust and prosocial behaviour. Our review highlights musical ensemble performance as an ecologically valid yet readily controlled domain for investigating rhythm in joint action. PMID:25385772
Differential Encoding of Time by Prefrontal and Striatal Network Dynamics.
Bakhurin, Konstantin I; Goudar, Vishwa; Shobe, Justin L; Claar, Leslie D; Buonomano, Dean V; Masmanidis, Sotiris C
2017-01-25
Telling time is fundamental to many forms of learning and behavior, including the anticipation of rewarding events. Although the neural mechanisms underlying timing remain unknown, computational models have proposed that the brain represents time in the dynamics of neural networks. Consistent with this hypothesis, dynamically changing patterns of neural activity in a number of brain areas, including the striatum and cortex, have been shown to encode elapsed time. To date, however, no studies have explicitly quantified and contrasted how well different areas encode time by recording large numbers of units simultaneously from more than one area. Here, we performed large-scale extracellular recordings in the striatum and orbitofrontal cortex of mice that learned the temporal relationship between a stimulus and a reward and reported their response with anticipatory licking. We used a machine-learning algorithm to quantify how well populations of neurons encoded elapsed time from stimulus onset. Both the striatal and cortical networks encoded time, but the striatal network outperformed the orbitofrontal cortex, a finding replicated in both simultaneously and nonsimultaneously recorded corticostriatal datasets. The striatal network was also more reliable in predicting when the animals would lick up to ∼1 s before the actual lick occurred. Our results are consistent with the hypothesis that temporal information is encoded in a widely distributed manner throughout multiple brain areas, but that the striatum may have a privileged role in timing because it has a more accurate "clock" as it integrates information across multiple cortical areas. The neural representation of time is thought to be distributed across multiple functionally specialized brain structures, including the striatum and cortex. However, until now, the neural code for time has not been compared quantitatively between these areas. Here, we performed large-scale recordings in the striatum and orbitofrontal cortex of mice trained on a stimulus-reward association task involving a delay period and used a machine-learning algorithm to quantify how well populations of simultaneously recorded neurons encoded elapsed time from stimulus onset. We found that, although both areas encoded time, the striatum consistently outperformed the orbitofrontal cortex. These results suggest that the striatum may refine the code for time by integrating information from multiple inputs. Copyright © 2017 the authors 0270-6474/17/370854-17$15.00/0.
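A simplified stand-in for the time-decoding analysis (the study's own machine-learning pipeline is not specified here): predict the elapsed-time bin from simulated population spike counts with multinomial logistic regression and compare accuracy to chance.

    # Decode elapsed-time bins from simulated population spike counts.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(9)
    n_trials, n_neurons, n_bins = 100, 80, 10
    tuning = rng.normal(0, 1, (n_bins, n_neurons))          # time-bin tuning
    X = np.vstack([rng.poisson(np.exp(tuning[b]))            # spike counts
                   for _ in range(n_trials) for b in range(n_bins)])
    y = np.tile(np.arange(n_bins), n_trials)

    clf = LogisticRegression(max_iter=1000)
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print("time-bin decoding accuracy:", round(acc, 2), "chance:", 1 / n_bins)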
Invisible Brain: Knowledge in Research Works and Neuron Activity.
Segev, Aviv; Curtis, Dorothy; Jung, Sukhwan; Chae, Suhyun
2016-01-01
If the market has an invisible hand, does knowledge creation and representation have an "invisible brain"? While knowledge is viewed as a product of neuron activity in the brain, can we identify knowledge that is outside the brain but reflects the activity of neurons in the brain? This work suggests that the patterns of neuron activity in the brain can be seen in the representation of knowledge-related activity. Here we show that the neuron activity mechanism seems to represent much of the knowledge learned in the past decades based on published articles, in what can be viewed as an "invisible brain" or collective hidden neural networks. Similar results appear when analyzing knowledge activity in patents. Our work also tries to characterize knowledge increase as neuron network activity growth. The results propose that knowledge-related activity can be seen outside of the neuron activity mechanism. Consequently, knowledge might exist as an independent mechanism.
Cohen-Adad, Julien; Marchand-Pauvert, Veronique; Benali, Habib; Doyon, Julien
2015-01-01
The spinal cord participates in the execution of skilled movements by translating high-level cerebral motor representations into musculotopic commands. Yet, the extent to which motor skill acquisition relies on intrinsic spinal cord processes remains unknown. To date, attempts to address this question were limited by difficulties in separating spinal local effects from supraspinal influences through traditional electrophysiological and neuroimaging methods. Here, for the first time, we provide evidence for local learning-induced plasticity in intact human spinal cord through simultaneous functional magnetic resonance imaging of the brain and spinal cord during motor sequence learning. Specifically, we show learning-related modulation of activity in the C6–C8 spinal region, which is independent from that of related supraspinal sensorimotor structures. Moreover, a brain–spinal cord functional connectivity analysis demonstrates that the initial linear relationship between the spinal cord and sensorimotor cortex gradually fades away over the course of motor sequence learning, while the connectivity between spinal activity and cerebellum gains strength. These data suggest that the spinal cord not only constitutes an active functional component of the human motor learning network but also contributes distinctively from the brain to the learning process. The present findings open new avenues for rehabilitation of patients with spinal cord injuries, as they demonstrate that this part of the central nervous system is much more plastic than assumed before. Yet, the neurophysiological mechanisms underlying this intrinsic functional plasticity in the spinal cord warrant further investigations. PMID:26125597
A brain-machine interface enables bimanual arm movements in monkeys.
Ifft, Peter J; Shokur, Solaiman; Li, Zheng; Lebedev, Mikhail A; Nicolelis, Miguel A L
2013-11-06
Brain-machine interfaces (BMIs) are artificial systems that aim to restore sensation and movement to paralyzed patients. So far, BMIs have enabled only one arm to be moved at a time. Control of bimanual arm movements remains a major challenge. We have developed and tested a bimanual BMI that enables rhesus monkeys to control two avatar arms simultaneously. The bimanual BMI was based on the extracellular activity of 374 to 497 neurons recorded from several frontal and parietal cortical areas of both cerebral hemispheres. Cortical activity was transformed into movements of the two arms with a decoding algorithm called a fifth-order unscented Kalman filter (UKF). The UKF was trained either during a manual task performed with two joysticks or by having the monkeys passively observe the movements of avatar arms. Most cortical neurons changed their modulation patterns when both arms were engaged simultaneously. Representing the two arms jointly in a single UKF decoder resulted in improved decoding performance compared with using separate decoders for each arm. As the animals' performance in bimanual BMI control improved over time, we observed widespread plasticity in frontal and parietal cortical areas. Neuronal representation of the avatar and reach targets was enhanced with learning, whereas pairwise correlations between neurons initially increased and then decreased. These results suggest that cortical networks may assimilate the two avatar arms through BMI control. These findings should help in the design of more sophisticated BMIs capable of enabling bimanual motor control in human patients.
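The decoding principle can be illustrated with a standard linear Kalman filter relating firing rates to 2-D position, in place of the fifth-order unscented Kalman filter actually used in the study; all matrices, signals and dimensions below are invented for illustration.

    # Simplified linear Kalman-filter decoder (not the UKF from the study).
    import numpy as np

    rng = np.random.default_rng(10)
    n_neurons, n_steps = 50, 200
    A = np.eye(2)                                  # state transition (position)
    W = 0.01 * np.eye(2)                           # process noise covariance
    H = rng.normal(0, 1, (n_neurons, 2))           # neural observation model
    Q = np.eye(n_neurons)                          # observation noise covariance

    true_pos = np.cumsum(rng.normal(0, 0.1, (n_steps, 2)), axis=0)
    rates = true_pos @ H.T + rng.normal(0, 1, (n_steps, n_neurons))

    x, P = np.zeros(2), np.eye(2)                  # state estimate and covariance
    decoded = []
    for z in rates:
        x, P = A @ x, A @ P @ A.T + W              # predict
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + Q)   # Kalman gain
        x = x + K @ (z - H @ x)                    # update with firing rates
        P = (np.eye(2) - K @ H) @ P
        decoded.append(x.copy())
    decoded = np.array(decoded)
    print("mean decoding error:", np.mean(np.linalg.norm(decoded - true_pos, axis=1)))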
2016-05-01
Report on sparse-representation methods for the classification of acoustic transients under large but correlated noise and signal interference (i.e., low-rank interference), with an additional contribution on deep learning. Recoverable section titles: Classification of Acoustic Transients; Joint Sparse Representation with Low-Rank Interference; Simultaneous Group-and-Joint Sparse Representation. Keywords: sparse representation, low rank, deep learning.
Doctor, Teacher, and Stethoscope: Neural Representation of Different Types of Semantic Relations.
Xu, Yangwen; Wang, Xiaosha; Wang, Xiaoying; Men, Weiwei; Gao, Jia-Hong; Bi, Yanchao
2018-03-28
Concepts can be related in many ways. They can belong to the same taxonomic category (e.g., "doctor" and "teacher," both in the category of people) or be associated with the same event context (e.g., "doctor" and "stethoscope," both associated with medical scenarios). How are these two major types of semantic relations coded in the brain? We constructed stimuli from three taxonomic categories (people, manmade objects, and locations) and three thematic categories (school, medicine, and sports) and investigated the neural representations of these two dimensions using representational similarity analyses in human participants (10 men and nine women). In specific regions of interest, the left anterior temporal lobe (ATL) and the left temporoparietal junction (TPJ), we found that, whereas both areas had significant effects of taxonomic information, the taxonomic relations had stronger effects in the ATL than in the TPJ ("doctor" and "teacher" closer in ATL neural activity), with the reverse being true for thematic relations ("doctor" and "stethoscope" closer in TPJ neural activity). A whole-brain searchlight analysis revealed that widely distributed regions, mainly in the left hemisphere, represented the taxonomic dimension. Interestingly, the significant effects of the thematic relations were only observed after the taxonomic differences were controlled for in the left TPJ, the right superior lateral occipital cortex, and other frontal, temporal, and parietal regions. In summary, taxonomic grouping is a primary organizational dimension across distributed brain regions, with thematic grouping further embedded within such taxonomic structures. SIGNIFICANCE STATEMENT How are concepts organized in the brain? It is well established that concepts belonging to the same taxonomic categories (e.g., "doctor" and "teacher") share neural representations in specific brain regions. How concepts are associated in other manners (e.g., "doctor" and "stethoscope," which are thematically related) remains poorly understood. We used representational similarity analyses to unravel the neural representations of these different types of semantic relations by testing the same set of words that could be differently grouped by taxonomic categories or by thematic categories. We found that widely distributed brain areas primarily represented taxonomic categories, with the thematic categories further embedded within the taxonomic structure. Copyright © 2018 the authors 0270-6474/18/383303-15$15.00/0.
Kumar, Manoj; Federmeier, Kara D; Fei-Fei, Li; Beck, Diane M
2017-07-15
A long-standing core question in cognitive science is whether different modalities and representation types (pictures, words, sounds, etc.) access a common store of semantic information. Although different input types have been shown to activate a shared network of brain regions, this does not necessitate that there is a common representation, as the neurons in these regions could still differentially process the different modalities. However, multi-voxel pattern analysis can be used to assess whether, e.g., pictures and words evoke a similar pattern of activity, such that the patterns that separate categories in one modality transfer to the other. Prior work using this method has found support for a common code, but has two limitations: they have either only examined disparate categories (e.g. animals vs. tools) that are known to activate different brain regions, raising the possibility that the pattern separation and inferred similarity reflects only large scale differences between the categories or they have been limited to individual object representations. By using natural scene categories, we not only extend the current literature on cross-modal representations beyond objects, but also, because natural scene categories activate a common set of brain regions, we identify a more fine-grained (i.e. higher spatial resolution) common representation. Specifically, we studied picture- and word-based representations of natural scene stimuli from four different categories: beaches, cities, highways, and mountains. Participants passively viewed blocks of either phrases (e.g. "sandy beach") describing scenes or photographs from those same scene categories. To determine whether the phrases and pictures evoke a common code, we asked whether a classifier trained on one stimulus type (e.g. phrase stimuli) would transfer (i.e. cross-decode) to the other stimulus type (e.g. picture stimuli). The analysis revealed cross-decoding in the occipitotemporal, posterior parietal and frontal cortices. This similarity of neural activity patterns across the two input types, for categories that co-activate local brain regions, provides strong evidence of a common semantic code for pictures and words in the brain. Copyright © 2017 Elsevier Inc. All rights reserved.
Tool-use: An open window into body representation and its plasticity
Martel, Marie; Cardinali, Lucilla; Roy, Alice C.; Farnè, Alessandro
2016-01-01
Over the last decades, scientists have questioned the origin of the exquisite human mastery of tools. Seminal studies in monkeys, healthy participants and brain-damaged patients have primarily focused on the plastic changes that tool-use induces on spatial representations. More recently, we focused on the modifications tool-use must exert on the sensorimotor system and highlighted plastic changes at the level of the body representation used by the brain to control our movements, i.e., the Body Schema. Evidence is emerging for tool-use to affect also more visually and conceptually based representations of the body, such as the Body Image. Here we offer a critical review of the way different tool-use paradigms have been, and should be, used to try disentangling the critical features that are responsible for tool incorporation into different body representations. We will conclude that tool-use may offer a very valuable means to investigate high-order body representations and their plasticity. PMID:27315277
Eye movement-invariant representations in the human visual system.
Nishimoto, Shinji; Huth, Alexander G; Bilenko, Natalia Y; Gallant, Jack L
2017-01-01
During natural vision, humans make frequent eye movements but perceive a stable visual world. It is therefore likely that the human visual system contains representations of the visual world that are invariant to eye movements. Here we present an experiment designed to identify visual areas that might contain eye-movement-invariant representations. We used functional MRI to record brain activity from four human subjects who watched natural movies. In one condition subjects were required to fixate steadily, and in the other they were allowed to freely make voluntary eye movements. The movies used in each condition were identical. We reasoned that the brain activity recorded in a visual area that is invariant to eye movement should be similar under fixation and free viewing conditions. In contrast, activity in a visual area that is sensitive to eye movement should differ between fixation and free viewing. We therefore measured the similarity of brain activity across repeated presentations of the same movie within the fixation condition, and separately between the fixation and free viewing conditions. The ratio of these measures was used to determine which brain areas are most likely to contain eye movement-invariant representations. We found that voxels located in early visual areas are strongly affected by eye movements, while voxels in ventral temporal areas are only weakly affected by eye movements. These results suggest that the ventral temporal visual areas contain a stable representation of the visual world that is invariant to eye movements made during natural vision.
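The similarity-ratio logic can be sketched as follows, with simulated responses standing in for the fMRI data; in this toy setting, voxels whose across-condition similarity approaches their within-fixation similarity (ratio near 1) would count as eye-movement invariant.

    # Voxel-wise ratio of fixation-vs-free-viewing similarity to
    # fixation-repeat similarity. Responses are simulated.
    import numpy as np

    rng = np.random.default_rng(11)
    n_timepoints, n_voxels = 300, 100
    shared = rng.normal(0, 1, (n_timepoints, n_voxels))      # movie-driven signal
    fix_run1 = shared + rng.normal(0, 0.5, shared.shape)
    fix_run2 = shared + rng.normal(0, 0.5, shared.shape)
    free_run = shared + rng.normal(0, 0.5, shared.shape)     # free viewing

    def voxelwise_corr(a, b):
        a = (a - a.mean(0)) / a.std(0)
        b = (b - b.mean(0)) / b.std(0)
        return (a * b).mean(0)

    within = voxelwise_corr(fix_run1, fix_run2)
    across = voxelwise_corr(fix_run1, free_run)
    print("median invariance ratio:", np.median(across / within))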
NASA Astrophysics Data System (ADS)
Hramov, Alexander; Musatov, Vyacheslav Yu.; Runnova, Anastasija E.; Efremova, Tatiana Yu.; Koronovskii, Alexey A.; Pisarchik, Alexander N.
2018-04-01
In this paper we propose an approach based on artificial neural networks for the recognition of different human brain states associated with distinct visual stimuli. Based on the developed numerical technique and the analysis of the experimental multichannel EEG data obtained, we optimize the spatiotemporal representation of multichannel EEG to provide close to 97% accuracy in recognition of the EEG brain states during visual perception. Different interpretations of an ambiguous image produce different oscillatory patterns in the human EEG, with similar features for every interpretation. Since these features are inherent to all subjects, a single artificial network can classify the associated brain states of other subjects with high accuracy.
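A small multilayer-perceptron classifier on per-trial spatiotemporal feature vectors illustrates the kind of network described above; the features, labels and injected class difference are simulated, so the accuracy it prints has nothing to do with the ~97% figure reported by the study.

    # Toy ANN classifier of EEG "brain states" from spatiotemporal features.
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(12)
    n_trials, n_features = 400, 64 * 8            # e.g. 64 channels x 8 time windows
    X = rng.normal(0, 1, (n_trials, n_features))
    y = rng.integers(0, 2, n_trials)              # two perceptual interpretations
    X[y == 1, :32] += 0.5                         # inject a weak class difference

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
    net.fit(X_tr, y_tr)
    print("held-out accuracy:", net.score(X_te, y_te))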
Jiang, Xi; Zhang, Xin; Zhu, Dajiang
2014-10-01
Alzheimer's disease (AD) is the most common type of dementia (accounting for 60% to 80% of cases) and is the fifth leading cause of death for people who are 65 or older. By 2050, one new case of AD in the United States is expected to develop every 33 sec. Unfortunately, there is no effective treatment available that can stop or slow the death of neurons that causes AD symptoms. On the other hand, it is widely believed that AD starts before development of the associated symptoms, so its prestages, including mild cognitive impairment (MCI) or even significant memory concern (SMC), have received increasing attention, not only because of their potential as a precursor of AD, but also as a possible predictor of conversion to other neurodegenerative diseases. Although these prestages have been defined clinically, accurate and efficient diagnosis is still challenging. Moreover, the brain functional abnormalities behind those alterations and conversions are still unclear. In this article, by developing novel sparse representations of whole-brain resting-state functional magnetic resonance imaging signals and by using the most recent Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, we successfully identified multiple functional components simultaneously, which potentially represent the intrinsic functional networks involved in resting-state activity. Interestingly, these identified functional components contain all the resting-state networks obtained from traditional independent-component analysis. Moreover, using the features derived from those functional components yields high classification accuracy for both AD (94%) and MCI (92%) versus normal controls. Even for SMC, the accuracy remains 92%.
Cichy, Radoslaw Martin; Teng, Santani
2017-01-01
In natural environments, visual and auditory stimulation elicit responses across a large set of brain regions in a fraction of a second, yielding representations of the multimodal scene and its properties. The rapid and complex neural dynamics underlying visual and auditory information processing pose major challenges to human cognitive neuroscience. Brain signals measured non-invasively are inherently noisy, the format of neural representations is unknown, and transformations between representations are complex and often nonlinear. Further, no single non-invasive brain measurement technique provides a spatio-temporally integrated view. In this opinion piece, we argue that progress can be made by a concerted effort based on three pillars of recent methodological development: (i) sensitive analysis techniques such as decoding and cross-classification, (ii) complex computational modelling using models such as deep neural networks, and (iii) integration across imaging methods (magnetoencephalography/electroencephalography, functional magnetic resonance imaging) and models, e.g. using representational similarity analysis. We showcase two recent efforts that have been undertaken in this spirit and provide novel results about visual and auditory scene analysis. Finally, we discuss the limits of this perspective and sketch a concrete roadmap for future research. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044019
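The third pillar, integration across imaging methods via representational similarity analysis, can be sketched as follows; the condition counts, pattern sizes, and data are hypothetical.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# Hypothetical condition-by-feature patterns for the same stimuli measured
# with two methods (e.g., MEG sensor patterns and fMRI voxel patterns).
rng = np.random.default_rng(3)
n_conditions = 20
meg_patterns = rng.standard_normal((n_conditions, 306))
fmri_patterns = rng.standard_normal((n_conditions, 5000))

# Representational dissimilarity matrices (condensed form, 1 - correlation).
rdm_meg = pdist(meg_patterns, metric="correlation")
rdm_fmri = pdist(fmri_patterns, metric="correlation")

# RSA links the two methods by rank-correlating their RDMs.
rho, p = spearmanr(rdm_meg, rdm_fmri)
print(f"MEG-fMRI representational similarity: rho={rho:.3f} (p={p:.3f})")
```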
A brain-based account of “basic-level” concepts
Bauer, Andrew James; Just, Marcel Adam
2017-01-01
This study provides a brain-based account of how object concepts at an intermediate (basic) level of specificity are represented, offering an enriched view of what it means for a concept to be a basic-level concept, a research topic pioneered by Rosch and others (Rosch et al., 1976). Applying machine learning techniques to fMRI data, it was possible to determine the semantic content encoded in the neural representations of object concepts at basic and subordinate levels of abstraction. The representation of basic-level concepts (e.g. bird) was spatially broad, encompassing sensorimotor brain areas that encode concrete object properties, and also language and heteromodal integrative areas that encode abstract semantic content. The representation of subordinate-level concepts (robin) was less widely distributed, concentrated in perceptual areas that underlie concrete content. Furthermore, basic-level concepts were representative of their subordinates in that they were neurally similar to their typical but not atypical subordinates (bird was neurally similar to robin but not woodpecker). The findings provide a brain-based account of the advantages that basic-level concepts enjoy in everyday life over subordinate-level concepts: the basic level is a broad topographical representation that encompasses both concrete and abstract semantic content, reflecting the multifaceted yet intuitive meaning of basic-level concepts. PMID:28826947
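The neural-similarity comparison between a basic-level concept and its typical versus atypical subordinates reduces, in the simplest case, to correlating activation patterns; the sketch below uses synthetic voxel patterns and is only meant to illustrate that comparison, not the study's machine learning pipeline.

```python
import numpy as np

def pattern_similarity(a, b):
    """Pearson correlation between two activation patterns (1-D voxel vectors)."""
    return float(np.corrcoef(a, b)[0, 1])

# Synthetic mean activation patterns for a basic-level concept and for a
# typical and an atypical subordinate (2000 hypothetical voxels).
rng = np.random.default_rng(4)
bird = rng.standard_normal(2000)
robin = bird + 0.5 * rng.standard_normal(2000)     # typical: shares structure
woodpecker = rng.standard_normal(2000)             # atypical: little shared structure

print("bird vs robin:     ", pattern_similarity(bird, robin))
print("bird vs woodpecker:", pattern_similarity(bird, woodpecker))
```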
A brain-based account of "basic-level" concepts.
Bauer, Andrew James; Just, Marcel Adam
2017-11-01
This study provides a brain-based account of how object concepts at an intermediate (basic) level of specificity are represented, offering an enriched view of what it means for a concept to be a basic-level concept, a research topic pioneered by Rosch and others (Rosch et al., 1976). Applying machine learning techniques to fMRI data, it was possible to determine the semantic content encoded in the neural representations of object concepts at basic and subordinate levels of abstraction. The representation of basic-level concepts (e.g. bird) was spatially broad, encompassing sensorimotor brain areas that encode concrete object properties, and also language and heteromodal integrative areas that encode abstract semantic content. The representation of subordinate-level concepts (robin) was less widely distributed, concentrated in perceptual areas that underlie concrete content. Furthermore, basic-level concepts were representative of their subordinates in that they were neurally similar to their typical but not atypical subordinates (bird was neurally similar to robin but not woodpecker). The findings provide a brain-based account of the advantages that basic-level concepts enjoy in everyday life over subordinate-level concepts: the basic level is a broad topographical representation that encompasses both concrete and abstract semantic content, reflecting the multifaceted yet intuitive meaning of basic-level concepts. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Priatna, Nanang
2017-08-01
The use of Information and Communication Technology (ICT) in mathematics instruction will help students build conceptual understanding. One of the software products used in mathematics instruction is GeoGebra. The program enables simple visualization of complex geometric concepts and helps improve students' understanding of them. Instruction that applies brain-based learning principles is oriented toward naturally harnessing the brain's potential, enabling students to build their own knowledge. One of the goals of mathematics instruction in school is to develop mathematical communication ability. Mathematical representation is regarded as part of mathematical communication: a description, expression, symbolization, or modeling of mathematical ideas and concepts as an attempt to clarify meanings or seek solutions to the problems students encounter. This research aims to develop a learning model and teaching materials that apply the principles of brain-based learning aided by GeoGebra to improve junior high school students' mathematical representation ability. It adopted a quasi-experimental method with a non-randomized control group pretest-posttest design and a 2x3 factorial model. Based on the analysis of the data, the increase in the mathematical representation ability of students taught with brain-based learning principles aided by GeoGebra was greater than that of students given conventional instruction, both as a whole and based on the categories of students' initial mathematical ability.
ERIC Educational Resources Information Center
Hyde, Daniel C.; Spelke, Elizabeth S.
2009-01-01
Behavioral and brain imaging research indicates that human infants, human adults, and many nonhuman animals represent large nonsymbolic numbers approximately, discriminating between sets with a ratio limit on accuracy. Some behavioral evidence, especially with human infants, suggests that these representations differ from representations of small…
Sukhinin, Dmitrii I.; Engel, Andreas K.; Manger, Paul; Hilgetag, Claus C.
2016-01-01
Databases of structural connections of the mammalian brain, such as CoCoMac (cocomac.g-node.org) or BAMS (https://bams1.org), are valuable resources for the analysis of brain connectivity and the modeling of brain dynamics in species such as the non-human primate or the rodent, and have also contributed to the computational modeling of the human brain. Another animal model that is widely used in electrophysiological or developmental studies is the ferret; however, no systematic compilation of brain connectivity is currently available for this species. Thus, we have started developing a database of anatomical connections and architectonic features of the ferret brain, the Ferret(connect)ome, www.Ferretome.org. The Ferretome database has adapted essential features of the CoCoMac methodology and legacy, such as the CoCoMac data model. This data model was simplified and extended in order to accommodate new data modalities that were not represented previously, such as the cytoarchitecture of brain areas. The Ferretome uses a semantic parcellation of brain regions as well as a logical brain map transformation algorithm (objective relational transformation, ORT). The ORT algorithm was also adopted for the transformation of architecture data. The database is being developed in MySQL and has been populated with literature reports on tract-tracing observations in the ferret brain using a custom-designed web interface that allows efficient and validated simultaneous input and proofreading by multiple curators. The database is equipped with a non-specialist web interface. This interface can be extended to produce connectivity matrices in several formats, including a graphical representation superimposed on established ferret brain maps. An important feature of the Ferretome database is the possibility to trace back entries in connectivity matrices to the original studies archived in the system. Currently, the Ferretome contains 50 reports on connections comprising 20 injection reports with more than 150 labeled source and target areas, the majority reflecting connectivity of subcortical nuclei and 15 descriptions of regional brain architecture. We hope that the Ferretome database will become a useful resource for neuroinformatics and neural modeling, and will support studies of the ferret brain as well as facilitate advances in comparative studies of mesoscopic brain connectivity. PMID:27242503
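As a toy illustration of how tract-tracing reports can be aggregated into the kind of connectivity matrix such a database exports, consider the sketch below; the area names, strengths, and aggregation rule (maximum reported strength) are hypothetical and do not reflect the Ferretome schema.

```python
import pandas as pd

# Hypothetical tract-tracing reports: injection (source) area, labeled
# (target) area, and an ordinal connection strength.
reports = pd.DataFrame(
    [("PPC", "S1", 2), ("PPC", "V1", 1), ("S1", "M1", 3),
     ("V1", "PPC", 2), ("M1", "S1", 1)],
    columns=["source", "target", "strength"],
)

# Aggregate reports into a source-by-target connectivity matrix, keeping the
# strongest reported label per connection.
areas = sorted(set(reports["source"]) | set(reports["target"]))
matrix = pd.DataFrame(0, index=areas, columns=areas)
for row in reports.itertuples(index=False):
    matrix.loc[row.source, row.target] = max(matrix.loc[row.source, row.target],
                                             row.strength)
print(matrix)
```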
Neural basis for dynamic updating of object representation in visual working memory.
Takahama, Sachiko; Miyauchi, Satoru; Saiki, Jun
2010-02-15
In the real world, objects have multiple features and change dynamically. Thus, object representations must satisfy dynamic updating and feature binding. Previous studies have investigated the neural activity of dynamic updating or feature binding alone, but not both simultaneously. We investigated the neural basis of feature-bound object representation in a dynamically updating situation by conducting a multiple object permanence tracking task, which required observers to simultaneously process both the maintenance and dynamic updating of feature-bound objects. Using an event-related design, we separated activities during memory maintenance and change detection. In the search for regions showing selective activation in dynamic updating of feature-bound objects, we identified a network during memory maintenance that comprised the inferior precentral sulcus, superior parietal lobule, and middle frontal gyrus. In the change detection period, various prefrontal regions, including the anterior prefrontal cortex, were activated. In updating object representations of dynamically moving objects, the inferior precentral sulcus closely cooperates with a so-called "frontoparietal network", and subregions of the frontoparietal network can be decomposed into those sensitive to spatial updating and feature binding. The anterior prefrontal cortex identifies changes in object representation by comparing memory and perceptual representations rather than maintaining object representations per se, as previously suggested. Copyright 2009 Elsevier Inc. All rights reserved.
Unaware Processing of Tools in the Neural System for Object-Directed Action Representation.
Tettamanti, Marco; Conca, Francesca; Falini, Andrea; Perani, Daniela
2017-11-01
The hypothesis that the brain constitutively encodes observed manipulable objects for the actions they afford is still debated. Yet, crucial evidence demonstrating that, even in the absence of perceptual awareness, the mere visual appearance of a manipulable object triggers a visuomotor coding in the action representation system including the premotor cortex, has hitherto not been provided. In this fMRI study, we instantiated reliable unaware visual perception conditions by means of continuous flash suppression, and we tested in 24 healthy human participants (13 females) whether the visuomotor object-directed action representation system that includes left-hemispheric premotor, parietal, and posterior temporal cortices is activated even under subliminal perceptual conditions. We found consistent activation in the target visuomotor cortices, both with and without perceptual awareness, specifically for pictures of manipulable versus non-manipulable objects. By means of a multivariate searchlight analysis, we also found that the brain activation patterns in this visuomotor network enabled the decoding of manipulable versus non-manipulable object picture processing, both with and without awareness. These findings demonstrate the intimate neural coupling between visual perception and motor representation that underlies manipulable object processing: manipulable object stimuli specifically engage the visuomotor object-directed action representation system, in a constitutive manner that is independent from perceptual awareness. This perceptuo-motor coupling endows the brain with an efficient mechanism for monitoring and planning reactions to external stimuli in the absence of awareness. SIGNIFICANCE STATEMENT Our brain constantly encodes the visual information that hits the retina, leading to a stimulus-specific activation of sensory and semantic representations, even for objects that we do not consciously perceive. Do these unconscious representations encompass the motor programming of actions that could be accomplished congruently with the objects' functions? In this fMRI study, we instantiated unaware visual perception conditions, by dynamically suppressing the visibility of manipulable object pictures with mondrian masks. Despite escaping conscious perception, manipulable objects activated an object-directed action representation system that includes left-hemispheric premotor, parietal, and posterior temporal cortices. This demonstrates that visuomotor encoding occurs independently of conscious object perception. Copyright © 2017 the authors 0270-6474/17/3710712-13$15.00/0.
Communication, concepts and grounding.
van der Velde, Frank
2015-02-01
This article discusses the relation between communication and conceptual grounding. In the brain, neurons, circuits and brain areas are involved in the representation of a concept, grounding it in perception and action. In terms of grounding we can distinguish between communication within the brain and communication between humans or between humans and machines. In the first form of communication, a concept is activated by sensory input. Due to grounding, the information provided by this communication is not just determined by the sensory input but also by the outgoing connection structure of the conceptual representation, which is based on previous experiences and actions. The second form of communication, that between humans or between humans and machines, is influenced by the first form. In particular, a more successful interpersonal communication might require forms of situated cognition and interaction in which the entire representations of grounded concepts are involved. Copyright © 2014 Elsevier Ltd. All rights reserved.
Memory Retrieval in Mice and Men
Ben-Yakov, Aya; Dudai, Yadin; Mayford, Mark R.
2015-01-01
Retrieval, the use of learned information, was until recently mostly terra incognita in the neurobiology of memory, owing to shortage of research methods with the spatiotemporal resolution required to identify and dissect fast reactivation or reconstruction of complex memories in the mammalian brain. The development of novel paradigms, model systems, and new tools in molecular genetics, electrophysiology, optogenetics, in situ microscopy, and functional imaging, have contributed markedly in recent years to our ability to investigate brain mechanisms of retrieval. We review selected developments in the study of explicit retrieval in the rodent and human brain. The picture that emerges is that retrieval involves coordinated fast interplay of sparse and distributed corticohippocampal and neocortical networks that may permit permutational binding of representational elements to yield specific representations. These representations are driven largely by the activity patterns shaped during encoding, but are malleable, subject to the influence of time and interaction of the existing memory with novel information. PMID:26438596
Identifying bilingual semantic neural representations across languages
Buchweitz, Augusto; Shinkareva, Svetlana V.; Mason, Robert A.; Mitchell, Tom M.; Just, Marcel Adam
2015-01-01
The goal of the study was to identify the neural representation of a noun's meaning in one language based on the neural representation of that same noun in another language. Machine learning methods were used to train classifiers to identify which individual noun bilingual participants were thinking about in one language based solely on their brain activation in the other language. The study shows reliable (p < .05) pattern-based classification accuracies for the classification of brain activity for nouns across languages. It also shows that the stable voxels used to classify the brain activation were located in areas associated with encoding information about semantic dimensions of the words in the study. The identification of the semantic trace of individual nouns from the pattern of cortical activity demonstrates the existence of a multi-voxel pattern of activation across the cortex for a single noun common to both languages in bilinguals. PMID:21978845
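A schematic version of the cross-language identification analysis, with synthetic activation patterns and an off-the-shelf classifier standing in for the study's actual feature selection and classifier choices:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic patterns over "stable" voxels for the same nouns in two languages:
# 6 presentations of 7 nouns per language (labels give noun identity).
rng = np.random.default_rng(5)
n_nouns, n_vox = 7, 400
prototypes = rng.standard_normal((n_nouns, n_vox))

def make_runs():
    return np.vstack([prototypes + 0.8 * rng.standard_normal((n_nouns, n_vox))
                      for _ in range(6)])

X_l1, X_l2 = make_runs(), make_runs()
y = np.tile(np.arange(n_nouns), 6)

# Train on brain activity in one language, identify nouns in the other.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=2000))
acc_1to2 = clf.fit(X_l1, y).score(X_l2, y)
acc_2to1 = clf.fit(X_l2, y).score(X_l1, y)
print("cross-language identification accuracy:", (acc_1to2 + acc_2to1) / 2)
```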
Neural representations of close others in collectivistic brains
Wang, Gang; Mao, Lihua; Ma, Yina; Yang, Xuedong; Cao, Jingqian; Liu, Xi; Wang, Jinzhao; Wang, Xiaoying
2012-01-01
Our recent work showed that close relationships result in shared cognitive and neural representations of the self and one’s mother in collectivistic individuals (Zhu et al., 2007, Neuroimage, 34, 1310–7). However, it remains unknown whether close others, such as mother, father and best friend, are differentially represented in collectivistic brains. Here, using functional magnetic resonance imaging and a trait judgment task, we showed evidence that, while trait judgments of the self and mother generated comparable activity in the medial prefrontal cortex (MPFC) and anterior cingulate (ACC) of Chinese adults, trait judgments of mother induced greater MPFC/ACC activity than trait judgments of father and best friend. Our results suggest that, while neural representations of the self and mother overlapped in the MPFC/ACC, close others such as mother, father and best friend are unequally represented in the MPFC/ACC of collectivistic brains. PMID:21382966
Tani, Jun; Nishimoto, Ryunosuke; Paine, Rainer W
2008-05-01
The current paper examines how compositional structures can self-organize in given neuro-dynamical systems when robot agents are forced to learn multiple goal-directed behaviors simultaneously. Firstly, we propose a basic model accounting for the roles of parietal-premotor interactions for representing skills for goal-directed behaviors. The basic model had been implemented in a set of robotics experiments employing different neural network architectures. The comparative reviews among those experimental results address the issues of local vs distributed representations in representing behavior and the effectiveness of level structures associated with different sensory-motor articulation mechanisms. It is concluded that the compositional structures can be acquired "organically" by achieving generalization in learning and by capturing the contextual nature of skilled behaviors under specific conditions. Furthermore, the paper discusses possible feedback for empirical neuroscience studies in the future.
ERIC Educational Resources Information Center
Ekdahl, Anna-Lena; Venkat, Hamsa; Runesson, Ulla
2016-01-01
In this article, we present a coding framework based on simultaneity and connections. The coding focuses on microlevel attention to three aspects of simultaneity and connections: between representations, within examples, and between examples. Criteria for coding that we viewed as mathematically important within part-whole additive relations…
The wheelchair as a full-body tool extending the peripersonal space
Galli, Giulia; Noel, Jean Paul; Canzoneri, Elisa; Blanke, Olaf; Serino, Andrea
2015-01-01
Dedicated multisensory mechanisms in the brain represent peripersonal space (PPS), a limited portion of space immediately surrounding the body. Previous studies have illustrated the malleability of PPS representation through hand-object interaction, showing that tool use extends the limits of the hand-centered PPS. In the present study we investigated the effects of a special tool, the wheelchair, in extending the action possibilities of the whole body. We used a behavioral measure to quantify the extension of the PPS around the body before and after Active (Experiment 1) and Passive (Experiment 2) training with a wheelchair and when participants were blindfolded (Experiment 3). Results suggest that a wheelchair-mediated passive exploration of far space extended PPS representation. This effect was specifically related to the possibility of receiving information from the environment through vision, since no extension effect was found when participants were blindfolded. Surprisingly, the active motor training did not induce any modification in PPS representation, probably because the wheelchair maneuver was demanding for non-expert users and thus they may have prioritized processing of information from close to the wheelchair rather than at far spatial locations. Our results suggest that plasticity in PPS representation after tool use seems not to strictly depend on active use of the tool itself, but is triggered by simultaneous processing of information from the body and the space where the body acts in the environment, which is more extended in the case of wheelchair use. These results contribute to our understanding of the mechanisms underlying body–environment interaction for developing and improving applications of assistive technological devices in different clinical populations. PMID:26042069
Garbarini, Francesca; Pia, Lorenzo
2013-11-05
When humans move both hands simultaneously, strong coupling effects arise and neither hand is able to perform independent actions. It has been suggested that such motor constraints are tightly linked to action representation rather than to movement execution. Hence, bimanual tasks can represent an ideal experimental tool to investigate internal motor representations in those neurological conditions in which the movement of one hand is impaired. Indeed, any effect on the "moving" (healthy) hand would be caused by the constraints imposed by the ongoing motor program of the "impaired" hand. Here, we review recent studies that successfully utilized the above-mentioned paradigms to investigate some types of productive motor behaviors in stroke patients. Specifically, bimanual tasks have been employed in left hemiplegic patients who report illusory movements of their contralesional limbs (anosognosia for hemiplegia). They have also been administered to patients affected by a specific monothematic delusion of body ownership, namely the belief that another person's arm and his/her voluntary action belong to them. In summary, the reviewed studies show that bimanual tasks are a simple and valuable experimental method apt to reveal information about the motor programs of a paralyzed limb. Therefore, they can be used to objectively examine the cognitive processes underpinning motor programming in patients with different delusions of motor behavior. Additionally, they also shed light on the mechanisms subserving bimanual coordination in the intact brain, suggesting that action representation might be sufficient to produce these effects.
Cecere, Roberto; Gross, Joachim; Willis, Ashleigh; Thut, Gregor
2017-01-01
In multisensory integration, processing in one sensory modality is enhanced by complementary information from other modalities. Intersensory timing is crucial in this process because only inputs reaching the brain within a restricted temporal window are perceptually bound. Previous research in the audiovisual field has investigated various features of the temporal binding window, revealing asymmetries in its size and plasticity depending on the leading input: auditory–visual (AV) or visual–auditory (VA). Here, we tested whether separate neuronal mechanisms underlie this AV–VA dichotomy in humans. We recorded high-density EEG while participants performed an audiovisual simultaneity judgment task including various AV–VA asynchronies and unisensory control conditions (visual-only, auditory-only) and tested whether AV and VA processing generate different patterns of brain activity. After isolating the multisensory components of AV–VA event-related potentials (ERPs) from the sum of their unisensory constituents, we ran a time-resolved topographical representational similarity analysis (tRSA) comparing the AV and VA ERP maps. Spatial cross-correlation matrices were built from real data to index the similarity between the AV and VA maps at each time point (500 ms window after stimulus) and then correlated with two alternative similarity model matrices: AVmaps = VAmaps versus AVmaps ≠ VAmaps. The tRSA results favored the AVmaps ≠ VAmaps model across all time points, suggesting that audiovisual temporal binding (indexed by synchrony perception) engages different neural pathways depending on the leading sense. The existence of such dual route supports recent theoretical accounts proposing that multiple binding mechanisms are implemented in the brain to accommodate different information parsing strategies in auditory and visual sensory systems. SIGNIFICANCE STATEMENT Intersensory timing is a crucial aspect of multisensory integration, determining whether and how inputs in one modality enhance stimulus processing in another modality. Our research demonstrates that evaluating synchrony of auditory-leading (AV) versus visual-leading (VA) audiovisual stimulus pairs is characterized by two distinct patterns of brain activity. This suggests that audiovisual integration is not a unitary process and that different binding mechanisms are recruited in the brain based on the leading sense. These mechanisms may be relevant for supporting different classes of multisensory operations, for example, auditory enhancement of visual attention (AV) and visual enhancement of auditory speech (VA). PMID:28450537
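A much-simplified stand-in for the time-resolved topographic comparison described above: at each time point the AV and VA scalp maps are correlated across electrodes, so that consistently low similarity would favor the "AV maps ≠ VA maps" account. The data and array sizes are synthetic, and this collapses the full tRSA model-comparison machinery into a single similarity curve.

```python
import numpy as np

# Synthetic multisensory ERP components (time points x electrodes) for
# auditory-leading (AV) and visual-leading (VA) pairs, assumed to be already
# corrected by subtracting the summed unisensory responses.
rng = np.random.default_rng(6)
n_time, n_elec = 250, 64
av_erp = rng.standard_normal((n_time, n_elec))
va_erp = rng.standard_normal((n_time, n_elec))

def spatial_corr(map_a, map_b):
    """Pearson correlation between two scalp topographies."""
    return float(np.corrcoef(map_a, map_b)[0, 1])

similarity = np.array([spatial_corr(av_erp[t], va_erp[t]) for t in range(n_time)])
print("mean AV-VA topographic similarity:", similarity.mean())
```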
Neuroscience of affect: Brain mechanisms of pleasure and displeasure
Berridge, Kent C.; Kringelbach, Morten L.
2013-01-01
Affective neuroscience aims to understand how affect (pleasure or displeasure) is created by brains. Progress is aided by recognizing that affect has both objective and subjective features. Those dual aspects reflect that affective reactions are generated by neural mechanisms, selected in evolution based on their real (objective) consequences for genetic fitness. We review evidence for neural representation of pleasure in the brain (gained largely from neuroimaging studies), and evidence for the causal generation of pleasure (gained largely from brain manipulation studies). We suggest that representation and causation may actually reflect somewhat separable neuropsychological functions. Representation reaches an apex in limbic regions of prefrontal cortex, especially orbitofrontal cortex, influencing decisions and affective regulation. Causation of core pleasure or liking reactions is much more subcortically weighted, and sometimes surprisingly localized. Pleasure liking is especially generated by restricted hedonic hotspot circuits in nucleus accumbens and ventral pallidum. Another example of localized valence generation, beyond hedonic hotspots, is an affective keyboard mechanism in nucleus accumbens for releasing intense motivations such as either positively-valenced desire and/or negatively-valenced dread. PMID:23375169
Just, Marcel Adam; Wang, Jing; Cherkassky, Vladimir L
2017-08-15
Although it has been possible to identify individual concepts from a concept's brain activation pattern, there have been significant obstacles to identifying a proposition from its fMRI signature. Here we demonstrate the ability to decode individual prototype sentences from readers' brain activation patterns, by using theory-driven regions of interest and semantic properties. It is possible to predict the fMRI brain activation patterns evoked by propositions and words which are entirely new to the model with reliably above-chance rank accuracy. The two core components implemented in the model that reflect the theory were the choice of intermediate semantic features and the brain regions associated with the neurosemantic dimensions. This approach also predicts the neural representation of object nouns across participants, studies, and sentence contexts. Moreover, we find that the neural representation of an agent-verb-object proto-sentence is more accurately characterized by the neural signatures of its components as they occur in a similar context than by the neural signatures of these components as they occur in isolation. Copyright © 2017 Elsevier Ltd. All rights reserved.
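The core prediction-and-evaluation loop (map intermediate semantic features to activation patterns, then score held-out items by rank accuracy) can be sketched as below; the feature space, regression model, and rank-accuracy formula are generic assumptions rather than the authors' exact implementation.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Synthetic data: semantic-feature vectors for sentences and corresponding
# activation patterns in hypothetical regions of interest.
rng = np.random.default_rng(7)
n_sent, n_feat, n_vox = 60, 20, 500
features = rng.standard_normal((n_sent, n_feat))
mapping = rng.standard_normal((n_feat, n_vox))
patterns = features @ mapping + 0.5 * rng.standard_normal((n_sent, n_vox))

rank_accuracies = []
for i in range(n_sent):                      # leave one sentence out
    train = np.delete(np.arange(n_sent), i)
    model = Ridge(alpha=1.0).fit(features[train], patterns[train])
    predicted = model.predict(features[i:i + 1])[0]
    # Rank of the held-out sentence when all observed patterns are ordered by
    # correlation with the predicted pattern (1 = best match).
    corrs = [np.corrcoef(predicted, patterns[j])[0, 1] for j in range(n_sent)]
    rank = int(np.argsort(np.argsort(corrs)[::-1])[i]) + 1
    rank_accuracies.append(1 - (rank - 1) / (n_sent - 1))
print("mean rank accuracy (chance = 0.5):", np.mean(rank_accuracies))
```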
Numerical Ordering Ability Mediates the Relation between Number-Sense and Arithmetic Competence
ERIC Educational Resources Information Center
Lyons, Ian M.; Beilock, Sian L.
2011-01-01
What predicts human mathematical competence? While detailed models of number representation in the brain have been developed, it remains to be seen exactly how basic number representations link to higher math abilities. We propose that representation of ordinal associations between numerical symbols is one important factor that underpins this…
ERIC Educational Resources Information Center
Fedorenko, Evelina; Nieto-Castanon, Alfonso; Kanwisher, Nancy
2012-01-01
Work in theoretical linguistics and psycholinguistics suggests that human linguistic knowledge forms a continuum between individual lexical items and abstract syntactic representations, with most linguistic representations falling between the two extremes and taking the form of lexical items stored together with the syntactic/semantic contexts in…
Modality-independent representations of small quantities based on brain activation patterns.
Damarla, Saudamini Roy; Cherkassky, Vladimir L; Just, Marcel Adam
2016-04-01
Machine learning or MVPA (Multi Voxel Pattern Analysis) studies have shown that the neural representation of quantities of objects can be decoded from fMRI patterns, in cases where the quantities were visually displayed. Here we apply these techniques to investigate whether neural representations of quantities depicted in one modality (say, visual) can be decoded from brain activation patterns evoked by quantities depicted in the other modality (say, auditory). The main finding demonstrated, for the first time, that quantities of dots were decodable by a classifier that was trained on the neural patterns evoked by quantities of auditory tones, and vice-versa. The representations that were common across modalities were mainly right-lateralized in frontal and parietal regions. A second finding was that the neural patterns in parietal cortex that represent quantities were common across participants. These findings demonstrate a common neuronal foundation for the representation of quantities across sensory modalities and participants and provide insight into the role of parietal cortex in the representation of quantity information. © 2016 Wiley Periodicals, Inc.
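A compact sketch of the cross-modal decoding logic (train on patterns evoked by one modality, test on the other); the quantities, voxel counts, and classifier are illustrative assumptions.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Synthetic patterns evoked by quantities 1-3 shown as visual dots or heard
# as tone sequences: 20 trials per quantity and modality, 300 voxels.
rng = np.random.default_rng(8)
labels = np.repeat([1, 2, 3], 20)
prototype = {q: rng.standard_normal(300) for q in (1, 2, 3)}

def make_modality():
    return np.vstack([prototype[q] + 0.9 * rng.standard_normal(300) for q in labels])

X_visual, X_auditory = make_modality(), make_modality()

clf = GaussianNB()
acc_v2a = clf.fit(X_visual, labels).score(X_auditory, labels)
acc_a2v = clf.fit(X_auditory, labels).score(X_visual, labels)
print(f"visual->auditory: {acc_v2a:.2f}   auditory->visual: {acc_a2v:.2f}")
```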
Mondragón, Esther; Gray, Jonathan; Alonso, Eduardo; Bonardi, Charlotte; Jennings, Dómhnall J.
2014-01-01
This paper presents a novel representational framework for the Temporal Difference (TD) model of learning, which allows the computation of configural stimuli – cumulative compounds of stimuli that generate perceptual emergents known as configural cues. This Simultaneous and Serial Configural-cue Compound Stimuli Temporal Difference model (SSCC TD) can model both simultaneous and serial stimulus compounds, as well as compounds including the experimental context. This modification significantly broadens the range of phenomena which the TD paradigm can explain, and allows it to predict phenomena which traditional TD solutions cannot, particularly effects that depend on compound stimuli functioning as a whole, such as pattern learning and serial structural discriminations, and context-related effects. PMID:25054799
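The role of a configural cue in a temporal-difference style learner can be illustrated with a very small toy model: an elemental-plus-configural feature vector trained on a negative-patterning schedule (A+, B+, AB-). This is a generic simplification for illustration, not the SSCC TD model itself, which additionally handles serial compounds and the experimental context.

```python
import numpy as np

def features(a_on, b_on):
    # Elemental units for A and B plus one configural unit active only for AB.
    return np.array([float(a_on), float(b_on), float(a_on and b_on)])

alpha, gamma = 0.1, 0.95
w = np.zeros(3)                              # associative strengths per feature

# Negative patterning: elements rewarded, compound not rewarded.
schedule = [((1, 0), 1.0), ((0, 1), 1.0), ((1, 1), 0.0)] * 300
for (a, b), reward in schedule:
    x = features(a, b)
    delta = reward + gamma * 0.0 - w @ x     # one-step TD error (episode ends)
    w += alpha * delta * x

for (a, b), name in [((1, 0), "A"), ((0, 1), "B"), ((1, 1), "AB")]:
    print(name, "predicted value:", round(float(w @ features(a, b)), 2))
```

Without the configural unit, the compound prediction would be forced to equal the sum of the elemental strengths; with it, the learner can drive the AB prediction toward zero while keeping the A and B predictions high, which is the kind of compound-as-a-whole effect the framework is designed to capture.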
Beck, Valerie M; Hollingworth, Andrew
2017-02-01
The content of visual working memory (VWM) guides attention, but whether this interaction is limited to a single VWM representation or functional for multiple VWM representations is under debate. To test this issue, we developed a gaze-contingent search paradigm to directly manipulate selection history and examine the competition between multiple cue-matching saccade target objects. Participants first saw a dual-color cue followed by two pairs of colored objects presented sequentially. For each pair, participants selectively fixated an object that matched one of the cued colors. Critically, for the second pair, the cued color from the first pair was presented either with a new distractor color or with the second cued color. In the latter case, if two cued colors in VWM interact with selection simultaneously, we expected the second cued color object to generate substantial competition for selection, even though the first cued color was used to guide attention in the immediately previous pair. Indeed, in the second pair, selection probability of the first cued color was substantially reduced in the presence of the second cued color. This competition between cue-matching objects provides strong evidence that both VWM representations interacted simultaneously with selection. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Representation and presentation of requirements knowledge
NASA Technical Reports Server (NTRS)
Johnson, W. L.; Feather, Martin S.; Harris, David R.
1992-01-01
An approach to representation and presentation of knowledge used in ARIES, an experimental requirements/specification environment, is described. The approach applies the notion of a representation architecture to the domain of software engineering and incorporates a strong coupling to a transformation system. It is characterized by a single highly expressive underlying representation, interfaced simultaneously to multiple presentations, each with notations of differing degrees of expressivity. This enables analysts to use multiple languages for describing systems and have these descriptions yield a single consistent model of the system.
Krause, Florian; Lindemann, Oliver; Toni, Ivan; Bekkering, Harold
2014-04-01
A dominant hypothesis on how the brain processes numerical size proposes a spatial representation of numbers as positions on a "mental number line." An alternative hypothesis considers numbers as elements of a generalized representation of sensorimotor-related magnitude, which is not obligatorily spatial. Here we show that individuals' relative use of spatial and nonspatial representations has a cerebral counterpart in the structural organization of the posterior parietal cortex. Interindividual variability in the linkage between numbers and spatial responses (faster left responses to small numbers and right responses to large numbers; spatial-numerical association of response codes effect) correlated with variations in gray matter volume around the right precuneus. Conversely, differences in the disposition to link numbers to force production (faster soft responses to small numbers and hard responses to large numbers) were related to gray matter volume in the left angular gyrus. This finding suggests that numerical cognition relies on multiple mental representations of analogue magnitude using different neural implementations that are linked to individual traits.
Reward Systems in the Brain and Nutrition.
Rolls, Edmund T
2016-07-17
The taste cortex in the anterior insula provides separate and combined representations of the taste, temperature, and texture of food in the mouth independently of hunger and thus of reward value and pleasantness. One synapse on, in the orbitofrontal cortex, these sensory inputs are combined by associative learning with olfactory and visual inputs for some neurons, and these neurons encode food reward value in that they respond to food only when hunger is present and in that activations correlate linearly with subjective pleasantness. Cognitive factors, including word-level descriptions and selective attention to affective value, modulate the representation of the reward value of taste, olfactory, and flavor stimuli in the orbitofrontal cortex and a region to which it projects, the anterior cingulate cortex. These food reward representations are important in the control of appetite and food intake. Individual differences in reward representations may contribute to obesity, and there are age-related differences in these reward representations. Implications of how reward systems in the brain operate for understanding, preventing, and treating obesity are described.
Horikawa, Tomoyasu; Kamitani, Yukiyasu
2017-01-01
Dreaming is generally thought to be generated by spontaneous brain activity during sleep with patterns common to waking experience. This view is supported by a recent study demonstrating that dreamed objects can be predicted from brain activity during sleep using statistical decoders trained with stimulus-induced brain activity. However, it remains unclear whether and how visual image features associated with dreamed objects are represented in the brain. In this study, we used a deep neural network (DNN) model for object recognition as a proxy for hierarchical visual feature representation, and DNN features for dreamed objects were analyzed with brain decoding of fMRI data collected during dreaming. The decoders were first trained with stimulus-induced brain activity labeled with the feature values of the stimulus image from multiple DNN layers. The decoders were then used to decode DNN features from the dream fMRI data, and the decoded features were compared with the averaged features of each object category calculated from a large-scale image database. We found that the feature values decoded from the dream fMRI data positively correlated with those associated with dreamed object categories at mid- to high-level DNN layers. Using the decoded features, the dreamed object category could be identified at above-chance levels by matching them to the averaged features for candidate categories. The results suggest that dreaming recruits hierarchical visual feature representations associated with objects, which may support phenomenal aspects of dream experience.
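A schematic of the decode-then-match procedure: regressors trained on stimulus-induced data map brain patterns to DNN feature values, and a dream sample is assigned to the candidate category whose average features best correlate with the decoded features. Everything below (data, layer size, category set, regression model) is a hypothetical stand-in.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Synthetic training data: stimulus-induced fMRI patterns and the DNN feature
# vectors (one layer) of the images that evoked them.
rng = np.random.default_rng(9)
n_train, n_vox, n_units = 200, 800, 100
brain_train = rng.standard_normal((n_train, n_vox))
weights = rng.standard_normal((n_vox, n_units))
dnn_train = brain_train @ weights + 0.5 * rng.standard_normal((n_train, n_units))

decoder = Ridge(alpha=10.0).fit(brain_train, dnn_train)   # fMRI -> DNN features

# Category-average DNN features (e.g., computed from a large image database)
# and one fMRI sample recorded during sleep whose content we try to identify.
category_features = {c: rng.standard_normal(n_units) for c in ("car", "dog", "house")}
dream_sample = rng.standard_normal((1, n_vox))
decoded = decoder.predict(dream_sample)[0]

best = max(category_features,
           key=lambda c: np.corrcoef(decoded, category_features[c])[0, 1])
print("identified dream category:", best)
```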
Some Problems for Representations of Brain Organization Based on Activation in Functional Imaging
ERIC Educational Resources Information Center
Sidtis, John J.
2007-01-01
Functional brain imaging has overshadowed traditional lesion studies in becoming the dominant approach to the study of brain-behavior relationships. The proponents of functional imaging studies frequently argue that this approach provides an advantage over lesion studies by observing normal brain activity in vivo without the disruptive effects of…
Desantis, Andrea; Haggard, Patrick
2016-01-01
To maintain a temporally-unified representation of audio and visual features of objects in our environment, the brain recalibrates audio-visual simultaneity. This process allows adjustment for both differences in time of transmission and time for processing of audio and visual signals. In four experiments, we show that the cognitive processes for controlling instrumental actions also have strong influence on audio-visual recalibration. Participants learned that right and left hand button-presses each produced a specific audio-visual stimulus. Following one action the audio preceded the visual stimulus, while for the other action audio lagged vision. In a subsequent test phase, left and right button-press generated either the same audio-visual stimulus as learned initially, or the pair associated with the other action. We observed recalibration of simultaneity only for previously-learned audio-visual outcomes. Thus, learning an action-outcome relation promotes temporal grouping of the audio and visual events within the outcome pair, contributing to the creation of a temporally unified multisensory object. This suggests that learning action-outcome relations and the prediction of perceptual outcomes can provide an integrative temporal structure for our experiences of external events. PMID:27982063
Feng, S. F.; Schwemmer, M.; Gershman, S. J.; Cohen, J. D.
2014-01-01
Why is it that behaviors that rely on control, so striking in their diversity and flexibility, are also subject to such striking limitations? Typically, people cannot engage in more than a few — and usually only a single — control-demanding task at a time. This limitation was a defining element in the earliest conceptualizations of controlled processing, it remains one of the most widely accepted axioms of cognitive psychology, and is even the basis for some laws (e.g., against the use of mobile devices while driving). Remarkably, however, the source of this limitation is still not understood. Here, we examine one potential source of this limitation, in terms of a tradeoff between the flexibility and efficiency of representation (“multiplexing”) and the simultaneous engagement of different processing pathways (“multitasking”). We show that even a modest amount of multiplexing rapidly introduces cross-talk among processing pathways, thereby constraining the number that can be productively engaged at once. We propose that, given the large number of advantages of efficient coding, the human brain has favored this over the capacity for multitasking of control-demanding processes. PMID:24481850
Zilverstand, Anna; Sorger, Bettina; Kaemingk, Anita; Goebel, Rainer
2017-06-01
We employed a novel parametric spider picture set in the context of a parametric fMRI anxiety provocation study, designed to tease apart brain regions involved in threat monitoring from regions representing an exaggerated anxiety response in spider phobics. For the stimulus set, we systematically manipulated perceived proximity of threat by varying a depicted spider's context, size, and posture. All stimuli were validated in a behavioral rating study (phobics n = 20; controls n = 20; all female). An independent group participated in a subsequent fMRI anxiety provocation study (phobics n = 7; controls n = 7; all female), in which we compared a whole-brain categorical to a whole-brain parametric analysis. Results demonstrated that the parametric analysis provided a richer characterization of the functional role of the involved brain networks. In three brain regions-the mid insula, the dorsal anterior cingulate, and the ventrolateral prefrontal cortex-activation was linearly modulated by perceived proximity specifically in the spider phobia group, indicating a quantitative representation of an exaggerated anxiety response. In other regions (e.g., the amygdala), activation was linearly modulated in both groups, suggesting a functional role in threat monitoring. Prefrontal regions, such as dorsolateral prefrontal cortex, were activated during anxiety provocation but did not show a stimulus-dependent linear modulation in either group. The results confirm that brain regions involved in anxiety processing hold a quantitative representation of a pathological anxiety response and more generally suggest that parametric fMRI designs may be a very powerful tool for clinical research in the future, particularly when developing novel brain-based interventions (e.g., neurofeedback training). Hum Brain Mapp 38:3025-3038, 2017. © 2017 Wiley Periodicals, Inc. © 2017 Wiley Periodicals, Inc.
Invisible Brain: Knowledge in Research Works and Neuron Activity
Segev, Aviv; Curtis, Dorothy; Jung, Sukhwan; Chae, Suhyun
2016-01-01
If the market has an invisible hand, does knowledge creation and representation have an “invisible brain”? While knowledge is viewed as a product of neuron activity in the brain, can we identify knowledge that is outside the brain but reflects the activity of neurons in the brain? This work suggests that the patterns of neuron activity in the brain can be seen in the representation of knowledge-related activity. Here we show that the neuron activity mechanism seems to represent much of the knowledge learned in the past decades based on published articles, in what can be viewed as an “invisible brain” or collective hidden neural networks. Similar results appear when analyzing knowledge activity in patents. Our work also tries to characterize knowledge increase as neuron network activity growth. The results propose that knowledge-related activity can be seen outside of the neuron activity mechanism. Consequently, knowledge might exist as an independent mechanism. PMID:27439199
Kriegeskorte, Nikolaus
2015-11-24
Recent advances in neural network modeling have enabled major strides in computer vision and other artificial intelligence applications. Human-level visual recognition abilities are coming within reach of artificial systems. Artificial neural networks are inspired by the brain, and their computations could be implemented in biological neurons. Convolutional feedforward networks, which now dominate computer vision, take further inspiration from the architecture of the primate visual hierarchy. However, the current models are designed with engineering goals, not to model brain computations. Nevertheless, initial studies comparing internal representations between these models and primate brains find surprisingly similar representational spaces. With human-level performance no longer out of reach, we are entering an exciting new era, in which we will be able to build biologically faithful feedforward and recurrent computational models of how biological brains perform high-level feats of intelligence, including vision.
Fedorenko, Evelina; Nieto-Castañon, Alfonso; Kanwisher, Nancy
2011-01-01
Work in theoretical linguistics and psycholinguistics suggests that human linguistic knowledge forms a continuum between individual lexical items and abstract syntactic representations, with most linguistic representations falling between the two extremes and taking the form of lexical items stored together with the syntactic/semantic contexts in which they frequently occur. Neuroimaging evidence further suggests that no brain region is selectively sensitive to only lexical information or only syntactic information. Instead, all the key brain regions that support high-level linguistic processing have been implicated in both lexical and syntactic processing, suggesting that our linguistic knowledge is plausibly represented in a distributed fashion in these brain regions. Given this distributed nature of linguistic representations, multi-voxel pattern analyses (MVPAs) can help uncover important functional properties of the language system. In the current study we use MVPAs to ask two questions: (1) Do language brain regions differ in how robustly they represent lexical vs. syntactic information? and (2) Do any of the language brain regions distinguish between “pure” lexical information (lists of words) and “pure” abstract syntactic information (jabberwocky sentences) in the pattern of activity? We show that lexical information is represented more robustly than syntactic information across many language regions (with no language region showing the opposite pattern), as evidenced by a better discrimination between conditions that differ along the lexical dimension (sentences vs. jabberwocky, and word lists vs. nonword lists) than between conditions that differ along the syntactic dimension (sentences vs. word lists, and jabberwocky vs. nonword lists). This result suggests that lexical information may play a more critical role than syntax in the representation of linguistic meaning. We also show that several language regions reliably discriminate between “pure” lexical information and “pure” abstract syntactic information in their patterns of neural activity. PMID:21945850
Dissociable patterns of brain activity for mentalizing about known others: a role for attachment
Laurita, Anne C.; Hazan, Cindy
2017-01-01
The human brain tracks dynamic changes within the social environment, forming and updating representations of individuals in our social milieu. This mechanism of social navigation builds an increasingly complex map of the persons with whom we are familiar and to whom we are attached, guiding adaptive social behaviors. We examined the neural representation of known others along a continuum of attachment using fMRI. Heterosexual adults (N = 29, 16 females), in romantic relationships for more than 2 years, made trait judgments for a romantic partner, parent, close friend, familiar acquaintance and self during scanning. Multivariate analysis, partial least squares, was used to identify whole-brain patterns of brain activation associated with trait judgments of known others across a continuum of attachment. Across conditions, trait judgments engaged the default network and lateral prefrontal cortex. Judgments about oneself and a partner were associated with a common activation pattern encompassing anterior and middle cingulate, posterior superior temporal sulcus, as well as anterior insula. Parent and close friend judgments engaged medial and anterior temporal lobe regions. These results provide novel evidence that mentalizing about known familiar others results in differential brain activity. We provide initial evidence that the representation of adult attachment is a distinguishing feature of these differences. PMID:28407150
Inagaki, Mikio; Fujita, Ichiro
2011-07-13
Social communication in nonhuman primates and humans is strongly affected by facial information from other individuals. Many cortical and subcortical brain areas are known to be involved in processing facial information. However, how the neural representation of faces differs across different brain areas remains unclear. Here, we demonstrate that the reference frame for spatial frequency (SF) tuning of face-responsive neurons differs in the temporal visual cortex and amygdala in monkeys. Consistent with psychophysical properties for face recognition, temporal cortex neurons were tuned to image-based SFs (cycles/image) and showed viewing distance-invariant representation of face patterns. On the other hand, many amygdala neurons were influenced by retina-based SFs (cycles/degree), a characteristic that is useful for social distance computation. The two brain areas also differed in the luminance contrast sensitivity of face-responsive neurons; amygdala neurons sharply reduced their responses to low luminance contrast images, while temporal cortex neurons maintained the level of their responses. From these results, we conclude that different types of visual processing in the temporal visual cortex and the amygdala contribute to the construction of the neural representations of faces.
The informatics of a C57BL/6J mouse brain atlas.
MacKenzie-Graham, Allan; Jones, Eagle S; Shattuck, David W; Dinov, Ivo D; Bota, Mihail; Toga, Arthur W
2003-01-01
The Mouse Atlas Project (MAP) aims to produce a framework for organizing and analyzing the large volumes of neuroscientific data produced by the proliferation of genetically modified animals. Atlases provide an invaluable aid in understanding the impact of genetic manipulations by providing a standard for comparison. We use a digital atlas as the hub of an informatics network, correlating imaging data, such as structural imaging and histology, with text-based data, such as nomenclature, connections, and references. We generated brain volumes using magnetic resonance microscopy (MRM), classical histology, and immunohistochemistry, and registered them into a common and defined coordinate system. Specially designed viewers were developed in order to visualize multiple datasets simultaneously and to coordinate between textual and image data. Researchers can navigate through the brain interchangeably, in either a text-based or image-based representation that automatically updates information as they move. The atlas also allows the independent entry of other types of data, the facile retrieval of information, and the straight-forward display of images. In conjunction with centralized servers, image and text data can be kept current and can decrease the burden on individual researchers' computers. A comprehensive framework that encompasses many forms of information in the context of anatomic imaging holds tremendous promise for producing new insights. The atlas and associated tools can be found at http://www.loni.ucla.edu/MAP.
Naito, Eiichi; Morita, Tomoyo; Amemiya, Kaoru
2016-03-01
The human brain can generate a continuously changing postural model of our body. Somatic (proprioceptive) signals from skeletal muscles and joints contribute to the formation of the body representation. Recent neuroimaging studies of proprioceptive bodily illusions have elucidated the importance of three brain systems (motor network, specialized parietal systems, right inferior fronto-parietal network) in the formation of the human body representation. The motor network, especially the primary motor cortex, processes afferent input from skeletal muscles. Such information may contribute to the formation of kinematic/dynamic postural models of limbs, thereby enabling fast online feedback control. Distinct parietal regions appear to play specialized roles in the transformation/integration of information across different coordinate systems, which may subserve the adaptability and flexibility of the body representation. Finally, the right inferior fronto-parietal network, connected by the inferior branch of the superior longitudinal fasciculus, is consistently recruited when an individual experiences various types of bodily illusions and its possible roles relate to corporeal awareness, which is likely elicited through a series of neuronal processes of monitoring and accumulating bodily information and updating the body representation. Because this network is also recruited when identifying one's own features, the network activity could be a neuronal basis for self-consciousness. Copyright © 2015 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.
ERIC Educational Resources Information Center
Westermann, Gert; Mareschal, Denis; Johnson, Mark H.; Sirois, Sylvain; Spratling, Michael W.; Thomas, Michael S. C.
2007-01-01
Neuroconstructivism is a theoretical framework focusing on the construction of representations in the developing brain. Cognitive development is explained as emerging from the experience-dependent development of neural structures supporting mental representations. Neural development occurs in the context of multiple interacting constraints acting…
Emotion, Cognition, and Mental State Representation in Amygdala and Prefrontal Cortex
Salzman, C. Daniel; Fusi, Stefano
2011-01-01
Neuroscientists have often described cognition and emotion as separable processes implemented by different regions of the brain, such as the amygdala for emotion and the prefrontal cortex for cognition. In this framework, functional interactions between the amygdala and prefrontal cortex mediate emotional influences on cognitive processes such as decision-making, as well as the cognitive regulation of emotion. However, neurons in these structures often have entangled representations, whereby single neurons encode multiple cognitive and emotional variables. Here we review studies using anatomical, lesion, and neurophysiological approaches to investigate the representation and utilization of cognitive and emotional parameters. We propose that these mental state parameters are inextricably linked and represented in dynamic neural networks composed of interconnected prefrontal and limbic brain structures. Future theoretical and experimental work is required to understand how these mental state representations form and how shifts between mental states occur, a critical feature of adaptive cognitive and emotional behavior. PMID:20331363
Human white matter and knowledge representation
Pestilli, Franco
2018-04-01
Understanding how knowledge is represented in the human brain is a fundamental challenge in neuroscience. To date, most of the work on this topic has focused on knowledge representation in cortical areas and debated whether knowledge is represented in a distributed or localized fashion. Fang and colleagues provide evidence that brain connections and the white matter supporting such connections might play a significant role. The work opens new avenues of investigation, breaking through disciplinary boundaries across network neuroscience, computational neuroscience, cognitive science, and classical lesion studies. PMID:29698391
ERIC Educational Resources Information Center
Larruy, Martine Marquillo
2000-01-01
This article concentrates on the use of metaphors characterizing a multilingual brain in a corpus of oral interactions drawn from the Andorran part of an international research study. First, the situation and status of metaphors in fields connected to the elaboration of knowledge are questioned. Next, the most important metaphors associated with…
Milde, Christopher; Rance, Mariela; Kirsch, Pinar; Trojan, Jörg; Fuchs, Xaver; Foell, Jens; Bekrater-Bodmann, Robin
2015-01-01
Since its original proposal, mirror therapy has been established as a successful neurorehabilitative intervention in several neurological disorders to recover motor function or to relieve pain. Mirror therapy seems to operate by reactivating the contralesional representation of the non-mirrored limb in primary motor- and somatosensory cortex. However, mirror boxes have some limitations, which prompted the use of additional mirror visual feedback (MVF) devices. The present study evaluated the utility of mirror glasses compared to a mirror box. We also tested the hypothesis that increased interhemispheric communication between the motor hand areas is the mechanism by which mirror visual feedback recruits the representation of the non-mirrored limb. Therefore, mirror illusion capacity and brain activations were measured in a within-subject design during both mirror visual feedback conditions in counterbalanced order with 20 healthy subjects inside a magnetic resonance imaging scanner. Furthermore, we analyzed task-dependent functional connectivity between motor hand representations using psychophysiological interaction analysis during both mirror tasks. Neither the subjective quality of mirror illusions nor the patterns of functional brain activation differed between the mirror tasks. The sensorimotor representation of the non-mirrored hand was recruited in both mirror tasks. However, a significant increase in interhemispheric connectivity between the hand areas was only observed in the mirror glasses condition, suggesting different mechanisms for the recruitment of the representation of the non-mirrored hand in the two mirror tasks. We conclude that the mirror glasses might be a promising alternative to the mirror box, as they induce similar patterns of brain activation. Moreover, the mirror glasses can be easily applied in therapy and research. We want to emphasize that the neuronal mechanisms for the recruitment of the affected limb representation might differ depending on conceptual differences between MVF devices. However, our findings need to be validated within specific patient groups. PMID:26018572
Gómez-Velázquez, Fabiola R; Vélez-Pérez, Hugo; Espinoza-Valdez, Aurora; Romo-Vazquez, Rebeca; Salido-Ruiz, Ricardo A; Ruiz-Stovel, Vanessa; Gallardo-Moreno, Geisa B; González-Garrido, Andrés A; Berumen, Gustavo
2017-02-08
Children with mathematical difficulties usually have an impaired ability to process symbolic representations. Functional MRI methods have suggested that early frontoparietal connectivity can predict mathematical achievement; however, the study of brain connectivity during numerical processing remains unexplored. With the aim of evaluating this in children with different math proficiencies, we selected a sample of 40 children divided into two groups [high achievement (HA) and low achievement (LA)] according to their arithmetic scores in the Wide Range Achievement Test, 4th ed. Participants performed a symbolic magnitude comparison task (i.e. determining which of two numbers is numerically larger), with simultaneous electrophysiological recording. Partial directed coherence and graph theory methods were used to estimate and depict frontoparietal connectivity in both groups. The behavioral measures showed that children with LA performed significantly slower and less accurately than their peers in the HA group. Significantly higher frontocentral connectivity was found in LA compared with HA; however, when the connectivity analysis was restricted to parietal locations, no relevant group differences were observed. These findings seem to support the notion that LA children require greater memory and attentional efforts to meet task demands, probably affecting early stages of symbolic comparison.
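To make the connectivity step described above concrete, the following is a minimal sketch, not the authors' pipeline, of how a directed connectivity estimate in the spirit of partial directed coherence can be computed: a vector autoregressive model is fit to multichannel signals by least squares and the column-normalized transfer matrix is evaluated over a frequency range. Channel count, model order, sampling rate and the edge threshold are illustrative assumptions.

```python
import numpy as np

def fit_var(x, p):
    """Least-squares fit of a VAR(p) model to x with shape (samples, channels)."""
    n, k = x.shape
    Y = x[p:]                                                   # targets, shape (n-p, k)
    X = np.hstack([x[p - l: n - l] for l in range(1, p + 1)])   # lagged predictors, shape (n-p, k*p)
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)                   # coefficients, shape (k*p, k)
    return B.T.reshape(k, p, k).transpose(1, 0, 2)              # A[l-1][i, j]: effect of channel j at lag l on channel i

def pdc(A, freqs, fs):
    """Partial directed coherence; out[f, i, j] is the influence of channel j on channel i at freqs[f]."""
    p, k, _ = A.shape
    out = np.zeros((len(freqs), k, k))
    for fi, f in enumerate(freqs):
        Af = np.eye(k, dtype=complex)
        for l in range(1, p + 1):
            Af -= A[l - 1] * np.exp(-2j * np.pi * f * l / fs)
        out[fi] = np.abs(Af) / np.sqrt((np.abs(Af) ** 2).sum(axis=0, keepdims=True))
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((2000, 4))             # four synthetic "channels"
x[1:, 1] += 0.5 * x[:-1, 0]                    # channel 0 drives channel 1 at lag 1
A = fit_var(x, p=5)
P = pdc(A, freqs=np.arange(1, 30), fs=250.0)
adjacency = P.mean(axis=0) > 0.3               # crude directed graph from mean PDC
np.fill_diagonal(adjacency, False)
print(adjacency.astype(int))                   # the 0 -> 1 edge should stand out
```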
Keller, Peter E; Novembre, Giacomo; Hove, Michael J
2014-12-19
Human interaction often requires simultaneous precision and flexibility in the coordination of rhythmic behaviour between individuals engaged in joint activity, for example, playing a musical duet or dancing with a partner. This review article addresses the psychological processes and brain mechanisms that enable such rhythmic interpersonal coordination. First, an overview is given of research on the cognitive-motor processes that enable individuals to represent joint action goals and to anticipate, attend and adapt to other's actions in real time. Second, the neurophysiological mechanisms that underpin rhythmic interpersonal coordination are sought in studies of sensorimotor and cognitive processes that play a role in the representation and integration of self- and other-related actions within and between individuals' brains. Finally, relationships between social-psychological factors and rhythmic interpersonal coordination are considered from two perspectives, one concerning how social-cognitive tendencies (e.g. empathy) affect coordination, and the other concerning how coordination affects interpersonal affiliation, trust and prosocial behaviour. Our review highlights musical ensemble performance as an ecologically valid yet readily controlled domain for investigating rhythm in joint action. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
ConnectViz: Accelerated Approach for Brain Structural Connectivity Using Delaunay Triangulation.
Adeshina, A M; Hashim, R
2016-03-01
Stroke is a cardiovascular disease with high mortality and long-term disability worldwide. Normal functioning of the brain is dependent on the adequate supply of oxygen and nutrients to the brain's complex network through the blood vessels. Stroke, occasionally a hemorrhagic stroke, ischemia or other blood vessel dysfunction, can affect patients during a cerebrovascular incident. Structurally, the left and right carotid arteries, and the right and left vertebral arteries, are responsible for supplying blood to the brain, scalp and face. However, a number of impairments in frontal lobe function may occur as a result of any decrease in blood flow through one of the internal carotid arteries. Such impairment commonly results in numbness, weakness or paralysis. Recently, the concept of the brain's wiring representation, the connectome, was introduced. However, construction and visualization of such a brain network require tremendous computation. Consequently, previously proposed approaches have been identified with common problems of high memory consumption and slow execution. Furthermore, interactivity in the previously proposed frameworks for brain networks is also an outstanding issue. This study proposes an accelerated approach for brain connectomic visualization based on the graph theory paradigm using compute unified device architecture (CUDA), extending the previously proposed SurLens Visualization and computer aided hepatocellular carcinoma (CAHECA) frameworks. The accelerated brain structural connectivity framework was evaluated with stripped brain datasets from the Department of Surgery, University of North Carolina, Chapel Hill, USA. Significantly, our proposed framework is able to generate and extract points and edges from datasets, display nodes and edges in the datasets in the form of a network, and clearly map data volumes to the corresponding brain surface. Moreover, with the framework, surfaces of the dataset were simultaneously displayed with the nodes and the edges. The framework is very efficient in providing greater interactivity as a way of representing the nodes and the edges intuitively, all achieved at a considerably interactive speed for instantaneous mapping of the datasets' features. Uniquely, the connectomic algorithm performed remarkably fast with normal hardware requirement specifications.
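As a rough, CPU-only illustration of the graph-construction idea above (not the CUDA framework the abstract describes), candidate structural edges can be derived from 3-D node coordinates with a Delaunay triangulation and summarized as a graph; the random coordinates below stand in for region centroids extracted from a stripped brain volume.

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(42)
nodes = rng.uniform(0.0, 100.0, size=(200, 3))   # assumed region centroids (mm), placeholders only

tri = Delaunay(nodes)
edges = set()
for simplex in tri.simplices:                    # each tetrahedron contributes its 6 edges
    for i in range(4):
        for j in range(i + 1, 4):
            a, b = sorted((int(simplex[i]), int(simplex[j])))
            edges.add((a, b))

lengths = [np.linalg.norm(nodes[a] - nodes[b]) for a, b in edges]
print(len(nodes), "nodes,", len(edges), "edges")
print("mean degree:", round(2 * len(edges) / len(nodes), 2))
print("mean edge length (mm):", round(float(np.mean(lengths)), 2))
```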
Elmer, Stefan; Klein, Carina; Kühnis, Jürg; Liem, Franziskus; Meyer, Martin; Jäncke, Lutz
2014-10-01
In this study, we used high-density EEG to evaluate whether speech and music expertise has an influence on the categorization of expertise-related and unrelated sounds. With this purpose in mind, we compared the categorization of speech, music, and neutral sounds between professional musicians, simultaneous interpreters (SIs), and controls in response to morphed speech-noise, music-noise, and speech-music continua. Our hypothesis was that music and language expertise will strengthen the memory representations of prototypical sounds, which act as a perceptual magnet for morphed variants. This means that the prototype would "attract" variants. This so-called magnet effect should be manifested by an increased assignment of morphed items to the trained category, by a reduced maximal slope of the psychometric function, as well as by differential event-related brain responses reflecting memory comparison processes (i.e., N400 and P600 responses). As a main result, we provide first evidence for a domain-specific behavioral bias of musicians and SIs toward the trained categories, namely music and speech. In addition, SIs showed a bias toward musical items, indicating that interpreting training has a generic influence on the cognitive representation of spectrotemporal signals with similar acoustic properties to speech sounds. Notably, EEG measurements revealed clear distinct N400 and P600 responses to both prototypical and ambiguous items between the three groups at anterior, central, and posterior scalp sites. These differential N400 and P600 responses represent synchronous activity occurring across widely distributed brain networks, and indicate a dynamical recruitment of memory processes that vary as a function of training and expertise.
Neural dynamics and information representation in microcircuits of motor cortex.
Tsubo, Yasuhiro; Isomura, Yoshikazu; Fukai, Tomoki
2013-01-01
The brain has to analyze and respond to external events that can change rapidly from time to time, suggesting that information processing by the brain may be essentially dynamic rather than static. The dynamical features of neural computation are of significant importance in motor cortex that governs the process of movement generation and learning. In this paper, we discuss these features based primarily on our recent findings on neural dynamics and information coding in the microcircuit of rat motor cortex. In fact, cortical neurons show a variety of dynamical behavior from rhythmic activity in various frequency bands to highly irregular spike firing. Of particular interest are the similarity and dissimilarity of the neuronal response properties in different layers of motor cortex. By conducting electrophysiological recordings in slice preparation, we report the phase response curves (PRCs) of neurons in different cortical layers to demonstrate their layer-dependent synchronization properties. We then study how motor cortex recruits task-related neurons in different layers for voluntary arm movements by simultaneous juxtacellular and multiunit recordings from behaving rats. The results suggest an interesting difference in the spectrum of functional activity between the superficial and deep layers. Furthermore, the task-related activities recorded from various layers exhibited power law distributions of inter-spike intervals (ISIs), in contrast to a general belief that ISIs obey Poisson or Gamma distributions in cortical neurons. We present a theoretical argument that this power law of in vivo neurons may represent the maximization of the entropy of firing rate with limited energy consumption of spike generation. Though further studies are required to fully clarify the functional implications of this coding principle, it may shed new light on information representations by neurons and circuits in motor cortex.
Danker, Jared F; Anderson, John R
2007-04-15
In naturalistic algebra problem solving, the cognitive processes of representation and retrieval are typically confounded, in that transformations of the equations typically require retrieval of mathematical facts. Previous work using cognitive modeling has associated activity in the prefrontal cortex with the retrieval demands of algebra problems and activity in the posterior parietal cortex with the transformational demands of algebra problems, but these regions tend to behave similarly in response to task manipulations (Anderson, J.R., Qin, Y., Sohn, M.-H., Stenger, V.A., Carter, C.S., 2003. An information-processing model of the BOLD response in symbol manipulation tasks. Psychon. Bull. Rev. 10, 241-261; Qin, Y., Carter, C.S., Silk, E.M., Stenger, A., Fissell, K., Goode, A., Anderson, J.R., 2004. The change of brain activation patterns as children learn algebra equation solving. Proc. Natl. Acad. Sci. 101, 5686-5691). With this study we attempt to isolate activity in these two regions by using a multi-step algebra task in which transformation (parietal) is manipulated in the first step and retrieval (prefrontal) is manipulated in the second step. Counter to our initial predictions, both brain regions were differentially active during both steps. We designed two cognitive models, one encompassing our initial assumptions and one in which both processes were engaged during both steps. The first model provided a poor fit to the behavioral and neural data, while the second model fit both well. This simultaneously emphasizes the strong relationship between retrieval and representation in mathematical reasoning and demonstrates that cognitive modeling can serve as a useful tool for understanding task manipulations in neuroimaging experiments.
Classical Wave Model of Quantum-Like Processing in Brain
NASA Astrophysics Data System (ADS)
Khrennikov, A.
2011-01-01
We discuss the conjecture of quantum-like (QL) processing of information in the brain. It is not based on a physical quantum brain (e.g., Penrose) with quantum physical carriers of information. In our approach the brain creates the QL representation (QLR) of information in Hilbert space and uses quantum information rules in decision making. The existence of such a QLR has been (at least preliminarily) confirmed by experimental data from cognitive psychology. The violation of the law of total probability in these experiments is an important sign of the nonclassicality of the data. In the so-called "constructive wave function approach" such data can be represented by complex amplitudes. We previously presented the QL model of decision making [1, 2]. In this paper we speculate on a possible physical realization of the QLR in the brain: a classical wave model producing the QLR. It is based on the variety of time scales in the brain. Each pair of scales (fine, the background fluctuations of the electromagnetic field, and rough, the cognitive image scale) induces the QL representation. The background field plays the crucial role in the creation of "superstrong QL correlations" in the brain.
The art of seeing and painting.
Grossberg, Stephen
2008-01-01
The human urge to represent the three-dimensional world using two-dimensional pictorial representations dates back at least to Paleolithic times. Artists from ancient to modern times have struggled to understand how a few contours or color patches on a flat surface can induce mental representations of a three-dimensional scene. This article summarizes some of the recent breakthroughs in scientifically understanding how the brain sees that shed light on these struggles. These breakthroughs illustrate how various artists have intuitively understood paradoxical properties about how the brain sees, and have used that understanding to create great art. These paradoxical properties arise from how the brain forms the units of conscious visual perception; namely, representations of three-dimensional boundaries and surfaces. Boundaries and surfaces are computed in parallel cortical processing streams that obey computationally complementary properties. These streams interact at multiple levels to overcome their complementary weaknesses and to transform their complementary properties into consistent percepts. The article describes how properties of complementary consistency have guided the creation of many great works of art.
Menon, Nadia; White, David; Kemp, Richard I
2015-01-01
According to cognitive and neurological models of the face-processing system, faces are represented at two levels of abstraction. First, image-based pictorial representations code a particular instance of a face and include information that is unrelated to identity-such as lighting, pose, and expression. Second, at a more abstract level, identity-specific representations combine information from various encounters with a single face. Here we tested whether identity-level representations mediate unfamiliar face matching performance. Across three experiments we manipulated identity attributions to pairs of target images and measured the effect on subsequent identification decisions. Participants were instructed that target images were either two photos of the same person (1ID condition) or photos of two different people (2ID condition). This manipulation consistently affected performance in sequential matching: 1ID instructions improved accuracy on "match" trials and caused participants to adopt a more liberal response bias than the 2ID condition. However, this manipulation did not affect performance in simultaneous matching. We conclude that identity-level representations, generated in working memory, influence the amount of variation tolerated between images, when making identity judgements in sequential face matching.
[Binucleate neurons: syncytial fusion or amitotic division].
Sotnikov, O S; Frumkina, L E; Lactionova, A A; Paramonova, N M; Novakovskaia, S A
2011-01-01
This review recounts the history of research on binucleate neurons and analyses two hypotheses about the mechanism of their formation: syncytial fusion or amitotic division. Facts at odds with the formerly orthodox cell theory, which categorically denied the possibility of syncytial connections in the nervous system and of syncytial fusion of neurons, are discussed. As examples, results of ultrastructural studies of syncytium formation in the cerebral cortex of rats, in autonomic ganglia, and in the hippocampus and cerebellum of adult animals are presented. Video recordings of the syncytial fusion of living neurons and of the mechanism of formation of multinucleate neurons in tissue culture are analyzed. Existing data on the amitotic pathway of binucleate neuron formation are critically considered. The conclusion is drawn that the mechanism of binucleate neuron formation is cellular fusion. At the same time, the review supports our view that syncytial interneuronal connections exist in the nervous system.
Perisaccadic Receptive Field Expansion in the Lateral Intraparietal Area.
Wang, Xiaolan; Fung, C C Alan; Guan, Shaobo; Wu, Si; Goldberg, Michael E; Zhang, Mingsha
2016-04-20
Humans and monkeys have access to an accurate representation of visual space despite a constantly moving eye. One mechanism by which the brain accomplishes this is by remapping visual receptive fields around the time of a saccade. In this process a neuron can be excited by a probe stimulus in the current receptive field, and also simultaneously by a probe stimulus in the location that will be brought into the neuron's receptive field by the saccade (the future receptive field), even before the saccade begins. Here we show that perisaccadic neuronal excitability is not limited to the current and future receptive fields but encompasses the entire region of visual space across which the current receptive field will be swept by the saccade. A computational model shows that this receptive field expansion is consistent with the propagation of a wave of activity across the cerebral cortex as saccade planning and remapping proceed. Copyright © 2016 Elsevier Inc. All rights reserved.
Sight and sound converge to form modality-invariant representations in temporo-parietal cortex
Man, Kingson; Kaplan, Jonas T.; Damasio, Antonio; Meyer, Kaspar
2013-01-01
People can identify objects in the environment with remarkable accuracy, irrespective of the sensory modality they use to perceive them. This suggests that information from different sensory channels converges somewhere in the brain to form modality-invariant representations, i.e., representations that reflect an object independently of the modality through which it has been apprehended. In this functional magnetic resonance imaging study of human subjects, we first identified brain areas that responded to both visual and auditory stimuli and then used crossmodal multivariate pattern analysis to evaluate the neural representations in these regions for content-specificity (i.e., do different objects evoke different representations?) and modality-invariance (i.e., do the sight and the sound of the same object evoke a similar representation?). While several areas became activated in response to both auditory and visual stimulation, only the neural patterns recorded in a region around the posterior part of the superior temporal sulcus displayed both content-specificity and modality-invariance. This region thus appears to play an important role in our ability to recognize objects in our surroundings through multiple sensory channels and to process them at a supra-modal (i.e., conceptual) level. PMID:23175818
Specialization in the Human Brain: The Case of Numbers
Kadosh, Roi Cohen; Bahrami, Bahador; Walsh, Vincent; Butterworth, Brian; Popescu, Tudor; Price, Cathy J.
2011-01-01
How numerical representation is encoded in the adult human brain is important for a basic understanding of human brain organization, its typical and atypical development, its evolutionary precursors, cognitive architectures, education, and rehabilitation. Previous studies have shown that numerical processing activates the same intraparietal regions irrespective of the presentation format (e.g., symbolic digits or non-symbolic dot arrays). This has led to claims that there is a single format-independent, numerical representation. In the current study we used a functional magnetic resonance adaptation paradigm, and effective connectivity analysis to re-examine whether numerical processing in the intraparietal sulci is dependent or independent on the format of the stimuli. We obtained two novel results. First, the whole brain analysis revealed that format change (e.g., from dots to digits), in the absence of a change in magnitude, activated the same intraparietal regions as magnitude change, but to a greater degree. Second, using dynamic causal modeling as a tool to disentangle neuronal specialization across regions that are commonly activated, we found that the connectivity between the left and right intraparietal sulci is format-dependent. Together, this line of results supports the idea that numerical representation is subserved by multiple mechanisms within the same parietal regions. PMID:21808615
Multidimensional brain activity dictated by winner-take-all mechanisms.
Tozzi, Arturo; Peters, James F
2018-06-21
A novel demon-based architecture is introduced to elucidate brain functions such as pattern recognition during human perception and mental interpretation of visual scenes. Starting from the topological concepts of invariance and persistence, we introduce a Selfridge pandemonium variant of brain activity that takes into account a novel feature, namely, demons that recognize short straight-line segments, curved lines and scene shapes, such as shape interior, density and texture. Low-level representations of objects can be mapped to higher-level views (our mental interpretations): a series of transformations can be gradually applied to a pattern in a visual scene, without affecting its invariant properties. This makes it possible to construct a symbolic multi-dimensional representation of the environment. These representations can be projected continuously to an object that we have seen and continue to see, thanks to the mapping from shapes in our memory to shapes in Euclidean space. Although perceived shapes are 3-dimensional (plus time), the evaluation of shape features (volume, color, contour, closeness, texture, and so on) leads to n-dimensional brain landscapes. Here we discuss the advantages of our parallel, hierarchical model in pattern recognition, computer vision and biological nervous system's evolution. Copyright © 2018 Elsevier B.V. All rights reserved.
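The demon architecture sketched in the abstract above lends itself to a compact illustration. The following toy Python sketch, not the authors' model, wires feature demons (straight segments, curves, closed contours, texture) to cognitive demons whose summed "shouts" a decision demon compares; every feature name, concept and weight is invented purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class FeatureDemon:
    name: str
    def shout(self, scene: dict) -> float:
        return float(scene.get(self.name, 0.0))     # how strongly its feature is present in the scene

@dataclass
class CognitiveDemon:
    name: str
    weights: dict                                    # feature name -> weight
    def shout(self, feature_shouts: dict) -> float:
        return sum(w * feature_shouts.get(f, 0.0) for f, w in self.weights.items())

def decision_demon(scene, features, concepts):
    feature_shouts = {d.name: d.shout(scene) for d in features}
    scores = {c.name: c.shout(feature_shouts) for c in concepts}
    return max(scores, key=scores.get), scores

features = [FeatureDemon("straight_segment"), FeatureDemon("curved_line"),
            FeatureDemon("closed_contour"), FeatureDemon("texture_density")]
concepts = [CognitiveDemon("circle", {"curved_line": 1.0, "closed_contour": 1.0}),
            CognitiveDemon("line",   {"straight_segment": 1.0}),
            CognitiveDemon("blob",   {"closed_contour": 0.5, "texture_density": 1.0})]

scene = {"curved_line": 0.9, "closed_contour": 0.8, "texture_density": 0.1}
print(decision_demon(scene, features, concepts))     # -> ('circle', {...})
```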
Khrennikov, Andrei
2011-09-01
We propose a model of quantum-like (QL) processing of mental information. This model is based on quantum information theory. However, in contrast to models of a "quantum physical brain" reducing mental activity (at least at the highest level) to quantum physical phenomena in the brain, our model matches well with the basic neuronal paradigm of cognitive science. QL information processing is based (surprisingly) on classical electromagnetic signals induced by the joint activity of neurons. This novel approach to quantum information is based on a representation of quantum mechanics as a version of classical signal theory, which was recently elaborated by the author. The brain uses the QL representation (QLR) for working with abstract concepts; concrete images are described by classical information theory. The two processes, classical and QL, are performed in parallel. Moreover, information is actively transmitted from one representation to another. A QL concept, given in our model by a density operator, can generate a variety of concrete images given by temporal realizations of the corresponding (Gaussian) random signal. This signal has a covariance operator coinciding with the density operator encoding the abstract concept under consideration. The presence of various temporal scales in the brain plays the crucial role in the creation of the QLR in the brain. Moreover, in our model electromagnetic noise produced by neurons is a source of superstrong QL correlations between processes in different spatial domains in the brain; the binding problem is solved on the QL level, but with the aid of the classical background fluctuations. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
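One quantitative idea in the abstract above, that a density operator can double as the covariance of a classical Gaussian random signal, is easy to illustrate numerically. The sketch below is only an illustration under that reading: a unit-trace, positive semidefinite matrix is built, Gaussian "concrete images" are sampled with it as covariance, and the empirical covariance is checked against it. The 3x3 matrix is an arbitrary choice, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3))
rho = M @ M.T
rho /= np.trace(rho)                 # density operator: symmetric, positive semidefinite, trace 1

# Classical Gaussian "signal" whose covariance operator coincides with rho
samples = rng.multivariate_normal(mean=np.zeros(3), cov=rho, size=200_000)

empirical_cov = samples.T @ samples / len(samples)
print(np.round(rho, 3))
print(np.round(empirical_cov, 3))    # approaches rho as the sample count grows
```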
The timing of language learning shapes brain structure associated with articulation.
Berken, Jonathan A; Gracco, Vincent L; Chen, Jen-Kai; Klein, Denise
2016-09-01
We compared the brain structure of highly proficient simultaneous (two languages from birth) and sequential (second language after age 5) bilinguals, who differed only in their degree of native-like accent, to determine how the brain develops when a skill is acquired from birth versus later in life. For the simultaneous bilinguals, gray matter density was increased in the left putamen, as well as in the left posterior insula, right dorsolateral prefrontal cortex, and left and right occipital cortex. For the sequential bilinguals, gray matter density was increased in the bilateral premotor cortex. Sequential bilinguals with better accents also showed greater gray matter density in the left putamen, and in several additional brain regions important for sensorimotor integration and speech-motor control. Our findings suggest that second language learning results in enhanced brain structure of specific brain areas, which depends on whether two languages are learned simultaneously or sequentially, and on the extent to which native-like proficiency is acquired.
Common mechanisms of spatial attention in memory and perception: a tactile dual-task study.
Katus, Tobias; Andersen, Søren K; Müller, Matthias M
2014-03-01
Orienting attention to locations in mnemonic representations engages processes that functionally and anatomically overlap the neural circuitry guiding prospective shifts of spatial attention. The attention-based rehearsal account predicts that the requirement to withdraw attention from a memorized location impairs memory accuracy. In a dual-task study, we simultaneously presented retro-cues and pre-cues to guide spatial attention in short-term memory (STM) and perception, respectively. The spatial direction of each cue was independent of the other. The locations indicated by the combined cues could be compatible (same hand) or incompatible (opposite hands). Incompatible directional cues decreased lateralized activity in brain potentials evoked by visual cues, indicating interference in the generation of prospective attention shifts. The detection of external stimuli at the prospectively cued location was impaired when the memorized location was part of the perceptually ignored hand. The disruption of attention-based rehearsal by means of incompatible pre-cues reduced memory accuracy and affected encoding of tactile test stimuli at the retrospectively cued hand. These findings highlight the functional significance of spatial attention for spatial STM. The bidirectional interactions between both tasks demonstrate that spatial attention is a shared neural resource of a capacity-limited system that regulates information processing in internal and external stimulus representations.
Top-Down Predictions in the Cognitive Brain
ERIC Educational Resources Information Center
Kveraga, Kestutis; Ghuman, Avniel S.; Bar, Moshe
2007-01-01
The human brain is not a passive organ simply waiting to be activated by external stimuli. Instead, we propose that the brain continuously employs memory of past experiences to interpret sensory information and predict the immediately relevant future. The basic elements of this proposal include analogical mapping, associative representations and…
Beeney, Joseph E; Hallquist, Michael N; Ellison, William D; Levy, Kenneth N
2016-01-01
Individuals with borderline personality disorder (BPD) display an impoverished sense of self and representations of self and others that shift between positive and negative poles. However, little research has investigated the nature of representational disturbance in BPD. The present study takes a multimodal approach. A card sort task was used to investigate complexity, integration, and valence of self-representation in BPD. Impairment in the maintenance of self and other representations was assessed using a personality representational maintenance task. Finally, functional MRI (fMRI) was used to assess whether individuals with BPD show neural abnormalities related specifically to the self and what brain areas may be related to poor representational maintenance. Individuals with BPD sorted self-aspects in a way suggesting more complexity of self-representation, but also less integration and more negative valence overall. On the representational maintenance task, individuals with BPD showed less consistency in their representations of self and others over the 3-hr period, but only for abstract, personality-based representations. Performance on this measure mediated between-groups brain activation in several areas supporting social cognition. We found no evidence for social-cognitive disturbance specific to the self. Additionally, the BPD group showed main effects, insensitive to condition, of hyperactivation in the medial prefrontal cortex, temporal parietal junction, several regions of the frontal pole, the precuneus and middle temporal gyrus, all areas crucial for social cognition. In contrast, controls evidenced greater activation in visual, sensory, motor, and mirror neuron regions. These findings are discussed in relation to research regarding hypermentalization and the overlap between self- and other-disturbance. (c) 2016 APA, all rights reserved.
NASA Astrophysics Data System (ADS)
Ryan, Alex
Representation is inherent to the concept of an agent, but its importance in complex systems has not yet been widely recognised. In this paper I introduce Peirce's theory of signs, which facilitates a definition of representation in general. In summary, representation means that for some agent, a model is used to stand in for another entity in a way that shapes the behaviour of the agent with respect to that entity. Representation in general is then related to the theories of representation that have developed within different disciplines. I compare theories of representation from metaphysics, military theory and systems theory. Additional complications arise in explaining the special case of mental representations, which is the focus of cognitive science. I consider the dominant theory of cognition — that the brain is a representational device — as well as the sceptical anti-representational response. Finally, I argue that representation distinguishes agents from non-representational objects: agents are objects capable of representation.
NASA Astrophysics Data System (ADS)
Nosrati, Reyhaneh; Ramadeen, Andrew; Hu, Xudong; Woldemichael, Ermias; Kim, Siwook; Dorian, Paul; Toronov, Vladislav
2015-03-01
In this series of animal experiments on resuscitation after cardiac arrest we had a unique opportunity to measure hyperspectral near-infrared spectroscopy (hNIRS) parameters directly on the brain dura, or on the brain through the intact pig skull, and simultaneously the muscle hNIRS parameters. Simultaneously, the arterial blood pressure and carotid and femoral blood flow were recorded in real time using invasive sensors. We used a novel hyperspectral signal-processing algorithm to extract time-dependent concentrations of water, hemoglobin, and the redox state of cytochrome c oxidase during cardiac arrest and resuscitation. In addition, in order to assess the validity of the non-invasive brain measurements, the results obtained from the open brain were compared to those acquired through the skull. The comparison of hNIRS data acquired on the brain surface and through the adult pig skull shows that in both cases the hemoglobin and the redox state of cytochrome c oxidase changed in similar ways in similar situations and in agreement with blood pressure and flow changes. The comparison of simultaneously measured brain and muscle changes showed expected differences. Overall, the results show the feasibility of transcranial hNIRS measurement of cerebral parameters, including the redox state of cytochrome c oxidase, in human cardiac arrest patients.
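The kind of spectral unmixing step described above can be sketched as a least-squares problem: given hyperspectral attenuation changes and a matrix of chromophore extinction spectra, solve for concentration changes of the chromophores. This is not the authors' algorithm, and the extinction values below are random placeholders rather than published coefficients.

```python
import numpy as np

rng = np.random.default_rng(7)
n_wavelengths, n_chromophores = 120, 4
E = np.abs(rng.standard_normal((n_wavelengths, n_chromophores)))   # placeholder extinction spectra
                                                                    # (columns: e.g. HbO2, HHb, water, oxCCO)
true_dc = np.array([1.5, -0.8, 0.1, 0.3])                           # simulated concentration changes
d_atten = E @ true_dc + 0.01 * rng.standard_normal(n_wavelengths)   # noisy attenuation change per wavelength

dc_hat, *_ = np.linalg.lstsq(E, d_atten, rcond=None)                # least-squares unmixing
print(np.round(dc_hat, 3))                                          # recovers approximately [1.5, -0.8, 0.1, 0.3]
```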
Dimitriadis, Stavros I; López, María E; Bruña, Ricardo; Cuesta, Pablo; Marcos, Alberto; Maestú, Fernando; Pereda, Ernesto
2018-01-01
Our work aimed to demonstrate the combination of machine learning and graph theory for the design of a connectomic biomarker for mild cognitive impairment (MCI) subjects using eyes-closed neuromagnetic recordings. The whole analysis was based on source-reconstructed neuromagnetic activity. As ROI representations, we employed the principal component analysis (PCA) and centroid approaches. As representative bi-variate connectivity estimators for the estimation of intra- and cross-frequency interactions, we adopted the phase locking value (PLV), its imaginary part (iPLV) and the correlation of the envelope (CorrEnv). Both intra-frequency and cross-frequency interactions (CFC) were estimated with the three connectivity estimators within the seven frequency bands (intra-frequency) and between pairs of bands (CFC), respectively. We demonstrated how different versions of functional connectivity graphs, single-layer (SL-FCG) and multi-layer (ML-FCG), can give us different views of the functional interactions across brain areas. Finally, we applied machine learning techniques with the main aim of building a reliable connectomic biomarker, analyzing both the SL-FCG and the ML-FCG in two different ways: as a whole unit, using a tensorial feature-extraction algorithm, and as single pair-wise coupling estimations. We concluded that the edge-weighted feature-selection strategy outperformed the tensorial treatment of the SL-FCG and ML-FCG. The highest classification performance was obtained with the centroid ROI representation and edge-weighted analysis of the SL-FCG, reaching 98% for the CorrEnv in α1:α2 and 94% for the iPLV in α2. Classification performance based on the multi-layer participation coefficient, a multiplexity index, reached 52% for the iPLV and 52% for the CorrEnv. The selected functional connections that build the multivariate connectomic biomarker in the edge-weighted scenario are located in the default-mode, fronto-parietal, and cingulo-opercular networks. Our analysis supports the notion of analyzing FCGs simultaneously across intra- and cross-frequency whole-brain interactions with various connectivity estimators in beamformed recordings.
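Two of the bivariate estimators named above, PLV and its imaginary part, can be computed from instantaneous phases obtained via the Hilbert transform. The following is a minimal sketch on synthetic signals, not the authors' pipeline; the sampling rate, frequency and coupling strength are illustrative assumptions.

```python
import numpy as np
from scipy.signal import hilbert

fs = 250
rng = np.random.default_rng(3)
t = np.arange(0, 10, 1 / fs)
phase = 2 * np.pi * 10 * t                                     # a 10 Hz alpha-band oscillation
x = np.sin(phase) + 0.5 * rng.standard_normal(t.size)
y = np.sin(phase + 0.8) + 0.5 * rng.standard_normal(t.size)   # same rhythm with a constant phase lag

phx = np.angle(hilbert(x))                                     # instantaneous phases
phy = np.angle(hilbert(y))
dphi = np.exp(1j * (phx - phy))                                # unit phasors of the phase difference

plv = np.abs(dphi.mean())                                      # phase locking value
iplv = np.abs(dphi.mean().imag)                                # imaginary part of the PLV
print(f"PLV = {plv:.2f}, iPLV = {iplv:.2f}")                   # both high: the lag is consistent and non-zero
```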
Theta-band Oscillations in the Middle Temporal Gyrus Reflect Novel Word Consolidation.
Bakker-Marshall, Iske; Takashima, Atsuko; Schoffelen, Jan-Mathijs; van Hell, Janet G; Janzen, Gabriele; McQueen, James M
2018-05-01
Like many other types of memory formation, novel word learning benefits from an offline consolidation period after the initial encoding phase. A previous EEG study has shown that retrieval of novel words elicited more word-like-induced electrophysiological brain activity in the theta band after consolidation [Bakker, I., Takashima, A., van Hell, J. G., Janzen, G., & McQueen, J. M. Changes in theta and beta oscillations as signatures of novel word consolidation. Journal of Cognitive Neuroscience, 27, 1286-1297, 2015]. This suggests that theta-band oscillations play a role in lexicalization, but it has not been demonstrated that this effect is directly caused by the formation of lexical representations. This study used magnetoencephalography to localize the theta consolidation effect to the left posterior middle temporal gyrus (pMTG), a region known to be involved in lexical storage. Both untrained novel words and words learned immediately before test elicited lower theta power during retrieval than existing words in this region. After a 24-hr consolidation period, the difference between novel and existing words decreased significantly, most strongly in the left pMTG. The magnitude of the decrease after consolidation correlated with an increase in behavioral competition effects between novel words and existing words with similar spelling, reflecting functional integration into the mental lexicon. These results thus provide new evidence that consolidation aids the development of lexical representations mediated by the left pMTG. Theta synchronization may enable lexical access by facilitating the simultaneous activation of distributed semantic, phonological, and orthographic representations that are bound together in the pMTG.
Falkner, Annegret L; Goldberg, Michael E; Krishna, B Suresh
2013-10-09
The lateral intraparietal area (LIP) in the macaque contains a priority-based representation of the visual scene. We previously showed that the mean spike rate of LIP neurons is strongly influenced by spatially wide-ranging surround suppression in a manner that effectively sharpens the priority map. Reducing response variability can also improve the precision of LIP's priority map. We show that when a monkey plans a visually guided delayed saccade with an intervening distractor, variability (measured by the Fano factor) decreases both for neurons representing the saccade goal and for neurons representing the broad spatial surround. The reduction in Fano factor is maximal for neurons representing the saccade goal and steadily decreases for neurons representing more distant locations. LIP Fano factor changes are behaviorally significant: increasing expected reward leads to lower variability for the LIP representation of both the target and distractor locations, and trials with shorter latency saccades are associated with lower Fano factors in neurons representing the surround. Thus, the LIP Fano factor reflects both stimulus and behavioral engagement. Quantitative modeling shows that the interaction between mean spike count and target-receptive field (RF) distance in the surround during the predistractor epoch is multiplicative: the Fano factor increases more steeply with mean spike count further away from the RF. A negative-binomial model for LIP spike counts captures these findings quantitatively, suggests underlying mechanisms based on trial-by-trial variations in mean spike rate or burst-firing patterns, and potentially provides a principled framework to account simultaneously for the previously observed unsystematic relationships between spike rate and variability in different brain areas.
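The variability measure discussed above, the Fano factor, is simply the spike-count variance divided by the mean across trials; counts near 1 indicate Poisson-like variability, larger values indicate overdispersion of the kind a negative-binomial model captures. The sketch below illustrates this on synthetic counts; the rate and dispersion parameters are invented for illustration, not taken from the recordings.

```python
import numpy as np

rng = np.random.default_rng(5)
n_trials, rate = 500, 12.0

poisson_counts = rng.poisson(rate, n_trials)
# Negative binomial with the same mean but extra trial-to-trial rate variability
r = 4.0                                    # dispersion parameter
p = r / (r + rate)                         # chosen so the mean equals `rate`
nb_counts = rng.negative_binomial(r, p, n_trials)

def fano(counts):
    """Spike-count variance divided by mean across trials."""
    return counts.var(ddof=1) / counts.mean()

print(f"Poisson Fano factor      ~ {fano(poisson_counts):.2f}")   # ~1
print(f"Neg-binomial Fano factor ~ {fano(nb_counts):.2f}")        # >1, overdispersed
```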
Brain Activity During the Encoding, Retention, and Retrieval of Stimulus Representations
de Zubicaray, Greig I.; McMahon, Katie; Wilson, Stephen J.; Muthiah, Santhi
2001-01-01
Studies of delayed nonmatching-to-sample (DNMS) performance following lesions of the monkey cortex have revealed a critical circuit of brain regions involved in forming memories and retaining and retrieving stimulus representations. Using event-related functional magnetic resonance imaging (fMRI), we measured brain activity in 10 healthy human participants during performance of a trial-unique visual DNMS task using novel barcode stimuli. The event-related design enabled the identification of activity during the different phases of the task (encoding, retention, and retrieval). Several brain regions identified by monkey studies as being important for successful DNMS performance showed selective activity during the different phases, including the mediodorsal thalamic nucleus (encoding), ventrolateral prefrontal cortex (retention), and perirhinal cortex (retrieval). Regions showing sustained activity within trials included the ventromedial and dorsal prefrontal cortices and occipital cortex. The present study shows the utility of investigating performance on tasks derived from animal models to assist in the identification of brain regions involved in human recognition memory. PMID:11584070
Generating Text from Functional Brain Images
Pereira, Francisco; Detre, Greg; Botvinick, Matthew
2011-01-01
Recent work has shown that it is possible to take brain images acquired during viewing of a scene and reconstruct an approximation of the scene from those images. Here we show that it is also possible to generate text about the mental content reflected in brain images. We began with images collected as participants read names of concrete items (e.g., “Apartment’’) while also seeing line drawings of the item named. We built a model of the mental semantic representation of concrete concepts from text data and learned to map aspects of such representation to patterns of activation in the corresponding brain image. In order to validate this mapping, without accessing information about the items viewed for left-out individual brain images, we were able to generate from each one a collection of semantically pertinent words (e.g., “door,” “window” for “Apartment’’). Furthermore, we show that the ability to generate such words allows us to perform a classification task and thus validate our method quantitatively. PMID:21927602
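The decoding idea described above can be caricatured with a linear map from semantic feature vectors to brain images, after which candidate words are scored for a held-out image by how well their predicted image matches it. This is only a sketch under that reading, not the authors' pipeline; the feature vectors and "images" are random stand-ins for corpus-derived semantics and fMRI patterns.

```python
import numpy as np

rng = np.random.default_rng(11)
n_words, n_feats, n_voxels = 60, 25, 500
vocab = [f"word_{i}" for i in range(n_words)]
semantics = rng.standard_normal((n_words, n_feats))              # assumed text-derived semantic features
true_map = rng.standard_normal((n_feats, n_voxels))
images = semantics @ true_map + 0.5 * rng.standard_normal((n_words, n_voxels))   # synthetic "brain images"

train = np.arange(n_words) >= 5                                  # hold out the first five words
test = ~train
lam = 1.0                                                        # ridge penalty
X, Y = semantics[train], images[train]
W = np.linalg.solve(X.T @ X + lam * np.eye(n_feats), X.T @ Y)    # ridge-regression map: features -> voxels

preds = semantics @ W                                            # predicted image for every word in the vocabulary
for i in np.flatnonzero(test):                                   # score candidates for each held-out image
    scores = preds @ images[i] / (np.linalg.norm(preds, axis=1) * np.linalg.norm(images[i]))
    best = np.argsort(scores)[::-1][:3]
    print(vocab[i], "->", [vocab[j] for j in best])               # the held-out word typically ranks near the top
```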
Myers, C E; Gluck, M A
1996-08-01
A previous model of hippocampal region function in classical conditioning is generalized to H. Eichenbaum, A. Fagan, P. Mathews, and N.J. Cohen's (1989) and H. Eichenbaum, A. Fagan, and N.J. Cohen's (1989) simultaneous odor discrimination studies in rats. The model assumes that the hippocampal region forms new stimulus representations that compress redundant information while differentiating predictive information; the piriform (olfactory) cortex meanwhile clusters similar and co-occurring odors. Hippocampal damage interrupts the ability to differentiate odor representations, while leaving piriform-mediated odor clustering unchecked. The result is a net tendency to overcompress in the lesioned model. Behavior in the model is very similar to that of the rats, including lesion deficits, facilitation of successively learned tasks, and transfer performance. The computational mechanisms underlying model performance are consistent with the qualitative interpretations suggested by Eichenbaum et al. to explain their empirical data.
A Combined Eulerian-Lagrangian Data Representation for Large-Scale Applications.
Sauer, Franz; Xie, Jinrong; Ma, Kwan-Liu
2017-10-01
The Eulerian and Lagrangian reference frames each provide a unique perspective when studying and visualizing results from scientific systems. As a result, many large-scale simulations produce data in both formats, and analysis tasks that simultaneously utilize information from both representations are becoming increasingly popular. However, due to their fundamentally different nature, drawing correlations between these data formats is a computationally difficult task, especially in a large-scale setting. In this work, we present a new data representation which combines both reference frames into a joint Eulerian-Lagrangian format. By reorganizing Lagrangian information according to the Eulerian simulation grid into a "unit cell" based approach, we can provide an efficient out-of-core means of sampling, querying, and operating with both representations simultaneously. We also extend this design to generate multi-resolution subsets of the full data to suit the viewer's needs and provide a fast flow-aware trajectory construction scheme. We demonstrate the effectiveness of our method using three large-scale real world scientific datasets and provide insight into the types of performance gains that can be achieved.
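To make the "unit cell" idea concrete, here is a minimal, hypothetical sketch (not the authors' implementation or data layout): Lagrangian particles are sorted by the index of the Eulerian grid cell that contains them, so that all particles of a cell sit contiguously and per-cell queries become cheap index lookups.

```python
import numpy as np

rng = np.random.default_rng(1)
n_particles, res = 100_000, 16                      # particle count and grid resolution

# Hypothetical Lagrangian data: positions in the unit cube plus one scalar attribute.
positions = rng.uniform(0.0, 1.0, size=(n_particles, 3))
attribute = rng.normal(size=n_particles)

# Map each particle to its Eulerian cell and flatten the 3-D cell index.
cell_index = np.clip(np.floor(positions * res).astype(int), 0, res - 1)
flat_index = np.ravel_multi_index(cell_index.T, (res, res, res))

# "Unit cell" layout: sort particles by cell so each cell's particles are contiguous.
order = np.argsort(flat_index)
sorted_attr = attribute[order]
cell_starts = np.searchsorted(flat_index[order], np.arange(res ** 3))

def particles_in_cell(i, j, k):
    """Attributes of all particles inside Eulerian cell (i, j, k)."""
    flat = np.ravel_multi_index((i, j, k), (res, res, res))
    end = cell_starts[flat + 1] if flat + 1 < res ** 3 else n_particles
    return sorted_attr[cell_starts[flat]:end]

print("mean attribute in cell (0, 0, 0):", particles_in_cell(0, 0, 0).mean())
```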
NASA Astrophysics Data System (ADS)
Schiff, Steven
Observability and controllability are essential concepts in the design of predictive observer models and feedback controllers of networked systems. We present a numerical and group-representational framework for quantifying the observability and controllability of nonlinear networks with explicit symmetries, which shows the connection between symmetries and nonlinear measures of observability and controllability. In addition to the topology of brain networks, we have advanced our ability to represent network nodes within the brain using conservation principles and more accurate biophysics that unifies the dynamics of spikes, seizures, and spreading depression. Lastly, we show how symmetries in controller design can be applied to infectious disease epidemics. NIH Grants 1R01EB014641, 1DP1HD086071.
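The abstract concerns nonlinear networks with explicit symmetries; as background only, the classical linear Kalman rank condition already shows how a network symmetry can render modes uncontrollable. The toy system below is an assumption of this sketch, not the authors' framework: one node of a four-node network is driven, and the wiring is invariant under swapping two undriven nodes, so the rank of the controllability matrix drops below the network size.

```python
import numpy as np

def controllability_matrix(A, B):
    """Kalman controllability matrix [B, AB, A^2 B, ...] for dx/dt = A x + B u."""
    blocks = [B]
    for _ in range(A.shape[0] - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

# Toy 4-node network; swapping nodes 1 and 2 leaves the wiring unchanged.
A = np.array([[-1.0,  0.5,  0.5,  0.0],
              [ 0.5, -1.0,  0.0,  0.5],
              [ 0.5,  0.0, -1.0,  0.5],
              [ 0.0,  0.5,  0.5, -1.0]])
B = np.array([[1.0], [0.0], [0.0], [0.0]])   # input drives node 0 only

C = controllability_matrix(A, B)
print("controllability rank:", np.linalg.matrix_rank(C), "of", A.shape[0])
# The rank is 3: the antisymmetric mode x1 - x2 cannot be reached from node 0.
```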
A Tri-network Model of Human Semantic Processing
Xu, Yangwen; He, Yong; Bi, Yanchao
2017-01-01
Humans process the meaning of the world via both verbal and nonverbal modalities. It has been established that widely distributed cortical regions are involved in semantic processing, yet the global wiring pattern of this brain system has not been considered in the current neurocognitive semantic models. We review evidence from the brain-network perspective, which shows that the semantic system is topologically segregated into three brain modules. Revisiting previous region-based evidence in light of these new network findings, we postulate that these three modules support multimodal experiential representation, language-supported representation, and semantic control. A tri-network neurocognitive model of semantic processing is proposed, which generates new hypotheses regarding the network basis of different types of semantic processes. PMID:28955266
O'Connell, Caitlin; Ho, Leon C; Murphy, Matthew C; Conner, Ian P; Wollstein, Gadi; Cham, Rakie; Chan, Kevin C
2016-11-09
Human visual performance has been observed to show superiority in localized regions of the visual field across many classes of stimuli. However, the underlying neural mechanisms remain unclear. This study aims to determine whether visual information processing in the human brain depends on the location of stimuli in the visual field and on the corresponding neuroarchitecture, using blood-oxygenation-level-dependent functional MRI (fMRI) and diffusion kurtosis MRI, respectively, in 15 healthy individuals at 3 T. In fMRI, visual stimulation of the lower hemifield showed stronger brain responses and larger brain activation volumes than the upper hemifield, indicative of the differential sensitivity of the human brain across the visual field. In diffusion kurtosis MRI, the brain regions mapping to the lower visual field showed higher mean kurtosis, but not higher fractional anisotropy or mean diffusivity, than those mapping to the upper visual field. These results suggest different distributions of microstructural organization across visual field representations in the brain. There was also a strong positive relationship between diffusion kurtosis and fMRI responses in the lower-field brain representations. In summary, this study points to structural and functional brain involvement in the asymmetry of visual field responses in humans, and is important to the neurophysiological and psychological understanding of human visual information processing.
Multilayer modeling and analysis of human brain networks
2017-01-01
Understanding how the human brain is structured, and how its architecture is related to function, is of paramount importance for a variety of applications, including but not limited to new ways to prevent, deal with, and cure brain diseases, such as Alzheimer’s or Parkinson’s, and psychiatric disorders, such as schizophrenia. The recent advances in structural and functional neuroimaging, together with the increasing attitude toward interdisciplinary approaches involving computer science, mathematics, and physics, are fostering interesting results from computational neuroscience that are quite often based on the analysis of complex network representations of the human brain. In recent years, this representation experienced a theoretical and computational revolution that is breaching neuroscience, allowing us to cope with the increasing complexity of the human brain across multiple scales and in multiple dimensions and to model structural and functional connectivity from new perspectives, often combined with each other. In this work, we will review the main achievements obtained from interdisciplinary research based on magnetic resonance imaging and establish, de facto, the birth of multilayer network analysis and modeling of the human brain. PMID:28327916
Analyzing pitch chroma and pitch height in the human brain.
Warren, Jason D; Uppenkamp, Stefan; Patterson, Roy D; Griffiths, Timothy D
2003-11-01
The perceptual pitch dimensions of chroma and height have distinct representations in the human brain: chroma is represented in cortical areas anterior to primary auditory cortex, whereas height is represented posterior to primary auditory cortex.
Embedded sparse representation of fMRI data via group-wise dictionary optimization
NASA Astrophysics Data System (ADS)
Zhu, Dajiang; Lin, Binbin; Faskowitz, Joshua; Ye, Jieping; Thompson, Paul M.
2016-03-01
Sparse learning enables dimension reduction and efficient modeling of high dimensional signals and images, but it may need to be tailored to best suit specific applications and datasets. Here we used sparse learning to efficiently represent functional magnetic resonance imaging (fMRI) data from the human brain. We propose a novel embedded sparse representation (ESR), to identify the most consistent dictionary atoms across different brain datasets via an iterative group-wise dictionary optimization procedure. In this framework, we introduced additional criteria to make the learned dictionary atoms more consistent across different subjects. We successfully identified four common dictionary atoms that follow the external task stimuli with very high accuracy. After projecting the corresponding coefficient vectors back into the 3-D brain volume space, the spatial patterns are also consistent with traditional fMRI analysis results. Our framework reveals common features of brain activation in a population, as a new, efficient fMRI analysis method.
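A rough sketch of the general recipe, under strong simplifying assumptions (random stand-in data, per-subject dictionaries matched by correlation rather than the paper's iterative group-wise optimization): learn a sparse temporal dictionary per subject and ask which atoms of a reference subject have close matches in every other subject.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)

# Stand-in fMRI data: (time points x voxels), one matrix per subject.
subjects = [rng.standard_normal((200, 1000)) for _ in range(3)]

# Learn a sparse dictionary of temporal atoms for each subject separately.
dictionaries = []
for data in subjects:
    learner = MiniBatchDictionaryLearning(n_components=20, alpha=1.0,
                                          batch_size=50, random_state=0)
    learner.fit(data.T)                        # samples are voxel time series
    dictionaries.append(learner.components_)   # (20 atoms, 200 time points)

# For each atom of the reference subject, find its best match (|correlation|)
# in every other subject; consistently matched atoms are candidate common atoms.
reference = dictionaries[0]
for k, atom in enumerate(reference):
    worst = min(np.abs(np.corrcoef(atom, other)[0, 1:]).max()
                for other in dictionaries[1:])
    print(f"atom {k:2d}: worst-case match across subjects |r| = {worst:.2f}")
```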
V4 activity predicts the strength of visual short-term memory representations.
Sligte, Ilja G; Scholte, H Steven; Lamme, Victor A F
2009-06-10
Recent studies have shown the existence of a form of visual memory that lies intermediate between iconic memory and visual short-term memory (VSTM), in terms of both capacity (up to 15 items) and the duration of the memory trace (up to 4 s). Because new visual objects readily overwrite this intermediate visual store, we believe that it reflects a weak form of VSTM with high capacity that exists alongside a strong but capacity-limited form of VSTM. In the present study, we isolated brain activity related to weak and strong VSTM representations using functional magnetic resonance imaging. We found that activity in visual cortical area V4 predicted the strength of VSTM representations; activity was low when there was no VSTM, medium when there was a weak VSTM representation regardless of whether this weak representation was available for report or not, and high when there was a strong VSTM representation. Altogether, this study suggests that the high capacity yet weak VSTM store is represented in visual parts of the brain. Allegedly, only some of these VSTM traces are amplified by parietal and frontal regions and as a consequence reside in traditional or strong VSTM. The additional weak VSTM representations remain available for conscious access and report when attention is redirected to them yet are overwritten as soon as new visual stimuli hit the eyes.
Inferring brain-computational mechanisms with models of activity measurements
Diedrichsen, Jörn
2016-01-01
High-resolution functional imaging is providing increasingly rich measurements of brain activity in animals and humans. A major challenge is to leverage such data to gain insight into the brain's computational mechanisms. The first step is to define candidate brain-computational models (BCMs) that can perform the behavioural task in question. We would then like to infer which of the candidate BCMs best accounts for measured brain-activity data. Here we describe a method that complements each BCM by a measurement model (MM), which simulates the way the brain-activity measurements reflect neuronal activity (e.g. local averaging in functional magnetic resonance imaging (fMRI) voxels or sparse sampling in array recordings). The resulting generative model (BCM-MM) produces simulated measurements. To avoid having to fit the MM to predict each individual measurement channel of the brain-activity data, we compare the measured and predicted data at the level of summary statistics. We describe a novel particular implementation of this approach, called probabilistic representational similarity analysis (pRSA) with MMs, which uses representational dissimilarity matrices (RDMs) as the summary statistics. We validate this method by simulations of fMRI measurements (locally averaging voxels) based on a deep convolutional neural network for visual object recognition. Results indicate that the way the measurements sample the activity patterns strongly affects the apparent representational dissimilarities. However, modelling of the measurement process can account for these effects, and different BCMs remain distinguishable even under substantial noise. The pRSA method enables us to perform Bayesian inference on the set of BCMs and to recognize the data-generating model in each case. This article is part of the themed issue ‘Interpreting BOLD: a dialogue between cognitive and cellular neuroscience’. PMID:27574316
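The summary-statistic step of this approach, comparing a data RDM against candidate-model RDMs, can be sketched with toy data as follows; the probabilistic inference and the explicit measurement model of pRSA are omitted, and all matrices here are synthetic placeholders.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

def rdm(patterns):
    """Condensed representational dissimilarity matrix: 1 - correlation per pair."""
    return pdist(patterns, metric="correlation")

# Toy setup: 20 conditions x 500 channels, one "true" model and one unrelated model.
n_cond, n_chan = 20, 500
true_model = rng.standard_normal((n_cond, n_chan))
other_model = rng.standard_normal((n_cond, n_chan))
measured = true_model + 0.8 * rng.standard_normal((n_cond, n_chan))  # noisy data

data_rdm = rdm(measured)
for name, model in [("true model", true_model), ("other model", other_model)]:
    rho, _ = spearmanr(data_rdm, rdm(model))
    print(f"{name}: Spearman correlation with the data RDM = {rho:.2f}")
```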
Dynamic Circuitry for Updating Spatial Representations: III. From Neurons to Behavior
Berman, Rebecca A.; Heiser, Laura M.; Dunn, Catherine A.; Saunders, Richard C.; Colby, Carol L.
2008-01-01
Each time the eyes move, the visual system must adjust internal representations to account for the accompanying shift in the retinal image. In the lateral intraparietal cortex (LIP), neurons update the spatial representations of salient stimuli when the eyes move. In previous experiments, we found that split-brain monkeys were impaired on double-step saccade sequences that required updating across visual hemifields, as compared to within hemifield (Berman et al. 2005; Heiser et al. 2005). Here we describe a subsequent experiment to characterize the relationship between behavioral performance and neural activity in LIP in the split-brain monkey. We recorded from single LIP neurons while split-brain and intact monkeys performed two conditions of the double-step saccade task: one required across-hemifield updating and the other within-hemifield updating. We found that, despite extensive experience with the task, the split-brain monkeys were significantly more accurate for within-hemifield as compared to across-hemifield sequences. In parallel, we found that population activity in LIP of the split-brain monkeys was significantly stronger for within-hemifield as compared to across-hemifield conditions of the double-step task. In contrast, in the normal monkey, both the average behavioral performance and population activity showed no bias toward the within-hemifield condition. Finally, we found that the difference between within-hemifield and across-hemifield performance in the split-brain monkeys was reflected at the level of single neuron activity in LIP. These findings indicate that remapping activity in area LIP is present in the split-brain monkey for the double-step task and co-varies with spatial behavior on within-hemifield compared to across-hemifield sequences. PMID:17493922
Buchbinder, Mara
2014-01-01
The social work of brain images has taken center stage in recent theorizing of the intersections between neuroscience and society. However, neuroimaging is only one of the discursive modes through which public representations of neurobiology travel. This article adopts an expanded view toward the social implications of neuroscientific thinking to examine how neural imaginaries are constructed in the absence of visual evidence. Drawing on ethnographic fieldwork conducted over 18 months (2008–2009) in a United States multidisciplinary pediatric pain clinic, I examine the pragmatic clinical work undertaken to represent ambiguous symptoms in neurobiological form. Focusing on one physician, I illustrate how, by rhetorically mapping the brain as a therapeutic tool, she engaged in a distinctive form of representation that I call neural imagining. In shifting my focus away from the purely material dimensions of brain images, I juxtapose the cultural work of brain scanning technologies with clinical neural imaginaries in which the teenage brain becomes a space of possibility, not to map things as they are, but rather, things as we hope they might be. These neural imaginaries rely upon a distinctive clinical epistemology that privileges the creative work of the imagination over visualization technologies in revealing the truths of the body. By creating a therapeutic space for adolescents to exercise their imaginative faculties and a discursive template for doing so, neural imagining relocates adolescents’ agency with respect to epistemologies of bodily knowledge and the role of visualization practices therein. In doing so, it provides a more hopeful alternative to the dominant popular and scientific representations of the teenage brain that view it primarily through the lens of pathology. PMID:24780561
Naaz, Farah; Chariker, Julia H.; Pani, John R.
2013-01-01
A study was conducted to test the hypothesis that instruction with graphically integrated representations of whole and sectional neuroanatomy is especially effective for learning to recognize neural structures in sectional imagery (such as MRI images). Neuroanatomy was taught to two groups of participants using computer graphical models of the human brain. Both groups learned whole anatomy first with a three-dimensional model of the brain. One group then learned sectional anatomy using two-dimensional sectional representations, with the expectation that there would be transfer of learning from whole to sectional anatomy. The second group learned sectional anatomy by moving a virtual cutting plane through the three-dimensional model. In tests of long-term retention of sectional neuroanatomy, the group with graphically integrated representation recognized more neural structures that were known to be challenging to learn. This study demonstrates the use of graphical representation to facilitate a more elaborated (deeper) understanding of complex spatial relations. PMID:24563579
A Brain-wide Circuit Model of Heat-Evoked Swimming Behavior in Larval Zebrafish.
Haesemeyer, Martin; Robson, Drew N; Li, Jennifer M; Schier, Alexander F; Engert, Florian
2018-05-16
Thermosensation provides crucial information, but how temperature representation is transformed from sensation to behavior is poorly understood. Here, we report a preparation that allows control of heat delivery to zebrafish larvae while monitoring motor output and imaging whole-brain calcium signals, thereby uncovering algorithmic and computational rules that couple dynamics of heat modulation, neural activity and swimming behavior. This approach identifies a critical step in the transformation of temperature representation between the sensory trigeminal ganglia and the hindbrain: A simple sustained trigeminal stimulus representation is transformed into a representation of absolute temperature as well as temperature changes in the hindbrain that explains the observed motor output. An activity constrained dynamic circuit model captures the most prominent aspects of these sensori-motor transformations and predicts both behavior and neural activity in response to novel heat stimuli. These findings provide the first algorithmic description of heat processing from sensory input to behavioral output. Copyright © 2018 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, X; Sun, T; Yin, Y
Purpose: To study the dosimetric impact of intensity-modulated radiotherapy (IMRT), hybrid intensity-modulated radiotherapy (h-IMRT) and volumetric modulated arc therapy (VMAT) for whole-brain radiotherapy (WBRT) with simultaneous integrated boost in patients with multiple brain metastases. Methods: Ten patients with multiple brain metastases were included in this analysis. The prescribed dose was 45 Gy to the whole brain (PTVWBRT) and 55 Gy to individual brain metastases (PTVboost), delivered simultaneously in 25 fractions. Three treatment techniques were designed: a 7-field equally spaced IMRT plan, a hybrid IMRT plan, and a VMAT plan with two 358° arcs. In the hybrid IMRT plan, two fields (90° and 270°) were planned to the whole brain and used as a base dose plan; a 5-field IMRT plan was then optimized on top of this two-field plan. The dose distribution in the target, the dose to the organs at risk, and the total MU of the three techniques were compared. Results: No statistically significant differences were observed among the three techniques in target dose, conformity, or homogeneity in the PTV. For the maximum dose to the bilateral lenses and the mean dose to the bilateral eyes, the IMRT and h-IMRT plans showed the highest and lowest values, respectively. No statistically significant differences were observed in the doses to the optic nerves and brainstem. For the monitor units, the IMRT and VMAT plans showed the highest and lowest values, respectively. Conclusion: For WBRT with simultaneous integrated boost in patients with multiple brain metastases, hybrid IMRT could reduce the doses to the lenses and eyes and is feasible for patients with brain metastases.
Transformed Neural Pattern Reinstatement during Episodic Memory Retrieval.
Xiao, Xiaoqian; Dong, Qi; Gao, Jiahong; Men, Weiwei; Poldrack, Russell A; Xue, Gui
2017-03-15
Contemporary models of episodic memory posit that remembering involves the reenactment of encoding processes. Although encoding-retrieval similarity has been consistently reported and linked to memory success, the nature of neural pattern reinstatement is poorly understood. Using high-resolution fMRI on human subjects, we obtained clear evidence for item-specific pattern reinstatement in the frontoparietal cortex, even when the encoding-retrieval pairs shared no perceptual similarity. No item-specific pattern reinstatement was found in the ventral visual cortex. Importantly, the brain regions and voxels carrying item-specific representation differed significantly between encoding and retrieval, and the item specificity for encoding-retrieval similarity was smaller than that for encoding or retrieval, suggesting that the nature of the representations differs between encoding and retrieval. Moreover, cross-region representational similarity analysis suggests that the encoded representation in the ventral visual cortex was reinstated in the frontoparietal cortex during retrieval. Together, these results suggest that, in addition to reinstatement of the originally encoded pattern in the brain regions that perform encoding processes, retrieval may also involve the reinstatement of a transformed representation of the encoded information. These results emphasize the constructive nature of memory retrieval that helps to serve important adaptive functions. SIGNIFICANCE STATEMENT Episodic memory enables humans to vividly reexperience past events, yet how this is achieved at the neural level is barely understood. A long-standing hypothesis posits that memory retrieval involves the faithful reinstatement of encoding-related activity. We tested this hypothesis by comparing the neural representations during encoding and retrieval. We found strong pattern reinstatement in the frontoparietal cortex, but not in the ventral visual cortex, that represents visual details. Critically, even within the same brain regions, the nature of representation during retrieval was qualitatively different from that during encoding. These results suggest that memory retrieval is not a faithful replay of the past event but rather involves additional constructive processes to serve adaptive functions. Copyright © 2017 the authors 0270-6474/17/372986-13$15.00/0.
Brain tumor segmentation from multimodal magnetic resonance images via sparse representation.
Li, Yuhong; Jia, Fucang; Qin, Jing
2016-10-01
Accurately segmenting and quantifying brain gliomas from magnetic resonance (MR) images remains a challenging task because of the large spatial and structural variability among brain tumors. To develop a fully automatic and accurate brain tumor segmentation algorithm, we present a probabilistic model of multimodal MR brain tumor segmentation. This model combines sparse representation and the Markov random field (MRF) to solve the spatial and structural variability problem. We formulate the tumor segmentation problem as a multi-classification task by labeling each voxel as the maximum posterior probability. We estimate the maximum a posteriori (MAP) probability by introducing the sparse representation into a likelihood probability and a MRF into the prior probability. Considering the MAP as an NP-hard problem, we convert the maximum posterior probability estimation into a minimum energy optimization problem and employ graph cuts to find the solution to the MAP estimation. Our method is evaluated using the Brain Tumor Segmentation Challenge 2013 database (BRATS 2013) and obtained Dice coefficient metric values of 0.85, 0.75, and 0.69 on the high-grade Challenge data set, 0.73, 0.56, and 0.54 on the high-grade Challenge LeaderBoard data set, and 0.84, 0.54, and 0.57 on the low-grade Challenge data set for the complete, core, and enhancing regions. The experimental results show that the proposed algorithm is valid and ranks 2nd compared with the state-of-the-art tumor segmentation algorithms in the MICCAI BRATS 2013 challenge. Copyright © 2016 Elsevier B.V. All rights reserved.
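A minimal sketch of the likelihood side of such a pipeline, under the assumption of pre-learned per-class dictionaries and invented toy patches: each voxel's multimodal patch is sparse-coded against every class dictionary, the reconstruction error defines a (log-)likelihood, and the voxel is labeled by the best-scoring class. The MRF prior and the graph-cut optimization described in the abstract are deliberately left out.

```python
import numpy as np
from sklearn.decomposition import SparseCoder

rng = np.random.default_rng(0)

# Hypothetical per-class dictionaries over multimodal patches (e.g. 3x3x3 voxels
# from three MR modalities); in a real pipeline these would be learned from atlases.
patch_dim, n_atoms = 27 * 3, 40
classes = ["background", "edema", "tumor_core"]
dictionaries = {c: rng.standard_normal((n_atoms, patch_dim)) for c in classes}

def class_score(patches, dictionary):
    """Higher when the class dictionary reconstructs the patches well:
    -||x - D a||^2 with sparse codes a from an L1-penalised fit."""
    coder = SparseCoder(dictionary=dictionary, transform_algorithm="lasso_lars",
                        transform_alpha=0.1)
    codes = coder.transform(patches)
    residual = patches - codes @ dictionary
    return -np.sum(residual ** 2, axis=1)

patches = rng.standard_normal((10, patch_dim))            # patches of 10 voxels
scores = np.stack([class_score(patches, dictionaries[c]) for c in classes])
labels = np.array(classes)[scores.argmax(axis=0)]         # voxel-wise argmax labeling
print(labels)
```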
NASA Astrophysics Data System (ADS)
Zienkiewicz, Aleksandra; Huotari, Niko; Raitamaa, Lauri; Raatikainen, Ville; Ferdinando, Hany; Vihriälä, Erkki; Korhonen, Vesa; Myllylä, Teemu; Kiviniemi, Vesa
2017-03-01
The lymph system is responsible for cleaning the tissues of metabolic waste products, soluble proteins and other harmful fluids etc. Lymph flow in the body is driven by body movements and muscle contractions. Moreover, it is indirectly dependent on the cardiovascular system, where the heart beat and blood pressure maintain force of pressure in lymphatic channels. Over the last few years, studies revealed that the brain contains the so-called glymphatic system, which is the counterpart of the systemic lymphatic system in the brain. Similarly, the flow in the glymphatic system is assumed to be mostly driven by physiological pulsations such as cardiovascular pulses. Thus, continuous measurement of blood pressure and heart function simultaneously with functional brain imaging is of great interest, particularly in studies of the glymphatic system. We present our MRI compatible optics based sensing system for continuous blood pressure measurement and show our current results on the effects of blood pressure variations on cerebral brain dynamics, with a focus on the glymphatic system. Blood pressure was measured simultaneously with near-infrared spectroscopy (NIRS) combined with an ultrafast functional brain imaging (fMRI) sequence magnetic resonance encephalography (MREG, 3D brain 10 Hz sampling rate).
Zhang, Shu; Zhao, Yu; Jiang, Xi; Shen, Dinggang; Liu, Tianming
2018-06-01
In the brain mapping field, there has been significant interest in the representation of structural/functional profiles to establish structural/functional landmark correspondences across individuals and populations. For example, from the structural perspective, our previous studies have identified hundreds of consistent DICCCOL (dense individualized and common connectivity-based cortical landmarks) landmarks across individuals and populations, each of which possesses consistent DTI-derived fiber connection patterns. From the functional perspective, a large collection of well-characterized HAFNI (holistic atlases of functional networks and interactions) networks based on sparse representation of whole-brain fMRI signals has been identified in our prior studies. However, due to the remarkable variability of structural and functional architectures in the human brain, it is challenging for earlier studies to jointly represent the connectome-scale structural and functional profiles for establishing a common cortical architecture which can comprehensively encode both structural and functional characteristics across individuals. To address this challenge, we propose an effective computational framework to jointly represent the structural and functional profiles for identification of consistent and common cortical landmarks with both structural and functional correspondences across different brains based on DTI and fMRI data. Experimental results demonstrate that 55 structurally and functionally common cortical landmarks can be successfully identified.
Cichy, Radoslaw Martin; Pantazis, Dimitrios; Oliva, Aude
2016-01-01
Every human cognitive function, such as visual object recognition, is realized in a complex spatio-temporal activity pattern in the brain. Current brain imaging techniques in isolation cannot resolve the brain's spatio-temporal dynamics, because they provide either high spatial or temporal resolution but not both. To overcome this limitation, we developed an integration approach that uses representational similarities to combine measurements of magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) to yield a spatially and temporally integrated characterization of neuronal activation. Applying this approach to 2 independent MEG–fMRI data sets, we observed that neural activity first emerged in the occipital pole at 50–80 ms, before spreading rapidly and progressively in the anterior direction along the ventral and dorsal visual streams. Further region-of-interest analyses established that dorsal and ventral regions showed MEG–fMRI correspondence in representations later than early visual cortex. Together, these results provide a novel and comprehensive, spatio-temporally resolved view of the rapid neural dynamics during the first few hundred milliseconds of object vision. They further demonstrate the feasibility of spatially unbiased representational similarity-based fusion of MEG and fMRI, promising new insights into how the brain computes complex cognitive functions. PMID:27235099
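The core fusion computation, correlating time-resolved MEG representational dissimilarity matrices with the RDM of an fMRI region, reduces to a few lines; the sketch below uses random placeholder data and omits the noise-ceiling estimation and statistical testing used in the actual study.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_cond = 30

# Placeholder inputs: MEG sensor patterns per condition at each time point,
# and one fMRI region's voxel pattern per condition.
meg = rng.standard_normal((100, n_cond, 306))    # 100 time points, 306 sensors
fmri_roi = rng.standard_normal((n_cond, 500))    # 500 voxels in the region

fmri_rdm = pdist(fmri_roi, metric="correlation")
fusion = np.array([spearmanr(pdist(meg[t], metric="correlation"), fmri_rdm)[0]
                   for t in range(meg.shape[0])])

# fusion[t] tracks how closely the MEG representational geometry at time t
# matches the region's fMRI geometry; its peak marks when that geometry emerges.
print("peak MEG-fMRI correspondence at time index", int(fusion.argmax()))
```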
Glezer, Laurie S; Kim, Judy; Rule, Josh; Jiang, Xiong; Riesenhuber, Maximilian
2015-03-25
The nature of orthographic representations in the human brain is still subject of much debate. Recent reports have claimed that the visual word form area (VWFA) in left occipitotemporal cortex contains an orthographic lexicon based on neuronal representations highly selective for individual written real words (RWs). This theory predicts that learning novel words should selectively increase neural specificity for these words in the VWFA. We trained subjects to recognize novel pseudowords (PWs) and used fMRI rapid adaptation to compare neural selectivity with RWs, untrained PWs (UTPWs), and trained PWs (TPWs). Before training, PWs elicited broadly tuned responses, whereas responses to RWs indicated tight tuning. After training, TPW responses resembled those of RWs, whereas UTPWs continued to show broad tuning. This change in selectivity was specific to the VWFA. Therefore, word learning appears to selectively increase neuronal specificity for the new words in the VWFA, thereby adding these words to the brain's visual dictionary. Copyright © 2015 the authors 0270-6474/15/354965-08$15.00/0.
Ren, Yudan; Fang, Jun; Lv, Jinglei; Hu, Xintao; Guo, Cong Christine; Guo, Lei; Xu, Jiansong; Potenza, Marc N; Liu, Tianming
2017-08-01
Assessing functional brain activation patterns in neuropsychiatric disorders such as cocaine dependence (CD) or pathological gambling (PG) under naturalistic stimuli has received rising interest in recent years. In this paper, we propose and apply a novel group-wise sparse representation framework to assess differences in neural responses to naturalistic stimuli across multiple groups of participants (healthy control, cocaine dependence, pathological gambling). Specifically, natural stimulus fMRI (N-fMRI) signals from all three groups of subjects are aggregated into a big data matrix, which is then decomposed into a common signal basis dictionary and associated weight coefficient matrices via an effective online dictionary learning and sparse coding method. The coefficient matrices associated with each common dictionary atom are statistically assessed for each group separately. With the inter-group comparisons based on the group-wise correspondence established by the common dictionary, our experimental results demonstrated that the group-wise sparse coding and representation strategy can effectively and specifically detect brain networks/regions affected by different pathological conditions of the brain under naturalistic stimuli.
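A compact sketch of the group-wise idea, with synthetic stand-in signals: stack voxel time series from all groups into one matrix, learn a single shared dictionary so that atoms correspond across groups, then compare the groups' loadings on a given atom. The online learning details and the statistical maps of the actual framework are not reproduced here.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_time = 200

# Stand-in natural-stimulus fMRI signals (voxels x time) for the three groups.
groups = {"control": rng.standard_normal((1500, n_time)),
          "cocaine": rng.standard_normal((1200, n_time)),
          "gambling": rng.standard_normal((1300, n_time))}
big_matrix = np.vstack(list(groups.values()))

# One common temporal dictionary establishes atom-wise correspondence across groups.
learner = MiniBatchDictionaryLearning(n_components=20, alpha=1.0,
                                      batch_size=100, random_state=0)
codes = learner.fit_transform(big_matrix)            # (all voxels) x (atoms)

# Split the sparse loadings back into groups and compare one atom between groups.
sizes = [v.shape[0] for v in groups.values()]
loadings = dict(zip(groups, np.split(codes, np.cumsum(sizes)[:-1])))
t, p = ttest_ind(loadings["control"][:, 0], loadings["cocaine"][:, 0])
print(f"atom 0, control vs cocaine loadings: t = {t:.2f}, p = {p:.3f}")
```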
ERIC Educational Resources Information Center
Abrahamson, Dor
2006-01-01
This snapshot introduces a computer-based representation and activity that enables students to simultaneously "see" the combinatorial space of a stochastic device (e.g., dice, spinner, coins) and its outcome distribution. The author argues that the "ambiguous" representation fosters student insight into probability. [Snapshots are subject to peer…
ERIC Educational Resources Information Center
Van Lancker Sidtis, Diana
2007-01-01
Neurolinguistic research has been engaged in evaluating models of language using measures from brain structure and function, and/or in investigating brain structure and function with respect to language representation using proposed models of language. While the aphasiological strategy, which classifies aphasias based on performance modality and a…
ERIC Educational Resources Information Center
Margherio, Cara; Horner-Devine, M. Claire; Mizumori, Sheri J. Y.; Yen, Joyce W.
2016-01-01
BRAINS: Broadening the Representation of Academic Investigators in NeuroScience is a National Institutes of Health-funded, national program that addresses challenges to the persistence of diverse early-career neuroscientists. In doing so, BRAINS aims to advance diversity in neuroscience by increasing career advancement and retention of post-PhD,…
Chemical reactivation of resin-embedded pHuji adds red for simultaneous two-color imaging with EGFP
Guo, Wenyan; Liu, Xiuli; Liu, Yurong; Gang, Yadong; He, Xiaobin; Jia, Yao; Yin, Fangfang; Li, Pei; Huang, Fei; Zhou, Hongfu; Wang, Xiaojun; Gong, Hui; Luo, Qingming; Xu, Fuqiang; Zeng, Shaoqun
2017-01-01
The pH-sensitive fluorescent proteins enabling chemical reactivation in resin are useful tools for fluorescence microimaging. EGFP or EYFP is good for such applications. For simultaneous two-color imaging, a suitable red fluorescent protein is urgently needed. Here a pH-sensitive red fluorescent protein, pHuji, is selected and verified to remain pH-sensitive in HM20 resin. We observe 183% fluorescence intensity of pHuji in resin-embedded mouse brain and a 29.08-fold fluorescence intensity of reactivated pHuji compared to the quenched state. pHuji and EGFP can be quenched and chemically reactivated simultaneously in resin, thus enabling simultaneous two-color micro-optical sectioning tomography of resin-embedded mouse brain. This method may greatly facilitate the visualization of neuronal morphology and neural circuits to promote understanding of the structure and function of the brain. PMID:28717566
Decoding representations of face identity that are tolerant to rotation.
Anzellotti, Stefano; Fairhall, Scott L; Caramazza, Alfonso
2014-08-01
In order to recognize the identity of a face we need to distinguish very similar images (specificity) while also generalizing identity information across image transformations such as changes in orientation (tolerance). Recent studies investigated the representation of individual faces in the brain, but it remains unclear whether the human brain regions that were found encode representations of individual images (specificity) or face identity (specificity plus tolerance). In the present article, we use multivoxel pattern analysis in the human ventral stream to investigate the representation of face identity across rotations in depth, a kind of transformation in which no point in the face image remains unchanged. The results reveal representations of face identity that are tolerant to rotations in depth in occipitotemporal cortex and in anterior temporal cortex, even when the similarity between mirror symmetrical views cannot be used to achieve tolerance. Converging evidence from different analysis techniques shows that the right anterior temporal lobe encodes a comparable amount of identity information to occipitotemporal regions, but this information is encoded over a smaller extent of cortex. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
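The logic of testing tolerance with multivoxel patterns, training a classifier on one orientation and testing it on another, can be illustrated with a toy simulation (synthetic patterns, an off-the-shelf linear SVM, none of it from the study itself):

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_voxels, n_trials = 200, 40

id_a, id_b = rng.standard_normal(n_voxels), rng.standard_normal(n_voxels)
view1, view2 = rng.standard_normal(n_voxels), rng.standard_normal(n_voxels)

def patterns(identity, view):
    """Trial patterns = identity component + view component + noise."""
    return identity + view + 0.8 * rng.standard_normal((n_trials, n_voxels))

X_train = np.vstack([patterns(id_a, view1), patterns(id_b, view1)])  # one orientation
X_test = np.vstack([patterns(id_a, view2), patterns(id_b, view2)])   # the other
y = np.repeat([0, 1], n_trials)

clf = LinearSVC(C=1.0, max_iter=10000).fit(X_train, y)
# Above-chance accuracy indicates identity information that survives the view change.
print("cross-view identity decoding accuracy:", clf.score(X_test, y))
```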
Wang, Li; Shi, Feng; Gao, Yaozong; Li, Gang; Gilmore, John H.; Lin, Weili; Shen, Dinggang
2014-01-01
Segmentation of infant brain MR images is challenging due to poor spatial resolution, severe partial volume effect, and the ongoing maturation and myelination process. During the first year of life, the brain image contrast between white and gray matters undergoes dramatic changes. In particular, the image contrast inverts around 6–8 months of age, where the white and gray matter tissues are isointense in T1 and T2 weighted images and hence exhibit extremely low tissue contrast, posing significant challenges for automated segmentation. In this paper, we propose a general framework that adopts sparse representation to fuse the multi-modality image information and further incorporate the anatomical constraints for brain tissue segmentation. Specifically, we first derive an initial segmentation from a library of aligned images with ground-truth segmentations by using sparse representation in a patch-based fashion for the multi-modality T1, T2 and FA images. The segmentation result is further iteratively refined by integration of the anatomical constraint. The proposed method was evaluated on 22 infant brain MR images acquired at around 6 months of age by using a leave-one-out cross-validation, as well as on 10 additional unseen testing subjects. Our method achieved a high accuracy for the Dice ratios that measure the volume overlap between automated and manual segmentations, i.e., 0.889±0.008 for white matter and 0.870±0.006 for gray matter. PMID:24291615
How we may think: Imaging and writing technologies across the history of the neurosciences.
Borck, Cornelius
2016-06-01
In the neurosciences, two alternative regimes of visualization can be differentiated: anatomical preparations for morphological images and physiological studies for functional representations. Adapting a distinction proposed by Peter Galison, this duality of visualization regimes is analyzed here as the contrast between an imaging and a writing approach: the imaging approach, focusing on mimetic representations, preserving material and spatial relations, and the writing approach as used in physiological studies, retaining functional relations. After a dominance of morphological images gathering iconic representations of brains and architectural brain theories, the advent of electroencephalography advanced writing approaches with their indexical signs. Addressing the brain allegedly at its mode of operation, electroencephalography was conceived as recording the brain's intrinsic language, extending the writing approach to include symbolic signs. The availability of functional neuroimaging signaled an opportunity to overcome the duality of imaging and writing, but revived initially a phrenological conflation of form and function, suppressing the writing approach in relation to imaging. More sophisticated visualization modes, however, converted this reductionism to the ontological productivity of social neuroscience and recuperated the theorizing from the writing approach. In light of the ongoing instrumental mediations between brains, data and theories, the question of how we may think, once proposed by Vannevar Bush as a prospect of enhanced human-machine interaction, has become the state of affairs in the entanglements of instruments and organic worlds. Copyright © 2016 Elsevier Ltd. All rights reserved.
Mather, Mara; Clewett, David; Sakaki, Michiko; Harley, Carolyn W.
2018-01-01
Existing brain-based emotion-cognition theories fail to explain arousal’s ability to both enhance and impair cognitive processing. In the Glutamate Amplifies Noradrenergic Effects (GANE) model outlined in this paper, we propose that arousal-induced norepinephrine (NE) released from the locus coeruleus (LC) biases perception and memory in favor of salient, high priority representations at the expense of lower priority representations. This increase in gain under phasic arousal occurs via synaptic self-regulation of NE based on glutamate levels. When the LC is phasically active, elevated levels of glutamate at the site of prioritized representations increase local NE release, creating “NE hot spots.” At these local hot spots, glutamate and NE release are mutually enhancing and amplify activation of prioritized representations. This excitatory effect contrasts with widespread NE suppression of weaker representations via lateral and auto-inhibitory processes. On a broader scale, hot spots increase oscillatory synchronization across neural ensembles transmitting high priority information. Furthermore, key brain structures that detect or pre-determine stimulus priority interact with phasic NE release to preferentially route such information through large-scale functional brain networks. A surge of NE before, during or after encoding enhances synaptic plasticity at sites of high glutamate activity, triggering local protein synthesis processes that enhance selective memory consolidation. Together, these noradrenergic mechanisms increase perceptual and memory selectivity under arousal. Beyond explaining discrepancies in the emotion-cognition literature, GANE reconciles and extends previous influential theories of LC neuromodulation by highlighting how NE can produce such different outcomes in processing based on priority. PMID:26126507
Emerging category representation in the visual forebrain hierarchy of pigeons (Columba livia).
Azizi, Amir Hossein; Pusch, Roland; Koenen, Charlotte; Klatt, Sebastian; Bröcker, Franziska; Thiele, Samuel; Kellermann, Janosch; Güntürkün, Onur; Cheng, Sen
2018-06-06
Recognizing and categorizing visual stimuli are cognitive functions vital for survival, and an important feature of visual systems in primates as well as in birds. Visual stimuli are processed along the ventral visual pathway. At every stage in the hierarchy, neurons respond selectively to more complex features, transforming the population representation of the stimuli. It is therefore easier to read-out category information in higher visual areas. While explicit category representations have been observed in the primate brain, less is known on equivalent processes in the avian brain. Even though their brain anatomies are radically different, it has been hypothesized that visual object representations are comparable across mammals and birds. In the present study, we investigated category representations in the pigeon visual forebrain using recordings from single cells responding to photographs of real-world objects. Using a linear classifier, we found that the population activity in the visual associative area mesopallium ventrolaterale (MVL) distinguishes between animate and inanimate objects, although this distinction is not required by the task. By contrast, a population of cells in the entopallium, a region that is lower in the hierarchy of visual areas and that is related to the primate extrastriate cortex, lacked this information. A model that pools responses of simple cells, which function as edge detectors, can account for the animate vs. inanimate categorization in the MVL, but performance in the model is based on different features than in MVL. Therefore, processing in MVL cells is very likely more abstract than simple computations on the output of edge detectors. Copyright © 2018. Published by Elsevier B.V.
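Reading out a category from population activity with a linear classifier, as done here for MVL, can be sketched with simulated firing rates; the numbers and the weak planted category signal below are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_neurons, n_stimuli = 100, 60

# Simulated pseudo-population responses to 60 photographs (half animate, half not),
# with a weak category-related component added along one population axis.
labels = np.repeat([0, 1], n_stimuli // 2)
category_axis = rng.standard_normal(n_neurons)
rates = rng.standard_normal((n_stimuli, n_neurons)) + 0.6 * np.outer(labels, category_axis)

accuracy = cross_val_score(LogisticRegression(max_iter=1000), rates, labels, cv=5)
print(f"animate vs. inanimate read-out accuracy: {accuracy.mean():.2f} (chance = 0.50)")
```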
Woolgar, Alexandra; Williams, Mark A; Rich, Anina N
2015-04-01
Selective attention is fundamental for human activity, but the details of its neural implementation remain elusive. One influential theory, the adaptive coding hypothesis (Duncan, 2001, An adaptive coding model of neural function in prefrontal cortex, Nature Reviews Neuroscience 2:820-829), proposes that single neurons in certain frontal and parietal regions dynamically adjust their responses to selectively encode relevant information. This selective representation may in turn support selective processing in more specialized brain regions such as the visual cortices. Here, we use multi-voxel decoding of functional magnetic resonance images to demonstrate selective representation of attended--and not distractor--objects in frontal, parietal, and visual cortices. In addition, we highlight a critical role for task demands in determining which brain regions exhibit selective coding. Strikingly, representation of attended objects in frontoparietal cortex was highest under conditions of high perceptual demand, when stimuli were hard to perceive and coding in early visual cortex was weak. Coding in early visual cortex varied as a function of attention and perceptual demand, while coding in higher visual areas was sensitive to the allocation of attention but robust to changes in perceptual difficulty. Consistent with high-profile reports, peripherally presented objects could also be decoded from activity at the occipital pole, a region which corresponds to the fovea. Our results emphasize the flexibility of frontoparietal and visual systems. They support the hypothesis that attention enhances the multi-voxel representation of information in the brain, and suggest that the engagement of this attentional mechanism depends critically on current task demands. Copyright © 2015 Elsevier Inc. All rights reserved.
Maplike representation of celestial E-vector orientations in the brain of an insect.
Heinze, Stanley; Homberg, Uwe
2007-02-16
For many insects, the polarization pattern of the blue sky serves as a compass cue for spatial navigation. E-vector orientations are detected by photoreceptors in a dorsal rim area of the eye. Polarized-light signals from both eyes are finally integrated in the central complex, a brain area consisting of two subunits, the protocerebral bridge and the central body. Here we show that a topographic representation of zenithal E-vector orientations underlies the columnar organization of the protocerebral bridge in a locust. The maplike arrangement is highly suited to signal head orientation under the open sky.
Combaz, Adrien; Van Hulle, Marc M
2015-01-01
We study the feasibility of a hybrid Brain-Computer Interface (BCI) combining simultaneous visual oddball and Steady-State Visually Evoked Potential (SSVEP) paradigms, where both types of stimuli are superimposed on a computer screen. Potentially, such a combination could result in a system being able to operate faster than a purely P300-based BCI and encode more targets than a purely SSVEP-based BCI. We analyse the interactions between the brain responses of the two paradigms, and assess the possibility of simultaneously detecting the brain activity evoked by both paradigms, in a series of three experiments in which EEG data are analysed offline. Despite differences in the shape of the P300 response between the pure oddball and hybrid conditions, we observe that the classification accuracy of this P300 response is not affected by the SSVEP stimulation. Nor do we observe any effect of the oddball stimulation on the power of the SSVEP response at the stimulation frequency. Finally, results from the last experiment show the possibility of detecting both types of brain responses simultaneously and suggest not only the feasibility of such a hybrid BCI but also a gain over pure oddball- and pure SSVEP-based BCIs in terms of communication rate.
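For the SSVEP half of such a hybrid system, detection typically amounts to checking narrow-band power at the stimulation frequency; the sketch below does this on a synthetic one-channel epoch (the P300 half would be a separate epoch-based classifier and is not shown). The sampling rate, stimulation frequency, and SNR definition are assumptions of the sketch, not taken from the paper.

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(0)
fs, seconds, f_stim = 256, 4, 15          # sampling rate (Hz), epoch length (s), SSVEP frequency
t = np.arange(fs * seconds) / fs

# Synthetic single-channel EEG epoch: noise plus a small 15 Hz SSVEP component.
eeg = rng.standard_normal(t.size) + 0.5 * np.sin(2 * np.pi * f_stim * t)

freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)
stim_bin = int(np.argmin(np.abs(freqs - f_stim)))
neighbours = psd[stim_bin - 5:stim_bin + 6]            # 11 bins centred on the target
snr = psd[stim_bin] / np.delete(neighbours, 5).mean()  # power relative to nearby bins
print(f"SSVEP signal-to-noise ratio at {f_stim} Hz: {snr:.1f}")
```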
Santos, Lucas; Opris, Ioan; Fuqua, Joshua; Hampson, Robert E; Deadwyler, Sam A
2012-04-15
A unique custom-made tetrode microdrive for recording from large numbers of neurons in several areas of primate brain is described as a means for assessing simultaneous neural activity in cortical and subcortical structures in nonhuman primates (NHPs) performing behavioral tasks. The microdrive device utilizes tetrode technology with up to six ultra-thin microprobe guide tubes (0.1mm) that can be independently positioned, each containing reduced diameter tetrode and/or hexatrode microwires (0.02 mm) for recording and isolating single neuron activity. The microdrive device is mounted within the standard NHP cranial well and allows traversal of brain depths up to 40.0 mm. The advantages of this technology are demonstrated via simultaneously recorded large populations of neurons with tetrode type probes during task performance from a) primary motor cortex and deep brain structures (caudate-putamen and hippocampus) and b) multiple layers within the prefrontal cortex. The means to characterize interactions of well-isolated ensembles of neurons recorded simultaneously from different regions, as shown with this device, has not been previously available for application in primate brain. The device has extensive application to primate models for the detection and study of inoperative or maladaptive neural circuits related to human neurological disorders. Published by Elsevier B.V.
Measuring Sparseness in the Brain: Comment on Bowers (2009)
ERIC Educational Resources Information Center
Quian Quiroga, Rodrigo; Kreiman, Gabriel
2010-01-01
Bowers challenged the common view in favor of distributed representations in psychological modeling and the main arguments given against localist and grandmother cell coding schemes. He revisited the results of several single-cell studies, arguing that they do not support distributed representations. We praise the contribution of Bowers (2009) for…
Semantics vs. World Knowledge in Prefrontal Cortex
ERIC Educational Resources Information Center
Pylkkanen, Liina; Oliveri, Bridget; Smart, Andrew J.
2009-01-01
Humans have knowledge about the properties of their native language at various levels of representation; sound, structure, and meaning computation constitute the core components of any linguistic theory. Although the brain sciences have engaged with representational theories of sound and syntactic structure, the study of the neural bases of…
Qiao, Lei; Zhang, Lijie
2017-01-01
Cognitive flexibility forms the core of the extraordinary ability of humans to adapt, but the precise neural mechanisms underlying our ability to nimbly shift between task sets remain poorly understood. Recent functional magnetic resonance imaging (fMRI) studies employing multivoxel pattern analysis (MVPA) have shown that a currently relevant task set can be decoded from activity patterns in the frontoparietal cortex, but whether these regions support the dynamic transformation of task sets from trial to trial is not clear. Here, we combined a cued task-switching protocol with human (both sexes) fMRI, and harnessed representational similarity analysis (RSA) to facilitate a novel assessment of trial-by-trial changes in neural task-set representations. We first used MVPA to define task-sensitive frontoparietal and visual regions and found that neural task-set representations on switch trials are less stably encoded than on repeat trials. We then exploited RSA to show that the neural representational pattern dissimilarity across consecutive trials is greater for switch trials than for repeat trials, and that the degree of this pattern dissimilarity predicts behavior. Moreover, the overall neural pattern of representational dissimilarities followed from the assumption that repeating sets, compared with switching sets, results in stronger neural task representations. Finally, when moving from cue to target phase within a trial, pattern dissimilarities tracked the transformation from previous-trial task representations to the currently relevant set. These results provide neural evidence for the longstanding assumptions of an effortful task-set reconfiguration process hampered by task-set inertia, and they demonstrate that frontoparietal and stimulus processing regions support “dynamic adaptive coding,” flexibly representing changing task sets in a trial-by-trial fashion. SIGNIFICANCE STATEMENT Humans can fluently switch between different tasks, reflecting an ability to dynamically configure “task sets,” rule representations that link stimuli to appropriate responses. Recent studies show that neural signals in frontal and parietal brain regions can tell us which of two tasks a person is currently performing. However, it is not known whether these regions are also involved in dynamically reconfiguring task-set representations when switching between tasks. Here we measured human brain activity during task switching and tracked the similarity of neural task-set representations from trial to trial. We show that frontal and parietal brain regions flexibly recode changing task sets in a trial-by-trial fashion, and that task-set similarity over consecutive trials predicts behavior. PMID:28972126
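The consecutive-trial dissimilarity measure at the heart of this analysis can be illustrated with simulated patterns: draw each trial's frontoparietal pattern around one of two task-set templates and compare the correlation distance between successive trials on switch versus repeat transitions. Everything below is a toy stand-in, not the study's RSA pipeline.

```python
import numpy as np
from scipy.spatial.distance import correlation   # correlation distance = 1 - r

rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 150

# Each trial's pattern is drawn around one of two task-set templates.
templates = rng.standard_normal((2, n_voxels))
task = rng.integers(0, 2, size=n_trials)
patterns = templates[task] + 1.0 * rng.standard_normal((n_trials, n_voxels))

# Dissimilarity between consecutive trials, split into switch and repeat transitions.
dissimilarity = np.array([correlation(patterns[i - 1], patterns[i])
                          for i in range(1, n_trials)])
is_switch = task[1:] != task[:-1]
print(f"consecutive-trial dissimilarity: switch = {dissimilarity[is_switch].mean():.2f}, "
      f"repeat = {dissimilarity[~is_switch].mean():.2f}")
```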
ERIC Educational Resources Information Center
Emmorey, Karen; Petrich, Jennifer A. F.; Gollan, Tamar H.
2012-01-01
Bilinguals who are fluent in American Sign Language (ASL) and English often produce "code-blends"--simultaneously articulating a sign and a word while conversing with other ASL-English bilinguals. To investigate the cognitive mechanisms underlying code-blend processing, we compared picture-naming times (Experiment 1) and semantic categorization…
Decoding word and category-specific spatiotemporal representations from MEG and EEG
Chan, Alexander M.; Halgren, Eric; Marinkovic, Ksenija; Cash, Sydney S.
2010-01-01
The organization and localization of lexico-semantic information in the brain has been debated for many years. Specifically, lesion and imaging studies have attempted to map the brain areas representing living versus non-living objects, however, results remain variable. This may be due, in part, to the fact that the univariate statistical mapping analyses used to detect these brain areas are typically insensitive to subtle, but widespread, effects. Decoding techniques, on the other hand, allow for a powerful multivariate analysis of multichannel neural data. In this study, we utilize machine-learning algorithms to first demonstrate that semantic category, as well as individual words, can be decoded from EEG and MEG recordings of subjects performing a language task. Mean accuracies of 76% (chance = 50%) and 83% (chance = 20%) were obtained for the decoding of living vs. non-living category or individual words respectively. Furthermore, we utilize this decoding analysis to demonstrate that the representations of words and semantic category are highly distributed both spatially and temporally. In particular, bilateral anterior temporal, bilateral inferior frontal, and left inferior temporal-occipital sensors are most important for discrimination. Successful intersubject and intermodality decoding shows that semantic representations between stimulus modalities and individuals are reasonably consistent. These results suggest that both word and category-specific information are present in extracranially recorded neural activity and that these representations may be more distributed, both spatially and temporally, than previous studies suggest. PMID:21040796
Localizing Pain Matrix and Theory of Mind networks with both verbal and non-verbal stimuli.
Jacoby, Nir; Bruneau, Emile; Koster-Hale, Jorie; Saxe, Rebecca
2016-02-01
Functional localizer tasks allow researchers to identify brain regions in each individual's brain, using a combination of anatomical and functional constraints. In this study, we compare three social cognitive localizer tasks, designed to efficiently identify regions in the "Pain Matrix," recruited in response to a person's physical pain, and the "Theory of Mind network," recruited in response to a person's mental states (i.e. beliefs and emotions). Participants performed three tasks: first, the verbal false-belief stories task; second, a verbal task including stories describing physical pain versus emotional suffering; and third, passively viewing a non-verbal animated movie, which included segments depicting physical pain and beliefs and emotions. All three localizers were efficient in identifying replicable, stable networks in individual subjects. The consistency across tasks makes all three tasks viable localizers. Nevertheless, there were small reliable differences in the location of the regions and the pattern of activity within regions, hinting at more specific representations. The new localizers go beyond those currently available: first, they simultaneously identify two functional networks with no additional scan time, and second, the non-verbal task extends the populations in whom functional localizers can be applied. These localizers will be made publicly available. Copyright © 2015 Elsevier Inc. All rights reserved.
Bache, Cathleen; Springer, Anne; Noack, Hannes; Stadler, Waltraud; Kopp, Franziska; Lindenberger, Ulman; Werkle-Bergner, Markus
2017-01-01
Research has shown that infants are able to track a moving target efficiently – even if it is transiently occluded from sight. This basic ability allows prediction of when and where events happen in everyday life. Yet, it is unclear whether, and how, infants internally represent the time course of ongoing movements to derive predictions. In this study, 10-month-old crawlers observed the video of a same-aged crawling baby that was transiently occluded and reappeared in either a temporally continuous or non-continuous manner (i.e., delayed by 500 ms vs. forwarded by 500 ms relative to the real-time movement). Eye movement and rhythmic neural brain activity (EEG) were measured simultaneously. Eye movement analyses showed that infants were sensitive to slight temporal shifts in movement continuation after occlusion. Furthermore, brain activity associated with sensorimotor processing differed between observation of continuous and non-continuous movements. Early sensitivity to an action’s timing may hence be explained within the internal real-time simulation account of action observation. Overall, the results support the hypothesis that 10-month-old infants are well prepared for internal representation of the time course of observed movements that are within the infants’ current motor repertoire. PMID:28769831
Stimulus specificity of a steady-state visual-evoked potential-based brain-computer interface.
Ng, Kian B; Bradley, Andrew P; Cunnington, Ross
2012-06-01
The mechanisms of neural excitation and inhibition when given a visual stimulus are well studied. It has been established that changing stimulus specificity such as luminance contrast or spatial frequency can alter the neuronal activity and thus modulate the visual-evoked response. In this paper, we study the effect that stimulus specificity has on the classification performance of a steady-state visual-evoked potential-based brain-computer interface (SSVEP-BCI). For example, we investigate how closely two visual stimuli can be placed before they compete for neural representation in the cortex and thus influence BCI classification accuracy. We characterize stimulus specificity using the four stimulus parameters commonly encountered in SSVEP-BCI design: temporal frequency, spatial size, number of simultaneously displayed stimuli and their spatial proximity. By varying these quantities and measuring the SSVEP-BCI classification accuracy, we are able to determine the parameters that provide optimal performance. Our results show that superior SSVEP-BCI accuracy is attained when stimuli are placed spatially more than 5° apart, with a size that subtends at least 2° of visual angle, and when using a tagging frequency between the high-alpha and beta bands. These findings may assist in deciding the stimulus parameters for optimal SSVEP-BCI design.
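As a hedged illustration of frequency-tagged SSVEP decoding, the sketch below scores a single EEG epoch by spectral power at each candidate tagging frequency; the sampling rate, frequencies, and epoch length are placeholders, and the paper's actual classifier may differ.

```python
# Simplified illustration of SSVEP target selection by spectral power at the
# tagging frequencies (parameters are assumed, not taken from the paper).
import numpy as np

fs = 250.0                       # sampling rate in Hz (assumed)
tag_freqs = [10.0, 15.0]         # tagging frequencies of two stimuli (assumed)
t = np.arange(0, 4.0, 1.0 / fs)  # 4-s EEG epoch

# Synthetic occipital channel: subject attends the 15 Hz stimulus
eeg = 0.5 * np.sin(2 * np.pi * 15.0 * t) + np.random.default_rng(2).normal(0, 1, t.size)

spectrum = np.abs(np.fft.rfft(eeg)) ** 2
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)

def band_power(f0, half_width=0.2):
    """Power summed in a narrow band around a tagging frequency."""
    return spectrum[(freqs >= f0 - half_width) & (freqs <= f0 + half_width)].sum()

decoded = tag_freqs[int(np.argmax([band_power(f) for f in tag_freqs]))]
print(f"decoded attended stimulus frequency: {decoded} Hz")
```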
Does scene context always facilitate retrieval of visual object representations?
Nakashima, Ryoichi; Yokosawa, Kazuhiko
2011-04-01
An object-to-scene binding hypothesis maintains that visual object representations are stored as part of a larger scene representation or scene context, and that scene context facilitates retrieval of object representations (see, e.g., Hollingworth, Journal of Experimental Psychology: Learning, Memory and Cognition, 32, 58-69, 2006). Support for this hypothesis comes from data using an intentional memory task. In the present study, we examined whether scene context always facilitates retrieval of visual object representations. In two experiments, we investigated whether the scene context facilitates retrieval of object representations, using a new paradigm in which a memory task is appended to a repeated-flicker change detection task. Results indicated that in normal scene viewing, in which many simultaneous objects appear, scene context facilitation of the retrieval of object representations (henceforth termed object-to-scene binding) occurred only when the observer was required to retain much information for a task (i.e., an intentional memory task).
O’Connell, Caitlin; Ho, Leon C.; Murphy, Matthew C.; Conner, Ian P.; Wollstein, Gadi; Cham, Rakie; Chan, Kevin C.
2016-01-01
Human visual performance has been observed to exhibit superiority in localized regions of the visual field across many classes of stimuli. However, the underlying neural mechanisms remain unclear. This study aims to determine whether visual information processing in the human brain depends on the location of stimuli in the visual field and the corresponding neuroarchitecture, using blood-oxygenation-level-dependent functional MRI (fMRI) and diffusion kurtosis MRI (DKI), respectively, in 15 healthy individuals at 3 Tesla. In fMRI, visual stimulation to the lower hemifield showed stronger brain responses and larger brain activation volumes than the upper hemifield, indicative of the differential sensitivity of the human brain across the visual field. In DKI, the brain regions mapping to the lower visual field exhibited higher mean kurtosis but not fractional anisotropy or mean diffusivity when compared to the upper visual field. These results suggested differing distributions of microstructural organization across visual-field brain representations. There was also a strong positive relationship between diffusion kurtosis and fMRI responses in the lower-field brain representations. In summary, this study suggested structural and functional brain involvement in the asymmetry of visual field responses in humans, and is important to the neurophysiological and psychological understanding of human visual information processing. PMID:27631541
Decoding Articulatory Features from fMRI Responses in Dorsal Speech Regions.
Correia, Joao M; Jansma, Bernadette M B; Bonte, Milene
2015-11-11
The brain's circuitry for perceiving and producing speech may show a notable level of overlap that is crucial for normal development and behavior. The extent to which sensorimotor integration plays a role in speech perception remains highly controversial, however. Methodological constraints related to experimental designs and analysis methods have so far prevented the disentanglement of neural responses to acoustic versus articulatory speech features. Using a passive listening paradigm and multivariate decoding of single-trial fMRI responses to spoken syllables, we investigated brain-based generalization of articulatory features (place and manner of articulation, and voicing) beyond their acoustic (surface) form in adult human listeners. For example, we trained a classifier to discriminate place of articulation within stop syllables (e.g., /pa/ vs /ta/) and tested whether this training generalizes to fricatives (e.g., /fa/ vs /sa/). This novel approach revealed generalization of place and manner of articulation at multiple cortical levels within the dorsal auditory pathway, including auditory, sensorimotor, motor, and somatosensory regions, suggesting the representation of sensorimotor information. Additionally, generalization of voicing included the right anterior superior temporal sulcus associated with the perception of human voices as well as somatosensory regions bilaterally. Our findings highlight the close connection between brain systems for speech perception and production, and in particular, indicate the availability of articulatory codes during passive speech perception. Sensorimotor integration is central to verbal communication and provides a link between auditory signals of speech perception and motor programs of speech production. It remains highly controversial, however, to what extent the brain's speech perception system actively uses articulatory (motor), in addition to acoustic/phonetic, representations. In this study, we examine the role of articulatory representations during passive listening using carefully controlled stimuli (spoken syllables) in combination with multivariate fMRI decoding. Our approach enabled us to disentangle brain responses to acoustic and articulatory speech properties. In particular, it revealed articulatory-specific brain responses of speech at multiple cortical levels, including auditory, sensorimotor, and motor regions, suggesting the representation of sensorimotor information during passive speech perception. Copyright © 2015 the authors 0270-6474/15/3515015-11$15.00/0.
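A toy version of the cross-class generalization logic described above (train on stop syllables, test on fricatives), with simulated patterns and a linear classifier standing in for the authors' decoder; all sizes and the embedded signal are fabricated for illustration.

```python
# Toy generalization analysis: train a classifier on place of articulation within
# one syllable class and test it on another (simulated data, not the study's fMRI).
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(3)
n_voxels = 400

def simulate(n, shift):
    """Placeholder single-trial fMRI patterns for two places of articulation."""
    X = rng.normal(size=(n, n_voxels))
    y = rng.integers(0, 2, size=n)        # 0 = labial, 1 = alveolar
    X[y == 1, :50] += shift               # weak place signal shared across classes
    return X, y

X_stops, y_stops = simulate(120, shift=0.4)   # e.g., /pa/ vs /ta/
X_fric, y_fric = simulate(120, shift=0.4)     # e.g., /fa/ vs /sa/

clf = LinearSVC(C=1.0, dual=False).fit(X_stops, y_stops)
print(f"cross-class generalization accuracy: {clf.score(X_fric, y_fric):.2%}")
```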
Zhao, Guangjun; Wang, Xuchu; Niu, Yanmin; Tan, Liwen; Zhang, Shao-Xiang
2016-01-01
Cryosection brain images in the Chinese Visible Human (CVH) dataset contain rich anatomical structure information of tissues because of their high resolution (e.g., 0.167 mm per pixel). Fast and accurate segmentation of these images into white matter, gray matter, and cerebrospinal fluid plays a critical role in analyzing and measuring the anatomical structures of the human brain. However, most existing automated segmentation methods are designed for computed tomography or magnetic resonance imaging data, and they may not be applicable for cryosection images due to the imaging difference. In this paper, we propose a supervised learning-based CVH brain tissue segmentation method that uses stacked autoencoders (SAE) to automatically learn deep feature representations. Specifically, our model includes two successive parts where two three-layer SAEs take image patches as input to learn the complex anatomical feature representation, and then these features are sent to a Softmax classifier for inferring the labels. Experimental results validated the effectiveness of our method and showed that it outperformed four other classical brain tissue detection strategies. Furthermore, we reconstructed three-dimensional surfaces of these tissues, which show their potential in exploring the high-resolution anatomical structures of the human brain. PMID:27057543
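A compact sketch of the two-stage idea described above, assuming PyTorch is available; the patch size, layer widths, and training schedule are illustrative, and the random patches stand in for real CVH image data.

```python
# Minimal sketch: stacked autoencoders learn patch features, then a softmax head
# assigns tissue labels (WM/GM/CSF). All sizes and data here are placeholders.
import torch
import torch.nn as nn

patch_dim = 13 * 13          # flattened image patch (assumed size)
n_classes = 3                # white matter, gray matter, cerebrospinal fluid

class AutoEncoder(nn.Module):
    def __init__(self, d_in, d_hidden):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_in, d_hidden), nn.Sigmoid())
        self.dec = nn.Sequential(nn.Linear(d_hidden, d_in), nn.Sigmoid())
    def forward(self, x):
        return self.dec(self.enc(x))

ae1, ae2 = AutoEncoder(patch_dim, 256), AutoEncoder(256, 64)
softmax_head = nn.Linear(64, n_classes)

patches = torch.rand(1024, patch_dim)             # stand-in training patches
labels = torch.randint(0, n_classes, (1024,))     # stand-in tissue labels

# Greedy layer-wise pretraining of each autoencoder on reconstruction loss
for ae, data in [(ae1, patches), (ae2, None)]:
    if data is None:
        data = ae1.enc(patches).detach()
    opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
    for _ in range(50):
        opt.zero_grad()
        loss = nn.functional.mse_loss(ae(data), data)
        loss.backward()
        opt.step()

# Train the softmax classifier on the stacked features
opt = torch.optim.Adam(softmax_head.parameters(), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    feats = ae2.enc(ae1.enc(patches)).detach()
    loss = nn.functional.cross_entropy(softmax_head(feats), labels)
    loss.backward()
    opt.step()

print("predicted tissue labels:", softmax_head(ae2.enc(ae1.enc(patches[:5]))).argmax(1))
```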
ERIC Educational Resources Information Center
McGee, Daniel; Moore-Russo, Deborah
2015-01-01
A test project at the University of Puerto Rico in Mayagüez used GeoGebra applets to promote the concept of multirepresentational fluency among high school mathematics preservice teachers. For this study, this fluency was defined as simultaneous awareness of all representations associated with a mathematical concept, as measured by the ability to…
Wang, Fang; Han, Yong; Wang, Bingyu; Peng, Qian; Huang, Xiaoqun; Miller, Karol; Wittek, Adam
2018-05-12
In this study, we investigate the effects of modelling choices for the brain-skull interface (layers of tissues between the brain and skull that determine boundary conditions for the brain) and the constitutive model of brain parenchyma on the brain responses under violent impact as predicted using a computational biomechanics model. We used the head/brain model from the Total HUman Model for Safety (THUMS), an extensively validated finite element model of the human body that has been applied in numerous injury biomechanics studies. The computations were conducted using the well-established nonlinear explicit dynamics finite element code LS-DYNA. We employed four approaches for modelling the brain-skull interface and four constitutive models for the brain tissue in the numerical simulations of the experiments on post-mortem human subjects exposed to violent impacts reported in the literature. The brain-skull interface models included direct representation of the brain meninges and cerebrospinal fluid, outer brain surface rigidly attached to the skull, frictionless sliding contact between the brain and skull, and a layer of spring-type cohesive elements between the brain and skull. We considered Ogden hyperviscoelastic, Mooney-Rivlin hyperviscoelastic, neo-Hookean hyperviscoelastic and linear viscoelastic constitutive models of the brain tissue. Our study indicates that the predicted deformations within the brain and related brain injury criteria are strongly affected by both the approach to modelling the brain-skull interface and the constitutive model of the brain parenchyma tissues. The results suggest that accurate prediction of deformations within the brain and risk of brain injury due to violent impact using computational biomechanics models may require representation of the meninges and subarachnoidal space with cerebrospinal fluid in the model and application of a hyperviscoelastic (preferably Ogden-type) constitutive model for the brain tissue.
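For reference, the generic (textbook) Ogden strain-energy density in terms of the principal stretches is shown below; this is only the standard hyperelastic form, not necessarily the exact viscoelastic parameterization used in THUMS or in the study.

```latex
% Generic Ogden strain-energy density; \mu_i, \alpha_i are material parameters and
% \lambda_k the principal stretches. The viscoelastic extension is omitted here.
W(\lambda_1,\lambda_2,\lambda_3) \;=\; \sum_{i=1}^{N} \frac{\mu_i}{\alpha_i}
  \left( \lambda_1^{\alpha_i} + \lambda_2^{\alpha_i} + \lambda_3^{\alpha_i} - 3 \right)
```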
Prefrontal Cortex Networks Shift from External to Internal Modes during Learning.
Brincat, Scott L; Miller, Earl K
2016-09-14
As we learn about items in our environment, their neural representations become increasingly enriched with our acquired knowledge. But there is little understanding of how network dynamics and neural processing related to external information change as it becomes laden with "internal" memories. We sampled spiking and local field potential activity simultaneously from multiple sites in the lateral prefrontal cortex (PFC) and the hippocampus (HPC), regions critical for sensory associations, of monkeys performing an object paired-associate learning task. We found that in the PFC, evoked potentials to, and neural information about, external sensory stimulation decreased while induced beta-band (∼11-27 Hz) oscillatory power and synchrony associated with "top-down" or internal processing increased. By contrast, the HPC showed little evidence of learning-related changes in either spiking activity or network dynamics. The results suggest that during associative learning, PFC networks shift their resources from external to internal processing. As we learn about items in our environment, their representations in our brain become increasingly enriched with our acquired "top-down" knowledge. We found that in the prefrontal cortex, but not the hippocampus, processing of external sensory inputs decreased while internal network dynamics related to top-down processing increased. The results suggest that during learning, prefrontal cortex networks shift their resources from external (sensory) to internal (memory) processing. Copyright © 2016 the authors 0270-6474/16/369739-16$15.00/0.
Feedback Synthesizes Neural Codes for Motion.
Clarke, Stephen E; Maler, Leonard
2017-05-08
In senses as diverse as vision, hearing, touch, and the electrosense, sensory neurons receive bottom-up input from the environment, as well as top-down input from feedback loops involving higher brain regions [1-4]. Through connectivity with local inhibitory interneurons, these feedback loops can exert both positive and negative control over fundamental aspects of neural coding, including bursting [5, 6] and synchronous population activity [7, 8]. Here we show that a prominent midbrain feedback loop synthesizes a neural code for motion reversal in the hindbrain electrosensory ON- and OFF-type pyramidal cells. This top-down mechanism generates an accurate bidirectional encoding of object position, despite the inability of the electrosensory afferents to generate a consistent bottom-up representation [9, 10]. The net positive activity of this midbrain feedback is additionally regulated through a hindbrain feedback loop, which reduces stimulus-induced bursting and also dampens the ON and OFF cell responses to interfering sensory input [11]. We demonstrate that synthesis of motion representations and cancellation of distracting signals are mediated simultaneously by feedback, satisfying an accepted definition of spatial attention [12]. The balance of excitatory and inhibitory feedback establishes a "focal" distance for optimized neural coding, whose connection to a classic motion-tracking behavior provides new insight into the computational roles of feedback and active dendrites in spatial localization [13, 14]. Copyright © 2017 Elsevier Ltd. All rights reserved.
Sajjad, Muhammad; Mehmood, Irfan; Baik, Sung Wook
2015-01-01
Image super-resolution (SR) plays a vital role in medical imaging by allowing a more efficient and effective diagnosis process. Diagnosis from low-resolution (LR) and noisy images is usually difficult and inaccurate. Resolution enhancement through conventional interpolation methods strongly affects the precision of consequent processing steps, such as segmentation and registration. Therefore, we propose an efficient sparse coded image SR reconstruction technique using a trained dictionary. We apply a simple and efficient regularized version of orthogonal matching pursuit (ROMP) to seek the coefficients of sparse representation. ROMP has the transparency and greediness of OMP and the robustness of L1-minimization, which enhances the dictionary learning process to capture feature descriptors such as oriented edges and contours from complex images like brain MRIs. The sparse coding part of the K-SVD dictionary training procedure is modified by substituting OMP with ROMP. The dictionary update stage allows simultaneously updating an arbitrary number of atoms and vectors of sparse coefficients. In SR reconstruction, ROMP is used to determine the vector of sparse coefficients for the underlying patch. The recovered representations are then applied to the trained dictionary, and finally, an optimization leads to a high-quality, high-resolution output. Experimental results demonstrate that the super-resolution reconstruction quality of the proposed scheme is comparatively better than other state-of-the-art schemes.
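The sketch below illustrates the coupled-dictionary reconstruction step in simplified form, substituting scikit-learn's plain OMP for the paper's ROMP and random matrices for trained K-SVD dictionaries; patch sizes and the sparsity level are assumptions.

```python
# Illustrative sparse-coding SR step: sparse code of a low-res patch over a
# low-res dictionary is reused with a coupled high-res dictionary.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(4)
lr_dim, hr_dim, n_atoms = 36, 81, 512        # 6x6 LR patch, 9x9 HR patch (assumed)
D_lr = rng.normal(size=(lr_dim, n_atoms))    # coupled low-/high-resolution
D_hr = rng.normal(size=(hr_dim, n_atoms))    # dictionaries (random stand-ins)
D_lr /= np.linalg.norm(D_lr, axis=0)

lr_patch = rng.normal(size=lr_dim)           # observed low-resolution patch

# Sparse code of the LR patch over the LR dictionary ...
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=8).fit(D_lr, lr_patch)
alpha = omp.coef_

# ... reused with the coupled HR dictionary to synthesize the HR patch
hr_patch = D_hr @ alpha
print("reconstructed HR patch shape:", hr_patch.shape)
```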
Vogel, Stephan E; Goffin, Celia; Ansari, Daniel
2015-04-01
The way the human brain constructs representations of numerical symbols is poorly understood. While increasing evidence from neuroimaging studies has indicated that the intraparietal sulcus (IPS) becomes increasingly specialized for symbolic numerical magnitude representation over developmental time, the extent to which these changes are associated with age-related differences in symbolic numerical magnitude representation or with developmental changes in non-numerical processes, such as response selection, remains to be uncovered. To address these outstanding questions we investigated developmental changes in the cortical representation of symbolic numerical magnitude in 6- to 14-year-old children using a passive functional magnetic resonance imaging adaptation design, thereby mitigating the influence of response selection. A single-digit Arabic numeral was repeatedly presented on a computer screen and interspersed with the presentation of novel digits deviating as a function of numerical ratio (smaller/larger number). Results demonstrated a correlation between age and numerical ratio in the left IPS, suggesting an age-related increase in the extent to which numerical symbols are represented in the left IPS. Brain activation of the right IPS was modulated by numerical ratio but did not correlate with age, indicating hemispheric differences in IPS engagement during the development of symbolic numerical representation. Copyright © 2014 The Authors. Published by Elsevier Ltd.. All rights reserved.
Yu, Renping; Zhang, Han; An, Le; Chen, Xiaobo; Wei, Zhihui; Shen, Dinggang
2017-01-01
Brain functional network analysis has shown great potential in understanding brain functions and also in identifying biomarkers for brain diseases, such as Alzheimer's disease (AD) and its early stage, mild cognitive impairment (MCI). In these applications, accurate construction of a biologically meaningful brain network is critical. Sparse learning has been widely used for brain network construction; however, its l1-norm penalty simply penalizes each edge of a brain network equally, without considering the original connectivity strength, one of the most important inherent link-wise characteristics. Moreover, based on the similarity of link-wise connectivity, the brain network shows a prominent group structure (i.e., a set of edges sharing similar attributes). In this article, we propose a novel brain functional network modeling framework with a “connectivity strength-weighted sparse group constraint.” In particular, the network modeling can be optimized by considering both raw connectivity strength and its group structure, without losing the merit of sparsity. Our proposed method is applied to MCI classification, a challenging task for early AD diagnosis. Experimental results based on resting-state functional MRI from 50 MCI patients and 49 healthy controls show that our proposed method is more effective (i.e., achieving a significantly higher classification accuracy, 84.8%) than other competing methods (e.g., sparse representation, accuracy = 65.6%). Post hoc inspection of the informative features further shows more biologically meaningful brain functional connectivities obtained by our proposed method. PMID:28150897
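One plausible reading of such a constraint, written as a regression objective for constructing the connections of region i, is sketched below; the notation (per-edge penalty weights c_j derived from raw connectivity strength, groups G of similar edges) is our assumption and may differ from the paper's exact formulation.

```latex
% Assumed weighted sparse-group regression objective for region i:
% x_i is the region's time series, X_{-i} the other regions' time series,
% c_j a penalty weight from the raw connectivity strength, G the edge groups.
\min_{\mathbf{w}} \; \tfrac{1}{2}\,\lVert \mathbf{x}_i - \mathbf{X}_{-i}\mathbf{w} \rVert_2^2
  \; + \; \lambda_1 \sum_{j} c_j\,\lvert w_j \rvert
  \; + \; \lambda_2 \sum_{g \in G} \omega_g\,\lVert \mathbf{w}_g \rVert_2
```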
Sex Differences in the Spatial Representation of Number
ERIC Educational Resources Information Center
Bull, Rebecca; Cleland, Alexandra A.; Mitchell, Thomas
2013-01-01
There is a large body of accumulated evidence from behavioral and neuroimaging studies regarding how and where in the brain we represent basic numerical information. A number of these studies have considered how numerical representations may differ between individuals according to their age or level of mathematical ability, but one issue rarely…
A 2.5-D Representation of the Human Hand
ERIC Educational Resources Information Center
Longo, Matthew R.; Haggard, Patrick
2012-01-01
Primary somatosensory maps in the brain represent the body as a discontinuous, fragmented set of two-dimensional (2-D) skin regions. We nevertheless experience our body as a coherent three-dimensional (3-D) volumetric object. The links between these different aspects of body representation, however, remain poorly understood. Perceiving the body's…
ERIC Educational Resources Information Center
Wood, Justin N.; Wood, Samantha M. W.
2018-01-01
How do newborns learn to recognize objects? According to temporal learning models in computational neuroscience, the brain constructs object representations by extracting smoothly changing features from the environment. To date, however, it is unknown whether newborns depend on smoothly changing features to build invariant object representations.…
ERIC Educational Resources Information Center
Plaut, David C.; McClelland, James L.
2010-01-01
According to Bowers, the finding that there are neurons with highly selective responses to familiar stimuli supports theories positing localist representations over approaches positing the type of distributed representations typically found in parallel distributed processing (PDP) models. However, his conclusions derive from an overly narrow view…
Ichi, Ni, 3, 4: Neural Representation of Kana, Kanji, and Arabic Numbers in Native Japanese Speakers
ERIC Educational Resources Information Center
Coderre, Emily L.; Filippi, Christopher G.; Newhouse, Paul A.; Dumas, Julie A.
2009-01-01
The Japanese language represents numbers in kana digit words (a syllabic notation), kanji numbers and Arabic numbers (logographic notations). Kanji and Arabic numbers have previously shown similar patterns of numerical processing, and because of their shared logographic properties may exhibit similar brain areas of numerical representation. Kana…
The Nature of Experience Determines Object Representations in the Visual System
ERIC Educational Resources Information Center
Wong, Yetta K.; Folstein, Jonathan R.; Gauthier, Isabel
2012-01-01
Visual perceptual learning (PL) and perceptual expertise (PE) traditionally lead to different training effects and recruit different brain areas, but reasons for these differences are largely unknown. Here, we tested how the learning history influences visual object representations. Two groups were trained with tasks typically used in PL or PE…
Convergence, Degeneracy, and Control
ERIC Educational Resources Information Center
Green, David W.; Crinion, Jenny; Price, Cathy J.
2006-01-01
Understanding the neural representation and control of language in normal bilingual speakers provides insights into the factors that constrain the acquisition of another language, insights into the nature of language expertise, and an understanding of the brain as an adaptive system. We illustrate both functional and structural brain changes…
Leyh, Rainer; Heinisch, Christine; Behringer, Johanna; Reiner, Iris; Spangler, Gottfried
2016-01-01
The perception of infant emotions is an integral part of sensitive caregiving within the mother-child relationship, a maternal ability which develops in mothers during their own attachment history. In this study we address the association between maternal attachment representation and brain activity underlying the perception of infant emotions. Event related potentials (ERPs) of 32 primiparous mothers were assessed during a three stimulus oddball task presenting negative, positive and neutral emotion expressions of infants as target, deviant or standard stimuli. Attachment representation was assessed with the Adult Attachment Interview during pregnancy. Securely attached mothers recognized emotions of infants more accurately than insecurely attached mothers. ERPs yielded amplified N170 amplitudes for insecure mothers when focusing on negative infant emotions. Secure mothers showed enlarged P3 amplitudes to target emotion expressions of infants compared to insecure mothers, especially within conditions with frequent negative infant emotions. In these conditions, P3 latencies were prolonged in insecure mothers. In summary, maternal attachment representation was found associated with brain activity during the perception of infant emotions. This further clarifies psychological mechanisms contributing to maternal sensitivity. PMID:26862743
Qualitatively different coding of symbolic and nonsymbolic numbers in the human brain.
Lyons, Ian M; Ansari, Daniel; Beilock, Sian L
2015-02-01
Are symbolic and nonsymbolic numbers coded differently in the brain? Neuronal data indicate that overlap in numerical tuning curves is a hallmark of the approximate, analogue nature of nonsymbolic number representation. Consequently, patterns of fMRI activity should be more correlated when the representational overlap between two numbers is relatively high. In bilateral intraparietal sulci (IPS), for nonsymbolic numbers, the pattern of voxelwise correlations between pairs of numbers mirrored the amount of overlap in their tuning curves under the assumption of approximate, analogue coding. In contrast, symbolic numbers showed a flat field of modest correlations more consistent with discrete, categorical representation (no systematic overlap between numbers). Directly correlating activity patterns for a given number across formats (e.g., the numeral "6" with six dots) showed no evidence of shared symbolic and nonsymbolic number-specific representations. Overall (univariate) activity in bilateral IPS was well fit by the log of the number being processed for both nonsymbolic and symbolic numbers. IPS activity is thus sensitive to numerosity regardless of format; however, the nature in which symbolic and nonsymbolic numbers are encoded is fundamentally different. © 2014 Wiley Periodicals, Inc.
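A toy demonstration of the reasoning above: under approximate, analogue coding with log-Gaussian tuning curves, nearby numbers produce more correlated population patterns than distant ones; the tuning width and unit count here are arbitrary choices, not the study's model parameters.

```python
# Overlapping tuning curves imply correlated population patterns for nearby numbers.
import numpy as np

numbers = np.arange(1, 10)
units_pref = np.linspace(0, np.log(10), 100)          # preferred log-numerosities
width = 0.35                                          # tuning width (assumed)

# Population response vector for each number under log-Gaussian tuning
responses = np.exp(-(np.log(numbers)[:, None] - units_pref[None, :]) ** 2
                   / (2 * width ** 2))

pred_corr = np.corrcoef(responses)                    # predicted pattern correlations
print("corr(5, 6) =", round(pred_corr[4, 5], 2), " corr(1, 9) =", round(pred_corr[0, 8], 2))
```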
Common Neural Representations for Visually Guided Reorientation and Spatial Imagery
Vass, Lindsay K.; Epstein, Russell A.
2017-01-01
Spatial knowledge about an environment can be cued from memory by perception of a visual scene during active navigation or by imagination of the relationships between nonvisible landmarks, such as when providing directions. It is not known whether these different ways of accessing spatial knowledge elicit the same representations in the brain. To address this issue, we scanned participants with fMRI, while they performed a judgment of relative direction (JRD) task that required them to retrieve real-world spatial relationships in response to either pictorial or verbal cues. Multivoxel pattern analyses revealed several brain regions that exhibited representations that were independent of the cues to access spatial memory. Specifically, entorhinal cortex in the medial temporal lobe and the retrosplenial complex (RSC) in the medial parietal lobe coded for the heading assumed on a particular trial, whereas the parahippocampal place area (PPA) contained information about the starting location of the JRD. These results demonstrate the existence of spatial representations in RSC, ERC, and PPA that are common to visually guided navigation and spatial imagery. PMID:26759482
Rosenthal, Gideon; Váša, František; Griffa, Alessandra; Hagmann, Patric; Amico, Enrico; Goñi, Joaquín; Avidan, Galia; Sporns, Olaf
2018-06-05
Connectomics generates comprehensive maps of brain networks, represented as nodes and their pairwise connections. The functional roles of nodes are defined by their direct and indirect connectivity with the rest of the network. However, the network context is not directly accessible at the level of individual nodes. Similar problems in language processing have been addressed with algorithms such as word2vec that create embeddings of words and their relations in a meaningful low-dimensional vector space. Here we apply this approach to create embedded vector representations of brain networks or connectome embeddings (CE). CE can characterize correspondence relations among brain regions, and can be used to infer links that are lacking from the original structural diffusion imaging, e.g., inter-hemispheric homotopic connections. Moreover, we construct predictive deep models of functional and structural connectivity, and simulate network-wide lesion effects using the face processing system as our application domain. We suggest that CE offers a novel approach to revealing relations between connectome structure and function.
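A schematic of the word2vec-style embedding idea applied to a brain graph, assuming gensim is available; the random adjacency matrix, walk parameters, and embedding dimension are placeholders rather than the authors' settings.

```python
# Schematic connectome-embedding pipeline: random walks on a weighted brain graph
# are fed to word2vec to embed nodes in a low-dimensional vector space.
import numpy as np
from gensim.models import Word2Vec

rng = np.random.default_rng(5)
n_nodes = 60
A = rng.random((n_nodes, n_nodes))
A = (A + A.T) / 2
np.fill_diagonal(A, 0)                       # stand-in weighted connectome

def random_walk(start, length=40):
    """Weighted random walk over the connectome adjacency matrix."""
    walk = [start]
    for _ in range(length - 1):
        p = A[walk[-1]] / A[walk[-1]].sum()
        walk.append(int(rng.choice(n_nodes, p=p)))
    return [str(n) for n in walk]

walks = [random_walk(node) for node in range(n_nodes) for _ in range(10)]
model = Word2Vec(walks, vector_size=32, window=5, min_count=1, sg=1, epochs=5)
embedding = np.vstack([model.wv[str(n)] for n in range(n_nodes)])
print("connectome embedding shape:", embedding.shape)   # (n_nodes, 32)
```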
Distributed representations in memory: Insights from functional brain imaging
Rissman, Jesse; Wagner, Anthony D.
2015-01-01
Forging new memories for facts and events, holding critical details in mind on a moment-to-moment basis, and retrieving knowledge in the service of current goals all depend on a complex interplay between neural ensembles throughout the brain. Over the past decade, researchers have increasingly leveraged powerful analytical tools (e.g., multi-voxel pattern analysis) to decode the information represented within distributed fMRI activity patterns. In this review, we discuss how these methods can sensitively index neural representations of perceptual and semantic content, and how leverage on the engagement of distributed representations provides unique insights into distinct aspects of memory-guided behavior. We emphasize that, in addition to characterizing the contents of memories, analyses of distributed patterns shed light on the processes that influence how information is encoded, maintained, or retrieved, and thus inform memory theory. We conclude by highlighting open questions about memory that can be addressed through distributed pattern analyses. PMID:21943171
Eccles, J A; Garfinkel, S N; Harrison, N A; Ward, J; Taylor, R E; Bewley, A P; Critchley, H D
2015-10-01
Some patients experience skin sensations of infestation and contamination that are elusive to proximate dermatological explanation. We undertook a functional magnetic resonance imaging study of the brain to demonstrate, for the first time, that central processing of infestation-relevant stimuli is altered in patients with such abnormal skin sensations. We show differences in neural activity within amygdala, insula, middle temporal lobe and frontal cortices. Patients also demonstrated altered measures of self-representation, with poorer sensitivity to internal bodily (interoceptive) signals and greater susceptibility to take on an illusion of body ownership: the rubber hand illusion. Together, these findings highlight a potential model for the maintenance of abnormal skin sensations, encompassing heightened threat processing within amygdala, increased salience of skin representations within insula and compromised prefrontal capacity for self-regulation and appraisal. Copyright © 2015 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Aparicio, Xavier; Heidlmayr, Karin; Isel, Frédéric
2017-01-01
The present behavioral study aimed to examine the impact of language control expertise on two domain-general control processes, i.e. active inhibition of competing representations and overcoming of inhibition. We compared how Simultaneous Interpreters (SI) and Highly Proficient Bilinguals--two groups assumed to differ in language control…
Pananceau, M; Rispal-Padel, L
2000-06-01
In classic conditioning, the interstimulus interval (ISI) between the conditioned (CS) and unconditioned (US) stimulus is a critical parameter. The aim of the present experiment was to assess whether, during conditioning, modification of the CS-US interval could reliably produce changes in the functional properties of the interposito-thalamo-cortical pathways (INTCps). Five cats were prepared for chronic stimulation and recording from several brain regions along this pathway in awake animals. The CS was a weak electric shock applied on the interposed nucleus of the cerebellum in sites that initially elicited forelimb flexion (i.e., alpha motor responses) in three cats, and equal proportions of flexor and extensor responses in two cats. The US was an electric shock applied on the skin that elicited forelimb flexions. The motor and neurobiological effects of synchronous CS-US were compared with pairings in which the CS was applied 100 ms before US. Simultaneous and sequential application of CS and US produced different behavioral outcomes and resulted in different neural processes in the interposito-thalamo-cortical pathways (INTCps). The simultaneous presentation of stimuli only produced a small increase in excitability spreading to all the body representational zones of the primary motor cortex and a weak increase in the amplitude of the alpha motor response. In contrast, the sequential application led to a profound modification of the interposed output to neurons in the forelimb representation of the motor cortex. These robust neuronal correlates of conditioning were accompanied by a large facilitation of the alpha motor response (alpha-MR). There were also changes in the direction of misdirected alpha responses and an emergence of functionally appropriate, long-latency withdrawal forelimb flexion. These data revealed that, during conditioning, plastic changes within the thalamocortical connections are selectively induced by sequential information from central and peripheral afferents. This sequence significantly contributed to neural processes that are responsible for the acquisition, expression, and extinction of anticipatory flexion responses.
Wang, Li; Shi, Feng; Li, Gang; Lin, Weili; Gilmore, John H.; Shen, Dinggang
2014-01-01
Segmentation of infant brain MR images is challenging due to insufficient image quality, severe partial volume effect, and ongoing maturation and myelination process. During the first year of life, the signal contrast between white matter (WM) and gray matter (GM) in MR images undergoes inverse changes. In particular, the inversion of WM/GM signal contrast appears around 6–8 months of age, where brain tissues appear isointense and hence exhibit extremely low tissue contrast, posing significant challenges for automated segmentation. In this paper, we propose a novel segmentation method to address the above-mentioned challenge based on the sparse representation of the complementary tissue distribution information from T1, T2 and diffusion-weighted images. Specifically, we first derive an initial segmentation from a library of aligned multi-modality images with ground-truth segmentations by using sparse representation in a patch-based fashion. The segmentation is further refined by the integration of the geometrical constraint information. The proposed method was evaluated on 22 6-month-old training subjects using leave-one-out cross-validation, as well as 10 additional infant testing subjects, showing superior results in comparison to other state-of-the-art methods. PMID:24505729
Cortical representations of communication sounds.
Heiser, Marc A; Cheung, Steven W
2008-10-01
This review summarizes recent research into cortical processing of vocalizations in animals and humans. There has been a resurgent interest in this topic accompanied by an increased number of studies using animal models with complex vocalizations and new methods in human brain imaging. Recent results from such studies are discussed. Experiments have begun to reveal the bilateral cortical fields involved in communication sound processing and the transformations of neural representations that occur among those fields. Advances have also been made in understanding the neuronal basis of interaction between developmental exposures and behavioral experiences with vocalization perception. Exposure to sounds during the developmental period produces large effects on brain responses, as do a variety of specific trained tasks in adults. Studies have also uncovered a neural link between the motor production of vocalizations and the representation of vocalizations in cortex. Parallel experiments in humans and animals are answering important questions about vocalization processing in the central nervous system. This dual approach promises to reveal microscopic, mesoscopic, and macroscopic principles of large-scale dynamic interactions between brain regions that underlie the complex phenomenon of vocalization perception. Such advances will yield a greater understanding of the causes, consequences, and treatment of disorders related to speech processing.
Behaviorally Relevant Abstract Object Identity Representation in the Human Parietal Cortex
Jeong, Su Keun
2016-01-01
The representation of object identity is fundamental to human vision. Using fMRI and multivoxel pattern analysis, here we report the representation of highly abstract object identity information in human parietal cortex. Specifically, in superior intraparietal sulcus (IPS), a region previously shown to track visual short-term memory capacity, we found object identity representations for famous faces varying freely in viewpoint, hairstyle, facial expression, and age; and for well known cars embedded in different scenes, and shown from different viewpoints and sizes. Critically, these parietal identity representations were behaviorally relevant as they closely tracked the perceived face-identity similarity obtained in a behavioral task. Meanwhile, the task-activated regions in prefrontal and parietal cortices (excluding superior IPS) did not exhibit such abstract object identity representations. Unlike previous studies, we also failed to observe identity representations in posterior ventral and lateral visual object-processing regions, likely due to the greater amount of identity abstraction demanded by our stimulus manipulation here. Our MRI slice coverage precluded us from examining identity representation in anterior temporal lobe, a likely region for the computing of identity information in the ventral region. Overall, we show that human parietal cortex, part of the dorsal visual processing pathway, is capable of holding abstract and complex visual representations that are behaviorally relevant. These results argue against a “content-poor” view of the role of parietal cortex in attention. Instead, the human parietal cortex seems to be “content rich” and capable of directly participating in goal-driven visual information representation in the brain. SIGNIFICANCE STATEMENT The representation of object identity (including faces) is fundamental to human vision and shapes how we interact with the world. Although object representation has traditionally been associated with human occipital and temporal cortices, here we show, by measuring fMRI response patterns, that a region in the human parietal cortex can robustly represent task-relevant object identities. These representations are invariant to changes in a host of visual features, such as viewpoint, and reflect an abstract level of representation that has not previously been reported in the human parietal cortex. Critically, these neural representations are behaviorally relevant as they closely track the perceived object identities. Human parietal cortex thus participates in the moment-to-moment goal-directed visual information representation in the brain. PMID:26843642
Robust generative asymmetric GMM for brain MR image segmentation.
Ji, Zexuan; Xia, Yong; Zheng, Yuhui
2017-11-01
Accurate segmentation of brain tissues from magnetic resonance (MR) images based on unsupervised statistical models such as the Gaussian mixture model (GMM) has been widely studied during the last decades. However, most GMM-based segmentation methods suffer from limited accuracy due to the influences of noise and intensity inhomogeneity in brain MR images. To further improve the accuracy of brain MR image segmentation, this paper presents a Robust Generative Asymmetric GMM (RGAGMM) for simultaneous brain MR image segmentation and intensity inhomogeneity correction. First, we develop an asymmetric distribution to fit the data shapes, and thus construct a spatially constrained asymmetric model. Then, we incorporate two pseudo-likelihood quantities and bias field estimation into the model's log-likelihood, aiming to exploit the within-cluster and between-cluster neighboring priors and to alleviate the impact of intensity inhomogeneity, respectively. Finally, an expectation maximization algorithm is derived to iteratively maximize the approximation of the data log-likelihood function, overcoming the intensity inhomogeneity in the image and segmenting the brain MR images simultaneously. To demonstrate the performance of the proposed algorithm, we first applied it to a synthetic brain MR image to illustrate the intermediate steps and the estimated distribution. A second group of experiments was carried out on clinical brain MR images acquired at 3 T, which contain serious intensity inhomogeneity and noise. We then quantitatively compared our algorithm to state-of-the-art segmentation approaches using the Dice coefficient (DC) on benchmark images obtained from IBSR and BrainWeb with different levels of noise and intensity inhomogeneity. The comparison results on various brain MR images demonstrate the superior performance of the proposed algorithm in dealing with noise and intensity inhomogeneity. In summary, the proposed RGAGMM algorithm can simply and efficiently incorporate spatial constraints into an EM framework to simultaneously segment brain MR images and estimate the intensity inhomogeneity. The proposed algorithm is flexible enough to fit the data shapes, can simultaneously overcome the influence of noise and intensity inhomogeneity, and improves segmentation accuracy by over 5% compared with several state-of-the-art algorithms. Copyright © 2017 Elsevier B.V. All rights reserved.
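As a greatly simplified point of comparison, the sketch below fits a plain three-class GMM by EM to voxel intensities with scikit-learn; it omits the asymmetric distributions, spatial priors, and bias-field estimation that constitute the proposed RGAGMM, and the intensity values are synthetic.

```python
# Plain three-class GMM fitted by EM to voxel intensities (baseline illustration only).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(6)
# Synthetic brain-like intensity samples: CSF, gray matter, white matter
intensities = np.concatenate([rng.normal(30, 8, 2000),
                              rng.normal(90, 10, 5000),
                              rng.normal(140, 9, 4000)]).reshape(-1, 1)

gmm = GaussianMixture(n_components=3, covariance_type="full", max_iter=200,
                      random_state=0).fit(intensities)
labels = gmm.predict(intensities)          # per-voxel tissue assignment
print("estimated class means:", np.sort(gmm.means_.ravel()).round(1))
```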
Neurons in the human hippocampus and amygdala respond to both low- and high-level image properties
Cabrales, Elaine; Wilson, Michael S.; Baker, Christopher P.; Thorp, Christopher K.; Smith, Kris A.; Treiman, David M.
2011-01-01
A large number of studies have demonstrated that structures within the medial temporal lobe, such as the hippocampus, are intimately involved in declarative memory for objects and people. Although these items are abstractions of the visual scene, specific visual details can change the speed and accuracy of their recall. By recording from 415 neurons in the hippocampus and amygdala of human epilepsy patients as they viewed images drawn from 10 image categories, we showed that the firing rates of 8% of these neurons encode image illuminance and contrast, low-level properties not directly pertinent to task performance, whereas in 7% of the neurons, firing rates encode the category of the item depicted in the image, a high-level property pertinent to the task. This simultaneous representation of high- and low-level image properties within the same brain areas may serve to bind separate aspects of visual objects into a coherent percept and allow episodic details of objects to influence mnemonic performance. PMID:21471400
Tang, Shiming; Zhang, Yimeng; Li, Zhihao; Li, Ming; Liu, Fang; Jiang, Hongfei; Lee, Tai Sing
2018-04-26
One general principle of sensory information processing is that the brain must optimize efficiency by reducing the number of neurons that process the same information. The sparseness of the sensory representations in a population of neurons reflects the efficiency of the neural code. Here, we employ large-scale two-photon calcium imaging to examine the responses of a large population of neurons within the superficial layers of area V1 with single-cell resolution, while simultaneously presenting a large set of natural visual stimuli, to provide the first direct measure of the population sparseness in awake primates. The results show that only 0.5% of neurons respond strongly to any given natural image, indicating a ten-fold increase in the inferred sparseness over previous measurements. These population activities are nevertheless necessary and sufficient to discriminate visual stimuli with high accuracy, suggesting that the neural code in the primary visual cortex is both super-sparse and highly efficient. © 2018, Tang et al.
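One common way to quantify such sparseness is the Treves-Rolls index, sketched below for a neuron-by-image response matrix; this is a generic measure and not necessarily the exact statistic used in the study, and the simulated rates are placeholders.

```python
# Treves-Rolls population sparseness for a neuron x image response matrix.
import numpy as np

rng = np.random.default_rng(7)
n_neurons, n_images = 500, 200
rates = rng.exponential(0.2, size=(n_neurons, n_images))
rates[rng.random((n_neurons, n_images)) < 0.005] += 5.0   # a few strong responders

def population_sparseness(r):
    """Treves-Rolls sparseness of responses r to one image: low values = sparse."""
    r = np.clip(r, 0, None)
    return (r.mean() ** 2) / (r ** 2).mean()

s = np.array([population_sparseness(rates[:, i]) for i in range(n_images)])
print(f"mean population sparseness across images: {s.mean():.3f}")
```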
The interactions of multisensory integration with endogenous and exogenous attention
Tang, Xiaoyu; Wu, Jinglong; Shen, Yong
2016-01-01
Stimuli from multiple sensory organs can be integrated into a coherent representation through multiple phases of multisensory processing; this phenomenon is called multisensory integration. Multisensory integration can interact with attention. Here, we propose a framework in which attention modulates multisensory processing in both endogenous (goal-driven) and exogenous (stimulus-driven) ways. Moreover, multisensory integration exerts not only bottom-up but also top-down control over attention. Specifically, we propose the following: (1) endogenous attentional selectivity acts on multiple levels of multisensory processing to determine the extent to which simultaneous stimuli from different modalities can be integrated; (2) integrated multisensory events exert top-down control on attentional capture via multisensory search templates that are stored in the brain; (3) integrated multisensory events can capture attention efficiently, even in quite complex circumstances, due to their increased salience compared to unimodal events and can thus improve search accuracy; and (4) within a multisensory object, endogenous attention can spread from one modality to another in an exogenous manner. PMID:26546734
Bilateral Theta-Burst TMS to Influence Global Gestalt Perception
Ritzinger, Bernd; Huberle, Elisabeth; Karnath, Hans-Otto
2012-01-01
While early and higher visual areas along the ventral visual pathway in the inferotemporal cortex are critical for the recognition of individual objects, the neural representation of human perception of complex global visual scenes remains under debate. Stroke patients with a selective deficit in the perception of a complex global Gestalt with intact recognition of individual objects – a deficit termed simultanagnosia – greatly helped to study this question. Interestingly, simultanagnosia typically results from bilateral lesions of the temporo-parietal junction (TPJ). The present study aimed to verify the relevance of this area for human global Gestalt perception. We applied continuous theta-burst TMS either unilaterally (left or right) or bilaterally and simultaneously over the TPJ. Healthy subjects were presented with hierarchically organized visual stimuli that allowed parametric degradation of the object at the global level. Identification of the global Gestalt was significantly modulated only in the bilateral TPJ stimulation condition. Our results strengthen the view that global Gestalt perception in the human brain involves TPJ and is co-dependent on both hemispheres. PMID:23110106
Hsu, Nina S.; Jaeggi, Susanne M.; Novick, Jared M.
2017-01-01
Regions within the left inferior frontal gyrus (LIFG) have simultaneously been implicated in syntactic processing and cognitive control. Accounts attempting to unify LIFG’s function hypothesize that, during comprehension, cognitive control resolves conflict between incompatible representations of sentence meaning. Some studies demonstrate co-localized activity within LIFG for syntactic and non-syntactic conflict resolution, suggesting domain-generality, but others show non-overlapping activity, suggesting domain-specific cognitive control and/or regions that respond uniquely to syntax. We propose however that examining exclusive activation sites for certain contrasts creates a false dichotomy: both domain-general and domain-specific neural machinery must coordinate to facilitate conflict resolution across domains. Here, subjects completed four diverse tasks involving conflict —one syntactic, three non-syntactic— while undergoing fMRI. Though LIFG consistently activated within individuals during conflict processing, functional connectivity analyses revealed task-specific coordination with distinct brain networks. Thus, LIFG may function as a conflict-resolution “hub” that cooperates with specialized neural systems according to information content. PMID:28110105
H-SLAM: Rao-Blackwellized Particle Filter SLAM Using Hilbert Maps.
Vallicrosa, Guillem; Ridao, Pere
2018-05-01
Occupancy Grid maps provide a probabilistic representation of space which is important for a variety of robotic applications like path planning and autonomous manipulation. In this paper, a SLAM (Simultaneous Localization and Mapping) framework capable of obtaining this representation online is presented. The H-SLAM (Hilbert Maps SLAM) is based on the Hilbert Map representation and uses a Particle Filter to represent the robot state. Hilbert Maps offer a continuous probabilistic representation with a small memory footprint. We present a series of experimental results carried out both in simulation and with real AUVs (Autonomous Underwater Vehicles). These results demonstrate that our approach represents the environment more consistently while remaining capable of running online.
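The abstract does not give implementation details, so the following is only a generic Rao-Blackwellized particle-filter skeleton in Python, with hypothetical motion and measurement models standing in for the Hilbert-map machinery; it shows the predict-weight-resample loop in which each particle carries its own pose and map.

    import numpy as np

    rng = np.random.default_rng(1)
    N = 100                                   # number of particles

    def motion_model(pose, control):
        # Hypothetical 2-D odometry model with additive Gaussian noise.
        return pose + control + rng.normal(0.0, 0.05, size=2)

    def measurement_likelihood(pose, particle_map, observation):
        # Placeholder: in H-SLAM this would query the particle's Hilbert map;
        # here we simply score a toy "range to origin" sensor reading.
        predicted = np.linalg.norm(pose)
        return np.exp(-0.5 * ((observation - predicted) / 0.1) ** 2)

    poses = np.zeros((N, 2))                  # each particle's pose estimate
    maps = [dict() for _ in range(N)]         # each particle's (toy) map
    weights = np.full(N, 1.0 / N)

    def step(control, observation):
        global poses, weights, maps
        # 1. Predict: sample each particle's new pose from the motion model.
        poses = np.array([motion_model(p, control) for p in poses])
        # 2. Weight: score each particle's pose/map against the observation.
        weights *= np.array([measurement_likelihood(p, m, observation)
                             for p, m in zip(poses, maps)])
        weights /= weights.sum()
        # 3. Resample when the effective sample size collapses.
        if 1.0 / np.sum(weights ** 2) < N / 2:
            idx = rng.choice(N, size=N, p=weights)
            poses, maps = poses[idx], [maps[i] for i in idx]
            weights = np.full(N, 1.0 / N)
        # 4. (Per-particle map updates with the new observation would happen here.)

    step(control=np.array([0.1, 0.0]), observation=0.12)
    print("weighted pose estimate:", weights @ poses)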
Yu, Renping; Zhang, Han; An, Le; Chen, Xiaobo; Wei, Zhihui; Shen, Dinggang
2017-05-01
Brain functional network analysis has shown great potential in understanding brain functions and also in identifying biomarkers for brain diseases, such as Alzheimer's disease (AD) and its early stage, mild cognitive impairment (MCI). In these applications, accurate construction of a biologically meaningful brain network is critical. Sparse learning has been widely used for brain network construction; however, its l1-norm penalty simply penalizes each edge of a brain network equally, without considering the original connectivity strength, which is one of the most important inherent linkwise characteristics. Besides, based on the similarity of the linkwise connectivity, the brain network shows prominent group structure (i.e., a set of edges sharing similar attributes). In this article, we propose a novel brain functional network modeling framework with a "connectivity strength-weighted sparse group constraint." In particular, the network modeling can be optimized by considering both raw connectivity strength and its group structure, without losing the merit of sparsity. Our proposed method is applied to MCI classification, a challenging task for early AD diagnosis. Experimental results based on the resting-state functional MRI, from 50 MCI patients and 49 healthy controls, show that our proposed method is more effective (i.e., achieving a significantly higher classification accuracy, 84.8%) than other competing methods (e.g., sparse representation, accuracy = 65.6%). Post hoc inspection of the informative features further shows more biologically meaningful brain functional connectivities obtained by our proposed method. Hum Brain Mapp 38:2370-2383, 2017. © 2017 Wiley Periodicals, Inc.
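For contrast with the proposed weighted sparse group constraint, here is a minimal sketch of the plain l1-penalized (sparse representation) baseline mentioned in the abstract: each region's time series is regressed on all others with a Lasso, so every edge is penalized equally. The data, alpha value and symmetrization step are illustrative assumptions, not the authors' settings.

    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(2)
    n_timepoints, n_rois = 180, 20
    ts = rng.standard_normal((n_timepoints, n_rois))   # hypothetical ROI time series

    network = np.zeros((n_rois, n_rois))
    for i in range(n_rois):
        target = ts[:, i]
        others = np.delete(ts, i, axis=1)
        # The l1 penalty enforces sparsity; every edge is penalized equally here,
        # which is exactly the limitation the weighted sparse group model addresses.
        model = Lasso(alpha=0.1).fit(others, target)
        network[i, np.arange(n_rois) != i] = model.coef_

    # Symmetrize so the matrix can be used as an undirected functional network.
    network = (network + network.T) / 2
    print("number of nonzero edges:", np.count_nonzero(network) // 2)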
Neural representations of the sense of self
Klemm, William R.
2011-01-01
The brain constructs representations of what is sensed and thought about in the form of nerve impulses that propagate in circuits and network assemblies (Circuit Impulse Patterns, CIPs). CIP representations of which humans are consciously aware occur in the context of a sense of self. Thus, research on mechanisms of consciousness might benefit from a focus on how a conscious sense of self is represented in the brain. Like all senses, the sense of self must be contained in patterns of nerve impulses. Unlike the traditional senses that are registered by impulse flow in relatively simple, pauci-synaptic projection pathways, the sense of self is a system-level phenomenon that may be generated by impulse patterns in widely distributed complex and interacting circuits. The problem for researchers then is to identify the CIPs that are unique to conscious experience. Also likely to be of great relevance to constructing the representation of self are the coherence shifts in activity timing relations among the circuits. Consider that an embodied sense of self is generated and contained as unique combinatorial temporal patterns across multiple neurons in each circuit that contributes to constructing the sense of self. As with other kinds of CIPs, those representing the sense of self can be learned from experience, stored in memory, modified by subsequent experiences, and expressed in the form of decisions, choices, and commands. These CIPs are proposed here to be the actual physical basis for conscious thought and the sense of self. When active in wakefulness or dream states, the CIP representations of self act as an agent of the brain, metaphorically as an avatar. Because the selfhood CIP patterns may only have to represent the self and not directly represent the inner and outer worlds of the embodied brain, the self representation should have more degrees of freedom than the subconscious mind and may therefore have some capacity for a free-will mind of its own. Several lines of evidence for this theory are reviewed. Suggested new research includes identifying distinct combinatorially coded impulse patterns and their temporal coherence shifts in defined circuitry, such as neocortical microcolumns. This task might be facilitated by identifying the micro-topography of field-potential oscillatory coherences among various regions and between different frequencies associated with specific conscious mentation. Other approaches can include identifying the changes in discrete conscious operations produced by focal trans-cranial magnetic stimulation. PMID:21826192
Learning-Induced Plasticity in Medial Prefrontal Cortex Predicts Preference Malleability
Garvert, Mona M.; Moutoussis, Michael; Kurth-Nelson, Zeb; Behrens, Timothy E.J.; Dolan, Raymond J.
2015-01-01
Learning induces plasticity in neuronal networks. As neuronal populations contribute to multiple representations, we reasoned plasticity in one representation might influence others. We used human fMRI repetition suppression to show that plasticity induced by learning another individual’s values impacts upon a value representation for oneself in medial prefrontal cortex (mPFC), a plasticity also evident behaviorally in a preference shift. We show this plasticity is driven by a striatal “prediction error,” signaling the discrepancy between the other’s choice and a subject’s own preferences. Thus, our data highlight that mPFC encodes agent-independent representations of subjective value, such that prediction errors simultaneously update multiple agents’ value representations. As the resulting change in representational similarity predicts interindividual differences in the malleability of subjective preferences, our findings shed mechanistic light on complex human processes such as the powerful influence of social interaction on beliefs and preferences. PMID:25611512
Wen, Haiguang; Shi, Junxing; Chen, Wei; Liu, Zhongming
2018-02-28
The brain represents visual objects with topographic cortical patterns. To address how distributed visual representations enable object categorization, we established predictive encoding models based on a deep residual network, and trained them to predict cortical responses to natural movies. Using this predictive model, we mapped human cortical representations to 64,000 visual objects from 80 categories with high throughput and accuracy. Such representations covered both the ventral and dorsal pathways, reflected multiple levels of object features, and preserved semantic relationships between categories. In the entire visual cortex, object representations were organized into three clusters of categories: biological objects, non-biological objects, and background scenes. In a finer scale specific to each cluster, object representations revealed sub-clusters for further categorization. Such hierarchical clustering of category representations was mostly contributed by cortical representations of object features from middle to high levels. In summary, this study demonstrates a useful computational strategy to characterize the cortical organization and representations of visual features for rapid categorization.
What does semantic tiling of the cortex tell us about semantics?
Barsalou, Lawrence W
2017-10-01
Recent use of voxel-wise modeling in cognitive neuroscience suggests that semantic maps tile the cortex. Although this impressive research establishes distributed cortical areas active during the conceptual processing that underlies semantics, it tells us little about the nature of this processing. While mapping concepts between Marr's computational and implementation levels to support neural encoding and decoding, this approach ignores Marr's algorithmic level, central for understanding the mechanisms that implement cognition, in general, and conceptual processing, in particular. Following decades of research in cognitive science and neuroscience, what do we know so far about the representation and processing mechanisms that implement conceptual abilities? Most basically, much is known about the mechanisms associated with: (1) feature and frame representations, (2) grounded, abstract, and linguistic representations, (3) knowledge-based inference, (4) concept composition, and (5) conceptual flexibility. Rather than explaining these fundamental representation and processing mechanisms, semantic tiles simply provide a trace of their activity over a relatively short time period within a specific learning context. Establishing the mechanisms that implement conceptual processing in the brain will require more than mapping it to cortical (and sub-cortical) activity, with process models from cognitive science likely to play central roles in specifying the intervening mechanisms. More generally, neuroscience will not achieve its basic goals until it establishes algorithmic-level mechanisms that contribute essential explanations to how the brain works, going beyond simply establishing the brain areas that respond to various task conditions. Copyright © 2017 Elsevier Ltd. All rights reserved.
A novel fiber-free technique for brain activity imaging in multiple freely behaving mice
NASA Astrophysics Data System (ADS)
Inagaki, Shigenori; Agetsuma, Masakazu; Nagai, Takeharu
2018-02-01
Brain functions and related psychiatric disorders have been investigated by recording electrophysiological field potentials. Conventional methods require a fiber-based apparatus connected to the brain, which hampers simultaneous measurement in multiple animals (e.g., because of tangled fibers). Here, we propose a fiber-free recording technique in conjunction with a ratiometric bioluminescent voltage indicator. Our method allows investigation of electrophysiological field potential dynamics in multiple freely behaving animals simultaneously over a long time period. This fiber-free technique therefore opens the way to investigating mechanisms of brain function that govern social behaviors and animal-to-animal interaction.
Mather, Mara; Clewett, David; Sakaki, Michiko; Harley, Carolyn W
2016-01-01
Emotional arousal enhances perception and memory of high-priority information but impairs processing of other information. Here, we propose that, under arousal, local glutamate levels signal the current strength of a representation and interact with norepinephrine (NE) to enhance high priority representations and out-compete or suppress lower priority representations. In our "glutamate amplifies noradrenergic effects" (GANE) model, high glutamate at the site of prioritized representations increases local NE release from the locus coeruleus (LC) to generate "NE hotspots." At these NE hotspots, local glutamate and NE release are mutually enhancing and amplify activation of prioritized representations. In contrast, arousal-induced LC activity inhibits less active representations via two mechanisms: 1) Where there are hotspots, lateral inhibition is amplified; 2) Where no hotspots emerge, NE levels are only high enough to activate low-threshold inhibitory adrenoreceptors. Thus, LC activation promotes a few hotspots of excitation in the context of widespread suppression, enhancing high priority representations while suppressing the rest. Hotspots also help synchronize oscillations across neural ensembles transmitting high-priority information. Furthermore, brain structures that detect stimulus priority interact with phasic NE release to preferentially route such information through large-scale functional brain networks. A surge of NE before, during, or after encoding enhances synaptic plasticity at NE hotspots, triggering local protein synthesis processes that enhance selective memory consolidation. Together, these noradrenergic mechanisms promote selective attention and memory under arousal. GANE not only reconciles apparently contradictory findings in the emotion-cognition literature but also extends previous influential theories of LC neuromodulation by proposing specific mechanisms for how LC-NE activity increases neural gain.
ERIC Educational Resources Information Center
Demir, Özlem Ece; Prado, Jérôme; Booth, James R.
2015-01-01
We examined the relation of parental socioeconomic status (SES) to the neural bases of subtraction in school-age children (9- to 12-year-olds). We independently localized brain regions subserving verbal versus visuo-spatial representations to determine whether the parental SES-related differences in children's reliance on these neural…
ERIC Educational Resources Information Center
Perianez, Jose A.; Barcelo, Francisco
2009-01-01
Task-cueing studies suggest that the updating of sensory and task representations both contribute to behavioral task-switch costs [Forstmann, B. U., Brass, M., & Koch, I. (2007). "Methodological and empirical issues when dissociating cue-related from task-related processes in the explicit task-cuing procedure." "Psychological Research, 71"(4),…
Body Schematics: On the Role of the Body Schema in Embodied Lexical-Semantic Representations
ERIC Educational Resources Information Center
Rueschemeyer, Shirley-Ann; Pfeiffer, Christian; Bekkering, Harold
2010-01-01
Words denoting manipulable objects activate sensorimotor brain areas, likely reflecting action experience with the denoted objects. In particular, these sensorimotor lexical representations have been found to reflect the way in which an object is used. In the current paper we present data from two experiments (one behavioral and one neuroimaging)…
Executive Control in Bilingual Language Processing
ERIC Educational Resources Information Center
Rodriguez-Fornells, A.; Balaguer, R. De Deigo; Munte, T. F.
2006-01-01
Little is known in cognitive neuroscience about the brain mechanisms and brain representations involved in bilingual language processing. On the basis of previous studies on switching and bilingualism, it has been proposed that executive functions are engaged in the control and regulation of the languages in use. Here, we review the existing…
Variability in Cortical Representations of Speech Sound Perception
ERIC Educational Resources Information Center
Boatman, Dana F.
2007-01-01
Recent brain mapping studies have provided new insights into the cortical systems that mediate human speech perception. Electrocortical stimulation mapping (ESM) is a brain mapping method that is used clinically to localize cortical functions in neurosurgical patients. Recent ESM studies have yielded new insights into the cortical systems that…
Does Functional Neuroimaging Solve the Questions of Neurolinguistics?
ERIC Educational Resources Information Center
Sidtis, Diana Van Lancker
2006-01-01
Neurolinguistic research has been engaged in evaluating models of language using measures from brain structure and function, and/or in investigating brain structure and function with respect to language representation using proposed models of language. While the aphasiological strategy, which classifies aphasias based on performance modality and a…
Brain-Wide Maps of "Fos" Expression during Fear Learning and Recall
ERIC Educational Resources Information Center
Cho, Jin-Hyung; Rendall, Sam D.; Gray, Jesse M.
2017-01-01
"Fos" induction during learning labels neuronal ensembles in the hippocampus that encode a specific physical environment, revealing a memory trace. In the cortex and other regions, the extent to which "Fos" induction during learning reveals specific sensory representations is unknown. Here we generate high-quality brain-wide…
Malone, Patrick S; Glezer, Laurie S; Kim, Judy; Jiang, Xiong; Riesenhuber, Maximilian
2016-09-28
The neural substrates of semantic representation have been the subject of much controversy. The study of semantic representations is complicated by difficulty in disentangling perceptual and semantic influences on neural activity, as well as in identifying stimulus-driven, "bottom-up" semantic selectivity unconfounded by top-down task-related modulations. To address these challenges, we trained human subjects to associate pseudowords (TPWs) with various animal and tool categories. To decode semantic representations of these TPWs, we used multivariate pattern classification of fMRI data acquired while subjects performed a semantic oddball detection task. Crucially, the classifier was trained and tested on disjoint sets of TPWs, so that the classifier had to use the semantic information from the training set to correctly classify the test set. Animal and tool TPWs were successfully decoded based on fMRI activity in spatially distinct subregions of the left medial anterior temporal lobe (LATL). In addition, tools (but not animals) were successfully decoded from activity in the left inferior parietal lobule. The tool-selective LATL subregion showed greater functional connectivity with left inferior parietal lobule and ventral premotor cortex, indicating that each LATL subregion exhibits distinct patterns of connectivity. Our findings demonstrate category-selective organization of semantic representations in LATL into spatially distinct subregions, continuing the lateral-medial segregation of activation in posterior temporal cortex previously observed in response to images of animals and tools, respectively. Together, our results provide evidence for segregation of processing hierarchies for different classes of objects and the existence of multiple, category-specific semantic networks in the brain. The location and specificity of semantic representations in the brain are still widely debated. We trained human participants to associate specific pseudowords with various animal and tool categories, and used multivariate pattern classification of fMRI data to decode the semantic representations of the trained pseudowords. We found that: (1) animal and tool information was organized in category-selective subregions of medial left anterior temporal lobe (LATL); (2) tools, but not animals, were encoded in left inferior parietal lobe; and (3) LATL subregions exhibited distinct patterns of functional connectivity with category-related regions across cortex. Our findings suggest that semantic knowledge in LATL is organized in category-related subregions, providing evidence for the existence of multiple, category-specific semantic representations in the brain. Copyright © 2016 the authors 0270-6474/16/3610089-08$15.00/0.
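A minimal sketch of the cross-item decoding logic described above: a linear classifier is trained on patterns evoked by one set of (hypothetical) pseudowords and tested on a disjoint set, so correct classification requires generalizing over the semantic category rather than memorizing individual items. All data and parameters below are simulated assumptions, not the authors' pipeline.

    import numpy as np
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(3)
    n_voxels = 200
    # Hypothetical voxel patterns for 8 trained pseudowords (4 animal, 4 tool),
    # several repetitions each; labels: 0 = animal, 1 = tool.
    words = np.arange(8)
    labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])
    X, y, word_id = [], [], []
    for w, lab in zip(words, labels):
        proto = rng.standard_normal(n_voxels) + lab * 0.5   # category-related signal
        for _ in range(10):                                  # repetitions
            X.append(proto + rng.standard_normal(n_voxels))
            y.append(lab)
            word_id.append(w)
    X, y, word_id = np.array(X), np.array(y), np.array(word_id)

    # Crucial point from the abstract: train and test sets contain *different*
    # pseudowords, so the classifier must generalize over the semantic category.
    train_words = {0, 1, 4, 5}
    train = np.isin(word_id, list(train_words))
    clf = LinearSVC(max_iter=5000).fit(X[train], y[train])
    print("cross-item decoding accuracy:", clf.score(X[~train], y[~train]))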
Scholkmann, Felix; Holper, Lisa; Wolf, Ursula; Wolf, Martin
2013-11-27
Since the first demonstration of how to simultaneously measure brain activity using functional magnetic resonance imaging (fMRI) on two subjects about 10 years ago, a new paradigm in neuroscience is emerging: measuring brain activity from two or more people simultaneously, termed "hyperscanning". The hyperscanning approach has the potential to reveal inter-personal brain mechanisms underlying interaction-mediated brain-to-brain coupling. These mechanisms are engaged during real social interactions, and cannot be captured using single-subject recordings. In particular, functional near-infrared imaging (fNIRI) hyperscanning is a promising new method, offering a cost-effective, easy to apply and reliable technology to measure inter-personal interactions in a natural context. In this short review we report on fNIRI hyperscanning studies published so far and summarize opportunities and challenges for future studies.
The Hierarchical Cortical Organization of Human Speech Processing
de Heer, Wendy A.; Huth, Alexander G.; Griffiths, Thomas L.
2017-01-01
Speech comprehension requires that the brain extract semantic meaning from the spectral features represented at the cochlea. To investigate this process, we performed an fMRI experiment in which five men and two women passively listened to several hours of natural narrative speech. We then used voxelwise modeling to predict BOLD responses based on three different feature spaces that represent the spectral, articulatory, and semantic properties of speech. The amount of variance explained by each feature space was then assessed using a separate validation dataset. Because some responses might be explained equally well by more than one feature space, we used a variance partitioning analysis to determine the fraction of the variance that was uniquely explained by each feature space. Consistent with previous studies, we found that speech comprehension involves hierarchical representations starting in primary auditory areas and moving laterally on the temporal lobe: spectral features are found in the core of A1, mixtures of spectral and articulatory in STG, mixtures of articulatory and semantic in STS, and semantic in STS and beyond. Our data also show that both hemispheres are equally and actively involved in speech perception and interpretation. Further, responses as early in the auditory hierarchy as in STS are more correlated with semantic than spectral representations. These results illustrate the importance of using natural speech in neurolinguistic research. Our methodology also provides an efficient way to simultaneously test multiple specific hypotheses about the representations of speech without using block designs and segmented or synthetic speech. SIGNIFICANCE STATEMENT To investigate the processing steps performed by the human brain to transform natural speech sound into meaningful language, we used models based on a hierarchical set of speech features to predict BOLD responses of individual voxels recorded in an fMRI experiment while subjects listened to natural speech. Both cerebral hemispheres were actively involved in speech processing in large and equal amounts. Also, the transformation from spectral features to semantic elements occurs early in the cortical speech-processing stream. Our experimental and analytical approaches are important alternatives and complements to standard approaches that use segmented speech and block designs, which report more laterality in speech processing and associated semantic processing to higher levels of cortex than reported here. PMID:28588065
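A schematic Python example of voxelwise encoding with variance partitioning, under simulated data: two feature spaces are fit with ridge regression, and the variance uniquely explained by each space is estimated as the joint model's held-out R-squared minus the R-squared of the other space alone. Feature dimensions and regularization below are assumptions for illustration only.

    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(4)
    n_train, n_test = 300, 100
    spectral = rng.standard_normal((n_train + n_test, 10))   # feature space A
    semantic = rng.standard_normal((n_train + n_test, 15))   # feature space B
    # Hypothetical voxel driven mostly by the semantic features, plus noise.
    bold = (semantic @ rng.standard_normal(15)
            + 0.3 * (spectral @ rng.standard_normal(10))
            + rng.standard_normal(n_train + n_test))

    def r2(features):
        # Held-out R^2 of a ridge encoding model for one feature space.
        model = Ridge(alpha=1.0).fit(features[:n_train], bold[:n_train])
        resid = bold[n_train:] - model.predict(features[n_train:])
        return 1 - resid.var() / bold[n_train:].var()

    r2_spec = r2(spectral)
    r2_sem = r2(semantic)
    r2_joint = r2(np.hstack([spectral, semantic]))
    # Variance partitioning: variance uniquely explained by one feature space
    # is the joint model's R^2 minus the R^2 of the other space alone.
    print("unique to spectral:", r2_joint - r2_sem)
    print("unique to semantic:", r2_joint - r2_spec)
    print("shared:", r2_spec + r2_sem - r2_joint)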
Song, Qi; Wu, Xiaodong; Liu, Yunlong; Smith, Mark; Buatti, John; Sonka, Milan
2009-01-01
We present a novel method for globally optimal surface segmentation of multiple mutually interacting objects, incorporating both edge and shape knowledge in a 3-D graph-theoretic approach. Hard surface-interaction constraints are enforced in the interacting regions, preserving the geometric relationship of those partially interacting surfaces. A soft smoothness term encoding a priori shape compliance is introduced into the energy functional to provide shape guidance. The globally optimal surfaces can be obtained simultaneously by solving a maximum-flow problem on an arc-weighted graph representation. By representing the segmentation problem in an arc-weighted graph, one can incorporate a wider spectrum of constraints into the formulation, thus increasing segmentation accuracy and robustness in volumetric image data. To the best of our knowledge, our method is the first attempt to introduce the arc-weighted graph representation into the graph-searching approach for simultaneous segmentation of multiple partially interacting objects, which admits a globally optimal solution in low-order polynomial time. Our new approach was applied to the simultaneous surface detection of the bladder and prostate. The result was quite encouraging in spite of the low saliency of the bladder and prostate in CT images.
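A toy illustration of casting segmentation as a maximum-flow/minimum-cut problem, using networkx on a five-pixel 1-D "image"; this shows only the general idea of terminal (data) arcs and neighborhood (smoothness) arcs, not the arc-weighted multi-surface construction of the paper. Intensities and capacities are invented for illustration.

    import networkx as nx

    # Toy 1-D "image" of 5 pixels; bright pixels should end up on the object (source) side.
    intensity = [0.9, 0.8, 0.6, 0.2, 0.1]
    G = nx.DiGraph()
    for i, v in enumerate(intensity):
        # Terminal arcs: data term, cheap to keep bright pixels with the source.
        G.add_edge("s", i, capacity=v)
        G.add_edge(i, "t", capacity=1.0 - v)
    for i in range(len(intensity) - 1):
        # Neighborhood arcs: smoothness term discouraging cuts between neighbours.
        G.add_edge(i, i + 1, capacity=0.3)
        G.add_edge(i + 1, i, capacity=0.3)

    cut_value, (source_side, sink_side) = nx.minimum_cut(G, "s", "t")
    print("cut cost:", cut_value)
    print("object pixels:", sorted(p for p in source_side if p != "s"))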
Orthographic recognition in late adolescents: an assessment through event-related brain potentials.
González-Garrido, Andrés Antonio; Gómez-Velázquez, Fabiola Reveca; Rodríguez-Santillán, Elizabeth
2014-04-01
Reading speed and efficiency are achieved through the automatic recognition of written words. Difficulties in learning and recognizing the orthography of words can arise despite reiterative exposure to texts. This study aimed to investigate, in native Spanish-speaking late adolescents, how different levels of orthographic knowledge might result in behavioral and event-related brain potential differences during the recognition of orthographic errors. Forty-five healthy high school students were selected and divided into 3 equal groups (High, Medium, Low) according to their performance on a 5-test battery of orthographic knowledge. All participants performed an orthographic recognition task consisting of the sequential presentation of a picture (object, fruit, or animal) followed by a correctly, or incorrectly, written word (orthographic mismatch) that named the picture just shown. Electroencephalogram (EEG) recording took place simultaneously. Behavioral results showed that the Low group had a significantly lower number of correct responses and increased reaction times while processing orthographical errors. Tests showed significant positive correlations between higher performance on the experimental task and faster and more accurate reading. The P150 and P450 components showed higher voltages in the High group when processing orthographic errors, whereas N170 seemed less lateralized to the left hemisphere in the lower orthographic performers. Also, trials with orthographic errors elicited a frontal P450 component that was only evident in the High group. The present results show that higher levels of orthographic knowledge correlate with high reading performance, likely because of faster and more accurate perceptual processing, better visual orthographic representations, and top-down supervision, as the event-related brain potential findings seem to suggest.
Brands, Ingrid M H; Wade, Derick T; Stapert, Sven Z; van Heugten, Caroline M
2012-09-01
To describe a new model of the adaptation process following acquired brain injury, based on the patient's goals, the patient's abilities and the emotional response to the changes and the possible discrepancy between goals and achievements. The process of adaptation after acquired brain injury is characterized by a continuous interaction of two processes: achieving maximal restoration of function and adjusting to the alterations and losses that occur in the various domains of functioning. Consequently, adaptation requires a balanced mix of restoration-oriented coping and loss-oriented coping. The commonly used framework to explain adaptation and coping, 'The Theory of Stress and Coping' of Lazarus and Folkman, does not capture this interactive duality. This model additionally considers theories concerned with self-regulation of behaviour, self-awareness and self-efficacy, and with the setting and achievement of goals. THE TWO-DIMENSIONAL MODEL: Our model proposes the simultaneous and continuous interaction of two pathways; goal pursuit (short term and long term) or revision as a result of success and failure in reducing distance between current state and expected future state and an affective response that is generated by the experienced goal-performance discrepancies. This affective response, in turn, influences the goals set. This two-dimensional representation covers the processes mentioned above: restoration of function and consideration of long-term limitations. We propose that adaptation centres on readjustment of long-term goals to new achievable but desired and important goals, and that this adjustment underlies re-establishing emotional stability. We discuss how the proposed model is related to actual rehabilitation practice.
Li, Yuanqing; Wang, Guangyi; Long, Jinyi; Yu, Zhuliang; Huang, Biao; Li, Xiaojian; Yu, Tianyou; Liang, Changhong; Li, Zheng; Sun, Pei
2011-01-01
One of the central questions in cognitive neuroscience is the precise neural representation, or brain pattern, associated with a semantic category. In this study, we explored the influence of audiovisual stimuli on the brain patterns of concepts or semantic categories through a functional magnetic resonance imaging (fMRI) experiment. We used a pattern search method to extract brain patterns corresponding to two semantic categories: "old people" and "young people." These brain patterns were elicited by semantically congruent audiovisual, semantically incongruent audiovisual, unimodal visual, and unimodal auditory stimuli belonging to the two semantic categories. We calculated the reproducibility index, which measures the similarity of the patterns within the same category. We also decoded the semantic categories from these brain patterns. The decoding accuracy reflects the discriminability of the brain patterns between the two categories. The results showed that both the reproducibility index of brain patterns and the decoding accuracy were significantly higher for semantically congruent audiovisual stimuli than for unimodal visual and unimodal auditory stimuli, while the semantically incongruent stimuli did not elicit brain patterns with a significantly higher reproducibility index or decoding accuracy. Thus, the semantically congruent audiovisual stimuli enhanced the within-class reproducibility of brain patterns and the between-class discriminability of brain patterns, and facilitated neural representations of semantic categories or concepts. Furthermore, we analyzed the brain activity in the superior temporal sulcus and middle temporal gyrus (STS/MTG). The strength of the fMRI signal and the reproducibility index were enhanced by the semantically congruent audiovisual stimuli. Our results support the use of the reproducibility index as a potential tool to supplement the fMRI signal amplitude for evaluating multimodal integration.
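A minimal sketch of a correlation-based reproducibility index, computed here as the mean pairwise Pearson correlation across repetitions of the same category for simulated multi-voxel patterns; the exact definition used in the study may differ, and the data and noise levels are assumptions.

    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(5)

    def reproducibility_index(patterns):
        # Mean pairwise Pearson correlation across repetitions of one condition.
        pairs = combinations(range(patterns.shape[0]), 2)
        return np.mean([np.corrcoef(patterns[i], patterns[j])[0, 1] for i, j in pairs])

    # Hypothetical multi-voxel patterns: 12 repetitions x 300 voxels per condition.
    signal = rng.standard_normal(300)
    congruent = signal + 0.5 * rng.standard_normal((12, 300))   # less noisy patterns
    unimodal = signal + 1.5 * rng.standard_normal((12, 300))    # noisier patterns

    print("reproducibility, congruent audiovisual:", reproducibility_index(congruent))
    print("reproducibility, unimodal:", reproducibility_index(unimodal))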
The proactive brain: memory for predictions
Bar, Moshe
2009-01-01
It is proposed that the human brain is proactive in that it continuously generates predictions that anticipate the relevant future. In this proposal, analogies are derived from elementary information that is extracted rapidly from the input, to link that input with the representations that exist in memory. Finding an analogical link results in the generation of focused predictions via associative activation of representations that are relevant to this analogy, in the given context. Predictions in complex circumstances, such as social interactions, combine multiple analogies. Such predictions need not be created afresh in new situations, but rather rely on existing scripts in memory, which are the result of real as well as of previously imagined experiences. This cognitive neuroscience framework provides a new hypothesis with which to consider the purpose of memory, and can help explain a variety of phenomena, ranging from recognition to first impressions, and from the brain's ‘default mode’ to a host of mental disorders. PMID:19528004
Applying Current Concepts in Pain-Related Brain Science to Dance Rehabilitation.
Wallwork, Sarah B; Bellan, Valeria; Moseley, G Lorimer
2017-03-01
Dance involves exemplary sensory-motor control, which is subserved by sophisticated neural processing at the spinal cord and brain level. Such neural processing is altered in the presence of nociception and pain, and the adaptations within the central nervous system that are known to occur with persistent nociception or pain have clear implications for movement and, indeed, risk of further injury. Recent rapid advances in our understanding of the brain's representation of the body and the role of cortical representations, or "neurotags," in bodily protection and regulation have given rise to new strategies that are gaining traction in sports medicine. Those strategies are built on the principles that govern the operation of neurotags and focus on minimizing the impact of pain, injury, and immobilization on movement control and optimal performance. Here we apply empirical evidence from the chronic pain clinical neurosciences to introduce new opportunities for rehabilitation after dance injury.
Familiarity promotes the blurring of self and other in the neural representation of threat
Beckes, Lane; Hasselmo, Karen
2013-01-01
Neurobiological investigations of empathy often support an embodied simulation account. Using functional magnetic resonance imaging (fMRI), we monitored statistical associations between brain activations indicating self-focused threat to those indicating threats to a familiar friend or an unfamiliar stranger. Results in regions such as the anterior insula, putamen and supramarginal gyrus indicate that self-focused threat activations are robustly correlated with friend-focused threat activations but not stranger-focused threat activations. These results suggest that one of the defining features of human social bonding may be increasing levels of overlap between neural representations of self and other. This article presents a novel and important methodological approach to fMRI empathy studies, which informs how differences in brain activation can be detected in such studies and how covariate approaches can provide novel and important information regarding the brain and empathy. PMID:22563005
Cichy, Radoslaw Martin; Khosla, Aditya; Pantazis, Dimitrios; Torralba, Antonio; Oliva, Aude
2016-01-01
The complex multi-stage architecture of cortical visual pathways provides the neural basis for efficient visual object recognition in humans. However, the stage-wise computations therein remain poorly understood. Here, we compared temporal (magnetoencephalography) and spatial (functional MRI) visual brain representations with representations in an artificial deep neural network (DNN) tuned to the statistics of real-world visual recognition. We showed that the DNN captured the stages of human visual processing in both time and space from early visual areas towards the dorsal and ventral streams. Further investigation of crucial DNN parameters revealed that while model architecture was important, training on real-world categorization was necessary to enforce spatio-temporal hierarchical relationships with the brain. Together our results provide an algorithmically informed view on the spatio-temporal dynamics of visual object recognition in the human visual brain. PMID:27282108
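A minimal representational-similarity-analysis sketch of the kind of model-brain comparison described above: correlation-distance RDMs are built from hypothetical DNN-layer and brain-ROI activation patterns and compared via Spearman correlation of their condensed upper triangles. The data are simulated and the metric choices are assumptions, not the authors' exact procedure.

    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    rng = np.random.default_rng(6)
    n_images = 40
    dnn_layer = rng.standard_normal((n_images, 512))    # hypothetical unit activations
    brain_roi = rng.standard_normal((n_images, 200))    # hypothetical voxel patterns

    # Representational dissimilarity matrices (condensed upper triangles),
    # using correlation distance between image-wise activation patterns.
    rdm_dnn = pdist(dnn_layer, metric="correlation")
    rdm_brain = pdist(brain_roi, metric="correlation")

    rho, p = spearmanr(rdm_dnn, rdm_brain)
    print(f"DNN-brain representational similarity: rho = {rho:.3f}, p = {p:.3f}")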
Dynamic changes in brain activity during prism adaptation.
Luauté, Jacques; Schwartz, Sophie; Rossetti, Yves; Spiridon, Mona; Rode, Gilles; Boisson, Dominique; Vuilleumier, Patrik
2009-01-07
Prism adaptation does not only induce short-term sensorimotor plasticity, but also longer-term reorganization in the neural representation of space. We used event-related fMRI to study dynamic changes in brain activity during both early and prolonged exposure to visual prisms. Participants performed a pointing task before, during, and after prism exposure. Measures of trial-by-trial pointing errors and corrections allowed parametric analyses of brain activity as a function of performance. We show that during the earliest phase of prism exposure, anterior intraparietal sulcus was primarily implicated in error detection, whereas parieto-occipital sulcus was implicated in error correction. Cerebellum activity showed progressive increases during prism exposure, in accordance with a key role for spatial realignment. This time course further suggests that the cerebellum might promote neural changes in superior temporal cortex, which was selectively activated during the later phase of prism exposure and could mediate the effects of prism adaptation on cognitive spatial representations.
Beyond Natural Numbers: Negative Number Representation in Parietal Cortex
Blair, Kristen P.; Rosenberg-Lee, Miriam; Tsang, Jessica M.; Schwartz, Daniel L.; Menon, Vinod
2012-01-01
Unlike natural numbers, negative numbers do not have natural physical referents. How does the brain represent such abstract mathematical concepts? Two competing hypotheses regarding representational systems for negative numbers are a rule-based model, in which symbolic rules are applied to negative numbers to translate them into positive numbers when assessing magnitudes, and an expanded magnitude model, in which negative numbers have a distinct magnitude representation. Using an event-related functional magnetic resonance imaging design, we examined brain responses in 22 adults while they performed magnitude comparisons of negative and positive numbers that were quantitatively near (difference <4) or far apart (difference >6). Reaction times (RTs) for negative numbers were slower than those for positive numbers, and both showed a distance effect whereby near pairs took longer to compare. A network of parietal, frontal, and occipital regions was differentially engaged by negative numbers. Specifically, compared to positive numbers, negative number processing resulted in greater activation bilaterally in the intraparietal sulcus (IPS), middle frontal gyrus, and inferior lateral occipital cortex. Representational similarity analysis revealed that neural responses in the IPS were more differentiated among positive numbers than among negative numbers, and greater differentiation among negative numbers was associated with faster RTs. Our findings indicate that despite negative numbers engaging the IPS more strongly, the underlying neural representations are less distinct than those of positive numbers. We discuss our findings in the context of the two theoretical models of negative number processing and demonstrate how multivariate approaches can provide novel insights into abstract number representation. PMID:22363276
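To make the multivariate logic concrete, here is a hedged sketch of one way to quantify "differentiation" among condition patterns (mean pairwise correlation distance per subject) and relate it to reaction times across subjects; the data are simulated and this is not the authors' exact analysis.

    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    rng = np.random.default_rng(7)
    n_subjects, n_numbers, n_voxels = 22, 6, 150

    differentiation, mean_rt = [], []
    for _ in range(n_subjects):
        # Hypothetical IPS patterns for 6 negative-number conditions in one subject.
        patterns = rng.standard_normal((n_numbers, n_voxels))
        # Differentiation: mean pairwise correlation distance between conditions.
        differentiation.append(pdist(patterns, metric="correlation").mean())
        mean_rt.append(rng.normal(700, 50))              # hypothetical mean RT in ms

    rho, p = spearmanr(differentiation, mean_rt)
    print(f"differentiation vs RT: rho = {rho:.2f}, p = {p:.2f}")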
Lord, Louis-David; Stevner, Angus B.; Kringelbach, Morten L.
2017-01-01
To survive in an ever-changing environment, the brain must seamlessly integrate a rich stream of incoming information into coherent internal representations that can then be used to efficiently plan for action. The brain must, however, balance its ability to integrate information from various sources with a complementary capacity to segregate information into modules which perform specialized computations in local circuits. Importantly, evidence suggests that imbalances in the brain's ability to bind together and/or segregate information over both space and time is a common feature of several neuropsychiatric disorders. Most studies have, however, until recently strictly attempted to characterize the principles of integration and segregation in static (i.e. time-invariant) representations of human brain networks, hence disregarding the complex spatio-temporal nature of these processes. In the present Review, we describe how the emerging discipline of whole-brain computational connectomics may be used to study the causal mechanisms of the integration and segregation of information on behaviourally relevant timescales. We emphasize how novel methods from network science and whole-brain computational modelling can expand beyond traditional neuroimaging paradigms and help to uncover the neurobiological determinants of the abnormal integration and segregation of information in neuropsychiatric disorders. This article is part of the themed issue ‘Mathematical methods in medicine: neuroscience, cardiology and pathology’. PMID:28507228
Hausfeld, Lars; Riecke, Lars; Formisano, Elia
2018-06-01
Often, in everyday life, we encounter auditory scenes comprising multiple simultaneous sounds and succeed to selectively attend to only one sound, typically the most relevant for ongoing behavior. Studies using basic sounds and two-talker stimuli have shown that auditory selective attention aids this by enhancing the neural representations of the attended sound in auditory cortex. It remains unknown, however, whether and how this selective attention mechanism operates on representations of auditory scenes containing natural sounds of different categories. In this high-field fMRI study we presented participants with simultaneous voices and musical instruments while manipulating their focus of attention. We found an attentional enhancement of neural sound representations in temporal cortex - as defined by spatial activation patterns - at locations that depended on the attended category (i.e., voices or instruments). In contrast, we found that in frontal cortex the site of enhancement was independent of the attended category and the same regions could flexibly represent any attended sound regardless of its category. These results are relevant to elucidate the interacting mechanisms of bottom-up and top-down processing when listening to real-life scenes comprised of multiple sound categories. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
Vallianatou, Theodosia; Strittmatter, Nicole; Nilsson, Anna; Shariatgorji, Mohammadreza; Hamm, Gregory; Pereira, Marcela; Källback, Patrik; Svenningsson, Per; Karlgren, Maria; Goodwin, Richard J A; Andrén, Per E
2018-05-15
There is a high need to develop quantitative imaging methods capable of providing detailed brain localization information of several molecular species simultaneously. In addition, extensive information on the effect of the blood-brain barrier on the penetration, distribution and efficacy of neuroactive compounds is required. Thus, we have developed a mass spectrometry imaging method to visualize and quantify the brain distribution of drugs with varying blood-brain barrier permeability. With this approach, we were able to determine blood-brain barrier transport of different drugs and define the drug distribution in very small brain structures (e.g., choroid plexus) due to the high spatial resolution provided. Simultaneously, we investigated the effect of drug-drug interactions by inhibiting the membrane transporter multidrug resistance 1 protein. We propose that the described approach can serve as a valuable analytical tool during the development of neuroactive drugs, as it can provide physiologically relevant information often neglected by traditional imaging technologies. Copyright © 2018. Published by Elsevier Inc.
Cao, Yongqiang; Grossberg, Stephen; Markowitz, Jeffrey
2011-12-01
All primates depend for their survival on being able to rapidly learn about and recognize objects. Objects may be visually detected at multiple positions, sizes, and viewpoints. How does the brain rapidly learn and recognize objects while scanning a scene with eye movements, without causing a combinatorial explosion in the number of cells that are needed? How does the brain avoid the problem of erroneously classifying parts of different objects together at the same or different positions in a visual scene? In monkeys and humans, a key area for such invariant object category learning and recognition is the inferotemporal cortex (IT). A neural model is proposed to explain how spatial and object attention coordinate the ability of IT to learn invariant category representations of objects that are seen at multiple positions, sizes, and viewpoints. The model clarifies how interactions within a hierarchy of processing stages in the visual brain accomplish this. These stages include the retina, lateral geniculate nucleus, and cortical areas V1, V2, V4, and IT in the brain's What cortical stream, as they interact with spatial attention processes within the parietal cortex of the Where cortical stream. The model builds upon the ARTSCAN model, which proposed how view-invariant object representations are generated. The positional ARTSCAN (pARTSCAN) model proposes how the following additional processes in the What cortical processing stream also enable position-invariant object representations to be learned: IT cells with persistent activity, and a combination of normalizing object category competition and a view-to-object learning law which together ensure that unambiguous views have a larger effect on object recognition than ambiguous views. The model explains how such invariant learning can be fooled when monkeys, or other primates, are presented with an object that is swapped with another object during eye movements to foveate the original object. The swapping procedure is predicted to prevent the reset of spatial attention, which would otherwise keep the representations of multiple objects from being combined by learning. Li and DiCarlo (2008) have presented neurophysiological data from monkeys showing how unsupervised natural experience in a target swapping experiment can rapidly alter object representations in IT. The model quantitatively simulates the swapping data by showing how the swapping procedure fools the spatial attention mechanism. More generally, the model provides a unifying framework, and testable predictions in both monkeys and humans, for understanding object learning data using neurophysiological methods in monkeys, and spatial attention, episodic learning, and memory retrieval data using functional imaging methods in humans. Copyright © 2011 Elsevier Ltd. All rights reserved.
Yang, Ping; Fan, Chenggui; Wang, Min; Fogelson, Noa; Li, Ling
2017-08-15
Object identity and location are bound together to form a unique integration that is maintained and processed in visual working memory (VWM). Changes in task-irrelevant object location have been shown to impair the retrieval of memorial representations and the detection of object identity changes. However, the neural correlates of this cognitive process remain largely unknown. In the present study, we aim to investigate the underlying brain activation during object color change detection and the modulatory effects of changes in object location and VWM load. To this end we used simultaneous electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) recordings, which can reveal the neural activity with both high temporal and high spatial resolution. Subjects responded faster and with greater accuracy in the repeated compared to the changed object location condition, when a higher VWM load was utilized. These results support the spatial congruency advantage theory and suggest that it is more pronounced with higher VWM load. Furthermore, the spatial congruency effect was associated with larger posterior N1 activity, greater activation of the right inferior frontal gyrus (IFG) and less suppression of the right supramarginal gyrus (SMG), when object location was repeated compared to when it was changed. The ERP-fMRI integrative analysis demonstrated that the object location discrimination-related N1 component is generated in the right SMG. Copyright © 2017 Elsevier Inc. All rights reserved.
Lexical is as lexical does: computational approaches to lexical representation
Woollams, Anna M.
2015-01-01
In much of neuroimaging and neuropsychology, regions of the brain have been associated with ‘lexical representation’, with little consideration as to what this cognitive construct actually denotes. Within current computational models of word recognition, there are a number of different approaches to the representation of lexical knowledge. Structural lexical representations, found in original theories of word recognition, have been instantiated in modern localist models. However, such a representational scheme lacks neural plausibility in terms of economy and flexibility. Connectionist models have therefore adopted distributed representations of form and meaning. Semantic representations in connectionist models necessarily encode lexical knowledge. Yet when equipped with recurrent connections, connectionist models can also develop attractors for familiar forms that function as lexical representations. Current behavioural, neuropsychological and neuroimaging evidence shows a clear role for semantic information, but also suggests some modality- and task-specific lexical representations. A variety of connectionist architectures could implement these distributed functional representations, and further experimental and simulation work is required to discriminate between these alternatives. Future conceptualisations of lexical representations will therefore emerge from a synergy between modelling and neuroscience. PMID:25893204
Klein, Denise; Mok, Kelvin; Chen, Jen-Kai; Watkins, Kate E
2014-04-01
We examined the effects of learning a second language (L2) on brain structure. Cortical thickness was measured in the MRI datasets of 22 monolinguals and 66 bilinguals. Some bilingual subjects had learned both languages simultaneously (0-3 years) while some had learned their L2 after achieving proficiency in their first language during either early (4-7 years) or late childhood (8-13 years). Later acquisition of L2 was associated with significantly thicker cortex in the left inferior frontal gyrus (IFG) and thinner cortex in the right IFG. These effects were seen in the group comparisons of monolinguals, simultaneous bilinguals and early and late bilinguals. Within the bilingual group, significant correlations between age of acquisition of L2 and cortical thickness were seen in the same regions: cortical thickness correlated with age of acquisition positively in the left IFG and negatively in the right IFG. Interestingly, the monolinguals and simultaneous bilinguals did not differ in cortical thickness in any region. Our results show that learning a second language after gaining proficiency in the first language modifies brain structure in an age-dependent manner whereas simultaneous acquisition of two languages has no additional effect on brain development. Copyright © 2013 Elsevier Inc. All rights reserved.
Modeling functional neuroanatomy for an anatomy information system.
Niggemann, Jörg M; Gebert, Andreas; Schulz, Stefan
2008-01-01
Existing neuroanatomical ontologies, databases and information systems, such as the Foundational Model of Anatomy (FMA), represent outgoing connections from brain structures, but cannot represent the "internal wiring" of structures and as such, cannot distinguish between different independent connections from the same structure. Thus, a fundamental aspect of Neuroanatomy, the functional pathways and functional systems of the brain such as the pupillary light reflex system, is not adequately represented. This article identifies underlying anatomical objects which are the source of independent connections (collections of neurons) and uses these as basic building blocks to construct a model of functional neuroanatomy and its functional pathways. The basic representational elements of the model are unnamed groups of neurons or groups of neuron segments. These groups, their relations to each other, and the relations to the objects of macroscopic anatomy are defined. The resulting model can be incorporated into the FMA. The capabilities of the presented model are compared to the FMA and the Brain Architecture Management System (BAMS). Internal wiring as well as functional pathways can correctly be represented and tracked. This model bridges the gap between representations of single neurons and their parts on the one hand and representations of spatial brain structures and areas on the other hand. It is capable of drawing correct inferences on pathways in a nervous system. The object and relation definitions are related to the Open Biomedical Ontology effort and its relation ontology, so that this model can be further developed into an ontology of neuronal functional systems.
Optimal Couple Projections for Domain Adaptive Sparse Representation-based Classification.
Zhang, Guoqing; Sun, Huaijiang; Porikli, Fatih; Liu, Yazhou; Sun, Quansen
2017-08-29
In recent years, sparse representation-based classification (SRC) has become one of the most successful methods and has shown impressive performance in various classification tasks. However, when the training data have a different distribution than the testing data, the learned sparse representation may not be optimal, and the performance of SRC will be degraded significantly. To address this problem, in this paper, we propose an optimal couple projections for domain-adaptive sparse representation-based classification (OCPD-SRC) method, in which the discriminative features of data in the two domains are simultaneously learned with a dictionary that can succinctly represent the training and testing data in the projected space. OCPD-SRC is designed based on the decision rule of SRC, with the objective of learning coupled projection matrices and a common discriminative dictionary such that the between-class sparse reconstruction residuals of data from both domains are maximized, and the within-class sparse reconstruction residuals of data are minimized, in the projected low-dimensional space. Thus, the resulting representations fit SRC well and simultaneously have better discriminant ability. In addition, our method can be easily extended to multiple domains and can be kernelized to deal with the nonlinear structure of data. The optimal solution for the proposed method can be obtained efficiently using an alternating optimization procedure. Extensive experimental results on a series of benchmark databases show that our method is better than or comparable to many state-of-the-art methods.
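As a point of reference for the decision rule that OCPD-SRC builds on, the following is a minimal sketch of plain SRC: a test sample is sparsely coded over a dictionary of training samples and assigned to the class whose atoms give the smallest reconstruction residual. The function name, the lambda value, and the use of scikit-learn's Lasso solver are illustrative assumptions, not details from the paper; in OCPD-SRC the data would first be mapped by the learned coupled projections.

```python
# A minimal sketch of the sparse representation-based classification (SRC)
# decision rule. Names and the lambda value are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Lasso

def src_predict(D, labels, x, lam=0.01):
    """Classify x by the class whose atoms best reconstruct it sparsely.

    D      : (d, n) dictionary whose columns are training samples
    labels : (n,) class label of each dictionary column
    x      : (d,) test sample
    """
    # Sparse code of the test sample over the whole dictionary
    coder = Lasso(alpha=lam, fit_intercept=False, max_iter=5000)
    coder.fit(D, x)
    alpha = coder.coef_

    # Class-wise reconstruction residuals: keep only that class's coefficients
    residuals = {}
    for c in np.unique(labels):
        mask = (labels == c)
        residuals[c] = np.linalg.norm(x - D[:, mask] @ alpha[mask])
    return min(residuals, key=residuals.get)
```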
Attention during natural vision warps semantic representation across the human brain.
Çukur, Tolga; Nishimoto, Shinji; Huth, Alexander G; Gallant, Jack L
2013-06-01
Little is known about how attention changes the cortical representation of sensory information in humans. On the basis of neurophysiological evidence, we hypothesized that attention causes tuning changes to expand the representation of attended stimuli at the cost of unattended stimuli. To investigate this issue, we used functional magnetic resonance imaging to measure how semantic representation changed during visual search for different object categories in natural movies. We found that many voxels across occipito-temporal and fronto-parietal cortex shifted their tuning toward the attended category. These tuning shifts expanded the representation of the attended category and of semantically related, but unattended, categories, and compressed the representation of categories that were semantically dissimilar to the target. Attentional warping of semantic representation occurred even when the attended category was not present in the movie; thus, the effect was not a target-detection artifact. These results suggest that attention dynamically alters visual representation to optimize processing of behaviorally relevant objects during natural vision.
See it with feeling: affective predictions during object perception
Barrett, L.F.; Bar, Moshe
2009-01-01
People see with feeling. We ‘gaze’, ‘behold’, ‘stare’, ‘gape’ and ‘glare’. In this paper, we develop the hypothesis that the brain's ability to see in the present incorporates a representation of the affective impact of those visual sensations in the past. This representation makes up part of the brain's prediction of what the visual sensations stand for in the present, including how to act on them in the near future. The affective prediction hypothesis implies that responses signalling an object's salience, relevance or value do not occur as a separate step after the object is identified. Instead, affective responses support vision from the very moment that visual stimulation begins. PMID:19528014
Greater neural pattern similarity across repetitions is associated with better memory.
Xue, Gui; Dong, Qi; Chen, Chuansheng; Lu, Zhonglin; Mumford, Jeanette A; Poldrack, Russell A
2010-10-01
Repeated study improves memory, but the underlying neural mechanisms of this improvement are not well understood. Using functional magnetic resonance imaging and representational similarity analysis of brain activity, we found that, compared with forgotten items, subsequently remembered faces and words showed greater similarity in neural activation across multiple study episodes in many brain regions, including (but not limited to) the regions whose mean activities were correlated with subsequent memory. This result addresses a longstanding debate in the study of memory by showing that successful episodic memory encoding occurs when the same neural representations are more precisely reactivated across study episodes, rather than when patterns of activation are more variable across time.
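A minimal sketch of the kind of repetition-similarity measure described above: for each item, correlate its voxel patterns across repeated study presentations and compare the average similarity for later-remembered versus later-forgotten items. Array shapes, the dictionary structure, and the function names are illustrative assumptions, not the authors' analysis code.

```python
# Hedged sketch: quantify neural pattern similarity across repeated study
# presentations and relate it to subsequent memory.
import numpy as np

def repetition_similarity(patterns):
    """patterns: (n_repetitions, n_voxels) activity patterns for one item.
    Returns the mean pairwise Pearson correlation across repetitions."""
    r = np.corrcoef(patterns)              # (n_rep, n_rep) correlation matrix
    iu = np.triu_indices_from(r, k=1)      # upper triangle, no diagonal
    return r[iu].mean()

def subsequent_memory_contrast(item_patterns, remembered):
    """item_patterns: dict item_id -> (n_rep, n_voxels); remembered: dict -> bool."""
    sims = {i: repetition_similarity(p) for i, p in item_patterns.items()}
    rem = [s for i, s in sims.items() if remembered[i]]
    forg = [s for i, s in sims.items() if not remembered[i]]
    return np.mean(rem) - np.mean(forg)    # positive = remembered items more similar
```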
Martin, Alex
2016-08-01
In this article, I discuss some of the latest functional neuroimaging findings on the organization of object concepts in the human brain. I argue that these data provide strong support for viewing concepts as the products of highly interactive neural circuits grounded in the action, perception, and emotion systems. The nodes of these circuits are defined by regions representing specific object properties (e.g., form, color, and motion) and thus are property-specific, rather than strictly modality-specific. How these circuits are modified by external and internal environmental demands, the distinction between representational content and format, and the grounding of abstract social concepts are also discussed.
ERIC Educational Resources Information Center
Waisman, Ilana; Leikin, Mark; Shaul, Shelley; Leikin, Roza
2014-01-01
In this study, we examine the impact and the interplay of general giftedness (G) and excellence in mathematics (EM) on high school students' mathematical performance associated with translations from graphical to symbolic representations of functions, as reflected in cortical electrical activity (by means of ERP--event-related…
ERIC Educational Resources Information Center
Pobric, Gorana; Jefferies, Elizabeth; Ralph, Matthew A. Lambon
2010-01-01
The key question of how the brain codes the meaning of words and pictures is the focus of vigorous debate. Is there a "semantic hub" in the temporal poles where these different inputs converge to form amodal conceptual representations? Alternatively, are there distinct neural circuits that underpin our comprehension of pictures and words?…
ERIC Educational Resources Information Center
Holloway, Ian D.; Ansari, Daniel
2010-01-01
Because number is an abstract quality of a set, the way in which a number is externally represented does not change its quantitative meaning. In this study, we examined the development of the brain regions that support format-independent representation of numerical magnitude. We asked children and adults to perform both symbolic (Hindu-Arabic…
Bowers, Jeffrey S
2009-01-01
A fundamental claim associated with parallel distributed processing (PDP) theories of cognition is that knowledge is coded in a distributed manner in mind and brain. This approach rejects the claim that knowledge is coded in a localist fashion, in which words, objects, and simple concepts (e.g., "dog") are each coded with their own dedicated representations. One of the putative advantages of this approach is that the theories are biologically plausible. Indeed, advocates of the PDP approach often highlight the close parallels between distributed representations learned in connectionist models and neural coding in the brain, and often dismiss localist (grandmother cell) theories as biologically implausible. The author reviews a range of data that strongly challenge this claim and shows that localist models provide a better account of single-cell recording studies. The author also contrasts localist and alternative distributed coding schemes (sparse and coarse coding) and argues that the common rejection of grandmother cell theories in neuroscience is due to a misunderstanding about how localist models behave. The author concludes that the localist representations embedded in theories of perception and cognition are consistent with neuroscience; biology only calls into question the distributed representations often learned in PDP models.
Pain, dissociation and subliminal self-representations.
Bob, Petr
2008-03-01
According to recent evidence, neurophysiological processes coupled to pain are closely related to the mechanisms of consciousness. This evidence is in accordance with findings that changes in states of consciousness during hypnosis or traumatic dissociation strongly affect conscious perception and experience of pain, and markedly influence brain functions. Past research indicates that painful experience may induce a dissociated state and that information about the experience may be stored or processed unconsciously. Reported findings suggest common neurophysiological mechanisms of pain and dissociation and point to a hypothesis of dissociation as a defense mechanism against psychological and physical pain that substantially influences functions of consciousness. The hypothesis is also supported by findings that information can be represented in the mind/brain without the subject's awareness. The findings of unconsciously present information suggest possible binding between conscious contents and self-functions that constitute self-representational dimensions of consciousness. Self-representation means that certain inner states of one's own body are interpreted as mental and somatic identity, while other bodily signals, currently not accessible to the dominant interpreter, are dissociated and may be defined as subliminal self-representations. In conclusion, the neurophysiological aspects of consciousness and its integrative role in the therapy of painful traumatic memories are discussed.
How the Human Brain Represents Perceived Dangerousness or “Predacity” of Animals
Sha, Long; Guntupalli, J. Swaroop; Oosterhof, Nikolaas; Halchenko, Yaroslav O.; Nastase, Samuel A.; di Oleggio Castello, Matteo Visconti; Abdi, Hervé; Jobst, Barbara C.; Gobbini, M. Ida; Haxby, James V.
2016-01-01
Common or folk knowledge about animals is dominated by three dimensions: (1) level of cognitive complexity or “animacy;” (2) dangerousness or “predacity;” and (3) size. We investigated the neural basis of the perceived dangerousness or aggressiveness of animals, which we refer to more generally as “perception of threat.” Using functional magnetic resonance imaging (fMRI), we analyzed neural activity evoked by viewing images of animal categories that spanned the dissociable semantic dimensions of threat and taxonomic class. The results reveal a distributed network for perception of threat extending along the right superior temporal sulcus. We compared neural representational spaces with target representational spaces based on behavioral judgments and a computational model of early vision and found a processing pathway in which perceived threat emerges as a dominant dimension: whereas visual features predominate in early visual cortex and taxonomy in lateral occipital and ventral temporal cortices, these dimensions fall away progressively from posterior to anterior temporal cortices, leaving threat as the dominant explanatory variable. Our results suggest that the perception of threat in the human brain is associated with neural structures that underlie perception and cognition of social actions and intentions, suggesting a broader role for these regions than has been thought previously, one that includes the perception of potential threat from agents independent of their biological class. SIGNIFICANCE STATEMENT For centuries, philosophers have wondered how the human mind organizes the world into meaningful categories and concepts. Today this question is at the core of cognitive science, but our focus has shifted to understanding how knowledge manifests in dynamic activity of neural systems in the human brain. This study advances the young field of empirical neuroepistemology by characterizing the neural systems engaged by an important dimension in our cognitive representation of the animal kingdom ontological subdomain: how the brain represents the perceived threat, dangerousness, or “predacity” of animals. Our findings reveal how activity for domain-specific knowledge of animals overlaps the social perception networks of the brain, suggesting domain-general mechanisms underlying the representation of conspecifics and other animals. PMID:27170133
Neural Representations Used by Brain Regions Underlying Speech Production
ERIC Educational Resources Information Center
Segawa, Jennifer Anne
2013-01-01
Speech utterances are phoneme sequences but may not always be represented as such in the brain. For instance, electropalatography evidence indicates that as speaking rate increases, gestures within syllables are manipulated separately but those within consonant clusters act as one motor unit. Moreover, speech error data suggest that a syllable's…
Actionability and Simulation: No Representation without Communication
Feldman, Jerome A.
2016-01-01
There remains considerable controversy about how the brain operates. This review focuses on brain activity rather than just structure and on concepts of action and actionability rather than truth conditions. Neural Communication is reviewed as a crucial aspect of neural encoding. Consequently, logical inference is superseded by neural simulation. Some remaining mysteries are discussed. PMID:27725807
How Different Types of Conceptual Relations Modulate Brain Activation during Semantic Priming
ERIC Educational Resources Information Center
Sachs, Olga; Weis, Susanne; Zellagui, Nadia; Sass, Katharina; Huber, Walter; Zvyagintsev, Mikhail; Mathiak, Klaus; Kircher, Tilo
2011-01-01
Semantic priming, a well-established technique to study conceptual representation, has thus far produced variable fMRI results, both regarding the type of priming effects and their correlation with brain activation. The aims of the current study were (a) to investigate two types of semantic relations--categorical versus associative--under…
van Dijck, Jean-Philippe; Gevers, Wim; Lafosse, Christophe; Fias, Wim
2013-10-01
Brain damaged patients suffering from representational neglect (RN) fail to report, orient to, or verbally describe contra-lesional elements of imagined environments or objects. So far this disorder has only been reported after right brain damage, leading to the idea that only the right hemisphere is involved in this deficit. A widely accepted account attributes RN to a lateralized impairment in the visuospatial component of working memory. So far, however, this hypothesis has not been tested in detail. In the present paper, we describe, for the first time, the case of a left brain damaged patient suffering from right-sided RN while imagining both known and new environments and objects. An in-depth evaluation of her visuospatial working memory abilities, with special focus on the presence of a lateralized deficit, did not reveal any abnormality. In sharp contrast, her ability to memorize visual information was severely compromised. The implications of these results are discussed in the light of recent insights in the neglect syndrome. Copyright © 2013 Elsevier Ltd. All rights reserved.
Bidelman, Gavin M; Dexter, Lauren
2015-04-01
We examined a consistent deficit observed in bilinguals: poorer speech-in-noise (SIN) comprehension for their nonnative language. We recorded neuroelectric mismatch potentials in mono- and bi-lingual listeners in response to contrastive speech sounds in noise. Behaviorally, late bilinguals required ∼10dB more favorable signal-to-noise ratios to match monolinguals' SIN abilities. Source analysis of cortical activity demonstrated monotonic increase in response latency with noise in superior temporal gyrus (STG) for both groups, suggesting parallel degradation of speech representations in auditory cortex. Contrastively, we found differential speech encoding between groups within inferior frontal gyrus (IFG)-adjacent to Broca's area-where noise delays observed in nonnative listeners were offset in monolinguals. Notably, brain-behavior correspondences double dissociated between language groups: STG activation predicted bilinguals' SIN, whereas IFG activation predicted monolinguals' performance. We infer higher-order brain areas act compensatorily to enhance impoverished sensory representations but only when degraded speech recruits linguistic brain mechanisms downstream from initial auditory-sensory inputs. Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Lindsey, Brooks D.; Ivancevich, Nikolas M.; Whitman, John; Light, Edward; Fronheiser, Matthew; Nicoletto, Heather A.; Laskowitz, Daniel T.; Smith, Stephen W.
2009-02-01
We describe early stage experiments to test the feasibility of an ultrasound brain helmet to produce multiple simultaneous real-time 3D scans of the cerebral vasculature from temporal and suboccipital acoustic windows of the skull. The transducer hardware and software of the Volumetrics Medical Imaging real-time 3D scanner were modified to support dual 2.5 MHz matrix arrays of 256 transmit elements and 128 receive elements which produce two simultaneous 64° pyramidal scans. The real-time display format consists of two coronal B-mode images merged into a 128° sector, two simultaneous parasagittal images merged into a 128° × 64° C-mode plane, and a simultaneous 64° axial image. Real-time 3D color Doppler images acquired in initial clinical studies after contrast injection demonstrate flow in several representative blood vessels. An offline Doppler rendering of data from two transducers simultaneously scanning via the temporal windows provides an early visualization of the flow in vessels on both sides of the brain. The long-term goal is to produce real-time 3D ultrasound images of the cerebral vasculature from a portable unit capable of internet transmission, thus enabling interactive 3D imaging, remote diagnosis and earlier therapeutic intervention. We are motivated by the urgency for rapid diagnosis of stroke due to the short time window of effective therapeutic intervention.
BrainSnail: A dynamic information display system for the Sciences
Telefont, Martin; Asaithambi, Asai
2009-01-01
Scientific reference management has become crucial in rapidly expanding fields of biology. Many of the reference management systems currently employed are reference-centric rather than object/process-focused. BrainSnail is a reference management/knowledge representation application that tries to bridge the disconnect between subject and reference in the fields of neuropharmacology, neuroanatomy and neurophysiology. BrainSnail has been developed with both individual researchers and research groups in mind. PMID:19293992
Swain, James E; Ho, S Shaun
2017-01-01
Insensitive parental thoughts and affect, similar to contempt, may be mapped onto a network of basic emotions moderated by attitudinal representations of social-relational value. Brain mechanisms that reflect emotional valence of baby signals among parents vary according to individual differences and show plasticity over time. Furthermore, mental health problems and treatments for parents may affect these brain systems toward or away from contempt, respectively.
Ashkenazi, Sarit; Rosenberg-Lee, Miriam; Tenison, Caitlin; Menon, Vinod
2015-01-01
Developmental dyscalculia (DD) is a disability that impacts math learning and skill acquisition in school-age children. Here we investigate arithmetic problem solving deficits in young children with DD using univariate and multivariate analysis of fMRI data. During fMRI scanning, 17 children with DD (ages 7–9, grades 2 and 3) and 17 IQ- and reading ability-matched typically developing (TD) children performed complex and simple addition problems which differed only in arithmetic complexity. While the TD group showed strong modulation of brain responses with increasing arithmetic complexity, children with DD failed to show such modulation. Children with DD showed significantly reduced activation compared to TD children in the intraparietal sulcus, superior parietal lobule, supramarginal gyrus and bilateral dorsolateral prefrontal cortex in relation to arithmetic complexity. Critically, multivariate representational similarity revealed that brain response patterns to complex and simple problems were less differentiated in the DD group in bilateral anterior IPS, independent of overall differences in signal level. Taken together, these results show that children with DD not only under-activate key brain regions implicated in mathematical cognition, but they also fail to generate distinct neural responses and representations for different arithmetic problems. Our findings provide novel insights into the neural basis of DD. PMID:22682904
Mapping the zebrafish brain methylome using reduced representation bisulfite sequencing
Chatterjee, Aniruddha; Ozaki, Yuichi; Stockwell, Peter A; Horsfield, Julia A; Morison, Ian M; Nakagawa, Shinichi
2013-01-01
Reduced representation bisulfite sequencing (RRBS) has been used to profile DNA methylation patterns in mammalian genomes such as human, mouse and rat. The methylome of the zebrafish, an important animal model, has not yet been characterized at base-pair resolution using RRBS. Therefore, we evaluated the technique of RRBS in this model organism by generating four single-nucleotide resolution DNA methylomes of adult zebrafish brain. We performed several simulations to show the distribution of fragments and enrichment of CpGs in different in silico reduced representation genomes of zebrafish. Four RRBS brain libraries generated 98 million sequenced reads and had higher frequencies of multiple mapping than equivalent human RRBS libraries. The zebrafish methylome indicates there is higher global DNA methylation in the zebrafish genome compared with its equivalent human methylome. This observation was confirmed by RRBS of zebrafish liver. High coverage CpG dinucleotides are enriched in CpG island shores more than in the CpG island core. We found that 45% of the mapped CpGs reside in gene bodies, and 7% in gene promoters. This analysis provides a roadmap for generating reproducible base-pair level methylomes for zebrafish using RRBS and our results provide the first evidence that RRBS is a suitable technique for global methylation analysis in zebrafish. PMID:23975027
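A hedged sketch of the in silico reduced-representation digest simulations mentioned above: cut a sequence at MspI sites (C^CGG), size-select the resulting fragments, and count the CpGs they capture. The size window, the toy sequence, and the function names are illustrative assumptions rather than the parameters used in the study.

```python
# Hedged sketch of an in silico reduced-representation (MspI) digest.
import re

def rrbs_fragments(seq, min_len=40, max_len=220):
    # MspI cuts C^CGG; cutting after the first C keeps CGG on the downstream fragment
    cut_sites = [m.start() + 1 for m in re.finditer('CCGG', seq)]
    bounds = [0] + cut_sites + [len(seq)]
    frags = [seq[a:b] for a, b in zip(bounds, bounds[1:])]
    return [f for f in frags if min_len <= len(f) <= max_len]

def cpg_count(fragments):
    return sum(f.count('CG') for f in fragments)

# Toy sequence: CpG-dense stretches separated by a CpG-poor run
toy = ('ACCGGT' * 30) + ('AT' * 100) + ('CCGGCG' * 20)
selected = rrbs_fragments(toy)
print(len(selected), cpg_count(selected))
```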
Cerebral localization in the nineteenth century--the birth of a science and its modern consequences.
Steinberg, David A
2009-07-01
Although many individuals contributed to the development of the science of cerebral localization, its conceptual framework is the work of a single man--John Hughlings Jackson (1835-1911), a Victorian physician practicing in London. Hughlings Jackson's formulation of a neurological science consisted of an axiomatic basis, an experimental methodology, and a clinical neurophysiology. His axiom--that the brain is an exclusively sensorimotor machine--separated neurology from psychiatry and established a rigorous and sophisticated structure for the brain and mind. Hughlings Jackson's experimental method utilized the focal lesion as a probe of brain function and created an evolutionary structure of somatotopic representation to explain clinical neurophysiology. His scientific theory of cerebral localization can be described as a weighted ordinal representation. Hughlings Jackson's theory of weighted ordinal representation forms the scientific basis for modern neurology. Though this science is utilized daily by every neurologist and forms the basis of neuroscience, the consequences of Hughlings Jackson's ideas are still not generally appreciated. For example, they imply the intrinsic inconsistency of some modern fields of neuroscience and neurology. Thus, "cognitive imaging" and the "neurology of art"--two topics of modern interest--are fundamentally oxymoronic according to the science of cerebral localization. Neuroscientists, therefore, still have much to learn from John Hughlings Jackson.
Ghanbari, Yasser; Smith, Alex R.; Schultz, Robert T.; Verma, Ragini
2014-01-01
Diffusion tensor imaging (DTI) offers rich insights into the physical characteristics of white matter (WM) fiber tracts and their development in the brain, facilitating a network representation of the brain's traffic pathways. Such a network representation of brain connectivity has provided a novel means of investigating brain changes arising from pathology, development or aging. The high dimensionality of these connectivity networks necessitates the development of methods that identify the connectivity building blocks or sub-network components that characterize the underlying variation in the population. In addition, the projection of the subject networks into the basis set provides a low-dimensional representation that teases apart different sources of variation in the sample, facilitating variation-specific statistical analysis. We propose a unified framework of non-negative matrix factorization and graph embedding for learning sub-network patterns of connectivity by their projective non-negative decomposition into a reconstructive basis set, as well as additional basis sets representing variational sources in the population such as age and pathology. The proposed framework is applied to a study of diffusion-based connectivity in subjects with autism and shows localized sparse sub-networks that mostly capture the changes related to pathology and developmental variations. PMID:25037933
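To make the decomposition idea concrete, the sketch below factorizes vectorized connectivity matrices into non-negative sub-network components with scikit-learn's NMF. It omits the graph-embedding and group-specific basis terms of the proposed framework, and the matrix sizes and random data are illustrative assumptions.

```python
# A minimal sketch of decomposing connectivity matrices into non-negative
# sub-network components (graph-embedding terms omitted; sizes illustrative).
import numpy as np
from sklearn.decomposition import NMF

n_subjects, n_nodes, n_components = 40, 90, 6
rng = np.random.default_rng(0)

# Each subject's connectivity matrix, vectorised (upper triangle only)
iu = np.triu_indices(n_nodes, k=1)
X = rng.random((n_subjects, iu[0].size))     # stand-in for real DTI networks

model = NMF(n_components=n_components, init='nndsvda', max_iter=500)
W = model.fit_transform(X)                   # (subjects, components) loadings
H = model.components_                        # (components, edges) sub-networks

def component_matrix(k):
    """Rebuild the k-th sub-network as a symmetric node-by-node matrix."""
    M = np.zeros((n_nodes, n_nodes))
    M[iu] = H[k]
    return M + M.T
```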
Khaligh-Razavi, Seyed-Mahdi; Henriksson, Linda; Kay, Kendrick; Kriegeskorte, Nikolaus
2017-02-01
Studies of the primate visual system have begun to test a wide range of complex computational object-vision models. Realistic models have many parameters, which in practice cannot be fitted using the limited amounts of brain-activity data typically available. Task performance optimization (e.g. using backpropagation to train neural networks) provides major constraints for fitting parameters and discovering nonlinear representational features appropriate for the task (e.g. object classification). Model representations can be compared to brain representations in terms of the representational dissimilarities they predict for an image set. This method, called representational similarity analysis (RSA), enables us to test the representational feature space as is (fixed RSA) or to fit a linear transformation that mixes the nonlinear model features so as to best explain a cortical area's representational space (mixed RSA). Like voxel/population-receptive-field modelling, mixed RSA uses a training set (different stimuli) to fit one weight per model feature and response channel (voxels here), so as to best predict the response profile across images for each response channel. We analysed response patterns elicited by natural images, which were measured with functional magnetic resonance imaging (fMRI). We found that early visual areas were best accounted for by shallow models, such as a Gabor wavelet pyramid (GWP). The GWP model performed similarly with and without mixing, suggesting that the original features already approximated the representational space, obviating the need for mixing. However, a higher ventral-stream visual representation (lateral occipital region) was best explained by the higher layers of a deep convolutional network and mixing of its feature set was essential for this model to explain the representation. We suspect that mixing was essential because the convolutional network had been trained to discriminate a set of 1000 categories, whose frequencies in the training set did not match their frequencies in natural experience or their behavioural importance. The latter factors might determine the representational prominence of semantic dimensions in higher-level ventral-stream areas. Our results demonstrate the benefits of testing both the specific representational hypothesis expressed by a model's original feature space and the hypothesis space generated by linear transformations of that feature space.
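A minimal sketch of the fixed versus mixed RSA contrast described above: fixed RSA compares the model's representational dissimilarity matrix (RDM) to the brain RDM as is, while mixed RSA first fits a linear map from model features to voxels on training images and compares RDMs on held-out images. Feature and voxel counts, the ridge penalty, and function names are illustrative assumptions.

```python
# Hedged sketch contrasting fixed and mixed RSA.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr
from sklearn.linear_model import Ridge

def rdm(patterns):
    """patterns: (n_images, n_units) -> condensed correlation-distance RDM."""
    return pdist(patterns, metric='correlation')

def fixed_rsa(model_feats, brain_patterns):
    """Fixed RSA: compare the model feature space as is to the brain RDM."""
    rho, _ = spearmanr(rdm(model_feats), rdm(brain_patterns))
    return rho

def mixed_rsa(train_feats, train_brain, test_feats, test_brain, alpha=1.0):
    """Mixed RSA: fit one weight per feature and voxel on training images,
    then compare RDMs predicted for held-out images."""
    reg = Ridge(alpha=alpha).fit(train_feats, train_brain)
    predicted = reg.predict(test_feats)          # (test_images, n_voxels)
    rho, _ = spearmanr(rdm(predicted), rdm(test_brain))
    return rho
```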
Deontological Dilemma Response Tendencies and Sensorimotor Representations of Harm to Others
Christov-Moore, Leonardo; Conway, Paul; Iacoboni, Marco
2017-01-01
The dual process model of moral decision-making suggests that decisions to reject causing harm on moral dilemmas (where causing harm saves lives) reflect concern for others. Recently, some theorists have suggested such decisions actually reflect self-focused concern about causing harm, rather than witnessing others suffering. We examined brain activity while participants witnessed needles pierce another person’s hand, versus similar non-painful stimuli. More than a month later, participants completed moral dilemmas where causing harm either did or did not maximize outcomes. We employed process dissociation to independently assess harm-rejection (deontological) and outcome-maximization (utilitarian) response tendencies. Activity in the posterior inferior frontal cortex (pIFC) while participants witnessed others in pain predicted deontological, but not utilitarian, response tendencies. Previous brain stimulation studies have shown that the pIFC seems crucial for sensorimotor representations of observed harm. Hence, these findings suggest that deontological response tendencies reflect genuine other-oriented concern grounded in sensorimotor representations of harm. PMID:29311859
Distributed Representation of Visual Objects by Single Neurons in the Human Brain
Valdez, André B.; Papesh, Megan H.; Treiman, David M.; Smith, Kris A.; Goldinger, Stephen D.
2015-01-01
It remains unclear how single neurons in the human brain represent whole-object visual stimuli. While recordings in both human and nonhuman primates have shown distributed representations of objects (many neurons encoding multiple objects), recordings of single neurons in the human medial temporal lobe, taken as subjects discriminated objects across multiple presentations, have shown gnostic representations (single neurons encoding one object). Because some studies suggest that repeated viewing may enhance neural selectivity for objects, we had human subjects discriminate objects in a single, more naturalistic viewing session. We found that, across 432 well-isolated neurons recorded in the hippocampus and amygdala, the average fraction of objects encoded was 26%. We also found that more neurons encoded several objects versus only one object in the hippocampus (28 vs 18%, p < 0.001) and in the amygdala (30 vs 19%, p < 0.001). Thus, during realistic viewing experiences, typical neurons in the human medial temporal lobe code for a considerable range of objects, across multiple semantic categories. PMID:25834044
Mechanisms Underlying Selective Neuronal Tracking of Attended Speech at a ‘Cocktail Party’
Zion Golumbic, Elana M.; Ding, Nai; Bickel, Stephan; Lakatos, Peter; Schevon, Catherine A.; McKhann, Guy M.; Goodman, Robert R.; Emerson, Ronald; Mehta, Ashesh D.; Simon, Jonathan Z.; Poeppel, David; Schroeder, Charles E.
2013-01-01
The ability to focus on and understand one talker in a noisy social environment is a critical social-cognitive capacity, whose underlying neuronal mechanisms are unclear. We investigated the manner in which speech streams are represented in brain activity and the way that selective attention governs the brain’s representation of speech using a ‘Cocktail Party’ Paradigm, coupled with direct recordings from the cortical surface in surgical epilepsy patients. We find that brain activity dynamically tracks speech streams using both low frequency phase and high frequency amplitude fluctuations, and that optimal encoding likely combines the two. In and near low level auditory cortices, attention ‘modulates’ the representation by enhancing cortical tracking of attended speech streams, but ignored speech remains represented. In higher order regions, the representation appears to become more ‘selective,’ in that there is no detectable tracking of ignored speech. This selectivity itself seems to sharpen as a sentence unfolds. PMID:23473326
An evaluation of space time cube representation of spatiotemporal patterns.
Kristensson, Per Ola; Dahlbäck, Nils; Anundi, Daniel; Björnstad, Marius; Gillberg, Hanna; Haraldsson, Jonas; Mårtensson, Ingrid; Nordvall, Mathias; Ståhl, Josefine
2009-01-01
Space time cube representation is an information visualization technique where spatiotemporal data points are mapped into a cube. Information visualization researchers have previously argued that space time cube representation is beneficial in revealing complex spatiotemporal patterns in a data set to users. The argument is based on the fact that both time and spatial information are displayed simultaneously to users, an effect difficult to achieve in other representations. However, to our knowledge the actual usefulness of space time cube representation in conveying complex spatiotemporal patterns to users has not been empirically validated. To fill this gap, we report on a between-subjects experiment comparing novice users' error rates and response times when answering a set of questions using either space time cube or a baseline 2D representation. For some simple questions, the error rates were lower when using the baseline representation. For complex questions where the participants needed an overall understanding of the spatiotemporal structure of the data set, the space time cube representation resulted in on average twice as fast response times with no difference in error rates compared to the baseline. These results provide an empirical foundation for the hypothesis that space time cube representation benefits users analyzing complex spatiotemporal patterns.
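For readers unfamiliar with the technique, a minimal sketch of a space time cube follows: spatiotemporal points are plotted with the two spatial coordinates in the base plane and time on the vertical axis. The synthetic trajectory and axis labels are illustrative assumptions, not the stimuli used in the experiment.

```python
# A minimal sketch of a space time cube: (x, y, t) with time on the vertical axis.
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 10, 200)                     # time
x = np.cos(t) + 0.05 * np.random.randn(t.size)  # e.g. longitude of a moving object
y = np.sin(t) + 0.05 * np.random.randn(t.size)  # e.g. latitude

fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.plot(x, y, t)                                # x, y in the base plane, t upward
ax.set_xlabel('x (space)')
ax.set_ylabel('y (space)')
ax.set_zlabel('time')
plt.show()
```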
Guariglia, Cecilia; Palermo, Liana; Piccardi, Laura; Iaria, Giuseppe; Incoccia, Chiara
2013-01-01
Representational neglect, which is characterized by the failure to report left-sided details of a mental image from memory, can occur after a right hemisphere lesion. In this study, we set out to verify the hypothesis that two distinct forms of representational neglect exist, one involving object representation and the other environmental representation. As representational neglect is considered rare, we also evaluated the prevalence and frequency of its association with perceptual neglect. We submitted a group of 96 unselected, consecutive, chronic, right brain-damaged patients to an extensive neuropsychological evaluation that included two representational neglect tests: the Familiar Square Description Test and the O'Clock Test. Representational neglect, as well as perceptual neglect, was present in about one-third of the sample. Most patients neglected the left side of imagined familiar squares but not the left side of imagined clocks. The present data show that representational neglect is not a rare disorder and also support the hypothesis that two different types of mental representations (i.e. topological and non-topological images) may be selectively damaged in representational neglect. PMID:23874416
A User-Configurable Headstage for Multimodality Neuromonitoring in Freely Moving Rats
Limnuson, Kanokwan; Narayan, Raj K.; Chiluwal, Amrit; Golanov, Eugene V.; Bouton, Chad E.; Li, Chunyan
2016-01-01
Multimodal monitoring of brain activity, physiology, and neurochemistry is an important approach to gain insight into brain function, modulation, and pathology. With recent progress in micro- and nanotechnology, micro-nano-implants have become important catalysts in advancing brain research. However, to date, only a limited number of brain parameters have been measured simultaneously in awake animals in spite of significant recent progress in sensor technology. Here we have provided a cost- and time-effective approach to designing a headstage for multimodality brain monitoring in freely moving animals. To demonstrate this method, we have designed a user-configurable headstage for our micromachined multimodal neural probe. The headstage can reliably record direct-current electrocorticography (DC-ECoG), brain oxygen tension (PbrO2), cortical temperature, and regional cerebral blood flow (rCBF) simultaneously without significant signal crosstalk or movement artifacts for 72 h. Even in a noisy environment, it can record low-level neural signals with high quality. Moreover, it can easily interface with signal conditioning circuits that have high power consumption and are difficult to miniaturize. To the best of our knowledge, this is the first time that multiple physiological, biochemical, and electrophysiological cerebral variables have been simultaneously recorded from freely moving rats. We anticipate that the developed system will aid in gaining further insight into not only normal cerebral functioning but also the pathophysiology of conditions such as epilepsy, stroke, and traumatic brain injury. PMID:27594826
Simultaneous measurement of glucose transport and utilization in the human brain
Shestov, Alexander A.; Emir, Uzay E.; Kumar, Anjali; Henry, Pierre-Gilles; Seaquist, Elizabeth R.
2011-01-01
Glucose is the primary fuel for brain function, and determining the kinetics of cerebral glucose transport and utilization is critical for quantifying cerebral energy metabolism. The kinetic parameters of cerebral glucose transport, K_M^t and V_max^t, in humans have so far been obtained by measuring steady-state brain glucose levels by proton (¹H) NMR as a function of plasma glucose levels and fitting steady-state models to these data. Extraction of the kinetic parameters for cerebral glucose transport necessitated assuming a constant cerebral metabolic rate of glucose (CMR_glc) obtained from other tracer studies, such as ¹³C NMR. Here we present new methodology to simultaneously obtain kinetic parameters for glucose transport and utilization in the human brain by fitting both dynamic and steady-state ¹H NMR data with a reversible, non-steady-state Michaelis-Menten model. Dynamic data were obtained by measuring brain and plasma glucose time courses during glucose infusions to raise and maintain plasma concentration at ∼17 mmol/l for ∼2 h in five healthy volunteers. Steady-state brain vs. plasma glucose concentrations were taken from the literature and the steady-state portions of data from the five volunteers. In addition to providing simultaneous measurements of glucose transport and utilization and obviating assumptions of a constant CMR_glc, this methodology does not necessitate infusions of expensive or radioactive tracers. Using this new methodology, we found that the maximum transport capacity for glucose through the blood-brain barrier was nearly twofold higher than maximum cerebral glucose utilization. The glucose transport and utilization parameters were consistent with previously published values for human brain. PMID:21791622
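The abstract does not state the model equations, so the sketch below uses a commonly cited reversible Michaelis-Menten form for blood-brain glucose transport with a constant utilization term as an assumption; all parameter values and the plasma glucose step are placeholders rather than the paper's fitted values.

```python
# Hedged sketch: reversible Michaelis-Menten glucose transport with constant
# utilization; parameter values are placeholders, not the paper's fits.
import numpy as np
from scipy.integrate import solve_ivp

Tmax, Kt = 1.0, 3.3      # max transport (umol/g/min), Michaelis constant (mM)
CMRglc = 0.5             # glucose utilization (umol/g/min)
Vd = 0.77                # physical distribution volume (ml/g)

def plasma_glucose(t):
    return 17.0 if t > 10 else 5.0           # step to ~17 mM during the clamp

def dGbrain_dt(t, G):
    Gp = plasma_glucose(t)
    influx = Tmax * Gp / (Kt + Gp + G[0])    # reversible MM: shared saturation term
    efflux = Tmax * G[0] / (Kt + Gp + G[0])
    return [(influx - efflux - CMRglc) / Vd]

# Simulate a ~2 h brain glucose time course from a 1.2 mM baseline
sol = solve_ivp(dGbrain_dt, (0, 120), [1.2], dense_output=True)
```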
Simultaneous EEG and MEG source reconstruction in sparse electromagnetic source imaging.
Ding, Lei; Yuan, Han
2013-04-01
Electroencephalography (EEG) and magnetoencephalography (MEG) have different sensitivities to differently configured brain activations, making them complementary in providing independent information for better detection and inverse reconstruction of brain sources. In the present study, we developed an integrative approach, which integrates a novel sparse electromagnetic source imaging method, i.e., variation-based cortical current density (VB-SCCD), with the combined use of EEG and MEG data in reconstructing complex brain activity. To perform simultaneous analysis of multimodal data, we proposed to normalize EEG and MEG signals according to their individual noise levels to create unit-free measures. Our Monte Carlo simulations demonstrated that this integrative approach is capable of reconstructing complex cortical brain activations (up to 10 simultaneously activated and randomly located sources). Results from experimental data showed that complex brain activations evoked in a face recognition task were successfully reconstructed using the integrative approach, which were consistent with other research findings and validated by independent data from functional magnetic resonance imaging using the same stimulus protocol. Reconstructed cortical brain activations from both simulations and experimental data provided precise source localizations as well as accurate spatial extents of localized sources. In comparison with studies using EEG or MEG alone, the performance of cortical source reconstructions using combined EEG and MEG was significantly improved. We demonstrated that this new sparse ESI methodology with integrated analysis of EEG and MEG data could accurately probe spatiotemporal processes of complex human brain activations. This is promising for noninvasively studying large-scale brain networks of high clinical and scientific significance. Copyright © 2011 Wiley Periodicals, Inc.
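A minimal sketch of the unit-free normalization idea described above: each modality's channels are scaled by their baseline noise level before EEG and MEG are stacked for a joint inverse solution. The array shapes, the baseline window, and the identical scaling of the lead-field rows are illustrative assumptions about how such a scheme could be implemented.

```python
# Hedged sketch: noise-normalize EEG and MEG and stack them for a joint inverse.
import numpy as np

def noise_normalize(data, baseline):
    """data, baseline: (n_channels, n_times); divide by per-channel noise SD."""
    noise_sd = baseline.std(axis=1, keepdims=True)
    return data / noise_sd

def combine_eeg_meg(eeg, eeg_base, meg, meg_base, L_eeg, L_meg):
    """Return stacked unit-free data and correspondingly scaled lead fields."""
    eeg_n = noise_normalize(eeg, eeg_base)
    meg_n = noise_normalize(meg, meg_base)
    # Lead-field rows must be scaled identically so the forward model stays consistent
    L_eeg_n = L_eeg / eeg_base.std(axis=1, keepdims=True)
    L_meg_n = L_meg / meg_base.std(axis=1, keepdims=True)
    return np.vstack([eeg_n, meg_n]), np.vstack([L_eeg_n, L_meg_n])
```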
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liang, Xiaodong, E-mail: lxdctopone@sina.com; Ni, Lingqin; Hu, Wei
The objective of this study was to evaluate the dose conformity and feasibility of whole-brain radiotherapy with a simultaneous integrated boost by forward intensity-modulated radiation therapy in patients with 1 to 3 brain metastases. Forward intensity-modulated radiation therapy plans were generated for 10 patients with 1 to 3 brain metastases on the Pinnacle 6.2 Treatment Planning System. The prescribed dose was 30 Gy to the whole brain (planning target volume [PTV]_wbrt) and 40 Gy to individual brain metastases (PTV_boost) simultaneously, and both doses were given in 10 fractions. The maximum diameters of individual brain metastases ranged from 1.6 to 6 cm, and the summated PTVs per patient ranged from 1.62 to 69.81 cm³. Conformity and feasibility were evaluated regarding conformation number and treatment delivery time. One hundred percent of the volume of the PTV_boost received at least 95% of the prescribed dose in all cases. The maximum doses were less than 110% of the prescribed dose to the PTV_boost, and all of the hot spots were within the PTV_boost. The volume of the PTV_wbrt that received at least 95% of the prescribed dose ranged from 99.2% to 100%. The mean value of the conformation number was 0.682. The mean treatment delivery time was 2.79 minutes. Ten beams were used on average in these plans. Whole-brain radiotherapy with a simultaneous integrated boost by forward intensity-modulated radiation therapy in 1 to 3 brain metastases is feasible, and treatment delivery time is short.
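The abstract reports a mean conformation number of 0.682 but does not define the index; the sketch below assumes the commonly used van't Riet definition (coverage times selectivity), and the volumes in the example call are illustrative numbers, not the study's data.

```python
# Hedged sketch: conformation number under the assumed van't Riet definition.
def conformation_number(tv, piv, tv_piv):
    """tv:     target volume (cm^3)
    piv:    volume enclosed by the prescription isodose (cm^3)
    tv_piv: target volume covered by the prescription isodose (cm^3)
    CN = (TV_PIV / TV) * (TV_PIV / PIV): coverage x selectivity."""
    return (tv_piv / tv) * (tv_piv / piv)

print(conformation_number(tv=30.0, piv=38.0, tv_piv=29.5))  # ~0.76
```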
Limanowski, Jakub; Blankenburg, Felix
2016-03-02
The brain constructs a flexible representation of the body from multisensory information. Previous work on monkeys suggests that the posterior parietal cortex (PPC) and ventral premotor cortex (PMv) represent the position of the upper limbs based on visual and proprioceptive information. Human experiments on the rubber hand illusion implicate similar regions, but since such experiments rely on additional visuo-tactile interactions, they cannot isolate visuo-proprioceptive integration. Here, we independently manipulated the position (palm or back facing) of passive human participants' unseen arm and of a photorealistic virtual 3D arm. Functional magnetic resonance imaging (fMRI) revealed that matching visual and proprioceptive information about arm position engaged the PPC, PMv, and the body-selective extrastriate body area (EBA); activity in the PMv moreover reflected interindividual differences in congruent arm ownership. Further, the PPC, PMv, and EBA increased their coupling with the primary visual cortex during congruent visuo-proprioceptive position information. These results suggest that human PPC, PMv, and EBA evaluate visual and proprioceptive position information and, under sufficient cross-modal congruence, integrate it into a multisensory representation of the upper limb in space. The position of our limbs in space constantly changes, yet the brain manages to represent limb position accurately by combining information from vision and proprioception. Electrophysiological recordings in monkeys have revealed neurons in the posterior parietal and premotor cortices that seem to implement and update such a multisensory limb representation, but this has been difficult to demonstrate in humans. Our fMRI experiment shows that human posterior parietal, premotor, and body-selective visual brain areas respond preferentially to a virtual arm seen in a position corresponding to one's unseen hidden arm, while increasing their communication with regions conveying visual information. These brain areas thus likely integrate visual and proprioceptive information into a flexible multisensory body representation. Copyright © 2016 the authors 0270-6474/16/362582-08$15.00/0.
Fouragnan, Elsa; Retzler, Chris; Philiastides, Marios G
2018-03-25
Learning occurs when an outcome differs from expectations, generating a reward prediction error signal (RPE). The RPE signal has been hypothesized to simultaneously embody the valence of an outcome (better or worse than expected) and its surprise (how far from expectations). Nonetheless, growing evidence suggests that separate representations of the two RPE components exist in the human brain. Meta-analyses provide an opportunity to test this hypothesis and directly probe the extent to which the valence and surprise of the error signal are encoded in separate or overlapping networks. We carried out several meta-analyses on a large set of fMRI studies investigating the neural basis of RPE, locked at decision outcome. We identified two valence learning systems by pooling studies searching for differential neural activity in response to categorical positive-versus-negative outcomes. The first valence network (negative > positive) involved areas regulating alertness and switching behaviours such as the midcingulate cortex, the thalamus and the dorsolateral prefrontal cortex whereas the second valence network (positive > negative) encompassed regions of the human reward circuitry such as the ventral striatum and the ventromedial prefrontal cortex. We also found evidence of a largely distinct surprise-encoding network including the anterior cingulate cortex, anterior insula and dorsal striatum. Together with recent animal and electrophysiological evidence this meta-analysis points to a sequential and distributed encoding of different components of the RPE signal, with potentially distinct functional roles. © 2018 Wiley Periodicals, Inc.
Tromans, James Matthew; Harris, Mitchell; Stringer, Simon Maitland
2011-01-01
Experimental studies have provided evidence that the visual processing areas of the primate brain represent facial identity and facial expression within different subpopulations of neurons. For example, in non-human primates there is evidence that cells within the inferior temporal gyrus (TE) respond primarily to facial identity, while cells within the superior temporal sulcus (STS) respond to facial expression. More recently, it has been found that the orbitofrontal cortex (OFC) of non-human primates contains some cells that respond exclusively to changes in facial identity, while other cells respond exclusively to facial expression. How might the primate visual system develop physically separate representations of facial identity and expression given that the visual system is always exposed to simultaneous combinations of facial identity and expression during learning? In this paper, a biologically plausible neural network model, VisNet, of the ventral visual pathway is trained on a set of carefully designed cartoon faces with different identities and expressions. The VisNet model architecture is composed of a hierarchical series of four Self-Organising Maps (SOMs), with associative learning in the feedforward synaptic connections between successive layers. During learning, the network develops separate clusters of cells that respond exclusively to either facial identity or facial expression. We interpret the performance of the network in terms of the learning properties of SOMs, which are able to exploit the statistical independence between facial identity and expression.
NASA Astrophysics Data System (ADS)
Fekete, Z.; Csernai, M.; Kocsis, K.; Horváth, Á. C.; Pongrácz, A.; Barthó, P.
2017-06-01
Objective. Temperature is an important factor for neural function in both normal and pathological states; nevertheless, simultaneous monitoring of local brain temperature and neuronal activity has not yet been undertaken. Approach. In our work, we propose an implantable, calibrated multimodal biosensor that facilitates the complex investigation of thermal changes in both cortical and deep brain regions, and records multiunit activity of neuronal populations in mice. The fabricated neural probe contains four electrical recording sites and a platinum temperature sensor filament integrated on the same probe shaft within a distance of 30 µm from the closest recording site. The feasibility of the simultaneous functionality is presented in in vivo studies. The probe was tested in the thalamus of anesthetized mice while manipulating the core temperature of the animals. Main results. We obtained multiunit and local field potential recordings along with measurements of local brain temperature with an accuracy of 0.14 °C. Brain temperature generally followed core body temperature, but also showed superimposed fluctuations corresponding to epochs of increased local neural activity. With the application of higher currents, we increased the local temperature by several degrees within the 34-39 °C range without observable tissue damage. Significance. The proposed multifunctional tool is envisioned to broaden our knowledge of the role of the thermal modulation of neuronal activity in both cortical and deeper brain regions.
Ludersdorfer, Philipp; Kronbichler, Martin; Wimmer, Heinz
2015-01-01
The present fMRI study used a spelling task to investigate the hypothesis that the left ventral occipitotemporal cortex (vOT) hosts neuronal representations of whole written words. Such an orthographic word lexicon is posited by cognitive dual-route theories of reading and spelling. In the scanner, participants performed a spelling task in which they had to indicate if a visually presented letter is present in the written form of an auditorily presented word. The main experimental manipulation distinguished between an orthographic word spelling condition in which correct spelling decisions had to be based on orthographic whole-word representations, a word spelling condition in which reliance on orthographic whole-word representations was optional and a phonological pseudoword spelling condition in which no reliance on such representations was possible. To evaluate spelling-specific activations the spelling conditions were contrasted with control conditions that also presented auditory words and pseudowords, but participants had to indicate if a visually presented letter corresponded to the gender of the speaker. We identified a left vOT cluster activated for the critical orthographic word spelling condition relative to both the control condition and the phonological pseudoword spelling condition. Our results suggest that activation of left vOT during spelling can be attributed to the retrieval of orthographic whole-word representations and, thus, support the position that the left vOT potentially represents the neuronal equivalent of the cognitive orthographic word lexicon. Hum Brain Mapp, 36:1393–1406, 2015. © 2014 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc. PMID:25504890
A common neural code for perceived and inferred emotion.
Skerry, Amy E; Saxe, Rebecca
2014-11-26
Although the emotions of other people can often be perceived from overt reactions (e.g., facial or vocal expressions), they can also be inferred from situational information in the absence of observable expressions. How does the human brain make use of these diverse forms of evidence to generate a common representation of a target's emotional state? In the present research, we identify neural patterns that correspond to emotions inferred from contextual information and find that these patterns generalize across different cues from which an emotion can be attributed. Specifically, we use functional neuroimaging to measure neural responses to dynamic facial expressions with positive and negative valence and to short animations in which the valence of a character's emotion could be identified only from the situation. Using multivoxel pattern analysis, we test for regions that contain information about the target's emotional state, identifying representations specific to a single stimulus type and representations that generalize across stimulus types. In regions of medial prefrontal cortex (MPFC), a classifier trained to discriminate emotional valence for one stimulus (e.g., animated situations) could successfully discriminate valence for the remaining stimulus (e.g., facial expressions), indicating a representation of valence that abstracts away from perceptual features and generalizes across different forms of evidence. Moreover, in a subregion of MPFC, this neural representation generalized to trials involving subjectively experienced emotional events, suggesting partial overlap in neural responses to attributed and experienced emotions. These data provide a step toward understanding how the brain transforms stimulus-bound inputs into abstract representations of emotion. Copyright © 2014 the authors 0270-6474/14/3315997-12$15.00/0.
ERIC Educational Resources Information Center
Longo, Palma J.
A long-term study was conducted to test the effectiveness of visual thinking networking (VTN), a new generation of knowledge representation strategies with 56 ninth grade earth science students. The recent findings about the brain's organization and processing conceptually ground VTN as a new cognitive tool used by learners when making their…
Forms of Memory for Representation of Visual Objects
1991-04-15
Neuropsychological syndromes that involve disruption of perceptual representation systems should pay rich dividends for implicit memory research (Schacter et al.).
Penetrating the Blood-Brain Barrier: Promise of Novel Nanoplatforms and Delivery Vehicles.
Ali, Iqbal Unnisa; Chen, Xiaoyuan
2015-10-27
Multifunctional nanoplatforms combining versatile therapeutic modalities with a variety of imaging options have the potential to diagnose, monitor, and treat brain diseases. The promise of nanotechnology can only be realized by the simultaneous development of innovative brain-targeting delivery vehicles capable of penetrating the blood-brain barrier without compromising its structural integrity.
The Relationship between Simultaneous-Successive Processing and Academic Achievement.
ERIC Educational Resources Information Center
Merritt, Frank M.; McCallum, Steve
The Luria-Das Information Processing Model of human learning holds that information is analysed and coded within the brain in either a simultaneous or a successive fashion. Simultaneous integration refers to the synthesis of separate elements into groups, often with spatial characteristics; successive integration means that information is…
1986-02-20
Only fragments of this record survive; they mention event-related brain potential (ERP) and electromyogram measures, manipulations of primary-task difficulty, the memory representation of the task (memory data limits), P300 amplitude, and a presentation at the Joint EEG Society/Physiological Society Meeting, Bristol (England), 1983.
Perception of Lexical Stress by Brain-Damaged Individuals: Effects on Lexical-Semantic Activation
ERIC Educational Resources Information Center
Shah, Amee P.; Baum, Shari R.
2006-01-01
A semantic priming, lexical-decision study was conducted to examine the ability of left- and right-brain damaged individuals to perceive lexical-stress cues and map them onto lexical-semantic representations. Correctly and incorrectly stressed primes were paired with related and unrelated target words to tap implicit processing of lexical prosody.…
Measuring the representational space of music with fMRI: a case study with Sting.
Levitin, Daniel J; Grafton, Scott T
2016-12-01
Functional brain imaging has revealed much about the neuroanatomical substrates of higher cognition, including music, language, learning, and memory. The technique lends itself to the study of groups of individuals. In contrast, the nature of expert performance is typically studied through the examination of exceptional individuals using behavioral case studies and retrospective biography. Here, we combined fMRI and the study of an individual who is a world-class expert musician and composer in order to better understand the neural underpinnings of his music perception and cognition, in particular, his mental representations for music. We used state-of-the-art multivoxel pattern analysis (MVPA) and representational dissimilarity analysis (RDA) in a fixed set of brain regions to test three exploratory hypotheses with the musician Sting: (1) Composing would recruit neural structures that are both unique and distinguishable from other creative acts, such as composing prose or visual art; (2) listening and imagining music would recruit similar neural regions, indicating that musical memory shares anatomical substrates with music listening; (3) the MVPA and RDA results would help us to map the representational space for music, revealing which musical pieces and genres are perceived to be similar in the musician's mental models for music. Our hypotheses were confirmed. The act of composing, and even of imagining elements of the composed piece separately, such as melody and rhythm, activated a similar cluster of brain regions, and were distinct from prose and visual art. Listened and imagined music showed high similarity, and in addition, notable similarity/dissimilarity patterns emerged among the various pieces used as stimuli: Muzak and Top 100/Pop songs were far from all other musical styles in Mahalanobis distance (Euclidean representational space), whereas jazz, R&B, tango and rock were comparatively close. Closer inspection revealed principled explanations for the similarity clusters found, based on key, tempo, motif, and orchestration.
Oswal, Ashwini; Jha, Ashwani; Neal, Spencer; Reid, Alphonso; Bradbury, David; Aston, Peter; Limousin, Patricia; Foltynie, Tom; Zrinzo, Ludvic; Brown, Peter; Litvak, Vladimir
2016-01-01
Background: Deep Brain Stimulation (DBS) is an effective treatment for several neurological and psychiatric disorders. In order to gain insights into the therapeutic mechanisms of DBS and to advance future therapies a better understanding of the effects of DBS on large-scale brain networks is required. New method: In this paper, we describe an experimental protocol and analysis pipeline for simultaneously performing DBS and intracranial local field potential (LFP) recordings at a target brain region during concurrent magnetoencephalography (MEG) measurement. Firstly we describe a phantom setup that allowed us to precisely characterise the MEG artefacts that occurred during DBS at clinical settings. Results: Using the phantom recordings we demonstrate that with MEG beamforming it is possible to recover oscillatory activity synchronised to a reference channel, despite the presence of high amplitude artefacts evoked by DBS. Finally, we highlight the applicability of these methods by illustrating in a single patient with Parkinson's disease (PD), that changes in cortical-subthalamic nucleus coupling can be induced by DBS. Comparison with existing approaches: To our knowledge this paper provides the first technical description of a recording and analysis pipeline for combining simultaneous cortical recordings using MEG, with intracranial LFP recordings of a target brain nucleus during DBS. PMID:26698227
de Gelder, B.
2016-01-01
The neural basis of emotion perception has mostly been investigated with single face or body stimuli. However, in daily life one may also encounter affective expressions by groups, e.g. an angry mob or an exhilarated concert crowd. In what way is brain activity modulated when several individuals express similar rather than different emotions? We investigated this question using an experimental design in which we presented two stimuli simultaneously, with same or different emotional expressions. We hypothesized that, in the case of two same-emotion stimuli, brain activity would be enhanced, while in the case of two different emotions, one emotion would interfere with the effect of the other. The results showed that the simultaneous perception of different affective body expressions leads to a deactivation of the amygdala and a reduction of cortical activity. It was revealed that the processing of fearful bodies, compared with different-emotion bodies, relied more strongly on saliency and action triggering regions in inferior parietal lobe and insula, while happy bodies drove the occipito-temporal cortex more strongly. We showed that this design could be used to uncover important differences between brain networks underlying fearful and happy emotions. The enhancement of brain activity for unambiguous affective signals expressed by several people simultaneously supports adaptive behaviour in critical situations. PMID:27025242
The neural representation of social networks.
Weaverdyck, Miriam E; Parkinson, Carolyn
2018-05-24
The computational demands associated with navigating large, complexly bonded social groups are thought to have significantly shaped human brain evolution. Yet, research on social network representation and cognitive neuroscience have progressed largely independently. Thus, little is known about how the human brain encodes the structure of the social networks in which it is embedded. This review highlights recent work seeking to bridge this gap in understanding. While the majority of research linking social network analysis and neuroimaging has focused on relating neuroanatomy to social network size, researchers have begun to define the neural architecture that encodes social network structure, cognitive and behavioral consequences of encoding this information, and individual differences in how people represent the structure of their social world. Copyright © 2018 Elsevier Ltd. All rights reserved.
Mental Imagery: Functional Mechanisms and Clinical Applications
Pearson, Joel; Naselaris, Thomas; Holmes, Emily A.; Kosslyn, Stephen M.
2015-01-01
Mental imagery research has weathered both disbelief of the phenomenon and inherent methodological limitations. Here we review recent behavioral, brain imaging, and clinical research that has reshaped our understanding of mental imagery. Research supports the claim that visual mental imagery is a depictive internal representation that functions like a weak form of perception. Brain imaging work has demonstrated that neural representations of mental and perceptual images resemble one another as early as the primary visual cortex (V1). Activity patterns in V1 encode mental images and perceptual images via a common set of low-level depictive visual features. Recent translational and clinical research reveals the pivotal role that imagery plays in many mental disorders and suggests how clinicians can utilize imagery in treatment. PMID:26412097
Revealing representational content with pattern-information fMRI--an introductory guide.
Mur, Marieke; Bandettini, Peter A; Kriegeskorte, Nikolaus
2009-03-01
Conventional statistical analysis methods for functional magnetic resonance imaging (fMRI) data are very successful at detecting brain regions that are activated as a whole during specific mental activities. The overall activation of a region is usually taken to indicate involvement of the region in the task. However, such activation analysis does not consider the multivoxel patterns of activity within a brain region. These patterns of activity, which are thought to reflect neuronal population codes, can be investigated by pattern-information analysis. In this framework, a region's multivariate pattern information is taken to indicate representational content. This tutorial introduction motivates pattern-information analysis, explains its underlying assumptions, introduces the most widespread methods in an intuitive way, and outlines the basic sequence of analysis steps.
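As a concrete, hedged illustration of the kind of pattern-information analysis the tutorial motivates, the sketch below runs a cross-validated linear classifier on synthetic voxel patterns whose region-mean activation is identical across conditions; the data, classifier and cross-validation scheme are assumptions chosen for illustration rather than the tutorial's prescribed pipeline.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic "ROI" data: 80 trials x 200 voxels, two conditions with a weak
# multivoxel difference but no difference in mean (overall) activation.
n_trials, n_voxels = 80, 200
labels = np.repeat([0, 1], n_trials // 2)
pattern = rng.normal(size=n_voxels) * 0.3        # condition-specific pattern
X = rng.normal(size=(n_trials, n_voxels))
X[labels == 1] += pattern
X -= X.mean(axis=1, keepdims=True)               # remove the region-mean activation

# Cross-validated decoding: above-chance accuracy indicates that the multivoxel
# pattern carries condition information even though mean activation is matched.
acc = cross_val_score(LinearSVC(dual=False), X, labels, cv=5).mean()
print(f"cross-validated decoding accuracy: {acc:.2f}")
```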
Martin, Alex
2016-01-01
In this article, I discuss some of the latest functional neuroimaging findings on the organization of object concepts in the human brain. I argue that these data provide strong support for viewing concepts as the products of highly interactive neural circuits grounded in the action, perception, and emotion systems. The nodes of these circuits are defined by regions representing specific object properties (e.g., form, color, and motion) and thus are property-specific, rather than strictly modality-specific. How these circuits are modified by external and internal environmental demands, the distinction between representational content and format, and the grounding of abstract social concepts are also discussed. PMID:25968087
Event-related potentials and secondary task performance during simulated driving.
Wester, A E; Böcker, K B E; Volkerts, E R; Verster, J C; Kenemans, J L
2008-01-01
Inattention and distraction account for a substantial number of traffic accidents. Therefore, we examined the impact of secondary task performance (an auditory oddball task) on a primary driving task (lane keeping). Twenty healthy participants performed two 20-min tests in the Divided Attention Steering Simulator (DASS). The visual secondary task of the DASS was replaced by an auditory oddball task to allow recording of brain activity. The driving task and the secondary (distracting) oddball task were presented in isolation and simultaneously, to assess their mutual interference. In addition to performance measures (lane keeping in the primary driving task and reaction speed in the secondary oddball task), brain activity, i.e. event-related potentials (ERPs), was recorded. Performance parameters on the driving test and the secondary oddball task did not differ between performance in isolation and simultaneous performance. However, when both tasks were performed simultaneously, reaction time variability increased in the secondary oddball task. Analysis of brain activity indicated that ERP amplitude (P3a amplitude) related to the secondary task, was significantly reduced when the task was performed simultaneously with the driving test. This study shows that when performing a simple secondary task during driving, performance of the driving task and this secondary task are both unaffected. However, analysis of brain activity shows reduced cortical processing of irrelevant, potentially distracting stimuli from the secondary task during driving.
Representing delayed force feedback as a combination of current and delayed states.
Avraham, Guy; Mawase, Firas; Karniel, Amir; Shmuelof, Lior; Donchin, Opher; Mussa-Ivaldi, Ferdinando A; Nisky, Ilana
2017-10-01
To adapt to deterministic force perturbations that depend on the current state of the hand, internal representations are formed to capture the relationships between forces experienced and motion. However, information from multiple modalities travels at different rates, resulting in intermodal delays that require compensation for these internal representations to develop. To understand how these delays are represented by the brain, we presented participants with delayed velocity-dependent force fields, i.e., forces that depend on hand velocity either 70 or 100 ms beforehand. We probed the internal representation of these delayed forces by examining the forces the participants applied to cope with the perturbations. The findings showed that for both delayed forces, the best model of internal representation consisted of a delayed velocity and current position and velocity. We show that participants relied initially on the current state, but with adaptation, the contribution of the delayed representation to adaptation increased. After adaptation, when the participants were asked to make movements at a higher velocity than they had previously experienced with the delayed force field, they applied forces that were consistent with current position and velocity as well as delayed velocity representations. This suggests that the sensorimotor system represents delayed force feedback using current and delayed state information and that it uses this representation when generalizing to faster movements. NEW & NOTEWORTHY The brain compensates for forces in the body and the environment to control movements, but it is unclear how it does so given the inherent delays in information transmission and processing. We examined how participants cope with delayed forces that depend on their arm velocity 70 or 100 ms beforehand. After adaptation, participants applied opposing forces that revealed a partially correct representation of the perturbation using the current and the delayed information. Copyright © 2017 the American Physiological Society.
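To make the state-representation idea concrete, here is a hedged simulation sketch: a delayed velocity-dependent force is generated along a minimum-jerk reach and then regressed onto current position, current velocity and delayed velocity, the candidate components of the internal representation discussed above. The delay, gain and movement parameters are illustrative assumptions, not the study's values.

```python
import numpy as np

# Simulate a minimum-jerk reach and a delayed velocity-dependent field,
#   F(t) = k * v(t - delay),
# then ask which state representation reconstructs that force.
dt, delay_s, k = 0.001, 0.070, 13.0          # 1 kHz sampling, 70 ms delay, gain (illustrative)
t = np.arange(0, 0.8, dt)
tau = t / 0.8
x = 0.15 * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)   # minimum-jerk position (m)
v = np.gradient(x, dt)
lag = int(delay_s / dt)
v_delayed = np.concatenate([np.zeros(lag), v[:-lag]])
force = k * v_delayed

# Regress the force onto current position, current velocity and delayed velocity.
A = np.column_stack([x, v, v_delayed])
coef, *_ = np.linalg.lstsq(A, force, rcond=None)
print(dict(zip(["position", "velocity", "delayed_velocity"], np.round(coef, 2))))
```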
Brain-to-text: decoding spoken phrases from phone representations in the brain
Herff, Christian; Heger, Dominic; de Pesters, Adriana; Telaar, Dominic; Brunner, Peter; Schalk, Gerwin; Schultz, Tanja
2015-01-01
It has long been speculated whether communication between humans and machines based on natural speech related cortical activity is possible. Over the past decade, studies have suggested that it is feasible to recognize isolated aspects of speech from neural signals, such as auditory features, phones or one of a few isolated words. However, until now it remained an unsolved challenge to decode continuously spoken speech from the neural substrate associated with speech and language processing. Here, we show for the first time that continuously spoken speech can be decoded into the expressed words from intracranial electrocorticographic (ECoG) recordings. Specifically, we implemented a system, which we call Brain-To-Text, that models single phones, employs techniques from automatic speech recognition (ASR), and thereby transforms brain activity while speaking into the corresponding textual representation. Our results demonstrate that our system can achieve word error rates as low as 25% and phone error rates below 50%. Additionally, our approach contributes to the current understanding of the neural basis of continuous speech production by identifying those cortical regions that hold substantial information about individual phones. In conclusion, the Brain-To-Text system described in this paper represents an important step toward human-machine communication based on imagined speech. PMID:26124702
Anderson, Andrew James; Lalor, Edmund C; Lin, Feng; Binder, Jeffrey R; Fernandino, Leonardo; Humphries, Colin J; Conant, Lisa L; Raizada, Rajeev D S; Grimm, Scott; Wang, Xixi
2018-05-16
Deciphering how sentence meaning is represented in the brain remains a major challenge to science. Semantically related neural activity has recently been shown to arise concurrently in distributed brain regions as successive words in a sentence are read. However, what semantic content is represented by different regions, what is common across them, and how this relates to words in different grammatical positions of sentences remains poorly understood. To address these questions, we apply a semantic model of word meaning to interpret brain activation patterns elicited in sentence reading. The model is based on human ratings of 65 sensory/motor/emotional and cognitive features of experience with words (and their referents). Through a process of mapping functional magnetic resonance imaging activation back into model space, we test: which brain regions semantically encode content words in different grammatical positions (e.g., subject/verb/object); and what semantic features are encoded by different regions. In left temporal, inferior parietal, and inferior/superior frontal regions we detect the semantic encoding of words in all grammatical positions tested and reveal multiple common components of semantic representation. This suggests that sentence comprehension involves a common core representation of multiple words' meaning being encoded in a network of regions distributed across the brain.
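A common way to implement this kind of model-based decoding is to learn a linear map from voxel patterns back into the feature space and score held-out items by the match between predicted and true feature vectors. The sketch below illustrates that generic approach on synthetic data; the 65-feature dimensionality echoes the abstract, but the regression method, regularisation and all data are assumptions, not the study's pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins: 240 content-word trials, 500 voxels, 65 experiential features.
n_trials, n_voxels, n_feats = 240, 500, 65
features = rng.normal(size=(n_trials, n_feats))       # word ratings (sensory/motor/emotional/cognitive)
W = rng.normal(size=(n_feats, n_voxels)) * 0.2        # unknown feature-to-voxel encoding
bold = features @ W + rng.normal(size=(n_trials, n_voxels))

# Decode: map voxel patterns back into the semantic feature space with ridge regression.
Xtr, Xte, Ytr, Yte = train_test_split(bold, features, test_size=0.25, random_state=0)
decoder = Ridge(alpha=10.0).fit(Xtr, Ytr)
pred = decoder.predict(Xte)

# Score each held-out item by correlating its predicted and true feature vectors.
r = [np.corrcoef(p, y)[0, 1] for p, y in zip(pred, Yte)]
print(f"mean predicted-vs-true feature correlation: {np.mean(r):.2f}")
```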
Abstract representations of associated emotions in the human brain.
Kim, Junsuk; Schultz, Johannes; Rohe, Tim; Wallraven, Christian; Lee, Seong-Whan; Bülthoff, Heinrich H
2015-04-08
Emotions can be aroused by various kinds of stimulus modalities. Recent neuroimaging studies indicate that several brain regions represent emotions at an abstract level, i.e., independently from the sensory cues from which they are perceived (e.g., face, body, or voice stimuli). If emotions are indeed represented at such an abstract level, then these abstract representations should also be activated by the memory of an emotional event. We tested this hypothesis by asking human participants to learn associations between emotional stimuli (videos of faces or bodies) and non-emotional stimuli (fractals). After successful learning, fMRI signals were recorded during the presentations of emotional stimuli and emotion-associated fractals. We tested whether emotions could be decoded from fMRI signals evoked by the fractal stimuli using a classifier trained on the responses to the emotional stimuli (and vice versa). This was implemented as a whole-brain searchlight, multivoxel activation pattern analysis, which revealed successful emotion decoding in four brain regions: posterior cingulate cortex (PCC), precuneus, MPFC, and angular gyrus. The same analysis run only on responses to emotional stimuli revealed clusters in PCC, precuneus, and MPFC. Multidimensional scaling analysis of the activation patterns revealed clear clustering of responses by emotion across stimulus types. Our results suggest that PCC, precuneus, and MPFC contain representations of emotions that can be evoked by stimuli that carry emotional information themselves or by stimuli that evoke memories of emotional stimuli, while angular gyrus is more likely to take part in emotional memory retrieval. Copyright © 2015 the authors 0270-6474/15/355655-09$15.00/0.
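The decisive step in the study above is cross-classification: training a classifier on responses to emotional stimuli and testing it on responses to the emotion-associated fractals (and vice versa). The sketch below illustrates that logic on synthetic patterns that share a common valence code across two stimulus types; the data, pattern strength and classifier are assumptions made only for illustration.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_per, n_vox = 40, 150

def make_trials(valence_pattern, noise=1.0):
    """Synthetic voxel patterns sharing a valence code across stimulus types."""
    pos = rng.normal(size=(n_per, n_vox)) * noise + valence_pattern
    neg = rng.normal(size=(n_per, n_vox)) * noise - valence_pattern
    X = np.vstack([pos, neg])
    y = np.array([1] * n_per + [0] * n_per)
    return X, y

valence_code = rng.normal(size=n_vox) * 0.4
X_emotional, y_emotional = make_trials(valence_code)      # e.g., face/body videos
X_associated, y_associated = make_trials(valence_code)    # e.g., emotion-associated fractals

# Train on one stimulus type, test on the other: above-chance accuracy implies a
# valence representation that generalises across how the emotion was evoked.
clf = LinearSVC(dual=False).fit(X_emotional, y_emotional)
print(f"cross-decoding accuracy: {clf.score(X_associated, y_associated):.2f}")
```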
Modeling Functional Neuroanatomy for an Anatomy Information System
Niggemann, Jörg M.; Gebert, Andreas; Schulz, Stefan
2008-01-01
Objective: Existing neuroanatomical ontologies, databases and information systems, such as the Foundational Model of Anatomy (FMA), represent outgoing connections from brain structures, but cannot represent the “internal wiring” of structures and as such, cannot distinguish between different independent connections from the same structure. Thus, a fundamental aspect of Neuroanatomy, the functional pathways and functional systems of the brain such as the pupillary light reflex system, is not adequately represented. This article identifies underlying anatomical objects which are the source of independent connections (collections of neurons) and uses these as basic building blocks to construct a model of functional neuroanatomy and its functional pathways. Design: The basic representational elements of the model are unnamed groups of neurons or groups of neuron segments. These groups, their relations to each other, and the relations to the objects of macroscopic anatomy are defined. The resulting model can be incorporated into the FMA. Measurements: The capabilities of the presented model are compared to the FMA and the Brain Architecture Management System (BAMS). Results: Internal wiring as well as functional pathways can correctly be represented and tracked. Conclusion: This model bridges the gap between representations of single neurons and their parts on the one hand and representations of spatial brain structures and areas on the other hand. It is capable of drawing correct inferences on pathways in a nervous system. The object and relation definitions are related to the Open Biomedical Ontology effort and its relation ontology, so that this model can be further developed into an ontology of neuronal functional systems. PMID:18579841
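To illustrate the modelling idea of unnamed neuron groups as basic building blocks, here is a hedged sketch in which two independent groups located in the same macroscopic structure carry distinct outgoing projections, so the "internal wiring" stays distinguishable. The class and relation names, and the example groups, are illustrative assumptions rather than the FMA or BAMS schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class NeuronGroup:
    """An unnamed collection of neurons: the basic building block of a pathway."""
    gid: str
    located_in: str                                        # macroscopic structure containing the group
    projects_to: List[str] = field(default_factory=list)   # ids of downstream NeuronGroups

# Two independent neuron groups inside the same macroscopic structure carry distinct
# outgoing connections ("internal wiring"), which a plain structure-to-structure
# connection list could not keep apart. Group names are hypothetical examples.
groups = {
    "pretectal_to_ew": NeuronGroup("pretectal_to_ew", "pretectal_area", ["ew_preganglionic"]),
    "pretectal_to_sc": NeuronGroup("pretectal_to_sc", "pretectal_area", ["sc_deep_layer_group"]),
    "ew_preganglionic": NeuronGroup("ew_preganglionic", "edinger_westphal_nucleus"),
    "sc_deep_layer_group": NeuronGroup("sc_deep_layer_group", "superior_colliculus"),
}

def pathways_leaving(structure: str):
    """List each independent connection leaving a macroscopic structure."""
    return [(g.gid, g.projects_to) for g in groups.values() if g.located_in == structure]

print(pathways_leaving("pretectal_area"))
# [('pretectal_to_ew', ['ew_preganglionic']), ('pretectal_to_sc', ['sc_deep_layer_group'])]
```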
Simultaneous Spectral-Spatial Feature Selection and Extraction for Hyperspectral Images.
Zhang, Lefei; Zhang, Qian; Du, Bo; Huang, Xin; Tang, Yuan Yan; Tao, Dacheng
2018-01-01
In hyperspectral remote sensing data mining, it is important to take into account both spectral and spatial information, such as the spectral signature, texture features, and morphological properties, to improve performance, e.g., image classification accuracy. From a feature representation point of view, a natural approach to handle this situation is to concatenate the spectral and spatial features into a single, high-dimensional vector and then apply a dimension reduction technique directly to that concatenated vector before feeding it into the subsequent classifier. However, multiple features from various domains have different physical meanings and statistical properties, and such concatenation does not efficiently exploit the complementary properties among the different features, which should help boost feature discriminability. Furthermore, it is also difficult to interpret the transformed results of the concatenated vector. Consequently, finding a physically meaningful consensus low-dimensional feature representation of the original multiple features is still a challenging task. In order to address these issues, we propose a novel feature learning framework, i.e., a simultaneous spectral-spatial feature selection and extraction algorithm, for hyperspectral image spectral-spatial feature representation and classification. Specifically, the proposed method learns a latent low-dimensional subspace by projecting the spectral-spatial features into a common feature space, where the complementary information is effectively exploited and, simultaneously, only the most significant original features are transformed. Encouraging experimental results on three publicly available hyperspectral remote sensing datasets confirm that the proposed method is effective and efficient.
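For context, the "stacked-vector" baseline that the abstract argues against can be sketched in a few lines: concatenate spectral and spatial features, reduce the stacked vector with a single dimension-reduction step, and classify. The sketch below shows only that baseline on synthetic data (it is not the proposed simultaneous selection-and-extraction algorithm), and all dimensions and model choices are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Stand-in pixel features: 300 labelled pixels with 100 spectral bands and
# 60 spatial (texture/morphology) descriptors, three classes.
n_pix = 300
labels = rng.integers(0, 3, n_pix)
spectral = rng.normal(size=(n_pix, 100)) + labels[:, None] * 0.2
spatial = rng.normal(size=(n_pix, 60)) + labels[:, None] * 0.1

# Baseline pipeline the paper criticises: concatenate the two feature domains,
# reduce the stacked vector with PCA, then classify.
stacked = np.hstack([spectral, spatial])
reduced = PCA(n_components=20).fit_transform(stacked)
acc = cross_val_score(SVC(kernel="rbf"), reduced, labels, cv=5).mean()
print(f"stacked-feature baseline accuracy: {acc:.2f}")
```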
NASA Astrophysics Data System (ADS)
Doan, Bich-Thuy; Autret, Gwennhael; Mispelter, Joël; Méric, Philippe; Même, William; Montécot-Dubourg, Céline; Corrèze, Jean-Loup; Szeremeta, Frédéric; Gillet, Brigitte; Beloeil, Jean-Claude
2009-05-01
13C spectroscopy combined with the injection of 13C-labeled substrates is a powerful method for the study of brain metabolism in vivo. Since highly localized measurements are required in a heterogeneous organ such as the brain, it is of interest to augment the sensitivity of 13C spectroscopy by proton acquisition. Furthermore, as focal cerebral lesions are often encountered in animal models of disorders in which the two brain hemispheres are compared, we wished to develop a bi-voxel localized sequence for the simultaneous bilateral investigation of rat brain metabolism, with no need for external additional references. Two sequences were developed at 9.4 T: a bi-voxel 1H-(13C) STEAM-POCE (Proton Observed Carbon Edited) sequence and a bi-voxel 1H-(13C) PRESS-POCE adiabatically decoupled sequence with Hadamard encoding. Hadamard encoding allows both voxels to be recorded simultaneously, with the same acquisition time as that required for a single voxel. The method was validated in a biological investigation into the neuronal damage and the effect on the tricarboxylic acid cycle in localized excitotoxic lesions. Following an excitotoxic quinolinate-induced localized lesion in the rat cortex and the infusion of U-13C glucose, two 1H-(13C) spectra of distinct (4 × 4 × 4 mm3) voxels, one centred on the injured hemisphere and the other on the contralateral hemisphere, were recorded simultaneously. Two 1H bi-voxel spectra were also recorded and showed a significant decrease in N-acetyl aspartate, and an accumulation of lactate in the ipsilateral hemisphere. The 1H-(13C) spectra could be recorded dynamically as a function of time, and showed a fall in the glutamate/glutamine ratio and the presence of a stable glutamine pool, with a permanent increase of lactate in the ipsilateral hemisphere. This bi-voxel 1H-(13C) method can be used to investigate both brain hemispheres simultaneously, and to perform dynamic studies. We report here the neuronal damage and the effect on the tricarboxylic acid cycle in localized excitotoxic lesions.
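The reason Hadamard encoding records both voxels in the time of one is a simple add-and-subtract scheme: the two acquisitions correspond to the rows of a 2 × 2 Hadamard matrix, and the individual voxel signals are recovered by inverting it. The numerical sketch below is a hedged illustration of that arithmetic only, with arbitrary signal values, not a simulation of the pulse sequence.

```python
import numpy as np

rng = np.random.default_rng(0)
voxel_ipsi, voxel_contra = 3.0, 5.0   # true signals from the two voxels (arbitrary units)

# Two Hadamard-encoded acquisitions: both voxels excited together, with the
# second voxel's phase inverted on the second scan (rows of a 2x2 Hadamard matrix).
H = np.array([[1, 1],
              [1, -1]], float)
acquisitions = H @ np.array([voxel_ipsi, voxel_contra]) + rng.normal(scale=0.05, size=2)

# Decoding: apply the inverse Hadamard transform to separate the two voxels.
decoded = np.linalg.inv(H) @ acquisitions
print(np.round(decoded, 2))   # ~[3., 5.]
```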
Measuring Asymmetric Interactions in Resting State Brain Networks
Joshi, Anand A.; Salloum, Ronald; Bhushan, Chitresh; Leahy, Richard M.
2015-01-01
Directed graph representations of brain networks are increasingly being used in brain image analysis to indicate the direction and level of influence among brain regions. Most of the existing techniques for directed graph representations are based on time series analysis and the concept of causality, and use time lag information in the brain signals. These time lag-based techniques can be inadequate for functional magnetic resonance imaging (fMRI) signal analysis due to the limited time resolution of fMRI as well as the low frequency hemodynamic response. The aim of this paper is to present a novel measure of necessity that uses asymmetry in the joint distribution of brain activations to infer the direction and level of interaction among brain regions. We present a mathematical formula for computing necessity and extend this measure to partial necessity, which can potentially distinguish between direct and indirect interactions. These measures do not depend on time lag for directed modeling of brain interactions and therefore are more suitable for fMRI signal analysis. The necessity measures were used to analyze resting state fMRI data to determine the presence of hierarchy and asymmetry of brain interactions during resting state. We performed ROI-wise analysis using the proposed necessity measures to study the default mode network. The empirical joint distribution of the fMRI signals was determined using kernel density estimation, and was used for computation of the necessity and partial necessity measures. The significance of these measures was determined using a one-sided Wilcoxon rank-sum test. Our results are consistent with the hypothesis that the posterior cingulate cortex plays a central role in the default mode network. PMID:26221690
On Cognition, Structured Sequence Processing, and Adaptive Dynamical Systems
NASA Astrophysics Data System (ADS)
Petersson, Karl Magnus
2008-11-01
Cognitive neuroscience approaches the brain as a cognitive system: a system that functionally is conceptualized in terms of information processing. We outline some aspects of this concept and consider a physical system to be an information processing device when a subclass of its physical states can be viewed as representational/cognitive and transitions between these can be conceptualized as a process operating on these states by implementing operations on the corresponding representational structures. We identify a generic and fundamental problem in cognition: sequentially organized structured processing. Structured sequence processing provides the brain, in an essential sense, with its processing logic. In an approach addressing this problem, we illustrate how to integrate levels of analysis within a framework of adaptive dynamical systems. We note that the dynamical system framework lends itself to a description of asynchronous event-driven devices, which is likely to be important in cognition because the brain appears to be an asynchronous processing system. We use the human language faculty and natural language processing as a concrete example throughout.
What Should Be the Roles of Conscious States and Brain States in Theories of Mental Activity?
Dulany, Donelson E.
2011-01-01
Answers to the title’s question have been influenced by a history in which an early science of consciousness was rejected by behaviourists on the argument that this entails commitment to ontological dualism and “free will” in the sense of indeterminism. This is, however, a confusion of theoretical assertions with metaphysical assertions. Nevertheless, a legacy within computational and information-processing views of mind rejects or de-emphasises a role for consciousness. This paper sketches a mentalistic metatheory in which conscious states are the sole carriers of symbolic representations, and thus have a central role in the explanation of mental activity and action, while specifying determinism and materialism as useful working assumptions. A mentalistic theory of causal learning, experimentally examined with phenomenal reports, is followed by examination of these questions: Are there common roles for phenomenal reports and brain imaging? Is there defensible evidence for unconscious brain states carrying symbolic representations? Are there interesting dissociations within consciousness? PMID:21694964
Simultaneous measurement of glucose transport and utilization in the human brain.
Shestov, Alexander A; Emir, Uzay E; Kumar, Anjali; Henry, Pierre-Gilles; Seaquist, Elizabeth R; Öz, Gülin
2011-11-01
Glucose is the primary fuel for brain function, and determining the kinetics of cerebral glucose transport and utilization is critical for quantifying cerebral energy metabolism. The kinetic parameters of cerebral glucose transport, K(M)(t) and V(max)(t), in humans have so far been obtained by measuring steady-state brain glucose levels by proton ((1)H) NMR as a function of plasma glucose levels and fitting steady-state models to these data. Extraction of the kinetic parameters for cerebral glucose transport necessitated assuming a constant cerebral metabolic rate of glucose (CMR(glc)) obtained from other tracer studies, such as (13)C NMR. Here we present new methodology to simultaneously obtain kinetic parameters for glucose transport and utilization in the human brain by fitting both dynamic and steady-state (1)H NMR data with a reversible, non-steady-state Michaelis-Menten model. Dynamic data were obtained by measuring brain and plasma glucose time courses during glucose infusions to raise and maintain plasma concentration at ∼17 mmol/l for ∼2 h in five healthy volunteers. Steady-state brain vs. plasma glucose concentrations were taken from literature and the steady-state portions of data from the five volunteers. In addition to providing simultaneous measurements of glucose transport and utilization and obviating assumptions for constant CMR(glc), this methodology does not necessitate infusions of expensive or radioactive tracers. Using this new methodology, we found that the maximum transport capacity for glucose through the blood-brain barrier was nearly twofold higher than maximum cerebral glucose utilization. The glucose transport and utilization parameters were consistent with previously published values for human brain.
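The reversible Michaelis-Menten description used above can be made concrete with a small simulation: brain glucose changes according to saturable influx from plasma, saturable efflux back to plasma, and a constant utilisation term CMRglc. The sketch below is a hedged illustration of that model form only; the parameter values (whose roughly twofold Tmax-to-CMRglc ratio echoes the abstract's conclusion but is otherwise invented), the constant plasma level, and the omission of unit conversions via the brain distribution volume are all simplifying assumptions, not the study's fitted estimates.

```python
from scipy.integrate import solve_ivp

# Reversible Michaelis-Menten model of brain glucose: saturable transport in from
# plasma, saturable transport back out, and utilisation at a fixed rate CMRglc.
# Parameter values are illustrative; unit conversions via the brain distribution
# volume are omitted for simplicity.
Tmax = 1.0     # maximum transport capacity (illustrative units)
Kt = 5.0       # transport Michaelis constant, mmol/l
CMRglc = 0.45  # cerebral metabolic rate of glucose (illustrative units)

def dGbrain_dt(t, G_brain, G_plasma):
    influx = Tmax * G_plasma / (Kt + G_plasma)
    efflux = Tmax * G_brain[0] / (Kt + G_brain[0])
    return [influx - efflux - CMRglc]

# Hold plasma glucose at 17 mmol/l (roughly the clamp level quoted above) and
# integrate brain glucose toward its new steady state over two hours.
sol = solve_ivp(dGbrain_dt, (0, 120), [1.0], args=(17.0,), max_step=1.0)
print(f"brain glucose after 2 h: {sol.y[0, -1]:.2f} mmol/l")
```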
Zhou, Haibo; Liu, Junlai; Zhou, Changyang; Gao, Ni; Rao, Zhiping; Li, He; Hu, Xinde; Li, Changlin; Yao, Xuan; Shen, Xiaowen; Sun, Yidi; Wei, Yu; Liu, Fei; Ying, Wenqin; Zhang, Junming; Tang, Cheng; Zhang, Xu; Xu, Huatai; Shi, Linyu; Cheng, Leping; Huang, Pengyu; Yang, Hui
2018-03-01
Despite rapid progress in the genome-editing field, in vivo simultaneous overexpression of multiple genes remains challenging. We generated a transgenic mouse using an improved dCas9 system that enables simultaneous and precise in vivo transcriptional activation of multiple genes and long noncoding RNAs in the nervous system. As proof of concept, we were able to use targeted activation of endogenous neurogenic genes in these transgenic mice to directly and efficiently convert astrocytes into functional neurons in vivo. This system provides a flexible and rapid screening platform for studying complex gene networks and gain-of-function phenotypes in the mammalian brain.
Xue, Songchao; Gong, Hui; Jiang, Tao; Luo, Weihua; Meng, Yuanzheng; Liu, Qian; Chen, Shangbin; Li, Anan
2014-01-01
The topology of the cerebral vasculature, which is the energy transport corridor of the brain, can be used to study cerebral circulatory pathways. Limited by the restrictions of the vascular markers and imaging methods, studies on cerebral vascular structure now mainly focus on either observation of the macro vessels in a whole brain or imaging of the micro vessels in a small region. Simultaneous vascular studies of arteries, veins and capillaries have not been achieved in the whole brain of mammals. Here, we have combined an improved gelatin-Indian ink vessel perfusion process with Micro-Optical Sectioning Tomography to image the vessel network of an entire mouse brain. After 17 days of imaging, a complete dataset of the entire cerebral vasculature was acquired. The voxel resolution is 0.35×0.4×2.0 µm3 for the whole brain. In addition to observations of fine and complex vascular networks in the reconstructed slices and whole-brain views, representative continuous vascular tracking was demonstrated in the deep thalamus. This study provides an effective method for studying the entire macro- and microvascular networks of the mouse brain simultaneously. PMID:24498247
Strube-Bloss, Martin F.; Herrera-Valdez, Marco A.; Smith, Brian H.
2012-01-01
Neural representations of odors are subject to computations that involve sequentially convergent and divergent anatomical connections across different brain areas in both mammals and insects. Furthermore, in both mammals and insects higher order brain areas are connected via feedback connections. In order to understand the transformations and interactions that this connectivity makes possible, an ideal experiment would compare neural responses across different, sequential processing levels. Here we present results of recordings from a first order olfactory neuropile – the antennal lobe (AL) – and a higher order multimodal integration and learning center – the mushroom body (MB) – in the honey bee brain. We recorded projection neurons (PN) of the AL and extrinsic neurons (EN) of the MB, which provide the outputs from the two neuropils. Recordings at each level were made in different animals in some experiments and simultaneously in the same animal in others. We presented two odors and their mixture to compare odor response dynamics as well as classification speed and accuracy at each neural processing level. Surprisingly, the EN ensemble starts separating odor stimuli rapidly, before the PN ensemble has reached significant separation. Furthermore, the EN ensemble at the MB output reaches a maximum separation of odors between 84–120 ms after odor onset, which is 26 to 133 ms faster than the maximum separation at the AL output ensemble two synapses earlier in processing. It is likely that a subset of very fast PNs, which respond before the ENs, may initiate the rapid EN ensemble response. We suggest therefore that the timing of the EN ensemble activity would allow retroactive integration of its signal into the ongoing computation of the AL via centrifugal feedback. PMID:23209711
Ko, Chih-Hung; Liu, Gin-Chung; Yen, Ju-Yu; Yen, Cheng-Fang; Chen, Cheng-Sheng; Lin, Wei-Chen
2013-04-01
Internet gaming addiction (IGA) has been classified as an addictive disorder in the proposed DSM 5 draft. However, whether its underlying addiction mechanism is similar to that of other substance use disorders has not been confirmed. The present functional magnetic resonance imaging study aimed to evaluate the brain correlates of cue-induced gaming urge and smoking craving in subjects with both IGA and nicotine dependence, allowing a simultaneous comparison of cue-induced brain reactivity for gaming and smoking. For this purpose, 16 subjects with both IGA and nicotine dependence (comorbid group) and 16 controls were recruited from the community. All subjects underwent 3-T fMRI scans while viewing images associated with online games, smoking, and neutral images, which were arranged according to an event-related design. The resulting image data were analyzed with full factorial and conjunction analyses in SPM5. The results demonstrate that the anterior cingulate and parahippocampus activate more strongly for both cue-induced gaming urge and smoking craving in the comorbid group than in the control group. The conjunction analysis demonstrates that the bilateral parahippocampal gyrus activates to a greater degree for both gaming urge and smoking craving in the comorbid group than in the control group. Accordingly, the study demonstrates that both IGA and nicotine dependence share similar mechanisms of cue-induced reactivity over the fronto-limbic network, particularly the parahippocampus. The results support that the context representation provided by the parahippocampus is a key mechanism not only for cue-induced smoking craving, but also for cue-induced gaming urge. Copyright © 2012 Elsevier Ltd. All rights reserved.
USDA-ARS?s Scientific Manuscript database
Several biomarkers have been individually associated with vascular brain injury, but no prior study has explored the simultaneous association of a biologically plausible panel of biomarkers with the incidence of stroke/transient ischemic attack and the prevalence of subclinical brain injury. In 3127...
Brain Network Interactions in Auditory, Visual and Linguistic Processing
ERIC Educational Resources Information Center
Horwitz, Barry; Braun, Allen R.
2004-01-01
In the paper, we discuss the importance of network interactions between brain regions in mediating performance of sensorimotor and cognitive tasks, including those associated with language processing. Functional neuroimaging, especially PET and fMRI, provide data that are obtained essentially simultaneously from much of the brain, and thus are…
The representation of order information in auditory-verbal short-term memory.
Kalm, Kristjan; Norris, Dennis
2014-05-14
Here we investigate how order information is represented in auditory-verbal short-term memory (STM). We used fMRI and a serial recall task to dissociate neural activity patterns representing the phonological properties of the items stored in STM from the patterns representing their order. For this purpose, we analyzed fMRI activity patterns elicited by different item sets and different orderings of those items. These fMRI activity patterns were compared with the predictions made by positional and chaining models of serial order. The positional models encode associations between items and their positions in a sequence, whereas the chaining models encode associations between successive items and retain no position information. We show that a set of brain areas in the postero-dorsal stream of auditory processing store associations between items and order as predicted by a positional model. The chaining model of order representation generates a different pattern similarity prediction, which was shown to be inconsistent with the fMRI data. Our results thus favor a neural model of order representation that stores item codes, position codes, and the mapping between them. This study provides the first fMRI evidence for a specific model of order representation in the human brain. Copyright © 2014 the authors 0270-6474/14/346879-08$15.00/0.
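The contrast between the two model classes can be made concrete with toy similarity predictions: a positional model scores two recalled sequences by their shared item-position bindings, whereas a chaining model scores them by their shared item-to-item transitions. The sketch below is only an illustrative caricature of that distinction, not the representational models actually fit to the fMRI patterns.

```python
def positional_similarity(seq_a, seq_b):
    """Shared item-position bindings (positional model of serial order)."""
    return sum(a == b for a, b in zip(seq_a, seq_b)) / len(seq_a)

def chaining_similarity(seq_a, seq_b):
    """Shared item-to-item transitions (chaining model, no position information)."""
    pairs_a = set(zip(seq_a, seq_a[1:]))
    pairs_b = set(zip(seq_b, seq_b[1:]))
    return len(pairs_a & pairs_b) / len(pairs_a | pairs_b)

# Same letters recalled in a different order: no item keeps its position, but two
# transitions (K followed by R, and R followed by M) are preserved, so the two
# models predict different similarity between the corresponding activity patterns.
seq1 = ["B", "K", "R", "M"]
seq2 = ["K", "R", "M", "B"]
print(positional_similarity(seq1, seq2))  # 0.0
print(chaining_similarity(seq1, seq2))    # 0.5
```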
Spatial attention determines the nature of nonverbal number representation.
Hyde, Daniel C; Wood, Justin N
2011-09-01
Coordinated studies of adults, infants, and nonhuman animals provide evidence for two systems of nonverbal number representation: a "parallel individuation" system that represents individual items and a "numerical magnitude" system that represents the approximate cardinal value of a group. However, there is considerable debate about the nature and functions of these systems, due largely to the fact that some studies show a dissociation between small (1-3) and large (>3) number representation, whereas others do not. Using event-related potentials, we show that it is possible to determine which system will represent the numerical value of a small number set (1-3 items) by manipulating spatial attention. Specifically, when attention can select individual objects, an early brain response (N1) scales with the cardinal value of the display, the signature of parallel individuation. In contrast, when attention cannot select individual objects or is occupied by another task, a later brain response (P2p) scales with ratio, the signature of the approximate numerical magnitude system. These results provide neural evidence that small numbers can be represented as approximate numerical magnitudes. Further, they empirically demonstrate the importance of early attentional processes to number representation by showing that the way in which attention disperses across a scene determines which numerical system will deploy in a given context.
The Representation of Object-Directed Action and Function Knowledge in the Human Brain
Chen, Quanjing; Garcea, Frank E.; Mahon, Bradford Z.
2016-01-01
The appropriate use of everyday objects requires the integration of action and function knowledge. Previous research suggests that action knowledge is represented in frontoparietal areas while function knowledge is represented in temporal lobe regions. Here we used multivoxel pattern analysis to investigate the representation of object-directed action and function knowledge while participants executed pantomimes of familiar tool actions. A novel approach for decoding object knowledge was used in which classifiers were trained on one pair of objects and then tested on a distinct pair; this permitted a measurement of classification accuracy over and above object-specific information. Region of interest (ROI) analyses showed that object-directed actions could be decoded in tool-preferring regions of both parietal and temporal cortex, while no independently defined tool-preferring ROI showed successful decoding of object function. However, a whole-brain searchlight analysis revealed that while frontoparietal motor and peri-motor regions are engaged in the representation of object-directed actions, medial temporal lobe areas in the left hemisphere are involved in the representation of function knowledge. These results indicate that both action and function knowledge are represented in a topographically coherent manner that is amenable to study with multivariate approaches, and that the left medial temporal cortex represents knowledge of object function. PMID:25595179
Carey, Daniel; Miquel, Marc E; Evans, Bronwen G; Adank, Patti; McGettigan, Carolyn
2017-05-01
Imitating speech necessitates the transformation from sensory targets to vocal tract motor output, yet little is known about the representational basis of this process in the human brain. Here, we address this question by using real-time MR imaging (rtMRI) of the vocal tract and functional MRI (fMRI) of the brain in a speech imitation paradigm. Participants trained on imitating a native vowel and a similar nonnative vowel that required lip rounding. Later, participants imitated these vowels and an untrained vowel pair during separate fMRI and rtMRI runs. Univariate fMRI analyses revealed that regions including left inferior frontal gyrus were more active during sensorimotor transformation (ST) and production of nonnative vowels, compared with native vowels; further, ST for nonnative vowels activated somatomotor cortex bilaterally, compared with ST of native vowels. Using test representational similarity analysis (RSA) models constructed from participants' vocal tract images and from stimulus formant distances, we found that RSA searchlight analyses of fMRI data showed either type of model could be represented in somatomotor, temporal, cerebellar, and hippocampal neural activation patterns during ST. We thus provide the first evidence of widespread and robust cortical and subcortical neural representation of vocal tract and/or formant parameters, during prearticulatory ST. © The Author 2017. Published by Oxford University Press.
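The RSA searchlight logic referred to above compares a neural representational dissimilarity matrix (RDM), built from pairwise distances between condition patterns, with a model RDM built from stimulus parameters such as formant distances or vocal tract measurements. The sketch below illustrates that comparison on synthetic data; the distance metrics, dimensions and random data are assumptions, not the study's models.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_conditions, n_voxels = 4, 120   # e.g., four imitated vowels in one searchlight sphere

# Synthetic condition patterns and a synthetic model feature space
# (stand-ins for formant values or vocal-tract measurements).
patterns = rng.normal(size=(n_conditions, n_voxels))
model_features = rng.normal(size=(n_conditions, 2))

# Representational dissimilarity matrices: pairwise distances between conditions,
# returned as condensed upper-triangle vectors.
neural_rdm = pdist(patterns, metric="correlation")
model_rdm = pdist(model_features, metric="euclidean")

# RSA statistic: rank correlation between the two RDMs.
rho, _ = spearmanr(neural_rdm, model_rdm)
print(f"neural-model RDM correlation: {rho:.2f}")
```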
Multisensory decisions provide support for probabilistic number representations.
Kanitscheider, Ingmar; Brown, Amanda; Pouget, Alexandre; Churchland, Anne K
2015-06-01
A large body of evidence suggests that an approximate number sense allows humans to estimate numerosity in sensory scenes. This ability is widely observed in humans, including those without formal mathematical training. Despite this, many outstanding questions remain about the nature of the numerosity representation in the brain. Specifically, it is not known whether approximate numbers are represented as scalar estimates of numerosity or, alternatively, as probability distributions over numerosity. In the present study, we used a multisensory decision task to distinguish these possibilities. We trained human subjects to decide whether a test stimulus had a larger or smaller numerosity compared with a fixed reference. Depending on the trial, the numerosity was presented as either a sequence of visual flashes or a sequence of auditory tones, or both. To test for a probabilistic representation, we varied the reliability of the stimulus by adding noise to the visual stimuli. In accordance with a probabilistic representation, we observed a significant improvement in multisensory compared with unisensory trials. Furthermore, a trial-by-trial analysis revealed that although individual subjects showed strategic differences in how they leveraged auditory and visual information, all subjects exploited the reliability of unisensory cues. An alternative, nonprobabilistic model, in which subjects combined cues without regard for reliability, was not able to account for these trial-by-trial choices. These findings provide evidence that the brain relies on a probabilistic representation for numerosity decisions. Copyright © 2015 the American Physiological Society.
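The probabilistic-representation account predicts standard reliability-weighted cue combination: each cue is weighted by its inverse variance, and the combined estimate is more precise than either cue alone. The numbers in the sketch below are illustrative assumptions used only to show the arithmetic behind the predicted multisensory improvement.

```python
# Standard reliability-weighted (inverse-variance) cue combination, the signature
# of a probabilistic representation; sigma values are illustrative assumptions.
sigma_visual, sigma_auditory = 2.0, 1.5   # unisensory numerosity-estimate SDs
est_visual, est_auditory = 11.0, 13.0     # single-trial numerosity estimates

w_v = sigma_auditory**2 / (sigma_visual**2 + sigma_auditory**2)
w_a = 1.0 - w_v
combined_estimate = w_v * est_visual + w_a * est_auditory
combined_sigma = (1 / (1 / sigma_visual**2 + 1 / sigma_auditory**2)) ** 0.5

print(round(combined_estimate, 2))  # between the two unisensory estimates
print(round(combined_sigma, 2))     # smaller than either unisensory SD
```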
Inborn and experience-dependent models of categorical brain organization. A position paper
Gainotti, Guido
2015-01-01
The present review aims to summarize the debate in contemporary neuroscience between inborn and experience-dependent models of conceptual representations that goes back to the description of category-specific semantic disorders for biological and artifact categories. Experience-dependent models suggest that categorical disorders are the by-product of the differential weighting of different sources of knowledge in the representation of biological and artifact categories. These models maintain that semantic disorders are not really category-specific, because they do not respect the boundaries between different categories. They also argue that the brain structures which are disrupted in a given type of category-specific semantic disorder should correspond to the areas of convergence of the sensory-motor information which play a major role in the construction of that category. Furthermore, they provide a simple interpretation of gender-related categorical effects and are supported by studies assessing the importance of prior experience in the cortical representation of objects. On the other hand, inborn models maintain that category-specific semantic disorders reflect the disruption of innate brain networks, which are shaped by natural selection to allow rapid identification of objects that are very relevant for survival. From the empirical point of view, these models are mainly supported by observations of blind subjects, which suggest that visual experience is not necessary for the emergence of category-specificity in the ventral stream of visual processing. The weight of the data supporting experience-dependent and inborn models is thoroughly discussed, stressing the fact that observations made in blind subjects are still the subject of intense debate. It is concluded that at the present state of knowledge it is not possible to choose between experience-dependent and inborn models of conceptual representations. PMID:25667570
Rachmiel, M; Cohen, M; Heymen, E; Lezinger, M; Inbar, D; Gilat, S; Bistritzer, T; Leshem, G; Kan-Dror, E; Lahat, E; Ekstein, D
2016-02-01
To assess the association between hyperglycemia and electrical brain activity in type 1 diabetes mellitus (T1DM). Nine youths with T1DM were monitored simultaneously and continuously by EEG and a continuous glucose monitoring system for 40 h. EEG powers of 0.5-80 Hz frequency bands in all the different brain regions were analyzed according to interstitial glucose concentration (IGC) ranges of 4-11 mmol/l, 11-15.5 mmol/l and >15.5 mmol/l. Analysis of variance was used to examine the differences in EEG power of each frequency band between the subgroups of IGC. Analysis was performed separately during wakefulness and sleep, controlling for age, gender and HbA1c. Mean IGC was 11.49 ± 5.26 mmol/l in 1253 combined measurements. IGC >15.5 mmol/l compared to 4-11 mmol/l was associated during wakefulness with increased EEG power of low frequencies and with decreased EEG power of high frequencies. During sleep, it was associated with increased EEG power of low frequencies in all brain areas and of high frequencies in frontal and central areas. Asymptomatic transient hyperglycemia in youth with T1DM is associated with simultaneous alterations in electrical brain activity during wakefulness and sleep. The clinical implications of immediate electrical brain alterations under hyperglycemia need to be studied and may lead to adaptations of management. Copyright © 2015. Published by Elsevier Ireland Ltd.
NASA Astrophysics Data System (ADS)
Chaudhary, Ujwal; Thompson, Bryant; Gonzalez, Jean; Jung, Young-Jin; Davis, Jennifer; Gonzalez, Patricia; Rice, Kyle; Bloyer, Martha; Elbaum, Leonard; Godavarty, Anuradha
2013-03-01
Cerebral palsy (CP) is a term that describes a group of motor impairment syndromes secondary to genetic and/or acquired disorders of the developing brain. In the current study, NIRS and motion capture were used simultaneously to correlate the brain's planning and execution activity with arm movement in healthy individuals. The prefrontal region of the brain was non-invasively imaged using a custom-built continuous-wave near infrared spectroscopy (NIRS) system. The kinematics of the arm movement during the studies was recorded using an infrared-based motion capture system, Qualisys. During the study, the subjects (over 18 years of age) performed 30 s of arm movement followed by 30 s of rest, repeated 5 times, with both their dominant and non-dominant arms. The optical signal acquired from the NIRS system was processed to elucidate activation and lateralization in the prefrontal region of participants. The preliminary results show a difference in optical response between task and rest in healthy adults. Simultaneous NIRS imaging and kinematic data are currently being acquired in healthy individuals and individuals with CP in order to correlate brain activity with arm movement in real time. The study has significant implications for using NIRS to elucidate how the functional activity of the brain evolves as the physical movement of the arm evolves, and it has the potential to inform the design of training and rehabilitation regimes for individuals with CP by combining kinematic monitoring with imaging of brain activity.
Selective alignment of brain responses by task demands during semantic processing.
Baggio, Giosuè
2012-04-01
The way the brain binds together words to form sentences may depend on whether and how the arising cognitive representation is to be used in behavior. The amplitude of the N400 effect in event-related brain potentials is inversely correlated with the degree of fit of a word's meaning into a semantic representation of the preceding discourse. This study reports a double dissociation in the latency characteristics of the N400 effect depending on task demands. When participants silently read words in a sentence context, without issuing a relevant overt response, greater temporal alignment over recording sites occurs for N400 onsets than peaks. If however a behavior is produced - here pressing a button in a binary probe selection task - exactly the opposite pattern is observed, with stronger alignment of N400 peaks than onsets. The peak amplitude of the N400 effect correlates best with the latency characteristic showing less temporal dispersion. These findings suggest that meaning construction in the brain is subtly affected by task demands, and that there is complex functional integration between semantic combinatorics and control systems handling behavioral goals. Copyright © 2012 Elsevier Ltd. All rights reserved.
Trumpp, Natalie M; Traub, Felix; Pulvermüller, Friedemann; Kiefer, Markus
2014-02-01
Classical theories of semantic memory assume that concepts are represented in a unitary amodal memory system. In challenging this classical view, pure or hybrid modality-specific theories propose that conceptual representations are grounded in the sensory-motor brain areas, which typically process sensory and action-related information. Although neuroimaging studies provided evidence for a functional-anatomical link between conceptual processing of sensory or action-related features and the sensory-motor brain systems, it has been argued that aspects of such sensory-motor activation may not directly reflect conceptual processing but rather strategic imagery or postconceptual elaboration. In the present ERP study, we investigated masked effects of acoustic and action-related conceptual features to probe unconscious automatic conceptual processing in isolation. Subliminal feature-specific ERP effects at frontocentral electrodes were observed, which differed with regard to polarity, topography, and underlying brain electrical sources in congruency with earlier findings under conscious viewing conditions. These findings suggest that conceptual acoustic and action representations can also be unconsciously accessed, thereby excluding any postconceptual strategic processes. This study therefore further substantiates a grounding of conceptual and semantic processing in action and perception.
Shtyrov, Yury; MacGregor, Lucy J
2016-05-24
Rapid and efficient processing of external information by the brain is vital to survival in a highly dynamic environment. The key channel humans use to exchange information is language, but the neural underpinnings of its processing are still not fully understood. We investigated the spatio-temporal dynamics of neural access to word representations in the brain by scrutinising the brain's activity elicited in response to psycholinguistically, visually and phonologically matched groups of familiar words and meaningless pseudowords. Stimuli were briefly presented on the visual-field periphery to experimental participants whose attention was occupied with a non-linguistic visual feature-detection task. The neural activation elicited by these unattended orthographic stimuli was recorded using multi-channel whole-head magnetoencephalography, and the timecourse of lexically-specific neuromagnetic responses was assessed in sensor space as well as at the level of cortical sources, estimated using individual MR-based distributed source reconstruction. Our results demonstrate a neocortical signature of automatic near-instant access to word representations in the brain: activity in the perisylvian language network characterised by specific activation enhancement for familiar words, starting as early as ~70 ms after the onset of unattended word stimuli and underpinned by temporal and inferior-frontal cortices.
Magnani, Barbara; Frassinetti, Francesca; Ditye, Thomas; Oliveri, Massimiliano; Costantini, Marcello; Walsh, Vincent
2014-05-15
Prismatic adaptation (PA) has been shown to affect left-to-right spatial representations of temporal durations. A leftward aftereffect usually distorts time representation toward an underestimation, while a rightward aftereffect usually results in an overestimation of temporal durations. Here, we used functional magnetic resonance imaging (fMRI) to study the neural mechanisms that underlie PA effects on time perception. Additionally, we investigated whether the effect of PA on time is transient or stable and, in the case of stability, which cortical areas are responsible for its maintenance. Functional brain images were acquired while participants (n=17) performed a time reproduction task and a control-task before, immediately after and 30 min after PA inducing a leftward aftereffect, administered outside the scanner. The leftward aftereffect induced an underestimation of time intervals that lasted for at least 30 min. The left anterior insula and the left superior frontal gyrus showed increased functional activation immediately after versus before PA in the time versus the control-task, suggesting these brain areas to be involved in the executive spatial manipulation of the representation of time. The left middle frontal gyrus showed an increase in activation 30 min after PA with respect to before PA. This suggests that this brain region may play a key role in the maintenance of the PA effect over time. Copyright © 2014. Published by Elsevier Inc.
Motor and linguistic linking of space and time in the cerebellum.
Oliveri, Massimiliano; Bonnì, Sonia; Turriziani, Patrizia; Koch, Giacomo; Lo Gerfo, Emanuele; Torriero, Sara; Vicario, Carmelo Mario; Petrosini, Laura; Caltagirone, Carlo
2009-11-20
Recent literature documented the presence of spatial-temporal interactions in the human brain. The aim of the present study was to verify whether representation of past and future is also mapped onto spatial representations and whether the cerebellum may be a neural substrate for linking space and time in the linguistic domain. We asked whether processing of the tense of a verb is influenced by the space where response takes place and by the semantics of the verb. Responses to past tense were facilitated in the left space while responses to future tense were facilitated in the right space. Repetitive transcranial magnetic stimulation (rTMS) of the right cerebellum selectively slowed down responses to future tense of action verbs; rTMS of both cerebellar hemispheres decreased accuracy of responses to past tense in the left space and to future tense in the right space for non-verbs, and to future tense in the right space for state verbs. The results suggest that representation of past and future is mapped onto spatial formats and that motor action could represent the link between spatial and temporal dimensions. Right cerebellar, left motor brain networks could be part of the prospective brain, whose primary function is to use past experiences to anticipate future events. Both cerebellar hemispheres could play a role in establishing the grammatical rules for verb conjugation.
Wallwork, Sarah B; Bellan, Valeria; Catley, Mark J; Moseley, G Lorimer
2016-08-01
Neural representations, or neurotags, refer to the idea that networks of brain cells, distributed across multiple brain areas, work in synergy to produce outputs. The brain can then be considered a complex array of neurotags, each influencing and being influenced by the others. The output of some neurotags acts on other systems, for example movement, or on consciousness, for example pain. This concept of neurotags has sparked a new body of research into pain and rehabilitation. We draw on this research and the concept of a cortical body matrix, a network of representations that subserves the regulation and protection of the body and the space around it, to suggest important implications for rehabilitation of sports injury and for sports performance. Protective behaviours associated with pain have been reinterpreted in light of these conceptual models. With a particular focus on rehabilitation of the injured athlete, this review presents the theoretical underpinnings of the cortical body matrix and its application within the sporting context. Therapeutic approaches based on these ideas are discussed and the efficacy of the most tested approaches is addressed. By integrating current thought in pain and cognitive neuroscience related to sports rehabilitation, recommendations for clinical practice and future research are suggested. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
Missouri University Multi-Plane Imager (MUMPI): A high sensitivity rapid dynamic ECT brain imager
DOE Office of Scientific and Technical Information (OSTI.GOV)
Logan, K.W.; Holmes, R.A.
1984-01-01
The authors have designed a unique ECT imaging device that can record rapid dynamic images of brain perfusion. The Missouri University Multi-Plane Imager (MUMPI) uses a single crystal detector that produces four orthogonal two-dimensional images simultaneously. Multiple slice images are reconstructed from counts recorded from stepwise or continuous collimator rotation. Four simultaneous 2-d image fields may also be recorded and reviewed. The cylindrical sodium iodide crystal and the rotating collimator concentrically surround the source volume being imaged, with the collimator the only moving part. The design and function parameters of MUMPI have been compared to other competitive tomographic head imaging devices. MUMPI's principal advantages are: 1) simultaneous direct acquisition of four two-dimensional images; 2) extremely rapid projection set acquisition for ECT reconstruction; and 3) instrument practicality and economy due to single detector design and the absence of heavy mechanical moving components (only collimator rotation is required). MUMPI should be ideal for imaging neutral lipophilic chelates such as Tc-99m-PnAO, which passively diffuses across the intact blood-brain-barrier and rapidly clears from brain tissue.
Hsu, Nina S; Jaeggi, Susanne M; Novick, Jared M
2017-03-01
Regions within the left inferior frontal gyrus (LIFG) have simultaneously been implicated in syntactic processing and cognitive control. Accounts attempting to unify LIFG's function hypothesize that, during comprehension, cognitive control resolves conflict between incompatible representations of sentence meaning. Some studies demonstrate co-localized activity within LIFG for syntactic and non-syntactic conflict resolution, suggesting domain-generality, but others show non-overlapping activity, suggesting domain-specific cognitive control and/or regions that respond uniquely to syntax. We propose however that examining exclusive activation sites for certain contrasts creates a false dichotomy: both domain-general and domain-specific neural machinery must coordinate to facilitate conflict resolution across domains. Here, subjects completed four diverse tasks involving conflict (one syntactic, three non-syntactic) while undergoing fMRI. Though LIFG consistently activated within individuals during conflict processing, functional connectivity analyses revealed task-specific coordination with distinct brain networks. Thus, LIFG may function as a conflict-resolution "hub" that cooperates with specialized neural systems according to information content. Copyright © 2016 Elsevier Inc. All rights reserved.
Phase transitions in Pareto optimal complex networks
NASA Astrophysics Data System (ADS)
Seoane, Luís F.; Solé, Ricard
2015-09-01
The organization of interactions in complex systems can be described by networks connecting different units. These graphs are useful representations of the local and global complexity of the underlying systems. The origin of their topological structure can be diverse, resulting from different mechanisms including multiplicative processes and optimization. In spatial networks or in graphs where cost constraints are at work, as it occurs in a plethora of situations from power grids to the wiring of neurons in the brain, optimization plays an important part in shaping their organization. In this paper we study network designs resulting from a Pareto optimization process, where different simultaneous constraints are the targets of selection. We analyze three variations on a problem, finding phase transitions of different kinds. Distinct phases are associated with different arrangements of the connections, but the need for drastic topological changes does not determine the presence or the nature of the phase transitions encountered. Instead, the functions under optimization do play a determinant role. This reinforces the view that phase transitions do not arise from intrinsic properties of a system alone, but from the interplay of that system with its external constraints.
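To make the selection criterion concrete, the sketch below shows how a Pareto-optimal subset of candidate network designs can be extracted when two objectives are minimized simultaneously. It is illustrative only; the particular objectives (wiring cost and average shortest-path length) and the networkx-based setup are assumptions for the example, not the functions studied in the paper.

```python
# Illustrative sketch (not the authors' code): extract the Pareto-optimal set of
# candidate network designs under two simultaneously minimized objectives.
import random
import networkx as nx

def objectives(graph, positions):
    """Return (wiring cost, average shortest-path length) for a connected graph."""
    cost = sum(abs(positions[u] - positions[v]) for u, v in graph.edges())
    path_length = nx.average_shortest_path_length(graph)
    return cost, path_length

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """Keep only candidates whose objective vectors are non-dominated."""
    return [(name, obj) for name, obj in candidates
            if not any(dominates(other, obj) for _, other in candidates)]

# Tiny usage example: random connected graphs on 8 nodes placed along a line.
random.seed(0)
positions = {i: float(i) for i in range(8)}
candidates = []
for k in range(30):
    g = nx.gnp_random_graph(8, 0.4, seed=k)
    if nx.is_connected(g):
        candidates.append((f"design_{k}", objectives(g, positions)))
print(pareto_front(candidates))
```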
Oculomotor responses and visuospatial perceptual judgments compete for common limited resources
Tibber, Marc S.; Grant, Simon; Morgan, Michael J.
2010-01-01
While there is evidence for multiple spatial and attentional maps in the brain it is not clear to what extent visuoperceptual and oculomotor tasks rely on common neural representations and attentional mechanisms. Using a dual-task interference paradigm we tested the hypothesis that eye movements and perceptual judgments made to simultaneously presented visuospatial information compete for shared limited resources. Observers undertook judgments of stimulus collinearity (perceptual extrapolation) using a pointer and Gabor patch and/or performed saccades to a peripheral dot target while their eye movements were recorded. In addition, observers performed a non-spatial control task (contrast discrimination), matched for task difficulty and stimulus structure, which on the basis of previous studies was expected to represent a lesser load on putative shared resources. Greater mutual interference was indeed found between the saccade and extrapolation task pair than between the saccade and contrast discrimination task pair. These data are consistent with visuoperceptual and oculomotor responses competing for common limited resources as well as spatial tasks incurring a relatively high attentional cost. PMID:20053112
Expanding the primate body schema in sensorimotor cortex by virtual touches of an avatar.
Shokur, Solaiman; O'Doherty, Joseph E; Winans, Jesse A; Bleuler, Hannes; Lebedev, Mikhail A; Nicolelis, Miguel A L
2013-09-10
The brain representation of the body, called the body schema, is susceptible to plasticity. For instance, subjects experiencing a rubber hand illusion develop a sense of ownership of a mannequin hand when they view it being touched while tactile stimuli are simultaneously applied to their own hand. Here, the cortical basis of such an embodiment was investigated through concurrent recordings from primary somatosensory (i.e., S1) and motor (i.e., M1) cortical neuronal ensembles while two monkeys observed an avatar arm being touched by a virtual ball. Following a period when virtual touches occurred synchronously with physical brushes of the monkeys' arms, neurons in S1 and M1 started to respond to virtual touches applied alone. Responses to virtual touch occurred 50 to 70 ms later than to physical touch, consistent with the involvement of polysynaptic pathways linking the visual cortex to S1 and M1. We propose that S1 and M1 contribute to the rubber hand illusion and that, by taking advantage of plasticity in these areas, patients may assimilate neuroprosthetic limbs as parts of their body schema.
NASA Astrophysics Data System (ADS)
Menz, Veera Katharina; Schaffelhofer, Stefan; Scherberger, Hansjörg
2015-10-01
Objective. In the last decade, multiple brain areas have been investigated with respect to their decoding capability of continuous arm or hand movements. So far, these studies have mainly focused on motor or premotor areas like M1 and F5. However, there is accumulating evidence that anterior intraparietal area (AIP) in the parietal cortex also contains information about continuous movement. Approach. In this study, we decoded 27 degrees of freedom representing complete hand and arm kinematics during a delayed grasping task from simultaneously recorded activity in areas M1, F5, and AIP of two macaque monkeys (Macaca mulatta). Main results. We found that all three areas provided decoding performances that lay significantly above chance. In particular, M1 yielded highest decoding accuracy followed by F5 and AIP. Furthermore, we provide support for the notion that AIP does not only code categorical visual features of objects to be grasped, but also contains a substantial amount of temporal kinematic information. Significance. This fact could be utilized in future developments of neural interfaces restoring hand and arm movements.
Motor resonance facilitates movement execution: an ERP and kinematic study
Ménoret, Mathilde; Curie, Aurore; des Portes, Vincent; Nazir, Tatjana A.; Paulignan, Yves
2013-01-01
Action observation, simulation and execution share neural mechanisms that allow for a common motor representation. It is known that when these overlapping mechanisms are simultaneously activated by action observation and execution, motor performance is influenced by observation and vice versa. To understand the neural dynamics underlying this influence and to measure how variations in brain activity impact the precise kinematics of motor behavior, we coupled kinematics and electrophysiological recordings of participants while they performed and observed congruent or non-congruent actions or during action execution alone. We found that movement velocities and the trajectory deviations of the executed actions increased during the observation of congruent actions compared to the observation of non-congruent actions or action execution alone. This facilitation was also discernible in the motor-related potentials of the participants; the motor-related potentials were transiently more negative in the congruent condition around the onset of the executed movement, which occurred 300 ms after the onset of the observed movement. This facilitation seemed to depend not only on spatial congruency but also on the optimal temporal relationship of the observation and execution events. PMID:24133437
Neurons in the Frontal Lobe Encode the Value of Multiple Decision Variables
Kennerley, Steven W.; Dahmubed, Aspandiar F.; Lara, Antonio H.; Wallis, Jonathan D.
2009-01-01
A central question in behavioral science is how we select among choice alternatives to obtain consistently the most beneficial outcomes. Three variables are particularly important when making a decision: the potential payoff, the probability of success, and the cost in terms of time and effort. A key brain region in decision making is the frontal cortex as damage here impairs the ability to make optimal choices across a range of decision types. We simultaneously recorded the activity of multiple single neurons in the frontal cortex while subjects made choices involving the three aforementioned decision variables. This enabled us to contrast the relative contribution of the anterior cingulate cortex (ACC), the orbito-frontal cortex, and the lateral prefrontal cortex to the decision-making process. Neurons in all three areas encoded value relating to choices involving probability, payoff, or cost manipulations. However, the most significant signals were in the ACC, where neurons encoded multiplexed representations of the three different decision variables. This supports the notion that the ACC is an important component of the neural circuitry underlying optimal decision making. PMID:18752411
Sexually Monomorphic Maps and Dimorphic Responses in Rat Genital Cortex.
Lenschow, Constanze; Copley, Sean; Gardiner, Jayne M; Talbot, Zoe N; Vitenzon, Ariel; Brecht, Michael
2016-01-11
Mammalian external genitals show sexual dimorphism [1, 2] and can change size and shape upon sexual arousal. Genitals feature prominently in the oldest pieces of figural art [3] and phallic depictions of penises informed psychoanalytic thought about sexuality [4, 5]. Despite this longstanding interest, the neural representations of genitals are still poorly understood [6]. In somatosensory cortex specifically, many studies did not detect any cortical representation of genitals [7-9]. Studies in humans debate whether genitals are represented displaced below the foot of the cortical body map [10-12] or whether they are represented somatotopically [13-15]. We wondered what a high-resolution mapping of genital representations might tell us about the sexual differentiation of the mammalian brain. We identified genital responses in rat somatosensory cortex in a region previously assigned as arm/leg cortex. Genital responses were more common in males than in females. Despite such response dimorphism, we observed a stunning anatomical monomorphism of cortical penis and clitoris input maps revealed by cytochrome-oxidase-staining of cortical layer 4. Genital representations were somatotopic and bilaterally symmetric, and their relative size increased markedly during puberty. Size, shape, and erect posture give the cortical penis representation a phallic appearance pointing to a role in sexually aroused states. Cortical genital neurons showed unusual multi-body-part responses and sexually dimorphic receptive fields. Specifically, genital neurons were co-activated by distant body regions, which are touched during mounting in the respective sex. Genital maps indicate a deep homology of penis and clitoris representations in line with a fundamentally bi-sexual layout [16] of the vertebrate brain. Copyright © 2016 Elsevier Ltd. All rights reserved.
Scale-Free Brain-Wave Music from Simultaneously EEG and fMRI Recordings
Lu, Jing; Wu, Dan; Yang, Hua; Luo, Cheng; Li, Chaoyi; Yao, Dezhong
2012-01-01
In the past years, a few methods have been developed to translate human EEG to music. In 2009 (PLoS One 4: e5915), we developed a method to generate scale-free brainwave music in which the amplitude of the EEG was translated to music pitch according to the power law followed by both of them, the period of an EEG waveform was translated directly to the duration of a note, and the logarithm of the average power change of the EEG was translated to music intensity according to Fechner's law. In this work, we propose to adopt the simultaneously recorded fMRI signal to control the intensity of the EEG music; thus, EEG-fMRI music is generated by combining two different and simultaneous brain signals. Most importantly, this approach further realizes the power law for music intensity, as the fMRI signal also follows it. Thus the EEG-fMRI music takes a step forward in reflecting the physiological processes of the scale-free brain. PMID:23166768
Dela Cruz, Julie A D; Coke, Tricia; Bodnar, Richard J
2016-08-24
This study uses cellular c-fos activation to assess effects of novel ingestion of fat and sugar on brain dopamine (DA) pathways in rats. Intakes of sugars and fats are mediated by their innate attractions as well as learned preferences. Brain dopamine, especially meso-limbic and meso-cortical projections from the ventral tegmental area (VTA), has been implicated in both of these unlearned and learned responses. The concept of distributed brain networks, wherein several sites and transmitter/peptide systems interact, has been proposed to mediate palatable food intake, but there is limited evidence empirically demonstrating such actions. Thus, sugar intake elicits DA release and increases c-fos-like immunoreactivity (FLI) from individual VTA DA projection zones including the nucleus accumbens (NAC), amygdala (AMY) and medial prefrontal cortex (mPFC) as well as the dorsal striatum. Further, central administration of selective DA receptor antagonists into these sites differentially reduce acquisition and expression of conditioned flavor preferences elicited by sugars or fats. One approach by which to determine whether these sites interacted as a distributed brain network in response to sugar or fat intake would be to simultaneous evaluate whether the VTA and its major mesotelencephalic DA projection zones (prelimbic and infralimbic mPFC, core and shell of the NAc, basolateral and central-cortico-medial AMY) as well as the dorsal striatum would display coordinated and simultaneous FLI activation after oral, unconditioned intake of corn oil (3.5%), glucose (8%), fructose (8%) and saccharin (0.2%) solutions. This approach is a successful first step in identifying the feasibility of using cellular c-fos activation simultaneously across relevant brain sites to study reward-related learning in ingestion of palatable food in rodents.
Fonoff, Erich Talamoni; Azevedo, Angelo; Angelos, Jairo Silva Dos; Martinez, Raquel Chacon Ruiz; Navarro, Jessie; Reis, Paul Rodrigo; Sepulveda, Miguel Ernesto San Martin; Cury, Rubens Gisbert; Ghilardi, Maria Gabriela Dos Santos; Teixeira, Manoel Jacobsen; Lopez, William Omar Contreras
2016-07-01
OBJECT Currently, bilateral procedures involve 2 sequential implants, one in each hemisphere. The present report demonstrates the feasibility of simultaneous bilateral procedures during the implantation of deep brain stimulation (DBS) leads. METHODS Fifty-seven patients with movement disorders underwent bilateral DBS implantation in the same study period. The authors compared the time required for the surgical implantation of deep brain electrodes in 2 randomly assigned groups. One group of 28 patients underwent traditional sequential electrode implantation, and the other 29 patients underwent simultaneous bilateral implantation. Clinical outcomes of the patients with Parkinson's disease (PD) who had undergone DBS implantation of the subthalamic nucleus using either of the 2 techniques were compared. RESULTS Overall, a reduction of 38.51% in total operating time for the simultaneous bilateral group (136.4 ± 20.93 minutes) as compared with that for the traditional consecutive approach (220.3 ± 27.58 minutes) was observed. Regarding clinical outcomes in the PD patients who underwent subthalamic nucleus DBS implantation, comparing the preoperative off-medication condition with the off-medication/on-stimulation condition 1 year after the surgery in both procedure groups, there was a mean 47.8% ± 9.5% improvement in the Unified Parkinson's Disease Rating Scale Part III (UPDRS-III) score in the simultaneous group, while the sequential group experienced 47.5% ± 15.8% improvement (p = 0.96). Moreover, a marked reduction in the levodopa-equivalent dose from preoperatively to postoperatively was similar in these 2 groups. The simultaneous bilateral procedure presented major advantages over the traditional sequential approach, with a shorter total operating time. CONCLUSIONS A simultaneous stereotactic approach significantly reduces the operation time in bilateral DBS procedures, resulting in decreased microrecording time, contributing to the optimization of functional stereotactic procedures.
Multiobjective synchronization of coupled systems
NASA Astrophysics Data System (ADS)
Tang, Yang; Wang, Zidong; Wong, W. K.; Kurths, Jürgen; Fang, Jian-an
2011-06-01
In this paper, multiobjective synchronization of chaotic systems is investigated by simultaneously optimizing control cost and convergence speed. The coupling form and coupling strength are optimized by an improved multiobjective evolutionary approach that includes a hybrid chromosome representation. The hybrid encoding scheme combines binary representation with real number representation. The constraints on the coupling form are also considered by converting the multiobjective synchronization into a multiobjective constraint problem. In addition, the performances of the adaptive learning method and non-dominated sorting genetic algorithm-II as well as the effectiveness and contributions of the proposed approach are analyzed and validated through the Rössler system in a chaotic or hyperchaotic regime and delayed chaotic neural networks.
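A minimal sketch of the hybrid chromosome idea described above, under assumed details: the binary part encodes which candidate coupling links are active and the real-valued part encodes their strengths, each with its own mutation operator. The objective function here is a placeholder; in the actual study the control cost and convergence behaviour would be evaluated by simulating the coupled systems.

```python
# Illustrative hybrid (binary + real) chromosome, not the authors' implementation.
import random
from dataclasses import dataclass, field

N_LINKS = 10  # assumed number of candidate coupling links

@dataclass
class HybridChromosome:
    topology: list = field(default_factory=lambda: [random.randint(0, 1) for _ in range(N_LINKS)])
    strengths: list = field(default_factory=lambda: [random.uniform(0.0, 1.0) for _ in range(N_LINKS)])

    def mutate(self, p_flip=0.1, sigma=0.05):
        # Bit-flip mutation on the binary part, clipped Gaussian perturbation on the real part.
        for i in range(N_LINKS):
            if random.random() < p_flip:
                self.topology[i] ^= 1
            self.strengths[i] = min(1.0, max(0.0, self.strengths[i] + random.gauss(0.0, sigma)))

def evaluate(ch):
    # Placeholder objectives (control cost, convergence time), both to be minimized;
    # in practice these would come from simulating the coupled chaotic systems.
    coupling = sum(g * s for g, s in zip(ch.topology, ch.strengths))
    return coupling, 1.0 / (1e-6 + coupling)

random.seed(1)
population = [HybridChromosome() for _ in range(20)]
print([evaluate(c) for c in population[:3]])
```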
Synaptic clustering within dendrites: an emerging theory of memory formation
Kastellakis, George; Cai, Denise J.; Mednick, Sara C.; Silva, Alcino J.; Poirazi, Panayiota
2015-01-01
It is generally accepted that complex memories are stored in distributed representations throughout the brain; however, the mechanisms underlying these representations are not understood. Here, we review recent findings regarding the subcellular mechanisms implicated in memory formation, which provide evidence for a dendrite-centered theory of memory. Plasticity-related phenomena which affect synaptic properties, such as synaptic tagging and capture, synaptic clustering, branch strength potentiation and spinogenesis, provide the foundation for a model of memory storage that relies heavily on processes operating at the dendrite level. The emerging picture suggests that clusters of functionally related synapses may serve as key computational and memory storage units in the brain. We discuss both experimental evidence and theoretical models that support this hypothesis and explore its advantages for neuronal function. PMID:25576663
Hosseinbor, A. Pasha; Chung, Moo K.; Schaefer, Stacey M.; van Reekum, Carien M.; Peschke-Schmitz, Lara; Sutterer, Matt; Alexander, Andrew L.; Davidson, Richard J.
2014-01-01
We present a novel surface parameterization technique using hyperspherical harmonics (HSH) for representing compact, multiple, disconnected brain subcortical structures as a single analytic function. The proposed hyperspherical harmonic representation (HyperSPHARM) has many advantages over the widely used spherical harmonic (SPHARM) parameterization technique. SPHARM requires flattening 3D surfaces onto a sphere, which can be time-consuming for large surface meshes, and it cannot represent multiple disconnected objects with a single parameterization. On the other hand, HyperSPHARM treats a 3D object, via a simple stereographic projection, as a surface of a 4D hypersphere with extremely large radius, hence avoiding the computationally demanding flattening process. HyperSPHARM is shown to achieve a better reconstruction with only 5 basis functions, compared with SPHARM, which requires more than 441. PMID:24505716
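For reference, the standard inverse stereographic projection that lifts a point of R^3 onto the unit 3-sphere is written below. This unit-radius form only illustrates the construction; the paper's parameterization uses a hypersphere of very large radius.

```latex
% Standard inverse stereographic projection of x = (x_1, x_2, x_3) in R^3
% onto the unit 3-sphere S^3 in R^4 (unit-radius illustration only).
\[
  \sigma^{-1}(x) \;=\;
  \frac{1}{\lVert x \rVert^{2} + 1}
  \bigl( 2x_{1},\; 2x_{2},\; 2x_{3},\; \lVert x \rVert^{2} - 1 \bigr)
  \in S^{3},
  \qquad \lVert \sigma^{-1}(x) \rVert = 1 .
\]
```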
Topographic mapping--the olfactory system.
Imai, Takeshi; Sakano, Hitoshi; Vosshall, Leslie B
2010-08-01
Sensory systems must map accurate representations of the external world in the brain. Although the physical senses of touch and vision build topographic representations of the spatial coordinates of the body and the field of view, the chemical sense of olfaction maps discontinuous features of chemical space, comprising an extremely large number of possible odor stimuli. In both mammals and insects, olfactory circuits are wired according to the convergence of axons from sensory neurons expressing the same odorant receptor. Synapses are organized into distinctive spherical neuropils--the olfactory glomeruli--that connect sensory input with output neurons and local modulatory interneurons. Although there is a strong conservation of form in the olfactory maps of mammals and insects, they arise using divergent mechanisms. Olfactory glomeruli provide a unique solution to the problem of mapping discontinuous chemical space onto the brain.
Neural representations of kinematic laws of motion: evidence for action-perception coupling.
Dayan, Eran; Casile, Antonino; Levit-Binnun, Nava; Giese, Martin A; Hendler, Talma; Flash, Tamar
2007-12-18
Behavioral and modeling studies have established that curved and drawing human hand movements obey the 2/3 power law, which dictates a strong coupling between movement curvature and velocity. Human motion perception seems to reflect this constraint. The functional MRI study reported here demonstrates that the brain's response to this law of motion is much stronger and more widespread than to other types of motion. Compliance with this law is reflected in the activation of a large network of brain areas subserving motor production, visual motion processing, and action observation functions. Hence, these results strongly support the notion of similar neural coding for motion perception and production. These findings suggest that cortical motion representations are optimally tuned to the kinematic and geometrical invariants characterizing biological actions.
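For clarity, the two-thirds power law referred to above can be written as follows, where A is angular velocity, C (equivalently the curvature kappa) is path curvature, v is tangential velocity and gamma is a velocity gain factor.

```latex
% The two-thirds power law: angular velocity A(t) scales with path curvature
% C(t) raised to the power 2/3; dividing by curvature gives the equivalent
% statement for tangential velocity v(t).
\[
  A(t) \;=\; \gamma\, C(t)^{2/3}
  \qquad\Longleftrightarrow\qquad
  v(t) \;=\; \gamma\, \kappa(t)^{-1/3},
\]
% where \gamma is a velocity gain factor and A = v\,\kappa links the two forms.
```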
Magnetic resonance brain tissue segmentation based on sparse representations
NASA Astrophysics Data System (ADS)
Rueda, Andrea
2015-12-01
Segmentation or delineation of specific organs and structures in medical images is an important task in clinical diagnosis and treatment, since it allows pathologies to be characterized through imaging measures (biomarkers). In brain imaging, segmentation of main tissues or specific structures is challenging, due to the anatomic variability and complexity, and the presence of image artifacts (noise, intensity inhomogeneities, partial volume effect). In this paper, an automatic segmentation strategy is proposed, based on sparse representations and coupled dictionaries. Image intensity patterns are singly related to tissue labels at the level of small patches, gathering this information in coupled intensity/segmentation dictionaries. These dictionaries are used within a sparse representation framework to find the projection of a new intensity image onto the intensity dictionary, and the same projection can be used with the segmentation dictionary to estimate the corresponding segmentation. Preliminary results obtained with two publicly available datasets suggest that the proposal is capable of estimating adequate segmentations for gray matter (GM) and white matter (WM) tissues, with an average overlapping of 0.79 for GM and 0.71 for WM (with respect to original segmentations).
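A minimal sketch of the coupled-dictionary idea, under assumed details (the simple OMP routine, patch dimension, atom counts and random dictionaries below are illustrative, not the paper's settings): a new intensity patch is sparse-coded on the intensity dictionary, and the same coefficients applied to the segmentation dictionary give the estimated label patch.

```python
# Illustrative coupled-dictionary sparse coding for patch-wise segmentation.
import numpy as np

def omp(D, y, n_nonzero=5):
    """Greedy orthogonal matching pursuit: sparse coefficients a with y ~ D @ a."""
    residual = y.copy()
    support = []
    a = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        j = int(np.argmax(np.abs(D.T @ residual)))   # most correlated atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    a[support] = coef
    return a

rng = np.random.default_rng(0)
patch_dim, n_atoms = 27, 200                     # e.g. 3x3x3 intensity patches (assumed)
D_int = rng.standard_normal((patch_dim, n_atoms))
D_int /= np.linalg.norm(D_int, axis=0)           # unit-norm intensity atoms
D_seg = rng.random((patch_dim, n_atoms))         # coupled label-probability atoms

y = rng.standard_normal(patch_dim)               # a new intensity patch
a = omp(D_int, y, n_nonzero=5)                   # projection onto the intensity dictionary
label_patch = D_seg @ a                          # same coefficients estimate the segmentation
print(label_patch.shape)
```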
Distributed representation of visual objects by single neurons in the human brain.
Valdez, André B; Papesh, Megan H; Treiman, David M; Smith, Kris A; Goldinger, Stephen D; Steinmetz, Peter N
2015-04-01
It remains unclear how single neurons in the human brain represent whole-object visual stimuli. While recordings in both human and nonhuman primates have shown distributed representations of objects (many neurons encoding multiple objects), recordings of single neurons in the human medial temporal lobe, taken as subjects discriminated objects during multiple presentations, have shown gnostic representations (single neurons encoding one object). Because some studies suggest that repeated viewing may enhance neural selectivity for objects, we had human subjects discriminate objects in a single, more naturalistic viewing session. We found that, across 432 well isolated neurons recorded in the hippocampus and amygdala, the average fraction of objects encoded was 26%. We also found that more neurons encoded several objects versus only one object in the hippocampus (28 vs 18%, p < 0.001) and in the amygdala (30 vs 19%, p < 0.001). Thus, during realistic viewing experiences, typical neurons in the human medial temporal lobe code for a considerable range of objects, across multiple semantic categories. Copyright © 2015 the authors 0270-6474/15/355180-07$15.00/0.
Maintenance and Representation of Mind Wandering during Resting-State fMRI.
Chou, Ying-Hui; Sundman, Mark; Whitson, Heather E; Gaur, Pooja; Chu, Mei-Lan; Weingarten, Carol P; Madden, David J; Wang, Lihong; Kirste, Imke; Joliot, Marc; Diaz, Michele T; Li, Yi-Ju; Song, Allen W; Chen, Nan-Kuei
2017-01-12
Major advances in resting-state functional magnetic resonance imaging (fMRI) techniques in the last two decades have provided a tool to better understand the functional organization of the brain both in health and illness. Despite such developments, characterizing regulation and cerebral representation of mind wandering, which occurs unavoidably during resting-state fMRI scans and may induce variability of the acquired data, remains a work in progress. Here, we demonstrate that a decrease or decoupling in functional connectivity involving the caudate nucleus, insula, medial prefrontal cortex and other domain-specific regions was associated with more sustained mind wandering in particular thought domains during resting-state fMRI. Importantly, our findings suggest that temporal and between-subject variations in functional connectivity of above-mentioned regions might be linked with the continuity of mind wandering. Our study not only provides a preliminary framework for characterizing the maintenance and cerebral representation of different types of mind wandering, but also highlights the importance of taking mind wandering into consideration when studying brain organization with resting-state fMRI in the future.
Computing with scale-invariant neural representations
NASA Astrophysics Data System (ADS)
Howard, Marc; Shankar, Karthik
The Weber-Fechner law is perhaps the oldest quantitative relationship in psychology. Consider the problem of the brain representing a function f(x). Different neurons have receptive fields that support different parts of the range, such that the ith neuron has a receptive field centered at x_i. Weber-Fechner scaling refers to the finding that the width of the receptive field scales with x_i, as does the difference between the centers of adjacent receptive fields. Weber-Fechner scaling is exponentially resource-conserving. Neurophysiological evidence suggests that neural representations obey Weber-Fechner scaling in the visual system and perhaps other systems as well. We describe an optimality constraint that is solved by Weber-Fechner scaling, providing an information-theoretic rationale for this principle of neural coding. Weber-Fechner scaling can be generated within a mathematical framework using the Laplace transform. Within this framework, simple computations such as translation, correlation and cross-correlation can be accomplished. This framework can in principle be extended to provide a general computational language for brain-inspired cognitive computation on scale-invariant representations. Supported by NSF PHY 1444389 and the BU Initiative for the Physics and Mathematics of Neural Systems.
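An illustrative formalization consistent with the framework sketched above (not a verbatim reproduction of the authors' equations): receptive-field centers follow Weber-Fechner scaling, and the representation can be generated by units computing a real-valued Laplace transform with rate constants sampled on the same logarithmic grid.

```latex
% Weber-Fechner scaling: geometric spacing of receptive-field centers,
% with width proportional to the center,
\[
  x_{i+1} = (1 + c)\, x_i , \qquad \Delta x_i \propto x_i ,
\]
% and a bank of units computing a real-valued Laplace transform of f(x),
% with rate constants s_i sampled on the corresponding logarithmic grid:
\[
  F(s) = \int_{0}^{\infty} f(x)\, e^{-s x}\, dx , \qquad s_i \propto \frac{1}{x_i} .
\]
```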
Know thyself: behavioral evidence for a structural representation of the human body.
Rusconi, Elena; Gonzaga, Mirandola; Adriani, Michela; Braun, Christoph; Haggard, Patrick
2009-01-01
Representing one's own body is often viewed as a basic form of self-awareness. However, little is known about structural representations of the body in the brain. We developed an inter-manual version of the classical "in-between" finger gnosis task: participants judged whether the number of untouched fingers between two touched fingers was the same on both hands, or different. We thereby dissociated structural knowledge about fingers, specifying their order and relative position within a hand, from tactile sensory codes. Judgments following stimulation on homologous fingers were consistently more accurate than trials with no or partial homology. Further experiments showed that structural representations are more enduring than purely sensory codes, are used even when number of fingers is irrelevant to the task, and moreover involve an allocentric representation of finger order, independent of hand posture. Our results suggest the existence of an allocentric representation of body structure at higher stages of the somatosensory processing pathway, in addition to primary sensory representation.
Functional Plasticity in the Absence of Structural Change.
Krasovsky, Tal; Landa, Jana; Bar, Orly; Jaana, Ahonniska-Assa; Livny, Abigail; Tsarfaty, Galia; Silberg, Tamar
2017-04-01
This work presents the case of a young woman (RS) with apraxia and a severe body scheme disorder, 10 years after a childhood frontal and occipitoparietal brain injury. Despite specific limitations, she is independent in performing all activities of daily living. A battery of tests was administered to evaluate praxis and body representations. Specifically, the Hand Laterality Test was used to compare RS's dynamic body representation to that of healthy controls (N = 14). Results demonstrated RS's severe praxis impairment, and the Hand Laterality Test revealed deficits in accuracy and latency of motor imagery, suggesting a significant impairment in dynamic body representation. However, semantic and structural body representations were intact. These results, coupled with frequent use of verbalizations as a strategy, suggest a possible ventral compensatory mechanism (top-down processing) for dorsal stream deficits, which may explain RS's remarkable recovery of activities of daily living. The link between praxis and dynamic body representation is discussed.
Tatu, Laurent; Bogousslavsky, Julien
2018-01-01
Body representation disorders continue to be mysterious and involve the anatomical substrate that underlies the mental representation of the body. These disorders sit on the boundaries of neurological and psychiatric diseases. We present the main characteristics of 3 examples of body representation disorders: phantom sensations, supernumerary phantom limb, and apotemnophilia. The dysfunction of anatomical circuits that regulate body representation can sometimes have paradoxical features. In the case of phantom sensations, the patient feels the painful subjective sensation of the existence of the lost part of the body after amputation, surgery or trauma. In the case of apotemnophilia, now named body integrity identity disorder, the subject wishes for the disappearance of the existing and normal limb, which can occasionally lead to self-amputation. More rarely, a brain-damaged patient with 4 existing limbs can report the existence of a supernumerary phantom limb. © 2018 S. Karger AG, Basel.
Sharpening of Hierarchical Visual Feature Representations of Blurred Images.
Abdelhack, Mohamed; Kamitani, Yukiyasu
2018-01-01
The robustness of the visual system lies in its ability to perceive degraded images. This is achieved through interacting bottom-up, recurrent, and top-down pathways that process the visual input in concordance with stored prior information. The interaction mechanism by which they integrate visual input and prior information is still enigmatic. We present a new approach using deep neural network (DNN) representation to reveal the effects of such integration on degraded visual inputs. We transformed measured human brain activity resulting from viewing blurred images to the hierarchical representation space derived from a feedforward DNN. Transformed representations were found to veer toward the original nonblurred image and away from the blurred stimulus image. This indicated deblurring or sharpening in the neural representation, and possibly in our perception. We anticipate these results will help unravel the interplay mechanism between bottom-up, recurrent, and top-down pathways, leading to more comprehensive models of vision.
A multimodal biometric authentication system based on 2D and 3D palmprint features
NASA Astrophysics Data System (ADS)
Aggithaya, Vivek K.; Zhang, David; Luo, Nan
2008-03-01
This paper presents a new personal authentication system that simultaneously exploits 2D and 3D palmprint features. Here, we aim to improve the accuracy and robustness of existing palmprint authentication systems using 3D palmprint features. The proposed system uses an active stereo technique, structured light, to capture a 3D image or range data of the palm and a registered intensity image simultaneously. A surface-curvature-based method is employed to extract features from the 3D palmprint, and a Gabor-feature-based competitive coding scheme is used for the 2D representation. We individually analyze these representations and attempt to combine them with a score-level fusion technique. Our experiments on a database of 108 subjects achieve significant improvement in performance (Equal Error Rate) with the integration of 3D features as compared to the case when 2D palmprint features alone are employed.
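A minimal sketch of score-level fusion as described above, with assumed details: each matcher's scores are min-max normalized and combined by a weighted sum. The weights, the decision threshold and the example scores below are illustrative, not values reported in the paper.

```python
# Illustrative score-level fusion of 2D and 3D palmprint matcher outputs.
import numpy as np

def min_max_normalize(scores):
    scores = np.asarray(scores, dtype=float)
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo) if hi > lo else np.zeros_like(scores)

def fuse_scores(scores_2d, scores_3d, w_2d=0.6, w_3d=0.4):
    """Weighted-sum fusion of normalized 2D and 3D match scores."""
    return w_2d * min_max_normalize(scores_2d) + w_3d * min_max_normalize(scores_3d)

# Usage: lower fused score = better match (assuming distance-type scores).
scores_2d = [0.12, 0.45, 0.80, 0.30]   # distances from the 2D competitive-code matcher
scores_3d = [0.20, 0.50, 0.70, 0.25]   # distances from the 3D curvature matcher
fused = fuse_scores(scores_2d, scores_3d)
accepted = fused < 0.35                 # illustrative decision threshold
print(fused, accepted)
```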
Fast Acquisition and Reconstruction of Optical Coherence Tomography Images via Sparse Representation
Li, Shutao; McNabb, Ryan P.; Nie, Qing; Kuo, Anthony N.; Toth, Cynthia A.; Izatt, Joseph A.; Farsiu, Sina
2014-01-01
In this paper, we present a novel technique, based on compressive sensing principles, for reconstruction and enhancement of multi-dimensional image data. Our method is a major improvement and generalization of the multi-scale sparsity based tomographic denoising (MSBTD) algorithm we recently introduced for reducing speckle noise. Our new technique exhibits several advantages over MSBTD, including its capability to simultaneously reduce noise and interpolate missing data. Unlike MSBTD, our new method does not require an a priori high-quality image from the target imaging subject and thus offers the potential to shorten clinical imaging sessions. This novel image restoration method, which we termed sparsity based simultaneous denoising and interpolation (SBSDI), utilizes sparse representation dictionaries constructed from previously collected datasets. We tested the SBSDI algorithm on retinal spectral domain optical coherence tomography images captured in the clinic. Experiments showed that the SBSDI algorithm qualitatively and quantitatively outperforms other state-of-the-art methods. PMID:23846467
The Radical Plasticity Thesis: How the Brain Learns to be Conscious
Cleeremans, Axel
2011-01-01
In this paper, I explore the idea that consciousness is something that the brain learns to do rather than an intrinsic property of certain neural states and not others. Starting from the idea that neural activity is inherently unconscious, the question thus becomes: How does the brain learn to be conscious? I suggest that consciousness arises as a result of the brain's continuous attempts at predicting not only the consequences of its actions on the world and on other agents, but also the consequences of activity in one cerebral region on activity in other regions. By this account, the brain continuously and unconsciously learns to redescribe its own activity to itself, so developing systems of meta-representations that characterize and qualify the target first-order representations. Such learned redescriptions, enriched by the emotional value associated with them, form the basis of conscious experience. Learning and plasticity are thus central to consciousness, to the extent that experiences only occur in experiencers that have learned to know they possess certain first-order states and that have learned to care more about certain states than about others. This is what I call the “Radical Plasticity Thesis.” In a sense thus, this is the enactive perspective, but turned both inwards and (further) outwards. Consciousness involves “signal detection on the mind”; the conscious mind is the brain's (non-conceptual, implicit) theory about itself. I illustrate these ideas through neural network models that simulate the relationships between performance and awareness in different tasks. PMID:21687455
de Borst, A W; de Gelder, B
2016-08-01
The neural basis of emotion perception has mostly been investigated with single face or body stimuli. However, in daily life one may also encounter affective expressions by groups, e.g. an angry mob or an exhilarated concert crowd. In what way is brain activity modulated when several individuals express similar rather than different emotions? We investigated this question using an experimental design in which we presented two stimuli simultaneously, with same or different emotional expressions. We hypothesized that, in the case of two same-emotion stimuli, brain activity would be enhanced, while in the case of two different emotions, one emotion would interfere with the effect of the other. The results showed that the simultaneous perception of different affective body expressions leads to a deactivation of the amygdala and a reduction of cortical activity. It was revealed that the processing of fearful bodies, compared with different-emotion bodies, relied more strongly on saliency and action triggering regions in inferior parietal lobe and insula, while happy bodies drove the occipito-temporal cortex more strongly. We showed that this design could be used to uncover important differences between brain networks underlying fearful and happy emotions. The enhancement of brain activity for unambiguous affective signals expressed by several people simultaneously supports adaptive behaviour in critical situations. © The Author (2016). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
Credit Assignment in Multiple Goal Embodied Visuomotor Behavior
Rothkopf, Constantin A.; Ballard, Dana H.
2010-01-01
The intrinsic complexity of the brain can lead one to set aside issues related to its relationships with the body, but the field of embodied cognition emphasizes that understanding brain function at the system level requires one to address the role of the brain-body interface. It has only recently been appreciated that this interface performs huge amounts of computation that does not have to be repeated by the brain, and thus affords the brain great simplifications in its representations. In effect the brain's abstract states can refer to coded representations of the world created by the body. But even if the brain can communicate with the world through abstractions, the severe speed limitations in its neural circuitry mean that vast amounts of indexing must be performed during development so that appropriate behavioral responses can be rapidly accessed. One way this could happen would be if the brain used a decomposition whereby behavioral primitives could be quickly accessed and combined. This realization motivates our study of independent sensorimotor task solvers, which we call modules, in directing behavior. The issue we focus on herein is how an embodied agent can learn to calibrate such individual visuomotor modules while pursuing multiple goals. The biologically plausible standard for module programming is that of reinforcement given during exploration of the environment. However this formulation contains a substantial issue when sensorimotor modules are used in combination: The credit for their overall performance must be divided amongst them. We show that this problem can be solved and that diverse task combinations are beneficial in learning and not a complication, as usually assumed. Our simulations show that fast algorithms are available that allot credit correctly and are insensitive to measurement noise. PMID:21833235
Representation learning via Dual-Autoencoder for recommendation.
Zhuang, Fuzhen; Zhang, Zhiqiang; Qian, Mingda; Shi, Chuan; Xie, Xing; He, Qing
2017-06-01
Recommendation has attracted a vast amount of attention and research in recent decades. Most previous works employ matrix factorization techniques to learn the latent factors of users and items, and many subsequent works consider external information, e.g., users' social relationships and items' attributes, to improve recommendation performance under the matrix factorization framework. However, matrix factorization methods may not make full use of the limited information in rating or check-in matrices and can achieve unsatisfactory results. Recently, deep learning has proven able to learn good representations in natural language processing, image classification, and so on. Along this line, we propose a new representation learning framework called Recommendation via Dual-Autoencoder (ReDa). In this framework, we simultaneously learn new hidden representations of users and items using autoencoders, and minimize the deviation between the training data and the predictions produced by the learnt representations of users and items. Based on this framework, we develop a gradient descent method to learn the hidden representations. Extensive experiments conducted on several real-world data sets demonstrate the effectiveness of our proposed method compared with state-of-the-art matrix factorization based methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
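The abstract describes ReDa only at a high level. As a rough, hedged illustration of the general idea (two autoencoders whose hidden codes must also reproduce the observed ratings), a minimal PyTorch sketch follows; the class name, latent size, losses and optimizer are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch of a dual-autoencoder recommender in the spirit of ReDa.
# Assumptions (not from the paper): squared-error losses, a shared latent
# dimension, and an Adam optimizer; the authors' exact objective may differ.
import torch
import torch.nn as nn

class DualAutoencoder(nn.Module):
    def __init__(self, n_items, n_users, hidden_dim=32):
        super().__init__()
        # User autoencoder: reconstructs each user's rating row.
        self.user_enc = nn.Linear(n_items, hidden_dim)
        self.user_dec = nn.Linear(hidden_dim, n_items)
        # Item autoencoder: reconstructs each item's rating column.
        self.item_enc = nn.Linear(n_users, hidden_dim)
        self.item_dec = nn.Linear(hidden_dim, n_users)

    def forward(self, R):
        u = torch.sigmoid(self.user_enc(R))        # user codes: n_users x h
        i = torch.sigmoid(self.item_enc(R.t()))    # item codes: n_items x h
        loss_user = ((self.user_dec(u) - R) ** 2).mean()
        loss_item = ((self.item_dec(i) - R.t()) ** 2).mean()
        # Deviation of observed ratings from the interaction of the codes.
        mask = (R > 0).float()
        loss_pred = (((u @ i.t()) - R) ** 2 * mask).sum() / mask.sum()
        return loss_user + loss_item + loss_pred

R = torch.rand(100, 50)                      # toy user-item rating matrix
model = DualAutoencoder(n_items=50, n_users=100)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = model(R)
    loss.backward()
    opt.step()
```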
NASA Astrophysics Data System (ADS)
Yanti, Y. R.; Amin, S. M.; Sulaiman, R.
2018-01-01
This study described the representations used by students with musical, logical-mathematical and naturalist intelligence in solving a problem. Subjects were selected on the basis of a multiple intelligence test consisting of 108 statements, with 102 statements adopted from Chislet and Chapman and 6 statements addressing existential intelligence. Data were analyzed on the basis of problem-solving tests (TPM) and interviews. To check the validity of the data, the problem-solving test and interview were administered twice and analyzed using representation indicators and the problem-solving steps. The results showed that at the stages of presenting the known information, devising a plan, and carrying out the plan, all three subjects used the same form of representation, whereas at the stages of presenting the requested information and looking back, the logical-mathematical subject used different forms of representation from the subjects with musical and naturalist intelligence. This research is expected to provide input to teachers in determining learning strategies that take into account the representations of students with different dominant intelligences.
Multichannel activity propagation across an engineered axon network
NASA Astrophysics Data System (ADS)
Chen, H. Isaac; Wolf, John A.; Smith, Douglas H.
2017-04-01
Objective. Although substantial progress has been made in mapping the connections of the brain, less is known about how this organization translates into brain function. In particular, the massive interconnectivity of the brain has made it difficult to specifically examine data transmission between two nodes of the connectome, a central component of the ‘neural code.’ Here, we investigated the propagation of multiple streams of asynchronous neuronal activity across an isolated in vitro ‘connectome unit.’ Approach. We used the novel technique of axon stretch growth to create a model of a long-range cortico-cortical network, a modular system consisting of paired nodes of cortical neurons connected by axon tracts. Using optical stimulation and multi-electrode array recording techniques, we explored how input patterns are represented by cortical networks, how these representations shift as they are transmitted between cortical nodes and perturbed by external conditions, and how well the downstream node distinguishes different patterns. Main results. Stimulus representations included direct, synaptic, and multiplexed responses that grew in complexity as the distance between the stimulation source and recorded neuron increased. These representations collapsed into patterns with lower information content at higher stimulation frequencies. With internodal activity propagation, a hierarchy of network pathways, including latent circuits, was revealed using glutamatergic blockade. As stimulus channels were added, divergent, non-linear effects were observed in local versus distant network layers. Pairwise difference analysis of neuronal responses suggested that neuronal ensembles generally outperformed individual cells in discriminating input patterns. Significance. Our data illuminate the complexity of spiking activity propagation in cortical networks in vitro, which is characterized by the transformation of an input into myriad outputs over several network layers. These results provide insight into how the brain potentially processes information and generates the neural code and could guide the development of clinical therapies based on multichannel brain stimulation.
Grossberg, Stephen; Srinivasan, Karthik; Yazdanbakhsh, Arash
2015-01-01
How does the brain maintain stable fusion of 3D scenes when the eyes move? Every eye movement causes each retinal position to process a different set of scenic features, and thus the brain needs to binocularly fuse new combinations of features at each position after an eye movement. Despite these breaks in retinotopic fusion due to each movement, previously fused representations of a scene in depth often appear stable. The 3D ARTSCAN neural model proposes how the brain does this by unifying concepts about how multiple cortical areas in the What and Where cortical streams interact to coordinate processes of 3D boundary and surface perception, spatial attention, invariant object category learning, predictive remapping, eye movement control, and learned coordinate transformations. The model explains data from single neuron and psychophysical studies of covert visual attention shifts prior to eye movements. The model further clarifies how perceptual, attentional, and cognitive interactions among multiple brain regions (LGN, V1, V2, V3A, V4, MT, MST, PPC, LIP, ITp, ITa, SC) may accomplish predictive remapping as part of the process whereby view-invariant object categories are learned. These results build upon earlier neural models of 3D vision and figure-ground separation and the learning of invariant object categories as the eyes freely scan a scene. A key process concerns how an object's surface representation generates a form-fitting distribution of spatial attention, or attentional shroud, in parietal cortex that helps maintain the stability of multiple perceptual and cognitive processes. Predictive eye movement signals maintain the stability of the shroud, as well as of binocularly fused perceptual boundaries and surface representations. PMID:25642198
A Subdivision-Based Representation for Vector Image Editing.
Liao, Zicheng; Hoppe, Hugues; Forsyth, David; Yu, Yizhou
2012-11-01
Vector graphics has been employed in a wide variety of applications due to its scalability and editability. Editability is a high priority for artists and designers who wish to produce vector-based graphical content with user interaction. In this paper, we introduce a new vector image representation based on piecewise smooth subdivision surfaces, which is a simple, unified and flexible framework that supports a variety of operations, including shape editing, color editing, image stylization, and vector image processing. These operations effectively create novel vector graphics by reusing and altering existing image vectorization results. Because image vectorization yields an abstraction of the original raster image, controlling the level of detail of this abstraction is highly desirable. To this end, we design a feature-oriented vector image pyramid that offers multiple levels of abstraction simultaneously. Our new vector image representation can be rasterized efficiently using GPU-accelerated subdivision. Experiments indicate that our vector image representation achieves high visual quality and better supports editing operations than existing representations.
Two spatial memories are not better than one: evidence of exclusivity in memory for object location.
Baguley, Thom; Lansdale, Mark W; Lines, Lorna K; Parkin, Jennifer K
2006-05-01
This paper studies the dynamics of attempting to access two spatial memories simultaneously and its implications for the accuracy of recall. Experiment 1 demonstrates in a range of conditions that two cues pointing to different experiences of the same object location produce little or no higher recall than that observed with a single cue. Experiment 2 confirms this finding in a within-subject design where both cues have previously elicited recall. Experiment 3 shows that these findings are only consistent with a model in which two representations of the same object location are mutually exclusive at both encoding and retrieval, and inconsistent with models that assume information from both representations is available. We propose that these representations quantify directionally specific judgments of location relative to specific anchor points in the stimulus; a format that precludes the parallel processing of like representations. Finally, we consider the apparent paradox of how such representations might contribute to the acquisition of spatial knowledge from multiple experiences of the same stimuli.
Multiscale wavelet representations for mammographic feature analysis
NASA Astrophysics Data System (ADS)
Laine, Andrew F.; Song, Shuwu
1992-12-01
This paper introduces a novel approach for accomplishing mammographic feature analysis through multiresolution representations. We show that efficient (nonredundant) representations may be identified from digital mammography and used to enhance specific mammographic features within a continuum of scale space. The multiresolution decomposition of wavelet transforms provides a natural hierarchy in which to embed an interactive paradigm for accomplishing scale space feature analysis. Choosing wavelets (or analyzing functions) that are simultaneously localized in both space and frequency results in a powerful methodology for image analysis. Multiresolution and orientation selectivity, known biological mechanisms in primate vision, are ingrained in wavelet representations and inspire the techniques presented in this paper. Our approach includes local analysis of complete multiscale representations. Mammograms are reconstructed from wavelet coefficients, enhanced by linear, exponential and constant weight functions localized in scale space. By improving the visualization of breast pathology we can improve the chances of early detection of breast cancers (improve quality) while requiring less time to evaluate mammograms for most patients (lower costs).
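The enhancement step described here (reweighting wavelet coefficients with weight functions localized in scale space) can be illustrated with a small PyWavelets sketch. The linear per-level gain below is an assumed stand-in for the paper's linear/exponential/constant weight functions, and the wavelet choice is arbitrary.

```python
# Sketch of multiscale enhancement by reweighting wavelet coefficients.
# Assumption: a simple linear gain per decomposition level stands in for the
# paper's linear/exponential/constant weight functions.
import numpy as np
import pywt

def enhance(image, wavelet="db4", levels=3, base_gain=1.5):
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    out = [coeffs[0]]  # keep the coarse approximation unchanged
    # coeffs[1] is the coarsest detail level, coeffs[-1] the finest.
    for k, (cH, cV, cD) in enumerate(coeffs[1:], start=1):
        gain = 1.0 + (base_gain - 1.0) * k / levels   # linear weight in scale
        out.append((cH * gain, cV * gain, cD * gain))
    return pywt.waverec2(out, wavelet)

mammogram = np.random.rand(256, 256)      # stand-in for a digitized film
enhanced = enhance(mammogram)
```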
What can we learn about beat perception by comparing brain signals and stimulus envelopes?
Henry, Molly J; Herrmann, Björn; Grahn, Jessica A
2017-01-01
Entrainment of neural oscillations on multiple time scales is important for the perception of speech. Musical rhythms, and in particular the perception of a regular beat in musical rhythms, is also likely to rely on entrainment of neural oscillations. One recently proposed approach to studying beat perception in the context of neural entrainment and resonance (the "frequency-tagging" approach) has received an enthusiastic response from the scientific community. A specific version of the approach involves comparing frequency-domain representations of acoustic rhythm stimuli to the frequency-domain representations of neural responses to those rhythms (measured by electroencephalography, EEG). The relative amplitudes at specific EEG frequencies are compared to the relative amplitudes at the same stimulus frequencies, and enhancements at beat-related frequencies in the EEG signal are interpreted as reflecting an internal representation of the beat. Here, we show that frequency-domain representations of rhythms are sensitive to the acoustic features of the tones making up the rhythms (tone duration, onset/offset ramp duration); in fact, relative amplitudes at beat-related frequencies can be completely reversed by manipulating tone acoustics. Crucially, we show that changes to these acoustic tone features, and in turn changes to the frequency-domain representations of rhythms, do not affect beat perception. Instead, beat perception depends on the pattern of onsets (i.e., whether a rhythm has a simple or complex metrical structure). Moreover, we show that beat perception can differ for rhythms that have numerically identical frequency-domain representations. Thus, frequency-domain representations of rhythms are dissociable from beat perception. For this reason, we suggest caution in interpreting direct comparisons of rhythms and brain signals in the frequency domain. Instead, we suggest that combining EEG measurements of neural signals with creative behavioral paradigms is of more benefit to our understanding of beat perception.
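The "frequency-tagging" comparison discussed above amounts to comparing spectral amplitudes of the stimulus envelope and of the EEG at beat-related frequencies. A toy sketch of that comparison is given below; the sampling rate, beat frequencies and signals are made up for illustration.

```python
# Sketch of the frequency-tagging comparison: spectral amplitude of a stimulus
# envelope vs. an EEG trace at beat-related frequencies. All values are toy
# examples, not the stimuli or frequencies used in the study.
import numpy as np

fs = 250.0                                  # Hz, assumed sampling rate
t = np.arange(0, 30, 1 / fs)                # 30 s of signal
beat_freqs = [1.25, 2.5, 5.0]               # Hz, assumed beat-related frequencies

def amplitude_at(signal, freqs, fs):
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    faxis = np.fft.rfftfreq(len(signal), d=1 / fs)
    # Amplitude at the FFT bin closest to each frequency of interest.
    return np.array([spectrum[np.argmin(np.abs(faxis - f))] for f in freqs])

envelope = (np.sin(2 * np.pi * 2.5 * t) > 0.8).astype(float)   # toy rhythm
eeg = envelope + 0.5 * np.random.randn(len(t))                 # toy response

print(amplitude_at(envelope, beat_freqs, fs))
print(amplitude_at(eeg, beat_freqs, fs))
```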
A prototype MR insertable brain PET using tileable GAPD arrays.
Hong, Key Jo; Choi, Yong; Jung, Jin Ho; Kang, Jihoon; Hu, Wei; Lim, Hyun Keong; Huh, Yoonsuk; Kim, Sangsu; Jung, Ji Woong; Kim, Kyu Bom; Song, Myung Sung; Park, Hyun-Wook
2013-04-01
The aim of this study was to develop a prototype magnetic resonance (MR)-compatible positron emission tomography (PET) that can be inserted into a MR imager and that allows simultaneous PET and MR imaging of the human brain. This paper reports the initial results of the authors' prototype brain PET system operating within a 3-T magnetic resonance imaging (MRI) system using newly developed Geiger-mode avalanche photodiode (GAPD)-based PET detectors, long flexible flat cables, position decoder circuit with high multiplexing ratio, and digital signal processing with field programmable gate array-based analog to digital converter boards. A brain PET with 72 detector modules arranged in a ring was constructed and mounted in a 3-T MRI. Each PET module was composed of cerium-doped lutetium yttrium orthosilicate (LYSO) crystals coupled to a tileable GAPD. The GAPD output charge signals were transferred to preamplifiers using 3 m long flat cables. The LYSO and GAPD were located inside the MR bore and all electronics were positioned outside the MR bore. The PET detector performance was investigated both outside and inside the MRI, and MR image quality was evaluated with and without the PET system. The performance of the PET detector when operated inside the MRI during MR image acquisition showed no significant change in energy resolution and count rates, except for a slight degradation in timing resolution with an increase from 4.2 to 4.6 ns. Simultaneous PET/MR images of a hot-rod and Hoffman brain phantom were acquired in a 3-T MRI. Rods down to a diameter of 3.5 mm were resolved in the hot-rod PET image. The activity distribution patterns between the white and gray matter in the Hoffman brain phantom were well imaged. The hot-rod and Hoffman brain phantoms on the simultaneously acquired MR images obtained with standard sequences were observed without any noticeable artifacts, although MR image quality requires some improvement. These results demonstrate that the simultaneous acquisition of PET and MR images is feasible using the MR insertable PET developed in this study.
Individual differences in simultaneous color constancy are related to working memory.
Allen, Elizabeth C; Beilock, Sian L; Shevell, Steven K
2012-02-01
Few studies have investigated the possible role of higher-level cognitive mechanisms in color constancy. Following up on previous work with successive color constancy [J. Exper. Psychol. Learn. Mem. Cogn. 37, 1014 (2011)], the current study examined the relation between simultaneous color constancy and working memory, the ability to maintain a desired representation while suppressing irrelevant information. Higher working memory was associated with poorer simultaneous color constancy of a chromatically complex stimulus. Ways in which the executive attention mechanism of working memory may account for this are discussed. This finding supports a role for higher-level cognitive mechanisms in color constancy and is the first to demonstrate a relation between simultaneous color constancy and a complex cognitive ability. © 2012 Optical Society of America
Brain science: from the very small to the very large.
Kreiman, Gabriel
2007-09-04
We still lack a clear understanding of how brain imaging signals relate to neuronal activity. Recent work shows that the simultaneous activity of neuronal ensembles strongly correlates with local field potentials and imaging measurements.
Working memory resources are shared across sensory modalities.
Salmela, V R; Moisala, M; Alho, K
2014-10-01
A common assumption in the working memory literature is that the visual and auditory modalities have separate and independent memory stores. Recent evidence on visual working memory has suggested that resources are shared between representations, and that the precision of representations sets the limit for memory performance. We tested whether memory resources are also shared across sensory modalities. Memory precision for two visual (spatial frequency and orientation) and two auditory (pitch and tone duration) features was measured separately for each feature and for all possible feature combinations. Thus, only the memory load was varied, from one to four features, while keeping the stimuli similar. In Experiment 1, two gratings and two tones, both containing two varying features, were presented simultaneously. In Experiment 2, two gratings and two tones, each containing only one varying feature, were presented sequentially. The memory precision (delayed discrimination threshold) for a single feature was close to the perceptual threshold. However, as the number of features to be remembered was increased, the discrimination thresholds increased more than twofold. Importantly, the decrease in memory precision did not depend on the modality of the other feature(s), or on whether the features were in the same or in separate objects. Hence, simultaneously storing one visual and one auditory feature had an effect on memory precision equal to those of simultaneously storing two visual or two auditory features. The results show that working memory is limited by the precision of the stored representations, and that working memory can be described as a resource pool that is shared across modalities.
Distributed Neural Processing Predictors of Multi-dimensional Properties of Affect
Bush, Keith A.; Inman, Cory S.; Hamann, Stephan; Kilts, Clinton D.; James, G. Andrew
2017-01-01
Recent evidence suggests that emotions have a distributed neural representation, which has significant implications for our understanding of the mechanisms underlying emotion regulation and dysregulation as well as the potential targets available for neuromodulation-based emotion therapeutics. This work adds to this evidence by testing the distribution of neural representations underlying the affective dimensions of valence and arousal using representational models that vary in both the degree and the nature of their distribution. We used multi-voxel pattern classification (MVPC) to identify whole-brain patterns of functional magnetic resonance imaging (fMRI)-derived neural activations that reliably predicted dimensional properties of affect (valence and arousal) for visual stimuli viewed by a normative sample (n = 32) of demographically diverse, healthy adults. Inter-subject leave-one-out cross-validation showed whole-brain MVPC significantly predicted (p < 0.001) binarized normative ratings of valence (positive vs. negative, 59% accuracy) and arousal (high vs. low, 56% accuracy). We also conducted group-level univariate general linear modeling (GLM) analyses to identify brain regions whose response significantly differed for the contrasts of positive versus negative valence or high versus low arousal. Multivoxel pattern classifiers using voxels drawn from all identified regions of interest (all-ROIs) exhibited mixed performance; arousal was predicted significantly better than chance but worse than the whole-brain classifier, whereas valence was not predicted significantly better than chance. Multivoxel classifiers derived using individual ROIs generally performed no better than chance. Although performance of the all-ROI classifier improved with larger ROIs (generated by relaxing the clustering threshold), performance was still poorer than the whole-brain classifier. These findings support a highly distributed model of neural processing for the affective dimensions of valence and arousal. Finally, joint error analyses of the MVPC hyperplanes encoding valence and arousal identified regions within the dimensional affect space where multivoxel classifiers exhibited the greatest difficulty encoding brain states – specifically, stimuli of moderate arousal and high or low valence. In conclusion, we highlight new directions for characterizing affective processing for mechanistic and therapeutic applications in affective neuroscience. PMID:28959198
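The inter-subject leave-one-out classification scheme described above can be approximated with standard tools. The sketch below uses synthetic data and a linear SVM; the classifier choice and data shapes are assumptions, since the abstract does not name them.

```python
# Sketch of inter-subject leave-one-out classification of binarized valence
# from whole-brain voxel patterns. Data are synthetic; the classifier choice
# (linear SVM) is an assumption, not necessarily the authors' method.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_subjects, trials_per_subject, n_voxels = 32, 20, 500
X = rng.standard_normal((n_subjects * trials_per_subject, n_voxels))
y = rng.integers(0, 2, size=len(X))          # 0 = negative, 1 = positive
groups = np.repeat(np.arange(n_subjects), trials_per_subject)

accs = []
for train, test in LeaveOneGroupOut().split(X, y, groups):
    clf = LinearSVC(dual=False).fit(X[train], y[train])
    accs.append(accuracy_score(y[test], clf.predict(X[test])))
print(f"mean leave-one-subject-out accuracy: {np.mean(accs):.2f}")
```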
Martynova, O; Portnova, G; Orlov, I
2016-01-01
According to psychological research, among the variety of pleasant and unpleasant stimuli, erotic images are evaluated as the most intense positive stimuli and the ones most strongly associated with emotional arousal. However, it is difficult to separate brain areas related to general emotional processing from those involved in neuronal representations of the reward system. The purpose of this study was to use functional magnetic resonance imaging (fMRI) to determine differences in brain activity in male subjects evaluating the intensity of pleasant images, including erotic ones, versus unpleasant and neutral pictures. When the condition involving evaluation of pleasant erotic images was compared with conditions containing neutral or unpleasant stimuli, significant activation was observed in the posterior cingulate cortex, the prefrontal cortex and the right globus pallidus. Increased activity of the right anterior central gyrus was observed in the conditions related to evaluation of pleasant and neutral stimuli. Thus, when the intensity of erotic emotional images is evaluated, the active brain areas are related not only to neuronal representations of emotions but also to motivation and the control of emotional arousal, which should be taken into account when erotic pictures are used as intense positive emotional stimuli.
Oscillatory networks of high-level mental alignment: A perspective-taking MEG study.
Seymour, R A; Wang, H; Rippon, G; Kessler, K
2018-08-15
Mentally imagining another's perspective is a high-level social process, reliant on manipulating internal representations of the self in an embodied manner. Recently Wang et al. (2016) showed that theta-band (3-7 Hz) brain oscillations within the right temporo-parietal junction (rTPJ) and brain regions coding for motor/body schema contribute to the process of perspective-taking. Using a similar paradigm, we set out to unravel the extended functional brain network in detail. Increasing the angle between self and other perspective was accompanied by longer reaction times and increases in theta power within rTPJ, right lateral prefrontal cortex (PFC) and right anterior cingulate cortex (ACC). Using Granger-causality, we showed that lateral PFC and ACC exert top-down influence over rTPJ, indicative of executive control processes required for managing conflicts between self and other perspectives. Finally, we quantified patterns of whole-brain phase coupling in relation to the rTPJ. Results suggest that rTPJ increases its theta-band phase synchrony with brain regions involved in mentalizing and regions coding for motor/body schema; whilst decreasing synchrony to visual regions. Implications for neurocognitive models are discussed, and it is proposed that rTPJ acts as a 'hub' to route bottom-up visual information to internal representations of the self during perspective-taking, co-ordinated by theta-band oscillations. Copyright © 2018 Elsevier Inc. All rights reserved.
Distributed value representation in the medial prefrontal cortex during intertemporal choices.
Wang, Qiang; Luo, Shan; Monterosso, John; Zhang, Jintao; Fang, Xiaoyi; Dong, Qi; Xue, Gui
2014-05-28
The ability to resist current temptations in favor of long-term benefits is a critical human capacity. Despite extensive studies on the neural mechanisms of intertemporal choices, how the subjective value of immediate and delayed rewards is represented and compared in the brain remains to be elucidated. The present fMRI study addressed this question by simultaneously and independently manipulating the magnitude of immediate and delayed rewards in an intertemporal decision task, combined with univariate analysis and multivoxel pattern analysis. We found that activities in the posterior portion of the dorsal medial prefrontal cortex (DmPFC) were modulated by the value of immediate options, whereas activities in the adjacent anterior DmPFC were modulated by the subjective value of delayed options. Brain signal change in the ventral mPFC was positively correlated with the "relative value" (the absolute difference in subjective value between the two intertemporal alternatives). In contrast, dorsal anterior cingulate cortex activity was negatively correlated with the relative value. These results suggest that immediate and delayed rewards are separately represented in the dorsal mPFC and compared in the ventral mPFC to guide decisions. The functional dissociation of the posterior and anterior DmPFC in representing immediate and delayed rewards is consistent with the general structural and functional architecture of the prefrontal cortex and may provide a neural basis for humans' unique capacity for delayed gratification. Copyright © 2014 the authors.
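As a small numerical illustration of the "relative value" regressor defined above (the absolute difference between the subjective values of the two options), the sketch below uses hyperbolic discounting, a standard model that the abstract itself does not specify.

```python
# Relative value as defined in the abstract: |SV_immediate - SV_delayed|.
# Hyperbolic discounting (SV = A / (1 + k*D)) is a standard assumption here,
# not a detail given in the abstract; amounts, delay and k are illustrative.
def subjective_value(amount, delay_days, k=0.01):
    return amount / (1.0 + k * delay_days)

sv_now = subjective_value(20.0, 0)        # immediate $20
sv_later = subjective_value(35.0, 60)     # $35 in 60 days
relative_value = abs(sv_now - sv_later)
print(sv_now, sv_later, relative_value)   # 20.0, 21.875, 1.875
```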
Vingerhoets, Guy; Vandekerckhove, Elisabeth; Honoré, Pieterjan; Vandemaele, Pieter; Achten, Eric
2011-06-01
This study aims to reveal the neural correlates of planning and executing tool use pantomimes and explores the brain's response to pantomiming the use of unfamiliar tools. Sixteen right-handed volunteers planned and executed pantomimes of equally graspable familiar and unfamiliar tools while undergoing fMRI. During the planning of these pantomimes, we found bilateral temporo-occipital and predominantly left hemispheric frontal and parietal activation. The execution of the pantomimes produced additional activation in frontal and sensorimotor regions. In the left posterior parietal region both familiar and unfamiliar tool pantomimes elicit peak activity in the anterior portion of the lateral bank of the intraparietal sulcus--A region associated with the representation of action goals. The cerebral activation during these pantomimes is remarkably similar for familiar and unfamiliar tools, and direct comparisons revealed only few differences. First, the left cuneus is significantly active during the planning of pantomimes of unfamiliar tools, reflecting increased visual processing of the novel objects. Second, executing (but not planning) familiar tool pantomimes showed significant activation on the convex portion of the inferior parietal lobule, a region believed to serve as a repository for skilled object-related gestures. Given the striking similarity in brain activation while pantomiming familiar and unfamiliar tools, we argue that normal subjects use both action semantics and function from structure inferences simultaneously and interactively to give rise to flexible object-to-goal directed behavior. Copyright © 2010 Wiley-Liss, Inc.
Audio-visual feedback improves the BCI performance in the navigational control of a humanoid robot
Tidoni, Emmanuele; Gergondet, Pierre; Kheddar, Abderrahmane; Aglioti, Salvatore M.
2014-01-01
Advances in brain-computer interface (BCI) technology allow people to actively interact with the world through surrogates. Controlling real humanoid robots using a BCI as intuitively as we control our own body represents a challenge for current research in robotics and neuroscience. In order to interact successfully with the environment, the brain integrates multiple sensory cues to form a coherent representation of the world. Cognitive neuroscience studies demonstrate that multisensory integration may yield a gain with respect to a single modality and ultimately improve overall sensorimotor performance. For example, reactivity to simultaneous visual and auditory stimuli may be higher than to the sum of the same stimuli delivered in isolation or in temporal sequence. Yet, knowledge about whether audio-visual integration may improve the control of a surrogate is meager. To explore this issue, we provided human footstep sounds as audio feedback to BCI users while they controlled a humanoid robot. Participants were asked to steer their robot surrogate and perform a pick-and-place task through BCI-SSVEPs. We found that audio-visual synchrony between the footstep sounds and the humanoid's actual walk reduces the time required to steer the robot. Thus, auditory feedback congruent with the humanoid's actions may improve the motor decisions of the BCI user and strengthen the feeling of control over the robot. Our results shed light on the possibility of increasing control over a robot by combining multisensory feedback for the BCI user. PMID:24987350
Neural representations of magnitude for natural and rational numbers.
DeWolf, Melissa; Chiang, Jeffrey N; Bassok, Miriam; Holyoak, Keith J; Monti, Martin M
2016-11-01
Humans have developed multiple symbolic representations for numbers, including natural numbers (positive integers) as well as rational numbers (both fractions and decimals). Despite a considerable body of behavioral and neuroimaging research, it is currently unknown whether different notations map onto a single, fully abstract, magnitude code, or whether separate representations exist for specific number types (e.g., natural versus rational) or number representations (e.g., base-10 versus fractions). We address this question by comparing brain metabolic response during a magnitude comparison task involving (on different trials) integers, decimals, and fractions. Univariate and multivariate analyses revealed that the strength and pattern of activation for fractions differed systematically, within the intraparietal sulcus, from that of both decimals and integers, while the latter two number representations appeared virtually indistinguishable. These results demonstrate that the two major notation formats for rational numbers, fractions and decimals, evoke distinct neural representations of magnitude, with decimal representations being more closely linked to those of integers than to those of magnitude-equivalent fractions. Our findings thus suggest that number representation (base-10 versus fractions) is an important organizational principle for the neural substrate underlying mathematical cognition. Copyright © 2016 Elsevier Inc. All rights reserved.
Kurashige, Hiroki; Yamashita, Yuichi; Hanakawa, Takashi; Honda, Manabu
2018-01-01
Knowledge acquisition is a process in which one actively selects a piece of information from the environment and assimilates it with prior knowledge. However, little is known about the neural mechanism underlying selectivity in knowledge acquisition. Here we executed a 2-day human experiment to investigate the involvement of characteristic spontaneous activity resembling a so-called "preplay" in selectivity in sentence comprehension, an instance of knowledge acquisition. On day 1, we presented 10 sentences (prior sentences) that were difficult to understand on their own. On the following day, we first measured the resting-state functional magnetic resonance imaging (fMRI). Then, we administered a sentence comprehension task using 20 new sentences (posterior sentences). The posterior sentences were also difficult to understand on their own, but some could be associated with prior sentences to facilitate their understanding. Next, we measured the posterior sentence-induced fMRI to identify the neural representation. From the resting-state fMRI, we extracted the appearances of activity patterns similar to the neural representations for posterior sentences. Importantly, the resting-state fMRI was measured before giving the posterior sentences, and thus such appearances could be considered as preplay-like or prototypical neural representations. We compared the intensities of such appearances with the understanding of posterior sentences. This gave a positive correlation between these two variables, but only if posterior sentences were associated with prior sentences. Additional analysis showed the contribution of the entorhinal cortex, rather than the hippocampus, to the correlation. The present study suggests that prior knowledge-based arrangement of neural activity before an experience contributes to the active selection of information to be learned. Such arrangement prior to an experience resembles preplay activity observed in the rodent brain. In terms of knowledge acquisition, the present study leads to a new view of the brain (or more precisely of the brain's knowledge) as an autopoietic system in which the brain (or knowledge) selects what it should learn by itself, arranges preplay-like activity as a position for the new information in advance, and actively reorganizes itself.
Ipser, Alberta; Karlinski, Maayan; Freeman, Elliot D
2018-05-07
Sight and sound are out of synch in different people by different amounts for different tasks. But surprisingly, different concurrent measures of perceptual asynchrony correlate negatively (Freeman et al., 2013). Thus, if vision subjectively leads audition in one individual, the same individual might show a visual lag in other measures of audiovisual integration (e.g., McGurk illusion, Stream-Bounce illusion). This curious negative correlation was first observed between explicit temporal order judgments and implicit phoneme identification tasks, performed concurrently as a dual task, using incongruent McGurk stimuli. Here we used a new set of explicit and implicit tasks and congruent stimuli, to test whether this negative correlation persists across testing sessions, and whether it might be an artifact of using specific incongruent stimuli. None of these manipulations eliminated the negative correlation between explicit and implicit measures. This supports the generalizability and validity of the phenomenon, and offers new theoretical insights into its explanation. Our previously proposed "temporal renormalization" theory assumes that the timings of sensory events registered within the brain's different multimodal subnetworks are each perceived relative to a representation of the typical average timing of such events across the wider network. Our new data suggest that this representation is stable and generic, rather than dependent on specific stimuli or task contexts, and that it may be acquired through experience with a variety of simultaneous stimuli. Our results also add further evidence that speech comprehension may be improved in some individuals by artificially delaying voices relative to lip-movements. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Connaughton, Veronica M; Amiruddin, Azhani; Clunies-Ross, Karen L; French, Noel; Fox, Allison M
2017-05-01
A major model of the cerebral circuits that underpin arithmetic calculation is the triple-code model of numerical processing. This model proposes that the lateralization of mathematical operations is organized across three circuits: a left-hemispheric dominant verbal code; a bilateral magnitude representation of numbers and a bilateral Arabic number code. This study simultaneously measured the blood flow of both middle cerebral arteries using functional transcranial Doppler ultrasonography to assess hemispheric specialization during the performance of both language and arithmetic tasks. The propositions of the triple-code model were assessed in a non-clinical adult group by measuring cerebral blood flow during the performance of multiplication and subtraction problems. Participants were 17 adults aged between 18-27 years. We obtained laterality indices for each type of mathematical operation and compared these in participants with left-hemispheric language dominance. It was hypothesized that blood flow would lateralize to the left hemisphere during the performance of multiplication operations, but would not lateralize during the performance of subtraction operations. Hemispheric blood flow was significantly left lateralized during the multiplication task, but was not lateralized during the subtraction task. Compared to high spatial resolution neuroimaging techniques previously used to measure cerebral lateralization, functional transcranial Doppler ultrasonography is a cost-effective measure that provides a superior temporal representation of arithmetic cognition. These results provide support for the triple-code model of arithmetic processing and offer complementary evidence that multiplication operations are processed differently in the adult brain compared to subtraction operations. Copyright © 2017 Elsevier B.V. All rights reserved.
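Functional TCD lateralization is commonly summarized with a laterality index computed from the left-minus-right difference in event-related blood-flow velocity between the two middle cerebral arteries. The convention sketched below is only illustrative and not necessarily the exact formula or activation window used by the authors.

```python
# A common laterality-index convention for functional TCD: the mean
# left-minus-right velocity difference over an activation window.
# Values and window are made up; the authors' exact pipeline may differ.
import numpy as np

def laterality_index(left_v, right_v):
    diff = np.asarray(left_v) - np.asarray(right_v)
    return diff.mean()                      # > 0 suggests left lateralization

left_mca = np.array([3.1, 2.8, 3.4, 3.0])   # % velocity change, multiplication task
right_mca = np.array([1.0, 1.2, 0.9, 1.1])
print(laterality_index(left_mca, right_mca))  # positive => left-lateralized
```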
Varlet, Manuel; Novembre, Giacomo; Keller, Peter E
2017-06-01
Spontaneous modulations of corticospinal excitability during action observation have been interpreted as evidence for the activation of internal motor representations equivalent to the observed action. Alternatively or complementary to this perspective, growing evidence shows that motor activity during observation of rhythmic movements can be modulated by direct visuomotor couplings and dynamical entrainment. In-phase and anti-phase entrainment spontaneously occur, characterized by cyclic movements proceeding simultaneously in the same (in-phase) or opposite (anti-phase) direction. Here we investigate corticospinal excitability during the observation of vertical oscillations of an index finger using Transcranial Magnetic Stimulation (TMS). Motor-evoked potentials (MEPs) were recorded from participants' flexor and extensor muscles of the right index finger, placed in either a maximal steady flexion or extension position, with stimulations delivered at maximal flexion, maximal extension or mid-trajectory of the observed finger oscillations. Consistent with the occurrence of dynamical motor entrainment, increased and decreased MEP responses - suggesting the facilitation of stable in-phase and anti-phase relations but not an unstable 90° phase relation - were found in participants' flexors. Anti-phase motor facilitation contrasts with the activation of internal motor representation as it involves activity in the motor system opposite from activity required for the execution of the observed movement. These findings demonstrate the relevance of dynamical entrainment theories and methods for understanding spontaneous motor activity in the brain during action observation and the mechanisms underpinning coordinated movements during social interaction. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Hirai, Yasuharu; Nishino, Eri; Ohmori, Harunori
2015-06-01
Despite its widespread use, high-resolution imaging with multiphoton microscopy to record neuronal signals in vivo is limited to the surface of brain tissue because of limited light penetration. Moreover, most imaging studies do not simultaneously record electrical neural activity, which is, however, crucial to understanding brain function. Accordingly, we developed a photometric patch electrode (PME) to overcome the depth limitation of optical measurements and also enable the simultaneous recording of neural electrical responses in deep brain regions. The PME recording system uses a patch electrode to excite a fluorescent dye and to measure the fluorescence signal as a light guide, to record the electrical signal, and to apply chemicals to the recorded cells locally. The optical signal was analyzed by either a spectrometer of high light sensitivity or a photomultiplier tube depending on the kinetics of the responses. We used the PME in Oregon Green BAPTA-1 AM-loaded avian auditory nuclei in vivo to monitor calcium signals and electrical responses. We demonstrated distinct response patterns in three different nuclei of the ascending auditory pathway. On acoustic stimulation, a robust calcium fluorescence response occurred in auditory cortex (field L) neurons that outlasted the electrical response. In the auditory midbrain (inferior colliculus), both responses were transient. In the brain-stem cochlear nucleus magnocellularis, the calcium response seemed to be effectively suppressed by the activity of metabotropic glutamate receptors. In conclusion, the PME provides a powerful tool to study brain function in vivo at a tissue depth inaccessible to conventional imaging devices. Copyright © 2015 the American Physiological Society. PMID:25761950
Supercomputer algorithms for efficient linear octree encoding of three-dimensional brain images.
Berger, S B; Reis, D J
1995-02-01
We designed and implemented algorithms for three-dimensional (3-D) reconstruction of brain images from serial sections using two important supercomputer architectures, vector and parallel. These architectures were represented by the Cray YMP and Connection Machine CM-2, respectively. The programs operated on linear octree representations of the brain data sets, and achieved 500-800 times acceleration when compared with a conventional laboratory workstation. As the need for higher resolution data sets increases, supercomputer algorithms may offer a means of performing 3-D reconstruction well above current experimental limits.
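As an illustration of the underlying data structure only (not of the paper's vector or parallel supercomputer algorithms), a linear octree stores one location code per homogeneous occupied block of a recursively subdivided volume, as in this small recursive sketch:

```python
# Minimal linear-octree encoder: recursively subdivide a cubic binary volume
# and keep one location code per homogeneous occupied block. The volume and
# code scheme are illustrative assumptions, not the paper's implementation.
import numpy as np

def encode(vol, x=0, y=0, z=0, size=None, code=""):
    size = vol.shape[0] if size is None else size
    block = vol[x:x + size, y:y + size, z:z + size]
    if not block.any():
        return []                       # empty block: store nothing
    if block.all() or size == 1:
        return [code or "0"]            # homogeneous occupied block: one code
    h = size // 2
    nodes = []
    for i, (dx, dy, dz) in enumerate([(a, b, c) for a in (0, h)
                                      for b in (0, h) for c in (0, h)]):
        nodes += encode(vol, x + dx, y + dy, z + dz, h, code + str(i))
    return nodes

brain = np.zeros((8, 8, 8), dtype=bool)
brain[2:6, 2:6, 2:6] = True             # toy "brain" occupancy
print(encode(brain))                    # octal-digit location codes of leaves
```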
Simultaneous total cavopulmonary connection and cardiac re-synchronisation therapy.
Nakanishi, Keisuke; Kawasaki, Shiori; Amano, Atsushi
2017-08-01
We report the simultaneous use of cardiac re-synchronisation therapy and total cavopulmonary connection in a patient with dyssynchrony, wide QRS, and cardiac failure. To our knowledge, this simultaneous approach has not been reported previously. On follow-up, we noted that QRS width and brain natriuretic peptide levels improved. In addition, speckle tracking revealed improved synchronisation of ventricular wall motion.
SPHERE: SPherical Harmonic Elastic REgistration of HARDI Data
Yap, Pew-Thian; Chen, Yasheng; An, Hongyu; Yang, Yang; Gilmore, John H.; Lin, Weili
2010-01-01
In contrast to the more common Diffusion Tensor Imaging (DTI), High Angular Resolution Diffusion Imaging (HARDI) allows superior delineation of angular microstructures of brain white matter, and makes possible multiple-fiber modeling of each voxel for better characterization of brain connectivity. However, the complex orientation information afforded by HARDI makes registration of HARDI images more complicated than scalar images. In particular, the question of how much orientation information is needed for satisfactory alignment has not been sufficiently addressed. Low order orientation representation is generally more robust than high order representation, although the latter provides more information for correct alignment of fiber pathways. However, high order representation, when naïvely utilized, might not necessarily be conducive to improving registration accuracy since similar structures with significant orientation differences prior to proper alignment might be mistakenly taken as non-matching structures. We present in this paper a HARDI registration algorithm, called SPherical Harmonic Elastic REgistration (SPHERE), which in a principled means hierarchically extracts orientation information from HARDI data for structural alignment. The image volumes are first registered using robust, relatively direction invariant features derived from the Orientation Distribution Function (ODF), and the alignment is then further refined using spherical harmonic (SH) representation with gradually increasing orders. This progression from non-directional, single-directional to multi-directional representation provides a systematic means of extracting directional information given by diffusion-weighted imaging. Coupled with a template-subject-consistent soft-correspondence-matching scheme, this approach allows robust and accurate alignment of HARDI data. Experimental results show marked increase in accuracy over a state-of-the-art DTI registration algorithm. PMID:21147231
Vicario, David S.
2017-01-01
Sensory and motor brain structures work in collaboration during perception. To evaluate their respective contributions, the present study recorded neural responses to auditory stimulation at multiple sites simultaneously in both the higher-order auditory area NCM and the premotor area HVC of the songbird brain in awake zebra finches (Taeniopygia guttata). Bird’s own song (BOS) and various conspecific songs (CON) were presented in both blocked and shuffled sequences. Neural responses showed plasticity in the form of stimulus-specific adaptation, with markedly different dynamics between the two structures. In NCM, the response decrease with repetition of each stimulus was gradual and long-lasting and did not differ between the stimuli or the stimulus presentation sequences. In contrast, HVC responses to CON stimuli decreased much more rapidly in the blocked than in the shuffled sequence. Furthermore, this decrease was more transient in HVC than in NCM, as shown by differential dynamics in the shuffled sequence. Responses to BOS in HVC decreased more gradually than to CON stimuli. The quality of neural representations, computed as the mutual information between stimuli and neural activity, was higher in NCM than in HVC. Conversely, internal functional correlations, estimated as the coherence between recording sites, were greater in HVC than in NCM. The cross-coherence between the two structures was weak and limited to low frequencies. These findings suggest that auditory communication signals are processed according to very different but complementary principles in NCM and HVC, a contrast that may inform study of the auditory and motor pathways for human speech processing. NEW & NOTEWORTHY Neural responses to auditory stimulation in sensory area NCM and premotor area HVC of the songbird forebrain show plasticity in the form of stimulus-specific adaptation with markedly different dynamics. These two structures also differ in stimulus representations and internal functional correlations. Accordingly, NCM seems to process the individually specific complex vocalizations of others based on prior familiarity, while HVC responses appear to be modulated by transitions and/or timing in the ongoing sequence of sounds. PMID:28031398