The inverse electroencephalography pipeline
NASA Astrophysics Data System (ADS)
Weinstein, David Michael
The inverse electroencephalography (EEG) problem is defined as determining which regions of the brain are active based on remote measurements recorded with scalp EEG electrodes. An accurate solution to this problem would benefit both fundamental neuroscience research and clinical neuroscience applications. However, constructing accurate patient-specific inverse EEG solutions requires complex modeling, simulation, and visualization algorithms, and to date only a few systems have been developed that provide such capabilities. In this dissertation, a computational system for generating and investigating patient-specific inverse EEG solutions is introduced, and the requirements for each stage of this Inverse EEG Pipeline are defined and discussed. While the requirements of many of the stages are satisfied with existing algorithms, others have motivated research into novel modeling and simulation methods. The principal technical results of this work include novel surface-based volume modeling techniques, an efficient construction for the EEG lead field, and the Open Source release of the Inverse EEG Pipeline software for use by the bioelectric field research community. In this work, the Inverse EEG Pipeline is applied to three research problems in neurology: comparing focal and distributed source imaging algorithms; separating measurements into independent activation components for multifocal epilepsy; and localizing the cortical activity that produces the P300 effect in schizophrenia.
A physiologically motivated sparse, compact, and smooth (SCS) approach to EEG source localization.
Cao, Cheng; Akalin Acar, Zeynep; Kreutz-Delgado, Kenneth; Makeig, Scott
2012-01-01
Here, we introduce a novel approach to the EEG inverse problem based on the assumption that the principal cortical sources of multi-channel EEG recordings are spatially sparse, compact, and smooth (SCS). To enforce these characteristics in solutions to the EEG inverse problem, we propose a correlation-variance model that factors a cortical source space covariance matrix into the product of a pre-given correlation coefficient matrix and the square root of the diagonal variance matrix learned from the data under a Bayesian learning framework. We tested the SCS method using simulated EEG data with various SNRs and applied it to a real ECoG data set. We compare the results of SCS to those of an established sparse Bayesian learning (SBL) algorithm.
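A minimal numerical sketch of the correlation-variance factorization described above is given below; the distance-based correlation matrix, source count and unit initial variances are invented assumptions, not the authors' settings, and the Bayesian variance updates themselves are omitted.

# Sketch of the correlation-variance source covariance model: the covariance is
# factored as C = sqrt(D) * R * sqrt(D), with R a pre-given spatial correlation
# matrix and D a diagonal variance matrix (here just initialized, not learned).
# All names and sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_src = 200                               # number of cortical source locations

# Pre-given correlation matrix R: a simple distance-based falloff (assumed).
pos = rng.uniform(size=(n_src, 3))        # placeholder source positions
d = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
R = np.exp(-(d / 0.1) ** 2)               # smooth, compact spatial correlation

# Diagonal variances (learned by Bayesian updates in the SCS method).
v = np.ones(n_src)
D_sqrt = np.diag(np.sqrt(v))

C = D_sqrt @ R @ D_sqrt                   # source-space covariance matrix
print(C.shape)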
Hybrid Weighted Minimum Norm Method: a new LORETA-based method to solve the EEG inverse problem.
Song, C; Zhuang, T; Wu, Q
2005-01-01
This paper puts forward a new method to solve the EEG inverse problem. It rests on the following physiological characteristics of neural electrical activity: first, neighboring neurons tend to activate synchronously; second, the distribution of sources in source space is sparse; third, the activity of the sources is highly concentrated. We take this prior knowledge as the only prerequisite for developing the inverse solution, assuming no other characteristics of the solution, in order to realize the most common 3D EEG reconstruction map. The proposed algorithm combines the advantages of LORETA, a low-resolution method that emphasizes 'localization', and FOCUSS, a high-resolution method that emphasizes 'separability'. The method remains within the framework of the weighted minimum norm approach. The key step is to construct a weighting matrix that draws on existing smoothness operators, a competition mechanism, and a learning algorithm. The basic procedure is to obtain an initial estimate of the solution, construct a new estimate using information from the initial solution, and repeat this process until the solutions of the last two iterations remain unchanged.
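The iterative re-estimation loop described above follows the general pattern of FOCUSS-style reweighted weighted-minimum-norm solvers. A compact sketch of that pattern follows; the lead field, data, regularization value and stopping rule are synthetic assumptions rather than the paper's implementation.

# FOCUSS-style iteratively reweighted minimum-norm sketch (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
n_elec, n_src = 32, 500
K = rng.standard_normal((n_elec, n_src))            # lead-field matrix (assumed)
j_true = np.zeros(n_src)
j_true[[40, 41, 300]] = [2.0, 1.5, -1.0]            # sparse, clustered sources
v = K @ j_true + 0.01 * rng.standard_normal(n_elec) # scalp potentials

w = np.ones(n_src)                                  # initial weights
lam = 1e-2                                          # regularization (assumed)
for it in range(20):
    W2 = np.diag(w ** 2)
    # Weighted minimum-norm solution: j = W^2 K^T (K W^2 K^T + lam I)^-1 v
    G = K @ W2 @ K.T + lam * np.eye(n_elec)
    j = W2 @ K.T @ np.linalg.solve(G, v)
    w_new = np.abs(j)                               # reweight from the estimate
    if np.linalg.norm(w_new - w) < 1e-8 * np.linalg.norm(w):
        break                                       # solution no longer changes
    w = w_new

print("largest reconstructed sources:", np.argsort(-np.abs(j))[:3])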
Mixed-norm estimates for the M/EEG inverse problem using accelerated gradient methods.
Gramfort, Alexandre; Kowalski, Matthieu; Hämäläinen, Matti
2012-04-07
Magneto- and electroencephalography (M/EEG) measure the electromagnetic fields produced by the neural electrical currents. Given a conductor model for the head, and the distribution of source currents in the brain, Maxwell's equations allow one to compute the ensuing M/EEG signals. Given the actual M/EEG measurements and the solution of this forward problem, one can localize, in space and in time, the brain regions that have produced the recorded data. However, due to the physics of the problem, the limited number of sensors compared to the number of possible source locations, and measurement noise, this inverse problem is ill-posed. Consequently, additional constraints are needed. Classical inverse solvers, often called minimum norm estimates (MNE), promote source estimates with a small ℓ₂ norm. Here, we consider a more general class of priors based on mixed norms. Such norms have the ability to structure the prior in order to incorporate some additional assumptions about the sources. We refer to such solvers as mixed-norm estimates (MxNE). In the context of M/EEG, MxNE can promote spatially focal sources with smooth temporal estimates with a two-level ℓ₁/ℓ₂ mixed-norm, while a three-level mixed-norm can be used to promote spatially non-overlapping sources between different experimental conditions. In order to efficiently solve the optimization problems of MxNE, we introduce fast first-order iterative schemes that for the ℓ₁/ℓ₂ norm give solutions in a few seconds making such a prior as convenient as the simple MNE. Furthermore, thanks to the convexity of the optimization problem, we can provide optimality conditions that guarantee global convergence. The utility of the methods is demonstrated both with simulations and experimental MEG data.
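The two-level ℓ₁/ℓ₂ prior is handled in MxNE with fast first-order proximal iterations. The toy sketch below uses plain ISTA with a row-wise ℓ₂₁ proximal operator to convey the idea; the gain matrix, regularization value and iteration count are assumptions, and a production implementation is available in the MNE-Python package.

# Proximal-gradient (ISTA-type) iteration with an l21 mixed-norm penalty.
import numpy as np

rng = np.random.default_rng(2)
n_sens, n_src, n_times = 50, 300, 40
G = rng.standard_normal((n_sens, n_src))            # gain (forward) matrix
X_true = np.zeros((n_src, n_times))
X_true[10] = np.sin(np.linspace(0, 3 * np.pi, n_times))
M = G @ X_true + 0.01 * rng.standard_normal((n_sens, n_times))

def prox_l21(X, tau):
    """Row-wise group soft-thresholding (proximal operator of tau*||X||_21)."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return X * scale

lam = 0.5                                           # regularization (assumed)
L = np.linalg.norm(G, 2) ** 2                       # Lipschitz constant of grad
X = np.zeros((n_src, n_times))
for it in range(200):
    grad = G.T @ (G @ X - M)
    X = prox_l21(X - grad / L, lam / L)

print("active sources:", np.flatnonzero(np.linalg.norm(X, axis=1) > 1e-6))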
Effects of adaptive refinement on the inverse EEG solution
NASA Astrophysics Data System (ADS)
Weinstein, David M.; Johnson, Christopher R.; Schmidt, John A.
1995-10-01
One of the fundamental problems in electroencephalography can be characterized by an inverse problem. Given a subset of electrostatic potentials measured on the surface of the scalp and the geometry and conductivity properties within the head, calculate the current vectors and potential fields within the cerebrum. Mathematically, the generalized EEG problem can be stated as solving Poisson's equation of electrical conduction for the primary current sources. The resulting problem is mathematically ill-posed, i.e., the solution does not depend continuously on the data, such that small errors in the measurement of the voltages on the scalp can yield unbounded errors in the solution, and, for the general treatment of a solution of Poisson's equation, the solution is non-unique. However, if accurate solutions to such problems could be obtained, neurologists would gain noninvasive access to patient-specific cortical activity. Access to such data would ultimately increase the number of patients who could be effectively treated for pathological cortical conditions such as temporal lobe epilepsy. In this paper, we present the effects of spatial adaptive refinement on the inverse EEG problem and show that the use of adaptive methods allows for significantly better estimates of electric and potential fields within the brain through an inverse procedure. To test these methods, we have constructed several finite element head models from magnetic resonance images of a patient. The finite element meshes ranged in size from 2724 nodes and 12,812 elements to 5224 nodes and 29,135 tetrahedral elements, depending on the level of discretization. We show that an adaptive meshing algorithm minimizes the error in the forward problem due to spatial discretization and thus increases the accuracy of the inverse solution.
Spatio-temporal reconstruction of brain dynamics from EEG with a Markov prior.
Hansen, Sofie Therese; Hansen, Lars Kai
2017-03-01
Electroencephalography (EEG) can capture brain dynamics with high temporal resolution. By projecting the scalp EEG signal back to its origins in the brain, high spatial resolution can also be achieved. Source-localized EEG therefore has the potential to be a very powerful tool for understanding the functional dynamics of the brain. Solving the inverse problem of EEG is, however, highly ill-posed, as there are many more potential locations of the EEG generators than EEG measurement points. Several well-known properties of brain dynamics can be exploited to alleviate this problem. More short-range connections exist in the brain than long-range ones, arguing for spatially focal sources. Additionally, recent work (Delorme et al., 2012) argues that EEG can be decomposed into components having sparse source distributions. On the temporal side, both short- and long-term stationarity of brain activation is seen. We summarize these insights in an inverse solver, the so-called "Variational Garrote" (Kappen and Gómez, 2013). Using a Markov prior we can incorporate flexible degrees of temporal stationarity. Through spatial basis functions, spatially smooth distributions are obtained. Sparsity of these is inherent to the Variational Garrote solver. We name our method MarkoVG and demonstrate its ability to adapt to the temporal smoothness and spatial sparsity in simulated EEG data. Finally, a benchmark EEG dataset is used to demonstrate MarkoVG's ability to recover non-stationary brain dynamics. Copyright © 2016 Elsevier Inc. All rights reserved.
Kernel temporal enhancement approach for LORETA source reconstruction using EEG data.
Torres-Valencia, Cristian A; Santamaria, M Claudia Joana; Alvarez, Mauricio A
2016-08-01
Reconstruction of brain sources from magnetoencephalography and electroencephalography (M/EEG) data is a well-known problem in the neuroengineering field. An inverse problem must be solved, and several methods have been proposed. Low Resolution Electromagnetic Tomography (LORETA) and its variations, such as standardized LORETA (sLORETA) and standardized weighted LORETA (swLORETA), solve the inverse problem with a non-parametric approach, that is, by placing dipoles throughout the brain domain and estimating the dipole distribution from the M/EEG data under some spatial priors. Errors in the reconstruction of sources arise from the low spatial resolution of the LORETA framework and the influence of noise in the observed data. In this work, a kernel temporal enhancement (kTE) is proposed as a preprocessing stage of the data that, in combination with the swLORETA method, improves source reconstruction. The results are quantified in terms of three dipole localization error metrics, and the swLORETA + kTE strategy obtained the best results across different signal-to-noise ratios (SNRs) in random-dipole simulations with synthetic EEG data.
NASA Astrophysics Data System (ADS)
Boughariou, Jihene; Zouch, Wassim; Slima, Mohamed Ben; Kammoun, Ines; Hamida, Ahmed Ben
2015-11-01
Electroencephalography (EEG) and magnetic resonance imaging (MRI) are noninvasive neuroimaging modalities. They are widely used and can be complementary. The fusion of these modalities may enhance emerging research fields aimed at better exploration of brain activity. Such research has attracted various scientific investigators, especially toward providing a convenient and helpful advanced clinical-aid tool enabling better neurological exploration. Our present research is set in the context of EEG inverse problem resolution and investigates an advanced estimation methodology for the localization of cerebral activity. Our focus is therefore on integrating temporal priors into the low-resolution brain electromagnetic tomography (LORETA) formalism to solve the EEG inverse problem. The main idea behind our proposed method is the integration of a temporal projection matrix within the LORETA weighting matrix. A hyperparameter governs this temporal integration, and its importance becomes apparent when obtaining a regularized, smooth solution. Our experimental results clearly confirm the impact of the optimization procedure adopted for the temporal regularization parameter, compared with the LORETA method.
Localization of synchronous cortical neural sources.
Zerouali, Younes; Herry, Christophe L; Jemel, Boutheina; Lina, Jean-Marc
2013-03-01
Neural synchronization is a key mechanism underlying a wide variety of brain functions, such as cognition, perception, and memory. The high temporal resolution achieved by EEG recordings allows the study of the dynamical properties of synchronous patterns of activity at a very fine temporal scale, but with very low spatial resolution. Spatial resolution can be improved by retrieving the neural sources of the EEG signal, thus solving the so-called inverse problem. Although many methods have been proposed to solve the inverse problem and localize brain activity, few of them target synchronous brain regions. In this paper, we propose a novel algorithm aimed at specifically localizing synchronous brain regions and reconstructing the time course of their activity. Using multivariate wavelet ridge analysis, we extract signals capturing the synchronous events buried in the EEG and then solve the inverse problem on these signals. Using simulated data, we compare the source reconstruction accuracy achieved by our method to a standard source reconstruction approach. We show that the proposed method performs better across a wide range of noise levels and source configurations. In addition, we applied our method to a real dataset and successfully identified cortical areas involved in the functional network underlying visual face perception. We conclude that the proposed approach allows accurate localization of synchronous brain regions and robust estimation of their activity.
Liao, Ke; Zhu, Min; Ding, Lei
2013-08-01
The present study investigated the use of transform sparseness of cortical current density on the human brain surface to improve electroencephalography/magnetoencephalography (EEG/MEG) inverse solutions. Transform sparseness was assessed by evaluating the compressibility of cortical current densities in transform domains. To do that, a structure compression method from computer graphics was first adopted to compress cortical surface structure, either regular or irregular, into hierarchical multi-resolution meshes. Then, a new face-based wavelet method based on the generated multi-resolution meshes was proposed to compress current density functions defined on cortical surfaces. Twelve cortical surface models were built with three EEG/MEG software packages, and their structural compressibility was evaluated and compared using the proposed method. Monte Carlo simulations were implemented to evaluate the performance of the proposed wavelet method in compressing various cortical current density distributions as compared to two other available vertex-based wavelet methods. The present results indicate that the face-based wavelet method can achieve higher transform sparseness than vertex-based wavelet methods. Furthermore, basis functions from the face-based wavelet method have lower coherence against typical EEG and MEG measurement systems than those from vertex-based wavelet methods. Both high transform sparseness and low-coherence measurements suggest that the proposed face-based wavelet method can improve the performance of L1-norm regularized EEG/MEG inverse solutions, which was further demonstrated in simulations and experimental setups using MEG data. Thus, this new transform on complicated cortical structure is promising to significantly advance EEG/MEG inverse source imaging technologies. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
The performance of the spatiotemporal Kalman filter and LORETA in seizure onset localization.
Hamid, Laith; Sarabi, Masoud; Japaridze, Natia; Wiegand, Gert; Heute, Ulrich; Stephani, Ulrich; Galka, Andreas; Siniatchkin, Michael
2015-08-01
The assumption of spatial-smoothness is often used to solve the bioelectric inverse problem during electroencephalographic (EEG) source imaging, e.g., in low resolution electromagnetic tomography (LORETA). Since the EEG data show a temporal structure, the combination of the temporal-smoothness and the spatial-smoothness constraints may improve the solution of the EEG inverse problem. This study investigates the performance of the spatiotemporal Kalman filter (STKF) method, which is based on spatial and temporal smoothness, in the localization of a focal seizure's onset and compares its results to those of LORETA. The main finding of the study was that the STKF with an autoregressive model of order two significantly outperformed LORETA in the accuracy and consistency of the localization, provided that the source space consists of a whole-brain volumetric grid. In the future, these promising results will be confirmed using data from more patients and performing statistical analyses on the results. Furthermore, the effects of the temporal smoothness constraint will be studied using different types of focal seizures.
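For orientation, the temporal-smoothness part of such a filter can be pictured with a generic linear Kalman recursion; the low-dimensional state space, AR(1) dynamics and noise levels below are stand-in assumptions, whereas the actual STKF operates on a whole-brain volumetric grid with an AR(2) model.

# Toy linear Kalman filter enforcing temporal smoothness on source amplitudes.
import numpy as np

rng = np.random.default_rng(3)
n_src, n_sens, n_times = 20, 10, 100
A = 0.95 * np.eye(n_src)                  # AR(1) source dynamics (assumed)
H = rng.standard_normal((n_sens, n_src))  # lead field mapping sources to sensors
Q = 0.01 * np.eye(n_src)                  # process noise covariance (assumed)
R = 0.1 * np.eye(n_sens)                  # measurement noise covariance (assumed)

# Simulate sources and EEG-like measurements.
x = np.zeros(n_src)
ys = []
for _ in range(n_times):
    x = A @ x + rng.multivariate_normal(np.zeros(n_src), Q)
    ys.append(H @ x + rng.multivariate_normal(np.zeros(n_sens), R))

# Kalman filter recursion (predict / update).
x_est, P = np.zeros(n_src), np.eye(n_src)
estimates = []
for y in ys:
    x_pred = A @ x_est
    P_pred = A @ P @ A.T + Q
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.solve(S, np.eye(n_sens))
    x_est = x_pred + K @ (y - H @ x_pred)
    P = (np.eye(n_src) - K @ H) @ P_pred
    estimates.append(x_est)

print(np.array(estimates).shape)          # filtered source time courses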
Regularized two-step brain activity reconstruction from spatiotemporal EEG data
NASA Astrophysics Data System (ADS)
Alecu, Teodor I.; Voloshynovskiy, Sviatoslav; Pun, Thierry
2004-10-01
We aim to use EEG source localization in the framework of a Brain-Computer Interface project. We propose here a new reconstruction procedure, targeting source (or equivalently mental task) differentiation. EEG data can be thought of as a collection of time-continuous streams from sparse locations. The measured electric potential on one electrode is the result of the superposition of synchronized synaptic activity from sources in the whole brain volume. Consequently, the EEG inverse problem is a highly underdetermined (and ill-posed) problem. Moreover, each source contribution is linear with respect to its amplitude but non-linear with respect to its localization and orientation. In order to overcome these drawbacks we propose a novel two-step inversion procedure. The solution is based on a double-scale division of the solution space. The first step uses a coarse discretization and has the sole purpose of globally identifying the active regions, via a sparse approximation algorithm. The second step is applied only to the retained regions and makes use of a fine discretization of the space, aiming at detailing the brain activity. The local configuration of sources is recovered using an iterative stochastic estimator with adaptive joint minimum-energy and directional-consistency constraints.
Review on solving the forward problem in EEG source analysis
Hallez, Hans; Vanrumste, Bart; Grech, Roberta; Muscat, Joseph; De Clercq, Wim; Vergult, Anneleen; D'Asseler, Yves; Camilleri, Kenneth P; Fabri, Simon G; Van Huffel, Sabine; Lemahieu, Ignace
2007-01-01
Background: The aim of electroencephalogram (EEG) source localization is to find the brain areas responsible for EEG waves of interest. It consists of solving forward and inverse problems. The forward problem is solved by starting from a given electrical source and calculating the potentials at the electrodes. These evaluations are necessary to solve the inverse problem, which is defined as finding the brain sources that are responsible for the measured potentials at the EEG electrodes. Methods: While other reviews give an extensive summary of both the forward and inverse problems, this review article focuses on different aspects of solving the forward problem and is intended for newcomers to this research field. Results: It starts by focusing on the generators of the EEG: the post-synaptic potentials in the apical dendrites of pyramidal neurons. These cells generate an extracellular current which can be modeled by Poisson's differential equation with Neumann and Dirichlet boundary conditions. The compartments in which these currents flow can be anisotropic (e.g. skull and white matter). In a three-shell spherical head model an analytical expression exists to solve the forward problem. During the last two decades researchers have tried to solve Poisson's equation in a realistically shaped head model obtained from 3D medical images, which requires numerical methods. The following methods are compared with each other: the boundary element method (BEM), the finite element method (FEM) and the finite difference method (FDM). In the last two methods anisotropic conducting compartments can conveniently be introduced. The focus is then set on the use of reciprocity in EEG source localization. It is introduced to speed up the forward calculations, which are then performed for each electrode position rather than for each dipole position. Solving Poisson's equation with FEM and FDM corresponds to solving a large sparse linear system. Iterative methods are required to solve these sparse linear systems. The following iterative methods are discussed: successive over-relaxation, the conjugate gradient method and the algebraic multigrid method. Conclusion: Solving the forward problem has been well documented in the past decades. In the past, simplified spherical head models were used, whereas nowadays a combination of imaging modalities is used to accurately describe the geometry of the head model. Efforts have been made to realistically describe the shape of the head model, as well as the heterogeneity of the tissue types, and to realistically determine the conductivity. However, the determination and validation of in vivo conductivity values is still an important topic in this field. In addition, more studies have to be done on the influence of all the parameters of the head model and of the numerical techniques on the solution of the forward problem. PMID:18053144
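As a small illustration of the last point, a finite-difference discretization of Poisson's equation on a regular grid yields a large sparse symmetric system that iterative solvers such as conjugate gradients handle well; the homogeneous-conductivity cube and dipole-like source below are simplifying assumptions, not a realistic head model.

# Discretize Poisson's equation with finite differences and solve with CG.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

n = 20                                            # grid points per dimension
h = 1.0 / (n - 1)

# 1D second-difference operator; its Kronecker sums give the 3D Laplacian.
L1 = sp.diags([1, -2, 1], [-1, 0, 1], shape=(n, n))
A = -sp.kronsum(sp.kronsum(L1, L1), L1) / h**2    # symmetric positive definite

# Right-hand side: a dipole-like source/sink pair inside the volume (assumed).
b = np.zeros(n**3)
b[(n**3) // 2] = 1.0
b[(n**3) // 2 + 1] = -1.0

phi, info = cg(A.tocsr(), b, maxiter=2000)        # iterative sparse solve
print("CG converged" if info == 0 else f"CG exited with code {info}")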
Liu, Hesheng; Gao, Xiaorong; Schimpf, Paul H; Yang, Fusheng; Gao, Shangkai
2004-10-01
Estimation of intracranial electric activity from the scalp electroencephalogram (EEG) requires a solution to the EEG inverse problem, which is known to be ill-conditioned. In order to yield a unique solution, weighted minimum norm least squares (MNLS) inverse methods are generally used. This paper proposes a recursive algorithm, termed Shrinking LORETA-FOCUSS, which combines and expands upon the central features of two well-known weighted MNLS methods: LORETA and FOCUSS. This recursive algorithm makes iterative adjustments to the solution space as well as the weighting matrix, thereby dramatically reducing the computational load and increasing local source resolution. Simulations are conducted on a 3-shell spherical head model registered to the Talairach human brain atlas. A comparative study of four different inverse methods, standard Weighted Minimum Norm, L1-norm, LORETA-FOCUSS and Shrinking LORETA-FOCUSS, is presented. The results demonstrate that Shrinking LORETA-FOCUSS is able to reconstruct a three-dimensional source distribution with smaller localization and energy errors compared to the other methods.
Hu, Shiang; Yao, Dezhong; Valdes-Sosa, Pedro A
2018-01-01
The choice of reference for the electroencephalogram (EEG) is a long-lasting unsolved issue resulting in inconsistent usages and endless debates. Currently, the average reference (AR) and the reference electrode standardization technique (REST) are the two primary, apparently irreconcilable contenders. We propose a theoretical framework to resolve this reference issue by formulating both (a) estimation of potentials at infinity, and (b) determination of the reference, as a unified Bayesian linear inverse problem, which can be solved by maximum a posteriori estimation. We find that AR and REST are very particular cases of this unified framework: AR results from a biophysically non-informative prior, while REST utilizes the prior based on the EEG generative model. To allow for simultaneous denoising and reference estimation, we develop regularized versions of AR and REST, named rAR and rREST, respectively. Both depend on a regularization parameter that is the noise-to-signal variance ratio. Traditional and new estimators are evaluated with this framework, by both simulations and analysis of real resting EEGs. Toward this end, we leverage the MRI and EEG data from 89 subjects who participated in the Cuban Human Brain Mapping Project. Generated artificial EEGs with a known ground truth show that the relative error in estimating the EEG potentials at infinity is lowest for rREST. The analysis also reveals that realistic volume conductor models improve the performance of REST and rREST. Importantly, for practical applications, it is shown that an average lead field gives results comparable to the individual lead field. Finally, it is shown that selecting the regularization parameter with Generalized Cross-Validation (GCV) is close to the "oracle" choice based on the ground truth. When evaluated with the real 89 resting-state EEGs, rREST consistently yields the lowest GCV. This study provides a novel perspective on the EEG reference problem by means of a unified inverse solution framework. It may allow additional principled theoretical formulations and numerical evaluation of performance.
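Generalized Cross-Validation of the kind used above can be sketched for a plain Tikhonov-regularized inverse as follows; the lead field, data and lambda grid are invented assumptions, and the rAR/rREST estimators themselves are not reproduced.

# GCV selection of the regularization parameter for a ridge-type inverse.
import numpy as np

rng = np.random.default_rng(4)
n_sens, n_src = 30, 100
K = rng.standard_normal((n_sens, n_src))            # lead field (assumed)
v = K @ rng.standard_normal(n_src) + 0.5 * rng.standard_normal(n_sens)

def gcv_score(lam):
    # Influence matrix of the Tikhonov estimate j = K^T (K K^T + lam I)^-1 v
    G = K @ K.T + lam * np.eye(n_sens)
    H = K @ K.T @ np.linalg.inv(G)                  # maps data to fitted data
    resid = v - H @ v
    return (np.sum(resid**2) / n_sens) / (1 - np.trace(H) / n_sens) ** 2

lams = np.logspace(-4, 2, 50)
best = lams[int(np.argmin([gcv_score(l) for l in lams]))]
print(f"GCV-selected lambda: {best:.3g}")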
[EEG source localization using LORETA (low resolution electromagnetic tomography)].
Puskás, Szilvia
2011-03-30
Electroencephalography (EEG) has excellent temporal resolution, but its spatial resolution is poor. Different source localization methods exist to solve the so-called inverse problem, thus increasing the accuracy of spatial localization. This paper provides an overview of the history of source localization, and the main categories of techniques are discussed. LORETA (low resolution electromagnetic tomography) is introduced in detail: technical information is discussed and the localization properties of the LORETA method are compared to other inverse solutions. Validation of the method with different imaging techniques is also discussed. This paper reviews several publications using LORETA both in healthy persons and in persons with different neurological and psychiatric diseases. Finally, possible future applications are discussed.
MEG-SIM: a web portal for testing MEG analysis methods using realistic simulated and empirical data.
Aine, C J; Sanfratello, L; Ranken, D; Best, E; MacArthur, J A; Wallace, T; Gilliam, K; Donahue, C H; Montaño, R; Bryant, J E; Scott, A; Stephen, J M
2012-04-01
MEG and EEG measure electrophysiological activity in the brain with exquisite temporal resolution. Because of this unique strength relative to noninvasive hemodynamic-based measures (fMRI, PET), the complementary nature of hemodynamic and electrophysiological techniques is becoming more widely recognized (e.g., Human Connectome Project). However, the available analysis methods for solving the inverse problem for MEG and EEG have not been compared and standardized to the extent that they have for fMRI/PET. A number of factors, including the non-uniqueness of the solution to the inverse problem for MEG/EEG, have led to multiple analysis techniques which have not been tested on consistent datasets, making direct comparisons of techniques challenging (or impossible). Since each of the methods is known to have its own set of strengths and weaknesses, it would be beneficial to quantify them. Toward this end, we are announcing the establishment of a website containing an extensive series of realistic simulated data for testing purposes (http://cobre.mrn.org/megsim/). Here, we present: 1) a brief overview of the basic types of inverse procedures; 2) the rationale and description of the testbed created; and 3) cases emphasizing functional connectivity (e.g., oscillatory activity) suitable for a wide assortment of analyses including independent component analysis (ICA), Granger Causality/Directed transfer function, and single-trial analysis.
Finke, Stefan; Gulrajani, Ramesh M; Gotman, Jean; Savard, Pierre
2013-01-01
The non-invasive localization of the primary sensory hand area can be achieved by solving the inverse problem of electroencephalography (EEG) for N20-P20 somatosensory evoked potentials (SEPs). This study compares two different mathematical approaches for the computation of transfer matrices used to solve the EEG inverse problem. Forward transfer matrices relating dipole sources to scalp potentials are determined via conventional and reciprocal approaches using individual, realistically shaped head models. The reciprocal approach entails calculating the electric field at the dipole position when scalp electrodes are reciprocally energized with unit current; scalp potentials are obtained from the scalar product of this electric field and the dipole moment. Median nerve stimulation is performed on three healthy subjects and single-dipole inverse solutions for the N20-P20 SEPs are then obtained by simplex minimization and validated against the primary sensory hand area identified on magnetic resonance images. Solutions are presented for different time points, filtering strategies, boundary-element method discretizations, and skull conductivity values. Both approaches produce similarly small position errors for the N20-P20 SEP. Position error for single-dipole inverse solutions is inherently robust to inaccuracies in forward transfer matrices but dependent on the overlapping activity of other neural sources. Significantly smaller time and storage requirements are the principal advantages of the reciprocal approach. Reduced computational requirements and similar dipole position accuracy support the use of reciprocal approaches over conventional approaches for N20-P20 SEP source localization.
Algorithmic procedures for Bayesian MEG/EEG source reconstruction in SPM
López, J.D.; Litvak, V.; Espinosa, J.J.; Friston, K.; Barnes, G.R.
2014-01-01
The MEG/EEG inverse problem is ill-posed, giving different source reconstructions depending on the initial assumption sets. Parametric Empirical Bayes allows one to implement most popular MEG/EEG inversion schemes (Minimum Norm, LORETA, etc.) within the same generic Bayesian framework. It also provides a cost-function in terms of the variational Free energy—an approximation to the marginal likelihood or evidence of the solution. In this manuscript, we revisit the algorithm for MEG/EEG source reconstruction with a view to providing a didactic and practical guide. The aim is to promote and help standardise the development and consolidation of other schemes within the same framework. We describe the implementation in the Statistical Parametric Mapping (SPM) software package, carefully explaining each of its stages with the help of a simple simulated data example. We focus on the Multiple Sparse Priors (MSP) model, which we compare with the well-known Minimum Norm and LORETA models, using the negative variational Free energy for model comparison. The manuscript is accompanied by Matlab scripts to allow the reader to test and explore the underlying algorithm. PMID:24041874
EEG-distributed inverse solutions for a spherical head model
NASA Astrophysics Data System (ADS)
Riera, J. J.; Fuentes, M. E.; Valdés, P. A.; Ohárriz, Y.
1998-08-01
The theoretical study of the minimum norm solution to the MEG inverse problem has been carried out in previous papers for the particular case of spherical symmetry. However, a similar study for the EEG is remarkably more difficult due to the very complicated nature of the expression relating the voltage differences on the scalp to the primary current density (PCD) even for this simple symmetry. This paper introduces the use of the electric lead field (ELF) on the dyadic formalism in the spherical coordinate system to overcome such a drawback using an expansion of the ELF in terms of longitudinal and orthogonal vector fields. This approach allows us to represent EEG Fourier coefficients on a 2-sphere in terms of a current multipole expansion. The choice of a suitable basis for the Hilbert space of the PCDs on the brain region allows the current multipole moments to be related by spatial transfer functions to the PCD spectral coefficients. Properties of the most used distributed inverse solutions are explored on the basis of these results. Also, a part of the ELF null space is completely characterized and those spherical components of the PCD which are possible silent candidates are discussed.
NASA Astrophysics Data System (ADS)
Nguyen, Thinh; Potter, Thomas; Grossman, Robert; Zhang, Yingchun
2018-06-01
Objective. Neuroimaging has been employed as a promising approach to advance our understanding of brain networks in both basic and clinical neuroscience. Electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) represent two neuroimaging modalities with complementary features; EEG has high temporal resolution and low spatial resolution while fMRI has high spatial resolution and low temporal resolution. Multimodal EEG inverse methods have attempted to capitalize on these properties but have been subjected to localization error. The dynamic brain transition network (DBTN) approach, a spatiotemporal fMRI constrained EEG source imaging method, has recently been developed to address these issues by solving the EEG inverse problem in a Bayesian framework, utilizing fMRI priors in a spatial and temporal variant manner. This paper presents a computer simulation study to provide a detailed characterization of the spatial and temporal accuracy of the DBTN method. Approach. Synthetic EEG data were generated in a series of computer simulations, designed to represent realistic and complex brain activity at superficial and deep sources with highly dynamical activity time-courses. The source reconstruction performance of the DBTN method was tested against the fMRI-constrained minimum norm estimates algorithm (fMRIMNE). The performances of the two inverse methods were evaluated both in terms of spatial and temporal accuracy. Main results. In comparison with the commonly used fMRIMNE method, results showed that the DBTN method produces results with increased spatial and temporal accuracy. The DBTN method also demonstrated the capability to reduce crosstalk in the reconstructed cortical time-course(s) induced by neighboring regions, mitigate depth bias and improve overall localization accuracy. Significance. The improved spatiotemporal accuracy of the reconstruction allows for an improved characterization of complex neural activity. This improvement can be extended to any subsequent brain connectivity analyses used to construct the associated dynamic brain networks.
Non-linear Parameter Estimates from Non-stationary MEG Data
Martínez-Vargas, Juan D.; López, Jose D.; Baker, Adam; Castellanos-Dominguez, German; Woolrich, Mark W.; Barnes, Gareth
2016-01-01
We demonstrate a method to estimate key electrophysiological parameters from resting state data. In this paper, we focus on the estimation of head-position parameters. The recovery of these parameters is especially challenging as they are non-linearly related to the measured field. In order to do this we use an empirical Bayesian scheme to estimate the cortical current distribution due to a range of laterally shifted head-models. We compare different methods of approaching this problem from the division of M/EEG data into stationary sections and performing separate source inversions, to explaining all of the M/EEG data with a single inversion. We demonstrate this through estimation of head position in both simulated and empirical resting state MEG data collected using a head-cast. PMID:27597815
High-resolution EEG (HR-EEG) and magnetoencephalography (MEG).
Gavaret, M; Maillard, L; Jung, J
2015-03-01
High-resolution EEG (HR-EEG) and magnetoencephalography (MEG) allow the recording of spontaneous or evoked electromagnetic brain activity with excellent temporal resolution. Data must be recorded with high temporal resolution (sampling rate) and high spatial resolution (number of channels). Data analyses are based on several steps with selection of electromagnetic signals, elaboration of a head model and use of algorithms in order to solve the inverse problem. Due to considerable technical advances in spatial resolution, these tools now represent real methods of ElectroMagnetic Source Imaging. HR-EEG and MEG constitute non-invasive and complementary examinations, characterized by distinct sensitivities according to the location and orientation of intracerebral generators. In the presurgical assessment of drug-resistant partial epilepsies, HR-EEG and MEG can characterize and localize interictal activities and thus the irritative zone. HR-EEG and MEG often yield significant additional data that are complementary to other presurgical investigations and particularly relevant in MRI-negative cases. Currently, the determination of the epileptogenic zone and functional brain mapping remain rather less well-validated indications. In France, in 2014, HR-EEG is now part of standard clinical investigation of epilepsy, while MEG remains a research technique. Copyright © 2015 Elsevier Masson SAS. All rights reserved.
Zwoliński, Piotr; Roszkowski, Marcin; Zygierewicz, Jaroslaw; Haufe, Stefan; Nolte, Guido; Durka, Piotr J
2010-12-01
This paper introduces a freely accessible database http://eeg.pl/epi , containing 23 datasets from patients diagnosed with and operated on for drug-resistant epilepsy. This was collected as part of the clinical routine at the Warsaw Memorial Child Hospital. Each record contains (1) pre-surgical electroencephalography (EEG) recording (10-20 system) with inter-ictal discharges marked separately by an expert, (2) a full set of magnetic resonance imaging (MRI) scans for calculations of the realistic forward models, (3) structural placement of the epileptogenic zone, recognized by electrocorticography (ECoG) and post-surgical results, plotted on pre-surgical MRI scans in transverse, sagittal and coronal projections, (4) brief clinical description of each case. The main goal of this project is evaluation of possible improvements of localization of epileptic foci from the surface EEG recordings. These datasets offer a unique possibility for evaluating different EEG inverse solutions. We present preliminary results from a subset of these cases, including comparison of different schemes for the EEG inverse solution and preprocessing. We report also a finding which relates to the selective parametrization of single waveforms by multivariate matching pursuit, which is used in the preprocessing for the inverse solutions. It seems to offer a possibility of tracing the spatial evolution of seizures in time.
Astolfi, Laura; Vecchiato, Giovanni; De Vico Fallani, Fabrizio; Salinari, Serenella; Cincotti, Febo; Aloise, Fabio; Mattia, Donatella; Marciani, Maria Grazia; Bianchi, Luigi; Soranzo, Ramon; Babiloni, Fabio
2009-01-01
We estimate cortical activity in normal subjects during the observation of TV commercials inserted within a movie by using high-resolution EEG techniques. The brain activity was evaluated in both time and frequency domains by solving the associate inverse problem of EEG with the use of realistic head models. In particular, we recover statistically significant information about cortical areas engaged by particular scenes inserted within the TV commercial proposed with respect to the brain activity estimated while watching a documentary. Results obtained in the population investigated suggest that the statistically significant brain activity during the observation of the TV commercial was mainly concentrated in frontoparietal cortical areas, roughly coincident with the Brodmann areas 8, 9, and 7, in the analyzed population. PMID:19584910
Paz-Linares, Deirel; Vega-Hernández, Mayrim; Rojas-López, Pedro A.; Valdés-Hernández, Pedro A.; Martínez-Montes, Eduardo; Valdés-Sosa, Pedro A.
2017-01-01
The estimation of EEG generating sources constitutes an Inverse Problem (IP) in Neuroscience. This is an ill-posed problem due to the non-uniqueness of the solution, and regularization or prior information is needed to undertake Electrophysiology Source Imaging. Structured Sparsity priors can be attained through combinations of (L1 norm-based) and (L2 norm-based) constraints such as the Elastic Net (ENET) and Elitist Lasso (ELASSO) models. The former model is used to find solutions with a small number of smooth nonzero patches, while the latter imposes different degrees of sparsity simultaneously along different dimensions of the spatio-temporal matrix solutions. Both models have been addressed within the penalized regression approach, where the regularization parameters are selected heuristically, usually leading to non-optimal and computationally expensive solutions. The existing Bayesian formulation of ENET allows hyperparameter learning, but it uses computationally intensive Monte Carlo/Expectation Maximization methods, which makes its application to the EEG IP impractical, while ELASSO has not previously been considered in a Bayesian context. In this work, we attempt to solve the EEG IP using a Bayesian framework for the ENET and ELASSO models. We propose a Structured Sparse Bayesian Learning algorithm that combines Empirical Bayes with iterative coordinate descent to estimate both the parameters and hyperparameters. Using realistic simulations and avoiding the inverse crime, we illustrate that our methods are able to recover complicated source setups more accurately, and with a more robust estimation of the hyperparameters and behavior under different sparsity scenarios, than classical LORETA, ENET and LASSO Fusion solutions. We also solve the EEG IP using data from a visual attention experiment, finding more interpretable neurophysiological patterns with our methods. The Matlab codes used in this work, including Simulations, Methods, Quality Measures and Visualization Routines, are freely available on a public website. PMID:29200994
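For orientation, an elastic-net penalty of the kind discussed above can be applied to a toy EEG-like regression problem with a generic coordinate-descent solver; the dimensions and hyperparameters below are invented, and this is not the authors' Bayesian ENET/ELASSO algorithm.

# Elastic-net (combined l1/l2) inverse sketch using scikit-learn's solver.
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(5)
n_sens, n_src = 40, 400
K = rng.standard_normal((n_sens, n_src))       # lead field (assumed)
j_true = np.zeros(n_src)
j_true[100:105] = 1.0                          # a small smooth patch of activity
v = K @ j_true + 0.05 * rng.standard_normal(n_sens)

model = ElasticNet(alpha=0.05, l1_ratio=0.5, fit_intercept=False, max_iter=5000)
model.fit(K, v)                                # coordinate-descent solution
print("non-zero sources:", np.flatnonzero(model.coef_).size)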
Arjunan, Sridhar P; Kumar, Dinesh K; Jung, Tzyy-Ping
2009-01-01
Loss of alertness can have dire consequences for people controlling motorized equipment or for people in professions such as defense. The electroencephalogram (EEG) is known to be related to the alertness of a person, but due to the high level of noise and low signal strength, the use of EEG for such applications has been considered unreliable. This study reports a fractal analysis of EEG and identifies the use of maximum fractal length (MFL) as a feature that is inversely correlated with the alertness of the subject. The results show that MFL (computed from only a single channel of EEG) indicates the loss of alertness of the individual, with a mean (inverse) correlation coefficient of 0.82.
Functional connectivity analysis in EEG source space: The choice of method
Knyazeva, Maria G.
2017-01-01
Functional connectivity (FC) is among the most informative features derived from EEG. However, the most straightforward sensor-space analysis of FC is unreliable owing to volume conduction effects. An alternative, source-space analysis of FC, is optimal for high- and mid-density EEG (hdEEG, mdEEG); however, it is questionable for widely used low-density EEG (ldEEG) because of inadequate surface sampling. Here, using simulations, we investigate the performance of two source FC methods, inverse-based source FC (ISFC) and cortical partial coherence (CPC). To examine the effects of localization errors of the inverse method on the FC estimation, we simulated an oscillatory source with varying locations and SNRs. To compare the FC estimations by the two methods, we simulated two synchronized sources with varying between-source distance and SNR. The simulations were implemented for hdEEG, mdEEG, and ldEEG. We showed that the performance of both methods deteriorates for deep sources owing to their inaccurate localization and smoothing. The accuracy of both methods improves with increasing between-source distance. The best ISFC performance was achieved using hd/mdEEG, while the best CPC performance was observed with ldEEG. In conclusion, with hdEEG, ISFC outperforms CPC and therefore should be the preferred method. In studies based on ldEEG, CPC is the method of choice. PMID:28727750
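An inverse-based source-connectivity computation of the ISFC type can be sketched as follows: reconstruct source time courses with a minimum-norm style inverse operator and compute their spectral coherence. The lead field, coupled 10 Hz sources and regularization below are synthetic assumptions, not the simulation protocol of the study.

# Minimum-norm source reconstruction followed by spectral coherence.
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(6)
fs, n_times, n_sens, n_src = 250, 2500, 32, 100
K = rng.standard_normal((n_sens, n_src))                  # lead field (assumed)

t = np.arange(n_times) / fs
J = 0.05 * rng.standard_normal((n_src, n_times))
J[10] += np.sin(2 * np.pi * 10 * t)                       # two coupled 10 Hz sources
J[60] += np.sin(2 * np.pi * 10 * t + 0.5)
M = K @ J + 0.1 * rng.standard_normal((n_sens, n_times))  # sensor data

lam = 1e-2
W = K.T @ np.linalg.inv(K @ K.T + lam * np.eye(n_sens))   # minimum-norm inverse
J_hat = W @ M                                             # reconstructed sources

f, coh = coherence(J_hat[10], J_hat[60], fs=fs, nperseg=256)
print("coherence near 10 Hz:", coh[np.argmin(np.abs(f - 10))].round(2))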
Harmony: EEG/MEG Linear Inverse Source Reconstruction in the Anatomical Basis of Spherical Harmonics
Petrov, Yury
2012-01-01
EEG/MEG source localization based on a “distributed solution” is severely underdetermined, because the number of sources is much larger than the number of measurements. In particular, this makes the solution strongly affected by sensor noise. A new way to constrain the problem is presented. By using the anatomical basis of spherical harmonics (or spherical splines) instead of single dipoles the dimensionality of the inverse solution is greatly reduced without sacrificing the quality of the data fit. The smoothness of the resulting solution reduces the surface bias and scatter of the sources (incoherency) compared to the popular minimum-norm algorithms where single-dipole basis is used (MNE, depth-weighted MNE, dSPM, sLORETA, LORETA, IBF) and allows to efficiently reduce the effect of sensor noise. This approach, termed Harmony, performed well when applied to experimental data (two exemplars of early evoked potentials) and showed better localization precision and solution coherence than the other tested algorithms when applied to realistically simulated data. PMID:23071497
The Iterative Reweighted Mixed-Norm Estimate for Spatio-Temporal MEG/EEG Source Reconstruction.
Strohmeier, Daniel; Bekhti, Yousra; Haueisen, Jens; Gramfort, Alexandre
2016-10-01
Source imaging based on magnetoencephalography (MEG) and electroencephalography (EEG) allows for the non-invasive analysis of brain activity with high temporal and good spatial resolution. As the bioelectromagnetic inverse problem is ill-posed, constraints are required. For the analysis of evoked brain activity, spatial sparsity of the neuronal activation is a common assumption. It is often taken into account using convex constraints based on the ℓ₁-norm. The resulting source estimates are however biased in amplitude and often suboptimal in terms of source selection due to high correlations in the forward model. In this work, we demonstrate that an inverse solver based on a block-separable penalty with a Frobenius norm per block and an ℓ0.5-quasinorm over blocks addresses both of these issues. For solving the resulting non-convex optimization problem, we propose the iterative reweighted Mixed Norm Estimate (irMxNE), an optimization scheme based on iterative reweighted convex surrogate optimization problems, which are solved efficiently using a block coordinate descent scheme and an active set strategy. We compare the proposed sparse imaging method to the dSPM and the RAP-MUSIC approach based on two MEG data sets. We provide empirical evidence based on simulations and analysis of MEG data that the proposed method improves on the standard Mixed Norm Estimate (MxNE) in terms of amplitude bias, support recovery, and stability.
Inverse scattering approach to improving pattern recognition
NASA Astrophysics Data System (ADS)
Chapline, George; Fu, Chi-Yung
2005-05-01
The Helmholtz machine provides what may be the best existing model for how the mammalian brain recognizes patterns. Based on the observation that the "wake-sleep" algorithm for training a Helmholtz machine is similar to the problem of finding the potential for a multi-channel Schrodinger equation, we propose that the construction of a Schrodinger potential using inverse scattering methods can serve as a model for how the mammalian brain learns to extract essential information from sensory data. In particular, inverse scattering theory provides a conceptual framework for imagining how one might use EEG and MEG observations of brain-waves together with sensory feedback to improve human learning and pattern recognition. Longer term, implementation of inverse scattering algorithms on a digital or optical computer could be a step towards mimicking the seamless information fusion of the mammalian brain.
Ictal EEG fractal dimension in ECT predicts outcome at 2 weeks in schizophrenia.
Abhishekh, Hulegar A; Thirthalli, Jagadisha; Manjegowda, Anusha; Phutane, Vivek H; Muralidharan, Kesavan; Gangadhar, Bangalore N
2013-09-30
Studies of electroconvulsive therapy (ECT) have found an association between ictal electroencephalographic (EEG) measures and clinical outcome in depression. Such studies are lacking in schizophrenia. Consenting schizophrenia patients receiving ECT were assessed using the Brief Psychiatric Rating Scale (BPRS) before and 2 weeks after the start of ECT. The patients' seizures were monitored using EEG. In 26 patients, completely artifact-free EEG derived from the left frontal-pole (FP1) channel and electrocardiography (ECG) were available. The fractal dimension (FD) was computed for 4-s EEG epochs, and the maximal value from the earliest ECT session (2nd, 3rd or 4th) was used for analysis. There was a significant inverse correlation between the maximum FD and the total score following the 6th ECT. An inverse correlation was also observed between the maximum FD and the total number of ECTs administered, as well as between the maximum heart rate (HR) and BPRS scores following the 6th ECT. In patients with schizophrenia, greater intensity of seizures (higher FD) during initial sessions of ECT is associated with better response at the end of 2 weeks. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
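For readers unfamiliar with fractal-dimension measures of ictal EEG, the sketch below shows one common estimator, Higuchi's algorithm, applied to a synthetic 4-s epoch; the sampling rate and the choice of estimator are assumptions for illustration and may differ from what the study actually used.

```python
# Sketch: Higuchi fractal dimension of a short EEG epoch (illustrative estimator).
import numpy as np

def higuchi_fd(x, k_max=10):
    """Estimate the fractal dimension of a 1-D signal with Higuchi's algorithm."""
    x = np.asarray(x, dtype=float)
    N = x.size
    log_inv_k, log_L = [], []
    for k in range(1, k_max + 1):
        Lk = []
        for m in range(k):
            idx = np.arange(m, N, k)                       # subsampled curve x[m], x[m+k], ...
            if idx.size < 2:
                continue
            length = np.sum(np.abs(np.diff(x[idx])))       # curve length of this subsample
            norm = (N - 1) / ((idx.size - 1) * k)          # Higuchi normalization
            Lk.append(length * norm / k)
        log_inv_k.append(np.log(1.0 / k))
        log_L.append(np.log(np.mean(Lk)))
    # The fractal dimension is the slope of log L(k) versus log(1/k).
    slope, _ = np.polyfit(log_inv_k, log_L, 1)
    return slope

fs = 256                                                   # assumed sampling rate (Hz)
epoch = np.cumsum(np.random.randn(4 * fs))                 # synthetic 4-s EEG-like signal
print("Higuchi FD:", round(higuchi_fd(epoch), 3))
```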
Optimal Design for Parameter Estimation in EEG Problems in a 3D Multilayered Domain
2014-03-30
The dipole current source is modeled as C(x) = q δ(x − r_q), where δ is the Dirac distribution, r_q is a fixed point in the brain representing the dipole location, and q is the dipole moment; based on the formulations discussed in the report, the source term takes the form F(x, θ) = q δ(x − r_q).
Heers, Marcel; Chowdhury, Rasheda A; Hedrich, Tanguy; Dubeau, François; Hall, Jeffery A; Lina, Jean-Marc; Grova, Christophe; Kobayashi, Eliane
2016-01-01
Distributed inverse solutions aim to realistically reconstruct the origin of interictal epileptic discharges (IEDs) from noninvasively recorded electroencephalography (EEG) and magnetoencephalography (MEG) signals. Our aim was to compare the performance of different distributed inverse solutions in localizing IEDs: coherent maximum entropy on the mean (cMEM), and hierarchical Bayesian implementations of independent identically distributed sources (IID, minimum norm prior) and spatially coherent sources (COH, spatial smoothness prior). Source maxima (i.e., the vertex with the maximum source amplitude) of IEDs in 14 EEG and 19 MEG studies from 15 patients with focal epilepsy were analyzed. We visually compared their concordance with intracranial EEG (iEEG) based on 17 cortical regions of interest and their spatial dispersion around source maxima. Magnetic source imaging (MSI) maxima from cMEM were most often confirmed by iEEG (cMEM: 14/19, COH: 9/19, IID: 8/19 studies). COH electric source imaging (ESI) maxima co-localized best with iEEG (cMEM: 8/14, COH: 11/14, IID: 10/14 studies). In addition, cMEM was less spatially spread than COH and IID for ESI and MSI (p < 0.001, Bonferroni-corrected post hoc t test). The highest positive predictive values for cortical regions with IEDs in iEEG could be obtained with cMEM for MSI and with COH for ESI. Additional realistic EEG/MEG simulations confirmed our findings. Accurate spatially extended sources, as found in cMEM (ESI and MSI) and COH (ESI), are desirable for source imaging of IEDs because this might influence surgical decisions. Our simulations suggest that COH and IID overestimate the spatial extent of the generators compared to cMEM.
EEG phase reset due to auditory attention: an inverse time-scale approach.
Low, Yin Fen; Strauss, Daniel J
2009-08-01
We propose a novel tool to evaluate electroencephalographic (EEG) phase reset due to auditory attention by utilizing, for the first time, an inverse analysis of the instantaneous phase. EEGs were acquired through auditory attention experiments with a maximum entropy stimulation paradigm. We examined single sweeps of the auditory late response (ALR) with the complex continuous wavelet transform. The phase in the frequency band associated with auditory attention (6-10 Hz, termed the theta-alpha border) was reset to the mean phase of the averaged EEGs. The inverse transform was applied to reconstruct the phase-modified signal. We found significant enhancement of the N100 wave in the reconstructed signal. Analysis of the phase noise shows the effects of phase jittering on the generation of the N100 wave, implying that a preferred phase is necessary to generate the event-related potential (ERP). Power spectrum analysis shows a remarkable increase in evoked power but little change in total power after stabilizing the phase of the EEGs. Furthermore, resetting the phase only at the theta-alpha border of the no-attention data to the mean phase of the attention data yields a result that resembles the attention data. These results show strong connections between EEGs and ERPs; in particular, we suggest that the presentation of an auditory stimulus triggers the phase reset process at the theta-alpha border, which leads to the emergence of the N100 wave. We conclude that our study reinforces other studies on the importance of the EEG in ERP genesis.
EEG source localization: Sensor density and head surface coverage.
Song, Jasmine; Davey, Colin; Poulsen, Catherine; Luu, Phan; Turovets, Sergei; Anderson, Erik; Li, Kai; Tucker, Don
2015-12-30
The accuracy of EEG source localization depends on a sufficient sampling of the surface potential field, an accurate conducting volume estimation (head model), and a suitable and well-understood inverse technique. The goal of the present study is to examine the effect of sampling density and coverage on the ability to accurately localize sources, using common linear inverse weight techniques, at different depths. Several inverse methods are examined, using commonly adopted head conductivity values. Simulation studies were employed to examine the effect of spatial sampling of the potential field at the head surface, in terms of sensor density and coverage of the inferior and superior head regions. In addition, the effects of sensor density and coverage are investigated in the source localization of epileptiform EEG. Greater sensor density improves source localization accuracy. Moreover, across all sampling densities and inverse methods, adding samples on the inferior surface improves the accuracy of source estimates at all depths. More accurate source localization of EEG data can be achieved with high spatial sampling of the head surface electrodes. The most accurate source localization is obtained when the voltage surface is densely sampled over both the superior and inferior surfaces. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
Huang, Kuan-Ju; Shih, Wei-Yeh; Chang, Jui Chung; Feng, Chih Wei; Fang, Wai-Chi
2013-01-01
This paper presents a pipeline VLSI design of a fast singular value decomposition (SVD) processor for a real-time electroencephalography (EEG) system based on on-line recursive independent component analysis (ORICA). Since SVD is used frequently in computations of the real-time EEG system, a low-latency and high-accuracy SVD processor is essential. During the EEG system process, the proposed SVD processor computes the diagonal, inverse, and inverse-square-root matrices of the target matrices in real time. Generally, SVD requires a huge amount of computation in hardware implementation. Therefore, this work proposes a novel design concept for data flow updating to assist the pipeline VLSI implementation. The SVD processor can greatly improve the feasibility of real-time EEG system applications such as brain-computer interfaces (BCIs). The proposed architecture is implemented using TSMC 90 nm CMOS technology. The raw EEG data are sampled at 128 Hz. The core size of the SVD processor is 580×580 μm², and the operating frequency is 20 MHz. It consumes 0.774 mW of power per execution of the 8-channel EEG system.
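To relate the hardware description to the underlying linear algebra, the snippet below computes the same three quantities (diagonalization, inverse, and inverse square root) from an SVD in floating point; it is a software reference for the math only, not a model of the fixed-point VLSI data path.

```python
# Sketch: inverse and inverse square root of a symmetric positive-definite matrix via SVD.
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((8, 8))
C = A @ A.T + 1e-3 * np.eye(8)                       # e.g. an 8-channel covariance matrix

U, s, Vt = np.linalg.svd(C)                          # C = U @ diag(s) @ Vt
C_inv = Vt.T @ np.diag(1.0 / s) @ U.T                # inverse
C_inv_sqrt = Vt.T @ np.diag(1.0 / np.sqrt(s)) @ U.T  # inverse square root

print(np.allclose(C @ C_inv, np.eye(8)))                    # True
print(np.allclose(C_inv_sqrt @ C @ C_inv_sqrt, np.eye(8)))  # True
```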
Owen, Julia P; Wipf, David P; Attias, Hagai T; Sekihara, Kensuke; Nagarajan, Srikantan S
2012-03-01
In this paper, we present an extensive performance evaluation of a novel source localization algorithm, Champagne. It is derived in an empirical Bayesian framework that yields sparse solutions to the inverse problem. It is robust to correlated sources and learns the statistics of non-stimulus-evoked activity to suppress the effect of noise and interfering brain activity. We tested Champagne on both simulated and real M/EEG data. The source locations used for the simulated data were chosen to test the performance on challenging source configurations. In simulations, we found that Champagne outperforms the benchmark algorithms in terms of both the accuracy of the source localizations and the correct estimation of source time courses. We also demonstrate that Champagne is more robust to correlated brain activity present in real MEG data and is able to resolve many distinct and functionally relevant brain areas with real MEG and EEG data. Copyright © 2011 Elsevier Inc. All rights reserved.
Hamid, Laith; Al Farawn, Ali; Merlet, Isabelle; Japaridze, Natia; Heute, Ulrich; Stephani, Ulrich; Galka, Andreas; Wendling, Fabrice; Siniatchkin, Michael
2017-07-01
The clinical routine of non-invasive electroencephalography (EEG) is usually performed with 8-40 electrodes, especially in long-term monitoring, in infants, or in emergency care. There is a need in clinical and scientific brain imaging to develop inverse solution methods that can reconstruct brain sources from these low-density EEG recordings. In this proof-of-principle paper we investigate the performance of the spatiotemporal Kalman filter (STKF) in EEG source reconstruction with 9, 19, and 32 electrodes. We used simulated EEG data of epileptic spikes generated from lateral frontal and lateral temporal brain sources using state-of-the-art neuronal population models. For validation of source reconstruction, we compared STKF results to the location of the simulated source and to the results of the low-resolution brain electromagnetic tomography (LORETA) standard inverse solution. STKF consistently showed less localization bias compared to LORETA, especially when the number of electrodes was decreased. The results encourage further research into the application of the STKF in source reconstruction of brain activity from low-density EEG recordings.
ERP denoising in multichannel EEG data using contrasts between signal and noise subspaces.
Ivannikov, Andriy; Kalyakin, Igor; Hämäläinen, Jarmo; Leppänen, Paavo H T; Ristaniemi, Tapani; Lyytinen, Heikki; Kärkkäinen, Tommi
2009-06-15
In this paper, a new method intended for ERP denoising in multichannel EEG data is discussed. The denoising is performed by separating the ERP and noise subspaces in the multidimensional EEG data with a linear transformation, followed by dimension reduction in which the noise components are discarded during the inverse transformation. The separation matrix is found based on the assumption that ERP sources are deterministic across all repetitions of the same type of stimulus within the experiment, while the other noise sources do not obey this determinacy property. A detailed derivation of the technique is given, together with an analysis of the results of its application to a real high-density EEG data set. The interpretation of the results and the performance of the proposed method under conditions in which the basic assumptions are violated (e.g., when the problem is underdetermined) are also discussed. Moreover, we study how the number of channels and trials used by the method influences the effectiveness of ERP/noise subspace separation. In addition, we also explore the impact of different data resampling strategies on the performance of the considered algorithm. The results can help in determining the optimal parameters of the equipment/methods used to elicit and reliably estimate ERPs.
EEGNET: An Open Source Tool for Analyzing and Visualizing M/EEG Connectome.
Hassan, Mahmoud; Shamas, Mohamad; Khalil, Mohamad; El Falou, Wassim; Wendling, Fabrice
2015-01-01
The brain is a large-scale complex network often referred to as the "connectome". Exploring the dynamic behavior of the connectome is a challenging issue, as both excellent time and space resolution is required. In this context Magneto/Electroencephalography (M/EEG) are effective neuroimaging techniques allowing for analysis of the dynamics of functional brain networks at the scalp level and/or at the level of reconstructed sources. However, a tool that can cover all the processing steps of identifying brain networks from M/EEG data is still missing. In this paper, we report a novel software package, called EEGNET, running under MATLAB (MathWorks, Inc.), that allows for analysis and visualization of functional brain networks from M/EEG recordings. EEGNET is developed to analyze networks either at the level of scalp electrodes or at the level of reconstructed cortical sources. It includes i) basic steps in preprocessing M/EEG signals, ii) the solution of the inverse problem to localize/reconstruct the cortical sources, iii) the computation of functional connectivity among signals collected at surface electrodes and/or time courses of reconstructed sources, and iv) the computation of network measures based on graph theory analysis. EEGNET is the only tool that combines M/EEG functional connectivity analysis with the computation of network measures derived from graph theory. The first version of EEGNET is easy to use, flexible and user friendly. EEGNET is an open source tool and can be freely downloaded from this webpage: https://sites.google.com/site/eegnetworks/.
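To illustrate the kind of pipeline EEGNET automates, the sketch below computes a coherence-based connectivity matrix at the scalp level and two standard graph measures; the channel count, frequency band, threshold, and use of scipy/networkx are assumptions for illustration and say nothing about EEGNET's internals.

```python
# Sketch: scalp-level functional connectivity (magnitude-squared coherence) plus
# two graph-theory measures on the thresholded network.
import numpy as np
import networkx as nx
from scipy.signal import coherence

rng = np.random.default_rng(3)
fs, n_ch, n_samp = 256, 19, 256 * 30              # assumed 19-channel, 30-s recording
eeg = rng.standard_normal((n_ch, n_samp))         # placeholder for preprocessed EEG

# Mean coherence in an assumed alpha band (8-12 Hz) for every channel pair.
conn = np.zeros((n_ch, n_ch))
for i in range(n_ch):
    for j in range(i + 1, n_ch):
        f, cxy = coherence(eeg[i], eeg[j], fs=fs, nperseg=fs)
        band = (f >= 8) & (f <= 12)
        conn[i, j] = conn[j, i] = cxy[band].mean()

# Keep the strongest 20% of edges and compute two standard network measures.
thresh = np.percentile(conn[np.triu_indices(n_ch, 1)], 80)
G = nx.from_numpy_array((conn > thresh).astype(int))
print("mean clustering coefficient:", round(nx.average_clustering(G), 3))
print("global efficiency:", round(nx.global_efficiency(G), 3))
```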
Localization of extended brain sources from EEG/MEG: the ExSo-MUSIC approach.
Birot, Gwénaël; Albera, Laurent; Wendling, Fabrice; Merlet, Isabelle
2011-05-01
We propose a new MUSIC-like method, called 2q-ExSo-MUSIC (q ≥ 1). This method is an extension of the 2q-MUSIC (q ≥ 1) approach for solving the EEG/MEG inverse problem when spatially-extended neocortical sources ("ExSo") are considered. It introduces a novel ExSo-MUSIC principle. The novelty is two-fold: i) a parameterization of the spatial source distribution that leads to an appropriate metric in the context of distributed brain sources, and ii) an original, efficient and low-cost way of optimizing this metric. In 2q-ExSo-MUSIC, the possible use of higher-order statistics (q ≥ 2) offers better robustness with respect to Gaussian noise of unknown spatial coherence and to modeling errors. As a result, we reduced the penalizing effects of both the background cerebral activity, which can be seen as Gaussian, spatially correlated noise, and the modeling errors induced by the non-exact resolution of the forward problem. Computer results on simulated EEG signals obtained with physiologically-relevant models of both the sources and the volume conductor show a markedly increased performance of our 2q-ExSo-MUSIC method as compared to the classical 2q-MUSIC algorithms. Copyright © 2011 Elsevier Inc. All rights reserved.
L1 norm based common spatial patterns decomposition for scalp EEG BCI.
Li, Peiyang; Xu, Peng; Zhang, Rui; Guo, Lanjin; Yao, Dezhong
2013-08-06
Brain-computer interfacing (BCI) is one of the most popular branches of biomedical engineering. It aims at constructing a communication channel between disabled persons and auxiliary equipment in order to improve patients' quality of life. In motor imagery (MI) based BCI, one of the most popular feature extraction strategies is Common Spatial Patterns (CSP). In practical BCI settings, scalp EEG inevitably contains outliers and artifacts introduced by ocular activity, head motion or loose electrode contact. Because outliers and artifacts usually have large amplitude, when CSP is solved under the L2 norm their effect is exaggerated by the squaring of large values, which ultimately degrades MI-based BCI performance. The L1 norm, in contrast, attenuates outlier effects, as has been shown in other application fields such as the EEG inverse problem and face recognition. In this paper, we present a new CSP implementation that uses the L1 norm technique, instead of the L2 norm, to solve the eigenproblem for spatial filter estimation, with the aim of improving the robustness of CSP to outliers. To evaluate the performance of our method, we applied it, as well as the standard CSP and the regularized CSP with Tikhonov regularization (TR-CSP), to both the peer BCI dataset with simulated outliers and the dataset from the MI BCI system developed in our group. The McNemar test is used to investigate whether the differences among the three CSP variants are statistically significant. The results on both the simulated and real BCI datasets consistently reveal that the proposed method achieves much higher classification accuracies than the conventional CSP and TR-CSP. By incorporating L1-norm-based eigendecomposition into Common Spatial Patterns, the proposed approach can effectively improve the robustness of a BCI system to EEG outliers and is thus promising for practical MI BCI applications, where outliers are inevitably introduced into EEG recordings.
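For context, the sketch below is the standard L2-norm CSP baseline that the proposed L1 variant modifies: the spatial filters are generalized eigenvectors of the two class covariance matrices. The trial shapes and filter count are assumptions; the paper's L1-norm eigendecomposition itself is not reproduced here.

```python
# Sketch: standard (L2-norm) CSP via a generalized eigendecomposition.
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_filters=6):
    """trials_*: arrays of shape (n_trials, n_channels, n_samples)."""
    def mean_cov(trials):
        return np.mean([np.cov(t) for t in trials], axis=0)   # channel covariance per class
    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    # Solve Ca w = lambda (Ca + Cb) w; extreme eigenvalues give the most discriminative filters.
    vals, vecs = eigh(Ca, Ca + Cb)
    order = np.argsort(vals)
    pick = np.r_[order[: n_filters // 2], order[-(n_filters // 2):]]
    return vecs[:, pick].T                                     # (n_filters, n_channels)

rng = np.random.default_rng(4)
trials_a = rng.standard_normal((20, 22, 500))                  # hypothetical MI class A
trials_b = rng.standard_normal((20, 22, 500))                  # hypothetical MI class B
W = csp_filters(trials_a, trials_b)
features = np.log(np.var(W @ trials_a[0], axis=1))             # classic log-variance features
print(W.shape, features.shape)
```

Because the class covariances are built from squared amplitudes, a single large artifact can dominate Ca or Cb, which is exactly the sensitivity the L1-norm formulation is designed to reduce.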
Math anxiety: Brain cortical network changes in anticipation of doing mathematics.
Klados, Manousos A; Pandria, Niki; Micheloyannis, Sifis; Margulies, Daniel; Bamidis, Panagiotis D
2017-12-01
Following our previous work on the involvement of math anxiety (MA) in math-oriented tasks, this study explores the differences in cerebral network topology between self-reported low math-anxious (LMA) and high math-anxious (HMA) individuals during the anticipation phase prior to a mathematics-related experiment. Multichannel EEG recordings were used, and the inverse problem was solved on a generic head model in order to obtain the cortical signals. The cortical networks were computed for each band separately, using the magnitude squared coherence metric. The main graph-theoretical parameters showed differences in segregation and integration in almost all EEG bands for HMAs compared with LMAs, indicative of a strong influence of anticipatory anxiety prior to mathematical performance. Copyright © 2017 Elsevier B.V. All rights reserved.
Lin, Pei-Feng; Lo, Men-Tzung; Tsao, Jenho; Chang, Yi-Chung; Lin, Chen; Ho, Yi-Lwun
2014-01-01
The heart begins to beat before the brain is formed. Whether conventional hierarchical central commands sent by the brain to the heart alone explain all the interplay between these two organs should be reconsidered. Here, we demonstrate correlations between the signal complexity of brain and cardiac activity. Eighty-seven geriatric outpatients with healthy hearts and varied cognitive abilities each provided a 24-hour electrocardiogram (ECG) and a 19-channel eyes-closed routine electroencephalogram (EEG). Multiscale entropy (MSE) analysis was applied to three epochs (resting-awake state, photic stimulation at fast frequencies (fast-PS), and photic stimulation at slow frequencies (slow-PS)) of EEG in the 1–58 Hz frequency range, and to three RR interval (RRI) time series (awake state, sleep, and the series concomitant with the EEG) for each subject. The low-to-high frequency power (LF/HF) ratio of the RRI was calculated to represent sympatho-vagal balance. After Bonferroni correction, we found that: (a) the summed MSE value on coarse scales of the awake RRI (scales 11–20, RRI-MSE-coarse) was inversely correlated with the summed MSE value on coarse scales of the resting-awake EEG (scales 6–20, EEG-MSE-coarse) at Fp2, C4, T6 and T4; (b) the awake RRI-MSE-coarse was inversely correlated with the fast-PS EEG-MSE-coarse at O1, O2 and C4; (c) the sleep RRI-MSE-coarse was inversely correlated with the slow-PS EEG-MSE-coarse at Fp2; (d) the RRI-MSE-coarse and the LF/HF ratio of the awake RRI were positively correlated with each other; (e) the EEG-MSE-coarse at F8 was proportional to the cognitive test score; (f) the results conform to the cholinergic hypothesis, which states that cognitive impairment causes a reduction in vagal cardiac modulation; (g) fast-PS significantly lowered the EEG-MSE-coarse globally. Whether these heart-brain correlations could be fully explained by the central autonomic network is unknown and needs further exploration. PMID:24498375
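As a reference for the MSE quantities used above (e.g., RRI-MSE-coarse summed over scales 11-20), the sketch below coarse-grains a series and sums sample entropy over the chosen scales; the SampEn parameters (m = 2, r = 0.15 SD) and the synthetic RR series are assumptions, not the study's exact settings.

```python
# Sketch: multiscale entropy (MSE) summed over coarse scales of an RR-interval series.
import numpy as np

def sample_entropy(x, m=2, r_factor=0.15):
    """SampEn(m, r) with tolerance r = r_factor * std(x); O(N^2), fine for a sketch."""
    x = np.asarray(x, dtype=float)
    r = r_factor * np.std(x)
    def match_pairs(mm):
        templ = np.array([x[i:i + mm] for i in range(x.size - mm)])
        d = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=2)  # Chebyshev distance
        return (np.sum(d <= r) - templ.shape[0]) / 2.0        # pairs i < j within tolerance
    B, A = match_pairs(m), match_pairs(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.nan

def mse_coarse(x, scales):
    """Coarse-grain by non-overlapping averaging, then sum SampEn over the given scales."""
    total = 0.0
    for tau in scales:
        n = (len(x) // tau) * tau
        coarse = np.asarray(x[:n]).reshape(-1, tau).mean(axis=1)
        total += sample_entropy(coarse)
    return total

rng = np.random.default_rng(5)
rri = 800 + 50 * rng.standard_normal(3000)                    # synthetic RR intervals (ms)
print("RRI-MSE-coarse (scales 11-20):", round(mse_coarse(rri, range(11, 21)), 2))
```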
Infant phantom head circuit board for EEG head phantom and pediatric brain simulation
NASA Astrophysics Data System (ADS)
Almohsen, Safa
The infant's skull differs from the adult skull because of the characteristic features of the human skull during early development. The fontanels and the conductivity of the infant skull influence the surface currents, generated by neurons, which underlie electroencephalography (EEG) signals. An electric circuit was built to power a set of simulated neural sources for an infant brain activity simulator. In addition, three phantom tissues were created for the simulator using saline solution plus agarose gel to mimic the conductivity of each layer in the head (scalp, skull, and brain). The conductivity measurement was accomplished with two different techniques: a four-point measurement technique and a conductivity meter. Test results showed that the optimized phantom tissues had appropriate conductivities to simulate each tissue layer for fabricating a physical head phantom. The next step is to test the electrical neural circuit with the physical model to generate simulated EEG data and to use these data to solve both the forward and the inverse problems for the purpose of localizing the neural sources in the head phantom.
Gastroesophageal Reflux in Neurologically Impaired Children: What Are the Risk Factors?
Kim, Seung; Koh, Hong; Lee, Joon Soo
2017-03-15
Neurologically impaired patients frequently suffer from gastrointestinal tract problems, such as gastroesophageal reflux disease (GERD). In this study, we aimed to define the risk factors for GERD in neurologically impaired children. From May 2006 to March 2014, 101 neurologically impaired children who received 24-hour esophageal pH monitoring at Severance Children's Hospital were enrolled in the study. The esophageal pH findings and the clinical characteristics of the patients were analyzed. The reflux index was higher in patients with abnormal electroencephalography (EEG) results than in those with normal EEG results (p=0.027). Mitochondrial disease was associated with a higher reflux index than were epileptic disorders or cerebral palsy (p=0.009). Patient gender, feeding method, scoliosis, tracheostomy, and baclofen use were not associated with statistically significant differences in reflux index. Age of onset of neurological impairment was inversely correlated with the DeMeester score and the reflux index. Age at the time of examination, duration of the disease, and the number of antiepileptic drugs were not correlated with GER severity. Early-onset neurological impairment, abnormal EEG results, and mitochondrial disease are risk factors for severe GERD.
A variational Bayes spatiotemporal model for electromagnetic brain mapping.
Nathoo, F S; Babul, A; Moiseev, A; Virji-Babul, N; Beg, M F
2014-03-01
In this article, we present a new variational Bayes approach for solving the neuroelectromagnetic inverse problem arising in studies involving electroencephalography (EEG) and magnetoencephalography (MEG). This high-dimensional spatiotemporal estimation problem involves the recovery of time-varying neural activity at a large number of locations within the brain, from electromagnetic signals recorded at a relatively small number of external locations on or near the scalp. Framing this problem within the context of spatial variable selection for an underdetermined functional linear model, we propose a spatial mixture formulation where the profile of electrical activity within the brain is represented through location-specific spike-and-slab priors based on a spatial logistic specification. The prior specification accommodates spatial clustering in brain activation, while also allowing for the inclusion of auxiliary information derived from alternative imaging modalities, such as functional magnetic resonance imaging (fMRI). We develop a variational Bayes approach for computing estimates of neural source activity, and incorporate a nonparametric bootstrap for interval estimation. The proposed methodology is compared with several alternative approaches through simulation studies, and is applied to the analysis of a multimodal neuroimaging study examining the neural response to face perception using EEG, MEG, and fMRI. © 2013, The International Biometric Society.
The inverse problem in electroencephalography using the bidomain model of electrical activity.
Lopez Rincon, Alejandro; Shimoda, Shingo
2016-12-01
Acquiring information about the distribution of electrical sources in the brain from electroencephalography (EEG) data remains a significant challenge. An accurate solution would provide an understanding of the inner mechanisms of the electrical activity in the brain and information about damaged tissue. In this paper, we present a methodology for reconstructing brain electrical activity from EEG data by using the bidomain formulation. The bidomain model considers continuous active neural tissue coupled with a nonlinear cell model. Using this technique, we aim to find the brain sources that give rise to the scalp potential recorded by EEG measurements, taking a non-static reconstruction into account. We simulate electrical sources in the brain volume and compare the reconstruction to the minimum norm estimate (MNE) and low-resolution electromagnetic tomography (LORETA) results. Then, with the EEG dataset from the EEG Motor Movement/Imagery Database of PhysioBank, we identify the reaction to visual stimuli by calculating the time between stimulus presentation and the spike in electrical activity. Finally, we compare the activation in the brain with the registered activation using the LinkRbrain platform. Our methodology shows an improved reconstruction of the electrical activity and source localization in comparison with MNE and LORETA. For the Motor Movement/Imagery Database, the reconstruction is consistent with the expected position and time delay generated by the stimuli. Thus, this methodology is a suitable option for continuously reconstructing brain potentials. Copyright © 2016 The Author(s). Published by Elsevier B.V. All rights reserved.
Erem, B; Hyde, D E; Peters, J M; Duffy, F H; Brooks, D H; Warfield, S K
2015-04-01
The dynamical structure of the brain's electrical signals contains valuable information about its physiology. Here we combine techniques for nonlinear dynamical analysis and manifold identification to reveal complex and recurrent dynamics in interictal epileptiform discharges (IEDs). Our results suggest that recurrent IEDs exhibit some consistent dynamics, which may only last briefly, and so individual IED dynamics may need to be considered in order to understand their genesis. This could potentially serve to constrain the dynamics of the inverse source localization problem.
Arjunan, Sridhar P; Kumar, Dinesh K; Jung, Tzyy-Ping
2010-01-01
Changes in alertness levels can have dire consequences for people operating and controlling motorized equipment. Past research has shown a relationship between the electroencephalogram (EEG) and a person's alertness. This work reports a fractal analysis of the EEG and the estimation of an individual's alertness level based on changes in the maximum fractal length (MFL) of the EEG. The results indicate that the MFL of only 2 EEG channels can be used to identify a loss of alertness, with a mean (inverse) correlation coefficient of 0.82. The study also reports that, using changes in the MFL of the EEG, changes in a person's alertness level were estimated with a mean correlation coefficient of 0.69.
Forward Field Computation with OpenMEEG
Gramfort, Alexandre; Papadopoulo, Théodore; Olivi, Emmanuel; Clerc, Maureen
2011-01-01
To recover the sources giving rise to electro- and magnetoencephalography in individual measurements, realistic physiological modeling is required, and accurate numerical solutions must be computed. We present OpenMEEG, which solves the electromagnetic forward problem in the quasistatic regime, for head models with piecewise constant conductivity. The core of OpenMEEG consists of the symmetric Boundary Element Method, which is based on an extended Green Representation theorem. OpenMEEG is able to provide lead fields for four different electromagnetic forward problems: Electroencephalography (EEG), Magnetoencephalography (MEG), Electrical Impedance Tomography (EIT), and intracranial electric potentials (IPs). OpenMEEG is open source and multiplatform. It can be used from Python and Matlab in conjunction with toolboxes that solve the inverse problem; its integration within FieldTrip has been operational since release 2.0. PMID:21437231
A simple method for EEG guided transcranial electrical stimulation without models.
Cancelli, Andrea; Cottone, Carlo; Tecchio, Franca; Truong, Dennis Q; Dmochowski, Jacek; Bikson, Marom
2016-06-01
There is longstanding interest in using EEG measurements to inform transcranial Electrical Stimulation (tES), but adoption is lacking because users need a simple and adaptable recipe. The conventional approach is to use anatomical head models for both source localization (the EEG inverse problem) and current flow modeling (the tES forward model), but this approach is computationally demanding, requires an anatomical MRI, and makes strict assumptions about the target brain regions. We evaluate techniques whereby the tES dose is derived from the EEG without the need for an anatomical head model, target assumptions, difficult case-by-case conjecture, or many stimulation electrodes. We developed a simple two-step approach to EEG-guided tES that, based on the topography of the EEG, (1) selects the locations to be used for stimulation and (2) determines the current applied to each electrode. Each step is performed based solely on the EEG, with no need for head models or source localization. Cortical dipoles represent idealized brain targets. The EEG-guided tES strategies are verified using a finite element method simulation of the EEG generated by a dipole, oriented either tangential or radial to the scalp surface, and then simulating the tES-generated electric field produced by each model-free technique. These model-free approaches are compared to a 'gold standard' numerically optimized dose of tES that assumes perfect knowledge of the dipole location and head anatomy. We vary the number of electrodes from a few to over three hundred, with focality or intensity as the optimization criterion. The model-free approaches evaluated include (1) voltage-to-voltage, (2) voltage-to-current, (3) Laplacian, and two ad hoc techniques, (4) dipole sink-to-sink and (5) sink-to-concentric. Our results demonstrate that simple ad hoc approaches can achieve reasonable targeting for the case of a cortical dipole, remarkably with only 2-8 electrodes and no need for a model of the head. Our approach is verified directly only for a theoretically localized source, but may potentially be applied to an arbitrary EEG topography. For its simplicity and linearity, our recipe for model-free EEG-guided tES lends itself to broad adoption and can be applied to static (tDCS), time-variant (e.g., tACS, tRNS, tPCS), or closed-loop tES.
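The exact definitions of the five montage rules are in the paper; purely to illustrate the general idea of mapping an EEG topography directly to stimulation currents, the sketch below implements a simple voltage-to-current style rule with made-up electrode counts and scaling, and should not be read as the authors' recipe.

```python
# Sketch: deriving a model-free tES montage directly from an EEG topography.
import numpy as np

def eeg_to_tes_currents(topography, n_stim=4, total_current_ma=2.0):
    """Pick the electrodes with the largest |voltage| and inject currents proportional
    to the recorded voltages, constrained to sum to zero (current conservation)."""
    v = np.asarray(topography, dtype=float)
    pick = np.argsort(np.abs(v))[-n_stim:]                    # stimulation sites = strongest EEG sites
    currents = v[pick] - v[pick].mean()                       # zero-sum injected currents
    currents *= total_current_ma / np.sum(np.abs(currents))   # respect a total current budget
    return dict(zip(pick.tolist(), currents.round(4).tolist()))

rng = np.random.default_rng(6)
topo = rng.standard_normal(64)                                # hypothetical 64-channel EEG topography
montage = eeg_to_tes_currents(topo)
print(montage, "net current:", round(sum(montage.values()), 9))
```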
Localization from near-source quasi-static electromagnetic fields
NASA Astrophysics Data System (ADS)
Mosher, J. C.
1993-09-01
A wide range of research has been published on the problem of estimating the parameters of electromagnetic and acoustical sources from signals measured at an array of sensors. In the quasi-static electromagnetic cases examined here, the signal variation from a point source is relatively slow with respect to the signal propagation and the spacing of the array of sensors. As such, the location of the point sources can only be determined from the spatial diversity of the received signal across the array. The inverse source localization problem is complicated by unknown model order and strong local minima. The nonlinear optimization problem is posed for solving for the parameters of the quasi-static source model. The transient nature of the sources can be exploited to allow subspace approaches to separate out the signal portion of the spatial correlation matrix. Decomposition techniques are examined for improved processing, and an adaptation of MUltiple SIgnal Classification (MUSIC) is presented for solving the source localization problem. Recent results on calculating the Cramer-Rao error lower bounds are extended to the multidimensional problem considered here. This thesis focuses on the problem of source localization in magnetoencephalography (MEG), with a secondary application to thunderstorm source localization. Comparisons are also made between MEG and its electrical equivalent, electroencephalography (EEG). The error lower bounds are examined in detail for several MEG and EEG configurations, as well as for localizing thunderstorm cells over Cape Canaveral and Kennedy Space Center. Time-eigenspectrum is introduced as a parsing technique for improving the performance of the optimization problem.
Electroencephalography in ellipsoidal geometry with fourth-order harmonics.
Alcocer-Sosa, M; Gutierrez, D
2016-08-01
We present a solution to the electroencephalographic (EEG) forward problem of computing the scalp electric potentials for the case in which the head's geometry is modeled using a four-shell ellipsoidal geometry and the brain sources are modeled with an equivalent current dipole (ECD). The proposed solution includes terms up to the fourth-order ellipsoidal harmonics, and we compare this new approximation against those that consider only up to second- and third-order harmonics. Our comparisons use as reference a solution in which a tessellated volume approximates the head and the forward problem is solved with the boundary element method (BEM). We also assess the solution to the inverse problem of estimating the magnitude of an ECD through the different harmonic approximations. Our results show that the fourth-order solution provides a better estimate of the ECD than lower-order ones.
Wang, Gang; Teng, Chaolin; Li, Kuo; Zhang, Zhonglin; Yan, Xiangguo
2016-09-01
Recorded electroencephalography (EEG) signals are usually contaminated by electrooculography (EOG) artifacts. In this paper, by using independent component analysis (ICA) and multivariate empirical mode decomposition (MEMD), an ICA-based MEMD method was proposed to remove EOG artifacts (EOAs) from multichannel EEG signals. First, the EEG signals were decomposed by the MEMD into multiple multivariate intrinsic mode functions (MIMFs). The EOG-related components were then extracted by reconstructing the MIMFs corresponding to EOAs. After performing ICA on the EOG-related signals, the EOG-linked independent components were identified and rejected. Finally, the clean EEG signals were reconstructed by applying the inverse transforms of ICA and MEMD. The results on simulated and real data suggested that the proposed method could successfully eliminate EOAs from EEG signals and preserve useful EEG information with little loss. Compared with other existing techniques, the proposed method achieved a much greater increase in signal-to-noise ratio and decrease in mean square error after removing EOAs.
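For readers who want a concrete starting point, the sketch below performs a plain ICA-based ocular-artifact rejection with an EOG reference channel used to flag components; the MEMD preprocessing that distinguishes the proposed method is omitted, and the data, correlation threshold, and use of scikit-learn's FastICA are assumptions.

```python
# Sketch: ICA-based removal of ocular components from multichannel EEG
# (simplified: no MEMD step; an EOG reference channel flags the bad components).
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(7)
n_ch, fs, n_samp = 19, 256, 256 * 20
eog = np.zeros(n_samp)
eog[::fs] = 50.0                                        # crude once-per-second "blinks"
eeg = rng.standard_normal((n_ch, n_samp))
eeg += np.outer(np.linspace(1.0, 0.1, n_ch), eog)       # blinks projected onto the channels

ica = FastICA(n_components=n_ch, random_state=0)
sources = ica.fit_transform(eeg.T)                      # (n_samples, n_components)

# Flag components strongly correlated with the EOG reference and zero them out.
corr = np.array([abs(np.corrcoef(sources[:, k], eog)[0, 1]) for k in range(n_ch)])
sources[:, corr > 0.6] = 0.0
eeg_clean = ica.inverse_transform(sources).T            # back to channels x samples

print("rejected components:", np.flatnonzero(corr > 0.6))
```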
EEG Estimates of Cognitive Workload and Engagement Predict Math Problem Solving Outcomes
ERIC Educational Resources Information Center
Beal, Carole R.; Galan, Federico Cirett
2012-01-01
In the present study, the authors focused on the use of electroencephalography (EEG) data about cognitive workload and sustained attention to predict math problem solving outcomes. EEG data were recorded as students solved a series of easy and difficult math problems. Sequences of attention and cognitive workload estimates derived from the EEG…
Riera, J; Aubert, E; Iwata, K; Kawashima, R; Wan, X; Ozaki, T
2005-01-01
The elucidation of the complex machinery used by the human brain to segregate and integrate information while performing high cognitive functions is a subject of imminent future consequences. The most significant contributions to date in this field, known as cognitive neuroscience, have been achieved by using innovative neuroimaging techniques, such as electroencephalogram (EEG) and functional magnetic resonance imaging (fMRI), which measure variations in both the time and the space of some interpretable physical magnitudes. Extraordinary maps of cerebral activation involving function-restricted brain areas, as well as graphs of the functional connectivity between them, have been obtained from EEG and fMRI data by solving some spatio-temporal inverse problems, which constitutes a top-down approach. However, in many cases, a natural bridge between these maps/graphs and the causal physiological processes is lacking, leading to some misunderstandings in their interpretation. Recent advances in the comprehension of the underlying physiological mechanisms associated with different cerebral scales have provided researchers with an excellent scenario to develop sophisticated biophysical models that permit an integration of these neuroimage modalities, which must share a common aetiology. This paper proposes a bottom-up approach, involving physiological parameters in a specific mesoscopic dynamic equations system. Further observation equations encapsulating the relationship between the mesostates and the EEG/fMRI data are obtained on the basis of the physical foundations of these techniques. A methodology for the estimation of parameters from fused EEG/fMRI data is also presented. In this context, the concepts of activation and effective connectivity are carefully revised. This new approach permits us to examine and discuss some future prospects for the integration of multimodal neuroimages. PMID:16087446
Irimia, Andrei; Goh, S.-Y. Matthew; Torgerson, Carinna M.; Stein, Nathan R.; Chambers, Micah C.; Vespa, Paul M.; Van Horn, John D.
2013-01-01
Objective To inverse-localize epileptiform cortical electrical activity recorded from severe traumatic brain injury (TBI) patients using electroencephalography (EEG). Methods Three acute TBI cases were imaged using computed tomography (CT) and multimodal magnetic resonance imaging (MRI). Semi-automatic segmentation was performed to partition the complete TBI head into 25 distinct tissue types, including 6 tissue types accounting for pathology. Segmentations were employed to generate a finite element method model of the head, and EEG activity generators were modeled as dipolar currents distributed over the cortical surface. Results We demonstrate anatomically faithful localization of EEG generators responsible for epileptiform discharges in severe TBI. By accounting for injury-related tissue conductivity changes, our work offers the most realistic implementation currently available for the inverse estimation of cortical activity in TBI. Conclusion Whereas standard localization techniques are available for electrical activity mapping in uninjured brains, they are rarely applied to acute TBI. Modern models of TBI-induced pathology can inform the localization of epileptogenic foci, improve surgical efficacy, contribute to the improvement of critical care monitoring and provide guidance for patient-tailored treatment. With approaches such as this, neurosurgeons and neurologists can study brain activity in acute TBI and obtain insights regarding injury effects upon brain metabolism and clinical outcome. PMID:24011495
Papadelis, Christos; Tamilia, Eleonora; Stufflebeam, Steven; Grant, Patricia E.; Madsen, Joseph R.; Pearl, Phillip L.; Tanaka, Naoaki
2016-01-01
Crucial to the success of epilepsy surgery is the availability of a robust biomarker that identifies the Epileptogenic Zone (EZ). High Frequency Oscillations (HFOs) have emerged as potential presurgical biomarkers for the identification of the EZ, in addition to Interictal Epileptiform Discharges (IEDs) and ictal activity. Although they are promising for localizing the EZ, they are not yet suited for the diagnosis or monitoring of epilepsy in clinical practice. Primary barriers remain: the lack of a formal and global definition for HFOs; the consequent heterogeneity of methodological approaches used for their study; and the practical difficulty of detecting and localizing them noninvasively from scalp recordings. Here, we present a methodology for the recording, detection, and localization of interictal HFOs from pediatric patients with refractory epilepsy. We report representative data of HFOs detected noninvasively from interictal scalp EEG and MEG in two children undergoing surgery. The underlying generators of the HFOs were localized by solving the inverse problem, and their localization was compared to the Seizure Onset Zone (SOZ) as defined by the epileptologists. For both patients, Interictal Epileptiform Discharges (IEDs) and HFOs were localized with source imaging to concordant locations. For one patient, intracranial EEG (iEEG) data were also available; for this patient, we found that the HFO localization was concordant between the noninvasive and invasive methods. The comparison with iEEG served to validate these findings. To the best of our knowledge, this is the first study to present source localization of scalp HFOs from simultaneous EEG and MEG recordings and to compare the results with invasive recordings. These findings suggest that HFOs can be reliably detected and localized noninvasively with scalp EEG and MEG. We conclude that the noninvasive localization of interictal HFOs could significantly improve the presurgical evaluation of pediatric patients with epilepsy. PMID:28060325
Multireference adaptive noise canceling applied to the EEG.
James, C J; Hagan, M T; Jones, R D; Bones, P J; Carroll, G J
1997-08-01
The technique of multireference adaptive noise canceling (MRANC) is applied to enhance transient nonstationarities in the electroencephalogram (EEG), with the adaptation implemented by means of a multilayer-perceptron artificial neural network (ANN). The method was applied to recorded EEG segments, and its performance on documented nonstationarities was assessed. The results show that the neural network (nonlinear) implementation gives an improvement in performance (i.e., in the signal-to-noise ratio (SNR) of the nonstationarities) compared to a linear implementation of MRANC; in both cases an improvement in SNR was obtained. The advantage of the spatial filtering aspect of MRANC is highlighted when its performance is compared to that of inverse autoregressive filtering of the EEG, a purely temporal filter.
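As a point of reference for the linear case mentioned above, the sketch below is a single-reference LMS adaptive noise canceller; MRANC extends this to multiple reference channels and replaces the linear combiner with a multilayer perceptron, so the filter length, step size, and data here are illustrative assumptions only.

```python
# Sketch: single-reference linear adaptive noise canceller (LMS).
import numpy as np

def lms_cancel(primary, reference, n_taps=8, mu=0.01):
    """Subtract from `primary` the part predictable from `reference`; return the residual."""
    w = np.zeros(n_taps)
    out = np.zeros_like(primary)
    for n in range(n_taps, len(primary)):
        x = reference[n - n_taps:n][::-1]        # most recent reference samples
        noise_est = w @ x                        # estimate of the correlated interference
        e = primary[n] - noise_est               # error = enhanced signal sample
        w += 2.0 * mu * e * x                    # LMS weight update
        out[n] = e
    return out

rng = np.random.default_rng(8)
n = 5000
noise_src = rng.standard_normal(n)
transient = np.zeros(n); transient[2500:2520] = 3.0          # the nonstationarity of interest
primary = transient + np.convolve(noise_src, [0.6, 0.3, 0.1], mode="same")
enhanced = lms_cancel(primary, noise_src)
print("variance before / after:", round(primary.var(), 3), round(enhanced.var(), 3))
```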
Jang, Kuk-In; Shim, Miseon; Lee, Sang Min; Huh, Hyu Jung; Huh, Seung; Joo, Ji-Young; Lee, Seung-Hwan; Chae, Jeong-Ho
2017-11-01
The Sewol ferry capsizing accident on South Korea's southern coast resulted in the death of 304 people, and serious bereavement problems for their families. Electroencephalography (EEG) beta frequency is associated with psychiatric symptoms, such as insomnia. The aim of this study was to investigate the relation between frontal beta power, psychological symptoms, and insomnia in the bereaved families. Eighty-four family members of the Sewol ferry victims (32 men and 52 women) were recruited and their EEG was compared with that of 25 (13 men and 12 women) healthy controls. A two-channel EEG device was used to measure cortical activity in the frontal lobe. Symptom severity of insomnia, post-traumatic stress disorder, complicated grief, and anxiety were evaluated. The bereaved families showed a higher frontal beta power than healthy controls. Subgroup analysis showed that frontal beta power was lower in the individuals with severe insomnia than in those with normal sleep. There was a significant inverse correlation between frontal beta power and insomnia symptom in the bereaved families. This study suggests that increased beta power, reflecting the psychopathology in the bereaved families of the Sewol ferry disaster, may be a compensatory mechanism that follows complex trauma. Frontal beta power could be a potential marker indicating the severity of sleep disturbances. Our results suggest that sleep disturbance is an important symptom in family members of the Sewol ferry disaster's victims, which may be screened by EEG beta power. © 2017 The Authors. Psychiatry and Clinical Neurosciences © 2017 Japanese Society of Psychiatry and Neurology.
Sparsity enables estimation of both subcortical and cortical activity from MEG and EEG
Krishnaswamy, Pavitra; Obregon-Henao, Gabriel; Ahveninen, Jyrki; Khan, Sheraz; Iglesias, Juan Eugenio; Hämäläinen, Matti S.; Purdon, Patrick L.
2017-01-01
Subcortical structures play a critical role in brain function. However, options for assessing electrophysiological activity in these structures are limited. Electromagnetic fields generated by neuronal activity in subcortical structures can be recorded noninvasively, using magnetoencephalography (MEG) and electroencephalography (EEG). However, these subcortical signals are much weaker than those generated by cortical activity. In addition, we show here that it is difficult to resolve subcortical sources because distributed cortical activity can explain the MEG and EEG patterns generated by deep sources. We then demonstrate that if the cortical activity is spatially sparse, both cortical and subcortical sources can be resolved with M/EEG. Building on this insight, we develop a hierarchical sparse inverse solution for M/EEG. We assess the performance of this algorithm on realistic simulations and auditory evoked response data, and show that thalamic and brainstem sources can be correctly estimated in the presence of cortical activity. Our work provides alternative perspectives and tools for characterizing electrophysiological activity in subcortical structures in the human brain. PMID:29138310
NASA Astrophysics Data System (ADS)
Vergallo, P.; Lay-Ekuakille, A.
2013-08-01
Brain activity can be recorded by means of EEG (electroencephalogram) electrodes placed on the scalp of the patient. The EEG reflects the activity of groups of neurons located in the head, and the fundamental problem in neurophysiology is the identification of the sources responsible for this activity, which is especially important when a seizure occurs and must be localized. Studies conducted to formalize the relationship between the electromagnetic activity in the head and the recorded external field make it possible to characterize patterns of brain activity. The inverse problem, that is, determining the underlying activity given the field sampled at the different electrodes, is more difficult, because the problem may not have a unique solution, or the search for the solution is hampered by a low spatial resolution that may not allow activities involving sources close to each other to be distinguished. Thus, sources of interest may be obscured or go undetected, and well-known source localization methods such as MUSIC (MUltiple SIgnal Classification) can fail. Many advanced source localization techniques achieve better resolution by exploiting sparsity: if the number of sources is small, the neural power as a function of location is sparse. In this work, a solution based on the spatial sparsity of the field signal is presented and analyzed to improve the MUSIC method. For this purpose, a priori information about the sparsity of the signal must be imposed. The problem is formulated and solved using a regularization method such as Tikhonov regularization, which calculates a solution that is the best compromise between two cost functions to be minimized, one related to the fit of the data and the other to maintaining the sparsity of the signal. First, the method is tested on simulated EEG signals obtained by solving the forward problem. For the head model and brain sources considered, the result is a significant improvement over the classical MUSIC method, with a small margin of uncertainty about the exact location of the sources: the spatial sparsity constraints on the signal field concentrate power in the directions of the active sources, and consequently the positions of the sources within the considered volume conductor can be calculated. The method is then tested on real EEG data. The result is in accordance with the clinical report, even if improvements are necessary to obtain more accurate estimates of the positions of the sources.
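To make the regularization step concrete, the sketch below solves the purely quadratic (Tikhonov) part of such a formulation in closed form; the lead-field matrix and noise level are synthetic assumptions, and the sparsity-promoting term described in the abstract is deliberately omitted here.

```python
# Sketch: Tikhonov-regularized EEG inverse solution,
# minimizing ||A x - b||^2 + lam * ||x||^2 over source amplitudes x.
import numpy as np

rng = np.random.default_rng(9)
n_electrodes, n_sources = 32, 500
A = rng.standard_normal((n_electrodes, n_sources))     # hypothetical lead-field matrix
x_true = np.zeros(n_sources)
x_true[[40, 300]] = 1.0                                # two active sources
b = A @ x_true + 0.01 * rng.standard_normal(n_electrodes)

lam = 1.0
# Closed form x = A^T (A A^T + lam I)^{-1} b, efficient when sources >> electrodes.
x_hat = A.T @ np.linalg.solve(A @ A.T + lam * np.eye(n_electrodes), b)
print("largest reconstructed sources:", np.argsort(np.abs(x_hat))[-5:])
```

Replacing or augmenting the quadratic penalty with a sparsity term, as the paper does, concentrates the reconstructed power on a few source directions instead of smearing it across the whole source space.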
Electroencephalogram (EEG) (For Parents)
... Most EEGs are done to diagnose and monitor seizure disorders. EEGs also can identify causes of other problems, ... are very safe. If your child has a seizure disorder, your doctor might want to stimulate and record ...
Combining EEG and eye movement recording in free viewing: Pitfalls and possibilities.
Nikolaev, Andrey R; Meghanathan, Radha Nila; van Leeuwen, Cees
2016-08-01
Co-registration of EEG and eye movement has promise for investigating perceptual processes in free viewing conditions, provided certain methodological challenges can be addressed. Most of these arise from the self-paced character of eye movements in free viewing conditions. Successive eye movements occur within short time intervals. Their evoked activity is likely to distort the EEG signal during fixation. Due to the non-uniform distribution of fixation durations, these distortions are systematic, survive across-trials averaging, and can become a source of confounding. We illustrate this problem with effects of sequential eye movements on the evoked potentials and time-frequency components of EEG and propose a solution based on matching of eye movement characteristics between experimental conditions. The proposal leads to a discussion of which eye movement characteristics are to be matched, depending on the EEG activity of interest. We also compare segmentation of EEG into saccade-related epochs relative to saccade and fixation onsets and discuss the problem of baseline selection and its solution. Further recommendations are given for implementing EEG-eye movement co-registration in free viewing conditions. By resolving some of the methodological problems involved, we aim to facilitate the transition from the traditional stimulus-response paradigm to the study of visual perception in more naturalistic conditions. Copyright © 2016 Elsevier Inc. All rights reserved.
Working memory performance inversely predicts spontaneous delta and theta-band scaling relations.
Euler, Matthew J; Wiltshire, Travis J; Niermeyer, Madison A; Butner, Jonathan E
2016-04-15
Electrophysiological studies have strongly implicated theta-band activity in human working memory processes. Concurrently, work on spontaneous, non-task-related oscillations has revealed the presence of long-range temporal correlations (LRTCs) within sub-bands of the ongoing EEG, and has begun to demonstrate their functional significance. However, few studies have yet assessed the relation of LRTCs (also called scaling relations) to individual differences in cognitive abilities. The present study addressed the intersection of these two literatures by investigating the relation of narrow-band EEG scaling relations to individual differences in working memory ability, with a particular focus on the theta band. Fifty-four healthy adults completed standardized assessments of working memory and separate recordings of their spontaneous, non-task-related EEG. Scaling relations were quantified in each of the five classical EEG frequency bands via the estimation of the Hurst exponent obtained from detrended fluctuation analysis. A multilevel modeling framework was used to characterize the relation of working memory performance to scaling relations as a function of general scalp location in Cartesian space. Overall, results indicated an inverse relationship between both delta and theta scaling relations and working memory ability, which was most prominent at posterior sensors, and was independent of either spatial or individual variability in band-specific power. These findings add to the growing literature demonstrating the relevance of neural LRTCs for understanding brain functioning, and support a construct- and state-dependent view of their functional implications. Copyright © 2016 Elsevier B.V. All rights reserved.
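The scaling relations above are estimated with detrended fluctuation analysis (DFA). The sketch below is a standard, minimal DFA estimator of the scaling (Hurst-like) exponent; it is a generic illustration, not the authors' code, and the window sizes in `scales` are arbitrary.

```python
import numpy as np

def dfa_exponent(signal, scales):
    """Detrended fluctuation analysis: return the scaling (Hurst-like) exponent."""
    x = np.cumsum(signal - np.mean(signal))          # integrated profile
    flucts = []
    for s in scales:
        n_win = len(x) // s
        rms = []
        for i in range(n_win):
            seg = x[i * s:(i + 1) * s]
            t = np.arange(s)
            coeffs = np.polyfit(t, seg, 1)           # linear detrend per window
            rms.append(np.sqrt(np.mean((seg - np.polyval(coeffs, t)) ** 2)))
        flucts.append(np.mean(rms))
    # slope of log F(s) versus log s estimates the scaling exponent
    return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

# Example: white noise should give an exponent near 0.5
rng = np.random.default_rng(0)
h = dfa_exponent(rng.standard_normal(10000), scales=[16, 32, 64, 128, 256, 512])
```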
Forward and inverse effects of the complete electrode model in neonatal EEG
Lew, S.; Wolters, C. H.
2016-01-01
This paper investigates finite element method-based modeling in the context of neonatal electroencephalography (EEG). In particular, the focus lies on electrode boundary conditions. We compare the complete electrode model (CEM) with the point electrode model (PEM), which is the current standard in EEG. In the CEM, the voltage experienced by an electrode is modeled more realistically as the integral average of the potential distribution over its contact surface, whereas the PEM relies on a point value. Consequently, the CEM takes into account the subelectrode shunting currents, which are absent in the PEM. In this study, we aim to find out how the electrode voltages predicted by these two models differ when standard-size electrodes are attached to the head of a neonate. Additionally, we study voltages and voltage variation on electrode surfaces with two source locations: 1) next to the C6 electrode and 2) directly under the Fz electrode and the frontal fontanel. A realistic model of a neonatal head, including a skull with fontanels and sutures, is used. Based on the results, the forward simulation differences between the CEM and PEM are in general small, but significant outliers can occur in the vicinity of the electrodes. The CEM can be considered an integral part of the outer head model. The outcome of this study helps in understanding volume conduction in neonatal EEG, since it clarifies the role of advanced skull and electrode modeling in forward and inverse computations. NEW & NOTEWORTHY The effect of the complete electrode model on electroencephalography forward and inverse computations is explored. A realistic neonatal head model, including a skull structure with fontanels and sutures, is used. The electrode and skull modeling differences are analyzed and compared with each other. The results suggest that the complete electrode model can be considered an integral part of the outer head model. To achieve optimal source localization results, accurate electrode modeling might be necessary. PMID:27852731
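The key modeling difference described above, a point reading versus an integral average over the contact surface, can be illustrated with a deliberately simplified sketch. The CEM-style function below only area-averages the potential over the electrode patch and ignores contact impedance and shunting currents, which the full CEM includes; `potential`, `node_idx`, `patch_idx`, and `patch_areas` are assumed arrays taken from some forward solution.

```python
import numpy as np

def electrode_voltage_pem(potential, node_idx):
    """Point electrode model: the potential at a single mesh node."""
    return potential[node_idx]

def electrode_voltage_cem_like(potential, patch_idx, patch_areas):
    """CEM-style reading (simplified): area-weighted average of the potential over
    the mesh nodes covered by the electrode contact surface. The full CEM also
    accounts for contact impedance and sub-electrode shunting currents, which
    this sketch omits."""
    w = patch_areas / patch_areas.sum()
    return float(np.dot(w, potential[patch_idx]))
```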
Kozunov, Vladimir V.; Ossadtchi, Alexei
2015-01-01
Although MEG/EEG signals are highly variable between subjects, they allow characterizing systematic changes of cortical activity in both space and time. Traditionally a two-step procedure is used. The first step is a transition from sensor to source space by means of solving an ill-posed inverse problem for each subject individually. The second is mapping of cortical regions consistently active across subjects. In practice the first step often leads to a set of active cortical regions whose location and timecourses display a great amount of interindividual variability, hindering the subsequent group analysis. We propose Group Analysis Leads to Accuracy (GALA), a solution that combines the two steps into one. GALA takes advantage of individual variations of cortical geometry and sensor locations. It exploits the ensuing variability in the electromagnetic forward model as a source of additional information. We assume that for different subjects functionally identical cortical regions are located in close proximity and partially overlap, and that their timecourses are correlated. This relaxed similarity constraint on the inverse solution can be expressed within a probabilistic framework, allowing for an iterative algorithm solving the inverse problem jointly for all subjects. A systematic simulation study showed that GALA, as compared with the standard min-norm approach, improves the accuracy of true activity recovery, when accuracy is assessed both in terms of the spatial proximity of the estimated and true activations and of the correct specification of the spatial extent of the activated regions. This improvement, obtained without using any noise normalization techniques for either solution, was preserved for a wide range of between-subject variations in both spatial and temporal features of regional activation. The corresponding activation timecourses exhibit significantly higher similarity across subjects. Similar results were obtained for a real MEG dataset of face-specific evoked responses. PMID:25954141
Kovalev, G I; Vorob'ev, V V
2002-01-01
Participation of the non-NMDA glutamate receptor subtype in the formation of the EEG frequency spectrum was studied in wakeful rats upon long-term (10 x 0.2 mg/kg, s.c.) administration of the nootropic dipeptide GVS-111 (noopept, or N-phenylacetyl-L-prolylglycine ethyl ester). The EEGs were measured with electrodes implanted into somatosensory cortex regions and the hippocampus, and via a cannula in the lateral ventricle. The acute reactions (characteristic of nootropics) in the alpha and beta ranges of the EEG exhibited inversion after the 6th injection of noopept and almost completely vanished after the 9th injection. Preliminary introduction of the non-NMDA antagonist GDEE (glutamic acid diethyl ester) at a dose of 1 μmol into the lateral ventricle restored the EEG pattern observed upon the 6th dose of GVS-111. The role of glutamate receptors during prolonged administration of nootropics, as well as possible mechanisms accounting for the difference in the action of GVS-111 and piracetam, are discussed.
Liu, Hesheng; Schimpf, Paul H; Dong, Guoya; Gao, Xiaorong; Yang, Fusheng; Gao, Shangkai
2005-10-01
This paper presents a new algorithm called Standardized Shrinking LORETA-FOCUSS (SSLOFO) for solving the electroencephalogram (EEG) inverse problem. Multiple techniques are combined in a single procedure to robustly reconstruct the underlying source distribution with high spatial resolution. This algorithm uses a recursive process which takes the smooth estimate of sLORETA as initialization and then employs the re-weighted minimum norm introduced by FOCUSS. An important technique called standardization is involved in the recursive process to enhance the localization ability. The algorithm is further improved by automatically adjusting the source space according to the estimate of the previous step, and by the inclusion of temporal information. Simulation studies are carried out on both spherical and realistic head models. The algorithm achieves very good localization ability on noise-free data. It is capable of recovering complex source configurations with arbitrary shapes and can produce high quality images of extended source distributions. We also characterized the performance with noisy data in a realistic head model. An important feature of this algorithm is that the temporal waveforms are clearly reconstructed, even for closely spaced sources. This provides a convenient way to estimate neural dynamics directly from the cortical sources.
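The core recursion of such shrinking/reweighting schemes can be sketched as a regularized, reweighted minimum-norm iteration, shown below. This simplified sketch omits the standardization step, the source-space shrinking, and the temporal information that distinguish SSLOFO; `L` is a lead-field matrix and `v` one measurement vector, both illustrative.

```python
import numpy as np

def reweighted_min_norm(L, v, lam=1e-2, n_iter=10):
    """FOCUSS-style recursion (simplified): start from a smooth regularized
    minimum-norm estimate and iteratively re-weight it so that strong sources
    are sharpened while weak ones shrink toward zero."""
    n_sens = L.shape[0]
    # initial smooth estimate (regularized minimum norm)
    j = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_sens), v)
    for _ in range(n_iter):
        W = np.diag(np.abs(j))                 # weights from the previous estimate
        G = L @ W
        j = W @ G.T @ np.linalg.solve(G @ G.T + lam * np.eye(n_sens), v)
    return j
```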
NASA Astrophysics Data System (ADS)
Fujiwara, Kosuke; Oogane, Mikihiko; Kanno, Akitake; Imada, Masahiro; Jono, Junichi; Terauchi, Takashi; Okuno, Tetsuo; Aritomi, Yuuji; Morikawa, Masahiro; Tsuchida, Masaaki; Nakasato, Nobukazu; Ando, Yasuo
2018-02-01
Magnetocardiography (MCG) and magnetoencephalography (MEG) signals were detected at room temperature using tunnel magneto-resistance (TMR) sensors. TMR sensors developed with low-noise amplifier circuits detected the MCG R wave without averaging, and the QRS complex was clearly observed with averaging at a high signal-to-noise ratio. Spatial mapping of the MCG was also achieved. Averaging of MEG signals triggered by electroencephalography (EEG) clearly observed the phase inversion of the alpha rhythm with a correlation coefficient as high as 0.7 between EEG and MEG.
Chen, Nan; Bell, Martha Ann; Deater-Deckard, Kirby
2016-01-01
Frontal EEG asymmetry is associated with individual differences in positive/negative emotionality and approach/avoidance tendencies. The current study examined the moderating role of maternal resting frontal EEG asymmetry on the link between child behavior problems and maternal harsh parenting, within the context of differing degrees of chronic family stressors (father unemployment, single parenthood, caring for multiple children, and household chaos). The sample included 121 mother-child pairs. Results showed that stressors and frontal EEG asymmetry together moderated the link. Child problem behaviors were moderately associated with greater maternal negativity for mothers with right frontal asymmetry, or mothers who experienced more stressors. However, no association existed between child behavior problems and maternal negativity for mothers with few stressors and left frontal asymmetry. The findings implicate transactions between household stress and a psychophysiological indicator of maternal emotional reactivity and mothers’ approach/avoidance tendencies, in the etiology of parental negativity toward challenging child behaviors. PMID:27853348
NASA Astrophysics Data System (ADS)
Parekh, Ankit
Sparsity has become the basis of some important signal processing methods over the last ten years. Many signal processing problems (e.g., denoising, deconvolution, non-linear component analysis) can be expressed as inverse problems. Sparsity is invoked through the formulation of an inverse problem with suitably designed regularization terms. The regularization terms alone encode sparsity into the problem formulation. Often, the ℓ1 norm is used to induce sparsity, so much so that ℓ1 regularization is considered to be `modern least-squares'. The use of ℓ1 norm, as a sparsity-inducing regularizer, leads to a convex optimization problem, which has several benefits: the absence of extraneous local minima, well developed theory of globally convergent algorithms, even for large-scale problems. Convex regularization via the ℓ1 norm, however, tends to under-estimate the non-zero values of sparse signals. In order to estimate the non-zero values more accurately, non-convex regularization is often favored over convex regularization. However, non-convex regularization generally leads to non-convex optimization, which suffers from numerous issues: convergence may be guaranteed to only a stationary point, problem specific parameters may be difficult to set, and the solution is sensitive to the initialization of the algorithm. The first part of this thesis is aimed toward combining the benefits of non-convex regularization and convex optimization to estimate sparse signals more effectively. To this end, we propose to use parameterized non-convex regularizers with designated non-convexity and provide a range for the non-convex parameter so as to ensure that the objective function is strictly convex. By ensuring convexity of the objective function (sum of data-fidelity and non-convex regularizer), we can make use of a wide variety of convex optimization algorithms to obtain the unique global minimum reliably. The second part of this thesis proposes a non-linear signal decomposition technique for an important biomedical signal processing problem: the detection of sleep spindles and K-complexes in human sleep electroencephalography (EEG). We propose a non-linear model for the EEG consisting of three components: (1) a transient (sparse piecewise constant) component, (2) a low-frequency component, and (3) an oscillatory component. The oscillatory component admits a sparse time-frequency representation. Using a convex objective function, we propose a fast non-linear optimization algorithm to estimate the three components in the proposed signal model. The low-frequency and oscillatory components are then used to estimate the K-complexes and sleep spindles respectively. The proposed detection method is shown to outperform several state-of-the-art automated sleep spindles detection methods.
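The bias of l1 regularization and the appeal of a mildly non-convex alternative can be illustrated with scalar thresholding rules. The firm threshold below is one classical example of a rule arising from a parameterized non-convex penalty; it is not the specific penalty developed in the thesis, and the parameters `lam` and `mu` are illustrative. Keeping the non-convexity mild (here, `mu` not too close to `lam`) is what allows the overall objective to remain convex.

```python
import numpy as np

def soft_threshold(y, lam):
    """Proximal operator of the l1 norm: shrinks everything, so large
    coefficients are systematically under-estimated."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

def firm_threshold(y, lam, mu):
    """Firm threshold (requires mu > lam): behaves like soft thresholding near
    zero but leaves coefficients larger than mu unbiased, mimicking the effect
    of a mildly non-convex sparsity penalty."""
    a = np.abs(y)
    return np.where(a <= lam, 0.0,
           np.where(a >= mu, y, np.sign(y) * mu * (a - lam) / (mu - lam)))
```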
Filtration of human EEG recordings from physiological artifacts with empirical mode method
NASA Astrophysics Data System (ADS)
Grubov, Vadim V.; Runnova, Anastasiya E.; Khramova, Marina V.
2017-03-01
In the paper we propose a new method for dealing with noise and physiological artifacts in experimental human EEG recordings. The method is based on analysis of EEG signals with empirical mode decomposition (the Hilbert-Huang transform). We treat noise and physiological artifacts in the EEG as specific oscillatory patterns that cause problems during EEG analysis and that can be detected with additional signals recorded simultaneously with the EEG (ECG, EMG, EOG, etc.). The algorithm proceeds in the following steps: empirical mode decomposition of the EEG signal, selection of the empirical modes containing artifacts, removal of those modes, and reconstruction of the initial EEG signal. We test the method on the removal of eye-movement artifacts from experimental human EEG signals and show its high efficiency.
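A minimal sketch of the described pipeline is shown below, assuming the PyEMD package for the decomposition and a simultaneously recorded EOG channel as the artifact reference. The correlation-based mode-selection rule and its threshold are assumptions made for illustration and are not necessarily the authors' criterion.

```python
import numpy as np
from PyEMD import EMD   # assumes the PyEMD package (pip install EMD-signal)

def remove_eog_artifacts(eeg, eog, corr_thresh=0.3):
    """Sketch of an EMD-based cleaning pipeline: decompose one EEG channel into
    intrinsic mode functions (IMFs), drop the IMFs that correlate with the EOG
    reference channel, and rebuild the signal from the remaining modes."""
    imfs = EMD().emd(eeg)                           # empirical mode decomposition
    keep = []
    for imf in imfs:
        r = np.corrcoef(imf, eog)[0, 1]             # similarity to the artifact reference
        if abs(r) < corr_thresh:
            keep.append(imf)
    return np.sum(keep, axis=0)                     # reconstructed, cleaned EEG
```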
Corrected Four-Sphere Head Model for EEG Signals.
Næss, Solveig; Chintaluri, Chaitanya; Ness, Torbjørn V; Dale, Anders M; Einevoll, Gaute T; Wójcik, Daniel K
2017-01-01
The EEG signal is generated by electrical brain cell activity, often described in terms of current dipoles. By applying EEG forward models we can compute the contribution from such dipoles to the electrical potential recorded by EEG electrodes. Forward models are key both for generating understanding and intuition about the neural origin of EEG signals as well as inverse modeling, i.e., the estimation of the underlying dipole sources from recorded EEG signals. Different models of varying complexity and biological detail are used in the field. One such analytical model is the four-sphere model which assumes a four-layered spherical head where the layers represent brain tissue, cerebrospinal fluid (CSF), skull, and scalp, respectively. While conceptually clear, the mathematical expression for the electric potentials in the four-sphere model is cumbersome, and we observed that the formulas presented in the literature contain errors. Here, we derive and present the correct analytical formulas with a detailed derivation. A useful application of the analytical four-sphere model is that it can serve as ground truth to test the accuracy of numerical schemes such as the Finite Element Method (FEM). We performed FEM simulations of the four-sphere head model and showed that they were consistent with the corrected analytical formulas. For future reference we provide scripts for computing EEG potentials with the four-sphere model, both by means of the correct analytical formulas and numerical FEM simulations.
EEG Markers for Attention Deficit Disorder: Pharmacological and Neurofeedback Applications.
ERIC Educational Resources Information Center
Sterman, M. Barry
2000-01-01
Examined contribution of EEG findings in the classification and treatment of attention deficit and related behavioral problems in children. Found that quantitative EEG methods disclosed patterns of abnormality in children with ADD, suggested improved guidelines for pharmacological treatment, and introduced neurofeedback, a behavioral treatment for…
Frontal-posterior coherence and cognitive function in older adults.
Fleck, Jessica I; Kuti, Julia; Brown, Jessica; Mahon, Jessica R; Gayda-Chelder, Christine
2016-12-01
The reliable measurement of brain health and cognitive function is essential in mitigating the negative effects associated with cognitive decline through early and accurate diagnosis of change. The present research explored the relationship between EEG coherence for electrodes within frontal and posterior regions, as well as coherence between frontal and posterior electrodes and performance on standard neuropsychological measures of memory and executive function. EEG coherence for eyes-closed resting-state EEG activity was calculated for delta, theta, alpha, beta, and gamma frequency bands. Participants (N = 66; mean age = 67.15 years) had their resting-state EEGs recorded and completed a neuropsychological battery that assessed memory and executive function, two cognitive domains that are significantly affected during aging. A positive relationship was observed between coherence within the frontal region and performance on measures of memory and executive function for delta and beta frequency bands. In addition, an inverse relationship was observed for coherence between frontal and posterior electrode pairs, particularly within the theta frequency band, and performance on Digit Span Sequencing, a measure of working memory. The present research supports a more substantial link between EEG coherence, rather than spectral power, and cognitive function. Continued study in this area may enable EEG to be applied broadly as a diagnostic measure of cognitive ability. Copyright © 2016 Elsevier B.V. All rights reserved.
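Band-limited coherence between electrode pairs, as used above, can be computed with a Welch-based estimator; the sketch below uses scipy.signal.coherence. The channel indices, sampling rate, and segment length are chosen purely for illustration.

```python
import numpy as np
from scipy.signal import coherence

def band_coherence(x, y, fs, band):
    """Magnitude-squared coherence between two electrodes, averaged over a band."""
    f, cxy = coherence(x, y, fs=fs, nperseg=4 * fs)   # Welch-based coherence estimate
    mask = (f >= band[0]) & (f <= band[1])
    return cxy[mask].mean()

# Illustrative use: theta-band coherence between a frontal and a posterior channel,
# where eeg is assumed to be a (channels x samples) array sampled at 256 Hz.
# theta_coh = band_coherence(eeg[ch_frontal], eeg[ch_posterior], fs=256, band=(4, 8))
```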
Dealing with noise and physiological artifacts in human EEG recordings: empirical mode methods
NASA Astrophysics Data System (ADS)
Runnova, Anastasiya E.; Grubov, Vadim V.; Khramova, Marina V.; Hramov, Alexander E.
2017-04-01
In the paper we propose a new method for removing noise and physiological artifacts from human EEG recordings based on empirical mode decomposition (the Hilbert-Huang transform). We treat physiological artifacts as specific oscillatory patterns that cause problems during EEG analysis and that can be detected with additional signals recorded simultaneously with the EEG (ECG, EMG, EOG, etc.). The proposed algorithm consists of empirical mode decomposition of the EEG signal, selection of the empirical modes containing artifacts, removal of those modes, and reconstruction of the initial EEG signal. We demonstrate the efficiency of the method on the removal of eye-movement artifacts from a human EEG signal.
s-SMOOTH: Sparsity and Smoothness Enhanced EEG Brain Tomography
Li, Ying; Qin, Jing; Hsin, Yue-Loong; Osher, Stanley; Liu, Wentai
2016-01-01
EEG source imaging enables us to reconstruct the current density in the brain from electrical measurements with excellent temporal resolution (~ ms). The corresponding EEG inverse problem is an ill-posed one that has infinitely many solutions. This is due to the fact that the number of EEG sensors is usually much smaller than the number of potential dipole locations, as well as noise contamination in the recorded signals. To obtain a unique solution, regularization can be incorporated to impose additional constraints on the solution. An appropriate choice of regularization is critically important for the reconstruction accuracy of a brain image. In this paper, we propose a novel Sparsity and SMOOthness enhanced brain TomograpHy (s-SMOOTH) method to improve the reconstruction accuracy by integrating two recently proposed regularization techniques: Total Generalized Variation (TGV) regularization and ℓ1−2 regularization. TGV is able to preserve the source edge and recover the spatial distribution of the source intensity with high accuracy. Compared to the related total variation (TV) regularization, TGV enhances the smoothness of the image and reduces staircasing artifacts. The traditional TGV defined on a 2D image has been widely used in the image processing field. In order to handle 3D EEG source images, we propose a voxel-based Total Generalized Variation (vTGV) regularization that extends the definition of second-order TGV from 2D planar images to 3D irregular surfaces such as the cortical surface. In addition, the ℓ1−2 regularization is utilized to promote sparsity on the current density itself. We demonstrate that ℓ1−2 regularization enhances sparsity and accelerates computation compared with ℓ1 regularization. The proposed model is solved by an efficient and robust algorithm based on the difference of convex functions algorithm (DCA) and the alternating direction method of multipliers (ADMM). Numerical experiments using synthetic data demonstrate the advantages of the proposed method over other state-of-the-art methods in terms of total reconstruction accuracy, localization accuracy and focalization degree. The application to the source localization of event-related potential data further demonstrates the performance of the proposed method in real-world scenarios. PMID:27965529
Dynamics of large-scale brain activity in normal arousal states and epileptic seizures
NASA Astrophysics Data System (ADS)
Robinson, P. A.; Rennie, C. J.; Rowe, D. L.
2002-04-01
Links between electroencephalograms (EEGs) and underlying aspects of neurophysiology and anatomy are poorly understood. Here a nonlinear continuum model of large-scale brain electrical activity is used to analyze arousal states and their stability and nonlinear dynamics for physiologically realistic parameters. A simple ordered arousal sequence in a reduced parameter space is inferred and found to be consistent with experimentally determined parameters of waking states. Instabilities arise at spectral peaks of the major clinically observed EEG rhythms (mainly slow wave, delta, theta, alpha, and sleep spindle), with each instability zone lying near its most common experimental precursor arousal states in the reduced space. Theta, alpha, and spindle instabilities evolve toward low-dimensional nonlinear limit cycles that correspond closely to EEGs of petit mal seizures for theta instability, and grand mal seizures for the other types. Nonlinear stimulus-induced entrainment and seizures are also seen, EEG spectra and potentials evoked by stimuli are reproduced, and numerous other points of experimental agreement are found. Inverse modeling enables physiological parameters underlying observed EEGs to be determined by a new, noninvasive route. This model thus provides a single, powerful framework for quantitative understanding of a wide variety of brain phenomena.
Turovets, Sergei; Volkov, Vasily; Zherdetsky, Aleksej; Prakonina, Alena; Malony, Allen D
2014-01-01
The Electrical Impedance Tomography (EIT) and electroencephalography (EEG) forward problems in anisotropic inhomogeneous media like the human head belong to the class of three-dimensional boundary value problems for elliptic equations with mixed derivatives. We introduce and explore the performance of several new promising numerical techniques, which seem to be more suitable for solving these problems. The proposed numerical schemes combine the fictitious domain approach with the finite-difference method and an optimally preconditioned Conjugate Gradient (CG) type iterative method for treatment of the discrete model. The numerical scheme involves only the standard operations of summation and multiplication of sparse matrices and vectors, as well as the FFT, making it easy to implement and well suited to efficient parallel implementation. Some typical use cases for EIT/EEG problems are considered, demonstrating the high efficiency of the proposed numerical technique.
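The preconditioned CG solve at the heart of such schemes can be sketched with standard sparse tooling. The example below solves a stand-in 1-D finite-difference system with a simple Jacobi preconditioner; the fictitious-domain formulation and FFT-based preconditioning described in the abstract are not reproduced here.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg, LinearOperator

# Stand-in sparse, symmetric finite-difference system A u = f
# (a 1-D Laplacian stencil as a toy replacement for the discretized head model).
n = 1000
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
f = np.ones(n)

# Jacobi (diagonal) preconditioner applied inside the CG iteration
M = LinearOperator((n, n), matvec=lambda r: r / A.diagonal())

u, info = cg(A, f, M=M)   # info == 0 indicates convergence
```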
A mesostate-space model for EEG and MEG.
Daunizeau, Jean; Friston, Karl J
2007-10-15
We present a multi-scale generative model for EEG that entails a minimum number of assumptions about evoked brain responses, namely: (1) bioelectric activity is generated by a set of distributed sources, (2) the dynamics of these sources can be modelled as random fluctuations about a small number of mesostates, (3) mesostates evolve in a temporally structured way and are functionally connected (i.e. influence each other), and (4) the number of mesostates engaged by a cognitive task is small (e.g. between one and a few). A Variational Bayesian learning scheme is described that furnishes the posterior density on the model's parameters and its evidence. Since the number of meso-sources specifies the model, the model evidence can be used to compare models and find the optimum number of meso-sources. In addition to estimating the dynamics at each cortical dipole, the mesostate-space model and its inversion provide a description of brain activity at the level of the mesostates (i.e. in terms of the dynamics of meso-sources that are distributed over dipoles). The inclusion of a mesostate level allows one to compute posterior probability maps of each dipole being active (i.e. belonging to an active mesostate). Critically, this model accommodates constraints on the number of meso-sources, while retaining the flexibility of distributed source models in explaining data. In short, it bridges the gap between standard distributed and equivalent current dipole models. Furthermore, because it is explicitly spatiotemporal, the model can embed any stochastic dynamical causal model (e.g. a neural mass model) as a Markov process prior on the mesostate dynamics. The approach is evaluated and compared to standard inverse EEG techniques, using synthetic data and real data. The results demonstrate the added value of the mesostate-space model and its variational inversion.
Neural correlates of mathematical problem solving.
Lin, Chun-Ling; Jung, Melody; Wu, Ying Choon; She, Hsiao-Ching; Jung, Tzyy-Ping
2015-03-01
This study explores electroencephalography (EEG) brain dynamics associated with mathematical problem solving. EEG and solution latencies (SLs) were recorded as 11 neurologically healthy volunteers worked on intellectually challenging math puzzles that involved combining four single-digit numbers through basic arithmetic operators (addition, subtraction, division, multiplication) to create an arithmetic expression equaling 24. Estimates of EEG spectral power were computed in three frequency bands (θ, 4-7 Hz; α, 8-13 Hz; β, 14-30 Hz) over a widely distributed montage of scalp electrode sites. The magnitude of the power estimates was found to change in a linear fashion with SLs: relative to baseline spectral power, theta power increased with longer SLs, while alpha and beta power tended to decrease. Further, the topographic distribution of spectral fluctuations was characterized by more pronounced asymmetries along the left-right and anterior-posterior axes for solutions that involved a longer search phase. These findings reveal for the first time the topography and dynamics of EEG spectral activities important for sustained solution search during arithmetical problem solving.
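The band-limited spectral power estimates referred to above can be obtained from a Welch periodogram; the sketch below integrates the power spectral density over a chosen band. The sampling rate and band edges are given only as examples.

```python
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, band):
    """Average spectral power of one channel within a frequency band (Welch PSD)."""
    f, psd = welch(signal, fs=fs, nperseg=2 * fs)
    mask = (f >= band[0]) & (f <= band[1])
    return np.trapz(psd[mask], f[mask])              # integrate the PSD over the band

# Example band definitions (Hz): theta (4-7), alpha (8-13), beta (14-30).
# theta_power = band_power(eeg_channel, fs=256, band=(4, 7))
```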
Zotev, Vadim; Yuan, Han; Misaki, Masaya; Phillips, Raquel; Young, Kymberly D.; Feldner, Matthew T.; Bodurka, Jerzy
2016-01-01
Real-time fMRI neurofeedback (rtfMRI-nf) is an emerging approach for studies and novel treatments of major depressive disorder (MDD). EEG performed simultaneously with an rtfMRI-nf procedure allows an independent evaluation of rtfMRI-nf brain modulation effects. Frontal EEG asymmetry in the alpha band is a widely used measure of emotion and motivation that shows profound changes in depression. However, it has never been directly related to simultaneously acquired fMRI data. We report the first study investigating electrophysiological correlates of the rtfMRI-nf procedure, by combining the rtfMRI-nf with simultaneous and passive EEG recordings. In this pilot study, MDD patients in the experimental group (n = 13) learned to upregulate BOLD activity of the left amygdala using an rtfMRI-nf during a happy emotion induction task. MDD patients in the control group (n = 11) were provided with a sham rtfMRI-nf. Correlations between frontal EEG asymmetry in the upper alpha band and BOLD activity across the brain were examined. Average individual changes in frontal EEG asymmetry during the rtfMRI-nf task for the experimental group showed a significant positive correlation with the MDD patients' depression severity ratings, consistent with an inverse correlation between the depression severity and frontal EEG asymmetry at rest. The average asymmetry changes also significantly correlated with the amygdala BOLD laterality. Temporal correlations between frontal EEG asymmetry and BOLD activity were significantly enhanced, during the rtfMRI-nf task, for the amygdala and many regions associated with emotion regulation. Our findings demonstrate an important link between amygdala BOLD activity and frontal EEG asymmetry during emotion regulation. Our EEG asymmetry results indicate that the rtfMRI-nf training targeting the amygdala is beneficial to MDD patients. They further suggest that EEG-nf based on frontal EEG asymmetry in the alpha band would be compatible with the amygdala-based rtfMRI-nf. Combination of the two could enhance emotion regulation training and benefit MDD patients. PMID:26958462
Source localization of temporal lobe epilepsy using PCA-LORETA analysis on ictal EEG recordings.
Stern, Yaki; Neufeld, Miriam Y; Kipervasser, Svetlana; Zilberstein, Amir; Fried, Itzhak; Teicher, Mina; Adi-Japha, Esther
2009-04-01
Localizing the source of an epileptic seizure using noninvasive EEG suffers from inaccuracies produced by other generators not related to the epileptic source. The authors isolated the ictal epileptic activity and applied a source localization algorithm to identify its estimated location. Ten ictal EEG scalp recordings from five different patients were analyzed. The patients were known to have temporal lobe epilepsy with a single epileptic focus that had a concordant MRI lesion. The patients had become seizure-free following partial temporal lobectomy. A mid-interval period (approximately 5 seconds) of ictal activity, starting at ictal onset, was used for principal component analysis. The level of epileptic activity at each electrode (i.e., the eigenvector of the component that manifested epileptic characteristics) was used as input for a low-resolution tomography analysis for the EEG inverse solution (Zilberstein et al., 2004). The algorithm accurately and robustly identified the epileptic focus in these patients. Principal component analysis and source localization methods can be used in the future to monitor the progression of an epileptic seizure and its expansion to other areas.
Scale-specific effects: A report on multiscale analysis of acupunctured EEG in entropy and power
NASA Astrophysics Data System (ADS)
Song, Zhenxi; Deng, Bin; Wei, Xile; Cai, Lihui; Yu, Haitao; Wang, Jiang; Wang, Ruofan; Chen, Yingyuan
2018-02-01
Investigating acupuncture effects contributes to improving clinical application and to understanding neuronal dynamics under external stimulation. In this report, we recorded electroencephalography (EEG) signals evoked by acupuncture at the ST36 acupoint with three stimulus frequencies of 50, 100, and 200 times per minute, and selected non-acupuncture EEGs as the control group. Multiscale analyses were used to investigate possible acupuncture effects on complexity and power across scales. Using multiscale weighted-permutation entropy, we found a significant acupuncture-induced increase in the complexity of the EEG signals. Comparison of the three stimulation manipulations showed that 100 times/min produced the most pronounced effects and affected the most cortical regions. By estimating the average power spectral density, we found that acupuncture decreased power. The joint distribution of entropy and power indicated an inverse correlation, and this relationship was weakened by acupuncture, especially under the 100 times/min manipulation. These findings are more evident and stable at large scales than at small scales, which suggests that multiscale analysis allows significant effects to be evaluated at specific scales and can probe the inherent characteristics underlying physiological signals.
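A minimal sketch of multiscale weighted-permutation entropy is given below: coarse-grain the signal at each scale, then compute permutation entropy with ordinal patterns weighted by the local variance of each embedding vector. The embedding dimension, delay, and scale range are illustrative choices, not the study's exact settings.

```python
import numpy as np
from math import factorial
from itertools import permutations

def weighted_permutation_entropy(x, m=4, tau=1):
    """Weighted-permutation entropy of a 1-D signal: ordinal patterns of length m,
    each weighted by the variance of its embedding vector, normalized by log(m!)."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (m - 1) * tau
    weights = {p: 0.0 for p in permutations(range(m))}
    for i in range(n):
        vec = x[i:i + m * tau:tau]
        weights[tuple(np.argsort(vec))] += np.var(vec)   # weight = local variance
    w = np.array([v for v in weights.values() if v > 0])
    p = w / w.sum()
    return float(-np.sum(p * np.log(p)) / np.log(factorial(m)))

def coarse_grain(x, scale):
    """Coarse-graining step of a multiscale analysis: average non-overlapping windows."""
    x = np.asarray(x, dtype=float)
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

# mwpe_curve = [weighted_permutation_entropy(coarse_grain(eeg_channel, s)) for s in range(1, 21)]
```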
Effects of Marijuana on Ictal and Interictal EEG Activities in Idiopathic Generalized Epilepsy.
Sivakumar, Sanjeev; Zutshi, Deepti; Seraji-Bozorgzad, Navid; Shah, Aashit K
2017-01-01
Marijuana-based treatment for refractory epilepsy shows promise in surveys, case series, and clinical trials. However, literature on its EEG effects is sparse. Our objective was to analyze the effect of marijuana on the EEG of a 24-year-old patient with idiopathic generalized epilepsy treated with cannabis. We blindly reviewed three long-term EEGs: a 24-hour study while only on antiepileptic drugs, a 72-hour EEG with Cannabis indica smoked on days 1 and 3 in addition to antiepileptic drugs, and a 48-hour EEG with a combination of C indica/sativa smoked on day 1 plus antiepileptic drugs. Generalized spike-wave discharges and diffuse paroxysmal fast activity were categorized as interictal or ictal, based on a duration of less than 10 seconds or greater, respectively. Data from the three studies were concatenated into a contiguous time series, with marijuana use modeled as a time-dependent discrete variable and the interictal and ictal events as dependent variables. Analysis of variance was performed as an initial test for significance, followed by time series analysis using a Generalized Autoregressive Conditional Heteroscedasticity model. Statistical significance for lower interictal events (analysis of variance P = 0.001) was seen during C indica use, but not for the C indica/sativa mixture (P = 0.629) or for ictal events (P = 0.087). However, time series analysis revealed a significant inverse correlation between marijuana use and both interictal (P < 0.0004) and ictal (P = 0.002) event rates. Using a novel approach to EEG data, we demonstrate a decrease in interictal and ictal electrographic events during marijuana use. Larger samples of patients and EEG, with standardized cannabinoid formulation and dosing, are needed to validate our findings.
2013-01-01
Background The matching pursuit algorithm (MP), especially with recent multivariate extensions, offers unique advantages in the analysis of EEG and MEG. Methods We propose a novel construction of an optimal Gabor dictionary, based upon the metrics introduced in this paper. We implement this construction in freely available software for MP decomposition of multivariate time series, with a user-friendly interface via the Svarog package (Signal Viewer, Analyzer and Recorder On GPL, http://braintech.pl/svarog), and provide a hands-on introduction to its application to EEG. Finally, we describe the numerical and mathematical optimizations used in this implementation. Results Optimal Gabor dictionaries, based on the metric introduced in this paper, for the first time allowed an a priori assessment of the maximum one-step error of the MP algorithm. Variants of multivariate MP, implemented in the accompanying software, are organized according to the mathematical properties of the algorithms, relevant in the light of EEG/MEG analysis. Some of these variants have been successfully applied to both multichannel and multitrial EEG and MEG in previous studies, improving preprocessing for EEG/MEG inverse solutions and the parameterization of evoked potentials in single trials; we also mention ongoing work and possible novel applications. Conclusions The mathematical results presented in this paper improve our understanding of the basics of the MP algorithm. A simple introduction to its properties and advantages, together with the accompanying stable and user-friendly Open Source software package, paves the way for widespread and reproducible analysis of multivariate EEG and MEG time series and novel applications, while retaining a high degree of compatibility with the traditional, visual analysis of EEG. PMID:24059247
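The basic matching pursuit iteration with Gabor atoms can be sketched as below. This generic greedy loop over a precomputed, real-valued Gabor dictionary is only an illustration; it does not reproduce the optimal-dictionary construction or the multivariate variants described in the abstract.

```python
import numpy as np

def gabor_atom(n, center, width, freq, phase=0.0):
    """One real Gabor atom: a Gaussian-windowed cosine, normalized to unit energy."""
    t = np.arange(n)
    g = np.exp(-0.5 * ((t - center) / width) ** 2) * np.cos(2 * np.pi * freq * t + phase)
    return g / np.linalg.norm(g)

def matching_pursuit(signal, dictionary, n_atoms=20):
    """Greedy MP sketch: repeatedly pick the (unit-norm) dictionary atom with the
    largest inner product with the residual and subtract its contribution."""
    residual = signal.astype(float).copy()
    decomposition = []
    for _ in range(n_atoms):
        scores = dictionary.T @ residual            # correlations with every atom
        k = int(np.argmax(np.abs(scores)))
        a = scores[k]
        residual = residual - a * dictionary[:, k]
        decomposition.append((k, a))                # (atom index, amplitude)
    return decomposition, residual
```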
Zou, Ling; Chen, Shuyue; Sun, Yuqiang; Ma, Zhenghua
2010-08-01
In this paper we present a new method combining Independent Component Analysis (ICA) and a wavelet de-noising algorithm to extract event-related potentials (ERPs). First, the extended Infomax ICA algorithm is used to analyze the EEG signals and obtain the independent components (ICs); then, the WaveShrink (WS) method is applied to the demixed ICs as an intermediate step; the EEG data are rebuilt using the inverse ICA based on the new ICs; and the ERPs are extracted from the de-noised EEG data after averaging over several trials. The experimental results showed that both the combined method and plain ICA could remove the eye and muscle artifacts mixed into the ERPs, while the combined method could additionally retain the brain activity mixed into the noisy ICs and extract weak ERPs efficiently from strong background artifacts.
Sparse EEG/MEG source estimation via a group lasso
Lim, Michael; Ales, Justin M.; Cottereau, Benoit R.; Hastie, Trevor
2017-01-01
Non-invasive recordings of human brain activity through electroencephalography (EEG) or magnetoencelphalography (MEG) are of value for both basic science and clinical applications in sensory, cognitive, and affective neuroscience. Here we introduce a new approach to estimating the intra-cranial sources of EEG/MEG activity measured from extra-cranial sensors. The approach is based on the group lasso, a sparse-prior inverse that has been adapted to take advantage of functionally-defined regions of interest for the definition of physiologically meaningful groups within a functionally-based common space. Detailed simulations using realistic source-geometries and data from a human Visual Evoked Potential experiment demonstrate that the group-lasso method has improved performance over traditional ℓ2 minimum-norm methods. In addition, we show that pooling source estimates across subjects over functionally defined regions of interest results in improvements in the accuracy of source estimates for both the group-lasso and minimum-norm approaches. PMID:28604790
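The group-lasso prior described above can be sketched as a proximal-gradient iteration whose proximal step shrinks each region of interest as a block. The sketch below is a generic implementation under that assumption, with `groups` a list of index arrays (one per ROI); it is not the authors' fitting procedure.

```python
import numpy as np

def group_soft_threshold(x, groups, t):
    """Proximal operator of the group-lasso penalty: shrink each group of source
    coefficients toward zero as a block, so whole regions switch on or off together."""
    out = np.zeros_like(x)
    for g in groups:                                  # g is an index array for one ROI
        norm = np.linalg.norm(x[g])
        if norm > t:
            out[g] = (1.0 - t / norm) * x[g]
    return out

def group_lasso_inverse(L, v, groups, lam, n_iter=300):
    """Proximal-gradient sketch of a group-lasso source estimate for lead field L
    and one measurement vector v."""
    step = 1.0 / np.linalg.norm(L, 2) ** 2            # inverse Lipschitz constant
    x = np.zeros(L.shape[1])
    for _ in range(n_iter):
        x = group_soft_threshold(x - step * L.T @ (L @ x - v), groups, lam * step)
    return x
```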
Simultaneous EEG and MEG source reconstruction in sparse electromagnetic source imaging.
Ding, Lei; Yuan, Han
2013-04-01
Electroencephalography (EEG) and magnetoencephalography (MEG) have different sensitivities to differently configured brain activations, making them complementary in providing independent information for better detection and inverse reconstruction of brain sources. In the present study, we developed an integrative approach, which integrates a novel sparse electromagnetic source imaging method, i.e., variation-based cortical current density (VB-SCCD), together with the combined use of EEG and MEG data in reconstructing complex brain activity. To perform simultaneous analysis of multimodal data, we proposed to normalize EEG and MEG signals according to their individual noise levels to create unit-free measures. Our Monte Carlo simulations demonstrated that this integrative approach is capable of reconstructing complex cortical brain activations (up to 10 simultaneously activated and randomly located sources). Results from experimental data showed that complex brain activations evoked in a face recognition task were successfully reconstructed using the integrative approach, which were consistent with other research findings and validated by independent data from functional magnetic resonance imaging using the same stimulus protocol. Reconstructed cortical brain activations from both simulations and experimental data provided precise source localizations as well as accurate spatial extents of localized sources. In comparison with studies using EEG or MEG alone, the performance of cortical source reconstructions using combined EEG and MEG was significantly improved. We demonstrated that this new sparse ESI methodology with integrated analysis of EEG and MEG data could accurately probe spatiotemporal processes of complex human brain activations. This is promising for noninvasively studying large-scale brain networks of high clinical and scientific significance. Copyright © 2011 Wiley Periodicals, Inc.
Hindriks, Rikkert; Schmiedt, Joscha; Arsiwalla, Xerxes D; Peter, Alina; Verschure, Paul F M J; Fries, Pascal; Schmid, Michael C; Deco, Gustavo
2017-01-01
Planar intra-cortical electrode (Utah) arrays provide a unique window into the spatial organization of cortical activity. Reconstruction of the current source density (CSD) underlying such recordings, however, requires "inverting" Poisson's equation. For inter-laminar recordings, this is commonly done by the CSD method, which consists in taking the second-order spatial derivative of the recorded local field potentials (LFPs). Although the CSD method has been tremendously successful in mapping the current generators underlying inter-laminar LFPs, its application to planar recordings is more challenging. While for inter-laminar recordings the CSD method seems reasonably robust against violations of its assumptions, it is unclear to what extent this holds for planar recordings. One of the objectives of this study is to characterize the conditions under which the CSD method can be successfully applied to Utah array data. Using forward modeling, we find that for spatially coherent CSDs, the CSD method yields inaccurate reconstructions due to volume-conducted contamination from currents in deeper cortical layers. An alternative approach is to "invert" a constructed forward model. The advantage of this approach is that any a priori knowledge about the geometrical and electrical properties of the tissue can be taken into account. Although several inverse methods have been proposed for LFP data, the applicability of existing electroencephalographic (EEG) and magnetoencephalographic (MEG) inverse methods to LFP data is largely unexplored. Another objective of our study, therefore, is to assess the applicability of the most commonly used EEG/MEG inverse methods to Utah array data. Our main conclusion is that these inverse methods provide more accurate CSD reconstructions than the CSD method. We illustrate the inverse methods using event-related potentials recorded from primary visual cortex of a macaque monkey during a motion discrimination task.
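The "CSD method" referred to above amounts to a discrete second-order spatial derivative of the recorded LFPs; on a planar array this is a two-dimensional Laplacian, sketched below. The wrap-around handling of edge electrodes and the omitted conductivity factor are simplifications for illustration only.

```python
import numpy as np

def csd_method_2d(lfp_grid, spacing):
    """Traditional CSD estimate on a planar array: the (negative) second-order
    spatial derivative of the LFP, computed here as a five-point discrete
    Laplacian over the electrode grid for one time sample. np.roll wraps the
    border electrodes; real analyses need explicit edge handling, and the
    tissue-conductivity scaling is omitted."""
    lap = (np.roll(lfp_grid, 1, axis=0) + np.roll(lfp_grid, -1, axis=0) +
           np.roll(lfp_grid, 1, axis=1) + np.roll(lfp_grid, -1, axis=1) -
           4.0 * lfp_grid) / spacing ** 2
    return -lap        # CSD is proportional to minus the Laplacian of the potential
```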
Single-trial EEG-informed fMRI analysis of emotional decision problems in hot executive function.
Guo, Qian; Zhou, Tiantong; Li, Wenjie; Dong, Li; Wang, Suhong; Zou, Ling
2017-07-01
Executive function refers to the conscious control of psychological processes related to thinking and action. Emotional decision making is part of hot executive function and combines emotional and logical elements; as an important social adaptation ability, it has attracted increasing attention in recent years. Gambling tasks are well suited to the study of emotional decision making. Because fMRI studies of gambling tasks report inconsistent brain activation regions, this study adopted EEG-fMRI fusion to reveal the brain neural activity related to feedback stimuli. An EEG-informed fMRI analysis was applied to simultaneously acquired EEG-fMRI data. First, relative power-spectrum analysis and K-means clustering were performed separately to extract EEG-fMRI features. Then, general linear models (GLMs) were constructed from the fMRI data using different EEG features as regressors. The results showed that for win versus loss stimuli, the activated regions largely covered the caudate, the ventral striatum (VS), the orbital frontal cortex (OFC), and the cingulate. The EEG-fMRI integration analysis revealed wider activation areas associated with reward and punishment than the conventional fMRI analysis, including the posterior cingulate and the OFC. The VS and the medial prefrontal cortex (mPFC) were found when EEG power features, rather than the amplitude of the feedback-related negativity (FRN), were used as GLM regressors. Furthermore, activation intensity was strongest when theta-band power was used as the regressor, compared with the other two fusion results. EEG-informed fMRI analysis can thus depict the whole-brain activation map more accurately and help analyze emotional decision problems.
Brain Dynamics: Methodological Issues and Applications in Psychiatric and Neurologic Diseases
NASA Astrophysics Data System (ADS)
Pezard, Laurent
The human brain is a complex dynamical system generating the EEG signal. Numerical methods developed to study complex physical dynamics have been used to characterize the EEG since the mid-eighties. This endeavor raised several issues related to the specific nature of the EEG. First, theoretical and methodological studies should address the major differences between the dynamics of the human brain and those of physical systems. Second, this approach to the EEG signal should prove relevant for dealing with physiological or clinical problems. A set of studies performed in our group is presented here within the context of these two problematic aspects. After a discussion of methodological drawbacks, we review numerical simulations related to the high dimension and spatial extension of brain dynamics. Experimental studies in neurologic and psychiatric disease are then presented. We conclude that, although it is now clear that brain dynamics change in relation to clinical situations, methodological problems remain largely unsolved.
Wavelet-based localization of oscillatory sources from magnetoencephalography data.
Lina, J M; Chowdhury, R; Lemay, E; Kobayashi, E; Grova, C
2014-08-01
Transient brain oscillatory activities recorded with electroencephalography (EEG) or magnetoencephalography (MEG) are characteristic features in physiological and pathological processes. This study is aimed at describing, evaluating, and illustrating with clinical data a new method for localizing the sources of oscillatory cortical activity recorded by MEG. The method combines time-frequency representation and an entropic regularization technique in a common framework, assuming that brain activity is sparse in time and space. Spatial sparsity relies on the assumption that brain activity is organized among cortical parcels. Sparsity in time is achieved by transposing the inverse problem in the wavelet representation, for both data and sources. We propose an estimator of the wavelet coefficients of the sources based on the maximum entropy on the mean (MEM) principle. The full dynamics of the sources is obtained from the inverse wavelet transform, and principal component analysis of the reconstructed time courses is applied to extract oscillatory components. This methodology is evaluated using realistic simulations of single-trial signals, combining fast and sudden discharges (spikes) along with bursts of oscillating activity. The method is finally illustrated with a clinical application using MEG data acquired on a patient with a right orbitofrontal epilepsy.
Multi-modal Patient Cohort Identification from EEG Report and Signal Data
Goodwin, Travis R.; Harabagiu, Sanda M.
2016-01-01
Clinical electroencephalography (EEG) is the most important investigation in the diagnosis and management of epilepsies. An EEG records the electrical activity along the scalp and measures spontaneous electrical activity of the brain. Because the EEG signal is complex, its interpretation is known to produce moderate inter-observer agreement among neurologists. This problem can be addressed by providing clinical experts with the ability to automatically retrieve similar EEG signals and EEG reports through a patient cohort retrieval system operating on a vast archive of EEG data. In this paper, we present a multi-modal EEG patient cohort retrieval system called MERCuRY which leverages the heterogeneous nature of EEG data by processing both the clinical narratives from EEG reports as well as the raw electrode potentials derived from the recorded EEG signal data. At the core of MERCuRY is a novel multimodal clinical indexing scheme which relies on EEG data representations obtained through deep learning. The index is used by two clinical relevance models that we have generated for identifying patient cohorts satisfying the inclusion and exclusion criteria expressed in natural language queries. Evaluations of the MERCuRY system measured the relevance of the patient cohorts, obtaining MAP scores of 69.87% and an NDCG of 83.21%. PMID:28269938
The Relation Between Trait Anger and Impulse Control in Forensic Psychiatric Patients: An EEG Study.
Lievaart, Marien; van der Veen, Frederik M; Huijding, Jorg; Hovens, Johannes E; Franken, Ingmar H A
2018-06-01
Inhibitory control is considered to be one of the key factors in explaining individual differences in trait anger and reactive aggression. Yet, only a few studies have assessed electroencephalographic (EEG) activity with respect to response inhibition in high trait anger individuals. The main goal of this study was therefore to investigate whether individual differences in trait anger in forensic psychiatric patients are associated with individual differences in anger-primed inhibitory control using behavioral and electrophysiological measures of response inhibition. Thirty-eight forensic psychiatric patients who had a medium to high risk of recidivism of violent and/or non-violent behaviors performed an affective Go/NoGo task while EEG was recorded. On the behavioral level, we found higher scores on trait anger to be accompanied by lower accuracy on NoGo trials, especially when anger was primed. With respect to the physiological data we found, as expected, a significant inverse relation between trait anger and the error-related negativity amplitudes. Contrary to expectation, trait anger was not related to the stimulus-locked event-related potentials (i.e., N2/P3). The results of this study support the notion that in a forensic population trait anger is inversely related to impulse control, particularly in hostile contexts. Moreover, our data suggest that higher scores on trait anger are associated with deficits in automatic error-processing which may contribute to the continuation of impulsive angry behaviors despite their negative consequences.
Human exposure to power frequency magnetic fields up to 7.6 mT: An integrated EEG/fMRI study.
Modolo, Julien; Thomas, Alex W; Legros, Alexandre
2017-09-01
We assessed the effects of power-line frequency (60 Hz in North America) magnetic fields (MF) in humans using simultaneous electroencephalography (EEG) and functional magnetic resonance imaging (fMRI). Twenty-five participants were enrolled in a pseudo-double-blind experiment involving "real" or "sham" exposure to sinusoidal 60 Hz MF exposures delivered using the gradient coil of an MRI scanner following two conditions: (i) 10 s exposures at 3 mT (10 repetitions); (ii) 2 s exposures at 7.6 mT (100 repetitions). Occipital EEG spectral power was computed in the alpha range (8-12 Hz, reportedly the most sensitive to MF exposure in the literature) with/without exposure. Brain functional activation was studied using fMRI blood oxygen level-dependent (BOLD, inversely correlated with EEG alpha power) maps. No significant effects were detected on occipital EEG alpha power during or post-exposure for any exposure condition. Consistent with EEG results, no effects were observed on fMRI BOLD maps in any brain region. Our results suggest that acute exposure (2-10 s) to 60 Hz MF from 3 to 7.6 mT (30,000 to 76,000 times higher than average public exposure levels for 60 Hz MF) does not induce detectable changes in EEG or BOLD signals. Combined with previous findings in which effects were observed on the BOLD signal after 1 h exposure to 3 mT, 60 Hz MF, this suggests that MF exposure in the low mT range (<10 mT) might require prolonged durations of exposure to induce detectable effects. Bioelectromagnetics. 38:425-435, 2017. © 2017 Wiley Periodicals, Inc.
Majumdar, Angshul; Gogna, Anupriya; Ward, Rabab
2014-08-25
We address the problem of acquiring and transmitting EEG signals in Wireless Body Area Networks (WBAN) in an energy efficient fashion. In WBANs, the energy is consumed by three operations: sensing (sampling), processing and transmission. Previous studies only addressed the problem of reducing the transmission energy. For the first time, in this work, we propose a technique to reduce sensing and processing energy as well: this is achieved by randomly under-sampling the EEG signal. We depart from previous Compressed Sensing based approaches and formulate signal recovery (from under-sampled measurements) as a matrix completion problem. A new algorithm to solve the matrix completion problem is derived here. We test our proposed method and find that the reconstruction accuracy of our method is significantly better than state-of-the-art techniques; and we achieve this while saving sensing, processing and transmission energy. Simple power analysis shows that our proposed methodology consumes considerably less power compared to previous CS based techniques.
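The recovery step described above can be illustrated with a generic low-rank matrix completion routine. The sketch below uses iterative singular-value thresholding on a synthetic multichannel recording; it is not the authors' algorithm, and the matrix sizes, threshold, and 50% sampling rate are illustrative assumptions.

```python
# Illustrative sketch (not the paper's algorithm): recover a randomly
# under-sampled multichannel EEG matrix by iterative singular-value
# thresholding, a generic heuristic for low-rank matrix completion.
import numpy as np

def svt_complete(M, mask, tau=5.0, n_iter=200):
    """Fill in missing entries of M (mask == False) assuming M is low rank."""
    X = np.zeros_like(M)
    for _ in range(n_iter):
        X[mask] = M[mask]                      # keep the observed samples
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        s = np.maximum(s - tau, 0.0)           # shrink singular values
        X = (U * s) @ Vt
    X[mask] = M[mask]
    return X

rng = np.random.default_rng(0)
# Synthetic "EEG-like" low-rank data: 32 channels x 512 samples.
latent = rng.standard_normal((32, 4)) @ rng.standard_normal((4, 512))
mask = rng.random(latent.shape) < 0.5          # keep roughly half the samples
recovered = svt_complete(latent * mask, mask)
print("relative error:", np.linalg.norm(recovered - latent) / np.linalg.norm(latent))
```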
2013-01-01
Background: The dimensional approach to autism spectrum disorder (ASD) considers ASD as the extreme of a dimension traversing through the entire population. We explored the potential utility of electroencephalography (EEG) functional connectivity as a biomarker. We hypothesized that individual differences in autistic traits of typical subjects would involve a long-range connectivity diminution within the delta band. Methods: Resting-state EEG functional connectivity was measured for 74 neurotypical subjects. All participants also provided a questionnaire (Social Responsiveness Scale, SRS) that was completed by an informant who knows the participant in social settings. We conducted multivariate regression between the SRS score and functional connectivity in all EEG frequency bands. We explored modulations of network graph metrics characterizing the optimality of a network using the SRS score. Results: Our results show a decay in functional connectivity mainly within the delta and theta bands (the lower part of the EEG spectrum) associated with an increasing number of autistic traits. When inspecting the impact of autistic traits on the global organization of the functional network, we found that the optimal properties of the network are inversely related to the number of autistic traits, suggesting that the autistic dimension, throughout the entire population, modulates the efficiency of functional brain networks. Conclusions: EEG functional connectivity at low frequencies and its associated network properties may be associated with some autistic traits in the general population. PMID:23806204
Psychogenic seizures and frontal disconnection: EEG synchronisation study.
Knyazeva, Maria G; Jalili, Mahdi; Frackowiak, Richard S; Rossetti, Andrea O
2011-05-01
Psychogenic non-epileptic seizures (PNES) are paroxysmal events that, in contrast to epileptic seizures, are related to psychological causes without the presence of epileptiform EEG changes. Recent models suggest a multifactorial basis for PNES. A potentially paramount, but currently poorly understood factor is the interplay between psychiatric features and a specific vulnerability of the brain leading to a clinical picture that resembles epilepsy. Hypothesising that functional cerebral network abnormalities may predispose to the clinical phenotype, the authors undertook a characterisation of the functional connectivity in PNES patients. The authors analysed the whole-head surface topography of multivariate phase synchronisation (MPS) in interictal high-density EEG of 13 PNES patients as compared with 13 age- and sex-matched controls. MPS mapping reduces the wealth of dynamic data obtained from high-density EEG to easily readable synchronisation maps, which provide an unbiased overview of any changes in functional connectivity associated with distributed cortical abnormalities. The authors computed MPS maps for both Laplacian and common-average-reference EEGs. In a between-group comparison, only patchy, non-uniform changes in MPS survived conservative statistical testing. However, against the background of these unimpressive group results, the authors found widespread inverse correlations between individual PNES frequency and MPS within the prefrontal and parietal cortices. PNES appears to be associated with decreased prefrontal and parietal synchronisation, possibly reflecting dysfunction of networks within these regions.
As above, so below? Towards understanding inverse models in BCI
NASA Astrophysics Data System (ADS)
Lindgren, Jussi T.
2018-02-01
Objective. In brain-computer interfaces (BCI), measurements of the user’s brain activity are classified into commands for the computer. With EEG-based BCIs, the origins of the classified phenomena are often considered to be spatially localized in the cortical volume and mixed in the EEG. We investigate if more accurate BCIs can be obtained by reconstructing the source activities in the volume. Approach. We contrast the physiology-driven source reconstruction with data-driven representations obtained by statistical machine learning. We explain these approaches in a common linear dictionary framework and review the different ways to obtain the dictionary parameters. We consider the effect of source reconstruction on some major difficulties in BCI classification, namely information loss, feature selection and nonstationarity of the EEG. Main results. Our analysis suggests that the approaches differ mainly in their parameter estimation. Physiological source reconstruction may thus be expected to improve BCI accuracy if machine learning is not used or where it produces less optimal parameters. We argue that the considered difficulties of surface EEG classification can remain in the reconstructed volume and that data-driven techniques are still necessary. Finally, we provide some suggestions for comparing approaches. Significance. The present work illustrates the relationships between source reconstruction and machine learning-based approaches for EEG data representation. The provided analysis and discussion should help in understanding, applying, comparing and improving such techniques in the future.
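As a rough illustration of the linear dictionary view discussed above, the sketch below contrasts coefficients obtained from a physiology-driven dictionary (a leadfield with a minimum-norm pseudo-inverse) with those from a data-driven dictionary (leading principal components). Every matrix here is a synthetic placeholder rather than anything from the paper.

```python
# Hedged sketch of the "common linear dictionary" view: surface EEG x is
# modelled as x = D z, where D is either a physiological leadfield
# (source reconstruction) or learned from the data (PCA).
import numpy as np

rng = np.random.default_rng(1)
n_channels, n_sources, n_samples = 32, 200, 1000
leadfield = rng.standard_normal((n_channels, n_sources))        # placeholder forward model
sources = np.zeros((n_sources, n_samples))
sources[rng.choice(n_sources, 5, replace=False)] = rng.standard_normal((5, n_samples))
eeg = leadfield @ sources + 0.1 * rng.standard_normal((n_channels, n_samples))

# (a) physiology-driven coefficients: minimum-norm estimate z = D^+ x
z_phys = np.linalg.pinv(leadfield) @ eeg

# (b) data-driven dictionary: leading spatial principal components of the EEG
U, s, Vt = np.linalg.svd(eeg - eeg.mean(axis=1, keepdims=True), full_matrices=False)
D_data = U[:, :10]                      # first 10 spatial patterns
z_data = D_data.T @ eeg                 # coefficients that could feed a classifier

print(z_phys.shape, z_data.shape)
```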
EEG feature selection method based on decision tree.
Duan, Lijuan; Ge, Hui; Ma, Wei; Miao, Jun
2015-01-01
This paper aims to solve the automated feature selection problem in brain-computer interfaces (BCI). In order to automate the feature selection process, we proposed a novel EEG feature selection method based on decision trees (DT). During electroencephalogram (EEG) signal processing, a feature extraction method based on principal component analysis (PCA) was used, and the decision tree-based selection process was performed by searching the feature space and automatically selecting optimal features. Considering that EEG signals are non-linear, a support vector machine (SVM), a generalized linear classifier, was chosen. In order to test the validity of the proposed method, we applied the decision tree-based EEG feature selection method to BCI Competition II dataset Ia, and the experiment showed encouraging results.
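A minimal sketch of the described pipeline, under stated assumptions: PCA for feature extraction, a decision tree's feature importances standing in for the paper's feature-space search, and an SVM for classification. The data are random placeholders, not BCI Competition II dataset Ia.

```python
# Sketch: PCA feature extraction -> decision-tree-based feature selection -> SVM.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 64))       # 200 trials x 64 raw EEG features (placeholder)
y = rng.integers(0, 2, 200)              # binary labels (placeholder)

feats = PCA(n_components=20).fit_transform(X)             # feature extraction
tree = DecisionTreeClassifier(random_state=0).fit(feats, y)
top = np.argsort(tree.feature_importances_)[-5:]          # keep the 5 most informative features
scores = cross_val_score(SVC(kernel="rbf"), feats[:, top], y, cv=5)
print("CV accuracy: %.2f" % scores.mean())
```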
Use of parallel computing for analyzing big data in EEG studies of ambiguous perception
NASA Astrophysics Data System (ADS)
Maksimenko, Vladimir A.; Grubov, Vadim V.; Kirsanov, Daniil V.
2018-02-01
The problem of interaction between humans and machine systems through neuro-interfaces (or brain-computer interfaces) is an urgent task which requires the analysis of large amounts of neurophysiological EEG data. In the present paper we consider parallel computing methods as one of the most powerful tools for processing experimental data in real time, given the multichannel structure of EEG. In this context we demonstrate the application of parallel computing to the estimation of the spectral properties of multichannel EEG signals associated with visual perception. Using the CUDA C library, we run a wavelet-based algorithm on GPUs and show the possibility of detecting specific patterns in a multichannel set of EEG data in real time.
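The authors run their wavelet analysis with CUDA C on GPUs; the sketch below only illustrates the same channel-parallel structure on the CPU, using a process pool and a hand-rolled Morlet transform. The sampling rate and frequency grid are arbitrary assumptions.

```python
# CPU-side sketch of channel-parallel wavelet power estimation for multichannel EEG.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

FS = 250.0                                   # assumed sampling rate, Hz
FREQS = np.arange(4.0, 40.0, 2.0)            # frequencies of interest, Hz

def morlet_power(channel, fs=FS, freqs=FREQS, n_cycles=6):
    """Wavelet power (n_freqs x n_samples) for one EEG channel."""
    out = np.empty((len(freqs), len(channel)))
    for i, f in enumerate(freqs):
        sigma = n_cycles / (2 * np.pi * f)
        wt = np.arange(-4 * sigma, 4 * sigma, 1 / fs)
        wavelet = np.exp(2j * np.pi * f * wt) * np.exp(-wt**2 / (2 * sigma**2))
        out[i] = np.abs(np.convolve(channel, wavelet, mode="same")) ** 2
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    eeg = rng.standard_normal((32, 5000))    # 32 channels x 20 s of data (placeholder)
    with ProcessPoolExecutor() as pool:      # one channel per worker
        powers = list(pool.map(morlet_power, eeg))
    print(np.stack(powers).shape)            # (n_channels, n_freqs, n_samples)
```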
Potegal, Michael; Drewel, Elena H; MacDonald, John T
2018-01-01
We explored associations between EEG pathophysiology and emotional/behavioral (E/B) problems of children with two types of epilepsy using standard parent questionnaires and two new indicators: tantrums recorded by parents at home and brief, emotion-eliciting situations in the laboratory. Children with Benign Rolandic epilepsy (BRE, N = 6) reportedly had shorter, more angry tantrums from which they recovered quickly. Children with Complex Partial Seizures (CPS, N = 13) had longer, sadder tantrums often followed by bad moods. More generally, BRE correlated with anger and aggression; CPS with sadness and withdrawal. Scores of a composite group of siblings (N = 11) were generally intermediate between the BRE and CPS groups. Across all children, high-voltage theta and/or interictal epileptiform discharges (IEDs) correlated with negative emotional reactions. Such EEG abnormalities in the left hemisphere correlated with greater social fear, and right-hemisphere EEG abnormalities with greater anger. Right-hemisphere localization in CPS was also associated with parent-reported problems at home. If epilepsy alters neural circuitry, thereby increasing negative emotions, additional assessment of anti-epileptic drug treatment of epilepsy-related E/B problems would be warranted.
Measuring the face-sensitive N170 with a gaming EEG system: A validation study.
de Lissa, Peter; Sörensen, Sidsel; Badcock, Nicholas; Thie, Johnson; McArthur, Genevieve
2015-09-30
The N170 is a "face-sensitive" event-related potential (ERP) that occurs at around 170 ms over occipito-temporal brain regions. The N170's potential to provide insight into the neural processing of faces in certain populations (e.g., children and adults with cognitive impairments) is limited by its measurement in scientific laboratories that can appear threatening to some people. The advent of cheap, easy-to-use portable gaming EEG systems provides an opportunity to record EEG in new contexts and populations. This study tested the validity of the face-sensitive N170 ERP measured with an adapted commercial EEG system (the Emotiv EPOC) that is used at home by gamers. The N170 recorded through both the gaming EEG system and the research EEG system exhibited face-sensitivity, with larger mean amplitudes in response to the face stimuli than the non-face stimuli, and a delayed N170 peak in response to face inversion. The EPOC system produced very similar N170 ERPs to a research-grade Neuroscan system, and was capable of recording face-sensitivity in the N170, validating its use as a research tool in this arena. This opens new possibilities for measuring the face-sensitive N170 ERP in people who cannot travel to a traditional ERP laboratory (e.g., elderly people in care), who cannot tolerate laboratory conditions (e.g., people with autism), or who need to be tested in situ for practical or experimental reasons (e.g., children in schools). Copyright © 2015 Elsevier B.V. All rights reserved.
Electroencephalographic imaging of higher brain function
NASA Technical Reports Server (NTRS)
Gevins, A.; Smith, M. E.; McEvoy, L. K.; Leong, H.; Le, J.
1999-01-01
High temporal resolution is necessary to resolve the rapidly changing patterns of brain activity that underlie mental function. Electroencephalography (EEG) provides temporal resolution in the millisecond range. However, traditional EEG technology and practice provide insufficient spatial detail to identify relationships between brain electrical events and structures and functions visualized by magnetic resonance imaging or positron emission tomography. Recent advances help to overcome this problem by recording EEGs from more electrodes, by registering EEG data with anatomical images, and by correcting the distortion caused by volume conduction of EEG signals through the skull and scalp. In addition, statistical measurements of sub-second interdependences between EEG time-series recorded from different locations can help to generate hypotheses about the instantaneous functional networks that form between different cortical regions during perception, thought and action. Example applications are presented from studies of language, attention and working memory. Along with its unique ability to monitor brain function as people perform everyday activities in the real world, these advances make modern EEG an invaluable complement to other functional neuroimaging modalities.
Vol'f, N V
1998-01-01
Sex differences in the hemispheric organization of verbal functions were shown in experiments with dichotic presentation of word lists, in Sternberg's memory scanning task, and in studies of EEG power and coherence while memorizing lists of dichotically presented words. The efficiency of word retrieval and the speed of memory scanning for stimuli presented to the right hemisphere were higher in women. EEG activation while memorizing words was more pronounced in men. There were negative correlations between left-ear word retrieval and EEG activation in women. The author's findings showed sexual dimorphism in functional connections within the cortical regions of the brain while memorizing verbal information. The changes in coherence were positively correlated with the efficiency of word retrieval in women and inversely correlated in men, suggesting that the physiological significance of changes in coherence differs between men and women.
Topological properties of flat electroencephalography's state space
NASA Astrophysics Data System (ADS)
Ken, Tan Lit; Ahmad, Tahir bin; Mohd, Mohd Sham bin; Ngien, Su Kong; Suwa, Tohru; Meng, Ong Sie
2016-02-01
The neuroinverse problem is often associated with complex neuronal activity and involves locating problematic cells, which is highly challenging. While epileptic foci localization is possible with the aid of EEG signals, it relies greatly on the ability to extract hidden information or patterns within EEG signals. Flat EEG, an enhancement of EEG, is a way of viewing the electroencephalogram on the real plane. From the perspective of dynamical systems, Flat EEG is equivalent to epileptic seizure, making it a great platform for studying epileptic seizures. Throughout the years, various mathematical tools have been applied to Flat EEG to extract hidden information that is hardly noticeable by traditional visual inspection. While these tools have given worthy results, a complete understanding of the seizure process has yet to be achieved. Since the underlying structure of Flat EEG is dynamic and is deemed to contain rich information about the brainstorm, it is appealing to explore its structures in depth. To better understand the complex seizure process, this paper studies the event of epileptic seizure via Flat EEG in a more general framework by means of topology, particularly on the state space where the event of Flat EEG lies.
Saa, Jaime F Delgado; Çetin, Müjdat
2012-04-01
We consider the problem of classification of imaginary motor tasks from electroencephalography (EEG) data for brain-computer interfaces (BCIs) and propose a new approach based on hidden conditional random fields (HCRFs). HCRFs are discriminative graphical models that are attractive for this problem because they (1) exploit the temporal structure of EEG; (2) include latent variables that can be used to model different brain states in the signal; and (3) involve learned statistical models matched to the classification task, avoiding some of the limitations of generative models. Our approach involves spatial filtering of the EEG signals and estimation of power spectra based on autoregressive modeling of temporal segments of the EEG signals. Given this time-frequency representation, we select certain frequency bands that are known to be associated with execution of motor tasks. These selected features constitute the data that are fed to the HCRF, parameters of which are learned from training data. Inference algorithms on the HCRFs are used for the classification of motor tasks. We experimentally compare this approach to the best performing methods in BCI competition IV as well as a number of more recent methods and observe that our proposed method yields better classification accuracy.
EEG seizure detection and prediction algorithms: a survey
NASA Astrophysics Data System (ADS)
Alotaiby, Turkey N.; Alshebeili, Saleh A.; Alshawi, Tariq; Ahmad, Ishtiaq; Abd El-Samie, Fathi E.
2014-12-01
Epilepsy patients experience challenges in daily life due to precautions they have to take in order to cope with this condition. When a seizure occurs, it might cause injuries or endanger the life of the patients or others, especially when they are using heavy machinery, e.g., driving cars. Studies of epilepsy often rely on electroencephalogram (EEG) signals in order to analyze the behavior of the brain during seizures. Locating the seizure period in EEG recordings manually is difficult and time-consuming; one often needs to skim through tens or even hundreds of hours of EEG recordings. Therefore, automatic detection of such activity is of great importance. Another potential usage of EEG signal analysis is in the prediction of epileptic activities before they occur, as this will enable the patients (and caregivers) to take appropriate precautions. In this paper, we first present an overview of the seizure detection and prediction problem and provide insights on the challenges in this area. Second, we cover some of the state-of-the-art seizure detection and prediction algorithms and provide a comparison between these algorithms. Finally, we conclude with future research directions and open problems in this topic.
Measurement and modification of the EEG and related behavior
NASA Technical Reports Server (NTRS)
Sterman, M. B.
1991-01-01
Electrophysiological changes in the sensorimotor pathways were found to accompany the effect of rhythmic EEG patterns in the sensorimotor cortex. Additionally, several striking behavioral changes were seen, including in particular an enhancement of sleep and an elevation of seizure threshold to epileptogenic agents. This raised the possibility that human seizure disorders might be influenced therapeutically by similar training. Our objective in human EEG feedback training became not only the facilitation of normal rhythmic patterns, but also the suppression of abnormal activity, thus requiring complex contingencies directed to the normalization of the sensorimotor EEG. To achieve this, a multicomponent frequency analysis was developed to extract and separate normal and abnormal elements of the EEG signal. Each of these elements was transduced to a specific component of a visual display system, and these were combined through logic circuits to present the subject with a symbolic display. Variable criteria provided for the gradual shaping of EEG elements towards the desired normal pattern. Some 50-70% of patients with poorly controlled seizure disorders experienced therapeutic benefits from this approach in our laboratory, and subsequently in many others. A more recent application of this approach to the modification of human brain function in our lab has been directed to the dichotomous problems of task overload and underload in the contemporary aviation environment. At least 70% of all aviation accidents have been attributed to the impact of these kinds of problems on crew performance. The use of EEG in this context has required many technical innovations and the application of the latest advances in EEG signal analysis. Our first goal has been the identification of relevant EEG characteristics. Additionally, we have developed a portable recording and analysis system for application in this context. Findings from laboratory and in-flight studies suggest that we will be able to detect appropriate changes in brain function, and feed this information to on-board computers for modification of mission requirements and/or crew status.
A random forest model based classification scheme for neonatal amplitude-integrated EEG.
Chen, Weiting; Wang, Yu; Cao, Guitao; Chen, Guoqiang; Gu, Qiufang
2014-01-01
Modern medical advances have greatly increased the survival rate of infants, yet these infants remain at higher risk for neurological problems later in life. For infants with encephalopathy or seizures, identification of the extent of brain injury is clinically challenging. Continuous amplitude-integrated electroencephalography (aEEG) monitoring offers a possibility to directly monitor the functional brain state of newborns over hours, and has seen increasing application in neonatal intensive care units (NICUs). This paper presents a novel combined feature set for aEEG and applies the random forest (RF) method to classify aEEG tracings. To that end, a series of experiments were conducted on 282 aEEG tracing cases (209 normal and 73 abnormal ones). Basic features, statistic features and segmentation features were extracted from both the tracing as a whole and the segmented recordings, and then combined into a single feature set. All the features were then sent to a classifier. The significance of the features, the data segmentation, the optimization of RF parameters, and the problem of imbalanced datasets were examined through experiments. Experiments were also done to evaluate the performance of RF on aEEG signal classification, compared with several other widely used classifiers including SVM-Linear, SVM-RBF, ANN, Decision Tree (DT), Logistic Regression (LR), ML, and LDA. The combined feature set characterizes aEEG signals better than the basic, statistic and segmentation features taken separately. With the combined feature set, the proposed RF-based aEEG classification system achieved a correct rate of 92.52% and a high F1-score of 95.26%. Among all seven classifiers examined in our work, the RF method achieved the highest correct rate, sensitivity, specificity, and F1-score, which means that RF outperforms all of the other classifiers considered here. The results show that the proposed RF-based aEEG classification system with the combined feature set is efficient and helpful for better detecting brain disorders in newborns.
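A compact sketch of the classification stage, assuming a generic feature matrix in place of the paper's basic/statistic/segmentation features; class weighting is shown as one simple way to address the 209-versus-73 imbalance mentioned above, not necessarily the authors' choice.

```python
# Sketch: random forest classification of aEEG tracings from a combined feature set.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
X = rng.standard_normal((282, 24))                 # 282 tracings x 24 features (placeholder)
y = np.r_[np.zeros(209, dtype=int), np.ones(73, dtype=int)]   # 0 = normal, 1 = abnormal

clf = RandomForestClassifier(n_estimators=300, class_weight="balanced",
                             random_state=0)
f1 = cross_val_score(clf, X, y, cv=5, scoring="f1")
print("mean F1 on placeholder data: %.2f" % f1.mean())
```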
Automatic Seizure Detection in Rats Using Laplacian EEG and Verification with Human Seizure Signals
Feltane, Amal; Boudreaux-Bartels, G. Faye; Besio, Walter
2012-01-01
Automated detection of seizures is still a challenging problem. This study presents an approach to detect seizure segments in Laplacian electroencephalography (tEEG) recorded from rats using the tripolar concentric ring electrode (TCRE) configuration. Three features, namely, median absolute deviation, approximate entropy, and maximum singular value were calculated and used as inputs into two different classifiers: support vector machines and adaptive boosting. The relative performance of the extracted features on TCRE tEEG was examined. Results are obtained with an overall accuracy between 84.81 and 96.51%. In addition to using TCRE tEEG data, the seizure detection algorithm was also applied to the recorded EEG signals from Andrzejak et al. database to show the efficiency of the proposed method for seizure detection. PMID:23073989
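The three per-segment features named above can be sketched as follows; the matrix used for the maximum singular value is not specified here, so a delay-embedding of the segment is assumed, and a standard SVM stands in for either classifier.

```python
# Sketch: median absolute deviation, approximate entropy, and maximum singular
# value computed per segment, then fed to an SVM. Data are placeholders.
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view
from sklearn.svm import SVC

def approx_entropy(x, m=2, r=None):
    """Approximate entropy ApEn(m, r) of a 1-D signal."""
    x = np.asarray(x, dtype=float)
    r = 0.2 * x.std() if r is None else r
    def phi(k):
        emb = sliding_window_view(x, k)
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        return np.mean(np.log((d <= r).mean(axis=1)))   # self-match included
    return phi(m) - phi(m + 1)

def segment_features(seg):
    mad = np.median(np.abs(seg - np.median(seg)))       # median absolute deviation
    apen = approx_entropy(seg)
    emb = sliding_window_view(seg, 20)                  # assumed delay-embedding matrix
    smax = np.linalg.svd(emb, compute_uv=False)[0]      # maximum singular value
    return [mad, apen, smax]

rng = np.random.default_rng(5)
segments = rng.standard_normal((40, 256))               # placeholder tEEG segments
labels = rng.integers(0, 2, 40)                         # 1 = seizure (placeholder)
X = np.array([segment_features(s) for s in segments])
clf = SVC(kernel="rbf").fit(X, labels)
print("training accuracy on placeholder data:", clf.score(X, labels))
```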
Siddiqui, Mohd Maroof; Srivastava, Geetika; Saeed, Syed Hasan
2016-01-01
Insomnia is a sleep disorder in which the subject encounters problems in sleeping. The aim of this study is to identify insomnia events in normal and affected persons using time-frequency analysis based on the power spectral density (PSD) of EEG signals from the ROC-LOC channel. In this research article, the attributes and waveforms of human EEG signals are examined, and the results are expressed as a spectral analysis of the changes across the different stages of sleep. The PSD analysis and calculation are performed on each EEG segment in all sleep stages. Results indicate the possibility of recognizing insomnia events based on the delta, theta, alpha and beta segments of EEG signals.
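A minimal sketch of the band-wise PSD computation described above, using Welch's method on one epoch; the sampling rate, epoch length and band edges are assumptions for illustration, not values from the study.

```python
# Sketch: Welch band power in the delta, theta, alpha and beta ranges of one EEG epoch.
import numpy as np
from scipy.signal import welch

FS = 100.0                                         # assumed sampling rate, Hz
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(segment, fs=FS):
    freqs, psd = welch(segment, fs=fs, nperseg=int(4 * fs))
    df = freqs[1] - freqs[0]
    return {name: float(psd[(freqs >= lo) & (freqs < hi)].sum() * df)
            for name, (lo, hi) in BANDS.items()}

rng = np.random.default_rng(6)
segment = rng.standard_normal(int(30 * FS))        # one 30-s epoch (placeholder)
print(band_powers(segment))
```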
Wireless multichannel electroencephalography in the newborn.
Ibrahim, Z H; Chari, G; Abdel Baki, S; Bronshtein, V; Kim, M R; Weedon, J; Cracco, J; Aranda, J V
2016-01-01
First, to determine the feasibility of an ultra-compact wireless device (microEEG) to obtain multichannel electroencephalographic (EEG) recording in the Neonatal Intensive Care Unit (NICU). Second, to identify problem areas in order to improve wireless EEG performance. 28 subjects (gestational age 24-30 weeks, postnatal age <30 days) were recruited at 2 sites as part of an ongoing study of neonatal apnea and wireless EEG. Infants underwent 8-9 hour EEG recordings every 2-4 weeks using an electrode cap (ANT-Neuro) connected to the wireless EEG device (Bio-Signal Group). A 23 electrode configuration was used incorporating the International 10-20 System. The device transmitted recordings wirelessly to a laptop computer for bedside assessment. The recordings were assessed by a pediatric neurophysiologist for interpretability. A total of 84 EEGs were recorded from 28 neonates. 61 EEG studies were obtained in infants prior to 35 weeks corrected gestational age (CGA). NICU staff placed all electrode caps and initiated all recordings. Of these recordings 6 (10%) were uninterpretable due to artifacts and one study could not be accessed. The remaining 54 (89%) EEG recordings were acceptable for clinical review and interpretation by a pediatric neurophysiologist. Of the recordings obtained at 35 weeks corrected gestational age or later only 11 out of 23 (48%) were interpretable. Wireless EEG devices can provide practical, continuous, multichannel EEG monitoring in preterm neonates. Their small size and ease of use could overcome obstacles associated with EEG recording and interpretation in the NICU.
Some sequential, distribution-free pattern classification procedures with applications
NASA Technical Reports Server (NTRS)
Poage, J. L.
1971-01-01
Some sequential, distribution-free pattern classification techniques are presented. The decision problem to which the proposed classification methods are applied is that of discriminating between two kinds of electroencephalogram responses recorded from a human subject: spontaneous EEG and EEG driven by a stroboscopic light stimulus at the alpha frequency. The classification procedures proposed make use of the theory of order statistics. Estimates of the probabilities of misclassification are given. The procedures were tested on Gaussian samples and the EEG responses.
An Inflatable and Wearable Wireless System for Making 32-Channel Electroencephalogram Measurements.
Yu, Yi-Hsin; Lu, Shao-Wei; Chuang, Chun-Hsiang; King, Jung-Tai; Chang, Che-Lun; Chen, Shi-An; Chen, Sheng-Fu; Lin, Chin-Teng
2016-07-01
Portable electroencephalography (EEG) devices have become critical for important research. They have various applications, such as in brain-computer interfaces (BCI). Numerous recent investigations have focused on the development of dry sensors, but few concern the simultaneous attachment of high-density dry sensors to different regions of the scalp to receive qualified EEG signals from hairy sites. An inflatable and wearable wireless 32-channel EEG device was designed, prototyped, and experimentally validated for making EEG signal measurements; it incorporates spring-loaded dry sensors and a novel gasbag design to solve the problem of interference by hair. The cap is ventilated and incorporates a circuit board and battery with a high-tolerance wireless (Bluetooth) protocol and low power consumption characteristics. The proposed system provides a 500/250 Hz sampling rate and 24-bit EEG data to meet the BCI system data requirement. Experimental results prove that the proposed EEG system is effective in measuring audio event-related potentials, visual event-related potentials, and rapid serial visual presentation. Results of this work demonstrate that the proposed EEG cap system performs well in making EEG measurements and is feasible for practical applications.
High-resolution EEG techniques for brain-computer interface applications.
Cincotti, Febo; Mattia, Donatella; Aloise, Fabio; Bufalari, Simona; Astolfi, Laura; De Vico Fallani, Fabrizio; Tocci, Andrea; Bianchi, Luigi; Marciani, Maria Grazia; Gao, Shangkai; Millan, Jose; Babiloni, Fabio
2008-01-15
High-resolution electroencephalographic (HREEG) techniques allow estimation of cortical activity based on non-invasive scalp potential measurements, using appropriate models of volume conduction and of neuroelectrical sources. In this study we propose an application of this body of technologies, originally developed to obtain functional images of the brain's electrical activity, in the context of brain-computer interfaces (BCI). Our working hypothesis predicted that, since HREEG pre-processing removes spatial correlation introduced by current conduction in the head structures, by providing the BCI with waveforms that are mostly due to the unmixed activity of a small cortical region, a more reliable classification would be obtained, at least when the activity to detect has a limited generator, which is the case in motor-related tasks. HREEG techniques employed in this study rely on (i) individual head models derived from anatomical magnetic resonance images, (ii) a distributed source model, composed of a layer of current dipoles, geometrically constrained to the cortical mantle, (iii) a depth-weighted minimum L2-norm constraint and Tikhonov regularization for linear inverse problem solution and (iv) estimation of electrical activity in cortical regions of interest corresponding to relevant Brodmann areas. Six subjects were trained to learn self-modulation of sensorimotor EEG rhythms, related to the imagination of limb movements. Off-line EEG data was used to estimate waveforms of cortical activity (cortical current density, CCD) on selected regions of interest. CCD waveforms were fed into the BCI computational pipeline as an alternative to raw EEG signals; spectral features are evaluated through statistical tests (r² analysis), to quantify their reliability for BCI control. These results are compared, within subjects, to analogous results obtained without HREEG techniques. The processing procedure was designed in such a way that computations could be split into a setup phase (which includes most of the computational burden) and the actual EEG processing phase, which was limited to a single matrix multiplication. This separation made the procedure suitable for on-line utilization, and a pilot experiment was performed. Results show that lateralization of electrical activity, which is expected to be contralateral to the imagined movement, is more evident on the estimated CCDs than in the scalp potentials. CCDs produce a pattern of relevant spectral features that is more spatially focused, and has a higher statistical significance (EEG: 0.20±0.114 S.D.; CCD: 0.55±0.16 S.D.; p=10^-5). A pilot experiment showed that a trained subject could utilize voluntary modulation of estimated CCDs for accurate (eight targets) on-line control of a cursor. This study showed that it is practically feasible to utilize HREEG techniques for on-line operation of a BCI system; off-line analysis suggests that accuracy of BCI control is enhanced by the proposed method.
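Step (iii) above corresponds to a weighted minimum-norm inverse of the form s = W A^T (A W A^T + lambda*I)^(-1) y, with W a diagonal depth-weighting prior; the sketch below uses a random placeholder leadfield and data, so it only illustrates the linear-algebraic structure, not the paper's individualized head models.

```python
# Sketch: depth-weighted minimum-L2-norm (Tikhonov-regularized) inverse operator.
import numpy as np

rng = np.random.default_rng(7)
n_chan, n_dip = 64, 500
A = rng.standard_normal((n_chan, n_dip))              # leadfield (placeholder)
w = 1.0 / np.linalg.norm(A, axis=0) ** 2              # depth weights: boost deep dipoles
W = np.diag(w)                                        # diagonal source prior covariance
lam = 0.1                                             # regularization (placeholder)

y = rng.standard_normal((n_chan, 100))                # scalp potentials over time
K = W @ A.T @ np.linalg.inv(A @ W @ A.T + lam * np.eye(n_chan))   # inverse operator
ccd = K @ y                                           # cortical current density estimates
print(ccd.shape)                                      # (n_dip, n_timepoints)
```

Once K is precomputed in the setup phase, applying it on-line is indeed a single matrix multiplication, as the abstract notes.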
Statistical Feature Extraction for Artifact Removal from Concurrent fMRI-EEG Recordings
Liu, Zhongming; de Zwart, Jacco A.; van Gelderen, Peter; Kuo, Li-Wei; Duyn, Jeff H.
2011-01-01
We propose a set of algorithms for sequentially removing artifacts related to MRI gradient switching and cardiac pulsations from electroencephalography (EEG) data recorded during functional magnetic resonance imaging (fMRI). Special emphases are directed upon the use of statistical metrics and methods for the extraction and selection of features that characterize gradient and pulse artifacts. To remove gradient artifacts, we use a channel-wise filtering based on singular value decomposition (SVD). To remove pulse artifacts, we first decompose data into temporally independent components and then select a compact cluster of components that possess sustained high mutual information with the electrocardiogram (ECG). After the removal of these components, the time courses of remaining components are filtered by SVD to remove the temporal patterns phase-locked to the cardiac markers derived from the ECG. The filtered component time courses are then inversely transformed into multi-channel EEG time series free of pulse artifacts. Evaluation based on a large set of simultaneous EEG-fMRI data obtained during a variety of behavioral tasks, sensory stimulations and resting conditions showed excellent data quality and robust performance attainable by the proposed methods. These algorithms have been implemented as a Matlab-based toolbox made freely available for public access and research use. PMID:22036675
Statistical feature extraction for artifact removal from concurrent fMRI-EEG recordings.
Liu, Zhongming; de Zwart, Jacco A; van Gelderen, Peter; Kuo, Li-Wei; Duyn, Jeff H
2012-02-01
We propose a set of algorithms for sequentially removing artifacts related to MRI gradient switching and cardiac pulsations from electroencephalography (EEG) data recorded during functional magnetic resonance imaging (fMRI). Special emphasis is directed upon the use of statistical metrics and methods for the extraction and selection of features that characterize gradient and pulse artifacts. To remove gradient artifacts, we use channel-wise filtering based on singular value decomposition (SVD). To remove pulse artifacts, we first decompose data into temporally independent components and then select a compact cluster of components that possess sustained high mutual information with the electrocardiogram (ECG). After the removal of these components, the time courses of remaining components are filtered by SVD to remove the temporal patterns phase-locked to the cardiac timing markers derived from the ECG. The filtered component time courses are then inversely transformed into multi-channel EEG time series free of pulse artifacts. Evaluation based on a large set of simultaneous EEG-fMRI data obtained during a variety of behavioral tasks, sensory stimulations and resting conditions showed excellent data quality and robust performance attainable with the proposed methods. These algorithms have been implemented as a Matlab-based toolbox made freely available for public access and research use. Published by Elsevier Inc.
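The channel-wise SVD filtering of gradient artifacts can be sketched as follows: epochs time-locked to the scanner volume triggers are stacked, the dominant singular components (the highly reproducible gradient waveform) are removed, and the cleaned epochs are written back. Trigger handling, epoch length and the number of removed components are placeholders, and the full method also includes the ICA-based pulse-artifact stage described above.

```python
# Simplified sketch of channel-wise SVD filtering of MRI gradient artifacts.
import numpy as np

def svd_gradient_clean(channel, trigger_idx, epoch_len, n_remove=2):
    """Remove the dominant, repeatable artifact components from one channel."""
    epochs = np.array([channel[t:t + epoch_len] for t in trigger_idx])
    U, s, Vt = np.linalg.svd(epochs, full_matrices=False)
    s[:n_remove] = 0.0                        # drop the strongest shared components
    cleaned = (U * s) @ Vt
    out = channel.astype(float).copy()
    for epoch, t in zip(cleaned, trigger_idx):
        out[t:t + epoch_len] = epoch
    return out

rng = np.random.default_rng(8)
tr = 500                                      # samples per MR volume (placeholder)
brain = rng.standard_normal(20 * tr)          # "true" EEG
artifact = np.tile(50 * np.sin(np.linspace(0, 40 * np.pi, tr)), 20)
triggers = np.arange(0, 20 * tr, tr)
cleaned = svd_gradient_clean(brain + artifact, triggers, tr)
print("correlation with true EEG:", np.corrcoef(cleaned, brain)[0, 1])
```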
Ping-Keng Jao; Yuan-Pin Lin; Yi-Hsuan Yang; Tzyy-Ping Jung
2015-08-01
An emerging challenge for emotion classification using electroencephalography (EEG) is how to effectively alleviate day-to-day variability in raw data. This study employed the robust principal component analysis (RPCA) to address the problem with a posed hypothesis that background or emotion-irrelevant EEG perturbations lead to certain variability across days and somehow submerge emotion-related EEG dynamics. The empirical results of this study clearly validated our hypothesis and demonstrated the RPCA's feasibility through the analysis of a five-day dataset of 12 subjects. The RPCA allowed separating the sparse emotion-relevant EEG dynamics from the accompanying background perturbations across days. Subsequently, leveraging the RPCA-purified EEG trials from more days appeared to improve the emotion-classification performance steadily, which was not found in the case using the raw EEG features. Therefore, incorporating the RPCA with existing emotion-aware machine-learning frameworks on a longitudinal dataset of each individual may shed light on the development of a robust affective brain-computer interface (ABCI) that can alleviate ecological inter-day variability.
Synchronizing MIDI and wireless EEG measurements during natural piano performance.
Zamm, Anna; Palmer, Caroline; Bauer, Anna-Katharina R; Bleichner, Martin G; Demos, Alexander P; Debener, Stefan
2017-07-08
Although music performance has been widely studied in the behavioural sciences, less work has addressed the underlying neural mechanisms, perhaps due to technical difficulties in acquiring high-quality neural data during tasks requiring natural motion. The advent of wireless electroencephalography (EEG) presents a solution to this problem by allowing for neural measurement with minimal motion artefacts. In the current study, we provide the first validation of a mobile wireless EEG system for capturing the neural dynamics associated with piano performance. First, we propose a novel method for synchronously recording music performance and wireless mobile EEG. Second, we provide results of several timing tests that characterize the timing accuracy of our system. Finally, we report EEG time domain and frequency domain results from N=40 pianists demonstrating that wireless EEG data capture the unique temporal signatures of musicians' performances with fine-grained precision and accuracy. Taken together, we demonstrate that mobile wireless EEG can be used to measure the neural dynamics of piano performance with minimal motion constraints. This opens many new possibilities for investigating the brain mechanisms underlying music performance. Copyright © 2017 Elsevier B.V. All rights reserved.
Disordered high-frequency oscillation in face processing in schizophrenia patients
Liu, Miaomiao; Pei, Guangying; Peng, Yinuo; Wang, Changming; Yan, Tianyi; Wu, Jinglong
2018-01-01
Schizophrenia is a complex disorder characterized by marked social dysfunctions, but the neural mechanism underlying this deficit is unknown. To investigate whether face-specific perceptual processes are influenced in schizophrenia patients, both face detection and configural analysis were assessed in normal individuals and schizophrenia patients by recording electroencephalogram (EEG) data. Here, a face processing model was built based on the frequency oscillations, and the evoked power (theta, alpha, and beta bands) and the induced power (gamma bands) were recorded while the subjects passively viewed face and nonface images presented in upright and inverted orientations. The healthy adults showed a significant face-specific effect in the alpha, beta, and gamma bands, and an inversion effect was observed in the gamma band in the occipital lobe and right temporal lobe. Importantly, the schizophrenia patients showed face-specific deficits in the low-frequency beta and gamma bands, and the face inversion effect in the gamma band was absent from the occipital lobe. All these results revealed that face-specific processing deficits in patients arise from disordered high-frequency EEG activity, providing additional evidence to enrich future studies investigating neural mechanisms and potentially serving as a basis for diagnosis. PMID:29419668
Govindan, R B; Kota, Srinivas; Al-Shargabi, Tareq; Massaro, An N; Chang, Taeun; du Plessis, Adre
2016-09-01
Electroencephalogram (EEG) signals are often contaminated by the electrocardiogram (ECG) interference, which affects quantitative characterization of EEG. We propose null-coherence, a frequency-based approach, to attenuate the ECG interference in EEG using simultaneously recorded ECG as a reference signal. After validating the proposed approach using numerically simulated data, we apply this approach to EEG recorded from six newborns receiving therapeutic hypothermia for neonatal encephalopathy. We compare our approach with an independent component analysis (ICA), a previously proposed approach to attenuate ECG artifacts in the EEG signal. The power spectrum and the cortico-cortical connectivity of the ECG attenuated EEG was compared against the power spectrum and the cortico-cortical connectivity of the raw EEG. The null-coherence approach attenuated the ECG contamination without leaving any residual of the ECG in the EEG. We show that the null-coherence approach performs better than ICA in attenuating the ECG contamination without enhancing cortico-cortical connectivity. Our analysis suggests that using ICA to remove ECG contamination from the EEG suffers from redistribution problems, whereas the null-coherence approach does not. We show that both the null-coherence and ICA approaches attenuate the ECG contamination. However, the EEG obtained after ICA cleaning displayed higher cortico-cortical connectivity compared with that obtained using the null-coherence approach. This suggests that null-coherence is superior to ICA in attenuating the ECG interference in EEG for cortico-cortical connectivity analysis. Copyright © 2016 Elsevier B.V. All rights reserved.
The Success Rate of Neurology Residents in EEG Interpretation After Formal Training.
Dericioglu, Nese; Ozdemir, Pınar
2018-03-01
EEG is an important tool for neurologists in both diagnosis and classification of seizures. It is not uncommon in clinical practice to see patients who were erroneously diagnosed as epileptic. Most of the time, incorrect interpretation of the EEG contributes significantly to this problem. In this study, we aimed to investigate the success rate of neurology residents in EEG interpretation after formal training. Eleven neurology residents were included in the study. Duration of EEG training (3 vs 4 months) and time since completion of EEG education were determined. Residents were randomly presented 30 different slides of representative EEG screenshots. They received 1 point for each correct response. The effects of training duration and time since training were investigated statistically. In addition, we looked at the success rate of each question to see whether certain patterns were more readily recognized than others. EEG training duration (P = .93) and time since completion of training (P = .16) did not influence the results. The success rate of residents for correct responses was between 17% and 50%. On the other hand, the success rate for each question varied between 0% and 91%. Overall, benign variants and focal ictal onset patterns were the most difficult to recognize. On 13 occasions (6.5%) nonepileptiform patterns were thought to represent epileptiform abnormalities. After formal training, neurology residents could identify ≤50% of the EEG patterns correctly. The wide variation in success rate among residents and also between questions implies that both personal characteristics and inherent EEG features influence successful EEG interpretation.
Jech, Robert; Růzicka, Evzen; Urgosík, Dusan; Serranová, Tereza; Volfová, Markéta; Nováková, Olga; Roth, Jan; Dusek, Petr; Mecír, Petr
2006-05-01
We studied changes of the EEG spectral power induced by deep brain stimulation (DBS) of the subthalamic nucleus (STN) in patients with Parkinson's disease (PD). Also analyzed were changes of visual evoked potentials (VEP) with DBS on and off. Eleven patients with advanced PD treated with bilateral DBS STN were examined after an overnight withdrawal of L-DOPA and 2 h after switching off the neurostimulators. All underwent clinical examination followed by resting EEG and VEP recordings, a procedure repeated after DBS STN was switched on. With DBS switched on, the dominant EEG frequency increased from 9.44±1.3 to 9.71±1.3 Hz (P<0.01) while its relative spectral power dropped by 11% on average (P<0.05). Switching on the neurostimulators caused a decrease in the N70/P100 amplitude of the VEP (P<0.01), which inversely correlated with the intensity of DBS (black-and-white pattern: P<0.01; color pattern: P<0.05). Despite artifacts generated by neurostimulators, the VEP and resting EEG were suitable for the detection of effects related to DBS STN. The acceleration of dominant frequency in the alpha band may be evidence of DBS STN influence on speeding up of intracortical oscillations. The spectral power decrease, seen mainly in the fronto-central region, might reflect a desynchronization in the premotor and motor circuits, though no movement was executed. Similarly, desynchronization of the cortical activity recorded posteriorly may be responsible for the VEP amplitude decrease, implying DBS STN-related influence even on the visual system. Changes in idling EEG activity observed diffusely over the scalp, together with involvement of the VEP, suggest that the effects of DBS STN reach far beyond the motor system, influencing the basic mechanisms of rhythmic cortical oscillations.
Computerized recognition of persons by EEG spectral patterns.
Stassen, H H
1980-07-01
Modified techniques of communication theory in connection with multivariate statistical procedures were applied to a sample of 82 patients for the purpose of defining EEG spectral patterns and for solving the relevant classification problems. Ten measurements per patient were made and it could be shown that a subject can be characterized and be recognized by his EEG spectral pattern with high reliability and a confidence probability of almost 90%. This result is valid not only for normal adults but also for schizophrenic patients, implying a close relationship between the EEG spectral pattern and the individual person. At the moment the nature of this relationship is not clear; in particular the supposed relationship to psychopathology could not be proved.
Attentional Selection in a Cocktail Party Environment Can Be Decoded from Single-Trial EEG
O'Sullivan, James A.; Power, Alan J.; Mesgarani, Nima; Rajaram, Siddharth; Foxe, John J.; Shinn-Cunningham, Barbara G.; Slaney, Malcolm; Shamma, Shihab A.; Lalor, Edmund C.
2015-01-01
How humans solve the cocktail party problem remains unknown. However, progress has been made recently thanks to the realization that cortical activity tracks the amplitude envelope of speech. This has led to the development of regression methods for studying the neurophysiology of continuous speech. One such method, known as stimulus-reconstruction, has been successfully utilized with cortical surface recordings and magnetoencephalography (MEG). However, the former is invasive and gives a relatively restricted view of processing along the auditory hierarchy, whereas the latter is expensive and rare. Thus it would be extremely useful for research in many populations if stimulus-reconstruction was effective using electroencephalography (EEG), a widely available and inexpensive technology. Here we show that single-trial (≈60 s) unaveraged EEG data can be decoded to determine attentional selection in a naturalistic multispeaker environment. Furthermore, we show a significant correlation between our EEG-based measure of attention and performance on a high-level attention task. In addition, by attempting to decode attention at individual latencies, we identify neural processing at ∼200 ms as being critical for solving the cocktail party problem. These findings open up new avenues for studying the ongoing dynamics of cognition using EEG and for developing effective and natural brain–computer interfaces. PMID:24429136
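A hedged sketch of the stimulus-reconstruction idea: a ridge-regression decoder maps time-lagged EEG to the speech envelope, and the attended talker is taken to be the one whose envelope correlates best with the reconstruction. Lags, regularization, and the synthetic envelopes and EEG below are all illustrative assumptions.

```python
# Sketch: backward (stimulus-reconstruction) decoding of the attended speech envelope.
import numpy as np
from scipy.ndimage import uniform_filter1d

def lagged(x, n_lags):
    """Stack time-lagged copies of each channel: (time, channels * n_lags)."""
    return np.hstack([np.roll(x, lag, axis=0) for lag in range(n_lags)])

rng = np.random.default_rng(9)
t, c = 3000, 16
attended = uniform_filter1d(rng.standard_normal(t), size=25)   # fake speech envelopes
ignored = uniform_filter1d(rng.standard_normal(t), size=25)
eeg = attended[:, None] * rng.standard_normal(c) + 0.5 * rng.standard_normal((t, c))

X = lagged(eeg, n_lags=8)                     # time-lagged EEG regressors
lam = 1e2                                     # ridge regularization (placeholder)
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ attended)
recon = X @ w                                 # reconstructed envelope
print("attended r:", np.corrcoef(recon, attended)[0, 1])
print("ignored  r:", np.corrcoef(recon, ignored)[0, 1])
```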
Abend, Nicholas S.; Dlugos, Dennis J.; Hahn, Cecil D.; Hirsch, Lawrence J.; Herman, Susan T.
2010-01-01
Background: Continuous EEG monitoring (cEEG) of critically ill patients is frequently utilized to detect non-convulsive seizures (NCS) and status epilepticus (NCSE). The indications for cEEG, as well as when and how to treat NCS, remain unclear. We aimed to describe the current practice of cEEG in critically ill patients to define areas of uncertainty that could aid in designing future research. Methods: We conducted an international survey of neurologists focused on cEEG utilization and NCS management. Results: Three hundred and thirty physicians completed the survey. 83% use cEEG at least once per month and 86% manage NCS at least five times per year. The use of cEEG in patients with altered mental status was common (69%), with higher use if the patient had a prior convulsion (89%) or abnormal eye movements (85%). Most respondents would continue cEEG for 24 h. If NCS or NCSE is identified, the most common anticonvulsants administered were phenytoin/fosphenytoin, lorazepam, or levetiracetam, with slightly more use of levetiracetam for NCS than NCSE. Conclusions: Continuous EEG monitoring (cEEG) is commonly employed in critically ill patients to detect NCS and NCSE. However, there is substantial variability in current practice related to cEEG indications and duration and to management of NCS and NCSE. The fact that such variability exists in the management of this common clinical problem suggests that further prospective study is needed. Multiple points of uncertainty are identified that require investigation. PMID:20198513
Wireless multichannel electroencephalography in the newborn
Ibrahim, Z.H.; Chari, G.; Abdel Baki, S.; Bronshtein, V.; Kim, M.R.; Weedon, J.; Cracco, J.; Aranda, J.V.
2016-01-01
OBJECTIVES: First, to determine the feasibility of an ultra-compact wireless device (microEEG) to obtain multichannel electroencephalographic (EEG) recording in the Neonatal Intensive Care Unit (NICU). Second, to identify problem areas in order to improve wireless EEG performance. STUDY DESIGN: 28 subjects (gestational age 24–30 weeks, postnatal age <30 days) were recruited at 2 sites as part of an ongoing study of neonatal apnea and wireless EEG. Infants underwent 8-9 hour EEG recordings every 2–4 weeks using an electrode cap (ANT-Neuro) connected to the wireless EEG device (Bio-Signal Group). A 23 electrode configuration was used incorporating the International 10–20 System. The device transmitted recordings wirelessly to a laptop computer for bedside assessment. The recordings were assessed by a pediatric neurophysiologist for interpretability. RESULTS: A total of 84 EEGs were recorded from 28 neonates. 61 EEG studies were obtained in infants prior to 35 weeks corrected gestational age (CGA). NICU staff placed all electrode caps and initiated all recordings. Of these recordings 6 (10%) were uninterpretable due to artifacts and one study could not be accessed. The remaining 54 (89%) EEG recordings were acceptable for clinical review and interpretation by a pediatric neurophysiologist. Of the recordings obtained at 35 weeks corrected gestational age or later only 11 out of 23 (48%) were interpretable. CONCLUSIONS: Wireless EEG devices can provide practical, continuous, multichannel EEG monitoring in preterm neonates. Their small size and ease of use could overcome obstacles associated with EEG recording and interpretation in the NICU. PMID:28009337
Automatic removal of eye-movement and blink artifacts from EEG signals.
Gao, Jun Feng; Yang, Yong; Lin, Pan; Wang, Pei; Zheng, Chong Xun
2010-03-01
Frequent occurrence of electrooculography (EOG) artifacts leads to serious problems in interpreting and analyzing the electroencephalogram (EEG). In this paper, a robust method is presented to automatically eliminate eye-movement and eye-blink artifacts from EEG signals. Independent Component Analysis (ICA) is used to decompose EEG signals into independent components. Moreover, the features of topographies and power spectral densities of those components are extracted to identify eye-movement artifact components, and a support vector machine (SVM) classifier is adopted because it has higher performance than several other classifiers. The classification results show that feature-extraction methods are unsuitable for identifying eye-blink artifact components, and then a novel peak detection algorithm of independent component (PDAIC) is proposed to identify eye-blink artifact components. Finally, the artifact removal method proposed here is evaluated by the comparisons of EEG data before and after artifact removal. The results indicate that the method proposed could remove EOG artifacts effectively from EEG signals with little distortion of the underlying brain signals.
Ghayab, Hadi Ratham Al; Li, Yan; Abdulla, Shahab; Diykh, Mohammed; Wan, Xiangkui
2016-06-01
Electroencephalogram (EEG) signals are used broadly in the medical fields. The main applications of EEG signals are the diagnosis and treatment of diseases such as epilepsy, Alzheimer's disease, sleep problems and so on. This paper presents a new method which extracts and selects features from multi-channel EEG signals. This research focuses on three main points. Firstly, the simple random sampling (SRS) technique is used to extract features from the time domain of EEG signals. Secondly, the sequential feature selection (SFS) algorithm is applied to select the key features and to reduce the dimensionality of the data. Finally, the selected features are forwarded to a least square support vector machine (LS_SVM) classifier to classify the EEG signals. The LS_SVM classifier was applied to the features extracted and selected by the SRS and SFS steps. The experimental results show that the method achieves 99.90%, 99.80% and 100% for classification accuracy, sensitivity and specificity, respectively.
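A rough sketch of the described pipeline with stated substitutions: simple random sampling of time-domain values for the SRS step, scikit-learn's SequentialFeatureSelector for the SFS step, and a standard RBF-kernel SVM standing in for the LS_SVM classifier; the signals and labels are random placeholders.

```python
# Sketch: random sampling of time-domain features, sequential feature selection, SVM.
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(10)
epochs = rng.standard_normal((100, 4096))          # 100 EEG epochs (placeholder)
labels = rng.integers(0, 2, 100)

sample_idx = rng.choice(epochs.shape[1], size=32, replace=False)   # SRS step
X = epochs[:, sample_idx]

svm = SVC(kernel="rbf")                            # stand-in for LS_SVM
sfs = SequentialFeatureSelector(svm, n_features_to_select=8).fit(X, labels)
X_sel = sfs.transform(X)
print("CV accuracy:", cross_val_score(svm, X_sel, labels, cv=5).mean())
```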
A Simulation Study on a Single-Unit Wireless EEG Sensor
Luan, Bo; Sun, Mingui
2015-01-01
Traditional EEG systems are limited when utilized in point-of-care applications due to their immobility and tedious preparation procedures. We are designing a novel device, the single-unit wireless EEG sensor, to solve these problems. The sensor has a size similar to a U.S. penny. Four electrodes are installed within a 20 mm diameter cylinder. It can be applied to the scalp in seconds to amplify, digitize and wirelessly transmit EEG. Before the design and construction of an actual sensor, in this paper, we perform a set of simulations to quantitatively study: 1) whether the sensor can acquire EEG reliably, and 2) whether the sensor orientation is an important factor influencing signal strength. Our results demonstrate positive answers to these questions. Moreover, the signal acquired by the sensor appears to be comparable to the signal from the standard 10-20 system. These results warrant the further design and construction of a single-unit wireless EEG sensor. PMID:26207084
Piastra, Maria Carla; Nüßing, Andreas; Vorwerk, Johannes; Bornfleth, Harald; Oostenveld, Robert; Engwer, Christian; Wolters, Carsten H.
2018-01-01
In Electro- (EEG) and Magnetoencephalography (MEG), one important requirement of source reconstruction is the forward model. The continuous Galerkin finite element method (CG-FEM) has become one of the dominant approaches for solving the forward problem over the last decades. Recently, a discontinuous Galerkin FEM (DG-FEM) EEG forward approach has been proposed as an alternative to CG-FEM (Engwer et al., 2017). It was shown that DG-FEM preserves the property of conservation of charge and that it can, in certain situations such as the so-called skull leakages, be superior to the standard CG-FEM approach. In this paper, we developed, implemented, and evaluated two DG-FEM approaches for the MEG forward problem, namely a conservative and a non-conservative one. The subtraction approach was used as source model. The validation and evaluation work was done in statistical investigations in multi-layer homogeneous sphere models, where an analytic solution exists, and in a six-compartment realistically shaped head volume conductor model. In agreement with the theory, the conservative DG-FEM approach was found to be superior to the non-conservative DG-FEM implementation. This approach also showed convergence with increasing resolution of the hexahedral meshes. While in the EEG case, in presence of skull leakages, DG-FEM outperformed CG-FEM, in MEG, DG-FEM achieved similar numerical errors as the CG-FEM approach, i.e., skull leakages do not play a role for the MEG modality. In particular, for the finest mesh resolution of 1 mm sources with a distance of 1.59 mm from the brain-CSF surface, DG-FEM yielded mean topographical errors (relative difference measure, RDM%) of 1.5% and mean magnitude errors (MAG%) of 0.1% for the magnetic field. However, if the goal is a combined source analysis of EEG and MEG data, then it is highly desirable to employ the same forward model for both EEG and MEG data. Based on these results, we conclude that the newly presented conservative DG-FEM can at least complement and in some scenarios even outperform the established CG-FEM approaches in EEG or combined MEG/EEG source analysis scenarios, which motivates a further evaluation of DG-FEM for applications in bioelectromagnetism. PMID:29456487
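The topography and magnitude errors quoted above are commonly defined as RDM = || b/||b|| - b_ref/||b_ref|| || and MAG = ||b|| / ||b_ref||; the percentage variants reported in the paper may be scaled differently, so the sketch below shows only the generic form on synthetic sensor data.

```python
# Sketch: relative difference measure (RDM) and magnitude error (MAG) between
# a numerical forward solution and a reference solution.
import numpy as np

def rdm(b, b_ref):
    """Topography error between two sensor-space field vectors."""
    return np.linalg.norm(b / np.linalg.norm(b) - b_ref / np.linalg.norm(b_ref))

def mag(b, b_ref):
    """Magnitude ratio between numerical and reference solutions."""
    return np.linalg.norm(b) / np.linalg.norm(b_ref)

rng = np.random.default_rng(11)
b_ref = rng.standard_normal(256)                 # reference (e.g., analytic) field
b_num = b_ref + 0.01 * rng.standard_normal(256)  # simulated numerical solution
print("RDM = %.4f, MAG = %.4f" % (rdm(b_num, b_ref), mag(b_num, b_ref)))
```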
Shin, Jaeyoung; Kwon, Jinuk; Im, Chang-Hwan
2018-01-01
The performance of a brain-computer interface (BCI) can be enhanced by simultaneously using two or more modalities to record brain activity, which is generally referred to as a hybrid BCI. To date, many BCI researchers have tried to implement a hybrid BCI system by combining electroencephalography (EEG) and functional near-infrared spectroscopy (NIRS) to improve the overall accuracy of binary classification. However, since hybrid EEG-NIRS BCI, which will be denoted by hBCI in this paper, has not been applied to ternary classification problems, paradigms and classification strategies appropriate for ternary classification using hBCI are not well investigated. Here we propose the use of an hBCI for the classification of three brain activation patterns elicited by mental arithmetic, motor imagery, and idle state, with the aim to elevate the information transfer rate (ITR) of hBCI by increasing the number of classes while minimizing the loss of accuracy. EEG electrodes were placed over the prefrontal cortex and the central cortex, and NIRS optodes were placed only on the forehead. The ternary classification problem was decomposed into three binary classification problems using the "one-versus-one" (OVO) classification strategy to apply the filter-bank common spatial patterns filter to EEG data. A 10 × 10-fold cross validation was performed using shrinkage linear discriminant analysis (sLDA) to evaluate the average classification accuracies for EEG-BCI, NIRS-BCI, and hBCI when the meta-classification method was adopted to enhance classification accuracy. The ternary classification accuracies for EEG-BCI, NIRS-BCI, and hBCI were 76.1 ± 12.8, 64.1 ± 9.7, and 82.2 ± 10.2%, respectively. The classification accuracy of the proposed hBCI was thus significantly higher than those of the other BCIs ( p < 0.005). The average ITR for the proposed hBCI was calculated to be 4.70 ± 1.92 bits/minute, which was 34.3% higher than that reported for a previous binary hBCI study.
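The information transfer rate mentioned above is conventionally computed with the Wolpaw formula from the number of classes, the classification accuracy, and the trial duration. A minimal sketch follows; the 10 s trial length used in the example is an assumption for illustration, not a value taken from the study.

```python
import math

def wolpaw_itr(n_classes, accuracy, trial_sec):
    """Wolpaw information transfer rate in bits per minute."""
    p, n = accuracy, n_classes
    if not 0.0 < p <= 1.0:
        raise ValueError("accuracy must be in (0, 1]")
    if p == 1.0:
        bits = math.log2(n)
    else:
        bits = (math.log2(n) + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits * (60.0 / trial_sec)

# e.g. a three-class problem at 82% accuracy with a (hypothetical) 10 s trial
print(round(wolpaw_itr(3, 0.82, 10.0), 2), "bits/min")
```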
1980-12-05
…classification procedures that are common in speech processing. The anesthesia-level classification by EEG time-series population screening problem example is in…formance. The use of the KL-number-type metric in NN-rule classification, in a delete-one-subject's-EEG-at-a-time KL-NN and KL-kNN classification of the…17 individual labeled EEG sample population using KL-NN and KL-kNN rules. The results obtained are shown in Table 1. The entries in the table indicate…
Artieda, Julio; Iriarte, Jorge
2017-01-01
Idiopathic epilepsy is characterized by generalized seizures with no apparent cause. One of its main problems is the lack of biomarkers to monitor the evolution of patients. The only tools available are limited to counting the number of seizures during previous periods of time and assessing the existence of interictal discharges. As a result, there is a need for better tools to assist the diagnosis and follow-up of these patients. The goal of the present study is to compare and find a way to differentiate between two groups of patients suffering from idiopathic epilepsy: one group that could be followed up by means of specific electroencephalographic (EEG) signatures (intercritical activity present), and another that could not, due to the absence of these markers. To do so, we analyzed the background EEG activity of each group in the absence of seizures and epileptic intercritical activity. We used the Shannon spectral entropy (SSE) as a metric to discriminate between the two groups and performed permutation-based statistical tests to detect the set of frequencies that show significant differences. By constraining the spectral entropy estimation to the [6.25–12.89) Hz range, we detect statistical differences (at the 0.05 alpha level) between both types of epileptic patients at all available recording channels. Interestingly, entropy values follow a trend that is inversely related to the elapsed time from the last seizure. Indeed, this trend shows asymptotic convergence to the SSE values measured in a group of healthy subjects, who present SSE values lower than either of the two groups of patients. All these results suggest that the SSE, measured in a specific range of frequencies, could serve to follow up the evolution of patients suffering from idiopathic epilepsy. Future studies remain to be conducted in order to assess the predictive value of this approach for the anticipation of seizures. PMID:28922360
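A band-restricted Shannon spectral entropy such as the one used above can be sketched as follows, assuming a Welch power spectral density and normalization of the in-band spectrum to a probability distribution; the study's exact estimator may differ.

```python
import numpy as np
from scipy.signal import welch

def shannon_spectral_entropy(x, fs, f_lo=6.25, f_hi=12.89):
    """Shannon spectral entropy of signal x restricted to [f_lo, f_hi) Hz,
    normalized to [0, 1] by the log of the number of frequency bins."""
    f, pxx = welch(x, fs=fs, nperseg=int(2 * fs))
    band = (f >= f_lo) & (f < f_hi)
    p = pxx[band] / pxx[band].sum()          # normalize the in-band spectrum
    return -np.sum(p * np.log(p)) / np.log(p.size)

# Example: one channel of synthetic background EEG sampled at 256 Hz
fs = 256
t = np.arange(0, 30, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.default_rng(2).standard_normal(t.size)
print(shannon_spectral_entropy(eeg, fs))
```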
NASA Astrophysics Data System (ADS)
Kammerdiner, Alla; Xanthopoulos, Petros; Pardalos, Panos M.
2007-11-01
In this chapter, a potential problem with the application of Granger causality based on simple vector autoregressive (VAR) modeling to EEG data is investigated. Although some initial studies tested whether the data support the stationarity assumption of the VAR model, the stability of the estimated model has rarely (if ever) been verified. In fact, in cases where the stability condition is violated, the process may exhibit random-walk-like behavior or even be explosive. The problem is illustrated by an example.
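The stability condition discussed above can be verified by checking that all eigenvalues of the companion matrix of the fitted VAR model lie strictly inside the unit circle. A minimal sketch with statsmodels and synthetic data (not the EEG analyzed in the chapter) follows.

```python
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
data = rng.standard_normal((1000, 4))          # (n_samples, n_channels) toy "EEG"

res = VAR(data).fit(maxlags=5)                 # VAR(p) fit by least squares
p, k = res.k_ar, res.neqs

# Build the (k*p x k*p) companion matrix from the estimated coefficient matrices.
companion = np.zeros((k * p, k * p))
companion[:k, :] = np.hstack(res.coefs)        # res.coefs has shape (p, k, k)
companion[k:, :-k] = np.eye(k * (p - 1))

eigvals = np.linalg.eigvals(companion)
print("max |eigenvalue|:", np.abs(eigvals).max())
print("stable:", np.all(np.abs(eigvals) < 1))  # violated -> random-walk or explosive behavior
```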
Electroencephalogram and Alzheimer's Disease: Clinical and Research Approaches
Tsolaki, Anthoula; Kazis, Dimitrios; Kompatsiaris, Ioannis; Kosmidou, Vasiliki; Tsolaki, Magda
2014-01-01
Alzheimer's disease (AD) is a neurodegenerative disorder characterized by cognitive deficits, problems in activities of daily living, and behavioral disturbances. The electroencephalogram (EEG) has been demonstrated to be a reliable tool in dementia research and diagnosis, and its application in AD attracts wide interest. EEG contributes to the differential diagnosis and to the prognosis of disease progression, and such recordings can additionally add important information related to drug effectiveness. This review was prepared to form a knowledge platform for the project entitled "Cognitive Signal Processing Lab," which is in progress at the Information Technology Institute in Thessaloniki. The team focused on the main research fields of AD studied via EEG and on recently published studies. PMID:24868482
Characterization of network structure in stereoEEG data using consensus-based partial coherence.
Ter Wal, Marije; Cardellicchio, Pasquale; LoRusso, Giorgio; Pelliccia, Veronica; Avanzini, Pietro; Orban, Guy A; Tiesinga, Paul He
2018-06-06
Coherence is a widely used measure to determine the frequency-resolved functional connectivity between pairs of recording sites, but this measure is confounded by shared inputs to the pair. To remove shared inputs, the 'partial coherence' can be computed by conditioning the spectral matrices of the pair on all other recorded channels, which involves the calculation of a matrix (pseudo-) inverse. It has so far remained a challenge to use the time-resolved partial coherence to analyze intracranial recordings with a large number of recording sites. For instance, calculating the partial coherence using a pseudoinverse method produces a high number of false positives when it is applied to a large number of channels. To address this challenge, we developed a new method that randomly aggregated channels into a smaller number of effective channels on which the calculation of partial coherence was based. We obtained a 'consensus' partial coherence (cPCOH) by repeating this approach for several random aggregations of channels (permutations) and only accepting those activations in time and frequency with a high enough consensus. Using model data we show that the cPCOH method effectively filters out the effect of shared inputs and performs substantially better than the pseudo-inverse. We successfully applied the cPCOH procedure to human stereotactic EEG data and demonstrated three key advantages of this method relative to alternative procedures. First, it reduces the number of false positives relative to the pseudo-inverse method. Second, it allows for titration of the amount of false positives relative to the false negatives by adjusting the consensus threshold, thus allowing the data-analyst to prioritize one over the other to meet specific analysis demands. Third, it substantially reduced the number of identified interactions compared to coherence, providing a sparser network of connections from which clear spatial patterns emerged. These patterns can serve as a starting point of further analyses that provide insight into network dynamics during cognitive processes. These advantages likely generalize to other modalities in which shared inputs introduce confounds, such as electroencephalography (EEG) and magneto-encephalography (MEG). Copyright © 2018. Published by Elsevier Inc.
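For reference, the conditioning step underlying partial coherence can be written down from the inverse of the cross-spectral density matrix. The sketch below implements that textbook construction with a Welch estimate; it does not reproduce the cPCOH pipeline itself, which would repeat such a computation over many random channel aggregations and keep only interactions exceeding a consensus threshold.

```python
import numpy as np
from scipy.signal import csd

def cross_spectral_matrix(data, fs, nperseg=256):
    """Cross-spectral density matrix S[f, i, j] for data of shape (n_ch, n_samples)."""
    n_ch = data.shape[0]
    f, _ = csd(data[0], data[0], fs=fs, nperseg=nperseg)
    S = np.zeros((f.size, n_ch, n_ch), dtype=complex)
    for i in range(n_ch):
        for j in range(n_ch):
            _, S[:, i, j] = csd(data[i], data[j], fs=fs, nperseg=nperseg)
    return f, S

def partial_coherence(S):
    """Partial coherence from the inverse spectral matrix G = S^-1:
    |G_ij| / sqrt(G_ii G_jj), conditioning each pair on all other channels."""
    G = np.linalg.inv(S)                        # per-frequency matrix inverse
    d = np.sqrt(np.abs(np.einsum('fii->fi', G)))
    return np.abs(G) / (d[:, :, None] * d[:, None, :])

rng = np.random.default_rng(3)
x = rng.standard_normal((6, 4096))              # toy 6-channel recording
f, S = cross_spectral_matrix(x, fs=256)
pcoh = partial_coherence(S)
print(pcoh.shape)                               # (n_freqs, n_ch, n_ch)
```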
Ivanov, J.; Miller, R.D.; Xia, J.; Steeples, D.
2005-01-01
This paper is the second of a set of two papers in which we study the inverse refraction problem. The first paper, "Types of Geophysical Nonuniqueness through Minimization," studies and classifies the types of nonuniqueness that arise when solving inverse problems, depending on the a priori information required to obtain reliable solutions of inverse geophysical problems. In view of the classification developed, in this paper we study the type of nonuniqueness associated with the inverse refraction problem. An approach for obtaining a realistic solution to the inverse refraction problem is offered in a third paper that is in preparation. Like many other inverse geophysical problems, the inverse refraction problem does not have a unique solution. Conventionally, nonuniqueness is considered to be a result of insufficient data and/or error in the data, for any fixed number of model parameters. This study illustrates that even for overdetermined and error-free data, nonlinear inverse refraction problems exhibit exact-data nonuniqueness, which further complicates the problem of nonuniqueness. By evaluating the nonuniqueness of the inverse refraction problem, this paper targets the improvement of refraction inversion algorithms and, as a result, the achievement of more realistic solutions. The nonuniqueness of the inverse refraction problem is examined initially by using a simple three-layer model. The observations and conclusions of the three-layer model nonuniqueness study are then used to evaluate the nonuniqueness of more complicated n-layer models and multi-parameter cell models such as those used in refraction tomography. For any fixed number of model parameters, the inverse refraction problem exhibits continuous ranges of exact-data nonuniqueness. Such an unfavorable type of nonuniqueness can be uniquely solved only by providing abundant a priori information. Insufficient a priori information during the inversion is the reason why refraction methods often may not produce the desired results or even fail. This work also demonstrates that the application of smoothing constraints, typical when solving ill-posed inverse problems, has a dual and contradictory role when applied to the ill-posed inverse problem of refraction travel times. This observation indicates that smoothing constraints may play such a two-fold role when applied to other inverse problems. Other factors that contribute to inverse-refraction-problem nonuniqueness are also considered, including indeterminacy, statistical data-error distribution, numerical error and instability, finite data, and model parameters. © Birkhäuser Verlag, Basel, 2005.
NASA Astrophysics Data System (ADS)
Rahmouni, Lyes; Adrian, Simon B.; Cools, Kristof; Andriulli, Francesco P.
2018-01-01
In this paper, we present a new discretization strategy for the boundary element formulation of the Electroencephalography (EEG) forward problem. Boundary integral formulations, classically solved with the Boundary Element Method (BEM), are widely used in high-resolution EEG imaging because of their recognized advantages, in several real-case scenarios, in terms of numerical stability and effectiveness when compared with other differential-equation-based techniques. Unfortunately, however, it is widely reported in the literature that the accuracy of standard BEM schemes for the forward EEG problem is often limited, especially when the current source density is dipolar and its location approaches one of the brain boundary surfaces. This is a particularly limiting problem given that, during a high-resolution EEG imaging procedure, several EEG forward problem solutions are required for which the source currents are near or on top of a boundary surface. This work first presents an analysis of the standard, classical discretizations of the EEG forward problem operators, reporting on a theoretical issue with some of the formulations that have been used so far in the community. We show that several standardly used discretizations of these formulations are consistent only with an L2 framework, requiring the expansion term to be a square-integrable function (i.e., in a Petrov-Galerkin scheme with expansion and testing functions). These techniques are instead not consistent when a more appropriate mapping in terms of fractional-order Sobolev spaces is considered. Such a mapping allows the expansion function to be a less regular function, substantially reducing the need for the mesh refinements and low-precision handling strategies that are currently required. These more favorable mappings, however, require a different, conforming discretization suitably adapted to them. To fulfill this requirement, we adopt a mixed discretization based on dual boundary elements residing on a suitably defined dual mesh. We also devote particular attention to implementation-oriented details of the new technique that will allow the rapid incorporation of our findings into one's own EEG forward solution technology. We conclude by showing that the resulting forward EEG solvers exhibit favorable properties with respect to previously proposed schemes, and by demonstrating their applicability to real-case modeling scenarios obtained from Magnetic Resonance Imaging (MRI) data. Unfortunately, it is also recognized in the literature that their accuracy decreases markedly when the current source is dipolar and located close to the brain surface. This shortcoming constitutes an important limitation, given that during a high-resolution EEG imaging session several solutions of the EEG forward problem are required for which the current sources are close to or on the brain surface. This work first presents an analysis of the operators involved in the forward problem and of their discretization. We show that several commonly used discretizations are suitable only in an L2 framework, requiring the expansion term to be a square-integrable function. Consequently, these techniques are not consistent with the spectral properties of the operators in terms of fractional-order Sobolev spaces.
We then develop a new discretization strategy conforming to the Sobolev spaces, with less regular expansion functions, which gives rise to a new integral formulation. The resulting solver exhibits favorable properties compared with existing methods and substantially reduces the need for adaptive meshing and the other strategies currently required to improve computational accuracy. The numerical results presented corroborate the theoretical developments and highlight the positive impact of the new approach.
Multicompare tests of the performance of different metaheuristics in EEG dipole source localization.
Escalona-Vargas, Diana Irazú; Lopez-Arevalo, Ivan; Gutiérrez, David
2014-01-01
We study the use of nonparametric multicompare statistical tests on the performance of simulated annealing (SA), genetic algorithm (GA), particle swarm optimization (PSO), and differential evolution (DE) when used for electroencephalographic (EEG) source localization. Such a task can be posed as an optimization problem for which the aforementioned metaheuristic methods are well suited. Hence, we evaluate localization performance in terms of the metaheuristics' operational parameters and for a fixed number of evaluations of the objective function. In this way, we are able to link the efficiency of the metaheuristics with a common measure of computational cost. Our results did not show significant differences in the metaheuristics' performance for the case of single-source localization. In the case of localizing two correlated sources, we found that PSO (ring and tree topologies) and DE performed the worst, and thus they should not be considered in large-scale EEG source localization problems. Overall, the multicompare tests demonstrated how little effect the selection of a particular metaheuristic and the variations in its operational parameters have on this optimization problem.
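As an illustration of posing dipole localization as a global optimization problem, the sketch below fits a single dipole with SciPy's differential evolution. The forward model is a crude infinite-homogeneous-medium approximation assumed here for brevity; the study itself compares several metaheuristics under a fixed budget of objective-function evaluations.

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(4)
electrodes = rng.standard_normal((32, 3))
electrodes /= np.linalg.norm(electrodes, axis=1, keepdims=True)  # toy "scalp" on a unit sphere
sigma = 0.33                                                      # conductivity in S/m (assumed)

def forward(pos, moment):
    """Potential of a current dipole (infinite homogeneous medium) at the electrodes."""
    r = electrodes - pos
    d = np.linalg.norm(r, axis=1)
    return (r @ moment) / (4 * np.pi * sigma * d ** 3)

# Synthetic "measurements" from a known dipole.
true_pos = np.array([0.2, 0.1, 0.4])
true_mom = np.array([0.0, 1e-8, 1e-8])
v_meas = forward(true_pos, true_mom) + 1e-10 * rng.standard_normal(32)

def objective(x):
    return np.sum((forward(x[:3], x[3:]) - v_meas) ** 2)

bounds = [(-0.8, 0.8)] * 3 + [(-1e-7, 1e-7)] * 3   # position (inside the "head") and moment
res = differential_evolution(objective, bounds, maxiter=200, seed=0)
print("estimated position:", np.round(res.x[:3], 3))
```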
Adamaszek, Michael; Khaw, Alexander V.; Buck, Ulrike; Andresen, Burghard; Thomasius, Rainer
2010-01-01
Objective: According to previous EEG reports of indicative disturbances in alpha and beta activities, a systematic search for distinct EEG abnormalities in a broader population of Ecstasy users may corroborate the presumed specific neurotoxicity of Ecstasy in humans. Methods: 105 poly-drug consumers with former Ecstasy use, 41 persons with a comparable drug history but without Ecstasy use, and 11 drug-naive persons were investigated for EEG features. Conventional EEG derivations from 19 electrodes according to the 10-20 system were recorded. Besides the standard EEG bands, quantitative EEG analyses of 1-Hz-subdivided power ranges of the alpha, theta and beta bands were considered. Results: Ecstasy users with medium and high cumulative Ecstasy doses revealed an increase in theta and lower alpha activities, significant increases in beta activities, and a reduction of background activity. Ecstasy users with low cumulative Ecstasy doses showed significant alpha activity at 11 Hz. Interestingly, the spectral power of low frequencies in medium and high Ecstasy users was already significantly increased in the early phase of the EEG recording. Statistical analyses suggested a main effect of Ecstasy on the EEG results. Conclusions: Our data from a large sample of Ecstasy users support previous findings of alterations in the EEG frequency spectrum, most plausibly due to neurotoxic effects of Ecstasy on serotonergic systems. Accordingly, our data may be in line with the observation of attentional and memory impairments in Ecstasy users with moderate to high misuse. Despite the methodological problem of polydrug use in our approach as well, our EEG results may be indicative of the neuropathophysiological background of the reported memory and attentional deficits in Ecstasy abusers. Overall, our findings suggest the usefulness of EEG in diagnostic approaches for assessing the neurotoxic sequelae of this common drug abuse. PMID:21124854
Length matters: Improved high field EEG-fMRI recordings using shorter EEG cables.
Assecondi, Sara; Lavallee, Christina; Ferrari, Paolo; Jovicich, Jorge
2016-08-30
The use of concurrent EEG-fMRI recordings has increased in recent years, allowing new avenues of medical and cognitive neuroscience research; however, currently used setups present problems with data quality and reproducibility. We propose a compact experimental setup for concurrent EEG-fMRI at 4T and compare it to a more standard reference setup. The compact setup uses short EEG cables connecting to the amplifiers, which are placed right at the back of the head RF coil on a form-fitting extension force-locked to the patient MR bed. We compare the two setups in terms of sensitivity to MR-room environmental noise, interferences between measuring devices (EEG or fMRI), and sensitivity to functional responses in a visual stimulation paradigm. The compact setup reduces the system sensitivity to both external noise and MR-induced artefacts by at least 60%, with negligible EEG noise induced from the mechanical vibrations of the cryogenic cooling compression pump. The compact setup improved EEG data quality and the overall performance of MR-artifact correction techniques. Both setups were similar in terms of the fMRI data, with higher reproducibility for cable placement within the scanner in the compact setup. This improved compact setup may be relevant to MR laboratories interested in reducing the sensitivity of their EEG-fMRI experimental setup to external noise sources, setting up an EEG-fMRI workplace for the first time, or for creating a more reproducible configuration of equipment and cables. Implications for safety and ergonomics are discussed. Copyright © 2016 Elsevier B.V. All rights reserved.
Kim, Kyungsoo; Punte, Andrea Kleine; Mertens, Griet; Van de Heyning, Paul; Park, Kyung-Joon; Choi, Hongsoo; Choi, Ji-Woong; Song, Jae-Jin
2015-11-30
Quantitative electroencephalography (qEEG) is effective when used to analyze ongoing cortical oscillations in cochlear implant (CI) users. However, localization of cortical activity in such users via qEEG is confounded by the presence of artifacts produced by the device itself. Typically, independent component analysis (ICA) is used to remove CI artifacts from auditory evoked EEG signals collected upon brief stimulation, and it is effective for auditory evoked potentials (AEPs). However, AEPs do not reflect the daily environments of patients, and thus continuous EEG data that are closer to such environments are desirable. In this case, device-related artifacts in the EEG are difficult to remove selectively via ICA due to over-completion of EEG data removal when no preprocessing is applied. EEGs were recorded over long periods under conditions of continuous auditory stimulation. To obviate the over-completion problem, we limited the frequency range of the CI artifacts to a significant characteristic peak and then applied ICA artifact removal. Topographic brain mapping results analyzed via band-limited (BL) ICA exhibited a better energy distribution, matched to the CI location, than data obtained using conventional ICA. Source localization data also verified that BL-ICA effectively removed CI artifacts. The proposed method selectively removes CI artifacts from continuous EEG recordings, whereas the conventional ICA removal method leaves a residual peak and removes important brain activity signals. CI artifacts in EEG data obtained during continuous passive listening can thus be effectively removed with the aid of BL-ICA, opening up new EEG research possibilities in subjects with CIs. Copyright © 2015 Elsevier B.V. All rights reserved.
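A rough sketch of the band-limited ICA idea is given below: the unmixing is learned on a copy of the data band-pass filtered around the device artifact's characteristic peak and then applied to the broadband recording. The peak frequency, bandwidth, and component-selection rule are illustrative assumptions, not the published procedure.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt
from sklearn.decomposition import FastICA

def band_limited_ica_clean(eeg, fs, f_peak, half_bw=2.0, n_remove=1):
    """eeg: array of shape (n_samples, n_channels); returns a cleaned copy."""
    sos = butter(4, [f_peak - half_bw, f_peak + half_bw], btype="band",
                 fs=fs, output="sos")
    eeg_band = sosfiltfilt(sos, eeg, axis=0)        # band-limited copy around the artifact peak

    ica = FastICA(n_components=eeg.shape[1], random_state=0, max_iter=1000)
    ica.fit(eeg_band)                               # unmixing learned on the band-limited data
    sources = ica.transform(eeg)                    # ... then applied to the broadband data

    # Rank components by their power inside the artifact band and drop the top ones.
    band_power = sosfiltfilt(sos, sources, axis=0).var(axis=0)
    drop = np.argsort(band_power)[::-1][:n_remove]
    sources[:, drop] = 0.0
    return ica.inverse_transform(sources)

rng = np.random.default_rng(5)
fs = 500
t = np.arange(0, 20, 1 / fs)
artifact = np.sin(2 * np.pi * 90 * t)               # hypothetical CI artifact peak at 90 Hz
eeg = rng.standard_normal((t.size, 8)) + np.outer(artifact, rng.standard_normal(8))
cleaned = band_limited_ica_clean(eeg, fs, f_peak=90.0)
print(cleaned.shape)                                 # (n_samples, n_channels)
```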
Smith, Cynthia L.; Bell, Martha Ann
2013-01-01
Stability in frontal brain electrical activity (i.e., electroencephalographic or EEG) asymmetry at 10 and 24 months was examined with respect to maternal ratings of internalizing and externalizing behaviors at 30 months in a sample of 48 children. Children with stable left frontal EEG asymmetry during infancy were rated higher in externalizing behaviors by their mothers, whereas children with stable right frontal EEG asymmetry were rated higher in internalizing behaviors. These findings highlight the need to focus on the early stability in physiological measures that may be implicated later in developing behavioral problems. PMID:20175143
Crainiceanu, Ciprian M.; Caffo, Brian S.; Di, Chong-Zhi; Punjabi, Naresh M.
2009-01-01
We introduce methods for signal and associated variability estimation based on hierarchical nonparametric smoothing with application to the Sleep Heart Health Study (SHHS). SHHS is the largest electroencephalographic (EEG) collection of sleep-related data, which contains, at each visit, two quasi-continuous EEG signals for each subject. The signal features extracted from EEG data are then used in second level analyses to investigate the relation between health, behavioral, or biometric outcomes and sleep. Using subject specific signals estimated with known variability in a second level regression becomes a nonstandard measurement error problem. We propose and implement methods that take into account cross-sectional and longitudinal measurement error. The research presented here forms the basis for EEG signal processing for the SHHS. PMID:20057925
ERIC Educational Resources Information Center
Dimitriadis, Stavros I.; Kanatsouli, Kassiani; Laskaris, Nikolaos A.; Tsirka, Vasso; Vourkas, Michael; Micheloyannis, Sifis
2012-01-01
Multichannel EEG traces from healthy subjects are used to investigate the brain's self-organisation tendencies during two different mental arithmetic tasks. By making a comparison with a control-state in the form of a classification problem, we can detect and quantify the changes in coordinated brain activity in terms of functional connectivity.…
ERIC Educational Resources Information Center
Fernandez, Thalia; Harmony, Thalia; Mendoza, Omar; Lopez-Alanis, Paula; Marroquin, Jose Luis; Otero, Gloria; Ricardo-Garcell, Josefina
2012-01-01
Learning disabilities (LD) are one of the most frequent problems for elementary school-aged children. In this paper, event-related EEG oscillations to semantically related and unrelated pairs of words were studied in a group of 18 children with LD not otherwise specified (LD-NOS) and in 16 children with normal academic achievement. We propose that…
Stable Sparse Classifiers Identify qEEG Signatures that Predict Learning Disabilities (NOS) Severity
Bosch-Bayard, Jorge; Galán-García, Lídice; Fernandez, Thalia; Lirio, Rolando B.; Bringas-Vega, Maria L.; Roca-Stappung, Milene; Ricardo-Garcell, Josefina; Harmony, Thalía; Valdes-Sosa, Pedro A.
2018-01-01
In this paper, we present a novel methodology for solving the classification problem based on sparse (data-driven) regressions combined with techniques for ensuring stability, which is especially useful for high-dimensional datasets with small sample numbers. The sensitivity and specificity of the classifiers are assessed by a stable ROC procedure, which uses a non-parametric algorithm for estimating the area under the ROC curve. This method allows assessing classification performance with the ROC technique when more than two groups are involved in the classification problem, i.e., when the gold standard is not binary. We apply this methodology to EEG spectral signatures to find biomarkers that allow discriminating between (and predicting membership in) different subgroups of children diagnosed with Not Otherwise Specified Learning Disabilities (LD-NOS). Children with LD-NOS have notable learning difficulties that affect education but cannot be placed into a specific category such as reading (dyslexia), mathematics (dyscalculia), or writing (dysgraphia). By using the EEG spectra, we aim to identify EEG patterns that may be related to specific learning disabilities in an individual case. This could be useful for developing subject-based methods of therapy informed by the EEG. Here we study 85 LD-NOS children, divided into three subgroups previously selected by a clustering technique over the scores of cognitive tests. The classification equation produced stable marginal areas under the ROC curve of 0.71 for discrimination between Group 1 vs. Group 2; 0.91 for Group 1 vs. Group 3; and 0.75 for Group 2 vs. Group 1. A discussion of the EEG characteristics of each group in relation to the cognitive scores is also presented. PMID:29379411
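A hedged sketch of the general recipe (sparse regression, a stability step, and an AUC summary) is shown below. It uses L1-regularized logistic regression over bootstrap resamples as a stand-in; the paper's exact estimator, consensus rule, and non-parametric multi-group AUC procedure are not reproduced.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(6)
n, p = 85, 120                                   # small sample, high dimension (toy "qEEG spectra")
X = rng.standard_normal((n, p))
y = (X[:, :3].sum(axis=1) + 0.5 * rng.standard_normal(n) > 0).astype(int)

# Stability step: refit a sparse classifier on bootstrap resamples and keep the
# features selected in a large fraction of the fits (the threshold is an assumption).
n_boot = 100
selection_freq = np.zeros(p)
for _ in range(n_boot):
    idx = rng.choice(n, size=n, replace=True)
    sparse_clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.3)
    sparse_clf.fit(X[idx], y[idx])
    selection_freq += (sparse_clf.coef_.ravel() != 0)
stable = selection_freq / n_boot > 0.5

# Final classifier restricted to the stable features, summarized by the ROC AUC
# (in-sample here; the paper uses a dedicated non-parametric, multi-group procedure).
final = LogisticRegression().fit(X[:, stable], y)
auc = roc_auc_score(y, final.predict_proba(X[:, stable])[:, 1])
print("stable features:", int(stable.sum()), " AUC:", round(auc, 2))
```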
Wang, Sheng H; Lobier, Muriel; Siebenhühner, Felix; Puoliväli, Tuomas; Palva, Satu; Palva, J Matias
2018-06-01
Inter-areal functional connectivity (FC), neuronal synchronization in particular, is thought to constitute a key systems-level mechanism for coordination of neuronal processing and communication between brain regions. Evidence to support this hypothesis has been gained largely using invasive electrophysiological approaches. In humans, neuronal activity can be non-invasively recorded only with magneto- and electroencephalography (MEG/EEG), which have been used to assess FC networks with high temporal resolution and whole-scalp coverage. However, even in source-reconstructed MEG/EEG data, signal mixing, or "source leakage", is a significant confounder for FC analyses and network localization. Signal mixing leads to two distinct kinds of false-positive observations: artificial interactions (AI) caused directly by mixing and spurious interactions (SI) arising indirectly from the spread of signals from true interacting sources to nearby false loci. To date, several interaction metrics have been developed to solve the AI problem, but the SI problem has remained largely intractable in MEG/EEG all-to-all source connectivity studies. Here, we advance a novel approach for correcting SIs in FC analyses using source-reconstructed MEG/EEG data. Our approach is to bundle observed FC connections into hyperedges by their adjacency in signal mixing. Using realistic simulations, we show here that bundling yields hyperedges with good separability of true positives and little loss in the true positive rate. Hyperedge bundling thus significantly decreases graph noise by minimizing the false-positive to true-positive ratio. Finally, we demonstrate the advantage of edge bundling in the visualization of large-scale cortical networks with real MEG data. We propose that hypergraphs yielded by bundling represent well the set of true cortical interactions that are detectable and dissociable in MEG/EEG connectivity analysis. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
Mild Depression Detection of College Students: an EEG-Based Solution with Free Viewing Tasks.
Li, Xiaowei; Hu, Bin; Shen, Ji; Xu, Tingting; Retcliffe, Martyn
2015-12-01
Depression is a common mental disorder with growing prevalence; however, current diagnosis of depression is hampered by patient denial, reliance on clinical experience, and the subjective biases of self-report. By using a combination of linear and nonlinear EEG features in our research, we aim to develop a more accurate and objective approach to depression detection that supports the process of diagnosis and assists the monitoring of risk factors. By classifying EEG features recorded during a free viewing task, an accuracy of 99.1%, to our knowledge the highest reported to date, was achieved using a kNN classifier to discriminate depressed from non-depressed subjects. Furthermore, through correlation analysis, the performance of each electrode was compared to assess the feasibility of a single-channel EEG depression detection system. Combined with wearable EEG recording devices, our method offers the possibility of a cost-effective, wearable, ubiquitous system for doctors to monitor patients with depression and for healthy people to track their mental state over time.
An EEG blind source separation algorithm based on a weak exclusion principle.
Lan Ma; Blu, Thierry; Wang, William S-Y
2016-08-01
The question of how to separate individual brain and non-brain signals, mixed by volume conduction in electroencephalographic (EEG) and other electrophysiological recordings, is a significant problem in contemporary neuroscience. This study proposes and evaluates a novel EEG Blind Source Separation (BSS) algorithm based on a weak exclusion principle (WEP). The chief point in which it differs from most previous EEG BSS algorithms is that the proposed algorithm is not based upon the hypothesis that the sources are statistically independent. Our first step was to investigate algorithm performance on simulated signals which have ground truth. The purpose of this simulation is to illustrate the proposed algorithm's efficacy. The results show that the proposed algorithm has good separation performance. Then, we used the proposed algorithm to separate real EEG signals from a memory study using a revised version of Sternberg Task. The results show that the proposed algorithm can effectively separate the non-brain and brain sources.
Vector tomography for reconstructing electric fields with non-zero divergence in bounded domains
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koulouri, Alexandra, E-mail: koulouri@uni-muenster.de; Department of Electrical and Electronic Engineering, Imperial College London, Exhibition Road, London SW7 2BT; Brookes, Mike
In vector tomography (VT), the aim is to reconstruct an unknown multi-dimensional vector field using line integral data. In the case of a 2-dimensional VT, two types of line integral data are usually required. These data correspond to integration of the parallel and perpendicular projection of the vector field along the integration lines and are called the longitudinal and transverse measurements, respectively. In most cases, however, the transverse measurements cannot be physically acquired. Therefore, the VT methods are typically used to reconstruct divergence-free (or source-free) velocity and flow fields that can be reconstructed solely from the longitudinal measurements. In this paper, we show how vector fields with non-zero divergence in a bounded domain can also be reconstructed from the longitudinal measurements without the need of explicitly evaluating the transverse measurements. To the best of our knowledge, VT has not previously been used for this purpose. In particular, we study low-frequency, time-harmonic electric fields generated by dipole sources in convex bounded domains which arise, for example, in electroencephalography (EEG) source imaging. We explain in detail the theoretical background, the derivation of the electric field inverse problem and the numerical approximation of the line integrals. We show that fields with non-zero divergence can be reconstructed from the longitudinal measurements with the help of two sparsity constraints that are constructed from the transverse measurements and the vector Laplace operator. As a comparison to EEG source imaging, we note that VT does not require mathematical modeling of the sources. By numerical simulations, we show that the pattern of the electric field can be correctly estimated using VT and the location of the source activity can be determined accurately from the reconstructed magnitudes of the field. - Highlights: • Vector tomography is used to reconstruct electric fields generated by dipole sources. • Inverse solutions are based on longitudinal and transverse line integral measurements. • Transverse line integral measurements are used as a sparsity constraint. • Numerical procedure to approximate the line integrals is described in detail. • Patterns of the studied electric fields are correctly estimated.
Numerical methods for the inverse problem of density functional theory
Jensen, Daniel S.; Wasserman, Adam
2017-07-17
Here, the inverse problem of Kohn–Sham density functional theory (DFT) is often solved in an effort to benchmark and design approximate exchange-correlation potentials. The forward and inverse problems of DFT rely on the same equations but the numerical methods for solving each problem are substantially different. We examine both problems in this tutorial with a special emphasis on the algorithms and error analysis needed for solving the inverse problem. Two inversion methods based on partial differential equation constrained optimization and constrained variational ideas are introduced. We compare and contrast several different inversion methods applied to one-dimensional finite and periodic model systems.
The role of blood vessels in high-resolution volume conductor head modeling of EEG.
Fiederer, L D J; Vorwerk, J; Lucka, F; Dannhauer, M; Yang, S; Dümpelmann, M; Schulze-Bonhage, A; Aertsen, A; Speck, O; Wolters, C H; Ball, T
2016-03-01
Reconstruction of the electrical sources of human EEG activity at high spatio-temporal accuracy is an important aim in neuroscience and neurological diagnostics. Over the last decades, numerous studies have demonstrated that realistic modeling of head anatomy improves the accuracy of source reconstruction of EEG signals. For example, including a cerebro-spinal fluid compartment and the anisotropy of white matter electrical conductivity were both shown to significantly reduce modeling errors. Here, we for the first time quantify the role of detailed reconstructions of the cerebral blood vessels in volume conductor head modeling for EEG. To study the role of the highly arborized cerebral blood vessels, we created a submillimeter head model based on ultra-high-field-strength (7T) structural MRI datasets. Blood vessels (arteries and emissary/intraosseous veins) were segmented using Frangi multi-scale vesselness filtering. The final head model consisted of a geometry-adapted cubic mesh with over 17×10⁶ nodes. We solved the forward model using a finite-element-method (FEM) transfer matrix approach, which allowed reducing computation times substantially and quantified the importance of the blood vessel compartment by computing forward and inverse errors resulting from ignoring the blood vessels. Our results show that ignoring emissary veins piercing the skull leads to focal localization errors of approx. 5 to 15 mm. Large errors (>2 cm) were observed due to the carotid arteries and the dense arterial vasculature in areas such as in the insula or in the medial temporal lobe. Thus, in such predisposed areas, errors caused by neglecting blood vessels can reach similar magnitudes as those previously reported for neglecting white matter anisotropy, the CSF or the dura - structures which are generally considered important components of realistic EEG head models. Our findings thus imply that including a realistic blood vessel compartment in EEG head models will be helpful to improve the accuracy of EEG source analyses particularly when high accuracies in brain areas with dense vasculature are required. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Sivakumar, Siddharth S.; Namath, Amalia G.; Galán, Roberto F.
2016-01-01
Previous work from our lab has demonstrated how the connectivity of brain circuits constrains the repertoire of activity patterns that those circuits can display. Specifically, we have shown that the principal components of spontaneous neural activity are uniquely determined by the underlying circuit connections, and that although the principal components do not uniquely resolve the circuit structure, they do reveal important features about it. Expanding upon this framework on a larger scale of neural dynamics, we have analyzed EEG data recorded with the standard 10–20 electrode system from 41 neurologically normal children and adolescents during stage 2, non-REM sleep. We show that the principal components of EEG spindles, or sigma waves (10–16 Hz), reveal non-propagating, standing waves in the form of spherical harmonics. We mathematically demonstrate that standing EEG waves exist when the spatial covariance and the Laplacian operator on the head's surface commute. This in turn implies that the covariance between two EEG channels decreases as the inverse of their relative distance; a relationship that we corroborate with empirical data. Using volume conduction theory, we then demonstrate that superficial current sources are more synchronized at larger distances, and determine the characteristic length of large-scale neural synchronization as 1.31 times the head radius, on average. Moreover, consistent with the hypothesis that EEG spindles are driven by thalamo-cortical rather than cortico-cortical loops, we also show that 8 additional patients with hypoplasia or complete agenesis of the corpus callosum, i.e., with deficient or no connectivity between cortical hemispheres, similarly exhibit standing EEG waves in the form of spherical harmonics. We conclude that spherical harmonics are a hallmark of spontaneous, large-scale synchronization of neural activity in the brain, which are associated with unconscious, light sleep. The analogy with spherical harmonics in quantum mechanics suggests that the variances (eigenvalues) of the principal components follow a Boltzmann distribution, or equivalently, that standing waves are in a sort of “thermodynamic” equilibrium during non-REM sleep. By extension, we speculate that consciousness emerges as the brain dynamics deviate from such equilibrium. PMID:27445777
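One empirical claim above, that the covariance between two channels decays as the inverse of their separation, is straightforward to check numerically. The sketch below does so on synthetic sigma-band-like data with made-up electrode coordinates; it is an illustration of the check, not the study's analysis.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical electrode positions on a unit hemisphere (not a real 10-20 layout).
n_ch = 19
theta = rng.uniform(0, np.pi / 2, n_ch)
phi = rng.uniform(0, 2 * np.pi, n_ch)
pos = np.c_[np.sin(theta) * np.cos(phi), np.sin(theta) * np.sin(phi), np.cos(theta)]

# Toy channel signals whose covariance is built to fall off with distance,
# standing in for spindle-filtered EEG.
dist = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
cov_model = 1.0 / (1.0 + dist) + 1e-9 * np.eye(n_ch)
x = rng.multivariate_normal(np.zeros(n_ch), cov_model, size=5000)

# Compare the empirical channel covariance against the inverse distance.
emp_cov = np.cov(x.T)
iu = np.triu_indices(n_ch, k=1)
r = np.corrcoef(emp_cov[iu], 1.0 / dist[iu])[0, 1]
print("correlation between channel covariance and 1/distance:", round(r, 2))
```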
Levitt, Joshua; Nitenson, Adam; Koyama, Suguru; Heijmans, Lonne; Curry, James; Ross, Jason T; Kamerling, Steven; Saab, Carl Y
2018-06-23
Electroencephalography (EEG) invariably contains extra-cranial artifacts that are commonly dealt with based on qualitative and subjective criteria. Failure to account for EEG artifacts compromises data interpretation. We have developed a quantitative and automated support vector machine (SVM)-based algorithm to accurately classify artifactual EEG epochs in awake rodent, canine, and human subjects. An embodiment of this method also enables the determination of 'eyes open/closed' states in human subjects. The SVM accuracies for artifact classification in humans, Sprague Dawley rats, and beagle dogs were 94.17%, 83.68%, and 85.37%, respectively, whereas 'eyes open/closed' states in humans were labeled with 88.60% accuracy. Each of these results was significantly higher than chance. Comparison with existing methods: other existing methods, such as those dependent on independent component analysis, have not been tested in non-human subjects and require full EEG montages rather than the single channels this method uses. We conclude that our EEG artifact detection algorithm provides a valid and practical solution to a common problem in the quantitative analysis and assessment of EEG in pre-clinical research settings across species. Copyright © 2018. Published by Elsevier B.V.
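A minimal sketch of SVM-based artifact-epoch classification is given below. The epoch features (variance, line length, kurtosis, high-frequency power fraction) and the synthetic labels are assumptions for illustration and are not the feature set of the published algorithm.

```python
import numpy as np
from scipy.stats import kurtosis
from scipy.signal import welch
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

fs = 250
rng = np.random.default_rng(8)

def epoch_features(epoch):
    f, pxx = welch(epoch, fs=fs, nperseg=fs)
    hf_ratio = pxx[f > 30].sum() / pxx.sum()        # muscle-band power fraction
    line_length = np.abs(np.diff(epoch)).sum()
    return [epoch.var(), line_length, kurtosis(epoch), hf_ratio]

# Synthetic single-channel epochs: "clean" (label 0) vs. "artifactual" (label 1).
clean = [rng.standard_normal(fs * 2) for _ in range(80)]
dirty = [rng.standard_normal(fs * 2) * 5 + rng.standard_normal(fs * 2).cumsum() * 0.1
         for _ in range(80)]
X = np.array([epoch_features(e) for e in clean + dirty])
y = np.array([0] * 80 + [1] * 80)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```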
Filter bank common spatial patterns in mental workload estimation.
Arvaneh, Mahnaz; Umilta, Alberto; Robertson, Ian H
2015-01-01
EEG-based workload estimation technology provides a real-time means of assessing mental workload. Such technology can effectively enhance the performance of human-machine interaction and the learning process. When designing workload estimation algorithms, a crucial signal processing component is the feature extraction step. Despite several studies in this field, the spatial properties of the EEG signals have mostly been neglected. Since EEG inherently has poor spatial resolution, features extracted individually from each EEG channel may not be sufficiently efficient. This problem becomes more pronounced when we use low-cost but convenient EEG sensors with limited stability, which is the case in practical scenarios. To address this issue, in this paper we introduce a filter bank common spatial patterns algorithm combined with a feature selection method to extract spatio-spectral features discriminating different mental workload levels. To evaluate the proposed algorithm, we carry out a comparative analysis between two representative types of working memory tasks using data recorded from an Emotiv EPOC headset, which is a mobile low-cost EEG recording device. The experimental results showed that the proposed spatial filtering algorithm outperformed the state-of-the-art algorithms in terms of classification accuracy.
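A sketch of a filter-bank CSP feature extractor with feature selection, in the spirit of the approach above, follows. The band edges, number of CSP components, and the final classifier are illustrative assumptions, and for brevity the spatial filters are fit on all epochs rather than inside the cross-validation loop.

```python
import numpy as np
import mne
from mne.decoding import CSP
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(9)
sfreq = 128
X = rng.standard_normal((60, 14, 4 * sfreq))   # 60 epochs, 14 channels (EPOC-like), 4 s each
y = rng.integers(0, 2, size=60)                # two workload levels (toy labels)

bands = [(4, 8), (8, 12), (12, 16), (16, 24), (24, 32)]
features = []
for l_freq, h_freq in bands:
    # Band-pass filter every epoch, then extract log-variance CSP features per band.
    Xb = mne.filter.filter_data(X.reshape(-1, X.shape[-1]), sfreq,
                                l_freq, h_freq, verbose=False).reshape(X.shape)
    csp = CSP(n_components=4, log=True)
    # Note: a rigorous evaluation would fit the CSP filters inside the CV loop.
    features.append(csp.fit_transform(Xb, y))
features = np.hstack(features)                 # (n_epochs, n_bands * n_components)

clf = make_pipeline(SelectKBest(mutual_info_classif, k=8), LinearDiscriminantAnalysis())
print("cross-validated accuracy:", cross_val_score(clf, features, y, cv=5).mean())
```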
FFT transformed quantitative EEG analysis of short term memory load.
Singh, Yogesh; Singh, Jayvardhan; Sharma, Ratna; Talwar, Anjana
2015-07-01
The EEG is considered a building block of functional signaling in the brain, and the role of EEG oscillations in human information processing has been intensively investigated. The aim was to study the quantitative EEG correlates of short-term memory load as assessed through the Sternberg memory test. The study was conducted on 34 healthy male student volunteers. The intervention consisted of the Sternberg memory test, run as a computerized version of the Sternberg memory scanning paradigm. Electroencephalography (EEG) was recorded from 19 scalp locations according to the international 10-20 system of electrode placement and analyzed offline. To overcome the problems of a fixed-band system, an individual alpha frequency (IAF) based frequency band selection method was adopted. The outcome measures were FFT-derived absolute powers in the six bands at the 19 electrode positions. The Sternberg memory test served as a model of short-term memory load. During the memory task, the EEG showed decreased absolute power in the upper alpha band at nearly all electrode positions, increased power in the theta band over the fronto-temporal region, and increased power in the lower-1 alpha band over the fronto-central region. Lower-2 alpha, beta and gamma band power remained unchanged. Short-term memory load thus has distinct electroencephalographic correlates resembling a mentally stressed state, as evident from the decreased power in the upper alpha band (corresponding to the alpha band of the traditional EEG system), which is the representative band of a relaxed mental state. Fronto-temporal theta power changes may reflect the encoding and execution of the memory task.
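The IAF-anchored band definitions can be sketched as below, using a Welch spectrum to locate the individual alpha peak and defining bands relative to it. The band offsets are assumptions for illustration; the study's exact boundaries may differ.

```python
import numpy as np
from scipy.signal import welch

def iaf_band_powers(x, fs):
    f, pxx = welch(x, fs=fs, nperseg=4 * fs)
    alpha = (f >= 7) & (f <= 13)
    iaf = f[alpha][np.argmax(pxx[alpha])]          # individual alpha frequency (spectral peak)
    bands = {                                      # offsets relative to the IAF (assumed)
        "theta":         (iaf - 6, iaf - 4),
        "lower-1 alpha": (iaf - 4, iaf - 2),
        "lower-2 alpha": (iaf - 2, iaf),
        "upper alpha":   (iaf, iaf + 2),
        "beta":          (iaf + 2, 30.0),
        "gamma":         (30.0, 45.0),
    }
    df = f[1] - f[0]
    powers = {name: pxx[(f >= lo) & (f < hi)].sum() * df   # absolute band power
              for name, (lo, hi) in bands.items()}
    return iaf, powers

fs = 256
t = np.arange(0, 60, 1 / fs)
eeg = np.sin(2 * np.pi * 10.2 * t) + 0.3 * np.random.default_rng(10).standard_normal(t.size)
iaf, powers = iaf_band_powers(eeg, fs)
print("IAF:", round(iaf, 1), "Hz;", {k: round(v, 3) for k, v in powers.items()})
```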
Bennett, Cambell; Voss, Logan J; Barnard, John P M; Sleigh, James W
2009-08-01
Quantitative electroencephalogram (qEEG) monitors are often used to estimate depth of anesthesia and intraoperative recall during general anesthesia. As with any monitor, the processed numerical output is often misleading and has to be interpreted within a clinical context. For the safe clinical use of these monitors, a clear mental picture of the expected raw electroencephalogram (EEG) patterns, as well as a knowledge of the common EEG artifacts, is absolutely necessary. This has provided the motivation to write this tutorial. We describe, and give examples of, the typical EEG features of adequate general anesthesia, effects of noxious stimulation, and adjunctive drugs. Artifacts are commonly encountered and may be classified as arising from outside the head, from the head but outside the brain (commonly frontal electromyogram), or from within the brain (atypical or pathologic). We include real examples of clinical problem-solving processes. In particular, it is important to realize that an artifactually high qEEG index is relatively common and may result in dangerous anesthetic drug overdose. The anesthesiologist must be certain that the qEEG number is consistent with the apparent state of the patient, the doses of various anesthetic drugs, and the degree of surgical stimulation, and that the qEEG number is consistent with the appearance of the raw EEG signal. Any discrepancy must be a stimulus for the immediate critical examination of the patient's state using all the available information rather than reactive therapy to "treat" a number.
NASA Astrophysics Data System (ADS)
Pchelintseva, Svetlana V.; Runnova, Anastasia E.; Musatov, Vyacheslav Yu.; Hramov, Alexander E.
2017-03-01
In this paper we study the problem of recognizing the type of a perceived object from the presented pattern and the registered EEG data. The EEG recorded while the Necker cube is displayed characterizes the corresponding state of brain activity. As the stimulus we use the bistable Necker cube image; the subject interprets it either as a left-oriented or a right-oriented cube. To solve the recognition problem, we use artificial neural networks; to create the classifier we consider a multilayer perceptron. We examine the structure of the artificial neural network and determine the cube recognition accuracy.
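A minimal sketch of the classification step, a multilayer perceptron applied to EEG feature vectors labeled by the reported percept, is shown below with synthetic placeholders for the features.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(11)
X = rng.standard_normal((200, 30))            # e.g. band powers from several channels (toy)
y = rng.integers(0, 2, size=200)              # 0 = left-oriented, 1 = right-oriented percept

mlp = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000, random_state=0)
print("cross-validated accuracy:", cross_val_score(mlp, X, y, cv=5).mean())
```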
Molina, Vicente; Bachiller, Alejandro; Gomez-Pilar, Javier; Lubeiro, Alba; Hornero, Roberto; Cea-Cañas, Benjamín; Valcárcel, César; Haidar, Mahmoun-Karim; Poza, Jesús
2018-05-01
Spectral entropy (SE) is a measure from information theory that provides an estimate of EEG regularity and may be useful as a summary of its spectral properties. Previous studies using small samples reported a deficit of EEG entropy modulation in schizophrenia during cognitive activity. The present study aims to replicate this finding in a larger sample, to explore its cognitive and clinical correlates, and to rule out antipsychotic treatment as the main source of that deficit. We included 64 schizophrenia patients (21 first episodes, FE) and 65 healthy controls. We computed SE during performance of an odd-ball paradigm, in the windows prior to (-300 to 0 ms) and following (150 to 450 ms) stimulus presentation. Modulation of SE was defined as the difference between the post- and pre-stimulus windows. In comparison to controls, patients showed a deficit of SE modulation over frontal and central regions, which was also shown by FE patients. Baseline SE did not differ between patients and controls. The modulation deficit was directly associated with cognitive deficits and negative symptoms, and inversely with positive symptoms. SE modulation was not related to antipsychotic doses. Patients also showed a smaller change in the median frequency (i.e., less slowing of oscillatory activity) of the EEG from the pre- to the post-stimulus window. These results support the idea that a deficit of fast entropy modulation contributes to cognitive deficits and symptoms in schizophrenia patients. Copyright © 2017 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haber, Eldad
2014-03-17
The focus of the research was: developing adaptive meshes for the solution of Maxwell's equations; developing a parallel framework for time-dependent inverse Maxwell's equations; developing multilevel methods for optimization problems with inequality constraints; a new inversion code for inverse Maxwell's equations at the 0th frequency (DC resistivity); and a new inversion code for inverse Maxwell's equations in the low-frequency regime. Although the research concentrated on electromagnetic forward and inverse problems, the results were also applied to the problem of image registration.
Mousavi, Seyed Mortaza; Adamoğlu, Ahmet; Demiralp, Tamer; Shayesteh, Mahrokh G
2014-01-01
Awareness during general anesthesia, because of its serious psychological effects on patients and the legal problems it raises for anesthetists, has been an important challenge during past decades. Monitoring the depth of anesthesia is a fundamental solution to this problem. The induction of anesthesia alters the frequency content and mean amplitude of the electroencephalogram (EEG) and its phase couplings. We analyzed EEG changes in the phase coupling between the delta and alpha subbands using a new algorithm for measuring the depth of general anesthesia based on the complex wavelet transform (CWT) in patients anesthetized with Propofol. The entropy and histogram of the modulated signals were calculated, taking bispectral index (BIS) values as the reference. Mann-Whitney U tests showed that the entropies corresponding to different BIS intervals have different continuous distributions. The results demonstrated that there is phase coupling between the 3-4 Hz (delta) and 8-9 Hz (alpha) subbands, and that these changes are seen best at EEG channel T7. Moreover, when BIS values increase, the entropy of the modulated signal also increases, and vice versa. In addition, measuring the phase coupling between the delta and alpha subbands of the EEG through continuous CWT analysis reveals the depth-of-anesthesia level. As a result, awareness during anesthesia can be prevented.
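A hedged sketch of cross-frequency phase-phase coupling between the delta (3-4 Hz) and alpha (8-9 Hz) subbands follows. It extracts band-limited phases with the Hilbert transform rather than the complex wavelet transform used in the study, and uses an n:m = 2:1 phase-locking value as an illustrative coupling measure.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def band_phase(x, fs, lo, hi):
    sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
    return np.angle(hilbert(sosfiltfilt(sos, x)))

def nm_phase_locking(x, fs, n=2, m=1):
    """n:m phase-locking value between the delta (3-4 Hz) and alpha (8-9 Hz) phases."""
    phi_delta = band_phase(x, fs, 3.0, 4.0)
    phi_alpha = band_phase(x, fs, 8.0, 9.0)
    return np.abs(np.mean(np.exp(1j * (n * phi_delta - m * phi_alpha))))

fs = 250
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(12)
# Toy signal with a phase-locked 4 Hz / 8 Hz pair plus broadband noise.
eeg = (np.sin(2 * np.pi * 4.0 * t) + np.sin(2 * np.pi * 8.0 * t)
       + 0.5 * rng.standard_normal(t.size))
print("n:m phase-locking value:", round(nm_phase_locking(eeg, fs), 2))
```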
Easy way to determine quantitative spatial resolution distribution for a general inverse problem
NASA Astrophysics Data System (ADS)
An, M.; Feng, M.
2013-12-01
Computing the spatial resolution of a solution is nontrivial and often more difficult than solving the inverse problem itself. Most geophysical studies, except for tomographic studies, almost uniformly neglect the calculation of a practical spatial resolution. In seismic tomography, a qualitative resolution length can be indicated via visual inspection of the restoration of a synthetic structure (e.g., checkerboard tests). An effective strategy for obtaining a quantitative resolution length is to calculate Backus-Gilbert resolution kernels (also referred to as a resolution matrix) by matrix operations. However, not all resolution matrices can provide resolution length information, and the computation of a resolution matrix is often difficult for very large inverse problems. A new class of resolution matrices, the statistical resolution matrices (An, 2012, GJI), can be determined directly via a simple one-parameter nonlinear inversion performed on a limited number of pairs of random synthetic models and their inverse solutions. The procedure is restricted to the forward/inversion processes used in the real inverse problem and is independent of the degree of inversion skill used to obtain the solution. Spatial resolution lengths can be given directly during the inversion. Tests on 1D/2D/3D model inversions demonstrated that this simple method is valid at least for general linear inverse problems.
NASA Astrophysics Data System (ADS)
Rahmouni, Lyes; Mitharwal, Rajendra; Andriulli, Francesco P.
2017-11-01
This work presents two new volume integral equations for the Electroencephalography (EEG) forward problem which, differently from the standard integral approaches in the domain, can handle heterogeneities and anisotropies of the head/brain conductivity profiles. The new formulations translate to the quasi-static regime some volume integral equation strategies that have been successfully applied to high frequency electromagnetic scattering problems. This has been obtained by extending, to the volume case, the two classical surface integral formulations used in EEG imaging and by introducing an extra surface equation, in addition to the volume ones, to properly handle boundary conditions. Numerical results corroborate theoretical treatments, showing the competitiveness of our new schemes over existing techniques and qualifying them as a valid alternative to differential equation based methods.
A statistically robust EEG re-referencing procedure to mitigate reference effect
Lepage, Kyle Q.; Kramer, Mark A.; Chu, Catherine J.
2014-01-01
Background: The electroencephalogram (EEG) remains the primary tool for diagnosis of abnormal brain activity in clinical neurology and for in vivo recordings of human neurophysiology in neuroscience research. In EEG data acquisition, voltage is measured at positions on the scalp with respect to a reference electrode. When this reference electrode responds to electrical activity or artifact, all electrodes are affected. Successful analysis of EEG data often involves re-referencing procedures that modify the recorded traces and seek to minimize the impact of reference electrode activity upon functions of the original EEG recordings. New method: We provide a novel, statistically robust procedure that adapts a robust maximum-likelihood type estimator to the problem of reference estimation, reduces the influence of neural activity on the re-referencing operation, and maintains good performance in a wide variety of empirical scenarios. Results: The performance of the proposed and existing re-referencing procedures is validated in simulation and with examples of EEG recordings. To facilitate this comparison, channel-to-channel correlations are investigated theoretically and in simulation. Comparison with existing methods: The proposed procedure avoids using data contaminated by neural signal and remains unbiased in recording scenarios where physical references, the common average reference (CAR) and the reference estimation standardization technique (REST) are not optimal. Conclusion: The proposed procedure is simple, fast, and avoids the potential for substantial bias when analyzing low-density EEG data. PMID:24975291
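As an illustration of the general idea only (not the authors' estimator), the following minimal numpy sketch computes a Huber-weighted robust estimate of the common reference across channels and subtracts it; the Huber threshold, iteration count and toy data are illustrative assumptions.

```python
import numpy as np

def huber_reference(eeg, delta=1.0, n_iter=20):
    """Robust (Huber-type) estimate of the common reference per time sample.

    eeg : (n_channels, n_samples) array of reference-contaminated EEG.
    delta : Huber threshold in units of the robust scale (MAD).
    Returns the estimated reference time series of shape (n_samples,).
    """
    ref = np.median(eeg, axis=0)                       # robust starting point
    for _ in range(n_iter):
        resid = eeg - ref                              # deviation of each channel
        scale = np.median(np.abs(resid), axis=0) / 0.6745 + 1e-12
        z = np.abs(resid) / scale
        # Huber weights: full weight inside the threshold, down-weighted outside
        w = np.minimum(1.0, delta / np.maximum(z, 1e-12))
        ref = np.sum(w * eeg, axis=0) / np.sum(w, axis=0)
    return ref

# toy usage: 32 channels sharing a drifting reference, plus one very noisy channel
rng = np.random.default_rng(0)
true_ref = np.cumsum(rng.normal(0, 0.05, 1000))
eeg = rng.normal(0, 1.0, (32, 1000)) + true_ref
eeg[5] += rng.normal(0, 20.0, 1000)                    # heavily contaminated channel
cleaned = eeg - huber_reference(eeg)                   # robust re-referencing
```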
Fast, Nonlinear, Fully Probabilistic Inversion of Large Geophysical Problems
NASA Astrophysics Data System (ADS)
Curtis, A.; Shahraeeni, M.; Trampert, J.; Meier, U.; Cho, G.
2010-12-01
Almost all Geophysical inverse problems are in reality nonlinear. Fully nonlinear inversion including non-approximated physics, and solving for probability distribution functions (pdf’s) that describe the solution uncertainty, generally requires sampling-based Monte-Carlo style methods that are computationally intractable in most large problems. In order to solve such problems, physical relationships are usually linearized leading to efficiently-solved, (possibly iterated) linear inverse problems. However, it is well known that linearization can lead to erroneous solutions, and in particular to overly optimistic uncertainty estimates. What is needed across many Geophysical disciplines is a method to invert large inverse problems (or potentially tens of thousands of small inverse problems) fully probabilistically and without linearization. This talk shows how very large nonlinear inverse problems can be solved fully probabilistically and incorporating any available prior information using mixture density networks (driven by neural network banks), provided the problem can be decomposed into many small inverse problems. In this talk I will explain the methodology, compare multi-dimensional pdf inversion results to full Monte Carlo solutions, and illustrate the method with two applications: first, inverting surface wave group and phase velocities for a fully-probabilistic global tomography model of the Earth’s crust and mantle, and second inverting industrial 3D seismic data for petrophysical properties throughout and around a subsurface hydrocarbon reservoir. The latter problem is typically decomposed into 10^4 to 10^5 individual inverse problems, each solved fully probabilistically and without linearization. The results in both cases are sufficiently close to the Monte Carlo solution to exhibit realistic uncertainty, multimodality and bias. This provides far greater confidence in the results, and in decisions made on their basis.
Robinson, Katherine M; Ninowski, Jerilyn E
2003-12-01
Problems of the form a + b - b have been used to assess conceptual understanding of the relationship between addition and subtraction. No study has investigated the same relationship between multiplication and division on problems of the form d x e / e. In both types of inversion problems, no calculation is required if the inverse relationship between the operations is understood. Adult participants solved addition/subtraction and multiplication/division inversion (e.g., 9 x 22 / 22) and standard (e.g., 2 + 27 - 28) problems. Participants started to use the inversion strategy earlier and more frequently on addition/subtraction problems. Participants took longer to solve both types of multiplication/division problems. Overall, conceptual understanding of the relationship between multiplication and division was not as strong as that between addition and subtraction. One explanation for this difference in performance is that the operation of division is more weakly represented and understood than the other operations and that this weakness affects performance on problems of the form d x e / e.
EEG-guided meditation: A personalized approach.
Fingelkurts, Andrew A; Fingelkurts, Alexander A; Kallio-Tamminen, Tarja
2015-12-01
The therapeutic potential of meditation for physical and mental well-being is well documented; however, the possibility of adverse effects warrants further discussion of the suitability of any particular meditation practice for every given participant. This concern highlights the need for a personalized approach in which the meditation practice is tailored to the individual. This can be done by using an objective screening procedure that detects the weak and strong cognitive skills in brain function, thus helping design a tailored meditation training protocol. Quantitative electroencephalogram (qEEG) is a suitable tool that allows identification of individual neurophysiological types. Using qEEG screening can aid in developing a meditation training program that maximizes results and minimizes risk of potential negative effects. This brief theoretical-conceptual review provides a discussion of the problem and presents some illustrative results on the usage of qEEG screening for the guidance of meditation personalization. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Reiter, D. T.; Rodi, W. L.
2015-12-01
Constructing 3D Earth models through the joint inversion of large geophysical data sets presents numerous theoretical and practical challenges, especially when diverse types of data and model parameters are involved. Among the challenges are the computational complexity associated with large data and model vectors and the need to unify differing model parameterizations, forward modeling methods and regularization schemes within a common inversion framework. The challenges can be addressed in part by decomposing the inverse problem into smaller, simpler inverse problems that can be solved separately, providing one knows how to merge the separate inversion results into an optimal solution of the full problem. We have formulated an approach to the decomposition of large inverse problems based on the augmented Lagrangian technique from optimization theory. As commonly done, we define a solution to the full inverse problem as the Earth model minimizing an objective function motivated, for example, by a Bayesian inference formulation. Our decomposition approach recasts the minimization problem equivalently as the minimization of component objective functions, corresponding to specified data subsets, subject to the constraints that the minimizing models be equal. A standard optimization algorithm solves the resulting constrained minimization problems by alternating between the separate solution of the component problems and the updating of Lagrange multipliers that serve to steer the individual solution models toward a common model solving the full problem. We are applying our inversion method to the reconstruction of the crust and upper-mantle seismic velocity structure across Eurasia. Data for the inversion comprise a large set of P and S body-wave travel times and fundamental and first-higher mode Rayleigh-wave group velocities.
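The decomposition idea can be illustrated with a consensus-style augmented Lagrangian iteration on toy linear sub-problems; this is a generic sketch of the alternating scheme described above, not the authors' seismic implementation, and all sizes and names are illustrative.

```python
import numpy as np

def consensus_inversion(problems, n_outer=100, rho=1.0):
    """Alternate between component inversions and Lagrange multiplier updates.

    problems : list of (A, d) pairs, one linear(ized) sub-problem per data subset,
               each contributing the objective ||A m - d||^2.
    Returns the consensus model toward which all sub-problem models are steered.
    """
    n = problems[0][0].shape[1]
    z = np.zeros(n)                        # consensus (full-problem) model
    m = [np.zeros(n) for _ in problems]    # per-subset models
    u = [np.zeros(n) for _ in problems]    # scaled Lagrange multipliers
    for _ in range(n_outer):
        # 1) solve each component problem with a quadratic coupling to the consensus
        for k, (A, d) in enumerate(problems):
            lhs = A.T @ A + rho * np.eye(n)
            rhs = A.T @ d + rho * (z - u[k])
            m[k] = np.linalg.solve(lhs, rhs)
        # 2) update the consensus model (average of the coupled solutions)
        z = np.mean([m[k] + u[k] for k in range(len(problems))], axis=0)
        # 3) update the multipliers enforcing m_k = z
        for k in range(len(problems)):
            u[k] += m[k] - z
    return z

# toy usage: two data subsets observing the same 3-parameter model
rng = np.random.default_rng(1)
m_true = np.array([1.0, -2.0, 0.5])
subs = []
for _ in range(2):
    A = rng.normal(size=(20, 3))
    subs.append((A, A @ m_true + rng.normal(0, 0.01, 20)))
print(consensus_inversion(subs))       # should be close to m_true
```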
Physiological artifacts in scalp EEG and ear-EEG.
Kappel, Simon L; Looney, David; Mandic, Danilo P; Kidmose, Preben
2017-08-11
A problem inherent to recording EEG is the interference arising from noise and artifacts. While in a laboratory environment, artifacts and interference can, to a large extent, be avoided or controlled, in real-life scenarios this is a challenge. Ear-EEG is a concept where EEG is acquired from electrodes in the ear. We present a characterization of physiological artifacts generated in a controlled environment for nine subjects. The influence of the artifacts was quantified in terms of the signal-to-noise ratio (SNR) deterioration of the auditory steady-state response. Alpha band modulation was also studied in an open/closed eyes paradigm. Artifacts related to jaw muscle contractions were present all over the scalp and in the ear, with the highest SNR deteriorations in the gamma band. The SNR deterioration for jaw artifacts was in general higher in the ear compared to the scalp. Whereas eye-blinking did not influence the SNR in the ear, it was significant for all groups of scalp electrodes in the delta and theta bands. Eye movements resulted in statistically significant SNR deterioration in frontal, temporal, and ear electrodes. Recordings of alpha band modulation showed increased power and coherence of the EEG for ear and scalp electrodes in the closed-eyes periods. Ear-EEG is a method developed for unobtrusive and discreet recording over long periods of time and in real-life environments. This study investigated the influence of the most important types of physiological artifacts, and demonstrated that spontaneous activity, in terms of alpha band oscillations, could be recorded from the ear-EEG platform. In its present form, ear-EEG was more prone to jaw-related artifacts and less prone to eye-blinking artifacts compared to state-of-the-art scalp-based systems.
Spinnato, J; Roubaud, M-C; Burle, B; Torrésani, B
2015-06-01
The main goal of this work is to develop a model for multisensor signals, such as magnetoencephalography (MEG) or electroencephalography (EEG) signals, that accounts for inter-trial variability and is suitable for the corresponding binary classification problems. An important constraint is that the model be simple enough to handle small-size and unbalanced datasets, as often encountered in BCI-type experiments. The method involves the linear mixed effects statistical model, wavelet transform, and spatial filtering, and aims at the characterization of localized discriminant features in multisensor signals. After discrete wavelet transform and spatial filtering, a projection onto the relevant wavelet and spatial-channel subspaces is used for dimension reduction. The projected signals are then decomposed as the sum of a signal of interest (i.e., discriminant) and background noise, using a very simple Gaussian linear mixed model. Thanks to the simplicity of the model, the corresponding parameter estimation problem becomes tractable. Robust estimates of class-covariance matrices are obtained from small sample sizes and an effective Bayes plug-in classifier is derived. The approach is applied to the detection of error potentials in multichannel EEG data in a very unbalanced situation (detection of rare events). Classification results prove the relevance of the proposed approach in such a context. The combination of the linear mixed model, wavelet transform and spatial filtering for EEG classification is, to the best of our knowledge, an original approach, which is proven to be effective. This paper improves upon earlier results on similar problems, and the three main ingredients all play an important role.
Multi-subject subspace alignment for non-stationary EEG-based emotion recognition.
Chai, Xin; Wang, Qisong; Zhao, Yongping; Liu, Xin; Liu, Dan; Bai, Ou
2018-01-01
Emotion recognition based on EEG signals is a critical component in Human-Machine collaborative environments and psychiatric health diagnoses. However, EEG patterns have been found to vary across subjects due to user fatigue, different electrode placements, and varying impedances, among other factors. This problem renders the performance of EEG-based emotion recognition highly specific to subjects, requiring time-consuming individual calibration sessions to adapt an emotion recognition system to new subjects. Recently, domain adaptation (DA) strategies have achieved a great deal of success in dealing with inter-subject adaptation. However, most of them can only adapt one subject to another subject, which limits their applicability in real-world scenarios. To alleviate this issue, a novel unsupervised DA strategy called Multi-Subject Subspace Alignment (MSSA) is proposed in this paper, which takes advantage of the subspace alignment solution and multi-subject information in a unified framework to build personalized models without user-specific labeled data. Experiments on a public EEG dataset known as SEED verify the effectiveness and superiority of MSSA over other state-of-the-art methods for dealing with multi-subject scenarios.
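MSSA itself is not reproduced here, but the basic two-domain subspace alignment step it builds on can be sketched as follows; the PCA dimension and the synthetic "subjects" are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

def subspace_alignment(X_src, X_tgt, d=10):
    """Align the source PCA subspace with the target PCA subspace.

    X_src, X_tgt : (n_trials, n_features) EEG feature matrices
    (assumed roughly centred). Returns both sets of features projected
    into the aligned d-dimensional subspace.
    """
    P_s = PCA(n_components=d).fit(X_src).components_.T   # (features, d)
    P_t = PCA(n_components=d).fit(X_tgt).components_.T
    M = P_s.T @ P_t                                      # alignment matrix
    Z_src = X_src @ P_s @ M       # source features mapped into the target subspace
    Z_tgt = X_tgt @ P_t
    return Z_src, Z_tgt

# toy usage: source "subject" features vs. a shifted target "subject";
# a classifier trained on Z_src can then be applied to Z_tgt
rng = np.random.default_rng(2)
X_src = rng.normal(size=(200, 64))
X_tgt = rng.normal(size=(150, 64)) + 0.5
Z_src, Z_tgt = subspace_alignment(X_src, X_tgt, d=10)
```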
Automatic and Direct Identification of Blink Components from Scalp EEG
Kong, Wanzeng; Zhou, Zhanpeng; Hu, Sanqing; Zhang, Jianhai; Babiloni, Fabio; Dai, Guojun
2013-01-01
Eye blink is an important and inevitable artifact during scalp electroencephalogram (EEG) recording. The main problem in EEG signal processing is how to identify eye blink components automatically with independent component analysis (ICA). Taking into account the fact that the eye blink, as an external source, has a higher sum of correlation with frontal EEG channels than all other sources due to both its location and significant amplitude, in this paper we propose a method based on a correlation index and the feature of power distribution to automatically detect eye blink components. Furthermore, we prove mathematically that the correlation between independent components and scalp EEG channels can be obtained directly from the mixing matrix of ICA. This helps to simplify calculations and understand the implications of the correlation. The proposed method does not require selecting a template or thresholds in advance, and it works without simultaneously recording an electrooculography (EOG) reference. The experimental results demonstrate that the proposed method can automatically recognize eye blink components with high accuracy on entire datasets from 15 subjects. PMID:23959240
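A minimal sketch of the ranking idea, assuming approximately uncorrelated ICA sources so that channel-component correlations can be read off the mixing matrix (as the abstract argues); the frontal channel indices, the synthetic "blink" and the scikit-learn FastICA settings are illustrative, not the authors' exact procedure.

```python
import numpy as np
from sklearn.decomposition import FastICA

def blink_component_scores(eeg, frontal_idx):
    """Rank ICA components by their summed |correlation| with frontal channels.

    eeg : (n_channels, n_samples) array; frontal_idx : indices of the frontal
    channels in the montage at hand (e.g. Fp1/Fp2), chosen here for illustration.
    """
    ica = FastICA(random_state=0)
    sources = ica.fit_transform(eeg.T)         # (n_samples, n_components)
    A = ica.mixing_                            # (n_channels, n_components)
    # For (approximately) uncorrelated sources, cov(channel i, source j) equals
    # A[i, j] * var(source j), so the correlation follows from the mixing matrix
    # and the channel/source standard deviations alone.
    corr = A * sources.std(axis=0) / eeg.std(axis=1, keepdims=True)
    score = np.abs(corr[frontal_idx]).sum(axis=0)
    return np.argsort(score)[::-1], sources    # most blink-like components first

# toy usage: 8 channels with a shared transient on the two "frontal" channels
rng = np.random.default_rng(10)
eeg = rng.normal(size=(8, 1000))
eeg[:2] += 5 * np.exp(-0.5 * ((np.arange(1000) - 500) / 20.0) ** 2)
ranking, sources = blink_component_scores(eeg, frontal_idx=[0, 1])
```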
Rewards-driven control of robot arm by decoding EEG signals.
Tanwani, Ajay Kumar; del R Millan, Jose; Billard, Aude
2014-01-01
Decoding the user intention from non-invasive EEG signals is a challenging problem. In this paper, we study the feasibility of predicting the goal for controlling the robot arm in self-paced reaching movements, i.e., spontaneous movements that do not require an external cue. Our proposed system continuously estimates the goal throughout a trial, starting before movement onset, by online classification, and generates optimal trajectories for driving the robot arm to the estimated goal. Experiments using EEG signals of one healthy subject (right arm) yielded smooth reaching movements of a simulated 7-degrees-of-freedom KUKA robot arm in a planar center-out reaching task, with approximately 80% accuracy of reaching the actual goal.
Use of Multiscale Entropy to Facilitate Artifact Detection in Electroencephalographic Signals
Mariani, Sara; Borges, Ana F. T.; Henriques, Teresa; Goldberger, Ary L.; Costa, Madalena D.
2016-01-01
Electroencephalographic (EEG) signals present a myriad of challenges to analysis, beginning with the detection of artifacts. Prior approaches to noise detection have utilized multiple techniques, including visual methods, independent component analysis and wavelets. However, no single method is broadly accepted, inviting alternative ways to address this problem. Here, we introduce a novel approach based on a statistical physics method, multiscale entropy (MSE) analysis, which quantifies the complexity of a signal. We postulate that noise-corrupted EEG signals have lower information content, and, therefore, reduced complexity compared with their noise-free counterparts. We test the new method on an open-access database of EEG signals with and without added artifacts due to electrode motion. PMID:26738116
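A minimal sketch of multiscale entropy (coarse-graining followed by sample entropy); the embedding dimension m = 2 and tolerance r = 0.15 x std are common defaults, not necessarily those used in the study, and the brute-force pairwise matching is only suitable for short segments.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.15):
    """Sample entropy of a 1-D signal with tolerance r * std(x)."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()

    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        # Chebyshev distance between all template pairs (O(N^2) memory)
        d = np.max(np.abs(templates[:, None] - templates[None, :]), axis=2)
        n = len(templates)
        return (np.sum(d <= tol) - n) / 2          # matching pairs, excluding self

    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def multiscale_entropy(x, max_scale=20, m=2, r=0.15):
    """MSE curve: sample entropy of coarse-grained versions of the signal."""
    mse = []
    for tau in range(1, max_scale + 1):
        n = (len(x) // tau) * tau
        coarse = x[:n].reshape(-1, tau).mean(axis=1)   # non-overlapping averages
        mse.append(sample_entropy(coarse, m, r))
    return np.array(mse)

# toy usage: MSE of a short white-noise segment (keep segments short)
rng = np.random.default_rng(11)
print(multiscale_entropy(rng.normal(size=800), max_scale=5))
```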
Tenke, Craig E.; Kayser, Jürgen
2012-01-01
The topographic ambiguity and reference-dependency that has plagued EEG/ERP research throughout its history are largely attributable to volume conduction, which may be concisely described by a vector form of Ohm’s Law. This biophysical relationship is common to popular algorithms that infer neuronal generators via inverse solutions. It may be further simplified as Poisson’s source equation, which identifies underlying current generators from estimates of the second spatial derivative of the field potential (Laplacian transformation). Intracranial current source density (CSD) studies have dissected the “cortical dipole” into intracortical sources and sinks, corresponding to physiologically-meaningful patterns of neuronal activity at a sublaminar resolution, much of which is locally cancelled (i.e., closed field). By virtue of the macroscopic scale of the scalp-recorded EEG, a surface Laplacian reflects the radial projections of these underlying currents, representing a unique, unambiguous measure of neuronal activity at scalp. Although the surface Laplacian requires minimal assumptions compared to complex, model-sensitive inverses, the resulting waveform topographies faithfully summarize and simplify essential constraints that must be placed on putative generators of a scalp potential topography, even if they arise from deep or partially-closed fields. CSD methods thereby provide a global empirical and biophysical context for generator localization, spanning scales from intracortical to scalp recordings. PMID:22796039
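The spherical-spline surface Laplacian used in CSD work is more involved; the following is a minimal nearest-neighbour (Hjorth-style) approximation that conveys the idea of estimating the second spatial derivative of the scalp potential, with a purely illustrative neighbour table.

```python
import numpy as np

def hjorth_laplacian(eeg, neighbors):
    """Nearest-neighbour approximation of the surface Laplacian.

    eeg : (n_channels, n_samples) scalp potentials.
    neighbors : dict mapping each channel index to its neighbouring indices.
    Each Laplacian channel is the potential minus the mean of its neighbours,
    i.e. a discrete estimate of the (negative) second spatial derivative.
    """
    lap = np.empty_like(eeg, dtype=float)
    for ch, nbrs in neighbors.items():
        lap[ch] = eeg[ch] - eeg[list(nbrs)].mean(axis=0)
    return lap

# toy usage with an illustrative 5-channel montage (indices, not real labels)
rng = np.random.default_rng(4)
eeg = rng.normal(size=(5, 1000))
neighbors = {0: [1, 2], 1: [0, 3], 2: [0, 4], 3: [1, 4], 4: [2, 3]}
csd_like = hjorth_laplacian(eeg, neighbors)
```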
Children's Understanding of the Arithmetic Concepts of Inversion and Associativity
ERIC Educational Resources Information Center
Robinson, Katherine M.; Ninowski, Jerilyn E.; Gray, Melissa L.
2006-01-01
Previous studies have shown that even preschoolers can solve inversion problems of the form a + b - b by using the knowledge that addition and subtraction are inverse operations. In this study, a new type of inversion problem of the form d x e [divided by] e was also examined. Grade 6 and 8 students solved inversion problems of both types as well…
Michels, Lars; Muthuraman, Muthuraman; Anwar, Abdul R.; Kollias, Spyros; Leh, Sandra E.; Riese, Florian; Unschuld, Paul G.; Siniatchkin, Michael; Gietl, Anton F.; Hock, Christoph
2017-01-01
The assessment of effects associated with cognitive impairment using electroencephalography (EEG) power mapping allows the visualization of frequency-band specific local changes in oscillatory activity. In contrast, measures of coherence and dynamic source synchronization allow for the study of functional and effective connectivity, respectively. Yet, these measures have rarely been assessed in parallel in the context of mild cognitive impairment (MCI) and furthermore it has not been examined if they are related to risk factors of Alzheimer’s disease (AD) such as amyloid deposition and apolipoprotein ε4 (ApoE) allele occurrence. Here, we investigated functional and directed connectivities with Renormalized Partial Directed Coherence (RPDC) in 17 healthy controls (HC) and 17 participants with MCI. Participants underwent ApoE-genotyping and Pittsburgh compound B positron emission tomography (PiB-PET) to assess amyloid deposition. We observed lower spectral source power in MCI in the alpha and beta bands. Coherence was stronger in HC than MCI across different neuronal sources in the delta, theta, alpha, beta and gamma bands. The directed coherence analysis indicated lower information flow between fronto-temporal (including the hippocampus) sources and unidirectional connectivity in MCI. In MCI, alpha and beta RPDC showed an inverse correlation to age and gender; global amyloid deposition was inversely correlated to alpha coherence, RPDC and beta and gamma coherence. Furthermore, the ApoE status was negatively correlated to alpha coherence and RPDC, beta RPDC and gamma coherence. A classification analysis of cognitive state revealed the highest accuracy using EEG power, coherence and RPDC as input. For this small but statistically robust (Bayesian power analyses) sample, our results suggest that resting EEG related functional and directed connectivities are sensitive to the cognitive state and are linked to ApoE and amyloid burden. PMID:29081745
NASA Technical Reports Server (NTRS)
Sabatier, P. C.
1972-01-01
The progressive realization of the consequences of nonuniqueness implies an evolution of both the methods and the centers of interest in inverse problems. This evolution is schematically described together with the various mathematical methods used. A comparative description is given of inverse methods in scientific research, with examples taken from mathematics, quantum and classical physics, seismology, transport theory, radiative transfer, electromagnetic scattering, electrocardiology, etc. It is hoped that this paper will pave the way for an interdisciplinary study of inverse problems.
Saletu, B; Anderer, P; Saletu-Zyhlarz, G M; Arnold, O; Pascual-Marqui, R D
2002-01-01
Utilizing computer-assisted quantitative analyses of human scalp-recorded electroencephalogram (EEG) in combination with certain statistical procedures (quantitative pharmaco-EEG) and mapping techniques (pharmaco-EEG mapping), it is possible to classify psychotropic substances and objectively evaluate their bioavailability at the target organ: the human brain. Specifically, one may determine at an early stage of drug development whether a drug is effective on the central nervous system (CNS) compared with placebo, what its clinical efficacy will be like, at which dosage it acts, when it acts and the equipotent dosages of different galenic formulations. Pharmaco-EEG profiles and maps of neuroleptics, antidepressants, tranquilizers, hypnotics, psychostimulants and nootropics/cognition-enhancing drugs will be described in this paper. Methodological problems, as well as the relationships between acute and chronic drug effects, alterations in normal subjects and patients, CNS effects, therapeutic efficacy and pharmacokinetic and pharmacodynamic data will be discussed. In recent times, imaging of drug effects on the regional brain electrical activity of healthy subjects by means of EEG tomography such as low-resolution electromagnetic tomography (LORETA) has been used for identifying brain areas predominantly involved in psychopharmacological action. This will be demonstrated for the representative drugs of the four main psychopharmacological classes, such as 3 mg haloperidol for neuroleptics, 20 mg citalopram for antidepressants, 2 mg lorazepam for tranquilizers and 20 mg methylphenidate for psychostimulants. LORETA demonstrates that these psychopharmacological classes affect brain structures differently.
Green, Jessica J; Boehler, Carsten N; Roberts, Kenneth C; Chen, Ling-Chia; Krebs, Ruth M; Song, Allen W; Woldorff, Marty G
2017-08-16
Visual spatial attention has been studied in humans with both electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) individually. However, due to the intrinsic limitations of each of these methods used alone, our understanding of the systems-level mechanisms underlying attentional control remains limited. Here, we examined trial-to-trial covariations of concurrently recorded EEG and fMRI in a cued visual spatial attention task in humans, which allowed delineation of both the generators and modulators of the cue-triggered event-related oscillatory brain activity underlying attentional control function. The fMRI activity in visual cortical regions contralateral to the cued direction of attention covaried positively with occipital gamma-band EEG, consistent with activation of cortical regions representing attended locations in space. In contrast, fMRI activity in ipsilateral visual cortical regions covaried inversely with occipital alpha-band oscillations, consistent with attention-related suppression of the irrelevant hemispace. Moreover, the pulvinar nucleus of the thalamus covaried with both of these spatially specific, attention-related, oscillatory EEG modulations. Because the pulvinar's neuroanatomical geometry makes it unlikely to be a direct generator of the scalp-recorded EEG, these covariational patterns appear to reflect the pulvinar's role as a regulatory control structure, sending spatially specific signals to modulate visual cortex excitability proactively. Together, these combined EEG/fMRI results illuminate the dynamically interacting cortical and subcortical processes underlying spatial attention, providing important insight not realizable using either method alone. SIGNIFICANCE STATEMENT Noninvasive recordings of changes in the brain's blood flow using functional magnetic resonance imaging and electrical activity using electroencephalography in humans have individually shown that shifting attention to a location in space produces spatially specific changes in visual cortex activity in anticipation of a stimulus. The mechanisms controlling these attention-related modulations of sensory cortex, however, are poorly understood. Here, we recorded these two complementary measures of brain activity simultaneously and examined their trial-to-trial covariations to gain insight into these attentional control mechanisms. This multi-methodological approach revealed the attention-related coordination of visual cortex modulation by the subcortical pulvinar nucleus of the thalamus while also disentangling the mechanisms underlying the attentional enhancement of relevant stimulus input and those underlying the concurrent suppression of irrelevant input. Copyright © 2017 the authors 0270-6474/17/377803-08$15.00/0.
A Space-Time-Frequency Dictionary for Sparse Cortical Source Localization.
Korats, Gundars; Le Cam, Steven; Ranta, Radu; Louis-Dorr, Valerie
2016-09-01
Cortical source imaging aims at identifying activated cortical areas on the surface of the cortex from the raw electroencephalogram (EEG) data. This problem is ill-posed, the number of channels being very low compared to the number of possible source positions. In some realistic physiological situations, the active areas are sparse in space and of short time durations, and the amount of spatio-temporal data to carry the inversion is then limited. In this study, we propose an original data-driven space-time-frequency (STF) dictionary which takes into account simultaneously both spatial and time-frequency sparseness while preserving smoothness in the time frequency (i.e., nonstationary smooth time courses in sparse locations). Based on these assumptions, we take advantage of the matching pursuit (MP) framework for selecting the most relevant atoms in this highly redundant dictionary. We apply two recent MP algorithms, single best replacement (SBR) and source-deflated matching pursuit, and we compare the results using a spatial dictionary and the proposed STF dictionary to demonstrate the improvements of our multidimensional approach. We also provide comparison using well-established inversion methods, FOCUSS and RAP-MUSIC, analyzing performances under different degrees of nonstationarity and signal to noise ratio. Our STF dictionary combined with the SBR approach provides robust performances on realistic simulations. From a computational point of view, the algorithm is embedded in the wavelet domain, ensuring high efficiency in terms of computation time. The proposed approach ensures fast and accurate sparse cortical localizations on highly nonstationary and noisy data.
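The SBR and source-deflated variants are not reproduced here; the sketch below shows plain matching pursuit over a generic unit-norm dictionary, which is the greedy atom-selection step the paper builds on, with illustrative dictionary sizes.

```python
import numpy as np

def matching_pursuit(y, D, n_atoms=5):
    """Greedy matching pursuit: y ~ D @ coef with few nonzero coefficients.

    y : (n,) measurement vector;  D : (n, K) dictionary with unit-norm columns.
    Returns the sparse coefficient vector and the final residual.
    """
    coef = np.zeros(D.shape[1])
    residual = y.astype(float).copy()
    for _ in range(n_atoms):
        correlations = D.T @ residual            # match every atom to the residual
        k = np.argmax(np.abs(correlations))      # most correlated atom
        coef[k] += correlations[k]
        residual -= correlations[k] * D[:, k]    # peel its contribution off
    return coef, residual

# toy usage: recover a 3-atom signal from a random 64 x 256 dictionary
rng = np.random.default_rng(5)
D = rng.normal(size=(64, 256))
D /= np.linalg.norm(D, axis=0)
true = np.zeros(256)
true[[10, 100, 200]] = [1.0, -0.7, 0.5]
coef, res = matching_pursuit(D @ true, D, n_atoms=3)
```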
Jin, Min Jin; Kim, Ji Sun; Kim, Sungkean; Hyun, Myoung Ho; Lee, Seung-Hwan
2018-01-01
Childhood trauma is known to be related to emotional problems, quantitative electroencephalography (EEG) indices, and heart rate variability (HRV) indices in adulthood, whereas the directions of the relationships among these factors have not been reported yet. This study aimed to evaluate pathway models in young and healthy adults: (1) one with physiological factors first and emotional problems later in adulthood as results of childhood trauma and (2) one with emotional problems first and physiological factors later. A total of 103 non-clinical volunteers were included. Self-reported psychological scales, including the Childhood Trauma Questionnaire (CTQ), State–Trait Anxiety Inventory, Beck Depression Inventory, and Affective Lability Scale, were administered. For physiological evaluation, EEG recording was performed during a resting eyes-closed condition in addition to the resting-state HRV, and the quantitative power analyses of eight EEG bands and three HRV components were calculated in the frequency domain. After a normality test, Pearson’s correlation analysis was conducted to build the path models, and path analyses were conducted to examine them. The CTQ score was significantly correlated with depression, state and trait anxiety, affective lability, and HRV low-frequency (LF) power. LF power was associated with beta2 (18–22 Hz) power, which was related to affective lability. Affective lability was associated with state anxiety, trait anxiety, and depression. Based on the correlations and the hypothesis, two models were composed: a model with pathways from CTQ score to affective lability, and a model with pathways from CTQ score to LF power. The second model showed significantly better fit than the first model (AIC model 1 = 63.403 > AIC model 2 = 46.003), which revealed that childhood trauma could affect emotion, and then physiology. The specific directions of the relationships among emotions, the EEG, and HRV in adulthood after childhood trauma are discussed. PMID:29403401
BOOK REVIEW: Inverse Problems. Activities for Undergraduates
NASA Astrophysics Data System (ADS)
Yamamoto, Masahiro
2003-06-01
This book is a valuable introduction to inverse problems. In particular, from the educational point of view, the author addresses the questions of what constitutes an inverse problem and how and why we should study them. Such an approach has been eagerly awaited for a long time. Professor Groetsch, of the University of Cincinnati, is a world-renowned specialist in inverse problems, in particular the theory of regularization. Moreover, he has made a remarkable contribution to educational activities in the field of inverse problems, which was the subject of his previous book (Groetsch C W 1993 Inverse Problems in the Mathematical Sciences (Braunschweig: Vieweg)). For this reason, he is one of the most qualified to write an introductory book on inverse problems. Without question, inverse problems are important, necessary and appear in various aspects. So it is crucial to introduce students to exercises in inverse problems. However, there are not many introductory books which are directly accessible by students in the first two undergraduate years. As a consequence, students often encounter diverse concrete inverse problems before becoming aware of their general principles. The main purpose of this book is to present activities to allow first-year undergraduates to learn inverse theory. To my knowledge, this book is a rare attempt to do this and, in my opinion, a great success. The author emphasizes that it is very important to teach inverse theory in the early years. He writes: `If students consider only the direct problem, they are not looking at the problem from all sides .... The habit of always looking at problems from the direct point of view is intellectually limiting ...' (page 21). The book is very carefully organized so that teachers will be able to use it as a textbook. After an introduction in chapter 1, successive chapters deal with inverse problems in precalculus, calculus, differential equations and linear algebra. In order to let one gain some insight into the nature of inverse problems and the appropriate mode of thought, chapter 1 offers historical vignettes, most of which have played an essential role in the development of natural science. These vignettes cover the first successful application of `non-destructive testing' by Archimedes (page 4) via Newton's laws of motion up to literary tomography, and readers will be able to enjoy a wide overview of inverse problems. Therefore, as the author asks, the reader should not skip this chapter. This may not be hard to do, since the headings of the sections are quite intriguing (`Archimedes' Bath', `Another World', `Got the Time?', `Head Games', etc). The author embarks on the technical approach to inverse problems in chapter 2. He has elegantly designed each section with a guide specifying course level, objective, mathematical and scientific background and appropriate technology (e.g. types of calculators required). The guides are designed such that teachers may be able to construct effective and attractive courses by themselves. The book is not intended to offer one rigidly determined course, but should be used flexibly and independently according to the situation. Moreover, every section closes with activities which can be chosen according to the students' interests and levels of ability. Some of these exercises do not have ready solutions, but require long-term study, so readers are not required to solve all of them.
After chapter 5, which contains discrete inverse problems such as the algebraic reconstruction technique and the Backus-Gilbert method, there are answers and commentaries to the activities. Finally, scripts in MATLAB are attached, although they can also be downloaded from the author's web page (http://math.uc.edu/~groetsch/). This book is aimed at students but it will be very valuable to researchers wishing to retain a wide overview of inverse problems in the midst of busy research activities. A Japanese version was published in 2002.
Aspects of Complexity in Sleep Analysis
NASA Astrophysics Data System (ADS)
Leitão, José M. N.; Da Rosa, Agostinho C.
The paper presents a selection of sleep analysis problems where some aspects and concepts of complexity arise. Emphasis is given to the electroencephalogram (EEG) as the most important sleep-related variable. The conception of the EEG as a message to be deciphered stresses the importance of communication and information theories in this field. An optimal detector of K complexes and vertex sharp waves based on a stochastic model of the sleep EEG is considered. Besides detecting, the algorithm is also able to follow the evolution of the basic ongoing activity. It is shown that both the macrostructure and microstructure of sleep can be described in terms of symbols and interpreted as sentences of a language. Syntactic models and Markov chain representations play an important role in this context.
Nonstationary signal analysis in episodic memory retrieval
NASA Astrophysics Data System (ADS)
Ku, Y. G.; Kawasumi, Masashi; Saito, Masao
2004-04-01
The problem of blind source separation from a nonstationary mixture arises in signal processing, speech processing, spectral analysis, and so on. This study analyzed the EEG signal during episodic memory retrieval using independent component analysis (ICA) and time-varying autoregressive (TVAR) modeling, and proposes a method that combines the two. The signal from the brain not only exhibits nonstationary behavior but also contains artifacts. Scalp EEG data at the frontal lobe (F3) were collected during the episodic memory retrieval task, and the method was applied to these data. The artifact (eye movement) is removed by ICA, and a single burst (around 6 Hz) is obtained by TVAR, suggesting that this burst is related to brain activity during episodic memory retrieval.
Wavelet analysis of epileptic spikes
NASA Astrophysics Data System (ADS)
Latka, Miroslaw; Was, Ziemowit; Kozik, Andrzej; West, Bruce J.
2003-05-01
Interictal spikes and sharp waves in human EEG are characteristic signatures of epilepsy. These potentials originate as a result of synchronous pathological discharge of many neurons. The reliable detection of such potentials has been a long-standing problem in EEG analysis, especially after long-term monitoring became common in the investigation of epileptic patients. The traditional definition of a spike is based on its amplitude, duration, sharpness, and emergence from its background. However, spike detection systems built solely around this definition are not reliable due to the presence of numerous transients and artifacts. We use the wavelet transform to analyze the properties of EEG manifestations of epilepsy. We demonstrate that the behavior of the wavelet transform of epileptic spikes across scales can constitute the foundation of a relatively simple yet effective detection algorithm.
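A minimal sketch of the underlying idea, assuming a Ricker ("Mexican hat") wavelet and a fixed cross-scale threshold chosen purely for illustration; it is not the detection algorithm of the paper.

```python
import numpy as np

def ricker(points, a):
    """Ricker (Mexican-hat) wavelet with width parameter a (unnormalised)."""
    t = np.arange(points) - (points - 1) / 2.0
    return (1 - (t / a) ** 2) * np.exp(-0.5 * (t / a) ** 2)

def wavelet_spike_score(eeg, scales=(4, 8, 16)):
    """Maximum over scales of the per-scale normalised wavelet response."""
    scores = []
    for a in scales:
        w = ricker(int(10 * a), a)
        resp = np.convolve(eeg, w, mode="same")
        scores.append(np.abs(resp) / np.abs(resp).std())
    return np.max(scores, axis=0)

# toy usage: flag samples whose cross-scale response exceeds a fixed threshold
rng = np.random.default_rng(6)
eeg = rng.normal(size=2000)
eeg[700:706] += np.hanning(6) * 8.0              # injected spike-like transient
spike_candidates = np.where(wavelet_spike_score(eeg) > 5.0)[0]
```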
Decoding Individual Finger Movements from One Hand Using Human EEG Signals
Gonzalez, Jania; Ding, Lei
2014-01-01
Brain computer interface (BCI) is an assistive technology, which decodes neurophysiological signals generated by the human brain and translates them into control signals to control external devices, e.g., wheelchairs. One problem challenging noninvasive BCI technologies is the limited number of control dimensions available from decoding movements of mainly large body parts, e.g., upper and lower limbs. It has been reported that complicated dexterous functions, i.e., finger movements, can be decoded in electrocorticography (ECoG) signals, while it remains unclear whether noninvasive electroencephalography (EEG) signals also have sufficient information to decode the same type of movements. Phenomena of broadband power increase and low-frequency-band power decrease were observed in EEG in the present study, when EEG power spectra were decomposed by a principal component analysis (PCA). These movement-related spectral structures and their changes caused by finger movements in EEG are consistent with observations in previous ECoG studies, as well as the results from ECoG data in the present study. The average decoding accuracy of 77.11% over all subjects was obtained in classifying each pair of fingers from one hand using movement-related spectral changes as features to be decoded using a support vector machine (SVM) classifier. The average decoding accuracy in three epilepsy patients using ECoG data was 91.28% with the similarly obtained features and same classifier. Both decoding accuracies of EEG and ECoG are significantly higher than the empirical guessing level (51.26%) in all subjects (p<0.05). The present study suggests similar movement-related spectral changes in EEG as in ECoG, and demonstrates the feasibility of discriminating finger movements from one hand using EEG. These findings are promising for facilitating the development of BCIs with rich control signals using noninvasive technologies. PMID:24416360
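A minimal sketch of this type of pipeline (log power spectra, PCA, and an SVM for pairwise classification) on synthetic epochs; the spectral estimator, component count and kernel are illustrative assumptions, not the authors' exact decomposition, and the toy data do not reproduce the reported accuracies.

```python
import numpy as np
from scipy.signal import welch
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def spectral_features(trials, fs=256.0):
    """Log power spectra per channel, flattened into one feature vector per trial.

    trials : (n_trials, n_channels, n_samples) EEG epochs.
    """
    feats = []
    for trial in trials:
        f, pxx = welch(trial, fs=fs, nperseg=128, axis=-1)
        feats.append(np.log(pxx + 1e-20).ravel())
    return np.array(feats)

# toy usage: pairwise "finger vs. finger" classification on synthetic epochs
rng = np.random.default_rng(7)
X = rng.normal(size=(60, 8, 512))
X[30:] *= 1.2                                     # crude class difference
y = np.repeat([0, 1], 30)
clf = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="linear"))
print(cross_val_score(clf, spectral_features(X), y, cv=5).mean())
```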
Probabilistic numerical methods for PDE-constrained Bayesian inverse problems
NASA Astrophysics Data System (ADS)
Cockayne, Jon; Oates, Chris; Sullivan, Tim; Girolami, Mark
2017-06-01
This paper develops meshless methods for probabilistically describing discretisation error in the numerical solution of partial differential equations. This construction enables the solution of Bayesian inverse problems while accounting for the impact of the discretisation of the forward problem. In particular, this drives statistical inferences to be more conservative in the presence of significant solver error. Theoretical results are presented describing rates of convergence for the posteriors in both the forward and inverse problems. This method is tested on a challenging inverse problem with a nonlinear forward model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aguilo Valentin, Miguel Alejandro
2016-07-01
This study presents a new nonlinear programming formulation for the solution of inverse problems. First, a general inverse problem formulation based on the compliance error functional is presented. The proposed error functional enables the computation of the Lagrange multipliers, and thus the first-order derivative information, at the expense of just one model evaluation. Therefore, the calculation of the Lagrange multipliers does not require the solution of the computationally intensive adjoint problem. This leads to significant speedups for large-scale, gradient-based inverse problems.
Vector tomography for reconstructing electric fields with non-zero divergence in bounded domains
NASA Astrophysics Data System (ADS)
Koulouri, Alexandra; Brookes, Mike; Rimpiläinen, Ville
2017-01-01
In vector tomography (VT), the aim is to reconstruct an unknown multi-dimensional vector field using line integral data. In the case of a 2-dimensional VT, two types of line integral data are usually required. These data correspond to integration of the parallel and perpendicular projection of the vector field along the integration lines and are called the longitudinal and transverse measurements, respectively. In most cases, however, the transverse measurements cannot be physically acquired. Therefore, the VT methods are typically used to reconstruct divergence-free (or source-free) velocity and flow fields that can be reconstructed solely from the longitudinal measurements. In this paper, we show how vector fields with non-zero divergence in a bounded domain can also be reconstructed from the longitudinal measurements without the need of explicitly evaluating the transverse measurements. To the best of our knowledge, VT has not previously been used for this purpose. In particular, we study low-frequency, time-harmonic electric fields generated by dipole sources in convex bounded domains which arise, for example, in electroencephalography (EEG) source imaging. We explain in detail the theoretical background, the derivation of the electric field inverse problem and the numerical approximation of the line integrals. We show that fields with non-zero divergence can be reconstructed from the longitudinal measurements with the help of two sparsity constraints that are constructed from the transverse measurements and the vector Laplace operator. As a comparison to EEG source imaging, we note that VT does not require mathematical modeling of the sources. By numerical simulations, we show that the pattern of the electric field can be correctly estimated using VT and the location of the source activity can be determined accurately from the reconstructed magnitudes of the field.
Physics-based Inverse Problem to Deduce Marine Atmospheric Boundary Layer Parameters
2017-03-07
Final technical report (with SF 298) for Dr. Erin E. Hackett's ONR grant "Physics-based Inverse Problem to Deduce Marine Atmospheric Boundary Layer Parameters," covering December 2012 to December 2016. The report describes research results related to the development and implementation of an inverse problem approach for deducing marine atmospheric boundary layer parameters.
On the Use of EEG or MEG Brain Imaging Tools in Neuromarketing Research
Vecchiato, Giovanni; Astolfi, Laura; De Vico Fallani, Fabrizio; Toppi, Jlenia; Aloise, Fabio; Bez, Francesco; Wei, Daming; Kong, Wanzeng; Dai, Jounging; Cincotti, Febo; Mattia, Donatella; Babiloni, Fabio
2011-01-01
Here we present an overview of some published papers of interest for marketing research employing electroencephalogram (EEG) and magnetoencephalogram (MEG) methods. The interest in these methodologies lies in their high temporal resolution, as opposed to the investigation of such problems with the functional Magnetic Resonance Imaging (fMRI) methodology, also largely used in marketing research. In addition, EEG and MEG technologies have greatly improved their spatial resolution in the last decades with the introduction of advanced signal processing methodologies. By presenting data gathered through MEG and high-resolution EEG, we show which kind of information it is possible to gather with these methodologies while participants watch marketing-relevant stimuli. Such information relates to the memorization of, and the pleasantness elicited by, such stimuli. We noted that temporal and frequency patterns of brain signals are able to provide possible descriptors conveying information about the cognitive and emotional processes of subjects observing commercial advertisements. This information could be unobtainable through the common tools used in standard marketing research. We also show an example of how an EEG methodology could be used to analyze cultural differences in the viewing of video commercials for carbonated beverages in Western and Eastern countries. PMID:21960996
Brain-Computer Interface Based on Generation of Visual Images
Bobrov, Pavel; Frolov, Alexander; Cantor, Charles; Fedulova, Irina; Bakhnyan, Mikhail; Zhavoronkov, Alexander
2011-01-01
This paper examines the task of recognizing EEG patterns that correspond to performing three mental tasks: relaxation and imagining two types of pictures: faces and houses. The experiments were performed using two EEG headsets: BrainProducts ActiCap and Emotiv EPOC. The Emotiv headset is becoming widely used in consumer BCI applications, allowing for large-scale EEG experiments in the future. Since classification accuracy significantly exceeded the level of random classification during the first three days of the experiment with the EPOC headset, a control experiment was performed on the fourth day using the ActiCap. The control experiment showed that utilization of high-quality research equipment can enhance classification accuracy (up to 68% in some subjects) and that the accuracy is independent of the presence of EEG artifacts related to blinking and eye movement. This study also shows that a computationally inexpensive Bayesian classifier based on covariance matrix analysis yields classification accuracy in this problem similar to that of a more sophisticated Multi-class Common Spatial Patterns (MCSP) classifier. PMID:21695206
Feasibility of imaging epileptic seizure onset with EIT and depth electrodes.
Witkowska-Wrobel, Anna; Aristovich, Kirill; Faulkner, Mayo; Avery, James; Holder, David
2018-06-01
Imaging ictal and interictal activity with Electrical Impedance Tomography (EIT) using intracranial electrode mats has been demonstrated in animal models of epilepsy. In human epilepsy subjects undergoing presurgical evaluation, depth electrodes are often preferred. The purpose of this work was to evaluate the feasibility of using EIT to localise epileptogenic areas with intracranial electrodes in humans. The accuracy of localisation of the ictal onset zone was evaluated in computer simulations using 9M element FEM models derived from three subjects. Perturbations of 5 mm radius imitating a single seizure onset event were placed in several locations forming two groups: under depth electrode coverage and in the contralateral hemisphere. Simulations were made for impedance changes of 1% expected for neuronal depolarisation over milliseconds and 10% for cell swelling over seconds. Reconstructions were compared with EEG source modelling for a radially orientated dipole with respect to the closest EEG recording contact. The best accuracy of EIT was obtained using all depth and 32 scalp electrodes, greater than the equivalent accuracy with EEG inverse source modelling. The localisation error was 5.2 ± 1.8, 4.3 ± 0 and 46.2 ± 25.8 mm for perturbations within the volume enclosed by depth electrodes and 29.6 ± 38.7, 26.1 ± 36.2, 54.0 ± 26.2 mm for those without (EIT 1%, 10% change, EEG source modelling, n = 15 in 3 subjects, p < 0.01). As EIT was insensitive to source dipole orientation, all 15 perturbations within the volume enclosed by depth electrodes were localised, whereas the standard clinical method of visual inspection of EEG voltages only localised 8 out of 15 cases. This suggests that adding EIT to SEEG measurements could be beneficial in localising the onset of seizures. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
Mejia Tobar, Alejandra; Hyoudou, Rikiya; Kita, Kahori; Nakamura, Tatsuhiro; Kambara, Hiroyuki; Ogata, Yousuke; Hanakawa, Takashi; Koike, Yasuharu; Yoshimura, Natsue
2017-01-01
The classification of ankle movements from non-invasive brain recordings can be applied to a brain-computer interface (BCI) to control exoskeletons, prostheses, and functional electrical stimulators for the benefit of patients with walking impairments. In this research, ankle flexion and extension tasks at two force levels in both legs were classified from cortical current sources estimated by a hierarchical variational Bayesian method, using electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) recordings. The hierarchical prior for the current source estimation from EEG was obtained from activated brain areas and their intensities from an fMRI group (second-level) analysis. The fMRI group analysis was performed on regions of interest defined over the primary motor cortex, the supplementary motor area, and the somatosensory area, which are well known to contribute to movement control. A sparse logistic regression method was applied for a nine-class classification (eight active tasks and a resting control task), obtaining a mean accuracy of 65.64% for time series of current sources, estimated from the EEG and the fMRI signals using a variational Bayesian method, and a mean accuracy of 22.19% for the classification of the pre-processed EEG sensor signals, with a chance level of 11.11%. The higher classification accuracy of current sources, when compared to EEG classification accuracy, was attributed to the high number of sources and the different signal patterns obtained in the same vertex for different motor tasks. Since the inverse filter estimation for current sources can be done offline, the present method is applicable to real-time BCIs. Finally, due to the highly enhanced spatial distribution of current sources over the brain cortex, this method has the potential to identify activation patterns to design BCIs for the control of an affected limb in patients with stroke, or BCIs from motor imagery in patients with spinal cord injury.
Reference-Free Removal of EEG-fMRI Ballistocardiogram Artifacts with Harmonic Regression
Krishnaswamy, Pavitra; Bonmassar, Giorgio; Poulsen, Catherine; Pierce, Eric T; Purdon, Patrick L.; Brown, Emery N.
2016-01-01
Combining electroencephalogram (EEG) recording and functional magnetic resonance imaging (fMRI) offers the potential for imaging brain activity with high spatial and temporal resolution. This potential remains limited by the significant ballistocardiogram (BCG) artifacts induced in the EEG by cardiac pulsation-related head movement within the magnetic field. We model the BCG artifact using a harmonic basis, pose the artifact removal problem as a local harmonic regression analysis, and develop an efficient maximum likelihood algorithm to estimate and remove BCG artifacts. Our analysis paradigm accounts for time-frequency overlap between the BCG artifacts and neurophysiologic EEG signals, and tracks the spatiotemporal variations in both the artifact and the signal. We evaluate performance on: simulated oscillatory and evoked responses constructed with realistic artifacts; actual anesthesia-induced oscillatory recordings; and actual visual evoked potential recordings. In each case, the local harmonic regression analysis effectively removes the BCG artifacts, and recovers the neurophysiologic EEG signals. We further show that our algorithm outperforms commonly used reference-based and component analysis techniques, particularly in low SNR conditions, the presence of significant time-frequency overlap between the artifact and the signal, and/or large spatiotemporal variations in the BCG. Because our algorithm does not require reference signals and has low computational complexity, it offers a practical tool for removing BCG artifacts from EEG data recorded in combination with fMRI. PMID:26151100
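A minimal sketch of the harmonic-regression idea, fitting sinusoids at an assumed cardiac fundamental and its harmonics by least squares in fixed windows and subtracting them; the authors' algorithm is a local maximum-likelihood estimator that tracks artifact variations, which this toy version does not attempt, and the frequency, window length and data are illustrative.

```python
import numpy as np

def remove_harmonics(x, fs, f0, n_harmonics=4, win_sec=4.0):
    """Fit and subtract harmonics of f0 (e.g. the cardiac rate) window by window.

    x : single-channel EEG contaminated by a quasi-periodic artifact;
    fs : sampling rate (Hz); f0 : fundamental frequency of the artifact (Hz).
    """
    x = np.asarray(x, dtype=float)
    win = int(win_sec * fs)
    cleaned = x.copy()
    for start in range(0, len(x) - win + 1, win):
        seg = x[start:start + win]
        t = np.arange(win) / fs
        # design matrix: constant plus sine/cosine pairs at each harmonic
        cols = [np.ones(win)]
        for k in range(1, n_harmonics + 1):
            cols += [np.sin(2 * np.pi * k * f0 * t), np.cos(2 * np.pi * k * f0 * t)]
        H = np.column_stack(cols)
        beta, *_ = np.linalg.lstsq(H, seg, rcond=None)
        cleaned[start:start + win] = seg - H @ beta + beta[0]   # keep the DC level
    return cleaned

# toy usage: 1.1 Hz "cardiac" artifact plus broadband EEG-like noise
rng = np.random.default_rng(8)
fs = 250.0
t = np.arange(0, 20, 1 / fs)
bcg = 5 * np.sin(2 * np.pi * 1.1 * t) + 2 * np.sin(2 * np.pi * 2.2 * t)
eeg = rng.normal(size=t.size) + bcg
cleaned = remove_harmonics(eeg, fs, f0=1.1)
```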
Kumar, Surendra; Ghosh, Subhojit; Tetarway, Suhash; Sinha, Rakesh Kumar
2015-07-01
In this study, the magnitude and spatial distribution of the frequency spectrum of the resting electroencephalogram (EEG) were examined to address the problem of detecting alcoholism from the cerebral motor cortex. EEG signals were recorded from subjects with chronic alcoholism (n = 20) and from a control group (n = 20). Data were taken from the motor cortex region and divided into five sub-bands (delta, theta, alpha, beta-1 and beta-2). Three methodologies were adopted for feature extraction: (1) absolute power, (2) relative power and (3) peak power frequency. The dimension of the extracted features was reduced by linear discriminant analysis, and classification was performed with a support vector machine (SVM) and fuzzy C-means clustering. The maximum classification accuracy (88%) was achieved with the SVM classifier using absolute power features from the F4 channel. Among the bands, relatively higher classification accuracy was found for the theta and beta-2 bands in most of the channels when computed with relative power features. Electrode-wise, Cz, C3 and P4 showed the greatest alteration. Considering the good classification accuracy obtained by the SVM with relative band power features in most EEG channels over the motor cortex, it is suggested that a noninvasive automated online diagnostic system for chronic alcoholism could be developed with the help of EEG signals.
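A hedged sketch of the reported processing chain follows: band-power features, linear discriminant analysis for dimensionality reduction, and an SVM classifier, written with scikit-learn. The data, channel count, and band edges are synthetic placeholders rather than the study's recordings.

```python
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta1": (13, 20), "beta2": (20, 30)}

def band_powers(epoch, fs):
    """Absolute power per band and channel (epoch: channels x samples)."""
    f, pxx = welch(epoch, fs=fs, nperseg=fs * 2, axis=-1)
    feats = []
    for lo, hi in BANDS.values():
        idx = (f >= lo) & (f < hi)
        feats.append(pxx[:, idx].sum(axis=-1))
    return np.concatenate(feats)

rng = np.random.default_rng(0)
fs, n_ch, n_epochs = 256, 6, 40          # e.g. C3, C4, Cz, P3, P4, F4 (placeholder montage)
X = np.array([band_powers(rng.standard_normal((n_ch, fs * 4)), fs)
              for _ in range(n_epochs)])
y = rng.integers(0, 2, n_epochs)          # 0 = control, 1 = alcoholic (dummy labels)

# LDA reduces the band-power features to one discriminant axis; SVM classifies.
clf = make_pipeline(LinearDiscriminantAnalysis(n_components=1), SVC(kernel="rbf"))
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```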
EEG correlates of task engagement and mental workload in vigilance, learning, and memory tasks.
Berka, Chris; Levendowski, Daniel J; Lumicao, Michelle N; Yau, Alan; Davis, Gene; Zivkovic, Vladimir T; Olmstead, Richard E; Tremoulet, Patrice D; Craven, Patrick L
2007-05-01
The ability to continuously and unobtrusively monitor levels of task engagement and mental workload in an operational environment could be useful in identifying more accurate and efficient methods for humans to interact with technology. This information could also be used to optimize the design of safer, more efficient work environments that increase motivation and productivity. The present study explored the feasibility of monitoring electroencephalographic (EEG) indices of engagement and workload acquired unobtrusively and quantified during performance of cognitive tests. EEG was acquired from 80 healthy participants with a wireless sensor headset (F3-F4, C3-C4, Cz-POz, F3-Cz, Fz-C3, Fz-POz) during tasks including: multi-level forward/backward-digit-span, grid-recall, trails, mental-addition, 20-min 3-Choice Vigilance, and image-learning and memory tests. EEG metrics for engagement and workload were calculated for each 1 s of EEG. Across participants, engagement but not workload decreased over the 20-min vigilance test. Engagement and workload were significantly increased during the encoding period of verbal and image-learning and memory tests when compared with the recognition/recall period. Workload but not engagement increased linearly as level of difficulty increased in forward and backward-digit-span, grid-recall, and mental-addition tests. EEG measures correlated with both subjective and objective performance metrics. These data in combination with previous studies suggest that EEG engagement reflects information-gathering, visual processing, and allocation of attention. EEG workload increases with increasing working memory load and during problem solving, integration of information, and analytical reasoning, and may be more reflective of executive functions. Inspection of EEG on a second-by-second timescale revealed associations between workload and engagement levels when aligned with specific task events, providing preliminary evidence that second-by-second classifications reflect parameters of task performance.
A multimodal approach to estimating vigilance using EEG and forehead EOG.
Zheng, Wei-Long; Lu, Bao-Liang
2017-04-01
Covert aspects of ongoing user mental states provide key context information for user-aware human computer interactions. In this paper, we focus on the problem of estimating the vigilance of users using EEG and EOG signals. The PERCLOS index as vigilance annotation is obtained from eye tracking glasses. To improve the feasibility and wearability of vigilance estimation devices for real-world applications, we adopt a novel electrode placement for forehead EOG and extract various eye movement features, which contain the principal information of traditional EOG. We explore the effects of EEG from different brain areas and combine EEG and forehead EOG to leverage their complementary characteristics for vigilance estimation. Considering that the vigilance of users is a dynamic changing process because the intrinsic mental states of users involve temporal evolution, we introduce continuous conditional neural field and continuous conditional random field models to capture dynamic temporal dependency. We propose a multimodal approach to estimating vigilance by combining EEG and forehead EOG and incorporating the temporal dependency of vigilance into model training. The experimental results demonstrate that modality fusion can improve the performance compared with a single modality, EOG and EEG contain complementary information for vigilance estimation, and the temporal dependency-based models can enhance the performance of vigilance estimation. From the experimental results, we observe that theta and alpha frequency activities are increased, while gamma frequency activities are decreased in drowsy states in contrast to awake states. The forehead setup allows for the simultaneous collection of EEG and EOG and achieves comparative performance using only four shared electrodes in comparison with the temporal and posterior sites.
Sauleau, Paul; Despatin, Jane; Cheng, Xufei; Lemesle, Martine; Touzery-de Villepin, Anne; N'Guyen The Tich, Sylvie; Kubis, Nathalie
2016-04-01
This study assessed current practice and the need for tele-transmission and remote interpretation of EEG in France. Transmission of EEG to a distant center could be a promising solution to the problem of the decreasing availability of neurophysiologists for EEG interpretation, in order to provide equity within health care services in France. This practice should logically follow the legal framework of telemedicine and the recommendations recently issued by the Société de neurophysiologie clinique de langue française (SNCLF) and the Ligue française contre l'épilepsie (LFCE). A national survey was designed and performed under the auspices of the SNCLF. The survey reveals an important gap between the official recommendations and the "reality on the ground". The local organizations identified were mainly established on the initiative of individuals, rarely driven by health regulatory authorities, and sometimes operating outside legal frameworks. For the majority, they result from a need to improve medical care, especially in pediatrics and neonatology, and to ensure continuity of care. Where present, tele-transmission of EEG is often only partially satisfactory, since many technical procedures still need to be improved. Conversely, the absence of EEG tele-transmission penalizes medical care for some patients. The survey shows both the wealth of local initiatives and the fragility of most existing networks, emphasizing the need for better cooperation between regulatory authorities and health care professionals to establish or improve the transmission of EEG in France. Copyright © 2016 Elsevier Masson SAS. All rights reserved.
On the interpretation of synchronization in EEG hyperscanning studies: a cautionary note.
Burgess, Adrian P
2013-01-01
EEG Hyperscanning is a method for studying two or more individuals simultaneously with the objective of elucidating how co-variations in their neural activity (i.e., hyperconnectivity) are influenced by their behavioral and social interactions. The aim of this study was to compare the performance of different hyper-connectivity measures using (i) simulated data, where the degree of coupling could be systematically manipulated, and (ii) individually recorded human EEG combined into pseudo-pairs of participants where no hyper-connections could exist. With simulated data we found that each of the most widely used measures of hyperconnectivity was biased and detected hyper-connections where none existed. With pseudo-pairs of human data we found spurious hyper-connections that arose because there were genuine similarities between the EEG recorded from different people independently but under the same experimental conditions. Specifically, there were systematic differences between experimental conditions in terms of the rhythmicity of the EEG that were common across participants. As any imbalance between experimental conditions in terms of stimulus presentation or movement may affect the rhythmicity of the EEG, this problem could apply in many hyperscanning contexts. Furthermore, as these spurious hyper-connections reflected real similarities between the EEGs, they were not Type-1 errors that could be overcome by some appropriate statistical control. However, some measures that have not previously been used in hyperconnectivity studies, notably the circular correlation coefficient (CCorr), were less susceptible to detecting spurious hyper-connections of this type. The reason for this advantage in performance is discussed and the use of the CCorr as an alternative measure of hyperconnectivity is advocated.
Wang, Yijun; Wang, Yu-Te; Jung, Tzyy-Ping
2012-01-01
Electroencephalogram (EEG)-based brain-computer interfaces (BCIs) often use spatial filters to improve signal-to-noise ratio of task-related EEG activities. To obtain robust spatial filters, large amounts of labeled data, which are often expensive and labor-intensive to obtain, need to be collected in a training procedure before online BCI control. Several studies have recently developed zero-training methods using a session-to-session scenario in order to alleviate this problem. To our knowledge, a state-to-state translation, which applies spatial filters derived from one state to another, has never been reported. This study proposes a state-to-state, zero-training method to construct spatial filters for extracting EEG changes induced by motor imagery. Independent component analysis (ICA) was separately applied to the multi-channel EEG in the resting and the motor imagery states to obtain motor-related spatial filters. The resultant spatial filters were then applied to single-trial EEG to differentiate left- and right-hand imagery movements. On a motor imagery dataset collected from nine subjects, comparable classification accuracies were obtained by using ICA-based spatial filters derived from the two states (motor imagery: 87.0%, resting: 85.9%), which were both significantly higher than the accuracy achieved by using monopolar scalp EEG data (80.4%). The proposed method considerably increases the practicality of BCI systems in real-world environments because it is less sensitive to electrode misalignment across different sessions or days and does not require annotated pilot data to derive spatial filters. PMID:22666377
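The following simplified sketch conveys the state-to-state idea: ICA unmixing weights are learned from resting-state EEG only and then applied as spatial filters to single motor-imagery trials, whose mu-band log powers feed a linear classifier. The data here are random placeholders, and the paper's component selection and montage details are omitted.

```python
import numpy as np
from scipy.signal import welch
from sklearn.decomposition import FastICA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
fs, n_ch = 250, 16
rest = rng.standard_normal((n_ch, fs * 60))        # resting-state recording (placeholder)
trials = rng.standard_normal((60, n_ch, fs * 3))   # single 3-s imagery trials (placeholder)
labels = rng.integers(0, 2, 60)                    # left vs. right hand (dummy labels)

# Learn the unmixing (spatial filter) matrix from the resting state only.
ica = FastICA(n_components=n_ch, random_state=0)
ica.fit(rest.T)                                    # expects samples x channels
W = ica.components_                                # components x channels

def mu_log_power(trial):
    sources = W @ trial                            # apply resting-state spatial filters
    f, pxx = welch(sources, fs=fs, nperseg=fs, axis=-1)
    mu = (f >= 8) & (f <= 13)
    return np.log(pxx[:, mu].mean(axis=-1))

X = np.array([mu_log_power(tr) for tr in trials])
print("CV accuracy:", cross_val_score(LinearDiscriminantAnalysis(), X, labels, cv=5).mean())
```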
Poon, W B; Tagamolila, V; Toh, Y P; Cheng, Z R
2015-03-01
Various meta-analyses have shown that e-learning is as effective as traditional methods of continuing professional education. However, there are some disadvantages to e-learning, such as possible technical problems, the need for greater self-discipline, cost involved in developing programmes and limited direct interaction. Currently, most strategies for teaching amplitude-integrated electroencephalography (aEEG) in neonatal intensive care units (NICUs) worldwide depend on traditional teaching methods. We implemented a programme that utilised an integrated approach to e-learning. The programme consisted of three sessions of supervised protected time e-learning in an NICU. The objective and subjective effectiveness of the approach was assessed through surveys administered to participants before and after the programme. A total of 37 NICU staff (32 nurses and 5 doctors) participated in the study. 93.1% of the participants appreciated the need to acquire knowledge of aEEG. We also saw a statistically significant improvement in the subjective knowledge score (p = 0.041) of the participants. The passing rates for identifying abnormal aEEG tracings (defined as ≥ 3 correct answers out of 5) also showed a statistically significant improvement (from 13.6% to 81.8%, p < 0.001). Among the participants who completed the survey, 96.0% felt the teaching was well structured, 77.8% felt the duration was optimal, 80.0% felt that they had learnt how to systematically interpret aEEGs, and 70.4% felt that they could interpret normal aEEG with confidence. An integrated approach to e-learning can help improve subjective and objective knowledge of aEEG.
A Forward Glimpse into Inverse Problems through a Geology Example
ERIC Educational Resources Information Center
Winkel, Brian J.
2012-01-01
This paper describes a forward approach to an inverse problem related to detecting the nature of geological substrata which makes use of optimization techniques in a multivariable calculus setting. The true nature of the related inverse problem is highlighted. (Contains 2 figures.)
Spontaneous brain activity as a source of ideal 1/f noise
NASA Astrophysics Data System (ADS)
Allegrini, Paolo; Menicucci, Danilo; Bedini, Remo; Fronzoni, Leone; Gemignani, Angelo; Grigolini, Paolo; West, Bruce J.; Paradisi, Paolo
2009-12-01
We study the electroencephalogram (EEG) of 30 closed-eye awake subjects with a technique of analysis recently proposed to detect punctual events signaling rapid transitions between different metastable states. After single-EEG-channel event detection, we study global properties of events simultaneously occurring among two or more electrodes termed coincidences. We convert the coincidences into a diffusion process with three distinct rules that can yield the same μ only in the case where the coincidences are driven by a renewal process. We establish that the time interval between two consecutive renewal events driving the coincidences has a waiting-time distribution with inverse power-law index μ≈2 corresponding to ideal 1/f noise. We argue that this discovery, shared by all subjects of our study, supports the conviction that 1/f noise is an optimal communication channel for complex networks as in art or language and may therefore be the channel through which the brain influences complex processes and is influenced by them.
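As a small worked example of the tail-index estimate underlying this analysis, the sketch below draws synthetic waiting times with density proportional to t^(-mu) above a threshold t_min and recovers mu with the maximum-likelihood (Hill-type) estimator mu_hat = 1 + n / sum(ln(t_i / t_min)). The event-detection and coincidence steps of the paper are not reproduced.

```python
import numpy as np

def powerlaw_index(intervals, t_min):
    """MLE of mu for inter-event intervals with density ~ t^(-mu) above t_min."""
    tail = intervals[intervals >= t_min]
    return 1.0 + len(tail) / np.sum(np.log(tail / t_min))

rng = np.random.default_rng(2)
mu_true, t_min = 2.0, 0.05
# Inverse-CDF sampling of Pareto waiting times with exponent mu_true.
u = rng.random(20000)
waits = t_min * u ** (-1.0 / (mu_true - 1.0))
print("estimated mu:", powerlaw_index(waits, t_min))
```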
Babiloni, F; Cherubino, P; Graziani, I; Trettel, A; Infarinato, F; Picconi, D; Borghini, G; Maglione, A G; Mattia, D; Vecchiato, G
2013-01-01
Neuroaesthetics is a scientific discipline, founded more than a decade ago, that studies the neural bases of beauty perception in art. The aim of this paper is to investigate the neuroelectrical correlates of the brain activity elicited by the observation of real paintings shown in a national fine arts gallery (Scuderie del Quirinale) in Rome, Italy. The present study was designed to examine how motivational factors, as indexed by EEG asymmetry over the prefrontal cortex (relative activity of the left and right hemispheres), could be related to the experience of viewing a series of figurative paintings. The fine arts gallery was visited by a group of 25 subjects during an exhibition of XVII century Dutch painters. Results suggested a close correlation of the estimated EEG asymmetry with the verbal pleasantness scores reported by the subjects (p < 0.05) and an inverse correlation of the perceived pleasantness with the surface dimensions of the observed paintings (p < 0.002).
Ambulatory REACT: real-time seizure detection with a DSP microprocessor.
McEvoy, Robert P; Faul, Stephen; Marnane, William P
2010-01-01
REACT (Real-Time EEG Analysis for event deteCTion) is a Support Vector Machine based technology which, in recent years, has been successfully applied to the problem of automated seizure detection in both adults and neonates. This paper describes the implementation of REACT on a commercial DSP microprocessor; the Analog Devices Blackfin®. The primary aim of this work is to develop a prototype system for use in ambulatory or in-ward automated EEG analysis. Furthermore, the complexity of the various stages of the REACT algorithm on the Blackfin processor is analysed; in particular the EEG feature extraction stages. This hardware profile is used to select a reduced, platform-aware feature set, in order to evaluate the seizure classification accuracy of a lower-complexity, lower-power REACT system.
A novel unsupervised spike sorting algorithm for intracranial EEG.
Yadav, R; Shah, A K; Loeb, J A; Swamy, M N S; Agarwal, R
2011-01-01
This paper presents a novel, unsupervised spike classification algorithm for intracranial EEG. The method combines template matching and principal component analysis (PCA) for building a dynamic patient-specific codebook without a priori knowledge of the spike waveforms. The problem of misclassification due to overlapping classes is resolved by identifying similar classes in the codebook using hierarchical clustering. Cluster quality is visually assessed by projecting inter- and intra-clusters onto a 3D plot. Intracranial EEG from 5 patients was utilized to optimize the algorithm. The resulting codebook retains 82.1% of the detected spikes in non-overlapping and disjoint clusters. Initial results suggest a definite role of this method for both rapid review and quantitation of interictal spikes that could enhance both clinical treatment and research studies on epileptic patients.
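A minimal sketch of the clustering stage is given below: detected spike waveforms (synthetic here) are projected with PCA and grouped by hierarchical clustering, which is the mechanism used to merge similar codebook classes. The template-matching codebook construction and visual cluster assessment described in the paper are not included.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
n_samples = 64
# Three Gaussian-shaped "spike" templates with different peak latencies.
templates = [np.exp(-0.5 * ((np.arange(n_samples) - c) / 4.0) ** 2)
             for c in (20, 32, 44)]
spikes = np.vstack([tpl + 0.15 * rng.standard_normal((50, n_samples))
                    for tpl in templates])          # 150 detected spike waveforms

scores = PCA(n_components=3).fit_transform(spikes)  # low-dimensional spike features
Z = linkage(scores, method="ward")                  # hierarchical cluster tree
classes = fcluster(Z, t=3, criterion="maxclust")    # cut the tree into 3 classes
print("spikes per class:", np.bincount(classes)[1:])
```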
The FieldTrip-SimBio pipeline for EEG forward solutions.
Vorwerk, Johannes; Oostenveld, Robert; Piastra, Maria Carla; Magyari, Lilla; Wolters, Carsten H
2018-03-27
Accurately solving the electroencephalography (EEG) forward problem is crucial for precise EEG source analysis. Previous studies have shown that the use of multicompartment head models in combination with the finite element method (FEM) can yield high accuracies both numerically and with regard to the geometrical approximation of the human head. However, the workload for the generation of multicompartment head models has often been too high and the use of publicly available FEM implementations too complicated for a wider application of FEM in research studies. In this paper, we present a MATLAB-based pipeline that aims to resolve this lack of easy-to-use integrated software solutions. The presented pipeline allows for the easy application of five-compartment head models with the FEM within the FieldTrip toolbox for EEG source analysis. The FEM from the SimBio toolbox, more specifically the St. Venant approach, was integrated into the FieldTrip toolbox. We give a short sketch of the implementation and its application, and we perform a source localization of somatosensory evoked potentials (SEPs) using this pipeline. We then evaluate the accuracy that can be achieved using the automatically generated five-compartment hexahedral head model [skin, skull, cerebrospinal fluid (CSF), gray matter, white matter] in comparison to a highly accurate tetrahedral head model that was generated on the basis of a semiautomatic segmentation with very careful and time-consuming manual corrections. The source analysis of the SEP data correctly localizes the P20 component and achieves a high goodness of fit. The subsequent comparison to the highly detailed tetrahedral head model shows that the automatically generated five-compartment head model performs about as well as a highly detailed four-compartment head model (skin, skull, CSF, brain). This is a significant improvement in comparison to a three-compartment head model, which is frequently used in practice, since the importance of modeling the CSF compartment has been shown in a variety of studies. The presented pipeline facilitates the use of five-compartment head models with the FEM for EEG source analysis. The accuracy with which the EEG forward problem can thereby be solved is increased compared to the commonly used three-compartment head models, and more reliable EEG source reconstruction results can be obtained.
Bayesian approach to inverse statistical mechanics.
Habeck, Michael
2014-05-01
Inverse statistical mechanics aims to determine particle interactions from ensemble properties. This article looks at this inverse problem from a Bayesian perspective and discusses several statistical estimators to solve it. In addition, a sequential Monte Carlo algorithm is proposed that draws the interaction parameters from their posterior probability distribution. The posterior probability involves an intractable partition function that is estimated along with the interactions. The method is illustrated for inverse problems of varying complexity, including the estimation of a temperature, the inverse Ising problem, maximum entropy fitting, and the reconstruction of molecular interaction potentials.
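As a toy instance of the simplest problem mentioned above (estimating a temperature), the sketch below evaluates the posterior over the inverse temperature of a small Ising chain on a grid, using the exact partition function. It is a hedged illustration under made-up data, not the article's sequential Monte Carlo scheme.

```python
import itertools
import numpy as np

n_spins, J = 8, 1.0
states = np.array(list(itertools.product([-1, 1], repeat=n_spins)))
energies = -J * np.sum(states[:, :-1] * states[:, 1:], axis=1)   # open Ising chain

rng = np.random.default_rng(4)
beta_true = 0.7
p = np.exp(-beta_true * energies)
p /= p.sum()
observed_E = energies[rng.choice(len(energies), size=200, p=p)]  # "observed" configurations

# Posterior over beta on a grid (flat prior), using the exact partition function:
# log p(data | beta) = -beta * sum(E_i) - n * log Z(beta).
betas = np.linspace(0.05, 2.0, 400)
logZ = np.array([np.logaddexp.reduce(-b * energies) for b in betas])
loglik = -betas * observed_E.sum() - len(observed_E) * logZ
print("true beta: %.2f, MAP estimate: %.2f" % (beta_true, betas[np.argmax(loglik)]))
```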
Inverse Modelling Problems in Linear Algebra Undergraduate Courses
ERIC Educational Resources Information Center
Martinez-Luaces, Victor E.
2013-01-01
This paper will offer an analysis from a theoretical point of view of mathematical modelling, applications and inverse problems of both causation and specification types. Inverse modelling problems give the opportunity to establish connections between theory and practice and to show this fact, a simple linear algebra example in two different…
An inverse problem in thermal imaging
NASA Technical Reports Server (NTRS)
Bryan, Kurt; Caudill, Lester F., Jr.
1994-01-01
This paper examines uniqueness and stability results for an inverse problem in thermal imaging. The goal is to identify an unknown boundary of an object by applying a heat flux and measuring the induced temperature on the boundary of the sample. The problem is studied both in the case in which one has data at every point on the boundary of the region and the case in which only finitely many measurements are available. An inversion procedure is developed and used to study the stability of the inverse problem for various experimental configurations.
Inverse problems in quantum chemistry
NASA Astrophysics Data System (ADS)
Karwowski, Jacek
Inverse problems constitute a branch of applied mathematics with well-developed methodology and formalism. A broad family of tasks met in theoretical physics, in civil and mechanical engineering, as well as in various branches of medical and biological sciences has been formulated as specific implementations of the general theory of inverse problems. In this article, it is pointed out that a number of approaches met in quantum chemistry can (and should) be classified as inverse problems. Consequently, the methodology used in these approaches may be enriched by applying ideas and theorems developed within the general field of inverse problems. Several examples, including the RKR method for the construction of potential energy curves, determining parameter values in semiempirical methods, and finding external potentials for which the pertinent Schrödinger equation is exactly solvable, are discussed in detail.
Application of a stochastic inverse to the geophysical inverse problem
NASA Technical Reports Server (NTRS)
Jordan, T. H.; Minster, J. B.
1972-01-01
The inverse problem for gross earth data can be reduced to an underdetermined linear system of integral equations of the first kind. A theory is discussed for computing particular solutions to this linear system based on the stochastic inverse theory presented by Franklin. The stochastic inverse is derived and related to the generalized inverse of Penrose and Moore. A Backus-Gilbert type tradeoff curve is constructed for the problem of estimating the solution to the linear system in the presence of noise. It is shown that the stochastic inverse represents an optimal point on this tradeoff curve. A useful form of the solution autocorrelation operator as a member of a one-parameter family of smoothing operators is derived.
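A compact sketch of the stochastic-inverse estimate for an underdetermined linear system d = G m + n is given below: with prior model covariance C_m and noise covariance C_n, the estimate is m_hat = C_m G^T (G C_m G^T + C_n)^(-1) d, which tends to the Penrose-Moore generalized inverse solution as the noise term vanishes. The matrices are random placeholders rather than gross-earth kernels, and the tradeoff-curve analysis is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(5)
n_data, n_model = 10, 50                       # fewer data than unknowns
G = rng.standard_normal((n_data, n_model))
m_true = rng.standard_normal(n_model)
d = G @ m_true + 0.01 * rng.standard_normal(n_data)

C_m = np.eye(n_model)                          # prior model covariance
C_n = 0.01 ** 2 * np.eye(n_data)               # data noise covariance
m_hat = C_m @ G.T @ np.linalg.solve(G @ C_m @ G.T + C_n, d)   # stochastic inverse

print("data misfit:", np.linalg.norm(G @ m_hat - d))
print("model norm :", np.linalg.norm(m_hat))
```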
Analysis of space telescope data collection system
NASA Technical Reports Server (NTRS)
Ingels, F. M.; Schoggen, W. O.
1982-01-01
An analysis of the expected performance for the Multiple Access (MA) system is provided. The analysis covers the expected bit error rate performance, the effects of synchronization loss, the problem of self-interference, and the problem of phase ambiguity. The problem of false acceptance of a command word due to data inversion is discussed. A mathematical determination of the probability of accepting an erroneous command word due to a data inversion is presented. The problem is examined for three cases: (1) a data inversion only, (2) a data inversion and a random error within the same command word, and (3) a block (up to 256 48-bit words) containing both a data inversion and a random error.
Prestimulus delta and theta contributions to equiprobable Go/NoGo processing in healthy ageing.
De Blasio, Frances M; Barry, Robert J
2018-05-15
Ongoing EEG activity contributes to ERP outcomes of stimulus processing, and each of these measures is known to undergo (sometimes significant) age-related change. Variation in their relationship across the life-span may thus elucidate mechanisms of normal and pathological ageing. This study assessed the relationships between low-frequency EEG prestimulus brain states, the ERP, and behavioural outcomes in a simple equiprobable auditory Go/NoGo paradigm, comparing these for 20 young (M age = 20.4 years) and 20 healthy older (M age = 68.2 years) adults. Prestimulus delta and theta amplitudes were separately assessed; these were each dominant across the midline region, and reduced in the older adults. For each band, (within-subjects) trials were sorted into ten increasing prestimulus EEG levels for which separate ERPs were derived. The set of ten ERPs for each band-sort was then quantified by PCA, independently for each group (young, older adults). Four components were primarily assessed (P1, N1-1, P2/N2b complex, and P3), with each showing age-related change. Mean RT was comparable, but intra-individual RT variability increased in older adults. Prestimulus delta and theta each generally modulated component positivity, indicating broad influence on task processing. Prestimulus delta was primarily associated with the early sensory processes, and theta more with the later stimulus-specific processes; prestimulus theta also inversely modulated intra-individual RT variability across the groups. These prestimulus EEG-ERP dynamics were consistent between the young and older adults in each band for all components except the P2/N2b, suggesting that across the lifespan, Go/NoGo categorisation is differentially affected by prestimulus delta and theta. Copyright © 2017. Published by Elsevier B.V.
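The trial-sorting step described above can be illustrated with the short sketch below: single trials (synthetic here) are ranked by their prestimulus theta amplitude, split into ten levels, and an ERP is averaged within each level. The band edges, epoch lengths, and filter are assumptions, and the PCA-based component quantification is omitted.

```python
import numpy as np
from scipy.signal import butter, filtfilt

rng = np.random.default_rng(6)
fs, n_trials = 250, 300
pre, post = fs, fs                                   # 1 s prestimulus, 1 s poststimulus
epochs = rng.standard_normal((n_trials, pre + post)) # trials x samples (placeholder data)

# Prestimulus theta (4-7.5 Hz) amplitude per trial.
b, a = butter(4, [4, 7.5], btype="band", fs=fs)
theta = filtfilt(b, a, epochs[:, :pre], axis=-1)
amp = np.abs(theta).mean(axis=-1)

order = np.argsort(amp)
levels = np.array_split(order, 10)                   # ten increasing prestimulus EEG levels
erps = np.vstack([epochs[idx, pre:].mean(axis=0) for idx in levels])
print("ERP matrix (levels x poststimulus samples):", erps.shape)
```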
Ethanol modulates cortical activity: direct evidence with combined TMS and EEG.
Kähkönen, S; Kesäniemi, M; Nikouline, V V; Karhu, J; Ollikainen, M; Holi, M; Ilmoniemi, R J
2001-08-01
The motor cortex of 10 healthy subjects was stimulated by transcranial magnetic stimulation (TMS) before and after ethanol challenge (0.8 g/kg resulting in blood concentration of 0.77 +/- 0.14 ml/liter). The electrical brain activity resulting from the brief electromagnetic pulse was recorded with high-resolution electroencephalography (EEG) and located using inversion algorithms. Focal magnetic pulses to the left motor cortex were delivered with a figure-of-eight coil at the random interstimulus interval of 1.5-2.5 s. The stimulation intensity was adjusted to the motor threshold of abductor digiti minimi. Two conditions before and after ethanol ingestion (30 min) were applied: (1) real TMS, with the coil pressed against the scalp; and (2) control condition, with the coil separated from the scalp by a 2-cm-thick piece of plastic. A separate EMG control recording of one subject during TMS was made with two bipolar platinum needle electrodes inserted to the left temporal muscle. In each condition, 120 pulses were delivered. The EEG was recorded from 60 scalp electrodes. A peak in the EEG signals was observed at 43 ms after the TMS pulse in the real-TMS condition but not in the control condition or in the control scalp EMG. Potential maps before and after ethanol ingestion were significantly different from each other (P = 0.01), but no differences were found in the control condition. Ethanol changed the TMS-evoked potentials over right frontal and left parietal areas, the underlying effect appearing to be largest in the right prefrontal area. Our findings suggest that ethanol may have changed the functional connectivity between prefrontal and motor cortices. This new noninvasive method provides direct evidence about the modulation of cortical connectivity after ethanol challenge. Copyright 2001 Academic Press.
NASA Astrophysics Data System (ADS)
Cheng, Jin; Hon, Yiu-Chung; Seo, Jin Keun; Yamamoto, Masahiro
2005-01-01
The Second International Conference on Inverse Problems: Recent Theoretical Developments and Numerical Approaches was held at Fudan University, Shanghai from 16-21 June 2004. The first conference in this series was held at the City University of Hong Kong in January 2002 and it was agreed to hold the conference once every two years in a Pan-Pacific Asian country. The next conference is scheduled to be held at Hokkaido University, Sapporo, Japan in July 2006. The purpose of this series of biennial conferences is to establish and develop constant international collaboration, especially among the Pan-Pacific Asian countries. In recent decades, interest in inverse problems has been flourishing all over the globe because of both the theoretical interest and practical requirements. In particular, in Asian countries, one is witnessing remarkable new trends of research in inverse problems as well as the participation of many young talents. Considering these trends, the second conference was organized with the chairperson Professor Li Tat-tsien (Fudan University), in order to provide forums for developing research cooperation and to promote activities in the field of inverse problems. Because solutions to inverse problems are needed in various applied fields, we entertained a total of 92 participants at the second conference and arranged various talks which ranged from mathematical analyses to solutions of concrete inverse problems in the real world. This volume contains 18 selected papers, all of which have undergone peer review. The 18 papers are classified as follows: Surveys: four papers give reviews of specific inverse problems. Theoretical aspects: six papers investigate the uniqueness, stability, and reconstruction schemes. Numerical methods: four papers devise new numerical methods and their applications to inverse problems. Solutions to applied inverse problems: four papers discuss concrete inverse problems such as scattering problems and inverse problems in atmospheric sciences and oceanography. Last but not least is our gratitude. As editors we would like to express our sincere thanks to all the plenary and invited speakers, the members of the International Scientific Committee and the Advisory Board for the success of the conference, which has given rise to this present volume of selected papers. We would also like to thank Mr Wang Yanbo, Miss Wan Xiqiong and the graduate students at Fudan University for their effective work to make this conference a success. The conference was financially supported by the NFS of China, the Mathematical Center of Ministry of Education of China, E-Institutes of Shanghai Municipal Education Commission (No E03004) and Fudan University, Grant 15340027 from the Japan Society for the Promotion of Science, and Grant 15654015 from the Ministry of Education, Cultures, Sports and Technology.
Riemann–Hilbert problem approach for two-dimensional flow inverse scattering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agaltsov, A. D., E-mail: agalets@gmail.com; Novikov, R. G., E-mail: novikov@cmap.polytechnique.fr
2014-10-15
We consider inverse scattering for the time-harmonic wave equation with first-order perturbation in two dimensions. This problem arises in particular in the acoustic tomography of moving fluid. We consider linearized and nonlinearized reconstruction algorithms for this problem of inverse scattering. Our nonlinearized reconstruction algorithm is based on the non-local Riemann–Hilbert problem approach. Comparisons with preceding results are given.
Mutual information measures applied to EEG signals for sleepiness characterization.
Melia, Umberto; Guaita, Marc; Vallverdú, Montserrat; Embid, Cristina; Vilaseca, Isabel; Salamero, Manel; Santamaria, Joan
2015-03-01
Excessive daytime sleepiness (EDS) is one of the main symptoms of several sleep-related disorders and has a great impact on patients' lives. While many studies have been carried out to assess daytime sleepiness, automatic EDS detection still remains an open problem. In this work, a novel approach to this issue based on non-linear dynamical analysis of the EEG signal was proposed. Multichannel EEG signals were recorded during five maintenance of wakefulness tests (MWT) and multiple sleep latency tests (MSLT) alternated throughout the day in patients suffering from sleep-disordered breathing. A group of 20 patients with excessive daytime sleepiness (EDS) was compared with a group of 20 patients without daytime sleepiness (WDS) by analyzing 60-s EEG windows in the waking state. Measures obtained from the cross-mutual information function (CMIF) and the auto-mutual information function (AMIF) were calculated on the EEG. These functions permitted a quantification of the complexity properties of the EEG signal and of the non-linear couplings between different zones of the scalp. Statistical differences between the EDS and WDS groups were found in the β band during MSLT events (p-value < 0.0001). The WDS group presented more complexity than the EDS group in the occipital zone, while a stronger non-linear coupling between occipital and frontal zones was detected in EDS patients than in WDS patients. The AMIF and CMIF measures yielded sensitivity and specificity above 80% and an AUC of the ROC above 0.85 in classifying EDS and WDS patients. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.
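A hedged sketch of an auto-mutual-information function (AMIF) follows: mutual information between an EEG segment and a lagged copy of itself, estimated from equal-width histogram bins. The signal, bin count, and lag range are illustrative, and the cross-channel CMIF and any surrogate-based normalization used in the paper are not reproduced.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def amif(x, max_lag, n_bins=16):
    """Auto-mutual information (nats) at lags 1..max_lag via histogram binning."""
    edges = np.linspace(x.min(), x.max(), n_bins + 1)
    digitized = np.digitize(x, edges[1:-1])
    return np.array([mutual_info_score(digitized[:-lag], digitized[lag:])
                     for lag in range(1, max_lag + 1)])

rng = np.random.default_rng(7)
fs = 128
t = np.arange(0, 60, 1 / fs)                         # a 60-s analysis window
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
curve = amif(eeg, max_lag=fs // 2)
print("AMIF at lags 1..5 samples:", np.round(curve[:5], 3))
```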
Deep learning for EEG-Based preference classification
NASA Astrophysics Data System (ADS)
Teo, Jason; Hou, Chew Lin; Mountstephens, James
2017-10-01
Electroencephalogram (EEG)-based emotion classification is rapidly becoming one of the most intensely studied areas of brain-computer interfacing (BCI). The ability to passively yet accurately identify and correlate brainwaves with our immediate emotions opens up truly meaningful and previously unattainable human-computer interactions such as in forensic neuroscience, rehabilitative medicine, affective entertainment and neuro-marketing. One particularly useful yet rarely explored area of EEG-based emotion classification is preference recognition [1], which is simply the detection of like versus dislike. Within the limited investigations into preference classification, all reported studies were based on musically induced stimuli except for a single study which used 2D images. The main objective of this study is to apply deep learning, which has been shown to produce state-of-the-art results in diverse hard problems such as computer vision, natural language processing and audio recognition, to 3D object preference classification over a larger group of test subjects. A cohort of 16 users was shown 60 bracelet-like objects as rotating visual stimuli on a computer display while their preferences and EEGs were recorded. After training a variety of machine learning approaches which included deep neural networks, we then attempted to classify the users' preferences for the 3D visual stimuli based on their EEGs. Here, we show that deep learning outperforms a variety of other machine learning classifiers for this EEG-based preference classification task, particularly on a highly challenging dataset with large inter- and intra-subject variability.
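As a much-simplified stand-in for the deep networks described above, the sketch below trains a small fully connected neural network to classify like versus dislike from band-power-style features. The feature dimensions and labels are synthetic placeholders; the study's architectures, preprocessing, and stimuli are not reproduced.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(8)
n_trials, n_features = 960, 70          # e.g. 14 channels x 5 frequency bands (placeholder)
X = rng.standard_normal((n_trials, n_features))
y = rng.integers(0, 2, n_trials)        # 0 = dislike, 1 = like (dummy labels)

# Standardize the features, then train a small multilayer perceptron.
net = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0),
)
print("CV accuracy:", cross_val_score(net, X, y, cv=5).mean())
```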
Methods for artifact detection and removal from scalp EEG: A review.
Islam, Md Kafiul; Rastegarnia, Amir; Yang, Zhi
2016-11-01
Electroencephalography (EEG) is the most popular brain activity recording technique used in a wide range of applications. One of the commonly faced problems in EEG recordings is the presence of artifacts that come from sources other than the brain and contaminate the acquired signals significantly. Therefore, much research over the past 15 years has focused on identifying ways of handling such artifacts in the preprocessing stage. However, this is still an active area of research, as no single existing artifact detection/removal method is complete or universal. This article presents an extensive review of the existing state-of-the-art artifact detection and removal methods from scalp EEG for all potential EEG-based applications and analyses the pros and cons of each method. First, a general overview of the different artifact types that are found in scalp EEG and their effect on particular applications is presented. In addition, the methods are compared based on their ability to remove certain types of artifacts and their suitability in relevant applications (only a functional comparison is provided, not a performance evaluation of the methods). Finally, the future directions and expected challenges of current research are discussed. This review is therefore expected to be helpful for interested researchers who will develop and/or apply artifact handling algorithms/techniques in the future, as well as for those willing to improve the existing algorithms or propose new solutions in this particular area of research. Copyright © 2016 Elsevier Masson SAS. All rights reserved.
Brain Dynamics in Predicting Driving Fatigue Using a Recurrent Self-Evolving Fuzzy Neural Network.
Liu, Yu-Ting; Lin, Yang-Yin; Wu, Shang-Lin; Chuang, Chun-Hsiang; Lin, Chin-Teng
2016-02-01
This paper proposes a generalized prediction system called a recurrent self-evolving fuzzy neural network (RSEFNN) that employs an on-line gradient descent learning rule to address the electroencephalography (EEG) regression problem in brain dynamics for driving fatigue. The cognitive states of drivers significantly affect driving safety; in particular, fatigue driving, or drowsy driving, endangers both the individual and the public. For this reason, the development of brain-computer interfaces (BCIs) that can identify drowsy driving states is a crucial and urgent topic of study. Many EEG-based BCIs have been developed as artificial auxiliary systems for use in various practical applications because of the benefits of measuring EEG signals. In the literature, the efficacy of EEG-based BCIs in recognition tasks has been limited by low resolutions. The system proposed in this paper represents the first attempt to use the recurrent fuzzy neural network (RFNN) architecture to increase adaptability in realistic EEG applications to overcome this bottleneck. This paper further analyzes brain dynamics in a simulated car driving task in a virtual-reality environment. The proposed RSEFNN model is evaluated using the generalized cross-subject approach, and the results indicate that the RSEFNN is superior to competing models regardless of the use of recurrent or nonrecurrent structures.
3D Printed Dry EEG Electrodes.
Krachunov, Sammy; Casson, Alexander J
2016-10-02
Electroencephalography (EEG) is a procedure that records brain activity in a non-invasive manner. The cost and size of EEG devices has decreased in recent years, facilitating a growing interest in wearable EEG that can be used out-of-the-lab for a wide range of applications, from epilepsy diagnosis, to stroke rehabilitation, to Brain-Computer Interfaces (BCI). A major obstacle for these emerging applications is the wet electrodes, which are used as part of the EEG setup. These electrodes are attached to the human scalp using a conductive gel, which can be uncomfortable to the subject, causes skin irritation, and some gels have poor long-term stability. A solution to this problem is to use dry electrodes, which do not require conductive gel, but tend to have a higher noise floor. This paper presents a novel methodology for the design and manufacture of such dry electrodes. We manufacture the electrodes using low cost desktop 3D printers and off-the-shelf components for the first time. This allows quick and inexpensive electrode manufacturing and opens the possibility of creating electrodes that are customized for each individual user. Our 3D printed electrodes are compared against standard wet electrodes, and the performance of the proposed electrodes is suitable for BCI applications, despite the presence of additional noise.
EEG Dynamics Reflect the Distinct Cognitive Process of Optic Problem Solving
She, Hsiao-Ching; Jung, Tzyy-Ping; Chou, Wen-Chi; Huang, Li-Yu; Wang, Chia-Yu; Lin, Guan-Yu
2012-01-01
This study explores the changes in electroencephalographic (EEG) activity associated with the performance of solving an optics maze problem. College students (N = 37) were instructed to construct three solutions to the optical maze in a Web-based learning environment, which required some knowledge of physics. The subjects put forth their best effort to minimize the number of convexes and mirrors needed to guide the image of an object from the entrance to the exit of the maze. This study examines EEG changes in different frequency bands accompanying varying demands on the cognitive process of providing solutions. Results showed that the mean power of θ, α1, α2, and β1 significantly increased as the number of convexes and mirrors used by the students decreased from solution 1 to 3. Moreover, the mean power of θ and α1 significantly increased when the participants constructed their personal optimal solution (the least total number of mirrors and lens used by students) compared to their non-personal optimal solution. In conclusion, the spectral power of frontal, frontal midline and posterior theta, posterior alpha, and temporal beta increased predominantly as the task demands and task performance increased. PMID:22815800
A systematic linear space approach to solving partially described inverse eigenvalue problems
NASA Astrophysics Data System (ADS)
Hu, Sau-Lon James; Li, Haujun
2008-06-01
Most applications of the inverse eigenvalue problem (IEP), which concerns the reconstruction of a matrix from prescribed spectral data, are associated with special classes of structured matrices. Solving the IEP requires one to satisfy both the spectral constraint and the structural constraint. If the spectral constraint consists of only one or a few prescribed eigenpairs, this kind of inverse problem has been referred to as the partially described inverse eigenvalue problem (PDIEP). This paper develops an efficient, general and systematic approach to solve the PDIEP. Basically, the approach, applicable to various structured matrices, converts the PDIEP into an ordinary inverse problem that is formulated as a set of simultaneous linear equations. While solving the simultaneous linear equations for the model parameters, the singular value decomposition method is applied. Because of the conversion to an ordinary inverse problem, other constraints associated with the model parameters can be easily incorporated into the solution procedure. The detailed derivation and numerical examples implementing the newly developed approach for symmetric Toeplitz and quadratic pencil (including mass, damping and stiffness matrices of a linear dynamic system) PDIEPs are presented. Excellent numerical results for both kinds of problem are achieved in situations that have either unique or infinitely many solutions.
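The following sketch applies the general strategy described above to one concrete structure: a symmetric Toeplitz matrix is reconstructed from a single prescribed eigenpair by writing the spectral constraint T(t) v = λ v as linear equations in the first-row parameters t and solving them with an SVD-based least-squares solve. The test matrix is made up, and the quadratic-pencil case is not covered.

```python
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_from_eigenpair(lam, v):
    """Reconstruct a symmetric Toeplitz matrix satisfying T v = lam * v."""
    n = len(v)
    # (T v)_i = sum_k t_k * sum_{j : |i-j| = k} v_j  ->  linear system M t = lam * v.
    M = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            M[i, abs(i - j)] += v[j]
    t, *_ = np.linalg.lstsq(M, lam * v, rcond=None)   # SVD-based solve
    return toeplitz(t)

# Build a test case from a known symmetric Toeplitz matrix and one of its eigenpairs.
t_true = np.array([2.0, -1.0, 0.5, 0.0, 0.25])
T_true = toeplitz(t_true)
lam, vecs = np.linalg.eigh(T_true)
T_rec = toeplitz_from_eigenpair(lam[0], vecs[:, 0])
print("prescribed eigenpair reproduced:",
      np.allclose(T_rec @ vecs[:, 0], lam[0] * vecs[:, 0]))
```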
Convergence of Chahine's nonlinear relaxation inversion method used for limb viewing remote sensing
NASA Technical Reports Server (NTRS)
Chu, W. P.
1985-01-01
The application of Chahine's (1970) inversion technique to remote sensing problems utilizing the limb viewing geometry is discussed. The problem considered here involves occultation-type measurements and limb radiance-type measurements from either spacecraft or balloon platforms. The kernel matrix of the inversion problem is either an upper or lower triangular matrix. It is demonstrated that the Chahine inversion technique always converges, provided the diagonal elements of the kernel matrix are nonzero.
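A minimal sketch of Chahine's relaxation for a triangular kernel is shown below: each unknown is paired with one measurement and updated multiplicatively by the ratio of measured to computed values until the residual vanishes. The kernel and profile are synthetic stand-ins for limb-viewing weighting functions.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 20
K = np.tril(rng.uniform(0.2, 1.0, (n, n)))        # lower-triangular kernel, nonzero diagonal
x_true = 1.0 + rng.random(n)                      # positive "profile" to recover
d = K @ x_true                                    # noise-free measurements

x = np.ones(n)                                    # positive first guess
for _ in range(200):
    x *= d / (K @ x)                              # Chahine nonlinear relaxation update

print("max relative error:", np.max(np.abs(x - x_true) / x_true))
```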
Su, Kyung-Min; Hairston, W David; Robbins, Kay
2018-01-01
In controlled laboratory EEG experiments, researchers carefully mark events and analyze subject responses time-locked to these events. Unfortunately, such markers may not be available or may come with poor timing resolution for experiments conducted in less-controlled naturalistic environments. We present an integrated event-identification method for identifying particular responses that occur in unlabeled continuously recorded EEG signals based on information from recordings of other subjects potentially performing related tasks. We introduce the idea of timing slack and timing-tolerant performance measures to deal with jitter inherent in such non-time-locked systems. We have developed an implementation available as an open-source MATLAB toolbox (http://github.com/VisLab/EEG-Annotate) and have made test data available in a separate data note. We applied the method to identify visual presentation events (both target and non-target) in data from an unlabeled subject using labeled data from other subjects with good sensitivity and specificity. The method also identified actual visual presentation events in the data that were not previously marked in the experiment. Although the method uses traditional classifiers for initial stages, the problem of identifying events based on the presence of stereotypical EEG responses is the converse of the traditional stimulus-response paradigm and has not been addressed in its current form. In addition to identifying potential events in unlabeled or incompletely labeled EEG, these methods also allow researchers to investigate whether particular stereotypical neural responses are present in other circumstances. Timing tolerance has the added benefit of accommodating inter- and intra-subject timing variations. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.
Spyrou, Loukianos; Martín-Lopez, David; Valentín, Antonio; Alarcón, Gonzalo; Sanei, Saeid
2016-06-01
Interictal epileptiform discharges (IEDs) are transient neural electrical activities that occur in the brain of patients with epilepsy. A problem with the inspection of IEDs from the scalp electroencephalogram (sEEG) is that for a subset of epileptic patients, there are no visually discernible IEDs on the scalp, rendering the above procedures ineffective, both for detection purposes and algorithm evaluation. On the other hand, intracranially placed electrodes yield a much higher incidence of visible IEDs as compared to concurrent scalp electrodes. In this work, we utilize concurrent scalp and intracranial EEG (iEEG) from a group of temporal lobe epilepsy (TLE) patients with low number of scalp-visible IEDs. The aim is to determine whether by considering the timing information of the IEDs from iEEG, the resulting concurrent sEEG contains enough information for the IEDs to be reliably distinguished from non-IED segments. We develop an automatic detection algorithm which is tested in a leave-subject-out fashion, where each test subject's detection algorithm is based on the other patients' data. The algorithm obtained a [Formula: see text] accuracy in recognizing scalp IED from non-IED segments with [Formula: see text] accuracy when trained and tested on the same subject. Also, it was able to identify nonscalp-visible IED events for most patients with a low number of false positive detections. Our results represent a proof of concept that IED information for TLE patients is contained in scalp EEG even if they are not visually identifiable and also that between subject differences in the IED topology and shape are small enough such that a generic algorithm can be used.
Sood, Mehak; Besson, Pierre; Muthalib, Makii; Jindal, Utkarsh; Perrey, Stephane; Dutta, Anirban; Hayashibe, Mitsuhiro
2016-12-01
Transcranial direct current stimulation (tDCS) has been shown to perturb both cortical neural activity and hemodynamics during (online) and after the stimulation; however, the mechanisms of these tDCS-induced online and after-effects are not known. Online resting-state spontaneous brain activation may be relevant for monitoring tDCS neuromodulatory effects, and it can be measured using electroencephalography (EEG) in conjunction with near-infrared spectroscopy (NIRS). We present a Kalman filter based online parameter estimation of an autoregressive (ARX) model to track the transient coupling relation between changes in the EEG power spectrum and NIRS signals during anodal tDCS (2 mA, 10 min) using a 4×1 ring high-definition montage. Our online ARX parameter estimation technique, using the cross-correlation between the log (base-10) transformed EEG band power (0.5-11.25 Hz) and the NIRS oxy-hemoglobin signal in the low-frequency (≤ 0.1 Hz) range, was shown in 5 healthy subjects to be sensitive enough to detect transient EEG-NIRS coupling changes in resting-state spontaneous brain activation during anodal tDCS. Conventional sliding-window cross-correlation calculations suffer from a fundamental problem in computing the phase relationship, as the signal in the window is considered time-invariant and the choice of window length and step size is subjective. The Kalman filter based method allows online ARX parameter estimation from time-varying signals and can capture transients in the coupling relationship between the EEG and NIRS signals. Our new online ARX model based tracking method allows continuous assessment of the transient coupling between the electrophysiological (EEG) and hemodynamic (NIRS) signals representing resting-state spontaneous brain activation during anodal tDCS. Published by Elsevier B.V.
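The Kalman-filter-based online ARX parameter tracking can be sketched as follows: the ARX coefficients are treated as a slowly drifting state (random walk) and updated at every sample from a regressor of past outputs and inputs. The signals, model orders, and noise variances below are synthetic assumptions; the paper's EEG band-power and NIRS preprocessing and the subsequent cross-correlation analysis are not included.

```python
import numpy as np

def kalman_arx(y, u, na=2, nb=2, q=1e-5, r=1e-2):
    """Return the time course of ARX parameter estimates [a_1..a_na, b_1..b_nb]."""
    n_par = na + nb
    theta = np.zeros(n_par)
    P = np.eye(n_par)
    history = []
    for t in range(max(na, nb), len(y)):
        phi = np.concatenate([y[t - na:t][::-1], u[t - nb:t][::-1]])
        P = P + q * np.eye(n_par)                 # prediction: random-walk parameter drift
        k = P @ phi / (phi @ P @ phi + r)         # Kalman gain for the scalar measurement
        theta = theta + k * (y[t] - phi @ theta)  # update with y[t] = phi @ theta + noise
        P = P - np.outer(k, phi) @ P
        history.append(theta.copy())
    return np.array(history)

rng = np.random.default_rng(10)
n = 2000
u = rng.standard_normal(n)                        # "input" signal (e.g. one modality)
y = np.zeros(n)                                   # "output" signal generated by a true ARX model
for t in range(2, n):
    y[t] = (0.5 * y[t-1] - 0.2 * y[t-2]
            + 1.0 * u[t-1] + 0.3 * u[t-2]
            + 0.05 * rng.standard_normal())

est = kalman_arx(y, u)
print("final estimates [a1, a2, b1, b2]:", np.round(est[-1], 3))
```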
Computational inverse methods of heat source in fatigue damage problems
NASA Astrophysics Data System (ADS)
Chen, Aizhou; Li, Yuan; Yan, Bo
2018-04-01
Fatigue dissipation energy is currently a research focus in the field of fatigue damage. Introducing inverse heat-source methods into the parameter identification of fatigue dissipation energy models is a new way to address the problem of calculating fatigue dissipation energy. This paper reviews research advances in computational inverse methods for heat sources and in regularization techniques for solving the inverse problem, as well as existing heat-source solution methods for the fatigue process, discusses prospects for applying inverse heat-source methods in the field of fatigue damage, and lays the foundation for further improving the effectiveness of rapid prediction of fatigue dissipation energy.
Detection of artifacts from high energy bursts in neonatal EEG.
Bhattacharyya, Sourya; Biswas, Arunava; Mukherjee, Jayanta; Majumdar, Arun Kumar; Majumdar, Bandana; Mukherjee, Suchandra; Singh, Arun Kumar
2013-11-01
Detection of non-cerebral activities or artifacts, intermixed within the background EEG, is essential to discard them from subsequent pattern analysis. The problem is much harder in neonatal EEG, where the background EEG contains spikes, waves, and rapid fluctuations in amplitude and frequency. Existing artifact detection methods are mostly limited to detecting only a subset of artifacts such as ocular, muscle or power line artifacts. A few methods integrate different modules, each for the detection of one specific category of artifact. Furthermore, most of the reference approaches are implemented and tested on adult EEG recordings. Direct application of those methods to neonatal EEG causes performance deterioration, due to greater pattern variation and inherent complexity. A method for detecting a wide range of artifact categories in neonatal EEG is thus required. At the same time, the method should be specific enough to preserve the background EEG information. The current study describes a feature-based classification approach to detect both repetitive (generated from ECG, EMG, pulse, respiration, etc.) and transient (generated from eye blinking, eye movement, patient movement, etc.) artifacts. It focuses on artifact detection within high-energy burst patterns, instead of detecting artifacts within the complete background EEG with its wide pattern variation. The objective is to find true burst patterns, which can later be used to identify the Burst-Suppression (BS) pattern, which is commonly observed during newborn seizures. Such selective artifact detection proves to be more sensitive to artifacts and more specific to bursts than existing artifact detection approaches applied to the complete background EEG. Several time-domain, frequency-domain, and statistical features, together with features generated by wavelet decomposition, are analyzed to model the proposed binary classification between burst and artifact segments. A feature selection method is also applied to select the feature subset producing the highest classification accuracy. The suggested feature-based classification method is evaluated on our recorded neonatal EEG dataset, consisting of burst and artifact segments. We obtain 78% sensitivity and 72% specificity as the accuracy measures. The accuracy obtained using the proposed method is found to be about 20% higher than that of the reference approaches. Joint use of the proposed method with our previous work on burst detection outperforms reference methods on simultaneous burst and artifact detection. As the proposed method supports detection of a wide range of artifact patterns, it can be extended to incorporate the detection of artifacts within other seizure patterns and the background EEG as well. © 2013 Elsevier Ltd. All rights reserved.
Unfolding dimension and the search for functional markers in the human electroencephalogram
NASA Astrophysics Data System (ADS)
Dünki, Rudolf M.; Schmid, Gary Bruno
1998-02-01
A biparametric approach to dimensional analysis in terms of a so-called "unfolding dimension" is introduced to explore the extent to which the human EEG can be described by stable features characteristic of an individual despite the well-known problems of intraindividual variability. Our analysis comprises an EEG data set recorded from healthy individuals over a time span of 5 years. The outcome is shown to be comparable to advanced linear methods of spectral analysis with regard to intraindividual specificity and stability over time. Such linear methods have not yet proven to be specific to the EEG of different brain states. Thus we have also investigated the specificity of our biparametric approach by comparing the mental states schizophrenic psychosis and remission, i.e., illness versus full recovery. A difference between EEG in psychosis and remission became apparent within recordings taken at rest with eyes closed and no stimulated or requested mental activity. Hence our approach distinguishes these functional brain states even in the absence of an active or intentional stimulus. This sheds a different light upon theories of schizophrenia as an information-processing disturbance of the brain.
Akce, Abdullah; Johnson, Miles; Dantsker, Or; Bretl, Timothy
2013-03-01
This paper presents an interface for navigating a mobile robot that moves at a fixed speed in a planar workspace, with noisy binary inputs that are obtained asynchronously at low bit-rates from a human user through an electroencephalograph (EEG). The approach is to construct an ordered symbolic language for smooth planar curves and to use these curves as desired paths for a mobile robot. The underlying problem is then to design a communication protocol by which the user can, with vanishing error probability, specify a string in this language using a sequence of inputs. Such a protocol, provided by tools from information theory, relies on a human user's ability to compare smooth curves, just like they can compare strings of text. We demonstrate our interface by performing experiments in which twenty subjects fly a simulated aircraft at a fixed speed and altitude with input only from EEG. Experimental results show that the majority of subjects are able to specify desired paths despite a wide range of errors made in decoding EEG signals.
Are resting state spectral power measures related to executive functions in healthy young adults?
Gordon, Shirley; Todder, Doron; Deutsch, Inbal; Garbi, Dror; Getter, Nir; Meiran, Nachshon
2018-01-08
Resting-state electroencephalogram (rsEEG) has been found to be associated with psychopathology, intelligence, problem solving and academic performance, and is sometimes used as a supportive physiological indicator of enhancement in cognitive training interventions (e.g. neurofeedback, working memory training). In the current study, we measured rsEEG spectral power measures (relative power, between-band ratios and asymmetry) in one hundred sixty-five young adults who were also tested on a battery of executive function (EF) tasks. We specifically focused on the upper Alpha, Theta and Beta frequency bands given their putative role in EF. Our indices were well suited for detecting correlations, since they had decent-to-excellent internal and retest reliability and very little range restriction relative to a nation-wide representative large sample. Nonetheless, Bayesian statistical inference indicated support for the null hypothesis concerning a lack of monotonic correlation between EF and rsEEG spectral power measures. Therefore, we conclude that, contrary to the quite common interpretation, these rsEEG spectral power measures do not indicate individual differences in the measured EF abilities. Copyright © 2017 Elsevier Ltd. All rights reserved.
Online EEG artifact removal for BCI applications by adaptive spatial filtering.
Guarnieri, Roberto; Marino, Marco; Barban, Federico; Ganzetti, Marco; Mantini, Dante
2018-06-28
The performance of brain computer interfaces (BCIs) based on electroencephalography (EEG) data strongly depends on the effective attenuation of artifacts that are mixed in the recordings. To address this problem, we have developed a novel online EEG artifact removal method for BCI applications, which combines blind source separation (BSS) and regression (REG) analysis. The BSS-REG method relies on the availability of a calibration dataset of limited duration for the initialization of a spatial filter using BSS. Online artifact removal is implemented by dynamically adjusting the spatial filter in the actual experiment, based on a linear regression technique. Our results showed that the BSS-REG method is capable of attenuating different kinds of artifacts, including ocular and muscular, while preserving true neural activity. Thanks to its low computational requirements, BSS-REG can be applied to low-density as well as high-density EEG data. We argue that BSS-REG may enable the development of novel BCI applications requiring high-density recordings, such as source-based neurofeedback and closed-loop neuromodulation. © 2018 IOP Publishing Ltd.
Automated EEG artifact elimination by applying machine learning algorithms to ICA-based features.
Radüntz, Thea; Scouten, Jon; Hochmuth, Olaf; Meffert, Beate
2017-08-01
Biological and non-biological artifacts cause severe problems when dealing with electroencephalogram (EEG) recordings. Independent component analysis (ICA) is a widely used method for eliminating various artifacts from recordings. However, evaluating and classifying the calculated independent components (ICs) as artifact or EEG is not fully automated at present. In this study, we propose a new approach for automated artifact elimination, which applies machine learning algorithms to ICA-based features. We compared the performance of our classifiers with the visual classification results given by experts. The best result, with an accuracy rate of 95%, was achieved using features obtained by range filtering of the topoplots and IC power spectra combined with an artificial neural network. Compared with the existing automated solutions, our proposed method is not limited to specific types of artifacts, electrode configurations, or number of EEG channels. The main advantage of the proposed method is that it provides an automatic, reliable, real-time capable, and practical tool, which avoids the need for the time-consuming manual selection of ICs during artifact removal.
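The following sketch illustrates the general workflow of classifying independent components from precomputed IC features with a small neural network; the feature set, labels and network size are placeholders and not the configuration reported in the paper.

```python
# Minimal sketch: classify independent components as artifact vs. EEG from
# precomputed IC features. Synthetic placeholders keep the example self-contained.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
ic_features = rng.normal(size=(200, 12))   # placeholder IC features (topography/spectrum descriptors)
ic_labels = rng.integers(0, 2, size=200)   # placeholder expert labels: 1 = artifact, 0 = EEG

clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000))
print("cross-validated accuracy:", cross_val_score(clf, ic_features, ic_labels, cv=5).mean())
```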
Shin, Younghak; Lee, Seungchan; Ahn, Minkyu; Cho, Hohyun; Jun, Sung Chan; Lee, Heung-No
2015-11-01
One of the main problems related to electroencephalogram (EEG) based brain-computer interface (BCI) systems is the non-stationarity of the underlying EEG signals. This results in the deterioration of the classification performance during experimental sessions. Therefore, adaptive classification techniques are required for EEG based BCI applications. In this paper, we propose simple adaptive sparse representation based classification (SRC) schemes. Supervised and unsupervised dictionary update techniques for new test data and a dictionary modification method by using the incoherence measure of the training data are investigated. The proposed methods are very simple and additional computation for the re-training of the classifier is not needed. The proposed adaptive SRC schemes are evaluated using two BCI experimental datasets. The proposed methods are assessed by comparing classification results with the conventional SRC and other adaptive classification methods. On the basis of the results, we find that the proposed adaptive schemes show relatively improved classification accuracy as compared to conventional methods without requiring additional computation. Copyright © 2015 Elsevier Ltd. All rights reserved.
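A minimal sketch of sparse representation based classification with a simple unsupervised dictionary update is shown below; it follows the general SRC idea rather than the authors' exact scheme, and the dictionary D, class labels and residual threshold are assumptions.

```python
# Minimal sketch of SRC: sparse-code a test trial over a dictionary of training
# trials (columns, ideally unit-normalized) and classify by class-wise residual.
import numpy as np
from sklearn.linear_model import orthogonal_mp

def src_classify(D, class_ids, x, n_nonzero=10):
    coef = orthogonal_mp(D, x, n_nonzero_coefs=n_nonzero)
    classes = np.unique(class_ids)
    residuals = [np.linalg.norm(x - D[:, class_ids == c] @ coef[class_ids == c])
                 for c in classes]
    best = int(np.argmin(residuals))
    return classes[best], residuals[best]

def update_dictionary(D, class_ids, x, pred, residual, threshold=0.5):
    # unsupervised update (assumed criterion): append the test trial under its
    # predicted label when its reconstruction residual is small enough
    if residual < threshold:
        D = np.column_stack([D, x])
        class_ids = np.append(class_ids, pred)
    return D, class_ids
```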
Evolutionary Algorithm Based Feature Optimization for Multi-Channel EEG Classification.
Wang, Yubo; Veluvolu, Kalyana C
2017-01-01
Most BCI systems that rely on EEG signals employ Fourier-based methods for time-frequency decomposition and feature extraction. The band-limited multiple Fourier linear combiner is well suited to such band-limited signals due to its real-time applicability. Despite the improved performance of these techniques in two-channel settings, their application to multiple-channel EEG is not straightforward and remains challenging. As more channels become available, a spatial filter is required to eliminate noise and preserve the useful information. Moreover, multiple-channel EEG also adds high dimensionality to the frequency feature space, so feature selection is required to stabilize the performance of the classifier. In this paper, we develop a new method based on an Evolutionary Algorithm (EA) to solve these two problems simultaneously. The real-valued EA encodes both the spatial filter estimates and the feature selection into its solution and optimizes it with respect to the classification error. Three Fourier-based designs are tested in this paper. Our results show that the combination of the Fourier-based method with the covariance matrix adaptation evolution strategy (CMA-ES) has the best overall performance.
Pakozdy, Akos; Glantschnigg, Ursula; Leschnik, Michael; Hechinger, Harald; Moloney, Teresa; Lang, Bethan; Halasz, Peter; Vincent, Angela
2014-03-01
A 5-year-old, female client-owned cat presented with acute onset of focal epileptic seizures with orofacial twitching and behavioural changes. Magnetic resonance imaging showed bilateral temporal lobe hyperintensities and the EEG was consistent with ictal epileptic seizure activity. After antiepileptic and additional corticosteroid treatment, the cat recovered and by 10 months of follow-up was seizure-free without any problem. Retrospectively, antibodies to LGI1, a component of the voltage-gated potassium channel-complex, were identified. Feline focal seizures with orofacial involvement have been increasingly recognised in client-owned cats, and autoimmune limbic encephalitis was recently suggested as a possible aetiology. This is the first report of EEG, MRI and long-term follow-up of this condition in cats which is similar to human limbic encephalitis.
FOREWORD: 5th International Workshop on New Computational Methods for Inverse Problems
NASA Astrophysics Data System (ADS)
Vourc'h, Eric; Rodet, Thomas
2015-11-01
This volume of Journal of Physics: Conference Series is dedicated to the scientific research presented during the 5th International Workshop on New Computational Methods for Inverse Problems, NCMIP 2015 (http://complement.farman.ens-cachan.fr/NCMIP_2015.html). This workshop took place at Ecole Normale Supérieure de Cachan, on May 29, 2015. The prior editions of NCMIP also took place in Cachan, France, firstly within the scope of ValueTools Conference, in May 2011, and secondly at the initiative of Institut Farman, in May 2012, May 2013 and May 2014. The New Computational Methods for Inverse Problems (NCMIP) workshop focused on recent advances in the resolution of inverse problems. Indeed, inverse problems appear in numerous scientific areas such as geophysics, biological and medical imaging, material and structure characterization, electrical, mechanical and civil engineering, and finances. The resolution of inverse problems consists of estimating the parameters of the observed system or structure from data collected by an instrumental sensing or imaging device. Its success firstly requires the collection of relevant observation data. It also requires accurate models describing the physical interactions between the instrumental device and the observed system, as well as the intrinsic properties of the solution itself. Finally, it requires the design of robust, accurate and efficient inversion algorithms. Advanced sensor arrays and imaging devices provide high rate and high volume data; in this context, the efficient resolution of the inverse problem requires the joint development of new models and inversion methods, taking computational and implementation aspects into account. During this one-day workshop, researchers had the opportunity to bring to light and share new techniques and results in the field of inverse problems. The topics of the workshop were: algorithms and computational aspects of inversion, Bayesian estimation, Kernel methods, learning methods, convex optimization, free discontinuity problems, metamodels, proper orthogonal decomposition, reduced models for the inversion, non-linear inverse scattering, image reconstruction and restoration, and applications (bio-medical imaging, non-destructive evaluation...). NCMIP 2015 was a one-day workshop held in May 2015 which attracted around 70 attendees. Each of the submitted papers has been reviewed by two reviewers. There have been 15 accepted papers. In addition, three international speakers were invited to present a longer talk. The workshop was supported by Institut Farman (ENS Cachan, CNRS) and endorsed by the following French research networks: GDR ISIS, GDR MIA, GDR MOA and GDR Ondes. The program committee acknowledges the following research laboratories: CMLA, LMT, LURPA and SATIE.
NASA Astrophysics Data System (ADS)
Valderrama, Joaquin T.; de la Torre, Angel; Van Dun, Bram
2018-02-01
Objective. Artifact reduction in electroencephalogram (EEG) signals is usually necessary to carry out data analysis appropriately. Despite the large number of denoising techniques available for multichannel setups, there is a lack of efficient algorithms that remove (not only detect) blink-artifacts from a single channel EEG, which is of interest in many clinical and research applications. This paper describes and evaluates the iterative template matching and suppression (ITMS), a new method proposed for detecting and suppressing the artifact associated with the blink activity from a single channel EEG. Approach. The approach of ITMS consists of (a) an iterative process in which blink-events are detected and the blink-artifact waveform of the analyzed subject is estimated, (b) generation of a signal modeling the blink-artifact, and (c) suppression of this signal from the raw EEG. The performance of ITMS is compared with the multi-window summation of derivatives within a window (MSDW) technique using both synthesized and real EEG data. Main results. Results suggest that ITMS presents an adequate performance in detecting and suppressing blink-artifacts from a single channel EEG. When applied to the analysis of cortical auditory evoked potentials (CAEPs), ITMS provides a significant quality improvement in the resulting responses, i.e. in a cohort of 30 adults, the mean correlation coefficient improved from 0.37 to 0.65 when the blink-artifacts were detected and suppressed by ITMS. Significance. ITMS is an efficient solution to the problem of denoising blink-artifacts in single-channel EEG applications, both in clinical and research fields. The proposed ITMS algorithm is stable; automatic, since it does not require human intervention; low-invasive, because the EEG segments not contaminated by blink-artifacts remain unaltered; and easy to implement, as can be observed in the Matlab script implementing the algorithm, provided as supporting material.
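As a heavily simplified sketch of the template-matching-and-suppression idea (not the published ITMS algorithm), the code below iteratively detects large deflections in a single channel, averages them into a blink template, and subtracts a least-squares-scaled copy at each detected event; thresholds and window lengths are arbitrary assumptions.

```python
# Simplified template-based blink suppression on a 1-D EEG trace.
import numpy as np

def suppress_blinks(eeg, fs, win=0.4, thresh_sd=4.0, n_iter=3):
    half = int(win * fs / 2)
    clean = eeg.astype(float).copy()
    for _ in range(n_iter):
        z = (clean - clean.mean()) / clean.std()
        peaks = np.where(np.abs(z) > thresh_sd)[0]
        events, last = [], -10 * half
        for p in peaks:                      # keep one event per window
            if p - last > 2 * half and half <= p < len(clean) - half:
                events.append(p)
                last = p
        if not events:
            break
        template = np.mean([clean[p - half:p + half] for p in events], axis=0)
        for p in events:
            seg = clean[p - half:p + half]
            scale = seg @ template / (template @ template)   # least-squares amplitude
            clean[p - half:p + half] = seg - scale * template
    return clean
```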
How do reference montage and electrodes setup affect the measured scalp EEG potentials?
NASA Astrophysics Data System (ADS)
Hu, Shiang; Lai, Yongxiu; Valdes-Sosa, Pedro A.; Bringas-Vega, Maria L.; Yao, Dezhong
2018-04-01
Objective. Human scalp electroencephalogram (EEG) is widely applied in cognitive neuroscience and clinical studies due to its non-invasiveness and ultra-high time resolution. However, the representativeness of the measured EEG potentials for the underlying neural activities is still a problem under debate. This study aims to investigate systematically how both reference montage and electrode setup affect the accuracy of EEG potentials. Approach. First, the standard EEG potentials are generated by the forward calculation with a single dipole in the neural source space, for eleven channel numbers (10, 16, 21, 32, 64, 85, 96, 128, 129, 257, 335). Here, the reference is the ideal infinity implicitly determined by forward theory. Then, the standard EEG potentials are transformed to recordings with different references, including five mono-polar references (left earlobe, Fz, Pz, Oz, Cz) and three re-references (linked mastoids (LM), average reference (AR) and reference electrode standardization technique (REST)). Finally, the relative errors between the standard EEG potentials and the transformed ones are evaluated in terms of channel number, scalp regions, electrode layout, dipole source position and orientation, as well as sensor noise and head model. Main results. Mono-polar reference recordings usually show large distortions; thus, re-referencing after online mono-polar recording should generally be adopted to mitigate this effect. Among the three re-references, REST is generally superior to AR for all factors compared, and LM performs worst. REST is insensitive to head model perturbation. AR is affected by electrode coverage and dipole orientation but shows no close relation to channel number. Significance. These results indicate that REST would be the first choice of re-reference and that AR may be an alternative option in cases of high sensor noise. Our findings may provide helpful suggestions for cognitive neuroscientists and clinicians on how to obtain EEG potentials as accurately as possible.
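For reference, the two offline re-referencing schemes compared above that require no head model can be written in a few lines; eeg is assumed to be a channels-by-samples array recorded against a mono-polar reference, and the mastoid channel indices are assumed known. REST needs a head-model-based transformation and is not sketched here.

```python
# Minimal sketch of average-reference and linked-mastoids re-referencing.
import numpy as np

def average_reference(eeg):
    # subtract the instantaneous mean over channels from every channel
    return eeg - eeg.mean(axis=0, keepdims=True)

def linked_mastoids(eeg, m1_idx, m2_idx):
    # subtract the mean of the two mastoid channels from every channel
    ref = 0.5 * (eeg[m1_idx] + eeg[m2_idx])
    return eeg - ref[np.newaxis, :]
```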
Kenyon, Lisa K; Farris, John P; Aldrich, Naomi J; Rhodes, Samhita
2017-08-30
The purposes of this exploratory project were: (1) to evaluate the impact of power mobility training with a child who has multiple, severe impairments and (2) to determine if the child's spectrum of electroencephalography (EEG) activity changed during power mobility training. A single-subject A-B-A-B research design was conducted with a four-week duration for each phase. Two target behaviours were explored: (1) mastery motivation assessed via the dimensions of mastery questionnaire (DMQ) and (2) EEG data collected under various conditions. Power mobility skills were also assessed. The participant was a three-year, two-month-old girl with spastic quadriplegic cerebral palsy, gross motor function classification system level V. Each target behaviour was measured weekly. During intervention phases, power mobility training was provided. Improvements were noted in subscale scores of the DMQ. Short-term and long-term EEG changes were also noted. Improvements were noted in power mobility skills. The participant in this exploratory project demonstrated improvements in power mobility skill and function. EEG data collection procedures and variability in an individual's EEG activity make it difficult to determine if the participant's spectrum of EEG activity actually changed in response to power mobility training. Additional studies are needed to investigate the impact of power mobility training on the spectrum of EEG activity in children who have multiple, severe impairments. Implications for Rehabilitation Power mobility training appeared to be beneficial for a child with multiple, severe impairments though the child may never become an independent, community-based power wheelchair user. Electroencephalography may be a valuable addition to the study of power mobility use in children with multiple, severe impairments. Power mobility training appeared to impact mastery motivation (the internal drive to solve complex problems and master new skills) in a child who has multiple, severe impairments.
Coll-Font, Jaume; Burton, Brett M; Tate, Jess D; Erem, Burak; Swenson, Darrel J; Wang, Dafang; Brooks, Dana H; van Dam, Peter; Macleod, Rob S
2014-09-01
Cardiac electrical imaging often requires the examination of different forward and inverse problem formulations based on mathematical and numerical approximations of the underlying source and the intervening volume conductor that can generate the associated voltages on the surface of the body. If the goal is to recover the source on the heart from body surface potentials, the solution strategy must include numerical techniques that can incorporate appropriate constraints and recover useful solutions, even though the problem is badly posed. Creating complete software solutions to such problems is a daunting undertaking. In order to make such tools more accessible to a broad array of researchers, the Center for Integrative Biomedical Computing (CIBC) has made an ECG forward/inverse toolkit available within the open source SCIRun system. Here we report on three new methods added to the inverse suite of the toolkit. These new algorithms, namely a Total Variation method, a non-decreasing TMP inverse and a spline-based inverse, consist of two inverse methods that take advantage of the temporal structure of the heart potentials and one that leverages the spatial characteristics of the transmembrane potentials. These three methods further expand the possibilities of researchers in cardiology to explore and compare solutions to their particular imaging problem.
Children's Understanding of the Inverse Relation between Multiplication and Division
ERIC Educational Resources Information Center
Robinson, Katherine M.; Dube, Adam K.
2009-01-01
Children's understanding of the inversion concept in multiplication and division problems (i.e., that on problems of the form "d multiplied by e/e" no calculations are required) was investigated. Children in Grades 6, 7, and 8 completed an inversion problem-solving task, an assessment of procedures task, and a factual knowledge task of simple…
A Volunteer Computing Project for Solving Geoacoustic Inversion Problems
NASA Astrophysics Data System (ADS)
Zaikin, Oleg; Petrov, Pavel; Posypkin, Mikhail; Bulavintsev, Vadim; Kurochkin, Ilya
2017-12-01
A volunteer computing project aimed at solving computationally hard inverse problems in underwater acoustics is described. This project was used to study the possibilities of sound speed profile reconstruction in a shallow-water waveguide using a dispersion-based geoacoustic inversion scheme. The computational capabilities provided by the project allowed us to investigate the accuracy of the inversion for different mesh sizes of the sound speed profile discretization grid. This problem is well suited to volunteer computing because it can be easily decomposed into independent, simpler subproblems.
Identifying Atmospheric Pollutant Sources Using Artificial Neural Networks
NASA Astrophysics Data System (ADS)
Paes, F. F.; Campos, H. F.; Luz, E. P.; Carvalho, A. R.
2008-05-01
The estimation of area source pollutant strength is a relevant issue for the atmospheric environment. This characterizes an inverse problem in atmospheric pollution dispersion. In the inverse analysis, an area source domain is considered, where the strength of the area source term is assumed unknown. The inverse problem is solved by using a supervised artificial neural network: a multi-layer perceptron. The connection weights of the neural network are computed through the delta rule learning process. The neural network inversion is compared with results from a standard inverse analysis (regularized inverse solution). In the regularization method, the inverse problem is formulated as a non-linear optimization approach, whose objective function is given by the squared difference between the measured pollutant concentrations and those computed by the mathematical model, associated with a regularization operator. In our numerical experiments, the forward problem is addressed by a source-receptor scheme, where a regressive Lagrangian model is applied to compute the transition matrix. Second-order maximum entropy regularization is used, and the regularization parameter is calculated by the L-curve technique. The objective function is minimized employing a deterministic scheme (a quasi-Newton algorithm) [1] and a stochastic technique (PSO: particle swarm optimization) [2]. The inverse problem methodology is tested with synthetic observational data from six measurement points in the physical domain. The best inverse solutions were obtained with neural networks. References: [1] D. R. Roberti, D. Anfossi, H. F. Campos Velho, G. A. Degrazia (2005): Estimating Emission Rate and Pollutant Source Location, Ciencia e Natura, p. 131-134. [2] E.F.P. da Luz, H.F. de Campos Velho, J.C. Becceneri, D.R. Roberti (2007): Estimating Atmospheric Area Source Strength Through Particle Swarm Optimization. Inverse Problems, Design and Optimization Symposium IPDO-2007, April 16-18, Miami (FL), USA, vol 1, p. 354-359.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bledsoe, Keith C.
2015-04-01
The DiffeRential Evolution Adaptive Metropolis (DREAM) method is a powerful optimization/uncertainty quantification tool used to solve inverse transport problems in Los Alamos National Laboratory's INVERSE code system. The DREAM method has been shown to be adept at accurate uncertainty quantification, but it can be very computationally demanding. Previously, the DREAM method in INVERSE performed a user-defined number of particle transport calculations. This placed a burden on the user to guess the number of calculations that would be required to accurately solve any given problem. This report discusses a new approach that has been implemented into INVERSE, the Gelman-Rubin convergence metric. This metric automatically detects when an appropriate number of transport calculations have been completed and the uncertainty in the inverse problem has been accurately calculated. In a test problem with a spherical geometry, this method was found to decrease the number of transport calculations (and thus time required) to solve a problem by an average of over 90%. In a cylindrical test geometry, a 75% decrease was obtained.
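A minimal sketch of the Gelman-Rubin potential scale reduction factor (R-hat) for one parameter is given below; chains is an assumed array of shape (n_chains, n_samples), and a commonly used stopping rule is to continue sampling until R-hat falls below roughly 1.1 (the threshold used in INVERSE is not stated here).

```python
# Minimal sketch of the Gelman-Rubin convergence metric (R-hat) for one parameter.
import numpy as np

def gelman_rubin(chains):
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    chain_vars = chains.var(axis=1, ddof=1)
    B = n * chain_means.var(ddof=1)          # between-chain variance
    W = chain_vars.mean()                    # within-chain variance
    var_hat = (n - 1) / n * W + B / n        # pooled variance estimate
    return np.sqrt(var_hat / W)              # values near 1 indicate convergence
```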
NASA Astrophysics Data System (ADS)
Guseinov, I. M.; Khanmamedov, A. Kh.; Mamedova, A. F.
2018-04-01
We consider the Schrödinger equation with an additional quadratic potential on the entire axis and use the transformation operator method to study the direct and inverse problems of the scattering theory. We obtain the main integral equations of the inverse problem and prove that the basic equations are uniquely solvable.
Assessing non-uniqueness: An algebraic approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vasco, Don W.
Geophysical inverse problems are endowed with a rich mathematical structure. When discretized, most differential and integral equations of interest are algebraic (polynomial) in form. Techniques from algebraic geometry and computational algebra provide a means to address questions of existence and uniqueness for both linear and non-linear inverse problems. In a sense, the methods extend ideas which have proven fruitful in treating linear inverse problems.
NASA Astrophysics Data System (ADS)
Teramae, Tatsuya; Kushida, Daisuke; Takemori, Fumiaki; Kitamura, Akira
The authors previously proposed an estimation method combining the k-means algorithm and a neural network (NN) for evaluating massage. However, this estimation method suffers from a decreased discrimination ratio for new users. There are two causes of this problem: the generalization ability of the NN is poor, and the clusters produced by the k-means algorithm do not have high within-class correlation coefficients. This research therefore proposes a k-means algorithm guided by the correlation coefficient, together with incremental learning for the NN. The proposed k-means algorithm includes an evaluation function based on the correlation coefficient. In the incremental learning scheme, the NN is trained on new data with weights initialized from those learned on the existing data. The effectiveness of the proposed methods is verified by estimation results on EEG data recorded while a subject receives massage.
Ivanov, J.; Miller, R.D.; Xia, J.; Steeples, D.; Park, C.B.
2005-01-01
In a set of two papers we study the inverse problem of refraction travel times. The purpose of this work is to use the study as a basis for development of more sophisticated methods for finding more reliable solutions to the inverse problem of refraction travel times, which is known to be nonunique. The first paper, "Types of Geophysical Nonuniqueness through Minimization," emphasizes the existence of different forms of nonuniqueness in the realm of inverse geophysical problems. Each type of nonuniqueness requires a different type and amount of a priori information to acquire a reliable solution. Based on such coupling, a nonuniqueness classification is designed. Therefore, since most inverse geophysical problems are nonunique, each inverse problem must be studied to define what type of nonuniqueness it belongs to and thus determine what type of a priori information is necessary to find a realistic solution. The second paper, "Quantifying Refraction Nonuniqueness Using a Three-layer Model," serves as an example of such an approach. However, its main purpose is to provide a better understanding of the inverse refraction problem by studying the type of nonuniqueness it possesses. An approach for obtaining a realistic solution to the inverse refraction problem is planned to be offered in a third paper that is in preparation. The main goal of this paper is to redefine the existing generalized notion of nonuniqueness and a priori information by offering a classified, discriminate structure. Nonuniqueness is often encountered when trying to solve inverse problems. However, possible nonuniqueness diversity is typically neglected and nonuniqueness is regarded as a whole, as an unpleasant "black box" and is approached in the same manner by applying smoothing constraints, damping constraints with respect to the solution increment and, rarely, damping constraints with respect to some sparse reference information about the true parameters. In practice, when solving geophysical problems different types of nonuniqueness exist, and thus there are different ways to solve the problems. Nonuniqueness is usually regarded as due to data error, assuming the true geology is acceptably approximated by simple mathematical models. Compounding the nonlinear problems, geophysical applications routinely exhibit exact-data nonuniqueness even for models with very few parameters adding to the nonuniqueness due to data error. While nonuniqueness variations have been defined earlier, they have not been linked to specific use of a priori information necessary to resolve each case. Four types of nonuniqueness, typical for minimization problems are defined with the corresponding methods for inclusion of a priori information to find a realistic solution without resorting to a non-discriminative approach. The above-developed stand-alone classification is expected to be helpful when solving any geophysical inverse problems. © Birkhäuser Verlag, Basel, 2005.
Computational methods for inverse problems in geophysics: inversion of travel time observations
Pereyra, V.; Keller, H.B.; Lee, W.H.K.
1980-01-01
General ways of solving various inverse problems are studied for given travel time observations between sources and receivers. These problems are separated into three components: (a) the representation of the unknown quantities appearing in the model; (b) the nonlinear least-squares problem; (c) the direct, two-point ray-tracing problem used to compute travel time once the model parameters are given. Novel software is described for (b) and (c), and some ideas given on (a). Numerical results obtained with artificial data and an implementation of the algorithm are also presented. © 1980.
A fixed energy fixed angle inverse scattering in interior transmission problem
NASA Astrophysics Data System (ADS)
Chen, Lung-Hui
2017-06-01
We study the inverse acoustic scattering problem in mathematical physics. The problem is to recover the index of refraction in an inhomogeneous medium by measuring the scattered wave fields in the far field. We transform the problem to the interior transmission problem in the study of the Helmholtz equation. We find an inverse uniqueness on the scatterer with a knowledge of a fixed interior transmission eigenvalue. By examining the solution in a series of spherical harmonics in the far field, we can determine uniquely the perturbation source for the radially symmetric perturbations.
Total-variation based velocity inversion with Bregmanized operator splitting algorithm
NASA Astrophysics Data System (ADS)
Zand, Toktam; Gholami, Ali
2018-04-01
Many problems in applied geophysics can be formulated as a linear inverse problem. The associated problems, however, are large-scale and ill-conditioned. Therefore, regularization techniques need to be employed to solve them and generate a stable and acceptable solution. We consider numerical methods for solving such problems in this paper. In order to tackle the ill-conditioning of the problem, we use blockiness as prior information about the subsurface parameters and formulate the problem as a constrained total variation (TV) regularization. The Bregmanized operator splitting (BOS) algorithm, a combination of the Bregman iteration and the proximal forward-backward operator splitting method, is developed to solve the resulting problem. Two main advantages of this new algorithm are that no matrix inversion is required and that a discrepancy stopping criterion is used to stop the iterations, which allows efficient solution of large-scale problems. The high performance of the proposed TV regularization method is demonstrated using two different experiments: 1) velocity inversion from (synthetic) seismic data based on the Born approximation, and 2) computing interval velocities from RMS velocities via the Dix formula. Numerical examples are presented to verify the feasibility of the proposed method for high-resolution velocity inversion.
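The second experiment relies on the Dix conversion from RMS (stacking) velocities to interval velocities, which can be sketched as follows; t and v_rms are assumed arrays of two-way travel times and RMS velocities at the picked horizons.

```python
# Minimal sketch of the Dix conversion from RMS to interval velocities.
import numpy as np

def dix_interval_velocities(t, v_rms):
    t, v_rms = np.asarray(t, float), np.asarray(v_rms, float)
    num = v_rms[1:] ** 2 * t[1:] - v_rms[:-1] ** 2 * t[:-1]
    den = t[1:] - t[:-1]
    return np.sqrt(num / den)    # interval velocity within each layer
```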
Zatsiorsky, Vladimir M.
2011-01-01
One of the key problems of motor control is the redundancy problem, in particular how the central nervous system (CNS) chooses an action out of the infinitely many possible ones. A promising way to address this question is to assume that the choice is made based on optimization of a certain cost function. A number of cost functions have been proposed in the literature to explain performance in different motor tasks: from force sharing in grasping to path planning in walking. However, the problem of uniqueness of the cost function(s) was not addressed until recently. In this article, we analyze two methods of finding additive cost functions in inverse optimization problems with linear constraints, so-called linear-additive inverse optimization problems. These methods are based on the Uniqueness Theorem for inverse optimization problems that we proved recently (Terekhov et al., J Math Biol 61(3):423–453, 2010). Using synthetic data, we show that both methods allow for determining the cost function. We analyze the influence of noise on both methods. Finally, we show how a violation of the conditions of the Uniqueness Theorem may lead to incorrect solutions of the inverse optimization problem. PMID:21311907
NASA Astrophysics Data System (ADS)
Shimelevich, M. I.; Obornev, E. A.; Obornev, I. E.; Rodionov, E. A.
2017-07-01
The iterative approximation neural network method for solving conditionally well-posed nonlinear inverse problems of geophysics is presented. The method is based on the neural network approximation of the inverse operator. The inverse problem is solved in the class of grid (block) models of the medium on a regularized parameterization grid. The construction principle of this grid relies on using the calculated values of the continuity modulus of the inverse operator and its modifications determining the degree of ambiguity of the solutions. The method provides approximate solutions of inverse problems with the maximal degree of detail given the specified degree of ambiguity, for a total number of sought medium parameters of n × 10^3. The a priori and a posteriori estimates of the degree of ambiguity of the approximated solutions are calculated. The work of the method is illustrated by the example of the three-dimensional (3D) inversion of the synthesized 2D areal geoelectrical (audio magnetotelluric sounding, AMTS) data corresponding to the schematic model of a kimberlite pipe.
A new method to detect event-related potentials based on Pearson's correlation.
Giroldini, William; Pederzoli, Luciano; Bilucaglia, Marco; Melloni, Simone; Tressoldi, Patrizio
2016-12-01
Event-related potentials (ERPs) are widely used in brain-computer interface applications and in neuroscience. Normal EEG activity is rich in background noise, and therefore, in order to detect ERPs, it is usually necessary to average multiple trials to reduce the effects of this noise. The noise produced by EEG activity itself is not correlated with the ERP waveform, and so, by calculating the average, the noise is decreased by a factor inversely proportional to the square root of N, where N is the number of averaged epochs. This is the simplest strategy currently used to detect ERPs, and it is based on averaging all the ERP waveforms, these waveforms being time- and phase-locked. In this paper, a new method called GW6 is proposed, which calculates the ERP using a mathematical method based only on Pearson's correlation. The result is a graph with the same time resolution as the classical ERP that shows only positive peaks, representing the increase, in consonance with the stimuli, of the EEG signal correlation over all channels. This new method is also useful for selectively identifying and highlighting some hidden components of the ERP response that are not phase-locked and are usually lost in the standard, simple method based on averaging all the epochs. These hidden components seem to be caused by variations (between successive stimuli) of the ERP's inherent phase latency (jitter), although the same stimulus produces a reasonably constant phase across all EEG channels. For this reason, this new method could be very helpful for investigating these hidden components of the ERP response and for developing applications for scientific and medical purposes. Moreover, this new method is more resistant to EEG artifacts than the standard calculation of the average and could be very useful in research and neurology. The proposed method can be used directly in the form of a process written in the well-known Matlab programming language and can be easily and quickly rewritten in any other software language.
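The sketch below is not the GW6 algorithm itself, only a generic illustration of the underlying idea of tracking stimulus-locked increases in inter-channel Pearson correlation: for each sliding window the pairwise channel correlations are averaged within each epoch and then across epochs; the window and step sizes are arbitrary.

```python
# Generic sketch: average inter-channel Pearson correlation in sliding windows.
# `epochs` is assumed to have shape (n_epochs, n_channels, n_samples).
import numpy as np

def mean_interchannel_correlation(epochs, win=50, step=10):
    n_ep, n_ch, n_s = epochs.shape
    starts = list(range(0, n_s - win + 1, step))
    curve = np.zeros(len(starts))
    iu = np.triu_indices(n_ch, k=1)               # indices of unique channel pairs
    for k, s in enumerate(starts):
        r_mean = 0.0
        for ep in epochs:
            r = np.corrcoef(ep[:, s:s + win])     # channel-by-channel correlations
            r_mean += r[iu].mean()
        curve[k] = r_mean / n_ep
    return np.array(starts), curve
```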
ERIC Educational Resources Information Center
He, Jie; Degnan, Kathryn Amey; McDermott, Jennifer Martin; Henderson, Heather A.; Hane, Amie Ashley; Xu, Qinmei; Fox, Nathan A.
2010-01-01
The relations among infant anger reactivity, approach behavior, and frontal electroencephalogram (EEG) asymmetry, and their relations to inhibitory control and behavior problems in early childhood were examined within the context of a longitudinal study of temperament. Two hundred nine infants' anger expressions to arm restraint were observed at 4…
Geostatistical regularization operators for geophysical inverse problems on irregular meshes
NASA Astrophysics Data System (ADS)
Jordi, C.; Doetsch, J.; Günther, T.; Schmelzbach, C.; Robertsson, J. OA
2018-05-01
Irregular meshes allow complicated subsurface structures to be included in geophysical modelling and inverse problems. The non-uniqueness of these inverse problems requires appropriate regularization that can incorporate a priori information. However, defining regularization operators for irregular discretizations is not trivial. Different schemes for calculating smoothness operators on irregular meshes have been proposed. In contrast to classical regularization constraints that are only defined using the nearest neighbours of a cell, geostatistical operators include a larger neighbourhood around a particular cell. A correlation model defines the extent of the neighbourhood and allows information about geological structures to be incorporated. We propose an approach to calculate geostatistical operators for inverse problems on irregular meshes by eigendecomposition of a covariance matrix that contains the a priori geological information. Using our approach, the calculation of the operator matrix becomes tractable for 3-D inverse problems on irregular meshes. We tested the performance of the geostatistical regularization operators and compared them against the results of anisotropic smoothing in inversions of 2-D surface synthetic electrical resistivity tomography (ERT) data as well as in the inversion of a realistic 3-D cross-well synthetic ERT scenario. The inversions of 2-D ERT and seismic traveltime field data with geostatistical regularization provide results that are in good accordance with the expected geology and thus facilitate their interpretation. In particular, for layered structures the geostatistical regularization provides geologically more plausible results compared to the anisotropic smoothness constraints.
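A minimal sketch of the covariance-based construction, under assumed settings, is given below: an exponential correlation model evaluated on cell-centre distances defines a covariance matrix C, and its eigendecomposition yields an operator W with W^T W = C^{-1}, so the penalty ||Wm||^2 favours models consistent with the assumed correlation structure. The correlation length, variance and jitter are placeholders.

```python
# Minimal sketch: geostatistical regularization operator from an assumed
# exponential covariance model on an irregular mesh.
import numpy as np
from scipy.spatial.distance import cdist

def geostat_operator(cell_centres, corr_length=50.0, variance=1.0, jitter=1e-8):
    d = cdist(cell_centres, cell_centres)                 # cell-centre distances
    C = variance * np.exp(-d / corr_length)               # exponential covariance
    w, V = np.linalg.eigh(C + jitter * np.eye(len(C)))    # eigendecomposition
    return V @ np.diag(1.0 / np.sqrt(w)) @ V.T            # W = C^{-1/2}
```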
FOREWORD: 4th International Workshop on New Computational Methods for Inverse Problems (NCMIP2014)
NASA Astrophysics Data System (ADS)
2014-10-01
This volume of Journal of Physics: Conference Series is dedicated to the scientific contributions presented during the 4th International Workshop on New Computational Methods for Inverse Problems, NCMIP 2014 (http://www.farman.ens-cachan.fr/NCMIP_2014.html). This workshop took place at Ecole Normale Supérieure de Cachan, on May 23, 2014. The prior editions of NCMIP also took place in Cachan, France, firstly within the scope of ValueTools Conference, in May 2011 (http://www.ncmip.org/2011/), and secondly at the initiative of Institut Farman, in May 2012 and May 2013, (http://www.farman.ens-cachan.fr/NCMIP_2012.html), (http://www.farman.ens-cachan.fr/NCMIP_2013.html). The New Computational Methods for Inverse Problems (NCMIP) Workshop focused on recent advances in the resolution of inverse problems. Indeed, inverse problems appear in numerous scientific areas such as geophysics, biological and medical imaging, material and structure characterization, electrical, mechanical and civil engineering, and finances. The resolution of inverse problems consists of estimating the parameters of the observed system or structure from data collected by an instrumental sensing or imaging device. Its success firstly requires the collection of relevant observation data. It also requires accurate models describing the physical interactions between the instrumental device and the observed system, as well as the intrinsic properties of the solution itself. Finally, it requires the design of robust, accurate and efficient inversion algorithms. Advanced sensor arrays and imaging devices provide high rate and high volume data; in this context, the efficient resolution of the inverse problem requires the joint development of new models and inversion methods, taking computational and implementation aspects into account. During this one-day workshop, researchers had the opportunity to bring to light and share new techniques and results in the field of inverse problems. The topics of the workshop were: algorithms and computational aspects of inversion, Bayesian estimation, Kernel methods, learning methods, convex optimization, free discontinuity problems, metamodels, proper orthogonal decomposition, reduced models for the inversion, non-linear inverse scattering, image reconstruction and restoration, and applications (bio-medical imaging, non-destructive evaluation...). NCMIP 2014 was a one-day workshop held in May 2014 which attracted around sixty attendees. Each of the submitted papers has been reviewed by two reviewers. There have been nine accepted papers. In addition, three international speakers were invited to present a longer talk. The workshop was supported by Institut Farman (ENS Cachan, CNRS) and endorsed by the following French research networks (GDR ISIS, GDR MIA, GDR MOA, GDR Ondes). The program committee acknowledges the following research laboratories: CMLA, LMT, LURPA, SATIE. Eric Vourc'h and Thomas Rodet
Lawhern, Vernon; Hairston, W David; McDowell, Kaleb; Westerfield, Marissa; Robbins, Kay
2012-07-15
We examine the problem of accurate detection and classification of artifacts in continuous EEG recordings. Manual identification of artifacts, by means of an expert or panel of experts, can be tedious, time-consuming and infeasible for large datasets. We use autoregressive (AR) models for feature extraction and characterization of EEG signals containing several kinds of subject-generated artifacts. AR model parameters are scale-invariant features that can be used to develop models of artifacts across a population. We use a support vector machine (SVM) classifier to discriminate among artifact conditions using the AR model parameters as features. Results indicate reliable classification among several different artifact conditions across subjects (approximately 94%). These results suggest that AR modeling can be a useful tool for discriminating among artifact signals both within and across individuals. Copyright © 2012 Elsevier B.V. All rights reserved.
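The AR-plus-SVM idea can be sketched as follows; the AR order, the synthetic segments and the SVM settings are placeholders used only to make the example self-contained, not the values from the study.

```python
# Minimal sketch: least-squares AR coefficients per EEG segment as features for an SVM.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def ar_coefficients(x, order=4):
    # design matrix of lagged samples: column k holds x[t-(k+1)]
    X = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
    y = x[order:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

rng = np.random.default_rng(1)
segments = rng.normal(size=(100, 512))     # placeholder EEG segments
labels = rng.integers(0, 2, size=100)      # placeholder artifact labels
features = np.array([ar_coefficients(s) for s in segments])
print("cross-validated accuracy:", cross_val_score(SVC(kernel="rbf"), features, labels, cv=5).mean())
```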
Effects of Parkinson's disease on brain-wave phase synchronisation and cross-modulation
NASA Astrophysics Data System (ADS)
Stumpf, K.; Schumann, A. Y.; Plotnik, M.; Gans, F.; Penzel, T.; Fietze, I.; Hausdorff, J. M.; Kantelhardt, J. W.
2010-02-01
We study the effects of Parkinson's disease (PD) on phase synchronisation and cross-modulation of instantaneous amplitudes and frequencies for brain waves during sleep. Analysing data from 40 full-night EEGs (electro-encephalograms) of ten patients with PD and ten age-matched healthy controls we find that phase synchronisation between the left and right hemisphere of the brain is characteristically reduced in patients with PD. Since there is no such difference in phase synchronisation for EEGs from the same hemisphere, our results suggest the possibility of a relation with problems in coordinated motion of left and right limbs in some patients with PD. Using the novel technique of amplitude and frequency cross-modulation analysis, relating oscillations in different EEG bands and distinguishing both positive and negative modulation, we observe an even more significant decrease in patients for several band combinations.
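A common way to quantify phase synchronisation between two band-pass-filtered EEG channels is the Hilbert-transform-based phase locking value sketched below; the paper's exact estimator may differ.

```python
# Minimal sketch of a phase locking value between two band-pass-filtered signals.
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    phase_diff = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * phase_diff)))   # 1 = perfect locking, 0 = none
```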
SVM-Based System for Prediction of Epileptic Seizures from iEEG Signal
Cherkassky, Vladimir; Lee, Jieun; Veber, Brandon; Patterson, Edward E.; Brinkmann, Benjamin H.; Worrell, Gregory A.
2017-01-01
Objective. This paper describes a data-analytic modeling approach for prediction of epileptic seizures from intracranial electroencephalogram (iEEG) recording of brain activity. Even though it is widely accepted that statistical characteristics of iEEG signal change prior to seizures, robust seizure prediction remains a challenging problem due to subject-specific nature of data-analytic modeling. Methods. Our work emphasizes understanding of clinical considerations important for iEEG-based seizure prediction, and proper translation of these clinical considerations into data-analytic modeling assumptions. Several design choices during pre-processing and post-processing are considered and investigated for their effect on seizure prediction accuracy. Results. Our empirical results show that the proposed SVM-based seizure prediction system can achieve robust prediction of preictal and interictal iEEG segments from dogs with epilepsy. The sensitivity is about 90–100%, and the false-positive rate is about 0–0.3 times per day. The results also suggest good prediction is subject-specific (dog or human), in agreement with earlier studies. Conclusion. Good prediction performance is possible only if the training data contain sufficiently many seizure episodes, i.e., at least 5–7 seizures. Significance. The proposed system uses subject-specific modeling and unbalanced training data. This system also utilizes three different time scales during training and testing stages. PMID:27362758
Inverse problems in the design, modeling and testing of engineering systems
NASA Technical Reports Server (NTRS)
Alifanov, Oleg M.
1991-01-01
Formulations, classification, areas of application, and approaches to solving different inverse problems are considered for the design of structures, modeling, and experimental data processing. Problems in the practical implementation of theoretical-experimental methods based on solving inverse problems are analyzed in order to identify mathematical models of physical processes, aid in input data preparation for design parameter optimization, help in design parameter optimization itself, and to model experiments, large-scale tests, and real tests of engineering systems.
Hybrid EEG-EOG brain-computer interface system for practical machine control.
Punsawad, Yunyong; Wongsawat, Yodchanan; Parnichkun, Manukid
2010-01-01
Practical issues such as accuracy with various subjects, number of sensors, and time for training are important problems of existing brain-computer interface (BCI) systems. In this paper, we propose a hybrid framework for the BCI system that can make machine control more practical. The electrooculogram (EOG) is employed to control the machine in the left and right directions, while the electroencephalogram (EEG) is employed to control the forward, no action, and complete stop motions of the machine. By using only 2-channel biosignals, an average classification accuracy of more than 95% can be achieved.
Flamm, Christoph; Graef, Andreas; Pirker, Susanne; Baumgartner, Christoph; Deistler, Manfred
2013-01-01
Granger causality is a useful concept for studying causal relations in networks. However, numerical problems occur when applying the corresponding methodology to high-dimensional time series showing co-movement, e.g. EEG recordings or economic data. In order to deal with these shortcomings, we propose a novel method for the causal analysis of such multivariate time series based on Granger causality and factor models. We present the theoretical background, successfully assess our methodology with the help of simulated data and show a potential application in EEG analysis of epileptic seizures. PMID:23354014
NASA Astrophysics Data System (ADS)
Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.
2016-09-01
Inverse modeling seeks model parameters given a set of observations. However, for practical problems, because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2-D and a random hydraulic conductivity field in 3-D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Compared with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multicore computational environment. Therefore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate to large-scale problems.
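The core idea of replacing a direct factorization of the damped normal equations with a Krylov solver can be sketched with SciPy's LSQR as below; this omits the subspace-recycling strategy across damping parameters and the parallelization described above.

```python
# Minimal sketch of one Levenberg-Marquardt step solved with a Krylov method (LSQR)
# instead of forming and factorizing J^T J + lambda I.
import numpy as np
from scipy.sparse.linalg import lsqr

def lm_step(J, residual, damping):
    # solves min_dx || J dx + r ||^2 + damping * ||dx||^2 iteratively
    return lsqr(J, -residual, damp=np.sqrt(damping))[0]
```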
PWC-ICA: A Method for Stationary Ordered Blind Source Separation with Application to EEG.
Ball, Kenneth; Bigdely-Shamlo, Nima; Mullen, Tim; Robbins, Kay
2016-01-01
Independent component analysis (ICA) is a class of algorithms widely applied to separate sources in EEG data. Most ICA approaches use optimization criteria derived from temporal statistical independence and are invariant with respect to the actual ordering of individual observations. We propose a method of mapping real signals into a complex vector space that takes into account the temporal order of signals and enforces certain mixing stationarity constraints. The resulting procedure, which we call Pairwise Complex Independent Component Analysis (PWC-ICA), performs the ICA in a complex setting and then reinterprets the results in the original observation space. We examine the performance of our candidate approach relative to several existing ICA algorithms for the blind source separation (BSS) problem on both real and simulated EEG data. On simulated data, PWC-ICA is often capable of achieving a better solution to the BSS problem than AMICA, Extended Infomax, or FastICA. On real data, the dipole interpretations of the BSS solutions discovered by PWC-ICA are physically plausible, are competitive with existing ICA approaches, and may represent sources undiscovered by other ICA methods. In conjunction with this paper, the authors have released a MATLAB toolbox that performs PWC-ICA on real, vector-valued signals.
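As an illustration of the order-aware complex embedding the abstract refers to, here is a minimal numpy sketch; the exact PWC-ICA mapping and the complex ICA step live in the authors' MATLAB toolbox, and the pairing of consecutive samples shown below is an assumption made for illustration only.

```python
# Illustrative sketch of a pairwise complex embedding of a real multichannel
# signal, of the kind PWC-ICA builds on; this particular mapping is an
# assumption, not the toolbox implementation.
import numpy as np

def embed_pairwise(X):
    """X: (n_channels, n_samples) real array.
    Returns a complex array pairing each sample with its successor, so the
    temporal order of observations enters the representation."""
    return X[:, :-1] + 1j * X[:, 1:]

X = np.random.randn(8, 1000)   # toy 8-channel recording
Z = embed_pairwise(X)          # complex-valued, order-aware data
# A complex-valued ICA would be run on Z; its (complex) mixing matrix is then
# reinterpreted in the original real observation space, as described above.
```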
Orbitofrontal disinhibition of pain in migraine with aura: an interictal EEG-mapping study.
Lev, Rina; Granovsky, Yelena; Yarnitsky, David
2010-08-01
This study aimed to identify the cortical mechanisms underlying the processes of interictal dishabituation to experimental pain in subjects suffering from migraine with aura (MWA). In 21 subjects with MWA and 22 healthy controls, cortical responses to two successive trials of noxious contact-heat stimuli were analyzed using EEG-tomography software. When compared with controls, MWA patients showed significantly increased pain-evoked potential amplitudes accompanied by reduced activity in the orbitofrontal cortex (OFC) and increased activity in the pain matrix regions, including the primary somatosensory cortex (SI) (p < .05). Similarly to controls, MWA subjects displayed an inverse correlation between the OFC and SI activities, and positive interrelations between other pain-specific regions. The activity changes in the OFC negatively correlated with lifetime headache duration and longevity (p < .05). Reduced inhibitory functioning of the prefrontal cortex is a possible cause for disinhibition of the pain-related sensory cortices in migraine. The finding of OFC hypofunction over the disease course is in keeping with current concepts of migraine as a progressive brain disorder.
LORETA EEG phase reset of the default mode network.
Thatcher, Robert W; North, Duane M; Biver, Carl J
2014-01-01
The purpose of this study was to explore phase reset of 3-dimensional current sources in Brodmann areas located in the human default mode network (DMN) using Low Resolution Electromagnetic Tomography (LORETA) of the human electroencephalogram (EEG). The EEG was recorded from 19 scalp locations in 70 healthy normal subjects ranging in age from 13 to 20 years. LORETA current sources were computed time point by time point for 14 Brodmann areas comprising the DMN in the delta frequency band. The Hilbert transform of the LORETA time series was used to compute the instantaneous phase differences between all pairs of Brodmann areas. Phase shift and lock durations were calculated based on the 1st and 2nd derivatives of the time series of phase differences. Phase shift duration exhibited three discrete modes at approximately: (1) 25 ms, (2) 50 ms, and (3) 65 ms. Phase lock durations were present primarily at: (1) 300-350 ms and (2) 350-450 ms. Phase shift and lock durations were inversely related and exhibited an exponential change with distance between Brodmann areas. The results are explained by local neural packing density of network hubs and an exponential decrease in connections with distance from a hub. The results are consistent with a discrete temporal model of brain function where anatomical hubs behave like a "shutter" that opens and closes at specific durations as nodes of a network, giving rise to temporarily phase-locked clusters of neurons for specific durations.
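A hedged sketch of the Hilbert-transform phase-difference computation described above; generic numpy/scipy code on raw time series stands in for the LORETA source waveforms, and the derivative threshold used to split "shift" from "lock" segments is an assumed placeholder rather than the paper's 1st/2nd-derivative criteria.

```python
# Instantaneous phase differences via the Hilbert transform, with a crude
# labeling of "shift" (phase difference moving) vs "lock" (roughly constant)
# segments. The threshold is an assumption made for illustration.
import numpy as np
from scipy.signal import hilbert

def phase_difference(x, y):
    phi_x = np.unwrap(np.angle(hilbert(x)))
    phi_y = np.unwrap(np.angle(hilbert(y)))
    return phi_x - phi_y

def shift_lock_durations(dphi, fs, deriv_threshold=0.05):
    """Label each sample as 'shift' (|d(dphi)/dt| above threshold) or 'lock',
    then return the run lengths of each state in milliseconds."""
    moving = np.abs(np.diff(dphi)) * fs > deriv_threshold
    durations = {"shift": [], "lock": []}
    start = 0
    for i in range(1, len(moving)):
        if moving[i] != moving[i - 1]:
            state = "shift" if moving[i - 1] else "lock"
            durations[state].append((i - start) / fs * 1000.0)
            start = i
    return durations
```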
Event-related wave activity in the EEG provides new marker of ADHD.
Alexander, David M; Hermens, Daniel F; Keage, Hannah A D; Clark, C Richard; Williams, Leanne M; Kohn, Michael R; Clarke, Simon D; Lamb, Chris; Gordon, Evian
2008-01-01
This study examines the utility of new measures of event-related spatio-temporal waves in the EEG as a marker of ADHD, previously shown to be closely related to the P3 ERP in an adult sample. Wave activity in the EEG was assessed during both an auditory Oddball and a visual continuous performance task (CPT) for an ADHD group ranging in age from 6 to 18 years and comprising mostly Combined and Inattentive subtypes, and for an age- and gender-matched control group. The ADHD subjects had less wave activity at low frequencies (approximately 1 Hz) during both tasks. For auditory Oddball targets, this effect was shown to be related to smaller P3 ERP amplitudes. During CPT, the approximately 1 Hz wave activity in the ADHD subjects was inversely related to clinical and behavioral measures of hyperactivity and impulsivity. CPT wave activity at approximately 1 Hz was seen to "normalise" following treatment with stimulant medication. The results identify a deficit in low-frequency wave activity as a new marker for ADHD associated with levels of hyperactivity and impulsivity. The marker is evident across a range of tasks and may be specific to ADHD. While lower approximately 1 Hz activity partly accounts for reduced P3 ERPs in ADHD, the effect also arises for tasks that do not elicit a P3. Deficits in behavioral inhibition are hypothesized to arise from underlying dysregulation of cortical inhibition.
Solving geosteering inverse problems by stochastic Hybrid Monte Carlo method
Shen, Qiuyang; Wu, Xuqing; Chen, Jiefu; ...
2017-11-20
Inverse problems arise in almost all fields of science where real-world parameters are extracted from a set of measured data. Geosteering inversion plays an essential role in the accurate prediction of oncoming strata as well as in providing reliable guidance for adjusting the borehole position on the fly to reach one or more geological targets. This inverse problem is not easy to solve: it requires finding an optimal solution in a large solution space, especially when the problem is non-linear and non-convex. Nowadays, a new generation of logging-while-drilling (LWD) tools has emerged on the market. The so-called azimuthal resistivity LWD tools have azimuthal sensitivity and a large depth of investigation. Hence, the associated inverse problems become much more difficult, since the earth model to be inverted will have more detailed structure. Conventional deterministic methods are incapable of solving such a complicated inverse problem, as they suffer from the local-minimum trap. Alternatively, stochastic optimizations are in general better at finding global optimal solutions and handling uncertainty quantification. In this article, we investigate the Hybrid Monte Carlo (HMC) based statistical inversion approach and suggest that HMC-based inference is more efficient in dealing with the increased complexity and uncertainty faced by geosteering problems.
Using a derivative-free optimization method for multiple solutions of inverse transport problems
Armstrong, Jerawan C.; Favorite, Jeffrey A.
2016-01-14
Identifying unknown components of an object that emits radiation is an important problem for national and global security. Radiation signatures measured from an object of interest can be used to infer object parameter values that are not known. This problem is called an inverse transport problem. An inverse transport problem may have multiple solutions, and the most widely used approach for its solution is an iterative optimization method. This paper proposes a stochastic derivative-free global optimization algorithm to find multiple solutions of inverse transport problems. The algorithm is an extension of a multilevel single linkage (MLSL) method where a mesh adaptive direct search (MADS) algorithm is incorporated into the local phase. Furthermore, numerical test cases using uncollided fluxes of discrete gamma-ray lines are presented to show the performance of this new algorithm.
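A simplified multistart sketch in the spirit of MLSL with a derivative-free local phase; SciPy's Nelder-Mead stands in for MADS (which SciPy does not provide), and the MLSL linkage/clustering rules are omitted, so this only illustrates how multiple distinct solutions can be collected.

```python
# Simplified multistart search for multiple local minima: sample random
# starting points, run a derivative-free local search from each, and keep the
# distinct solutions. Nelder-Mead is a stand-in for MADS.
import numpy as np
from scipy.optimize import minimize

def multistart_minima(objective, bounds, n_starts=50, tol=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    solutions = []
    for _ in range(n_starts):
        x0 = rng.uniform(lo, hi)
        res = minimize(objective, x0, method="Nelder-Mead")
        if not any(np.linalg.norm(res.x - s) < tol for s in solutions):
            solutions.append(res.x)
    return solutions

# Toy multimodal objective with two symmetric minima, standing in for an
# inverse transport misfit that admits multiple solutions.
f = lambda x: (x[0] ** 2 - 1.0) ** 2 + x[1] ** 2
print(multistart_minima(f, bounds=[(-2, 2), (-2, 2)]))
```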
Soft, comfortable polymer dry electrodes for high quality ECG and EEG recording.
Chen, Yun-Hsuan; Op de Beeck, Maaike; Vanderheyden, Luc; Carrette, Evelien; Mihajlović, Vojkan; Vanstreels, Kris; Grundlehner, Bernard; Gadeyne, Stefanie; Boon, Paul; Van Hoof, Chris
2014-12-10
Conventional gel electrodes are widely used for biopotential measurements, despite important drawbacks such as skin irritation, long set-up time and uncomfortable removal. Recently introduced dry electrodes with rigid metal pins overcome most of these problems; however, their rigidity causes discomfort and pain. This paper presents dry electrodes offering high user comfort, since they are fabricated from EPDM rubber containing various additives for optimum conductivity, flexibility and ease of fabrication. The electrode impedance is measured on phantoms and human skin. After optimization of the polymer composition, the skin-electrode impedance is only ~10 times larger than that of gel electrodes. Therefore, these electrodes are directly capable of recording strong biopotential signals such as ECG while for low-amplitude signals such as EEG, the electrodes need to be coupled with an active circuit. EEG recordings using active polymer electrodes connected to a clinical EEG system show very promising results: alpha waves can be clearly observed when subjects close their eyes, and correlation and coherence analyses reveal high similarity between dry and gel electrode signals. Moreover, all subjects reported that our polymer electrodes did not cause discomfort. Hence, the polymer-based dry electrodes are promising alternatives to either rigid dry electrodes or conventional gel electrodes.
Assessing effect of meditation on cognitive workload using EEG signals
NASA Astrophysics Data System (ADS)
Jadhav, Narendra; Manthalkar, Ramchandra; Joshi, Yashwant
2017-06-01
Recent research suggests that meditation affects the structure and function of the brain, and that meditators can handle cognitive load more effectively. EEG signals are used to quantify cognitive load, and investigating the effect of meditation on cognitive workload using pre- and post-meditation EEG signals is an open problem. The subjects of this study are 11 young, healthy engineering students from our institute. A focused-attention meditation practice is used. EEG signals are recorded with an EMOTIV device at the beginning of the study and after four weeks of regular meditation; the subjects practiced meditation for 20 minutes daily over the 4 weeks. Seven levels of arithmetic addition, from single-digit sums (low load) to three-digit sums with carry (high load), are presented as the cognitive load. Cognitive load indices such as arousal index, performance enhancement, neural activity, load index, engagement, and alertness are evaluated pre- and post-meditation, and these indices are improved in the post-meditation data. The Power Spectral Density (PSD) feature is compared between pre- and post-meditation recordings across all subjects. The results hint that after 4 weeks of meditation the subjects handled the cognitive load with less stress (ease of cognitive functioning increased for the same load).
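A hedged sketch of band-power based workload indices of the kind evaluated above; the band limits and the engagement index beta/(alpha+theta) are common choices in the workload literature, used here as assumptions since the abstract does not give the paper's exact index definitions.

```python
# Band powers from Welch's PSD and one commonly used engagement index,
# beta/(alpha+theta). Band limits and the index formula are assumptions.
import numpy as np
from scipy.signal import welch

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(x, fs):
    freqs, psd = welch(x, fs=fs, nperseg=2 * fs)
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in BANDS.items()}

def engagement_index(x, fs):
    p = band_powers(x, fs)
    return p["beta"] / (p["alpha"] + p["theta"])

fs = 128                                              # EMOTIV-class sampling rate
pre, post = np.random.randn(60 * fs), np.random.randn(60 * fs)
print(engagement_index(pre, fs), engagement_index(post, fs))
```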
Adam, Asrul; Ibrahim, Zuwairie; Mokhtar, Norrima; Shapiai, Mohd Ibrahim; Mubin, Marizan; Saad, Ismail
2016-01-01
In existing research on electroencephalogram (EEG) signal peak classification, the existing models, such as the Dumpala, Acir, Liu, and Dingle peak models, employ different sets of features. However, none of these models offers good performance across all applications, and performance is found to be problem dependent. Therefore, the objective of this study is to combine all the associated features from the existing models and then select the best combination of features. A new optimization algorithm, namely the angle modulated simulated Kalman filter (AMSKF), is employed as the feature selector, and the neural network random weight method is used within the proposed AMSKF technique as the classifier. In the conducted experiment, 11,781 peak candidate samples are employed for validation. The samples are collected from three different peak event-related EEG signals of 30 healthy subjects: (1) single eye blink, (2) double eye blink, and (3) eye movement signals. The experimental results show that the proposed AMSKF feature selector is able to find the best combination of features and performs on par with existing related studies of epileptic EEG event classification.
Frnakenstein: multiple target inverse RNA folding.
Lyngsø, Rune B; Anderson, James W J; Sizikova, Elena; Badugu, Amarendra; Hyland, Tomas; Hein, Jotun
2012-10-09
RNA secondary structure prediction, or folding, is a classic problem in bioinformatics: given a sequence of nucleotides, the aim is to predict the base pairs formed in its three dimensional conformation. The inverse problem of designing a sequence folding into a particular target structure has only more recently received notable interest. With a growing appreciation and understanding of the functional and structural properties of RNA motifs, and a growing interest in utilising biomolecules in nano-scale designs, the interest in the inverse RNA folding problem is bound to increase. However, whereas the RNA folding problem from an algorithmic viewpoint has an elegant and efficient solution, the inverse RNA folding problem appears to be hard. In this paper we present a genetic algorithm approach to solve the inverse folding problem. The main aims of the development were to address the hitherto mostly ignored extension of solving the inverse folding problem, the multi-target inverse folding problem, while simultaneously designing a method with superior performance when measured on the quality of designed sequences. The genetic algorithm has been implemented as a Python program called Frnakenstein. It was benchmarked against four existing methods and several data sets totalling 769 real and predicted single structure targets, and on 292 two structure targets. It performed as well as or better at finding sequences which folded in silico into the target structure than all existing methods, without the heavy bias towards CG base pairs that was observed for all other top performing methods. On the two structure targets it also performed well, generating a perfect design for about 80% of the targets. Our method illustrates that successful designs for the inverse RNA folding problem do not necessarily have to rely on heavy biases in base pair and unpaired base distributions. The design problem seems to become more difficult on larger structures when the target structures are real structures, while no deterioration was observed for predicted structures. Design for two structure targets is considerably more difficult, but far from impossible, demonstrating the feasibility of automated design of artificial riboswitches. The Python implementation is available at http://www.stats.ox.ac.uk/research/genome/software/frnakenstein.
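A compact genetic-algorithm sketch for single-target inverse folding in the spirit of the approach above; the `fold` callable is an assumed external dependency (for example RNA.fold from the ViennaRNA Python bindings), and the Hamming-distance fitness and simple mutation operator are far cruder than Frnakenstein's position-specific operators and multi-target fitness.

```python
# Minimal GA for inverse RNA folding. `fold(seq)` is expected to return a
# (dot-bracket structure, energy) pair; it is passed in rather than assumed
# to be any specific library call.
import random

BASES = "ACGU"

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def mutate(seq, rate=0.05):
    return "".join(random.choice(BASES) if random.random() < rate else c for c in seq)

def ga_inverse_fold(target, fold, pop_size=50, generations=200):
    n = len(target)
    pop = ["".join(random.choice(BASES) for _ in range(n)) for _ in range(pop_size)]
    score = lambda s: hamming(fold(s)[0], target)   # crude structure distance
    for _ in range(generations):
        pop.sort(key=score)
        if score(pop[0]) == 0:
            break
        elite = pop[: pop_size // 5]
        pop = elite + [mutate(random.choice(elite)) for _ in range(pop_size - len(elite))]
    return min(pop, key=score)

# Usage (assuming ViennaRNA's Python bindings are installed):
#   import RNA
#   seq = ga_inverse_fold("((((....))))", RNA.fold)
```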
NASA Astrophysics Data System (ADS)
Rundell, William; Somersalo, Erkki
2008-07-01
The Inverse Problems International Association (IPIA) awarded the first Calderón Prize to Matti Lassas for his outstanding contributions to the field of inverse problems, especially geometric inverse problems. The Calderón Prize is given to a researcher under the age of 40 who has made distinguished contributions to the field of inverse problems broadly defined. The first Calderón Prize Committee consisted of Professors Adrian Nachman, Lassi Päivärinta, William Rundell (chair), and Michael Vogelius. William Rundell, for the Calderón Prize Committee. Prize ceremony: photographs of the ceremony awarding the Calderón Prize (Matti Lassas at left; Lassas and William Rundell at right); photos by P Stefanov. Brief biography of Matti Lassas: Matti Lassas was born in 1969 in Helsinki, Finland, and studied at the University of Helsinki. He finished his Master's studies in 1992 in three years and earned his PhD in 1996. His PhD thesis, written under the supervision of Professor Erkki Somersalo, was entitled 'Non-selfadjoint inverse spectral problems and their applications to random bodies'. Already in his thesis, Matti demonstrated a remarkable command of different fields of mathematics, bringing together the spectral theory of operators, the geometry of Riemannian surfaces, Maxwell's equations and stochastic analysis. He has continued to develop all of these branches in the framework of inverse problems, the most remarkable results perhaps being in the field of differential geometry and inverse problems. Matti has always been a very generous researcher, sharing his ideas with his numerous collaborators. He has authored over sixty scientific articles, including a monograph on inverse boundary spectral problems with Alexander Kachalov and Yaroslav Kurylev and over forty articles in peer-reviewed journals of the highest standards. To get an idea of the wide range of Matti's interests, it is enough to say that he also holds three US patents on medical imaging applications. Matti is currently professor of mathematics at Helsinki University of Technology, where he has created his own line of research with young talented researchers around him. He is a central person in the Centre of Excellence in Inverse Problems Research of the Academy of Finland. Previously, Matti Lassas has won several awards in his home country, including the prestigious Väisälä Prize of the Finnish Academy of Science and Letters in 2004. He is a highly esteemed colleague, teacher and friend, and the Great Diving Beetle of the Finnish Inverse Problems Society (http://venda.uku.fi/research/FIPS/), an honorary title for a person who has no fear of the deep. Erkki Somersalo
Adaptive eigenspace method for inverse scattering problems in the frequency domain
NASA Astrophysics Data System (ADS)
Grote, Marcus J.; Kray, Marie; Nahum, Uri
2017-02-01
A nonlinear optimization method is proposed for the solution of inverse scattering problems in the frequency domain, when the scattered field is governed by the Helmholtz equation. The time-harmonic inverse medium problem is formulated as a PDE-constrained optimization problem and solved by an inexact truncated Newton-type iteration. Instead of a grid-based discrete representation, the unknown wave speed is projected to a particular finite-dimensional basis of eigenfunctions, which is iteratively adapted during the optimization. Truncating the adaptive eigenspace (AE) basis at a (small and slowly increasing) finite number of eigenfunctions effectively introduces regularization into the inversion and thus avoids the need for standard Tikhonov-type regularization. Both analytical and numerical evidence underpins the accuracy of the AE representation. Numerical experiments demonstrate the efficiency and robustness to missing or noisy data of the resulting adaptive eigenspace inversion method.
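A minimal numpy illustration of the regularizing effect of a truncated eigenbasis, the mechanism the abstract describes; a fixed Laplacian eigenbasis on a 1-D toy profile stands in for the adaptive, iteratively updated basis of the AE method, which is not reproduced.

```python
# Projecting a noisy 1-D "wave speed" profile onto the first few eigenvectors
# of a discrete Laplacian: truncating the basis suppresses the noise, which is
# how a small eigenspace acts as regularization.
import numpy as np

n = 200
# 1-D discrete Laplacian (Dirichlet boundaries) and its eigendecomposition.
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
eigvals, eigvecs = np.linalg.eigh(L)          # eigenvectors sorted by eigenvalue

# Piecewise-constant profile with noise, standing in for an unknown wave speed
# reconstructed from noisy data.
c_true = np.where(np.arange(n) < n // 2, 1.0, 1.5)
c_noisy = c_true + 0.2 * np.random.randn(n)

k = 10                                        # small number of basis functions
B = eigvecs[:, :k]                            # smoothest modes only
c_projected = B @ (B.T @ c_noisy)             # truncation suppresses the noise
print(np.linalg.norm(c_projected - c_true), np.linalg.norm(c_noisy - c_true))
```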
Neurocognitive Pattern Analysis.
1983-08-01
(Brazier and Casby, 1952; Callaway and Harris, 1974; Busk and Galbraith, 1975; Livanov, 1977), but this hypothesis remains unproven due to problems of... of electroencephalographic potentials. Electroencephalography & Clinical Neurophysiology, 1952, 4, 201-211. Busk, J. and Galbraith, G. EEG correlates of
PREFACE: Inverse Problems in Applied Sciences—towards breakthrough
NASA Astrophysics Data System (ADS)
Cheng, Jin; Iso, Yuusuke; Nakamura, Gen; Yamamoto, Masahiro
2007-06-01
These are the proceedings of the international conference `Inverse Problems in Applied Sciences—towards breakthrough' which was held at Hokkaido University, Sapporo, Japan on 3-7 July 2006 (http://coe.math.sci.hokudai.ac.jp/sympo/inverse/). There were 88 presentations and more than 100 participants, and we are proud to say that the conference was very successful. Nowadays, many new activities on inverse problems are flourishing at many centers of research around the world, and the conference has successfully gathered a world-wide variety of researchers. We believe that this volume contains not only main papers, but also conveys the general status of current research into inverse problems. This conference was the third biennial international conference on inverse problems, the core of which is the Pan-Pacific Asian area. The purpose of this series of conferences is to establish and develop constant international collaboration, especially among the Pan-Pacific Asian countries, and to lead the organization of activities concerning inverse problems centered in East Asia. The first conference was held at City University of Hong Kong in January 2002 and the second was held at Fudan University in June 2004. Following the preceding two successes, the third conference was organized in order to extend the scope of activities and build useful bridges to the next conference in Seoul in 2008. Therefore this third biennial conference was intended not only to establish collaboration and links between researchers in Asia and leading researchers worldwide in inverse problems but also to nurture interdisciplinary collaboration in theoretical fields such as mathematics, applied fields and evolving aspects of inverse problems. For these purposes, we organized tutorial lectures, serial lectures and a panel discussion as well as conference research presentations. This volume contains three lecture notes from the tutorial and serial lectures, and 22 papers. Especially at this flourishing time, it is necessary to carefully analyse the current status of inverse problems for further development. Thus we have opened with the panel discussion entitled `Future of Inverse Problems' with panelists: Professors J Cheng, H W Engl, V Isakov, R Kress, J-K Seo, G Uhlmann and the commentator: Elaine Longden-Chapman from IOP Publishing. The aims of the panel discussion were to examine the current research status from various viewpoints, to discuss how we can overcome any difficulties and how we can promote young researchers and open new possibilities for inverse problems such as industrial linkages. As one output, the panel discussion has triggered the organization of the Inverse Problems International Association (IPIA) which has led to its first international congress in the summer of 2007. Another remarkable outcome of the conference is, of course, the present volume: this is the very high quality online proceedings volume of Journal of Physics: Conference Series. Readers can see in these proceedings very well written tutorial lecture notes, and very high quality original research and review papers all of which show what was achieved by the time the conference was held. The electronic publication of the proceedings is a new way of publicizing the achievement of the conference. It has the advantage of wide circulation and cost reduction. We believe this is a most efficient method for our needs and purposes. We would like to take this opportunity to acknowledge all the people who helped to organize the conference. 
Guest Editors Jin Cheng, Fudan University, Shanghai, China Yuusuke Iso, Kyoto University, Kyoto, Japan Gen Nakamura, Hokkaido University, Sapporo, Japan Masahiro Yamamoto, University of Tokyo, Tokyo, Japan
Measures and Models for Estimating and Predicting Cognitive Fatigue
NASA Technical Reports Server (NTRS)
Trejo, Leonard J.; Kochavi, Rebekah; Kubitz, Karla; Montgomery, Leslie D.; Rosipal, Roman; Matthews, Bryan
2004-01-01
We analyzed EEG and ERPs in a fatiguing mental task and created statistical models for single subjects. Seventeen subjects (4 F, 18-38 y) viewed 4-digit problems (e.g., 3+5-2+7=15) on a computer, solved the problems, and pressed keys to respond (intertrial interval = 1 s). Subjects performed until either they felt exhausted or three hours had elapsed. Pre- and post-task measures of mood (Activation Deactivation Adjective Checklist, Visual Analogue Mood Scale) confirmed that fatigue increased and energy decreased over time. We tested response times (RT); amplitudes of ERP components N1, P2, P300, and readiness potentials; and amplitudes of frontal theta and parietal alpha rhythms for change as a function of time. For subjects who completed 3 h (n=9) we analyzed 12 15-min blocks. For subjects who completed at least 1.5 h (n=17), we analyzed the first, middle, and last 100 error-free trials. Mean RT rose from 6.7 s to 8.5 s over time. We found no changes in the amplitudes of ERP components. In both analyses, amplitudes of frontal theta and parietal alpha rose by 30% or more over time. We used 30-channel EEG frequency spectra to model the effects of time in single subjects using a kernel partial least squares classifier. We classified 3.5 s EEG segments as being from the first 100 or the last 100 trials, using random sub-samples of each class. Test set accuracies ranged from 63.9% to 99.6% correct. Only 2 of 17 subjects had mean accuracies lower than 80%. The results suggest that EEG accurately classifies periods of cognitive fatigue in 90% of subjects.
A review of classification algorithms for EEG-based brain–computer interfaces: a 10 year update
NASA Astrophysics Data System (ADS)
Lotte, F.; Bougrain, L.; Cichocki, A.; Clerc, M.; Congedo, M.; Rakotomamonjy, A.; Yger, F.
2018-06-01
Objective. Most current electroencephalography (EEG)-based brain–computer interfaces (BCIs) are based on machine learning algorithms. There is a large diversity of classifier types that are used in this field, as described in our 2007 review paper. Now, approximately ten years after this review publication, many new algorithms have been developed and tested to classify EEG signals in BCIs. The time is therefore ripe for an updated review of EEG classification algorithms for BCIs. Approach. We surveyed the BCI and machine learning literature from 2007 to 2017 to identify the new classification approaches that have been investigated to design BCIs. We synthesize these studies in order to present such algorithms, to report how they were used for BCIs, what were the outcomes, and to identify their pros and cons. Main results. We found that the recently designed classification algorithms for EEG-based BCIs can be divided into four main categories: adaptive classifiers, matrix and tensor classifiers, transfer learning and deep learning, plus a few other miscellaneous classifiers. Among these, adaptive classifiers were demonstrated to be generally superior to static ones, even with unsupervised adaptation. Transfer learning can also prove useful although the benefits of transfer learning remain unpredictable. Riemannian geometry-based methods have reached state-of-the-art performances on multiple BCI problems and deserve to be explored more thoroughly, along with tensor-based methods. Shrinkage linear discriminant analysis and random forests also appear particularly useful for small training samples settings. On the other hand, deep learning methods have not yet shown convincing improvement over state-of-the-art BCI methods. Significance. This paper provides a comprehensive overview of the modern classification algorithms used in EEG-based BCIs, presents the principles of these methods and guidelines on when and how to use them. It also identifies a number of challenges to further advance EEG classification in BCI.
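Since the review singles out shrinkage LDA as particularly useful for small training sets, here is a minimal scikit-learn example on synthetic feature vectors standing in for band-power or CSP features from a short calibration run.

```python
# Shrinkage LDA for a small-sample BCI classification problem.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_trials, n_features = 40, 64                 # few trials, many features
X = rng.standard_normal((n_trials, n_features))
y = rng.integers(0, 2, n_trials)
X[y == 1] += 0.3                              # small class separation

# 'lsqr' with shrinkage='auto' uses Ledoit-Wolf covariance regularization,
# which is what makes LDA usable when calibration trials are scarce.
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto").fit(X, y)
print(clf.score(X, y))
```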
Sharma, Manish; Goyal, Deepanshu; Achuth, P V; Acharya, U Rajendra
2018-07-01
Sleep-related disorders diminish quality of life. Sleep scoring, or sleep staging, is the process of classifying the various sleep stages and helps to assess sleep quality. The identification of sleep stages using electroencephalogram (EEG) signals is an arduous task: just by looking at an EEG signal, one cannot determine the sleep stages precisely, and sleep specialists may make errors in identifying sleep stages by visual inspection. To mitigate erroneous identification and reduce the burden on clinicians, a computer-aided EEG-based system that correctly identifies sleep stages can be deployed in hospitals. Several automated systems based on the analysis of polysomnographic (PSG) signals have been proposed, and a few sleep stage scoring systems using EEG signals have also been proposed. However, there is still a need for a robust, accurate, and portable system developed on a large dataset. In this study, we have developed a new single-channel EEG based sleep-stage identification system using a novel set of wavelet-based features extracted from a large EEG dataset. We employed a novel three-band time-frequency localized (TBTFL) wavelet filter bank (FB). The EEG signals are decomposed using three-level wavelet decomposition, yielding seven sub-bands (SBs). This is followed by the computation of discriminating features, namely log-energy (LE), signal-fractal-dimensions (SFD), and signal-sample-entropy (SSE), from all seven SBs. The extracted features are ranked and fed to the support vector machine (SVM) and other supervised learning classifiers. In this study, we have considered five different classification problems (CPs): two-class (CP-1), three-class (CP-2), four-class (CP-3), five-class (CP-4) and six-class (CP-5). The proposed system yielded accuracies of 98.3%, 93.9%, 92.1%, 91.7%, and 91.5% for CP-1 to CP-5, respectively, using the 10-fold cross validation (CV) technique.
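A hedged sketch of wavelet sub-band feature extraction and SVM classification along the lines of the pipeline above; a standard dyadic db4 decomposition via PyWavelets stands in for the paper's custom three-band TBTFL filter bank (so it yields 4 sub-bands rather than 7), and only the log-energy feature is shown.

```python
# Wavelet sub-band log-energy features for toy "epochs" plus an SVM classifier.
import numpy as np
import pywt
from sklearn.svm import SVC

def log_energy_features(epoch, wavelet="db4", level=3):
    coeffs = pywt.wavedec(epoch, wavelet, level=level)   # [cA3, cD3, cD2, cD1]
    return np.array([np.log(np.sum(c ** 2) + 1e-12) for c in coeffs])

# Toy 30-s EEG epochs at 100 Hz with dummy sleep-stage labels.
rng = np.random.default_rng(0)
epochs = rng.standard_normal((200, 3000))
labels = rng.integers(0, 2, 200)

X = np.vstack([log_energy_features(e) for e in epochs])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.score(X, labels))
```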
Solvability of the electrocardiology inverse problem for a moving dipole.
Tolkachev, V; Bershadsky, B; Nemirko, A
1993-01-01
New formulations of the direct and inverse problems for the moving dipole are offered. It is suggested that the study be limited to a small area on the chest surface, which reduces the influence of medium inhomogeneity. When formulating the direct problem, irregular components are considered. An algorithm for the simultaneous determination of the dipole and regular noise parameters is described and analytically investigated. It is shown that temporal overdetermination of the equations yields a unique solution of the inverse problem for four leads.
Saletu, Bernd; Anderer, Peter; Saletu-Zyhlarz, Gerda M
2006-04-01
By multi-lead computer-assisted quantitative analyses of human scalp-recorded electroencephalogram (QEEG) in combination with certain statistical procedures (quantitative pharmaco-EEG) and mapping techniques (pharmaco-EEG mapping or topography), it is possible to classify psychotropic substances and objectively evaluate their bioavailability at the target organ, the human brain. Specifically, one may determine at an early stage of drug development whether a drug is effective on the central nervous system (CNS) compared with placebo, what its clinical efficacy will be like, at which dosage it acts, when it acts and the equipotent dosages of different galenic formulations. Pharmaco-EEG maps of neuroleptics, antidepressants, tranquilizers, hypnotics, psychostimulants and nootropics/cognition-enhancing drugs will be described. Methodological problems, as well as the relationships between acute and chronic drug effects, alterations in normal subjects and patients, CNS effects and therapeutic efficacy will be discussed. Imaging of drug effects on the regional brain electrical activity of healthy subjects by means of EEG tomography such as low-resolution electromagnetic tomography (LORETA) has been used for identifying brain areas predominantly involved in psychopharmacological action. This will be shown for the representative drugs of the four main psychopharmacological classes, such as 3 mg haloperidol for neuroleptics, 20 mg citalopram for antidepressants, 2 mg lorazepam for tranquilizers and 20 mg methylphenidate for psychostimulants. LORETA demonstrates that these psychopharmacological classes affect brain structures differently. By considering these differences between psychotropic drugs and placebo in normal subjects, as well as between mental disorder patients and normal controls, it may be possible to choose the optimum drug for a specific patient according to a key-lock principle, since the drug should normalize the deviant brain function. Thus, pharmaco-EEG topography and tomography are valuable methods in human neuropsychopharmacology, clinical psychiatry and neurology.
Yang, Ping; Fan, Chenggui; Wang, Min; Li, Ling
2017-01-01
In simultaneous electroencephalogram (EEG) and functional magnetic resonance imaging (fMRI) studies, the average reference (AR) and the digitally linked mastoid (LM) are popular re-referencing techniques in event-related potential (ERP) analyses. However, they may introduce their own physiological signals and alter the EEG/ERP outcome. A reference electrode standardization technique (REST) that calculates a reference point at infinity was proposed to solve this problem. To confirm the advantage of REST in ERP analyses of synchronous EEG-fMRI studies, we compared the effect of AR, LM, and REST referencing on task-related ERP results of a working memory task during an fMRI scan. As we hypothesized, we found that the adopted reference did not change the topography map of the ERP components (N1 and P300 in the present study), but it did alter the task-related effect on ERP components. LM decreased or eliminated the visual working memory (VWM) load effect on P300, and AR distorted the distribution of the VWM location-related effect at left posterior electrodes, as shown in the statistical parametric scalp mapping (SPSM) of N1. ERP cortical source estimates, which are independent of the EEG reference choice, were used as the gold standard to infer the relative utility of the different references on the ERP task-related effect. By comparison, the REST reference provided a more integrated and reasonable result. These results were further confirmed by the fMRI activations and a corresponding EEG-only study. Thus, we recommend REST, especially with a realistic head model, as the optimal reference method for ERP data analysis in simultaneous EEG-fMRI studies. PMID:28529472
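The AR and LM references compared above are simple linear transforms of the channel data, sketched below in numpy; REST additionally requires a lead-field matrix from a head model and is implemented in dedicated toolboxes, so it is only indicated by a comment.

```python
# Average-reference and linked-mastoid re-referencing of a channels-by-samples array.
import numpy as np

def average_reference(eeg):
    """eeg: (n_channels, n_samples). Subtract the instantaneous mean over channels."""
    return eeg - eeg.mean(axis=0, keepdims=True)

def linked_mastoid_reference(eeg, left_idx, right_idx):
    """Subtract the mean of the two mastoid channels (indices are montage-specific)."""
    ref = 0.5 * (eeg[left_idx] + eeg[right_idx])
    return eeg - ref

# REST: re-reference to a point at infinity using the lead field of a
# (preferably realistic) head model, e.g. via the authors' REST toolbox or
# EEG-analysis plugins that implement it; not reproduced here.
```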
Malformations of cortical development and epilepsy: evaluation of 101 cases (part II).
Güngör, Serdal; Yalnizoğlu, Dilek; Turanli, Güzide; Saatçi, Işil; Erdoğan-Bakar, Emel; Topçu, Meral
2007-01-01
Malformations of cortical development (MCD) form a spectrum of lesions produced by insult to the developing neocortex. Clinical presentation and electrophysiologic findings of MCD are variable and depend on the affected cortical area. We evaluated epilepsy, EEG, and response to antiepileptic treatment in patients with MCD with respect to the neuroimaging findings. We studied 101 patients, ranging between 1 month and 19 years of age. Fifty-four patients were diagnosed with polymicrogyria (PMG), 23 patients with lissencephaly, 12 patients with schizencephaly, and 12 patients with heterotopia. With regard to epilepsy and seizure type, 72/101 (71.3%) patients had epilepsy, and 62/101 (61.4%) patients presented with seizures. Overall, 32.7% of patients had generalized seizures, and 25.7% had complex partial seizures. Mean age at the onset of seizures was 2.7 ± 3.4 years. The onset of epilepsy tended to be younger in patients with lissencephaly and older in patients with heterotopias. Of the cases, 79.2% had abnormal EEG (56.3% with epileptiform abnormality, 22.9% with non-epileptiform abnormality). EEG was abnormal in 44.9% (13/29) of the cases without epilepsy. EEG showed bilateral synchronous and diffuse epileptiform discharges in 90% of patients with lissencephaly. Patients with schizencephaly had mostly focal epileptiform discharges. Heterotopia cases had a high rate of EEG abnormalities (72.7%). Patients with PMG had epileptiform abnormality in 59.5% of the cases. Patients with heterotopias and PMG achieved better seizure control in comparison with the other groups. In conclusion, epilepsy is the most common problem in MCD. Epilepsy and EEG findings of patients with MCD are variable and seem to be correlated with the extent of cortical involvement.
Clinical and electrographic features of sunflower syndrome.
Baumer, Fiona M; Porter, Brenda E
2018-05-01
Sunflower Syndrome describes reflex seizures - typically eyelid myoclonia with or without absence seizures - triggered when patients wave their hands in front of the sun. While valproate has been recognized as the best treatment for photosensitive epilepsy, many clinicians now initially treat with newer medications; the efficacy of these medications in Sunflower Syndrome has not been investigated. We reviewed all cases of Sunflower Syndrome seen at our institution over 15 years to describe the clinical course, electroencephalogram (EEG), and treatment response in these patients. Search of the electronic medical record and EEG database, as well as survey of epilepsy providers at our institution, yielded 13 cases of Sunflower Syndrome between 2002 and 2017. We reviewed the records and EEG tracings. Patients were mostly young females, with an average age of onset of 5.5 years. Seven had intellectual, attentional or academic problems. Self-induced seizures were predominantly eyelid myoclonia ± absences and 6 subjects also had spontaneous seizures. EEG demonstrated a normal background with 3-4 Hz spike waves ± polyspike waves as well as a photoparoxysmal response. Based on both clinical and EEG response, valproate was the most effective treatment for reducing or eliminating seizures and improving the EEG; 9 patients tried valproate and 66% had significant improvement or resolution of seizures. None of the nine patients on levetiracetam or seven patients on lamotrigine monotherapy achieved seizure control, though three patients had improvement with polypharmacy. Valproate monotherapy continues to be the most effective treatment for Sunflower Syndrome and should be considered early. For patients who cannot tolerate valproate, higher doses of lamotrigine or polypharmacy should be considered. Levetiracetam monotherapy, even at high doses, is unlikely to be effective.
Epstein, Charles M; Adhikari, Bhim M; Gross, Robert; Willie, Jon; Dhamala, Mukesh
2014-12-01
In recent decades, intracranial EEG (iEEG) recordings using increasing numbers of electrodes, higher sampling rates, and a variety of visual and quantitative analyses have indicated the presence of widespread, high-frequency ictal and preictal oscillations (HFOs) associated with regions of seizure onset. Seizure freedom has been correlated with removal of brain regions generating pathologic HFOs. However, quantitative analysis of preictal HFOs has seldom been applied to the clinical problem of planning the surgical resection. We performed Granger causality (GC) analysis of iEEG recordings to analyze features of preictal seizure networks and to aid in surgical decision making. Ten retrospective and two prospective patients were chosen on the basis of individually stereotyped seizure patterns by visual criteria. Prospective patients were selected, additionally, for failure of those criteria to resolve apparent multilobar ictal onsets. iEEG was recorded at 500 or 1,000 Hz, using up to 128 surface and depth electrodes. Preictal and early ictal GC from individual electrodes was characterized by the strength of causal outflow, spatial distribution, and hierarchical causal relationships. In all patients we found significant, widespread preictal GC network activity at peak frequencies from 80 to 250 Hz, beginning 2-42 s before visible electrographic onset. In the two prospective patients, GC source/sink comparisons supported the exclusion of early ictal regions that were not the dominant causal sources, and contributed to planning of more limited surgical resections. Both patients had a class 1 outcome at 1 year. GC analysis of iEEG has the potential to increase understanding of preictal network activity, and to help improve surgical outcomes in cases of otherwise ambiguous iEEG onset.
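A minimal pairwise Granger-causality test with statsmodels, illustrating the directed measure used above on toy data; the paper's analysis operates on band-limited (80-250 Hz) iEEG across up to 128 contacts and summarizes causal outflow per electrode, and the fixed lag order below is an assumption.

```python
# Pairwise Granger causality on a toy pair of signals where x drives y with a lag.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 2000
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(2, n):                        # y is driven by past values of x
    y[t] = 0.6 * y[t - 1] + 0.8 * x[t - 2] + 0.1 * rng.standard_normal()

# Column order matters: the test asks whether the 2nd column Granger-causes the 1st.
data = np.column_stack([y, x])
results = grangercausalitytests(data, maxlag=4)
print(results[2][0]["ssr_ftest"])            # F statistic and p-value at lag 2
```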
MAP Estimators for Piecewise Continuous Inversion
Dunlop, M M; Stuart, A M
2016-08-08
We study the inverse problem of estimating a field ua from data comprising a finite set of nonlinear functionals of ua... then natural to study maximum a posteriori (MAP) estimators. Recently (Dashti et al 2013 Inverse Problems 29 095017) it has been shown that MAP
Time-domain full waveform inversion using instantaneous phase information with damping
NASA Astrophysics Data System (ADS)
Luo, Jingrui; Wu, Ru-Shan; Gao, Fuchun
2018-06-01
In the time domain, the instantaneous phase can be obtained from the complex seismic trace using the Hilbert transform. Instantaneous phase information has great potential for overcoming the local minima problem and improving the result of full waveform inversion. However, the phase wrapping problem, which arises in the numerical calculation, prevents its direct application. In order to avoid the phase wrapping problem, we use the exponential phase combined with a damping method, which gives an instantaneous phase-based multi-stage inversion. We construct objective functions based on the exponential instantaneous phase and derive the corresponding gradient operators. Conventional full waveform inversion and the instantaneous phase-based inversion are compared on numerical examples, which indicate that when the seismic data lack low-frequency information, our method is an effective and efficient approach for constructing an initial model for full waveform inversion.
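A sketch of the wrap-free exponential instantaneous phase and a simple least-squares misfit built from it, which is one reading of the objective the abstract describes; the damping scheme and the gradient operators derived in the paper are not reproduced, and this particular misfit form is an assumption.

```python
# Exponential instantaneous phase e^{i*phi(t)} via the Hilbert transform, and a
# least-squares misfit between observed and synthetic traces built from it.
import numpy as np
from scipy.signal import hilbert

def exp_instantaneous_phase(trace):
    analytic = hilbert(trace)
    return analytic / (np.abs(analytic) + 1e-12)   # amplitude removed, no unwrapping needed

def phase_misfit(obs, syn):
    d = exp_instantaneous_phase(obs) - exp_instantaneous_phase(syn)
    return 0.5 * np.sum(np.abs(d) ** 2)

t = np.linspace(0, 1, 500)
obs = np.sin(2 * np.pi * 8 * t)
syn = np.sin(2 * np.pi * 8 * (t - 0.01))           # slightly time-shifted model
print(phase_misfit(obs, syn))
```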
Solutions to inverse plume in a crosswind problem using a predictor - corrector method
NASA Astrophysics Data System (ADS)
Vanderveer, Joseph; Jaluria, Yogesh
2013-11-01
An investigation of minimalist solutions to the inverse convection problem of a plume in a crosswind has led to a predictor-corrector method. The inverse problem is to predict the strength and location of the plume from a select few downstream sampling points. This is accomplished with the help of two numerical simulations of the domain at differing source strengths, which allow the generation of two inverse interpolation functions. These functions are used in the predictor step to obtain the plume strength. Finally, the same interpolation functions, with corrections based on the recovered plume strength, are used to solve for the plume location. Through optimization of the relative locations of the sampling points, the minimum number of samples for accurate predictions is reduced to two for the plume strength and three for the plume location. After this optimization, the predictor-corrector method demonstrates global uniqueness of the inverse solution for all test cases. The solution error is less than 1% for both plume strength and plume location. The basic approach could be extended to other inverse convection transport problems, particularly those encountered in environmental flows.
Hypovigilance Detection for UCAV Operators Based on a Hidden Markov Model
Kwon, Namyeon; Shin, Yongwook; Ryo, Chuh Yeop; Park, Jonghun
2014-01-01
With the advance of military technology, the number of unmanned combat aerial vehicles (UCAVs) has rapidly increased. However, it has been reported that the accident rate of UCAVs is much higher than that of manned combat aerial vehicles. One of the main reasons for the high accident rate of UCAVs is the hypovigilance problem, which refers to the decrease in vigilance levels of UCAV operators while maneuvering. In this paper, we propose hypovigilance detection models for UCAV operators based on EEG signals to minimize the number of occurrences of hypovigilance. To enable detection, we apply hidden Markov models (HMMs): two HMMs are used to represent an operator's two states, normal vigilance and hypovigilance, and for each operator these HMMs are trained as a detection model. To evaluate the efficacy and effectiveness of the proposed models, we conducted two experiments on real-world data obtained using EEG-signal acquisition devices, and they yielded satisfactory results. By utilizing the proposed detection models, the problem of hypovigilance of UCAV operators and the problem of the high accident rate of UCAVs can be addressed. PMID:24963338
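A sketch of the two-HMM detection scheme described above using hmmlearn (the library choice is an assumption; the paper does not name one): one Gaussian HMM is trained per vigilance state on an operator's labeled feature sequences, and a new segment is labeled by whichever model assigns it the higher likelihood.

```python
# Two-model HMM detector: train one Gaussian HMM per state, compare likelihoods.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
# Toy feature sequences (e.g., band powers per window): (n_windows, n_features)
normal_train = rng.standard_normal((500, 4))
hypo_train = rng.standard_normal((500, 4)) + 0.8

hmm_normal = GaussianHMM(n_components=3, covariance_type="diag", n_iter=50).fit(normal_train)
hmm_hypo = GaussianHMM(n_components=3, covariance_type="diag", n_iter=50).fit(hypo_train)

def detect(segment):
    """Label a new feature segment by the higher-likelihood model."""
    return "hypovigilance" if hmm_hypo.score(segment) > hmm_normal.score(segment) else "normal"

print(detect(rng.standard_normal((50, 4)) + 0.8))
```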
Acoustic Inversion in Optoacoustic Tomography: A Review
Rosenthal, Amir; Ntziachristos, Vasilis; Razansky, Daniel
2013-01-01
Optoacoustic tomography enables volumetric imaging with optical contrast in biological tissue at depths beyond the optical mean free path by the use of optical excitation and acoustic detection. The hybrid nature of optoacoustic tomography gives rise to two distinct inverse problems: The optical inverse problem, related to the propagation of the excitation light in tissue, and the acoustic inverse problem, which deals with the propagation and detection of the generated acoustic waves. Since the two inverse problems have different physical underpinnings and are governed by different types of equations, they are often treated independently as unrelated problems. From an imaging standpoint, the acoustic inverse problem relates to forming an image from the measured acoustic data, whereas the optical inverse problem relates to quantifying the formed image. This review focuses on the acoustic aspects of optoacoustic tomography, specifically acoustic reconstruction algorithms and imaging-system practicalities. As these two aspects are intimately linked, and no silver bullet exists in the path towards high-performance imaging, we adopt a holistic approach in our review and discuss the many links between the two aspects. Four classes of reconstruction algorithms are reviewed: time-domain (so called back-projection) formulae, frequency-domain formulae, time-reversal algorithms, and model-based algorithms. These algorithms are discussed in the context of the various acoustic detectors and detection surfaces which are commonly used in experimental studies. We further discuss the effects of non-ideal imaging scenarios on the quality of reconstruction and review methods that can mitigate these effects. Namely, we consider the cases of finite detector aperture, limited-view tomography, spatial under-sampling of the acoustic signals, and acoustic heterogeneities and losses. PMID:24772060
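For concreteness, a minimal delay-and-sum back-projection sketch, the simplest member of the time-domain family reviewed above; the weights and derivative terms of the exact universal back-projection formula are omitted, and a uniform sound speed and ideal point detectors are assumed.

```python
# Naive delay-and-sum back-projection for 2-D optoacoustic data.
import numpy as np

def delay_and_sum(signals, det_pos, grid, c, fs):
    """signals: (n_detectors, n_samples) pressure traces; det_pos: (n_detectors, 2)
    detector coordinates; grid: (n_pixels, 2) pixel coordinates; c: sound speed;
    fs: sampling rate. Returns a (n_pixels,) image."""
    n_det, n_samp = signals.shape
    image = np.zeros(len(grid))
    for d in range(n_det):
        dist = np.linalg.norm(grid - det_pos[d], axis=1)
        idx = np.clip(np.round(dist / c * fs).astype(int), 0, n_samp - 1)
        image += signals[d, idx]      # sum each trace at the pixel's time of flight
    return image / n_det
```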
Neurophysiological Studies of Auditory Verbal Hallucinations
Ford, Judith M.; Dierks, Thomas; Fisher, Derek J.; Herrmann, Christoph S.; Hubl, Daniela; Kindler, Jochen; Koenig, Thomas; Mathalon, Daniel H.; Spencer, Kevin M.; Strik, Werner; van Lutterveld, Remko
2012-01-01
We discuss 3 neurophysiological approaches to study auditory verbal hallucinations (AVH). First, we describe "state" (or symptom capture) studies where periods with and without hallucinations are compared "within" a patient. These studies take 2 forms: passive studies, where brain activity during these states is compared, and probe studies, where brain responses to sounds during these states are compared. EEG (electroencephalography) and MEG (magnetoencephalography) data point to frontal and temporal lobe activity, the latter resulting in competition with external sounds for auditory resources. Second, we discuss "trait" studies where EEG and MEG responses to sounds are recorded from patients who hallucinate and those who do not. They suggest a tendency to hallucinate is associated with competition for auditory processing resources. Third, we discuss studies addressing possible mechanisms of AVH, including spontaneous neural activity, abnormal self-monitoring, and dysfunctional interregional communication. While most studies show differences in EEG and MEG responses between patients and controls, far fewer show symptom relationships. We conclude that efforts to understand the pathophysiology of AVH using EEG and MEG have been hindered by poor anatomical resolution of the EEG and MEG measures, poor assessment of symptoms, poor understanding of the phenomenon, poor models of the phenomenon, decoupling of the symptoms from the neurophysiology due to medications and comorbidities, and the possibility that the schizophrenia diagnosis breeds truer than the symptoms it comprises. These problems are common to studies of other psychiatric symptoms and should be considered when attempting to understand the basic neural mechanisms responsible for them. PMID:22368236
Liu, Aiming; Liu, Quan; Ai, Qingsong; Xie, Yi; Chen, Anqi
2017-01-01
Motor Imagery (MI) electroencephalography (EEG) is widely studied for its non-invasiveness, easy availability, portability, and high temporal resolution. As for MI EEG signal processing, the high dimensions of features represent a research challenge. It is necessary to eliminate redundant features, which not only create an additional overhead of managing the space complexity, but also might include outliers, thereby reducing classification accuracy. The firefly algorithm (FA) can adaptively select the best subset of features, and improve classification accuracy. However, the FA is easily entrapped in a local optimum. To solve this problem, this paper proposes a method of combining the firefly algorithm and learning automata (LA) to optimize feature selection for motor imagery EEG. We employed a method of combining common spatial pattern (CSP) and local characteristic-scale decomposition (LCD) algorithms to obtain a high dimensional feature set, and classified it by using the spectral regression discriminant analysis (SRDA) classifier. Both the fourth brain–computer interface competition data and real-time data acquired in our designed experiments were used to verify the validity of the proposed method. Compared with genetic and adaptive weight particle swarm optimization algorithms, the experimental results show that our proposed method effectively eliminates redundant features, and improves the classification accuracy of MI EEG signals. In addition, a real-time brain–computer interface system was implemented to verify the feasibility of our proposed methods being applied in practical brain–computer interface systems. PMID:29117100
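As a point of reference for the optimization core of this approach, the following is a minimal sketch of a standard continuous firefly algorithm on a toy objective; the learning-automata hybridization and the CSP/LCD feature pipeline described in the abstract are not reproduced here, and all parameters are illustrative.

```python
# Minimal continuous firefly algorithm (FA) sketch on a toy objective.
# Illustrative only: the paper couples FA with learning automata and applies
# it to binary feature selection; here we show only the core FA update rule.
import numpy as np

def sphere(x):                      # toy objective to minimize
    return float(np.sum(x ** 2))

def firefly(obj, dim=5, n=20, iters=100, beta0=1.0, gamma=1.0, alpha=0.2, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5, 5, size=(n, dim))        # firefly positions
    I = np.array([obj(x) for x in X])            # "brightness" (lower is better)
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if I[j] < I[i]:                  # move firefly i toward brighter firefly j
                    r2 = np.sum((X[i] - X[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    X[i] += beta * (X[j] - X[i]) + alpha * (rng.random(dim) - 0.5)
                    I[i] = obj(X[i])
        alpha *= 0.98                            # slowly reduce the random step
    best = int(np.argmin(I))
    return X[best], I[best]

x_best, f_best = firefly(sphere)
print("best value:", f_best)
```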
Liu, Aiming; Chen, Kun; Liu, Quan; Ai, Qingsong; Xie, Yi; Chen, Anqi
2017-11-08
Motor Imagery (MI) electroencephalography (EEG) is widely studied for its non-invasiveness, easy availability, portability, and high temporal resolution. As for MI EEG signal processing, the high dimensions of features represent a research challenge. It is necessary to eliminate redundant features, which not only create an additional overhead of managing the space complexity, but also might include outliers, thereby reducing classification accuracy. The firefly algorithm (FA) can adaptively select the best subset of features, and improve classification accuracy. However, the FA is easily entrapped in a local optimum. To solve this problem, this paper proposes a method of combining the firefly algorithm and learning automata (LA) to optimize feature selection for motor imagery EEG. We employed a method of combining common spatial pattern (CSP) and local characteristic-scale decomposition (LCD) algorithms to obtain a high dimensional feature set, and classified it by using the spectral regression discriminant analysis (SRDA) classifier. Both the fourth brain-computer interface competition data and real-time data acquired in our designed experiments were used to verify the validity of the proposed method. Compared with genetic and adaptive weight particle swarm optimization algorithms, the experimental results show that our proposed method effectively eliminates redundant features, and improves the classification accuracy of MI EEG signals. In addition, a real-time brain-computer interface system was implemented to verify the feasibility of our proposed methods being applied in practical brain-computer interface systems.
Hosseini, Seyyed Abed; Khalilzadeh, Mohammad Ali; Naghibi-Sistani, Mohammad Bagher; Homam, Seyyed Mehran
2015-01-01
Background: This paper proposes a new emotional stress assessment system using multi-modal bio-signals. Electroencephalogram (EEG) is the reflection of brain activity and is widely used in clinical diagnosis and biomedical research. Methods: We design an efficient acquisition protocol to acquire EEG signals in five channels (FP1, FP2, T3, T4 and Pz) and peripheral signals such as blood volume pulse, skin conductance (SC) and respiration, under image induction (calm-neutral and negatively excited) for the participants. The visual stimuli are selected from a subset of the International Affective Picture System database. Qualitative and quantitative evaluation of the peripheral signals is used to select suitable segments of the EEG signals, improving the accuracy of signal labeling according to emotional stress state. After pre-processing, wavelet coefficients, fractal dimension, and Lempel-Ziv complexity are used to extract features of the EEG signals. The large number of features leads to a dimensionality problem, which is solved using a genetic algorithm for feature selection. Results: The results show that the average classification accuracy is 89.6% for two categories of emotional stress states using a support vector machine (SVM). Conclusion: This is a considerable improvement over similar studies; we achieve an 11.3% gain in accuracy with the SVM classifier compared to previous work. Therefore, the fusion of EEG and peripheral signals is more robust than the separate signals. PMID:26622979
Evolutionary computing based approach for the removal of ECG artifact from the corrupted EEG signal.
Priyadharsini, S Suja; Rajan, S Edward
2014-01-01
Electroencephalogram (EEG) is an important tool for clinical diagnosis of brain-related disorders and problems. However, it is corrupted by various biological artifacts, among them ECG, which reduces the clinical value of EEG, especially for epileptic patients and patients with short necks. The goal is to remove the ECG artifact from the measured EEG signal using an evolutionary computing approach based on a hybrid Adaptive Neuro-Fuzzy Inference System (ANFIS), which helps neurologists in the diagnosis and follow-up of encephalopathy. The proposed hybrid learning methods are ANFIS-MA and ANFIS-GA, which use a Memetic Algorithm (MA) and a Genetic Algorithm (GA), respectively, to tune the antecedent and consequent parts of the ANFIS structure. The performance of the proposed methods is compared with that of ANFIS and an adaptive Recursive Least Squares (RLS) filtering algorithm. The proposed methods are experimentally validated by applying them to simulated data sets subjected to non-linearity conditions and to real polysomnograph data sets. Performance metrics such as sensitivity, specificity and accuracy of the proposed ANFIS-MA method, in terms of correction rate, are found to be 93.8%, 100% and 99%, respectively, which is better than current state-of-the-art approaches. The evaluation demonstrates that ANFIS-MA is more effective in suppressing ECG artifacts from corrupted EEG signals than ANFIS-GA, ANFIS and the RLS algorithm.
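For readers unfamiliar with adaptive artifact cancellation, the sketch below shows a basic normalized-LMS noise canceller driven by an ECG reference channel on synthetic signals; it is a simple baseline in the spirit of the RLS comparison above, not the ANFIS-MA/ANFIS-GA methods themselves, and all signal parameters are illustrative.

```python
# Adaptive noise cancellation sketch (normalized LMS) for removing an
# ECG-correlated artifact from EEG, assuming a separate ECG reference channel.
import numpy as np

def nlms_cancel(primary, reference, order=8, mu=0.5, eps=1e-6):
    w = np.zeros(order)
    cleaned = np.zeros_like(primary)
    for n in range(order, len(primary)):
        x = reference[n - order:n][::-1]         # tapped-delay-line input
        y = w @ x                                # estimated artifact
        e = primary[n] - y                       # cleaned EEG sample
        w += mu * e * x / (eps + x @ x)          # NLMS weight update
        cleaned[n] = e
    return cleaned

# Synthetic demo: EEG = brain signal + filtered ECG leakage.
rng = np.random.default_rng(1)
fs = 256
t = np.arange(0, 10, 1 / fs)
ecg = np.sin(2 * np.pi * 1.2 * t) ** 15          # crude spiky ECG-like reference
brain = 0.5 * rng.standard_normal(t.size)
eeg = brain + 0.8 * np.convolve(ecg, [0.5, 0.3, 0.2], mode="same")
print("residual power:", np.var(nlms_cancel(eeg, ecg) - brain))
```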
Hosseini, Seyyed Abed; Khalilzadeh, Mohammad Ali; Naghibi-Sistani, Mohammad Bagher; Homam, Seyyed Mehran
2015-07-06
This paper proposes a new emotional stress assessment system using multi-modal bio-signals. Electroencephalogram (EEG) is the reflection of brain activity and is widely used in clinical diagnosis and biomedical research. We design an efficient acquisition protocol to acquire EEG signals in five channels (FP1, FP2, T3, T4 and Pz) and peripheral signals such as blood volume pulse, skin conductance (SC) and respiration, under image induction (calm-neutral and negatively excited) for the participants. The visual stimuli are selected from a subset of the International Affective Picture System database. Qualitative and quantitative evaluation of the peripheral signals is used to select suitable segments of the EEG signals, improving the accuracy of signal labeling according to emotional stress state. After pre-processing, wavelet coefficients, fractal dimension, and Lempel-Ziv complexity are used to extract features of the EEG signals. The large number of features leads to a dimensionality problem, which is solved using a genetic algorithm for feature selection. The results show that the average classification accuracy is 89.6% for two categories of emotional stress states using a support vector machine (SVM). This is a considerable improvement over similar studies; we achieve an 11.3% gain in accuracy with the SVM classifier compared to previous work. Therefore, the fusion of EEG and peripheral signals is more robust than the separate signals.
Review of the inverse scattering problem at fixed energy in quantum mechanics
NASA Technical Reports Server (NTRS)
Sabatier, P. C.
1972-01-01
Methods of solution of the inverse scattering problem at fixed energy in quantum mechanics are presented. Scattering experiments in which a beam of particles at a nonrelativistic energy is scattered by a target made up of particles are analyzed. The Schroedinger equation is used to develop the quantum mechanical description of the system in terms of one of several functions depending on the relative distance of the particles. The inverse problem is the construction of the potentials from experimental measurements.
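For context, the fixed-energy setting is usually stated in the standard partial-wave form below (generic notation, not taken from the report itself): the task is to reconstruct V(r) from the set of phase shifts δ_ℓ measured at a single wavenumber k.

```latex
\left[\frac{d^{2}}{dr^{2}}-\frac{\ell(\ell+1)}{r^{2}}-\frac{2m}{\hbar^{2}}V(r)+k^{2}\right]u_{\ell}(r)=0,
\qquad
u_{\ell}(r)\sim\sin\!\left(kr-\tfrac{\ell\pi}{2}+\delta_{\ell}\right)\quad (r\to\infty).
```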
Efficient Inversion of Multi-frequency and Multi-Source Electromagnetic Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gary D. Egbert
2007-03-22
The project covered by this report focused on development of efficient but robust non-linear inversion algorithms for electromagnetic induction data, in particular for data collected with multiple receivers and multiple transmitters, a situation extremely common in geophysical EM subsurface imaging methods. A key observation is that for such multi-transmitter problems each step in commonly used linearized iterative limited memory search schemes such as conjugate gradients (CG) requires solution of forward and adjoint EM problems for each of the N frequencies or sources, essentially generating data sensitivities for an N dimensional data-subspace. These multiple sensitivities allow a good approximation to the full Jacobian of the data mapping to be built up in many fewer search steps than would be required by application of textbook optimization methods, which take no account of the multiplicity of forward problems that must be solved for each search step. We have applied this idea to develop a hybrid inversion scheme that combines features of the iterative limited memory type methods with a Newton-type approach using a partial calculation of the Jacobian. Initial tests on 2D problems show that the new approach produces results essentially identical to a Newton type Occam minimum structure inversion, while running more rapidly than an iterative (fixed regularization parameter) CG style inversion. Memory requirements, while greater than for something like CG, are modest enough that even in 3D the scheme should allow 3D inverse problems to be solved on a common desktop PC, at least for modest (~ 100 sites, 15-20 frequencies) data sets. A secondary focus of the research has been development of a modular system for EM inversion, using an object oriented approach. This system has proven useful for more rapid prototyping of inversion algorithms, in particular allowing initial development and testing to be conducted with two-dimensional example problems, before approaching more computationally cumbersome three-dimensional problems.
Butler, T; Graham, L; Estep, D; Dawson, C; Westerink, J J
2015-04-01
The uncertainty in spatially heterogeneous Manning's n fields is quantified using a novel formulation and numerical solution of stochastic inverse problems for physics-based models. The uncertainty is quantified in terms of a probability measure and the physics-based model considered here is the state-of-the-art ADCIRC model although the presented methodology applies to other hydrodynamic models. An accessible overview of the formulation and solution of the stochastic inverse problem in a mathematically rigorous framework based on measure theory is presented. Technical details that arise in practice by applying the framework to determine the Manning's n parameter field in a shallow water equation model used for coastal hydrodynamics are presented and an efficient computational algorithm and open source software package are developed. A new notion of "condition" for the stochastic inverse problem is defined and analyzed as it relates to the computation of probabilities. This notion of condition is investigated to determine effective output quantities of interest of maximum water elevations to use for the inverse problem for the Manning's n parameter and the effect on model predictions is analyzed.
NASA Astrophysics Data System (ADS)
Butler, T.; Graham, L.; Estep, D.; Dawson, C.; Westerink, J. J.
2015-04-01
The uncertainty in spatially heterogeneous Manning's n fields is quantified using a novel formulation and numerical solution of stochastic inverse problems for physics-based models. The uncertainty is quantified in terms of a probability measure and the physics-based model considered here is the state-of-the-art ADCIRC model although the presented methodology applies to other hydrodynamic models. An accessible overview of the formulation and solution of the stochastic inverse problem in a mathematically rigorous framework based on measure theory is presented. Technical details that arise in practice by applying the framework to determine the Manning's n parameter field in a shallow water equation model used for coastal hydrodynamics are presented and an efficient computational algorithm and open source software package are developed. A new notion of "condition" for the stochastic inverse problem is defined and analyzed as it relates to the computation of probabilities. This notion of condition is investigated to determine effective output quantities of interest of maximum water elevations to use for the inverse problem for the Manning's n parameter and the effect on model predictions is analyzed.
Combining Cryptography with EEG Biometrics
Kazanavičius, Egidijus; Woźniak, Marcin
2018-01-01
Cryptographic frameworks depend on key sharing for ensuring security of data. While the keys in cryptographic frameworks must be correctly reproducible and not unequivocally connected to the identity of a user, in biometric frameworks this is different. Joining cryptography techniques with biometrics can solve these issues. We present a biometric authentication method based on the discrete logarithm problem and Bose-Chaudhuri-Hocquenghem (BCH) codes, perform its security analysis, and demonstrate its security characteristics. We evaluate a biometric cryptosystem using our own dataset of electroencephalography (EEG) data collected from 42 subjects. The experimental results show that the described biometric user authentication system is effective, achieving an Equal Error Rate (EER) of 0.024.
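The reported figure of merit can be reproduced on any score set with a short routine such as the sketch below, which estimates the Equal Error Rate from synthetic genuine and impostor similarity scores; it illustrates only the evaluation metric, not the BCH/discrete-logarithm scheme itself.

```python
# Equal Error Rate (EER) sketch: the operating point where the false accept
# rate (FAR) equals the false reject rate (FRR), from synthetic score sets.
import numpy as np

def eer(genuine, impostor, n_thresholds=1000):
    thresholds = np.linspace(min(impostor.min(), genuine.min()),
                             max(impostor.max(), genuine.max()), n_thresholds)
    best = (1.0, 0.0)
    for th in thresholds:
        far = np.mean(impostor >= th)        # impostors wrongly accepted
        frr = np.mean(genuine < th)          # genuine users wrongly rejected
        if abs(far - frr) < best[0]:
            best = (abs(far - frr), 0.5 * (far + frr))
    return best[1]

rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.1, 500)          # higher similarity for true users
impostor = rng.normal(0.5, 0.1, 500)
print("EER ≈", round(eer(genuine, impostor), 3))
```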
Combining Cryptography with EEG Biometrics.
Damaševičius, Robertas; Maskeliūnas, Rytis; Kazanavičius, Egidijus; Woźniak, Marcin
2018-01-01
Cryptographic frameworks depend on key sharing for ensuring security of data. While the keys in cryptographic frameworks must be correctly reproducible and not unequivocally connected to the identity of a user, in biometric frameworks this is different. Joining cryptography techniques with biometrics can solve these issues. We present a biometric authentication method based on the discrete logarithm problem and Bose-Chaudhuri-Hocquenghem (BCH) codes, perform its security analysis, and demonstrate its security characteristics. We evaluate a biometric cryptosystem using our own dataset of electroencephalography (EEG) data collected from 42 subjects. The experimental results show that the described biometric user authentication system is effective, achieving an Equal Error Rate (EER) of 0.024.
Detection of Drug Effects on Brain Activity using EEG-P300 with Similar Stimuli
NASA Astrophysics Data System (ADS)
Turnip, Arjon; Dwi Esti, K.; Faizal Amri, M.; Simbolon, Artha I.; Agung Suhendra, M.; IsKandar, Shelly; Wirakusumah, Firman F.
2017-07-01
Drug addiction poses a serious societal problem. Families often worry that a relative may be involved in drug use and want to know how such use can be detected. Examinations of thirty taped EEG recordings were performed. The subjects comprised three groups: addicted, methadone treatment (rehabilitation), and control (normal), with 10 subjects in each group. Statistical analysis was performed for the relative frequency of the wave bands. The highest average amplitude was obtained from the addicted subjects. Comparing signal sources, channel P3 provided a slightly higher average amplitude than the other channels for all subjects.
Inverse models: A necessary next step in ground-water modeling
Poeter, E.P.; Hill, M.C.
1997-01-01
Inverse models using, for example, nonlinear least-squares regression, provide capabilities that help modelers take full advantage of the insight available from ground-water models. However, lack of information about the requirements and benefits of inverse models is an obstacle to their widespread use. This paper presents a simple ground-water flow problem to illustrate the requirements and benefits of the nonlinear least-squares regression method of inverse modeling and discusses how these attributes apply to field problems. The benefits of inverse modeling include: (1) expedited determination of best fit parameter values; (2) quantification of the (a) quality of calibration, (b) data shortcomings and needs, and (c) confidence limits on parameter estimates and predictions; and (3) identification of issues that are easily overlooked during nonautomated calibration.
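The kind of nonlinear least-squares parameter estimation advocated here can be sketched in a few lines; the example below fits a toy exponential model (standing in for a ground-water flow model) and derives approximate parameter standard errors from the Jacobian, under the usual linearized-Gaussian assumptions.

```python
# Nonlinear least-squares parameter estimation sketch (toy forward model):
# best-fit parameters plus approximate confidence information from the Jacobian.
import numpy as np
from scipy.optimize import least_squares

def forward(params, x):
    a, k = params
    return a * np.exp(-k * x)                    # stand-in forward model

rng = np.random.default_rng(2)
x = np.linspace(0, 5, 40)
d_obs = forward([2.0, 0.7], x) + 0.05 * rng.standard_normal(x.size)

res = least_squares(lambda p: forward(p, x) - d_obs, x0=[1.0, 1.0])
dof = x.size - res.x.size
s2 = np.sum(res.fun ** 2) / dof                  # residual variance
cov = s2 * np.linalg.inv(res.jac.T @ res.jac)    # linearized parameter covariance
print("estimates:", res.x, " std errors:", np.sqrt(np.diag(cov)))
```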
Iterative algorithms for a non-linear inverse problem in atmospheric lidar
NASA Astrophysics Data System (ADS)
Denevi, Giulia; Garbarino, Sara; Sorrentino, Alberto
2017-08-01
We consider the inverse problem of retrieving aerosol extinction coefficients from Raman lidar measurements. In this problem the unknown and the data are related through the exponential of a linear operator, the unknown is non-negative and the data follow the Poisson distribution. Standard methods work on the log-transformed data and solve the resulting linear inverse problem, but neglect to take into account the noise statistics. In this study we show that proper modelling of the noise distribution can improve substantially the quality of the reconstructed extinction profiles. To achieve this goal, we consider the non-linear inverse problem with non-negativity constraint, and propose two iterative algorithms derived using the Karush-Kuhn-Tucker conditions. We validate the algorithms with synthetic and experimental data. As expected, the proposed algorithms out-perform standard methods in terms of sensitivity to noise and reliability of the estimated profile.
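To give a flavour of KKT-derived non-negative iterations for Poisson data, the sketch below applies a standard multiplicative (MLEM-type) update to a purely linear Poisson model; the lidar forward model in the paper involves the exponential of a linear operator, and its specific algorithms are not reproduced here.

```python
# MLEM-type multiplicative update for a linear Poisson inverse problem
# y ~ Poisson(A x), x >= 0, illustrating the flavour of KKT-derived
# non-negative iterations.
import numpy as np

rng = np.random.default_rng(3)
m, n = 60, 30
A = rng.uniform(0.0, 1.0, size=(m, n))
x_true = rng.uniform(0.5, 2.0, size=n)
y = rng.poisson(A @ x_true).astype(float)

x = np.ones(n)                                   # positive initial guess
ones = np.ones(m)
for _ in range(200):
    ratio = y / np.maximum(A @ x, 1e-12)
    x *= (A.T @ ratio) / (A.T @ ones)            # multiplicative MLEM update
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```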
NASA Astrophysics Data System (ADS)
Hansen, T. M.; Cordua, K. S.
2017-12-01
Probabilistically formulated inverse problems can be solved using Monte Carlo-based sampling methods. In principle, both advanced prior information, based on for example, complex geostatistical models and non-linear forward models can be considered using such methods. However, Monte Carlo methods may be associated with huge computational costs that, in practice, limit their application. This is not least due to the computational requirements related to solving the forward problem, where the physical forward response of some earth model has to be evaluated. Here, it is suggested to replace a numerical complex evaluation of the forward problem, with a trained neural network that can be evaluated very fast. This will introduce a modeling error that is quantified probabilistically such that it can be accounted for during inversion. This allows a very fast and efficient Monte Carlo sampling of the solution to an inverse problem. We demonstrate the methodology for first arrival traveltime inversion of crosshole ground penetrating radar data. An accurate forward model, based on 2-D full-waveform modeling followed by automatic traveltime picking, is replaced by a fast neural network. This provides a sampling algorithm three orders of magnitude faster than using the accurate and computationally expensive forward model, and also considerably faster and more accurate (i.e. with better resolution), than commonly used approximate forward models. The methodology has the potential to dramatically change the complexity of non-linear and non-Gaussian inverse problems that have to be solved using Monte Carlo sampling techniques.
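The core idea, replacing an expensive forward solver with a fast trained regressor inside a sampler, can be illustrated with the toy sketch below (scikit-learn MLP surrogate plus random-walk Metropolis); the probabilistic treatment of the surrogate's modeling error described in the abstract is omitted, and the forward model here is purely illustrative.

```python
# Sketch of Monte Carlo sampling with a neural-network surrogate forward model.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
def forward(m):                                   # "expensive" toy forward model
    return np.array([m[0] + m[1], m[0] - 0.5 * m[1], 0.3 * m[0] + 0.7 * m[1]])

# Train the surrogate on samples drawn from the prior.
M_train = rng.normal(0, 1, size=(2000, 2))
D_train = np.array([forward(m) for m in M_train])
surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
surrogate.fit(M_train, D_train)

m_true = np.array([0.4, -0.8])
d_obs = forward(m_true) + 0.05 * rng.standard_normal(3)
sigma = 0.05

def log_post(m):                                  # Gaussian likelihood x N(0,1) prior
    r = surrogate.predict(m.reshape(1, -1))[0] - d_obs
    return -0.5 * np.sum(r ** 2) / sigma ** 2 - 0.5 * np.sum(m ** 2)

m, lp, chain = np.zeros(2), log_post(np.zeros(2)), []
for _ in range(5000):                             # random-walk Metropolis
    prop = m + 0.1 * rng.standard_normal(2)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        m, lp = prop, lp_prop
    chain.append(m.copy())
print("posterior mean:", np.mean(chain[1000:], axis=0), "true:", m_true)
```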
The incomplete inverse and its applications to the linear least squares problem
NASA Technical Reports Server (NTRS)
Morduch, G. E.
1977-01-01
A modified matrix product is explained, and it is shown that this product defines a group whose inverse is called the incomplete inverse. It was proven that the incomplete inverse of an augmented normal matrix includes all the quantities associated with the least squares solution. An answer is provided to the problem that occurs when the data residuals are too large and there are insufficient data to justify augmenting the model.
Analytic semigroups: Applications to inverse problems for flexible structures
NASA Technical Reports Server (NTRS)
Banks, H. T.; Rebnord, D. A.
1990-01-01
Convergence and stability results for least squares inverse problems involving systems described by analytic semigroups are presented. The practical importance of these results is demonstrated by application to several examples from problems of estimation of material parameters in flexible structures using accelerometer data.
A direct method for nonlinear ill-posed problems
NASA Astrophysics Data System (ADS)
Lakhal, A.
2018-02-01
We propose a direct method for solving nonlinear ill-posed problems in Banach spaces. The method is based on a stable inversion formula we explicitly compute by applying techniques for analytic functions. Furthermore, we investigate the convergence and stability of the method and prove that the derived noniterative algorithm is a regularization. The inversion formula provides a systematic sensitivity analysis. The approach is applicable to a wide range of nonlinear ill-posed problems. We test the algorithm on a nonlinear problem of travel-time inversion in seismic tomography. Numerical results illustrate the robustness and efficiency of the algorithm.
A gradient based algorithm to solve inverse plane bimodular problems of identification
NASA Astrophysics Data System (ADS)
Ran, Chunjiang; Yang, Haitian; Zhang, Guoqing
2018-02-01
This paper presents a gradient based algorithm to solve inverse plane bimodular problems of identifying constitutive parameters, including tensile/compressive moduli and tensile/compressive Poisson's ratios. For the forward bimodular problem, an FE tangent stiffness matrix is derived, facilitating the implementation of gradient based algorithms; for the inverse bimodular problem of identification, a two-level sensitivity analysis based strategy is proposed. Numerical verification in terms of accuracy and efficiency is provided, and the impacts of initial guess, number of measurement points, regional inhomogeneity, and noisy data on the identification are taken into account.
Gravity inversion of a fault by Particle swarm optimization (PSO).
Toushmalani, Reza
2013-01-01
Particle swarm optimization (PSO) is a heuristic global optimization algorithm based on swarm intelligence, inspired by research on the flocking behavior of birds and fish. In this paper we introduce and use this method for the gravity inverse problem. We discuss the solution of the inverse problem of determining the shape of a fault whose gravity anomaly is known. Application of the proposed algorithm to this problem has proven its capability to deal with difficult optimization problems. The technique proved to work efficiently when tested on a number of models.
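A generic PSO inversion loop looks like the sketch below; the toy forward model here is only a stand-in for the fault gravity-anomaly formula used in the paper, and all tuning constants are illustrative.

```python
# Generic particle swarm optimization (PSO) sketch minimizing a least-squares
# misfit between observed and predicted data for a toy two-parameter model.
import numpy as np

rng = np.random.default_rng(5)
x_obs = np.linspace(-5, 5, 50)
def forward(p):                                   # stand-in forward model
    depth, amp = p
    return amp / (x_obs ** 2 + depth ** 2)
d_obs = forward([2.0, 10.0]) + 0.01 * rng.standard_normal(x_obs.size)
def misfit(p):
    return np.sum((forward(p) - d_obs) ** 2)

n, dim, w, c1, c2 = 30, 2, 0.7, 1.5, 1.5
X = rng.uniform([0.5, 1.0], [5.0, 20.0], size=(n, dim))
V = np.zeros((n, dim))
P, Pf = X.copy(), np.array([misfit(x) for x in X])       # personal bests
g = P[np.argmin(Pf)].copy()                              # global best
for _ in range(200):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    V = w * V + c1 * r1 * (P - X) + c2 * r2 * (g - X)    # velocity update
    X = X + V
    F = np.array([misfit(x) for x in X])
    better = F < Pf
    P[better], Pf[better] = X[better], F[better]
    g = P[np.argmin(Pf)].copy()
print("recovered parameters:", g)
```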
The Inverse Problem in Jet Acoustics
NASA Technical Reports Server (NTRS)
Wooddruff, S. L.; Hussaini, M. Y.
2001-01-01
The inverse problem for jet acoustics, or the determination of noise sources from far-field pressure information, is proposed as a tool for understanding the generation of noise by turbulence and for the improved prediction of jet noise. An idealized version of the problem is investigated first to establish the extent to which information about the noise sources may be determined from far-field pressure data and to determine how a well-posed inverse problem may be set up. Then a version of the industry-standard MGB code is used to predict a jet noise source spectrum from experimental noise data.
Mimickers of generalized spike and wave discharges.
Azzam, Raed; Bhatt, Amar B
2014-06-01
Overinterpretation of benign EEG variants is a common problem that can lead to the misdiagnosis of epilepsy. We review four normal patterns that mimic generalized spike and wave discharges: phantom spike-and-wave, hyperventilation hypersynchrony, hypnagogic/hypnopompic hypersynchrony, and mitten patterns.
Inverse kinematics problem in robotics using neural networks
NASA Technical Reports Server (NTRS)
Choi, Benjamin B.; Lawrence, Charles
1992-01-01
In this paper, Multilayer Feedforward Networks are applied to the robot inverse kinematics problem. The networks are trained with end-effector positions and joint angles. After training, performance is measured by having the network generate joint angles for arbitrary end-effector trajectories. A 3-degree-of-freedom (DOF) spatial manipulator is used for the study. It is found that neural networks provide a simple and effective way to both model the manipulator inverse kinematics and circumvent the problems associated with algorithmic solution methods.
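A minimal modern analogue of this approach, using a 2-DOF planar arm and scikit-learn rather than the 3-DOF spatial manipulator of the study, might look as follows; joint ranges are restricted so that the inverse mapping is single-valued, and all network sizes are illustrative.

```python
# Neural-network inverse kinematics sketch for a 2-DOF planar arm.
import numpy as np
from sklearn.neural_network import MLPRegressor

L1, L2 = 1.0, 0.7
def fk(q):                                        # forward kinematics
    t1, t2 = q[..., 0], q[..., 1]
    x = L1 * np.cos(t1) + L2 * np.cos(t1 + t2)
    y = L1 * np.sin(t1) + L2 * np.sin(t1 + t2)
    return np.stack([x, y], axis=-1)

rng = np.random.default_rng(6)
Q = np.column_stack([rng.uniform(0.0, np.pi / 2, 5000),     # theta1
                     rng.uniform(0.2, np.pi - 0.2, 5000)])  # theta2 (elbow-up only)
P = fk(Q)

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0)
net.fit(P, Q)                                      # learn position -> joint angles

q_test = np.array([[0.8, 1.2]])
p_test = fk(q_test)
q_pred = net.predict(p_test)
print("position error:", np.linalg.norm(fk(q_pred) - p_test))
```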
Bayesian Inference in Satellite Gravity Inversion
NASA Technical Reports Server (NTRS)
Kis, K. I.; Taylor, Patrick T.; Wittmann, G.; Kim, Hyung Rae; Torony, B.; Mayer-Guerr, T.
2005-01-01
To solve a geophysical inverse problem means applying measurements to determine the parameters of the selected model. The inverse problem is formulated as Bayesian inference, and Gaussian probability density functions are applied in Bayes' equation. The CHAMP satellite gravity data are determined at an altitude of 400 kilometers over the southern part of the Pannonian basin. The model of interpretation is a right vertical cylinder. The parameters of the model are obtained from the minimization problem solved by the Simplex method.
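For the special case of a linear forward operator G with Gaussian data and prior covariances C_d and C_m, the Bayesian formulation has the familiar closed form below (generic notation; the cylinder model in the paper is nonlinear and is minimized numerically with the Simplex method):

```latex
p(\mathbf{m}\mid\mathbf{d}) \propto
\exp\!\Big[-\tfrac12(\mathbf{d}-G\mathbf{m})^{\mathsf T}C_d^{-1}(\mathbf{d}-G\mathbf{m})
-\tfrac12(\mathbf{m}-\mathbf{m}_0)^{\mathsf T}C_m^{-1}(\mathbf{m}-\mathbf{m}_0)\Big],
\qquad
\hat{\mathbf{m}} = \mathbf{m}_0 + \big(G^{\mathsf T}C_d^{-1}G + C_m^{-1}\big)^{-1}
G^{\mathsf T}C_d^{-1}(\mathbf{d}-G\mathbf{m}_0).
```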
EDITORIAL: Inverse Problems in Engineering
NASA Astrophysics Data System (ADS)
West, Robert M.; Lesnic, Daniel
2007-01-01
Presented here are 11 noteworthy papers selected from the Fifth International Conference on Inverse Problems in Engineering: Theory and Practice held in Cambridge, UK during 11-15 July 2005. The papers have been peer-reviewed to the usual high standards of this journal and the contributions of reviewers are much appreciated. The conference featured a good balance of the fundamental mathematical concepts of inverse problems with a diverse range of important and interesting applications, which are represented here by the selected papers. Aspects of finite-element modelling and the performance of inverse algorithms are investigated by Autrique et al and Leduc et al. Statistical aspects are considered by Emery et al and Watzenig et al with regard to Bayesian parameter estimation and inversion using particle filters. Electrostatic applications are demonstrated by van Berkel and Lionheart and also Nakatani et al. Contributions to the applications of electrical techniques and specifically electrical tomographies are provided by Wakatsuki and Kagawa, Kim et al and Kortschak et al. Aspects of inversion in optical tomography are investigated by Wright et al and Douiri et al. The authors are representative of the worldwide interest in inverse problems relating to engineering applications and their efforts in producing these excellent papers will be appreciated by many readers of this journal.
Inverse problem for multispecies ferromagneticlike mean-field models in phase space with many states
NASA Astrophysics Data System (ADS)
Fedele, Micaela; Vernia, Cecilia
2017-10-01
In this paper we solve the inverse problem for the Curie-Weiss model and its multispecies version when multiple thermodynamic states are present as in the low temperature phase where the phase space is clustered. The inverse problem consists of reconstructing the model parameters starting from configuration data generated according to the distribution of the model. We demonstrate that, without taking into account the presence of many states, the application of the inversion procedure produces very poor inference results. To overcome this problem, we use the clustering algorithm. When the system has two symmetric states of positive and negative magnetizations, the parameter reconstruction can also be obtained with smaller computational effort simply by flipping the sign of the magnetizations from positive to negative (or vice versa). The parameter reconstruction fails when the system undergoes a phase transition: In that case we give the correct inversion formulas for the Curie-Weiss model and we show that they can be used to measure how close the system gets to being critical.
NASA Technical Reports Server (NTRS)
Liu, Gao-Lian
1991-01-01
Advances in inverse design and optimization theory in engineering fields in China are presented. Two original approaches, the image-space approach and the variational approach, are discussed in terms of turbomachine aerodynamic inverse design. Other areas of research in turbomachine aerodynamic inverse design include the improved mean-streamline (stream surface) method and optimization theory based on optimal control. Among the additional engineering fields discussed are the following: the inverse problem of heat conduction, free-surface flow, variational cogeneration of optimal grid and flow field, and optimal meshing theory of gears.
Domain identification in impedance computed tomography by spline collocation method
NASA Technical Reports Server (NTRS)
Kojima, Fumio
1990-01-01
A method for estimating an unknown domain in elliptic boundary value problems is considered. The problem is formulated as an inverse problem of integral equations of the second kind. A computational method is developed using a spline collocation scheme. The results can be applied to the inverse problem of impedance computed tomography (ICT) for image reconstruction.
Name that tune: decoding music from the listening brain.
Schaefer, Rebecca S; Farquhar, Jason; Blokland, Yvonne; Sadakata, Makiko; Desain, Peter
2011-05-15
In the current study we use electroencephalography (EEG) to detect heard music from the brain signal, hypothesizing that the time structure in music makes it especially suitable for decoding perception from EEG signals. While excluding music with vocals, we classified the perception of seven different musical fragments of about three seconds, both individually and cross-participants, using only time domain information (the event-related potential, ERP). The best individual results are 70% correct in a seven-class problem while using single trials, and when using multiple trials we achieve 100% correct after six presentations of the stimulus. When classifying across participants, a maximum rate of 53% was reached, supporting a general representation of each musical fragment over participants. While for some music stimuli the amplitude envelope correlated well with the ERP, this was not true for all stimuli. Aspects of the stimulus that may contribute to the differences between the EEG responses to the pieces of music are discussed.
Regularized Filters for L1-Norm-Based Common Spatial Patterns.
Wang, Haixian; Li, Xiaomeng
2016-02-01
The l1-norm-based common spatial patterns (CSP-L1) approach is a recently developed technique for optimizing spatial filters in the field of electroencephalogram (EEG)-based brain computer interfaces. The l1-norm-based expression of dispersion in CSP-L1 alleviates the negative impact of outliers. In this paper, we further improve the robustness of CSP-L1 by taking into account noise which does not necessarily have as large a deviation as outliers. The noise modelling is formulated using the waveform length of the EEG time course. With the noise modelling, we then regularize the objective function of CSP-L1, in which the l1-norm is used in two ways: one for the dispersion and the other for the waveform length. An iterative algorithm is designed to resolve the optimization problem of the regularized objective function. A toy illustration and experiments of classification on real EEG data sets show the effectiveness of the proposed method.
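For comparison, the standard variance-based (L2) CSP baseline that CSP-L1 modifies can be computed with a generalized eigendecomposition, as in the sketch below on synthetic two-class data; this is not the CSP-L1 or regularized algorithm of the paper.

```python
# Standard L2 common spatial patterns (CSP) via a generalized eigenproblem.
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_filters=2):
    # trials_*: arrays of shape (n_trials, n_channels, n_samples)
    def avg_cov(trials):
        return np.mean([np.cov(t) for t in trials], axis=0)
    Ca, Cb = avg_cov(trials_a), avg_cov(trials_b)
    # Solve Ca w = lambda (Ca + Cb) w; extreme eigenvalues give the CSP filters.
    vals, vecs = eigh(Ca, Ca + Cb)
    order = np.argsort(vals)
    picks = np.r_[order[:n_filters // 2], order[-(n_filters - n_filters // 2):]]
    return vecs[:, picks].T

rng = np.random.default_rng(7)
n_tr, n_ch, n_s = 30, 8, 256
trials_a = rng.standard_normal((n_tr, n_ch, n_s))
trials_b = rng.standard_normal((n_tr, n_ch, n_s))
trials_b[:, 0, :] *= 3.0                          # class B: extra variance on channel 0
W = csp_filters(trials_a, trials_b)
va = np.mean([np.var(W @ t, axis=1) for t in trials_a], axis=0)
vb = np.mean([np.var(W @ t, axis=1) for t in trials_b], axis=0)
print("filtered variances A:", va, "B:", vb)
```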
Nessi: An EEG-Controlled Web Browser for Severely Paralyzed Patients
Bensch, Michael; Karim, Ahmed A.; Mellinger, Jürgen; Hinterberger, Thilo; Tangermann, Michael; Bogdan, Martin; Rosenstiel, Wolfgang; Birbaumer, Niels
2007-01-01
We have previously demonstrated that an EEG-controlled web browser based on self-regulation of slow cortical potentials (SCPs) enables severely paralyzed patients to browse the internet independently of any voluntary muscle control. However, this system had several shortcomings, among them that patients could only browse within a limited number of web pages and had to select links from an alphabetical list, causing problems if the link names were identical or if they were unknown to the user (as in graphical links). Here we describe a new EEG-controlled web browser, called Nessi, which overcomes these shortcomings. In Nessi, the open source browser, Mozilla, was extended by graphical in-place markers, whereby different brain responses correspond to different frame colors placed around selectable items, enabling the user to select any link on a web page. Besides links, other interactive elements are accessible to the user, such as e-mail and virtual keyboards, opening up a wide range of hypertext-based applications. PMID:18350132
Computational structures for robotic computations
NASA Technical Reports Server (NTRS)
Lee, C. S. G.; Chang, P. R.
1987-01-01
The computational problems of inverse kinematics and inverse dynamics of robot manipulators are discussed, taking advantage of parallelism and pipelining architectures. For the computation of the inverse kinematic position solution, a maximally pipelined CORDIC architecture has been designed based on a functional decomposition of the closed-form joint equations. For the inverse dynamics computation, an efficient p-fold parallel algorithm that overcomes the recurrence problem of the Newton-Euler equations of motion and achieves the time lower bound of O(log2 n) has also been developed.
Burton, Brett M; Tate, Jess D; Erem, Burak; Swenson, Darrell J; Wang, Dafang F; Steffen, Michael; Brooks, Dana H; van Dam, Peter M; Macleod, Rob S
2012-01-01
Computational modeling in electrocardiography often requires the examination of cardiac forward and inverse problems in order to non-invasively analyze physiological events that are otherwise inaccessible or unethical to explore. The study of these models can be performed in the open-source SCIRun problem solving environment developed at the Center for Integrative Biomedical Computing (CIBC). A new toolkit within SCIRun provides researchers with essential frameworks for constructing and manipulating electrocardiographic forward and inverse models in a highly efficient and interactive way. The toolkit contains sample networks, tutorials and documentation which direct users through SCIRun-specific approaches in the assembly and execution of these specific problems. PMID:22254301
Dushaw, Brian D; Sagen, Hanne
2017-12-01
Ocean acoustic tomography depends on a suitable reference ocean environment with which to set the basic parameters of the inverse problem. Some inverse problems may require a reference ocean that includes the small-scale variations from internal waves, small mesoscale, or spice. Tomographic inversions that employ data of stable shadow zone arrivals, such as those that have been observed in the North Pacific and Canary Basin, are an example. Estimating temperature from the unique acoustic data that have been obtained in Fram Strait is another example. The addition of small-scale variability to augment a smooth reference ocean is essential to understanding the acoustic forward problem in these cases. Rather than a hindrance, the stochastic influences of the small scale can be exploited to obtain accurate inverse estimates. Inverse solutions are readily obtained, and they give computed arrival patterns that matched the observations. The approach is not ad hoc, but universal, and it has allowed inverse estimates for ocean temperature variations in Fram Strait to be readily computed on several acoustic paths for which tomographic data were obtained.
Wen, Tingxi; Zhang, Zhongnan
2017-01-01
In this paper, genetic algorithm-based frequency-domain feature search (GAFDS) method is proposed for the electroencephalogram (EEG) analysis of epilepsy. In this method, frequency-domain features are first searched and then combined with nonlinear features. Subsequently, these features are selected and optimized to classify EEG signals. The extracted features are analyzed experimentally. The features extracted by GAFDS show remarkable independence, and they are superior to the nonlinear features in terms of the ratio of interclass distance and intraclass distance. Moreover, the proposed feature search method can search for features of instantaneous frequency in a signal after Hilbert transformation. The classification results achieved using these features are reasonable; thus, GAFDS exhibits good extensibility. Multiple classical classifiers (i.e., k-nearest neighbor, linear discriminant analysis, decision tree, AdaBoost, multilayer perceptron, and Naïve Bayes) achieve satisfactory classification accuracies by using the features generated by the GAFDS method and the optimized feature selection. The accuracies for 2-classification and 3-classification problems may reach up to 99% and 97%, respectively. Results of several cross-validation experiments illustrate that GAFDS is effective in the extraction of effective features for EEG classification. Therefore, the proposed feature selection and optimization model can improve classification accuracy. PMID:28489789
NASA Astrophysics Data System (ADS)
Suja Priyadharsini, S.; Edward Rajan, S.; Femilin Sheniha, S.
2016-03-01
Electroencephalogram (EEG) is the recording of the electrical activity of the brain. It is contaminated by other biological signals, such as the cardiac signal (electrocardiogram), signals generated by eye movements/eye blinks (electrooculogram) and muscular signals (electromyogram), called artefacts. Optimisation is an important tool for solving many real-world problems. In the proposed work, artefact removal based on the adaptive neuro-fuzzy inference system (ANFIS) is employed, by optimising the parameters of ANFIS. The Artificial Immune System (AIS) algorithm is used to optimise the parameters of ANFIS (ANFIS-AIS). Implementation results show that ANFIS-AIS is more effective than ANFIS in removing artefacts from the EEG signal. Furthermore, in the proposed work, an improved AIS (IAIS) is developed by including suitable selection processes in the AIS algorithm. The performance of the proposed IAIS method is compared with AIS and with a genetic algorithm (GA). Measures such as signal-to-noise ratio, mean square error (MSE), correlation coefficient, power spectral density plots and convergence time are used to analyse the performance of the proposed method. From the results, it is found that the IAIS algorithm converges faster than AIS and performs better than both AIS and GA. Hence, IAIS-tuned ANFIS (ANFIS-IAIS) is effective in removing artefacts from EEG signals.
Progress in EEG-Based Brain Robot Interaction Systems
Li, Mengfan; Niu, Linwei; Xian, Bin; Zeng, Ming; Chen, Genshe
2017-01-01
The most popular noninvasive Brain Robot Interaction (BRI) technology uses the electroencephalogram- (EEG-) based Brain Computer Interface (BCI), to serve as an additional communication channel, for robot control via brainwaves. This technology is promising for elderly or disabled patient assistance with daily life. The key issue of a BRI system is to identify human mental activities, by decoding brainwaves, acquired with an EEG device. Compared with other BCI applications, such as word speller, the development of these applications may be more challenging since control of robot systems via brainwaves must consider surrounding environment feedback in real-time, robot mechanical kinematics, and dynamics, as well as robot control architecture and behavior. This article reviews the major techniques needed for developing BRI systems. In this review article, we first briefly introduce the background and development of mind-controlled robot technologies. Second, we discuss the EEG-based brain signal models with respect to generating principles, evoking mechanisms, and experimental paradigms. Subsequently, we review in detail commonly used methods for decoding brain signals, namely, preprocessing, feature extraction, and feature classification, and summarize several typical application examples. Next, we describe a few BRI applications, including wheelchairs, manipulators, drones, and humanoid robots with respect to synchronous and asynchronous BCI-based techniques. Finally, we address some existing problems and challenges with future BRI techniques. PMID:28484488
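The generic preprocessing / feature-extraction / classification chain reviewed here can be condensed into a minimal sketch such as the following (band-pass filtering, log band-power features and an LDA classifier on synthetic trials); real BRI systems add calibration, artifact handling and online feedback, and all parameters below are illustrative.

```python
# Minimal EEG decoding chain: band-pass filter -> band-power features -> LDA.
import numpy as np
from scipy.signal import butter, filtfilt, welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

fs = 250
b, a = butter(4, [8, 30], btype="bandpass", fs=fs)        # mu/beta band

def band_power_features(trial):
    filt = filtfilt(b, a, trial, axis=-1)
    f, pxx = welch(filt, fs=fs, nperseg=128, axis=-1)
    band = (f >= 8) & (f <= 30)
    return np.log(pxx[:, band].mean(axis=-1))             # one feature per channel

rng = np.random.default_rng(8)
n_trials, n_ch, n_s = 80, 4, 500
X_raw = rng.standard_normal((n_trials, n_ch, n_s))
y = np.repeat([0, 1], n_trials // 2)
t = np.arange(n_s) / fs
X_raw[y == 1, 0, :] += 1.5 * np.sin(2 * np.pi * 12 * t)   # class 1: extra 12 Hz rhythm

X = np.array([band_power_features(tr) for tr in X_raw])
print("CV accuracy:", cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean())
```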
EEG to Primary Rewards: Predictive Utility and Malleability by Brain Stimulation
Prause, Nicole; Siegle, Greg J.; Deblieck, Choi; Wu, Allan; Iacoboni, Marco
2016-01-01
Theta burst stimulation (TBS) is thought to affect reward processing mechanisms, which may increase and decrease reward sensitivity. To test the ability of TBS to modulate response to strong primary rewards, participants hypersensitive to primary rewards were recruited. Twenty men and women with at least two opposite-sex, sexual partners in the last year received two forms of TBS. Stimulations were randomized to avoid order effects and separated by 2 hours to reduce carryover. The two TBS forms have been demonstrated to inhibit (continuous) or excite (intermittent) the left dorsolateral prefrontal cortex using different pulse patterns, which links to brain areas associated with reward conditioning. After each TBS, participants completed tasks assessing their reward responsiveness to monetary and sexual rewards. Electroencephalography (EEG) was recorded. They also reported their number of orgasms in the weekend following stimulation. This signal was malleable by TBS, where excitatory TBS resulted in lower EEG alpha relative to inhibitory TBS to primary rewards. EEG responses to sexual rewards in the lab (following both forms of TBS) predicted the number of orgasms experienced over the forthcoming weekend. TBS may be useful in modifying hypersensitivity or hyposensitivity to primary rewards that predict sexual behaviors. Since TBS altered the anticipation of a sexual reward, TBS may offer a novel treatment for sexual desire problems. PMID:27902711
EEG to Primary Rewards: Predictive Utility and Malleability by Brain Stimulation.
Prause, Nicole; Siegle, Greg J; Deblieck, Choi; Wu, Allan; Iacoboni, Marco
2016-01-01
Theta burst stimulation (TBS) is thought to affect reward processing mechanisms, which may increase and decrease reward sensitivity. To test the ability of TBS to modulate response to strong primary rewards, participants hypersensitive to primary rewards were recruited. Twenty men and women with at least two opposite-sex, sexual partners in the last year received two forms of TBS. Stimulations were randomized to avoid order effects and separated by 2 hours to reduce carryover. The two TBS forms have been demonstrated to inhibit (continuous) or excite (intermittent) the left dorsolateral prefrontal cortex using different pulse patterns, which links to brain areas associated with reward conditioning. After each TBS, participants completed tasks assessing their reward responsiveness to monetary and sexual rewards. Electroencephalography (EEG) was recorded. They also reported their number of orgasms in the weekend following stimulation. This signal was malleable by TBS, where excitatory TBS resulted in lower EEG alpha relative to inhibitory TBS to primary rewards. EEG responses to sexual rewards in the lab (following both forms of TBS) predicted the number of orgasms experienced over the forthcoming weekend. TBS may be useful in modifying hypersensitivity or hyposensitivity to primary rewards that predict sexual behaviors. Since TBS altered the anticipation of a sexual reward, TBS may offer a novel treatment for sexual desire problems.
EEG abnormalities and epilepsy in autistic spectrum disorders: clinical and familial correlates.
Ekinci, Ozalp; Arman, Ayşe Rodopman; Işik, Uğur; Bez, Yasin; Berkem, Meral
2010-02-01
Our aim was to examine the characteristics of EEG findings and epilepsy in autistic spectrum disorders (ASD) and the associated clinical and familial risk factors. Fifty-seven children (86% male) with ASD, mean age 82+/-36.2 months, were included in the study. Thirty-nine (68.4%) children had the diagnosis of autism, 15 (26.3%) had Pervasive Developmental Disorder Not Otherwise Specified, and 3 (5.3%) had high-functioning autism. One hour of sleep and/or awake EEG recordings was obtained for each child. All patients were evaluated with respect to clinical and familial characteristics and with the Childhood Autism Rating Scale, the Autism Behavior Checklist, and the Aberrant Behavior Checklist. The frequency of interictal epileptiform EEG abnormalities (IIEAs) was 24.6% (n=14), and the frequency of epilepsy was 14.2% (n=8). IIEAs were associated with a diagnosis of epilepsy (P=0.0001), Childhood Autism Rating Scale Activity scores (P=0.047), and a history of asthma and allergy (P=0.044). Epilepsy was associated with a family history of epilepsy (P=0.049) and psychiatric problems in the mother during pregnancy (P=0.0026). Future studies with larger samples will help to clarify the possible associations of epilepsy/IIEAs with asthma/allergy, hyperactivity, and familial factors in ASD.
Wen, Tingxi; Zhang, Zhongnan
2017-05-01
In this paper, genetic algorithm-based frequency-domain feature search (GAFDS) method is proposed for the electroencephalogram (EEG) analysis of epilepsy. In this method, frequency-domain features are first searched and then combined with nonlinear features. Subsequently, these features are selected and optimized to classify EEG signals. The extracted features are analyzed experimentally. The features extracted by GAFDS show remarkable independence, and they are superior to the nonlinear features in terms of the ratio of interclass distance and intraclass distance. Moreover, the proposed feature search method can search for features of instantaneous frequency in a signal after Hilbert transformation. The classification results achieved using these features are reasonable; thus, GAFDS exhibits good extensibility. Multiple classical classifiers (i.e., k-nearest neighbor, linear discriminant analysis, decision tree, AdaBoost, multilayer perceptron, and Naïve Bayes) achieve satisfactory classification accuracies by using the features generated by the GAFDS method and the optimized feature selection. The accuracies for 2-classification and 3-classification problems may reach up to 99% and 97%, respectively. Results of several cross-validation experiments illustrate that GAFDS is effective in the extraction of effective features for EEG classification. Therefore, the proposed feature selection and optimization model can improve classification accuracy.
Soft, Comfortable Polymer Dry Electrodes for High Quality ECG and EEG Recording
Chen, Yun-Hsuan; de Beeck, Maaike Op; Vanderheyden, Luc; Carrette, Evelien; Mihajlović, Vojkan; Vanstreels, Kris; Grundlehner, Bernard; Gadeyne, Stefanie; Boon, Paul; Van Hoof, Chris
2014-01-01
Conventional gel electrodes are widely used for biopotential measurements, despite important drawbacks such as skin irritation, long set-up time and uncomfortable removal. Recently introduced dry electrodes with rigid metal pins overcome most of these problems; however, their rigidity causes discomfort and pain. This paper presents dry electrodes offering high user comfort, since they are fabricated from EPDM rubber containing various additives for optimum conductivity, flexibility and ease of fabrication. The electrode impedance is measured on phantoms and human skin. After optimization of the polymer composition, the skin-electrode impedance is only ∼10 times larger than that of gel electrodes. Therefore, these electrodes are directly capable of recording strong biopotential signals such as ECG while for low-amplitude signals such as EEG, the electrodes need to be coupled with an active circuit. EEG recordings using active polymer electrodes connected to a clinical EEG system show very promising results: alpha waves can be clearly observed when subjects close their eyes, and correlation and coherence analyses reveal high similarity between dry and gel electrode signals. Moreover, all subjects reported that our polymer electrodes did not cause discomfort. Hence, the polymer-based dry electrodes are promising alternatives to either rigid dry electrodes or conventional gel electrodes. PMID:25513825
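The coherence analysis mentioned for comparing dry and gel recordings can be reproduced with scipy, as in the short synthetic example below; the signals here are simulated alpha rhythms, not the clinical recordings of the study.

```python
# Coherence check between two simultaneously recorded signals
# (e.g. dry vs gel electrode), as used to assess signal similarity.
import numpy as np
from scipy.signal import coherence

fs = 256
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(9)
alpha = np.sin(2 * np.pi * 10 * t)
gel = alpha + 0.3 * rng.standard_normal(t.size)
dry = alpha + 0.5 * rng.standard_normal(t.size)       # same rhythm, more noise

f, Cxy = coherence(gel, dry, fs=fs, nperseg=512)
print("coherence at 10 Hz:", Cxy[np.argmin(np.abs(f - 10))].round(2))
```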
Joint Geophysical Inversion With Multi-Objective Global Optimization Methods
NASA Astrophysics Data System (ADS)
Lelievre, P. G.; Bijani, R.; Farquharson, C. G.
2015-12-01
Pareto multi-objective global optimization (PMOGO) methods generate a suite of solutions that minimize multiple objectives (e.g. data misfits and regularization terms) in a Pareto-optimal sense. Providing a suite of models, as opposed to a single model that minimizes a weighted sum of objectives, allows a more complete assessment of the possibilities and avoids the often difficult choice of how to weight each objective. We are applying PMOGO methods to three classes of inverse problems. The first class are standard mesh-based problems where the physical property values in each cell are treated as continuous variables. The second class of problems are also mesh-based but cells can only take discrete physical property values corresponding to known or assumed rock units. In the third class we consider a fundamentally different type of inversion in which a model comprises wireframe surfaces representing contacts between rock units; the physical properties of each rock unit remain fixed while the inversion controls the position of the contact surfaces via control nodes. This third class of problem is essentially a geometry inversion, which can be used to recover the unknown geometry of a target body or to investigate the viability of a proposed Earth model. Joint inversion is greatly simplified for the latter two problem classes because no additional mathematical coupling measure is required in the objective function. PMOGO methods can solve numerically complicated problems that could not be solved with standard descent-based local minimization methods. This includes the latter two classes of problems mentioned above. There are significant increases in the computational requirements when PMOGO methods are used but these can be ameliorated using parallelization and problem dimension reduction strategies.
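The bookkeeping at the heart of a PMOGO scheme, keeping only non-dominated models, is illustrated by the sketch below on random two-objective scores; the global-optimization machinery itself is not shown.

```python
# Non-dominated (Pareto) filtering of candidate models scored on two objectives
# (e.g. data misfit and a regularization term); smaller is better for both.
import numpy as np

def pareto_front(objectives):
    # objectives: (n_models, n_objectives)
    n = objectives.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        if not keep[i]:
            continue
        dominates_i = np.all(objectives <= objectives[i], axis=1) & \
                      np.any(objectives < objectives[i], axis=1)
        if np.any(dominates_i):
            keep[i] = False
    return np.where(keep)[0]

rng = np.random.default_rng(10)
scores = rng.random((200, 2))                     # [data misfit, model roughness]
front = pareto_front(scores)
print("Pareto-optimal models:", len(front), "of", len(scores))
```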
An inverse dynamics approach to trajectory optimization and guidance for an aerospace plane
NASA Technical Reports Server (NTRS)
Lu, Ping
1992-01-01
The optimal ascent problem for an aerospace plane is formulated as an optimal inverse dynamics problem. Both minimum-fuel and minimax types of performance indices are considered. Some important features of the optimal trajectory and controls are used to construct a nonlinear feedback midcourse controller, which not only greatly simplifies the difficult constrained optimization problem and yields improved solutions, but is also suited for onboard implementation. Robust ascent guidance is obtained by using a combination of feedback compensation and onboard generation of control through the inverse dynamics approach. Accurate orbital insertion can be achieved with near-optimal control of the rocket through inverse dynamics even in the presence of disturbances.
Time-reversal and Bayesian inversion
NASA Astrophysics Data System (ADS)
Debski, Wojciech
2017-04-01
The probabilistic inversion technique is superior to the classical optimization-based approach in all but one aspect: it requires exhaustive computations that prohibit its use in very large inverse problems such as global seismic tomography or waveform inversion, to name a few. The advantages of the approach are, however, so appealing that there is a continuous effort to make such large inverse tasks manageable with the probabilistic inverse approach. One promising possibility for achieving this goal relies on exploiting the internal symmetries of the seismological modeling problems at hand - time reversal and reciprocity invariance. These two basic properties of the elastic wave equation, when incorporated into the probabilistic inversion schemata, open new horizons for Bayesian inversion. In this presentation we discuss the time reversal symmetry property, its mathematical aspects, and propose how to combine it with the probabilistic inverse theory into a compact, fast inversion algorithm. We illustrate the proposed idea with the newly developed location algorithm TRMLOC and discuss its efficiency when applied to mining-induced seismic data.
LORETA EEG phase reset of the default mode network
Thatcher, Robert W.; North, Duane M.; Biver, Carl J.
2014-01-01
Objectives: The purpose of this study was to explore phase reset of 3-dimensional current sources in Brodmann areas located in the human default mode network (DMN) using Low Resolution Electromagnetic Tomography (LORETA) of the human electroencephalogram (EEG). Methods: The EEG was recorded from 19 scalp locations in 70 healthy normal subjects ranging in age from 13 to 20 years. LORETA current sources were computed time point by time point for 14 Brodmann areas comprising the DMN in the delta frequency band. The Hilbert transform of the LORETA time series was used to compute the instantaneous phase differences between all pairs of Brodmann areas. Phase shift and lock durations were calculated based on the 1st and 2nd derivatives of the time series of phase differences. Results: Phase shift duration exhibited three discrete modes at approximately (1) 25 ms, (2) 50 ms, and (3) 65 ms. Phase lock duration was present primarily at (1) 300–350 ms and (2) 350–450 ms. Phase shift and lock durations were inversely related and exhibited an exponential change with distance between Brodmann areas. Conclusions: The results are explained by local neural packing density of network hubs and an exponential decrease in connections with distance from a hub. The results are consistent with a discrete temporal model of brain function in which anatomical hubs behave like a “shutter” that opens and closes at specific durations as nodes of a network, giving rise to temporarily phase-locked clusters of neurons for specific durations. PMID:25100976
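The Hilbert-transform machinery behind these measurements can be illustrated on synthetic band-limited signals as below; the signals stand in for LORETA source time series, and the derivative threshold used to flag phase shifts is illustrative only.

```python
# Instantaneous phase difference between two band-limited signals via the
# Hilbert transform, with a crude derivative-based flag for phase-shift events.
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

fs = 128
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(11)
b, a = butter(4, [1, 4], btype="bandpass", fs=fs)     # delta band
s1 = filtfilt(b, a, rng.standard_normal(t.size))
s2 = filtfilt(b, a, rng.standard_normal(t.size))

phase_diff = np.unwrap(np.angle(hilbert(s1)) - np.angle(hilbert(s2)))
d_phase = np.abs(np.diff(phase_diff)) * fs            # 1st derivative, rad/s
shifting = d_phase > np.percentile(d_phase, 95)       # flag rapid phase changes
print("fraction of samples flagged as phase shift:", shifting.mean().round(3))
```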
Inverse random source scattering for the Helmholtz equation in inhomogeneous media
NASA Astrophysics Data System (ADS)
Li, Ming; Chen, Chuchu; Li, Peijun
2018-01-01
This paper is concerned with an inverse random source scattering problem in an inhomogeneous background medium. The wave propagation is modeled by the stochastic Helmholtz equation with the source driven by additive white noise. The goal is to reconstruct the statistical properties of the random source such as the mean and variance from the boundary measurement of the radiated random wave field at multiple frequencies. Both the direct and inverse problems are considered. We show that the direct problem has a unique mild solution by a constructive proof. For the inverse problem, we derive Fredholm integral equations, which connect the boundary measurement of the radiated wave field with the unknown source function. A regularized block Kaczmarz method is developed to solve the ill-posed integral equations. Numerical experiments are included to demonstrate the effectiveness of the proposed method.
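A plain relaxed Kaczmarz sweep, the row-action idea that the regularized block method builds on, is sketched below for a toy ill-conditioned linear system; the Fredholm integral equations of the paper are not discretized here.

```python
# Relaxed Kaczmarz iteration for a linear system A x = b, the kind of
# row-action solver used (in regularized block form) for ill-posed problems.
import numpy as np

rng = np.random.default_rng(12)
m, n = 80, 40
U, _ = np.linalg.qr(rng.standard_normal((m, m)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 10.0 ** np.linspace(0, -4, n)                 # rapidly decaying singular values
A = U[:, :n] * s @ V.T
x_true = rng.standard_normal(n)
b = A @ x_true + 1e-4 * rng.standard_normal(m)

x, lam = np.zeros(n), 0.5                          # relaxation parameter
for sweep in range(20):
    for i in range(m):                             # cyclic row projections
        ai = A[i]
        x += lam * (b[i] - ai @ x) / (ai @ ai) * ai
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```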
NASA Astrophysics Data System (ADS)
Marinin, I. V.; Kabanikhin, S. I.; Krivorotko, O. I.; Karas, A.; Khidasheli, D. G.
2012-04-01
We consider new techniques and methods for earthquake- and tsunami-related problems, particularly inverse problems for the determination of tsunami source parameters, numerical simulation of long wave propagation in soil and water, and tsunami risk estimation. In addition, we touch upon the issue of database management and destruction scenario visualization. New approaches and strategies, as well as mathematical tools and software, are presented. The long joint investigations by researchers of the Institute of Mathematical Geophysics and Computational Mathematics SB RAS and specialists from WAPMERR and Informap have produced special theoretical approaches, numerical methods, and software for tsunami and earthquake modeling (modeling of the propagation and run-up of tsunami waves on coastal areas), visualization, and risk estimation for tsunamis and earthquakes. Algorithms are developed for the operational determination of the origin and form of the tsunami source. The system TSS numerically simulates the source of tsunamis and/or earthquakes and includes the possibility to solve both the direct and the inverse problem. It becomes possible to involve advanced mathematical results to improve models and to increase the resolution of inverse problems. Via TSS one can construct risk maps, online disaster scenarios, and estimates of potential damage to buildings and roads. One of the main tools for the numerical modeling is the finite volume method (FVM), which allows us to achieve stability with respect to possible input errors, as well as optimum computing speed. Our approach to the inverse problem of tsunami and earthquake determination is based on recent theoretical results concerning the Dirichlet problem for the wave equation. This problem is intrinsically ill-posed. We use the optimization approach to solve it and SVD analysis to estimate the degree of ill-posedness and to find the quasi-solution. The software system we developed is intended to realize a steady stream of direct and inverse problem runs («no frost» technology): solving the direct problem, visualizing and comparing the results with observed data, and solving the inverse problem (correcting the model parameters). The main objective of further work is the creation of an operational workstation tool that could be used by an emergency duty officer in real time.
NASA Astrophysics Data System (ADS)
Uhlmann, Gunther
2008-07-01
This volume represents the proceedings of the fourth Applied Inverse Problems (AIP) international conference and the first congress of the Inverse Problems International Association (IPIA), which was held in Vancouver, Canada, June 25-29, 2007. The organizing committee was formed by Uri Ascher, University of British Columbia, Richard Froese, University of British Columbia, Gary Margrave, University of Calgary, and Gunther Uhlmann, University of Washington, chair. The conference was part of the activities of the Pacific Institute of Mathematical Sciences (PIMS) Collaborative Research Group on inverse problems (http://www.pims.math.ca/scientific/collaborative-research-groups/past-crgs). This event was also supported by grants from NSF and MITACS. Inverse Problems (IP) are problems where causes for a desired or an observed effect are to be determined. They lie at the heart of scientific inquiry and technological development. The enormous increase in computing power and the development of powerful algorithms have made it possible to apply the techniques of IP to real-world problems of growing complexity. Applications include a number of medical as well as other imaging techniques, location of oil and mineral deposits in the earth's substructure, creation of astrophysical images from telescope data, finding cracks and interfaces within materials, shape optimization, model identification in growth processes and, more recently, modelling in the life sciences. The series of Applied Inverse Problems (AIP) Conferences aims to provide a primary international forum for academic and industrial researchers working on all aspects of inverse problems, such as mathematical modelling, functional analytic methods, computational approaches, numerical algorithms etc. The steering committee of the AIP conferences consists of Heinz Engl (Johannes Kepler Universität, Austria), Joyce McLaughlin (RPI, USA), William Rundell (Texas A&M, USA), Erkki Somersalo (Helsinki University of Technology, Finland), Masahiro Yamamoto (University of Tokyo, Japan), Gunther Uhlmann (University of Washington) and Jun Zou (Chinese University of Hong Kong). IPIA is a recently formed organization that intends to promote the field of inverse problems at all levels. See http://www.inverse-problems.net/. IPIA awarded the first Calderón prize at the opening of the conference to Matti Lassas (see the first article in the Proceedings). There was also a general meeting of IPIA during the workshop. This was probably the largest conference ever on IP, with 350 registered participants. The program consisted of 18 invited speakers and the Calderón Prize Lecture given by Matti Lassas. Another integral part of the program was the more than 60 mini-symposia that covered a broad spectrum of the theory and applications of inverse problems, focusing on recent developments in medical imaging, seismic exploration, remote sensing, industrial applications, and numerical and regularization methods in inverse problems. Another important related topic was image processing, in particular the advances that have allowed significant enhancement of widely used imaging techniques. For more details on the program see the web page: http://www.pims.math.ca/science/2007/07aip. These proceedings reflect the broad spectrum of topics covered in AIP 2007. The conference and these proceedings would not have happened without the contributions of many people.
I thank all my fellow organizers, the invited speakers, the speakers and organizers of mini-symposia for making this an exciting and vibrant event. I also thank PIMS, NSF and MITACS for their generous financial support. I take this opportunity to thank the PIMS staff, particularly Ken Leung, for making the local arrangements. Also thanks are due to Stephen McDowall for his help in preparing the schedule of the conference and Xiaosheng Li for the help in preparing these proceedings. I also would like to thank the contributors of this volume and the referees. Finally, many thanks are due to Graham Douglas and Elaine Longden-Chapman for suggesting publication in Journal of Physics: Conference Series.
NASA Astrophysics Data System (ADS)
Tandon, K.; Egbert, G.; Siripunvaraporn, W.
2003-12-01
We are developing a modular system for three-dimensional inversion of electromagnetic (EM) induction data, using an object-oriented programming approach. This approach allows us to modify the individual components of the proposed inversion scheme and also to reuse the components for a variety of problems in earth science computing, however diverse they might be. In particular, the modularity allows us to (a) change modeling codes independently of inversion algorithm details; (b) experiment with new inversion algorithms; and (c) modify the way prior information is imposed in the inversion to test competing hypotheses and techniques required to solve an earth science problem. Our initial code development is for the EM induction equations on a staggered grid, using iterative solution techniques in 3D. An example illustrated here is an experiment with the sensitivity of 3D magnetotelluric inversion to uncertainties in the boundary conditions required for regional induction problems. These boundary conditions should reflect the large-scale geoelectric structure of the study area, which is usually poorly constrained. In general, for inversion of MT data one fixes boundary conditions at the edge of the model domain and adjusts the earth's conductivity structure within the modeling domain. Allowing for errors in the specification of the open boundary values is simple in principle, but no existing inversion codes that we are aware of have this feature. Adding such a feature is straightforward within the context of the modular approach. More generally, a modular approach provides an efficient methodology for setting up earth science computing problems to test various ideas. As a concrete illustration relevant to EM induction problems, we investigate the sensitivity of MT data near the San Andreas Fault at Parkfield (California) to uncertainties in the regional geoelectric structure.
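The modular idea can be pictured with a small sketch in which the forward solver and the inversion driver are separate, exchangeable objects; the class and method names here are hypothetical and only illustrate the design, not the authors' code.

```python
# Hypothetical sketch of a modular inversion design: the forward model and
# the inversion driver interact only through a narrow interface, so either
# can be replaced (e.g., a new modeling code or a new inversion algorithm)
# without touching the other.
import numpy as np

class ForwardModel:
    """Wraps a modeling code; only predict() is visible to the inversion."""
    def __init__(self, solver):
        self.solver = solver          # e.g., a staggered-grid EM induction code
    def predict(self, model):
        return self.solver(model)

class DampedGaussNewton:
    """One possible inversion driver; swapping it leaves ForwardModel intact."""
    def __init__(self, forward, jacobian, lam=1.0):
        self.forward, self.jacobian, self.lam = forward, jacobian, lam
    def step(self, model, data):
        J = self.jacobian(model)                     # sensitivity matrix
        r = data - self.forward.predict(model)       # data residual
        H = J.T @ J + self.lam * np.eye(model.size)  # damped normal equations
        return model + np.linalg.solve(H, J.T @ r)
```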
Comparison of Amplitude-Integrated EEG and Conventional EEG in a Cohort of Premature Infants.
Meledin, Irina; Abu Tailakh, Muhammad; Gilat, Shlomo; Yogev, Hagai; Golan, Agneta; Novack, Victor; Shany, Eilon
2017-03-01
To compare amplitude-integrated EEG (aEEG) and conventional EEG (EEG) activity in premature neonates. Biweekly aEEG and EEG were simultaneously recorded in a cohort of infants born at less than 34 weeks' gestation. aEEG recordings were visually assessed for lower and upper border amplitude and bandwidth. EEG recordings were compressed for visual evaluation of continuity and assessed using signal processing software for interburst intervals (IBI) and frequency amplitudes. Ten-minute segments of aEEG and EEG indices were compared using regression analysis. A total of 189 recordings from 67 infants were made, from which 1697 aEEG/EEG pairs of 10-minute segments were assessed. Good concordance was found for visual assessment of continuity between the 2 methods. EEG IBI and alpha and theta frequency amplitudes were negatively correlated with the aEEG lower border, while conceptional age (CA) was positively correlated with the aEEG lower border (P < .001). IBI and all frequency amplitudes were positively correlated with the upper aEEG border (P ≤ .001). CA was negatively correlated with the aEEG span, while IBI and alpha, beta, and theta frequency amplitudes were positively correlated with the aEEG span. Important information is retained and integrated in the transformation of premature neonatal EEG to aEEG. aEEG recordings in high-risk premature neonates reliably reflect EEG background information related to continuity and amplitude.
Butler, Troy; Graham, L.; Estep, D.; ...
2015-02-03
The uncertainty in spatially heterogeneous Manning’s n fields is quantified using a novel formulation and numerical solution of stochastic inverse problems for physics-based models. The uncertainty is quantified in terms of a probability measure and the physics-based model considered here is the state-of-the-art ADCIRC model, although the presented methodology applies to other hydrodynamic models. An accessible overview of the formulation and solution of the stochastic inverse problem in a mathematically rigorous framework based on measure theory is presented in this paper. Technical details that arise in practice by applying the framework to determine the Manning’s n parameter field in a shallow water equation model used for coastal hydrodynamics are presented, and an efficient computational algorithm and open source software package are developed. A new notion of “condition” for the stochastic inverse problem is defined and analyzed as it relates to the computation of probabilities. Finally, this notion of condition is investigated to determine effective output quantities of interest of maximum water elevations to use for the inverse problem for the Manning’s n parameter, and the effect on model predictions is analyzed.
NASA Astrophysics Data System (ADS)
Delahunty, Thomas; Seery, Niall; Lynch, Raymond
2018-04-01
Currently, there is significant interest being directed towards the development of STEM education to meet economic and societal demands. While economic concerns can be a powerful driving force in advancing the STEM agenda, care must be taken that such economic imperative does not promote research approaches that overemphasize pragmatic application at the expense of augmenting the fundamental knowledge base of the discipline. This can be seen in the predominance of studies investigating problem solving approaches and procedures, while neglecting representational and conceptual processes, within the literature. Complementing concerns about STEM graduates' problem solving capabilities, raised within the pertinent literature, this paper discusses a novel methodological approach aimed at investigating the cognitive elements of problem conceptualization. The intention is to demonstrate a novel method of data collection that overcomes some of the limitations cited in classic problem solving research while balancing a search for fundamental understanding with the possibility of application. The methodology described in this study employs an electroencephalographic (EEG) headset, as part of a mixed methods approach, to gather objective evidence of students' cognitive processing during problem solving epochs. The method described provides rich evidence of students' cognitive representations of problems during episodes of applied reasoning. The reliability and validity of the EEG method is supported by the stability of the findings across the triangulated data sources. The paper presents a novel method in the context of research within STEM education and demonstrates an effective procedure for gathering rich evidence of cognitive processing during the early stages of problem conceptualization.
Association between pubertal stage at first drink and neural reward processing in early adulthood.
Boecker-Schlier, Regina; Holz, Nathalie E; Hohm, Erika; Zohsel, Katrin; Blomeyer, Dorothea; Buchmann, Arlette F; Baumeister, Sarah; Wolf, Isabella; Esser, Günter; Schmidt, Martin H; Meyer-Lindenberg, Andreas; Banaschewski, Tobias; Brandeis, Daniel; Laucht, Manfred
2017-09-01
Puberty is a critical time period during human development. It is characterized by high levels of risk-taking behavior, such as increased alcohol consumption, and is accompanied by various neurobiological changes. Recent studies in animals and humans have revealed that the pubertal stage at first drink (PSFD) significantly impacts drinking behavior in adulthood. Moreover, neuronal alterations of the dopaminergic reward system have been associated with alcohol abuse or addiction. This study aimed to clarify the impact of PSFD on neuronal characteristics of reward processing linked to alcohol-related problems. One hundred sixty-eight healthy young adults from a prospective study covering 25 years participated in a monetary incentive delay task measured with simultaneous EEG-fMRI. PSFD was determined according to the age at menarche or Tanner stage of pubertal development, respectively. Alcohol-related problems in early adulthood were assessed with the Alcohol Use Disorder Identification Test (AUDIT). During reward anticipation, decreased fMRI activation of the frontal cortex and increased preparatory EEG activity (contingent negative variation) occurred with pubertal compared to postpubertal first alcohol intake. Moreover, alcohol-related problems during early adulthood were increased in pubertal compared to postpubertal beginners, which was mediated by neuronal activation of the right medial frontal gyrus. At reward delivery, increased fMRI activation of the left caudate and higher feedback-related EEG negativity were detected in pubertal compared to postpubertal beginners. Together with animal findings, these results implicate PSFD as a potential modulator of psychopathology, involving altered reward anticipation. Both PSFD timing and reward processing might thus be potential targets for early prevention and intervention. © 2016 Society for the Study of Addiction.
A Riemann-Hilbert approach to the inverse problem for the Stark operator on the line
NASA Astrophysics Data System (ADS)
Its, A.; Sukhanov, V.
2016-05-01
The paper is concerned with the inverse scattering problem for the Stark operator on the line with a potential from the Schwartz class. In our study of the inverse problem, we use the Riemann-Hilbert formalism. This allows us to overcome the principal technical difficulties which arise in the more traditional approaches based on the Gel’fand-Levitan-Marchenko equations, and indeed solve the problem. We also produce a complete description of the relevant scattering data (which have not been obtained in the previous works on the Stark operator) and establish the bijection between the Schwartz class potentials and the scattering data.
Hinault, Thomas; Lemaire, Patrick; Phillips, Natalie
2016-01-01
This study investigated age-related differences in electrophysiological signatures of sequential modulations of poorer strategy effects. Sequential modulations of poorer strategy effects refer to decreased poorer strategy effects (i.e., poorer performance when the cued strategy is not the best) on the current problem following poorer strategy problems compared to after better strategy problems. Analyses of electrophysiological (EEG) data revealed important age-related changes in the time, frequency, and coherence of brain activities underlying sequential modulations of poorer strategy effects. More specifically, sequential modulations of poorer strategy effects were associated with earlier and later time windows (i.e., between 200 and 550 ms and between 850 and 1250 ms). Event-related potentials (ERPs) also revealed an earlier onset in older adults, together with more anterior and less lateralized activations. Furthermore, sequential modulations of poorer strategy effects were associated with theta and alpha frequencies in young adults, while these modulations were found in the delta frequency and in theta inter-hemispheric coherence in older adults, consistent with qualitatively distinct patterns of brain activity. These findings have important implications for furthering our understanding of age-related differences and similarities in sequential modulations of cognitive control processes during arithmetic strategy execution. Copyright © 2015 Elsevier B.V. All rights reserved.
A motion-classification strategy based on sEMG-EEG signal combination for upper-limb amputees.
Li, Xiangxin; Samuel, Oluwarotimi Williams; Zhang, Xu; Wang, Hui; Fang, Peng; Li, Guanglin
2017-01-07
Most modern motorized prostheses are controlled with the surface electromyography (sEMG) recorded on the residual muscles of amputated limbs. However, the residual muscles are usually limited, especially after above-elbow amputations, and may not provide enough sEMG for the control of prostheses with multiple degrees of freedom. Signal fusion is a possible approach to solve the problem of insufficient control commands, where some non-EMG signals are combined with sEMG signals to provide sufficient information for motion intention decoding. In this study, a motion-classification method that combines sEMG and electroencephalography (EEG) signals was proposed and investigated, in order to improve the control performance of upper-limb prostheses. Four transhumeral amputees without any form of neurological disease were recruited in the experiments. Five motion classes including hand-open, hand-close, wrist-pronation, wrist-supination, and no-movement were specified. During the motion performances, sEMG and EEG signals were simultaneously acquired from the skin surface and scalp of the amputees, respectively. The two types of signals were independently preprocessed and then combined as a parallel control input. Four time-domain features were extracted and fed into a classifier trained by the Linear Discriminant Analysis (LDA) algorithm for motion recognition. In addition, channel selection was performed using the Sequential Forward Selection (SFS) algorithm to optimize the performance of the proposed method. The classification performance achieved by the fusion of sEMG and EEG signals was significantly better than that obtained from a single signal source of either sEMG or EEG. An increment of more than 14% in classification accuracy was achieved when using a combination of 32-channel sEMG and 64-channel EEG. Furthermore, based on the SFS algorithm, two optimized electrode arrangements (10-channel sEMG + 10-channel EEG, 10-channel sEMG + 20-channel EEG) were obtained with classification accuracies of 84.2 and 87.0%, respectively, which were about 7.2 and 10% higher than the accuracy achieved using only the 32-channel sEMG input. This study demonstrated the feasibility of fusing sEMG and EEG signals towards improving motion classification accuracy for above-elbow amputees, which might enhance the control performance of multifunctional myoelectric prostheses in clinical application. The study was approved by the ethics committee of the Institutional Review Board of Shenzhen Institutes of Advanced Technology, and the reference number is SIAT-IRB-150515-H0077.
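A compact sketch of the fusion scheme described above follows: time-domain features are computed per channel for the sEMG and EEG windows, concatenated into one feature vector, and classified with LDA. The particular feature set and the omission of windowing and SFS details are simplifying assumptions.

```python
# Hedged sketch of parallel sEMG + EEG feature fusion followed by LDA
# classification (illustrative feature choices, not the study's exact setup).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def time_domain_features(window):
    """Four common time-domain features per channel (channels x samples)."""
    mav = np.mean(np.abs(window), axis=1)                        # mean absolute value
    wl = np.sum(np.abs(np.diff(window, axis=1)), axis=1)         # waveform length
    zc = np.sum(np.diff(np.sign(window), axis=1) != 0, axis=1)   # zero crossings
    ssc = np.sum(np.diff(np.sign(np.diff(window, axis=1)), axis=1) != 0, axis=1)  # slope sign changes
    return np.concatenate([mav, wl, zc, ssc])

def fuse(emg_window, eeg_window):
    """Parallel fusion: concatenate the two feature vectors."""
    return np.concatenate([time_domain_features(emg_window),
                           time_domain_features(eeg_window)])

def train(windows, labels):
    """windows: list of (emg_window, eeg_window) pairs; labels: motion classes."""
    feats = np.array([fuse(e, g) for e, g in windows])
    return LinearDiscriminantAnalysis().fit(feats, labels)
```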
Applicability of the "Emotiv EEG Neuroheadset" as a user-friendly input interface.
Boutani, Hidenori; Ohsuga, Mieko
2013-01-01
We aimed to develop an input interface by using the P3 component of visual event-related potentials (ERPs). When using electroencephalography (EEG) in daily applications, coping with ocular-motor artifacts and ensuring that the equipment is user-friendly are both important. To address the first issue, we applied a previously proposed method that applies an unmixing matrix to acquire independent components (ICs) obtained from another dataset. For the second issue, we introduced a 14-channel EEG commercial headset called the "Emotiv EEG Neuroheadset". An advantage of the Emotiv headset is that users can put it on by themselves within 1 min without any specific skills. However, only a few studies have investigated whether EEG and ERP signals are accurately measured by Emotiv. Additionally, no electrodes of the Emotiv headset are located over the centroparietal area of the head where P3 components are reported to show large amplitudes. Therefore, we first demonstrated that the P3 components obtained by the headset and by commercial plate electrodes and a multipurpose bioelectric amplifier during an oddball task were comparable. Next, we confirmed that eye-blink and ocular movement components could be decomposed by independent component analysis (ICA) using the 14-channel signals measured by the headset. We also demonstrated that artifacts could be removed with an unmixing matrix, as long as the matrix was obtained from the same person, even if they were measured on different days. Finally, we confirmed that the fluctuation of the sampling frequency of the Emotiv headset was not a major problem.
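The artifact-removal step can be sketched as follows: a previously learned unmixing matrix (for example, obtained by ICA on an earlier recording of the same person) is applied to the new 14-channel data, the ocular components are zeroed, and the data are back-projected to the channel space. Variable names and component indices are assumptions.

```python
# Illustrative sketch of artifact removal with a pre-learned unmixing matrix.
import numpy as np

def remove_ocular_components(X, W, artifact_ics):
    """X: channels x samples raw EEG; W: unmixing matrix (components x channels);
    artifact_ics: indices of eye-blink / ocular-movement components."""
    S = W @ X                       # independent components
    S[artifact_ics, :] = 0.0        # discard ocular components
    A = np.linalg.pinv(W)           # mixing matrix (back-projection)
    return A @ S                    # cleaned channel-space data
```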
Van Regenmortel, Marc H. V.
2018-01-01
Hypotheses and theories are essential constituents of the scientific method. Many vaccinologists are unaware that the problems they try to solve are mostly inverse problems that consist in imagining what could bring about a desired outcome. An inverse problem starts with the result and tries to guess what are the multiple causes that could have produced it. Compared to the usual direct scientific problems that start with the causes and derive or calculate the results using deductive reasoning and known mechanisms, solving an inverse problem uses a less reliable inductive approach and requires the development of a theoretical model that may have different solutions or none at all. Unsuccessful attempts to solve inverse problems in HIV vaccinology by reductionist methods, systems biology and structure-based reverse vaccinology are described. The popular strategy known as rational vaccine design is unable to solve the multiple inverse problems faced by HIV vaccine developers. The term “rational” is derived from “rational drug design” which uses the 3D structure of a biological target for designing molecules that will selectively bind to it and inhibit its biological activity. In vaccine design, however, the word “rational” simply means that the investigator is concentrating on parts of the system for which molecular information is available. The economist and Nobel laureate Herbert Simon introduced the concept of “bounded rationality” to explain why the complexity of the world economic system makes it impossible, for instance, to predict an event like the financial crash of 2007–2008. Humans always operate under unavoidable constraints such as insufficient information, a limited capacity to process huge amounts of data and a limited amount of time available to reach a decision. Such limitations always prevent us from achieving the complete understanding and optimization of a complex system that would be needed to achieve a truly rational design process. This is why the complexity of the human immune system prevents us from rationally designing an HIV vaccine by solving inverse problems. PMID:29387066
2012-01-01
Background We describe and characterize the performance of microEEG compared to that of a commercially available and widely used clinical EEG machine. microEEG is a portable, battery-operated, wireless EEG device, developed by Bio-Signal Group to overcome the obstacles to routine use of EEG in emergency departments (EDs). Methods The microEEG was used to obtain EEGs from healthy volunteers in the EEG laboratory and ED. The standard system was used to obtain EEGs from healthy volunteers in the EEG laboratory, and studies recorded from patients in the ED or ICU were also used for comparison. In one experiment, a signal splitter was used to record simultaneous microEEG and standard EEG from the same electrodes. Results EEG signal analysis techniques indicated good agreement between microEEG and the standard system in 66 EEGs recorded in the EEG laboratory and the ED. In the simultaneous recording the microEEG and standard system signals differed only in a smaller amount of 60 Hz noise in the microEEG signal. In a blinded review by a board-certified clinical neurophysiologist, differences in technical quality or interpretability were insignificant between standard recordings in the EEG laboratory and microEEG recordings from standard or electrode cap electrodes in the ED or EEG laboratory. The microEEG data recording characteristics such as analog-to-digital conversion resolution (16 bits), input impedance (>100MΩ), and common-mode rejection ratio (85 dB) are similar to those of commercially available systems, although the microEEG is many times smaller (88 g and 9.4 × 4.4 × 3.8 cm). Conclusions Our results suggest that the technical qualities of microEEG are non-inferior to a standard commercially available EEG recording device. EEG in the ED is an unmet medical need due to space and time constraints, high levels of ambient electrical noise, and the cost of 24/7 EEG technologist availability. This study suggests that using microEEG with an electrode cap that can be applied easily and quickly can surmount these obstacles without compromising technical quality. PMID:23006616
The importance of coherence in inverse problems in optics
NASA Astrophysics Data System (ADS)
Ferwerda, H. A.; Baltes, H. P.; Glass, A. S.; Steinle, B.
1981-12-01
Current inverse problems of statistical optics are presented with a guide to relevant literature. The inverse problems are categorized into four groups, and the Van Cittert-Zernike theorem and its generalization are discussed. The retrieval of structural information from the far-zone degree of coherence and the time-averaged intensity distribution of radiation scattered by a superposition of random and periodic scatterers are also discussed. In addition, formulas for the calculation of far-zone properties are derived within the framework of scalar optics, and results are applied to two examples.
Contradictory Reasoning Network: An EEG and fMRI Study
Thai, Ngoc Jade; Seri, Stefano; Rotshtein, Pia; Tecchio, Franca
2014-01-01
Contradiction is a cornerstone of human rationality, essential for everyday life and communication. We investigated electroencephalographic (EEG) and functional magnetic resonance imaging (fMRI) activity, recorded in separate sessions, during contradictory judgments, using a logical structure based on categorical propositions of the Aristotelian Square of Opposition (ASoO). The use of ASoO propositions, while controlling for potential linguistic or semantic confounds, enabled us to observe the spatio-temporal unfolding of this contradictory reasoning. The processing started with the inversion of the logical operators, corresponding to right middle frontal gyrus (rMFG-BA11) activation, followed by identification of the contradictory statement, associated with activation in the right inferior frontal gyrus (rIFG-BA47). The right medial frontal gyrus (rMeFG, BA10) and anterior cingulate cortex (ACC, BA32) contributed to the later stages of the process. We observed a correlation between the delayed latency of the rBA11 response and the reaction time delay during inductive vs. deductive reasoning. This supports the notion that rBA11 is crucial for manipulating the logical operators. Slower processing time and stronger brain responses for inductive logic suggested that examples are easier to process than general principles and are more likely to simplify communication. PMID:24667491
Down syndrome's brain dynamics: analysis of fractality in resting state.
Hemmati, Sahel; Ahmadlou, Mehran; Gharib, Masoud; Vameghi, Roshanak; Sajedi, Firoozeh
2013-08-01
To the best of the authors' knowledge, there is no study on the nonlinear brain dynamics of Down syndrome (DS) patients, even though the brain is a highly complex and nonlinear system. In this study, the fractal dimension of the EEG, a key characteristic of brain dynamics reflecting the irregularity and complexity of brain activity, was used to evaluate the dynamical changes in the DS brain. The results showed higher fractality of the DS brain in almost all regions compared to the normal brain, which indicates less centrality and more irregular or random functioning of the DS brain regions. Also, laterality analysis of the frontal lobe showed that the normal brain had a right frontal laterality of complexity whereas the DS brain had an inverse pattern (left frontal laterality). Furthermore, the high accuracy of 95.8 % obtained by an enhanced probabilistic neural network classifier showed the potential of nonlinear dynamic analysis of the brain for the diagnosis of DS patients. Moreover, the results showed that the higher EEG fractality in DS is associated with higher fractality in the low frequencies (delta and theta) in broad regions of the brain, and in the high frequencies (beta and gamma) mainly in the frontal regions.
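The abstract does not state which fractal-dimension estimator was used; Higuchi's method, sketched below, is one estimator commonly applied to EEG and is shown purely to illustrate how such a measure is computed from a single-channel signal.

```python
# Higuchi fractal dimension of a 1-D signal (illustrative choice of estimator).
import numpy as np

def higuchi_fd(x, k_max=10):
    """Estimate the Higuchi fractal dimension of a 1-D signal x."""
    N = len(x)
    lk = []
    for k in range(1, k_max + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, N, k)
            if len(idx) < 2:
                continue
            # normalized curve length for offset m and delay k
            L = np.sum(np.abs(np.diff(x[idx]))) * (N - 1) / ((len(idx) - 1) * k)
            lengths.append(L / k)
        lk.append(np.mean(lengths))
    # slope of log L(k) versus log(1/k) estimates the fractal dimension
    coeffs = np.polyfit(np.log(1.0 / np.arange(1, k_max + 1)), np.log(lk), 1)
    return coeffs[0]
```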
Impaired cortical activation in autistic children: is the mirror neuron system involved?
Martineau, Joëlle; Cochin, Stéphanie; Magne, Rémy; Barthelemy, Catherine
2008-04-01
The inability to imitate becomes obvious early in autistic children and seems to contribute to learning delay and to disorders of communication and contact. Posture, motility and imitation disorders in autistic syndrome might be the consequence of an abnormality of sensori-motor integration, related to the visual perception of movement, and could reflect impairment of the mirror neuron system (MNS). We compared EEG activity during the observation of videos showing actions or still scenes in 14 right-handed autistic children and 14 right-handed, age- and gender-matched control children (3 girls and 11 boys, aged 5 years 3 months to 7 years 11 months). We showed desynchronisation of the EEG in the motor cerebral cortex and the frontal and temporal areas during observation of human actions in the group of healthy children. No such desynchronisation was found in autistic children. Moreover, an inversion of the pattern of hemispheric activation was found in autistic children, with increased cortical activity in the right hemisphere in the posterior region, including the centro-parietal and temporo-occipital sites. These results are in agreement with the hypothesis of impairment of the mirror neuron system in autistic disorder.
Comparison of iterative inverse coarse-graining methods
NASA Astrophysics Data System (ADS)
Rosenberger, David; Hanke, Martin; van der Vegt, Nico F. A.
2016-10-01
Deriving potentials for coarse-grained Molecular Dynamics (MD) simulations is frequently done by solving an inverse problem. Methods like Iterative Boltzmann Inversion (IBI) or Inverse Monte Carlo (IMC) have been widely used to solve this problem. The solution obtained by application of these methods guarantees a match in the radial distribution function (RDF) between the underlying fine-grained system and the derived coarse-grained system. However, these methods often fail in reproducing thermodynamic properties. To overcome this deficiency, additional thermodynamic constraints such as pressure or Kirkwood-Buff integrals (KBI) may be added to these methods. In this communication we test the ability of these methods to converge to a known solution of the inverse problem. With this goal in mind we have studied a binary mixture of two simple Lennard-Jones (LJ) fluids, in which no actual coarse-graining is performed. We further discuss whether full convergence is actually needed to achieve thermodynamic representability.
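The Iterative Boltzmann Inversion update mentioned above can be written in a few lines: the pair potential is corrected by the log-ratio of the current and target radial distribution functions. The damping factor and the small offset guarding the logarithm are illustrative choices.

```python
# Compact sketch of one IBI update: U_{i+1}(r) = U_i(r) + kT ln(g_i(r)/g_target(r)),
# with a damping factor alpha for numerical stability.
import numpy as np

def ibi_update(U, g_current, g_target, kT=1.0, alpha=0.2, eps=1e-8):
    """U, g_current, g_target: arrays tabulated on the same r-grid."""
    correction = kT * np.log((g_current + eps) / (g_target + eps))
    return U + alpha * correction   # damped step toward matching the target RDF
```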
Atmospheric inverse modeling via sparse reconstruction
NASA Astrophysics Data System (ADS)
Hase, Nils; Miller, Scot M.; Maaß, Peter; Notholt, Justus; Palm, Mathias; Warneke, Thorsten
2017-10-01
Many applications in atmospheric science involve ill-posed inverse problems. A crucial component of many inverse problems is the proper formulation of a priori knowledge about the unknown parameters. In most cases, this knowledge is expressed as a Gaussian prior. This formulation often performs well at capturing smoothed, large-scale processes but is often ill equipped to capture localized structures like large point sources or localized hot spots. Over the last decade, scientists from a diverse array of applied mathematics and engineering fields have developed sparse reconstruction techniques to identify localized structures. In this study, we present a new regularization approach for ill-posed inverse problems in atmospheric science. It is based on Tikhonov regularization with sparsity constraint and allows bounds on the parameters. We enforce sparsity using a dictionary representation system. We analyze its performance in an atmospheric inverse modeling scenario by estimating anthropogenic US methane (CH4) emissions from simulated atmospheric measurements. Different measures indicate that our sparse reconstruction approach is better able to capture large point sources or localized hot spots than other methods commonly used in atmospheric inversions. It captures the overall signal equally well but adds details on the grid scale. This feature can be of value for any inverse problem with point or spatially discrete sources. We show an example for source estimation of synthetic methane emissions from the Barnett shale formation.
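In the same spirit, sparsity-promoting regularization for a linear flux inversion can be sketched with a simple iterative soft-thresholding (ISTA) loop for an l1-regularized, nonnegativity-bounded least-squares objective; this is an illustrative stand-in, not the authors' dictionary-based Tikhonov formulation.

```python
# ISTA sketch for min 0.5*||A x - b||^2 + lam*||x||_1 with x >= 0
# (step size from the spectral norm; parameters are illustrative).
import numpy as np

def ista_nonneg(A, b, lam=0.1, n_iter=500):
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        z = x - grad / L
        x = np.maximum(z - lam / L, 0.0)   # soft threshold combined with x >= 0
    return x
```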
Seizure Forecasting and the Preictal State in Canine Epilepsy.
Varatharajah, Yogatheesan; Iyer, Ravishankar K; Berry, Brent M; Worrell, Gregory A; Brinkmann, Benjamin H
2017-02-01
The ability to predict seizures may enable patients with epilepsy to better manage their medications and activities, potentially reducing side effects and improving quality of life. Forecasting epileptic seizures remains a challenging problem, but machine learning methods using intracranial electroencephalographic (iEEG) measures have shown promise. A machine-learning-based pipeline was developed to process iEEG recordings and generate seizure warnings. Results support the ability to forecast seizures at rates greater than a Poisson random predictor for all feature sets and machine learning algorithms tested. In addition, subject-specific neurophysiological changes in multiple features are reported preceding lead seizures, providing evidence supporting the existence of a distinct and identifiable preictal state.
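A minimal sketch of such a forecasting pipeline is given below, assuming spectral band-power features and a logistic-regression classifier whose output probability is thresholded into a warning; the bands, classifier, and threshold are assumptions, not the study's configuration.

```python
# Hedged sketch of an iEEG seizure-warning pipeline (illustrative settings).
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression

BANDS = [(1, 4), (4, 8), (8, 13), (13, 30), (30, 70)]

def band_powers(window, fs):
    """Log band power per channel for a (channels x samples) iEEG window."""
    f, pxx = welch(window, fs=fs, nperseg=min(window.shape[1], int(2 * fs)), axis=1)
    feats = []
    for lo, hi in BANDS:
        mask = (f >= lo) & (f < hi)
        feats.append(np.log(np.trapz(pxx[:, mask], f[mask], axis=1) + 1e-12))
    return np.concatenate(feats)

def train_forecaster(windows, labels, fs):
    X = np.array([band_powers(w, fs) for w in windows])
    return LogisticRegression(max_iter=1000).fit(X, labels)

def warn(model, window, fs, threshold=0.5):
    """Return True if the preictal probability exceeds the warning threshold."""
    p = model.predict_proba(band_powers(window, fs).reshape(1, -1))[0, 1]
    return p > threshold
```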
Variable-permittivity linear inverse problem for the H(sub z)-polarized case
NASA Technical Reports Server (NTRS)
Moghaddam, M.; Chew, W. C.
1993-01-01
The H(sub z)-polarized inverse problem has rarely been studied before due to the complicated way in which the unknown permittivity appears in the wave equation. This problem is equivalent to the acoustic inverse problem with variable density. We have recently reported the solution to the nonlinear variable-permittivity H(sub z)-polarized inverse problem using the Born iterative method. Here, the linear inverse problem is solved for permittivity (epsilon) and permeability (mu) using a different approach which is an extension of the basic ideas of diffraction tomography (DT). The key to solving this problem is to utilize frequency diversity to obtain the required independent measurements. The receivers are assumed to be in the far field of the object, and plane wave incidence is also assumed. It is assumed that the scatterer is weak, so that the Born approximation can be used to arrive at a relationship between the measured pressure field and two terms related to the spatial Fourier transform of the two unknowns, epsilon and mu. The term involving permeability corresponds to monopole scattering and that for permittivity to dipole scattering. Measurements at several frequencies are used and a least squares problem is solved to reconstruct epsilon and mu. It is observed that the low spatial frequencies in the spectra of epsilon and mu produce inaccuracies in the results. Hence, a regularization method is devised to remove this problem. Several results are shown. Low contrast objects for which the above analysis holds are used to show that good reconstructions are obtained for both permittivity and permeability after regularization is applied.
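The reconstruction step can be sketched as a frequency-stacked, Tikhonov-regularized least-squares solve, with the regularization standing in for the treatment of the poorly determined low spatial frequencies; the forward matrices below are placeholders for the Born/diffraction-tomography kernels, and the parameters are assumptions.

```python
# Frequency-diversity least squares with Tikhonov regularization (sketch).
import numpy as np

def regularized_reconstruction(A_per_freq, b_per_freq, lam=1e-3):
    """A_per_freq: list of forward matrices (one per frequency);
    b_per_freq: list of measured field vectors; returns the stacked unknowns
    (e.g., transforms of epsilon and mu)."""
    A = np.vstack(A_per_freq)
    b = np.concatenate(b_per_freq)
    n = A.shape[1]
    # Regularized normal equations: (A^H A + lam I) x = A^H b
    return np.linalg.solve(A.conj().T @ A + lam * np.eye(n), A.conj().T @ b)
```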
On computational experiments in some inverse problems of heat and mass transfer
NASA Astrophysics Data System (ADS)
Bilchenko, G. G.; Bilchenko, N. G.
2016-11-01
The results of mathematical modeling of effective heat and mass transfer on hypersonic aircraft permeable surfaces are considered. The physico-chemical processes (dissociation and ionization) in the laminar boundary layer of a compressible gas are taken into account. Several control-restoration algorithms are suggested for the interpolation and approximation statements of the heat and mass transfer inverse problems. The differences between the methods applied to search for problem solutions in these statements are discussed. Both algorithms are implemented as programs, and many computational experiments have been carried out with them. The boundary-layer parameters obtained from the direct problems by means of A. A. Dorodnicyn's generalized integral relations method have been used to obtain the inverse problem solutions. Two types of blowing-law restoration for the inverse problem in the interpolation statement are presented as examples. The influence of the temperature factor on the blowing restoration is investigated. The different sensitivities of the controllable parameters (the local heat flux and the local tangential friction) to step (discrete) changes of the control (the blowing) and to the switching-point position are studied.
Myoclonic Jerks and Schizophreniform Syndrome: Case Report and Literature Review.
Endres, Dominique; Altenmüller, Dirk-M; Feige, Bernd; Maier, Simon J; Nickel, Kathrin; Hellwig, Sabine; Rausch, Jördis; Ziegler, Christiane; Domschke, Katharina; Doerr, John P; Egger, Karl; Tebartz van Elst, Ludger
2018-01-01
Background: Schizophreniform syndromes can be divided into primary idiopathic forms as well as different secondary organic subgroups (e.g., paraepileptic, epileptic, immunological, or degenerative). Secondary epileptic explanatory approaches have often been discussed in the past, due to the high rates of electroencephalography (EEG) alterations in patients with schizophrenia. In particular, temporal lobe epilepsy is known to be associated with schizophreniform symptoms in well-described constellations. In the literature, juvenile myoclonic epilepsy has been linked to emotionally unstable personality traits, depression, anxiety, and executive dysfunction; however, the association with schizophrenia is largely unclear. Case presentation: We present the case of a 28-year-old male student suffering from mild myoclonic jerks, mainly of the upper limbs, as well as a predominant paranoid-hallucinatory syndrome with attention deficits, problems with working memory, depressive-flat mood, reduced energy, fast stimulus satiation, delusional and audible thoughts, tactile hallucinations, thought inspirations, and severe sleep disturbances. Cerebral magnetic resonance imaging and cerebrospinal fluid analyses revealed no relevant abnormalities. The routine EEG and the first EEG after sleep deprivation (under treatment with oxazepam) also returned normal findings. Video telemetry over one night, which included a partial sleep-deprivation EEG, displayed short generalized spike-wave complexes and polyspikes, associated with myoclonic jerks, after waking in the morning. Video-EEG monitoring over 5 days showed over 100 myoclonic jerks of the upper limbs, frequently with generalized spike-wave complexes with left or right accentuation. Therefore, we diagnosed juvenile myoclonic epilepsy. Discussion: This case report illustrates the importance of extended EEG diagnostics in patients with schizophreniform syndromes and myoclonic jerks. The schizophreniform symptoms in the framework of epileptiform EEG activity can be interpreted as a (para)epileptic mechanism due to local area network inhibition (LANI). Following the LANI hypothesis, paranoid hallucinatory symptoms are not due to primary excitatory activity (as myoclonic jerks are) but rather to the secondary process of hyperinhibition triggered by epileptic activity. Identifying subgroups of schizophreniform patients with comorbid epilepsy is important because of the potential benefits of optimized pharmacological treatment.
Clinical relevance of cannabis tolerance and dependence.
Jones, R T; Benowitz, N L; Herning, R I
1981-01-01
Psychoactive drugs are often widely used before tolerance and dependence is fully appreciated. Tolerance to cannabis-induced cardiovascular and autonomic changes, decreased intraocular pressure, sleep and sleep EEG, mood and behavioral changes is acquired and, to a great degree, lost rapidly with optimal conditions. Mechanisms appear more functional than metabolic. Acquisition rate depends on dose and dose schedule. Dependence, manifested by withdrawal symptoms after as little as 7 days of THC administration, is characterized by irritability, restlessness, insomnia, anorexia, nausea, sweating, salivation, increased body temperature, altered sleep and waking EEG, tremor, and weight loss. Mild and transient in the 120 subjects studied, the syndrome was similar to sedative drug withdrawal. Tolerance to drug side effects can be useful. Tolerance to therapeutic effects or target symptoms poses problems. Clinical significance of dependence is difficult to assess since drug-seeking behavior has many determinants. Cannabis-induced super sensitivity should be considered wherever chronic drug administration is anticipated in conditions like epilepsy, glaucoma or chronic pain. Cannabis pharmacology suggests ways of minimizing tolerance and dependence problems.
Filtering techniques for efficient inversion of two-dimensional Nuclear Magnetic Resonance data
NASA Astrophysics Data System (ADS)
Bortolotti, V.; Brizi, L.; Fantazzini, P.; Landi, G.; Zama, F.
2017-10-01
The inversion of two-dimensional Nuclear Magnetic Resonance (NMR) data requires the solution of a first-kind Fredholm integral equation with a two-dimensional tensor product kernel and lower bound constraints. For the solution of this ill-posed inverse problem, the recently presented 2DUPEN algorithm [V. Bortolotti et al., Inverse Problems, 33(1), 2016] uses multiparameter Tikhonov regularization with automatic choice of the regularization parameters. In this work, I2DUPEN, an improved version of 2DUPEN that implements Mean Windowing and Singular Value Decomposition filters, is tested in depth. The reconstruction problem with filtered data is formulated as a compressed weighted least squares problem with multi-parameter Tikhonov regularization. Results on synthetic and real 2D NMR data are presented, with the main purpose of analyzing in more detail the separate and combined effects of these filtering techniques on the reconstructed 2D distribution.
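For a tensor-product kernel K = K1 ⊗ K2, SVD-based filtering is often implemented by projecting the data onto the leading singular subspaces of the two one-dimensional kernels; the sketch below illustrates that compression step with assumed truncation ranks and is not the I2DUPEN code.

```python
# SVD compression of a tensor-product kernel and the corresponding data.
import numpy as np

def svd_compress(K1, K2, D, rank1=10, rank2=10):
    """K1: m1 x n1 kernel, K2: m2 x n2 kernel, D: m1 x m2 data matrix.
    Returns compressed kernels and data defining a much smaller inverse problem."""
    U1, s1, V1t = np.linalg.svd(K1, full_matrices=False)
    U2, s2, V2t = np.linalg.svd(K2, full_matrices=False)
    U1, s1, V1t = U1[:, :rank1], s1[:rank1], V1t[:rank1]
    U2, s2, V2t = U2[:, :rank2], s2[:rank2], V2t[:rank2]
    K1_c = np.diag(s1) @ V1t          # compressed kernels
    K2_c = np.diag(s2) @ V2t
    D_c = U1.T @ D @ U2               # data projected onto the leading subspaces
    return K1_c, K2_c, D_c
```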
Regularization Reconstruction Method for Imaging Problems in Electrical Capacitance Tomography
NASA Astrophysics Data System (ADS)
Chu, Pan; Lei, Jing
2017-11-01
Electrical capacitance tomography (ECT) is deemed to be a powerful visualization measurement technique for parametric measurement in multiphase flow systems. The inversion task in ECT is an ill-posed inverse problem, and seeking an efficient numerical method to improve the precision of the reconstructed images is important for practical measurements. By introducing the Tikhonov regularization (TR) methodology, in this paper a loss function that emphasizes the robustness of the estimation and the low-rank property of the imaging targets is put forward to convert the solution of the inverse problem in the ECT reconstruction task into a minimization problem. Inspired by the split Bregman (SB) algorithm, an iteration scheme is developed for solving the proposed loss function. Numerical experiment results validate that the proposed inversion method not only reconstructs the fine structures of the imaging targets, but also improves the robustness.
Termination Proofs for String Rewriting Systems via Inverse Match-Bounds
NASA Technical Reports Server (NTRS)
Butler, Ricky (Technical Monitor); Geser, Alfons; Hofbauer, Dieter; Waldmann, Johannes
2004-01-01
Annotating a letter by a number, one can record information about its history during a reduction. A string rewriting system is called match-bounded if there is a global upper bound to these numbers. In earlier papers we established match-boundedness as a strong sufficient criterion for both termination and preservation of regular languages. We now show that string rewriting systems whose inverses (left- and right-hand sides exchanged) are match-bounded also have exceptional properties, but slightly different ones. Inverse match-bounded systems effectively preserve context-free languages; their sets of normalized strings and their sets of immortal strings are effectively regular. These sets of strings can be used to decide the normalization, termination and uniform termination problems of inverse match-bounded systems. We also show that the termination problem is decidable in linear time, and that a certain strong reachability problem is decidable, thus solving two open problems of McNaughton's.
The inverse Wiener polarity index problem for chemical trees.
Du, Zhibin; Ali, Akbar
2018-01-01
The Wiener polarity number (nowadays known as the Wiener polarity index and usually denoted by Wp) was devised by the chemist Harold Wiener for predicting the boiling points of alkanes. The index Wp of chemical trees (chemical graphs representing alkanes) is defined as the number of unordered pairs of vertices (carbon atoms) at distance 3. Inverse problems based on several well-known topological indices have already been addressed in the literature. The solution of such inverse problems may be helpful in speeding up the discovery of lead compounds having the desired properties. This paper is devoted to solving a stronger version of the inverse problem based on the Wiener polarity index for chemical trees. More precisely, it is proved that for every integer t ∈ {n - 3, n - 2,…,3n - 16, 3n - 15}, n ≥ 6, there exists an n-vertex chemical tree T such that Wp(T) = t.
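The index itself is straightforward to compute; the sketch below counts unordered vertex pairs at distance exactly 3 by breadth-first search and checks the n = 6 case quoted above (for n = 6 the admissible set reduces to the single value 3).

```python
# Wiener polarity index Wp: number of unordered vertex pairs at distance 3,
# computed by depth-limited breadth-first search on an adjacency list.
from collections import deque

def wiener_polarity(adj):
    """adj: dict mapping each vertex to the list of its neighbours."""
    count = 0
    for v in adj:
        dist = {v: 0}
        queue = deque([v])
        while queue:
            u = queue.popleft()
            if dist[u] >= 3:
                continue
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        count += sum(1 for d in dist.values() if d == 3)
    return count // 2   # each unordered pair was counted twice

# Example: the path on six vertices (hexane skeleton) has Wp = 3,
# consistent with the n = 6 case above (n - 3 = 3n - 15 = 3).
hexane = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4, 6], 6: [5]}
assert wiener_polarity(hexane) == 3
```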
Assimilating data into open ocean tidal models
NASA Astrophysics Data System (ADS)
Kivman, Gennady A.
Because every practically available data set is incomplete and imperfect, the problem of deriving tidal fields from observations has an infinitely large number of allowable solutions fitting the data within measurement errors, and hence can be treated as ill-posed. Therefore, interpolating the data always relies on some a priori assumptions concerning the tides, which provide a rule of sampling or, in other words, a regularization of the ill-posed problem. Data assimilation procedures used in large-scale tide modeling are viewed in a common mathematical framework as such regularizations. It is shown that they all (basis function expansion, parameter estimation, nudging, objective analysis, general inversion, and extended general inversion), including those (objective analysis and general inversion) originally formulated in stochastic terms, may be considered as utilizations of one of the three general methods suggested by the theory of ill-posed problems. The problem of grid refinement, critical for inverse methods and nudging, is discussed.
NASA Astrophysics Data System (ADS)
Jensen, Daniel; Wasserman, Adam; Baczewski, Andrew
The construction of approximations to the exchange-correlation potential for warm dense matter (WDM) is a topic of significant recent interest. In this work, we study the inverse problem of Kohn-Sham (KS) DFT as a means of guiding functional design at zero temperature and in WDM. Whereas the forward problem solves the KS equations to produce a density from a specified exchange-correlation potential, the inverse problem seeks to construct the exchange-correlation potential from specified densities. These two problems require different computational methods and convergence criteria despite sharing the same mathematical equations. We present two new inversion methods based on constrained variational and PDE-constrained optimization methods. We adapt these methods to finite temperature calculations to reveal the exchange-correlation potential's temperature dependence in WDM-relevant conditions. The different inversion methods presented are applied to both non-interacting and interacting model systems for comparison. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Security Administration under contract DE-AC04-94.
Liu, Chun; Kroll, Andreas
2016-01-01
Multi-robot task allocation determines the task sequence and distribution for a group of robots in multi-robot systems; it is a constrained combinatorial optimization problem that becomes even more complex in the case of cooperative tasks, because these introduce additional spatial and temporal constraints. To solve multi-robot task allocation problems with cooperative tasks efficiently, a subpopulation-based genetic algorithm, a crossover-free genetic algorithm employing mutation operators and elitism selection in each subpopulation, is developed in this paper. Moreover, the impact of mutation operators (swap, insertion, inversion, displacement, and their various combinations) is analyzed when solving several industrial plant inspection problems. The experimental results show that: (1) the proposed genetic algorithm can obtain better solutions than the tested binary tournament genetic algorithm with partially mapped crossover; (2) inversion mutation performs better than other tested mutation operators when solving problems without cooperative tasks, and the swap-inversion combination performs better than other tested mutation operators/combinations when solving problems with cooperative tasks. As it is difficult to produce all desired effects with a single mutation operator, using multiple mutation operators (including both inversion and swap) is suggested when solving similar combinatorial optimization problems.
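Illustrative implementations of the four mutation operators compared above (swap, insertion, inversion, displacement), acting on a task sequence encoded as a list, are sketched below; index handling is simplified and cooperative-task constraints are not modeled.

```python
# Four common permutation mutation operators (illustrative versions).
import random

def swap(seq):
    i, j = random.sample(range(len(seq)), 2)
    s = seq[:]
    s[i], s[j] = s[j], s[i]                              # exchange two tasks
    return s

def insertion(seq):
    i, j = random.sample(range(len(seq)), 2)
    s = seq[:]
    s.insert(j, s.pop(i))                                # move one task elsewhere
    return s

def inversion(seq):
    i, j = sorted(random.sample(range(len(seq)), 2))
    return seq[:i] + seq[i:j + 1][::-1] + seq[j + 1:]    # reverse a segment

def displacement(seq):
    i, j = sorted(random.sample(range(len(seq)), 2))
    segment, rest = seq[i:j + 1], seq[:i] + seq[j + 1:]
    k = random.randint(0, len(rest))
    return rest[:k] + segment + rest[k:]                 # relocate a whole segment
```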
NASA Astrophysics Data System (ADS)
Hintermüller, Michael; Holler, Martin; Papafitsoros, Kostas
2018-06-01
In this work, we introduce a function space setting for a wide class of structural/weighted total variation (TV) regularization methods motivated by their applications in inverse problems. In particular, we consider a regularizer that is the appropriate lower semi-continuous envelope (relaxation) of a suitable TV type functional initially defined for sufficiently smooth functions. We study examples where this relaxation can be expressed explicitly, and we also provide refinements for weighted TV for a wide range of weights. Since an integral characterization of the relaxation in function space is, in general, not always available, we show that, for a rather general linear inverse problems setting, instead of the classical Tikhonov regularization problem, one can equivalently solve a saddle-point problem where no a priori knowledge of an explicit formulation of the structural TV functional is needed. In particular, motivated by concrete applications, we deduce corresponding results for linear inverse problems with norm and Poisson log-likelihood data discrepancy terms. Finally, we provide proof-of-concept numerical examples where we solve the saddle-point problem for weighted TV denoising as well as for MR guided PET image reconstruction.
Convex blind image deconvolution with inverse filtering
NASA Astrophysics Data System (ADS)
Lv, Xiao-Guang; Li, Fang; Zeng, Tieyong
2018-03-01
Blind image deconvolution is the process of estimating both the original image and the blur kernel from the degraded image with only partial or no information about degradation and the imaging system. It is a bilinear ill-posed inverse problem corresponding to the direct problem of convolution. Regularization methods are used to handle the ill-posedness of blind deconvolution and get meaningful solutions. In this paper, we investigate a convex regularized inverse filtering method for blind deconvolution of images. We assume that the support region of the blur object is known, as has been done in a few existing works. By studying the inverse filters of signal and image restoration problems, we observe the oscillation structure of the inverse filters. Inspired by the oscillation structure of the inverse filters, we propose to use the star norm to regularize the inverse filter. Meanwhile, we use the total variation to regularize the resulting image obtained by convolving the inverse filter with the degraded image. The proposed minimization model is shown to be convex. We employ the first-order primal-dual method for the solution of the proposed minimization model. Numerical examples for blind image restoration are given to show that the proposed method outperforms some existing methods in terms of peak signal-to-noise ratio (PSNR), structural similarity (SSIM), visual quality and time consumption.
NASA Astrophysics Data System (ADS)
An, M.; Assumpcao, M.
2003-12-01
The joint inversion of receiver functions and surface waves is an effective way to diminish the influence of the strong tradeoff among parameters and of the different sensitivities to the model parameters in their respective inversions, but the inversion problem becomes more complex. Multi-objective problems can be much more complicated than single-objective inversions in model selection and optimization. If several conflicting objectives are involved, models can be ordered only partially. In this case, Pareto-optimal preference should be used to select solutions. On the other hand, an inversion that retrieves only a few optimal solutions cannot deal properly with the strong tradeoff between parameters, the uncertainties in the observations, the geophysical complexities and even the incompetency of the inversion technique. The effective way is to retrieve the geophysical information statistically from many acceptable solutions, which requires more competent global algorithms. Recently proposed competent genetic algorithms are far superior to the conventional genetic algorithm and can solve hard problems quickly, reliably and accurately. In this work we used one of these competent genetic algorithms, the Bayesian Optimization Algorithm, as the main inverse procedure. This algorithm uses Bayesian networks to draw out inherited information and can use Pareto-optimal preference in the inversion. With this algorithm, the lithospheric structure of the Paraná Basin is inverted to fit both the observations of inter-station surface wave dispersion and receiver functions.
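The Pareto-optimal preference mentioned above amounts to keeping the non-dominated models. Below is a minimal sketch, assuming two misfit objectives to be minimized (for example a receiver-function misfit and a dispersion misfit); the array contents are illustrative.

```python
import numpy as np

def pareto_front(misfits):
    """Return indices of non-dominated models.

    misfits : (n_models, n_objectives) array of misfit values (lower is better).
    A model is Pareto-optimal if no other model is at least as good in every
    objective and strictly better in at least one.
    """
    misfits = np.asarray(misfits)
    keep = []
    for i in range(misfits.shape[0]):
        dominated = np.any(
            np.all(misfits <= misfits[i], axis=1) &
            np.any(misfits < misfits[i], axis=1)
        )
        if not dominated:
            keep.append(i)
    return keep

# Columns: e.g. receiver-function misfit, surface-wave dispersion misfit.
models = np.array([[0.9, 0.2], [0.5, 0.5], [0.3, 0.9], [0.6, 0.6]])
print(pareto_front(models))   # [0, 1, 2]; the last model is dominated by model 1
```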
Chu, Dezhang; Lawson, Gareth L; Wiebe, Peter H
2016-05-01
The linear inversion commonly used in fisheries and zooplankton acoustics assumes a constant inversion kernel and ignores the uncertainties associated with the shape and behavior of the scattering targets, as well as other relevant animal parameters. Here, errors of the linear inversion due to uncertainty associated with the inversion kernel are quantified. A scattering model-based nonlinear inversion method is presented that takes into account the nonlinearity of the inverse problem and is able to estimate simultaneously animal abundance and the parameters associated with the scattering model inherent to the kernel. It uses sophisticated scattering models to estimate first, the abundance, and second, the relevant shape and behavioral parameters of the target organisms. Numerical simulations demonstrate that the abundance, size, and behavior (tilt angle) parameters of marine animals (fish or zooplankton) can be accurately inferred from the inversion by using multi-frequency acoustic data. The influence of the singularity and uncertainty in the inversion kernel on the inversion results can be mitigated by examining the singular values for linear inverse problems and employing a non-linear inversion involving a scattering model-based kernel.
An ambiguity of information content and error in an ill-posed satellite inversion
NASA Astrophysics Data System (ADS)
Koner, Prabhat
According to Rodgers (2000, stochastic approach), the averaging kernel (AK) is the representational matrix for understanding the information content of a stochastic inversion. In the deterministic approach, on the other hand, this is referred to as the model resolution matrix (MRM, Menke 1989). The analysis of the AK/MRM can only give some understanding of how much regularization is imposed on the inverse problem. The trace of the AK/MRM matrix is the so-called degrees of freedom for signal (DFS; stochastic) or degrees of freedom in retrieval (DFR; deterministic). The literature offers no physical/mathematical explanation of why the trace of this matrix is a valid way to calculate this quantity. We will present an ambiguity between information and error using a real-life problem of SST retrieval from GOES13. The stochastic information content calculation is based on a linear assumption. The validity of such mathematics in satellite inversion will be questioned because it involves nonlinear radiative transfer and ill-conditioned inverse problems. References: Menke, W., 1989: Geophysical data analysis: discrete inverse theory. San Diego: Academic Press. Rodgers, C.D., 2000: Inverse methods for atmospheric soundings: theory and practice. Singapore: World Scientific.
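For reference, here is a minimal sketch of how the averaging kernel and its trace (the DFS) are computed for a generic linear optimal-estimation retrieval; the Jacobian and covariances below are random stand-ins, not a GOES-13 configuration.

```python
import numpy as np

# Illustrative sizes: m channels, n retrieved state elements.
rng = np.random.default_rng(0)
m, n = 8, 5
K = rng.normal(size=(m, n))            # Jacobian of the forward model
Se = 0.05 * np.eye(m)                  # measurement-error covariance
Sa = np.eye(n)                         # a priori covariance

# Gain matrix and averaging kernel for the linear (optimal-estimation) retrieval:
#   G = (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1,   A = G K
Se_inv = np.linalg.inv(Se)
G = np.linalg.solve(K.T @ Se_inv @ K + np.linalg.inv(Sa), K.T @ Se_inv)
A = G @ K

dfs = np.trace(A)                      # the "degrees of freedom" quantity discussed above
print(f"DFS = {dfs:.2f} out of {n} state elements")
```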
IPDO-2007: Inverse Problems, Design and Optimization Symposium
2007-08-01
Kanevce, G. H., Kanevce, Lj. P., and Mitrevski, V. B., International Symposium on Inverse Problems, Design and Optimization (IPDO-2007); contributors listed include Gligor Kanevce, Ljubica Kanevce, Vangelce Mitrevski, Igor Andreevski, and George Dulikravich.
Automated EEG sleep staging in the term-age baby using a generative modelling approach.
Pillay, Kirubin; Dereymaeker, Anneleen; Jansen, Katrien; Naulaers, Gunnar; Van Huffel, Sabine; De Vos, Maarten
2018-06-01
We develop a method for automated four-state sleep classification of preterm and term-born babies at term-age of 38-40 weeks postmenstrual age (the age since the last menstrual cycle of the mother) using multichannel electroencephalogram (EEG) recordings. At this critical age, EEG differentiates from broader quiet sleep (QS) and active sleep (AS) stages to four, more complex states, and the quality and timing of this differentiation is indicative of the level of brain development. However, existing methods for automated sleep classification remain focussed only on QS and AS classification. EEG features were calculated from 16 EEG recordings, in 30 s epochs, and personalized feature scaling was used to correct for some of the inter-recording variability by standardizing each recording's feature data using its mean and standard deviation. Hidden Markov models (HMMs) and Gaussian mixture models (GMMs) were trained, with the HMM incorporating knowledge of the sleep state transition probabilities. Performance of the GMM and HMM (with and without scaling) was compared, and Cohen's kappa agreement calculated between the estimates and clinicians' visual labels. For four-state classification, the HMM proved superior to the GMM. With the inclusion of personalized feature scaling, mean kappa (±standard deviation) was 0.62 (±0.16) compared to the GMM value of 0.55 (±0.15). Without feature scaling, kappas for the HMM and GMM dropped to 0.56 (±0.18) and 0.51 (±0.15), respectively. This is the first study to present a successful method for the automated staging of four states in term-age sleep using multichannel EEG. The results suggest a benefit in incorporating transition information using an HMM and in correcting for inter-recording variability through personalized feature scaling. The timing and quality of these states are indicative of developmental delays in both preterm and term-born babies that may lead to learning problems by school age.
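A minimal sketch of the two preprocessing/modelling ideas described above: per-recording feature standardization and a four-component Gaussian mixture as a stand-in for the GMM stage (an HMM would additionally model the state transition probabilities). The data are synthetic and the names illustrative; this is not the authors' implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def personalized_scale(features_per_recording):
    """Z-score each recording's feature matrix with its own mean and std."""
    scaled = []
    for X in features_per_recording:
        mu, sd = X.mean(axis=0), X.std(axis=0) + 1e-12
        scaled.append((X - mu) / sd)
    return scaled

# Synthetic stand-in: a few recordings of 30 s epoch features.
rng = np.random.default_rng(1)
recordings = [rng.normal(size=(200, 10)) for _ in range(4)]
scaled = personalized_scale(recordings)

# Pool epochs from all recordings and fit a 4-state Gaussian mixture.
X_all = np.vstack(scaled)
gmm = GaussianMixture(n_components=4, covariance_type="full", random_state=0)
states = gmm.fit_predict(X_all)        # one sleep-state label per epoch
print(np.bincount(states))
```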
On the identification of sleep stages in mouse electroencephalography time-series.
Lampert, Thomas; Plano, Andrea; Austin, Jim; Platt, Bettina
2015-05-15
The automatic identification of sleep stages in electroencephalography (EEG) time-series is a long desired goal for researchers concerned with the study of sleep disorders. This paper presents advances towards achieving this goal, with particular application to EEG time-series recorded from mice. Approaches in the literature apply supervised learning classifiers; however, these do not reach the performance levels required for use within a laboratory. In this paper, detection reliability is increased, most notably in the case of REM stage identification, by naturally decomposing the problem and applying a support vector machine (SVM) based classifier to each of the EEG channels. Their outputs are integrated within a multiple classifier system. Furthermore, there exists no general consensus on the ideal choice of parameter values in such systems. Therefore, an investigation into the effects upon classification performance is presented by varying parameters such as the epoch length, feature size, number of training samples, and the method for calculating the power spectral density estimate. Finally, the results of these investigations are brought together to demonstrate the performance of the proposed classification algorithm in two cases: intra-animal classification and inter-animal classification. It is shown that, within a dataset of 10 EEG recordings, and using less than 1% of an EEG as training data, mean classification errors of Awake 6.45%, NREM 5.82%, and REM 6.65% (with standard deviations less than 0.6%) are achieved in intra-animal analysis and, when using the equivalent of 7% of one EEG as training data, Awake 10.19%, NREM 7.75%, and REM 17.43% are achieved in inter-animal analysis (with mean standard deviations of 6.42%, 2.89%, and 9.69% respectively). A software package implementing the proposed approach will be made available through Cybula Ltd. Copyright © 2015 Elsevier B.V. All rights reserved.
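The per-channel SVM plus multiple-classifier idea can be sketched as follows with scikit-learn; the data, channel count, and class labels are synthetic placeholders, and majority voting stands in for whatever combination rule the authors actually used.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n_channels, n_epochs, n_features = 3, 300, 12
X = rng.normal(size=(n_channels, n_epochs, n_features))   # per-channel epoch features (toy)
y = rng.integers(0, 3, size=n_epochs)                      # 0=Awake, 1=NREM, 2=REM (random toy labels)

train = slice(0, 200)
test = slice(200, None)

# Train one SVM per EEG channel, then combine predictions by majority vote.
preds = []
for ch in range(n_channels):
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")
    clf.fit(X[ch, train], y[train])
    preds.append(clf.predict(X[ch, test]))

votes = np.vstack(preds)                                   # shape (n_channels, n_test_epochs)
combined = np.array([np.bincount(votes[:, i]).argmax() for i in range(votes.shape[1])])
print("accuracy (chance level here, since labels are random):",
      np.mean(combined == y[test]))
```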
A preprocessing strategy for helioseismic inversions
NASA Astrophysics Data System (ADS)
Christensen-Dalsgaard, J.; Thompson, M. J.
1993-05-01
Helioseismic inversion in general involves considerable computational expense, due to the large number of modes that is typically considered. This is true in particular of the widely used optimally localized averages (OLA) inversion methods, which require the inversion of one or more matrices whose order is the number of modes in the set. However, the number of practically independent pieces of information that a large helioseismic mode set contains is very much less than the number of modes, suggesting that the set might first be reduced before the expensive inversion is performed. We demonstrate with a model problem that by first performing a singular value decomposition the original problem may be transformed into a much smaller one, reducing considerably the cost of the OLA inversion and with no significant loss of information.
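A minimal numerical illustration of this preprocessing idea, assuming a generic linear mode-kernel matrix: a truncated SVD compresses thousands of mode constraints into the few combinations that carry independent information before any expensive inversion is attempted. This is a toy stand-in, not the OLA machinery itself.

```python
import numpy as np

rng = np.random.default_rng(3)
n_modes, n_model = 2000, 50                 # many mode kernels, fewer model parameters
K = rng.normal(size=(n_modes, n_model))     # mode kernels (one row per mode)
d = K @ rng.normal(size=n_model) + 0.01 * rng.normal(size=n_modes)   # mode data

# Truncated SVD of the kernel matrix: keep only the significant singular vectors.
U, s, Vt = np.linalg.svd(K, full_matrices=False)
k = int(np.sum(s > 1e-3 * s[0]))            # effective number of independent pieces of information
K_red = np.diag(s[:k]) @ Vt[:k]             # reduced (k x n_model) kernel
d_red = U[:, :k].T @ d                      # reduced data

# The expensive inversion now operates on k << n_modes combinations.
m_est, *_ = np.linalg.lstsq(K_red, d_red, rcond=None)
print(K.shape, "->", K_red.shape)
```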
NASA Astrophysics Data System (ADS)
Jiang, Daijun; Li, Zhiyuan; Liu, Yikan; Yamamoto, Masahiro
2017-05-01
In this paper, we first establish a weak unique continuation property for time-fractional diffusion-advection equations. The proof is mainly based on the Laplace transform and the unique continuation properties for elliptic and parabolic equations. The result is weaker than its parabolic counterpart in the sense that we additionally impose the homogeneous boundary condition. As a direct application, we prove the uniqueness for an inverse problem on determining the spatial component in the source term by interior measurements. Numerically, we reformulate our inverse source problem as an optimization problem, and propose an iterative thresholding algorithm. Finally, several numerical experiments are presented to show the accuracy and efficiency of the algorithm.
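As a generic illustration of the iterative thresholding step named above, the following sketch applies ISTA (a gradient step followed by soft thresholding) to a linear inverse source problem A f = g with a sparsity penalty; the operator and data are random stand-ins, not the time-fractional diffusion-advection model.

```python
import numpy as np

def soft_threshold(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def iterative_thresholding(A, g, lam=0.01, n_iter=500):
    """Minimize 0.5*||A f - g||^2 + lam*||f||_1 by ISTA."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / Lipschitz constant of the gradient
    f = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ f - g)
        f = soft_threshold(f - step * grad, step * lam)
    return f

rng = np.random.default_rng(4)
A = rng.normal(size=(80, 200))                  # discretized source-to-data operator (toy)
f_true = np.zeros(200)
f_true[[20, 90, 150]] = [1.0, -0.5, 0.8]        # sparse spatial source component
g = A @ f_true + 0.01 * rng.normal(size=80)
f_rec = iterative_thresholding(A, g)
print("support recovered:", np.nonzero(np.abs(f_rec) > 0.1)[0])
```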
High density scalp EEG in frontal lobe epilepsy.
Feyissa, Anteneh M; Britton, Jeffrey W; Van Gompel, Jamie; Lagerlund, Terrance L; So, Elson; Wong-Kisiel, Lilly C; Cascino, Gregory C; Brinkman, Benjamin H; Nelson, Cindy L; Watson, Robert; Worrell, Gregory A
2017-01-01
Localization of seizures in frontal lobe epilepsy using 10-20 system scalp EEG is often challenging because neocortical seizures can spread rapidly, muscle artifact is often significant, and the spatial resolution for seizure generators involving the mesial frontal cortex is suboptimal. Our aim in this study was to determine the value of visual interpretation of 76-channel high density EEG (hdEEG) monitoring (10-10 system) in patients with suspected frontal lobe epilepsy, and to evaluate concordance with MRI, subtraction ictal SPECT co-registered to MRI (SISCOM), conventional EEG, and intracranial EEG (iEEG). We performed a retrospective cohort study of 14 consecutive patients who underwent hdEEG monitoring for suspected frontal lobe seizures. The gold standard for localization was considered to be iEEG. Concordance of hdEEG findings with MRI, SISCOM, conventional 10-20 EEG, and iEEG, as well as correlation of hdEEG localization with surgical outcome, were examined. hdEEG localization was concordant with iEEG in 12/14 cases, compared with 3/14 for conventional EEG (p<0.01) and 3/12 for SISCOM (p<0.01). hdEEG correctly lateralized seizure onset in 14/14 cases, compared to 9/14 (p=0.04) cases with conventional EEG. Seven patients underwent surgical resection, of whom five were seizure free. hdEEG monitoring should be considered in patients with suspected frontal epilepsy requiring localization of epileptogenic brain. hdEEG may assist in developing a hypothesis for iEEG monitoring and could potentially augment EEG source localization. Published by Elsevier B.V.
Comparing multiple statistical methods for inverse prediction in nuclear forensics applications
Lewis, John R.; Zhang, Adah; Anderson-Cook, Christine Michaela
2017-10-29
Forensic science seeks to predict source characteristics using measured observables. Statistically, this objective can be thought of as an inverse problem where interest is in the unknown source characteristics or factors (X) of some underlying causal model producing the observables or responses (Y = g(X) + error). Here, this paper reviews several statistical methods for use in inverse problems and demonstrates that comparing results from multiple methods can be used to assess predictive capability. Motivation for assessing inverse predictions comes from the desired application to historical and future experiments involving nuclear material production for forensics research, in which inverse predictions, along with an assessment of predictive capability, are desired.
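A minimal sketch of the general inverse-prediction workflow described above: fit a forward model g from training pairs (X, Y), then estimate the source characteristics for a new observation by minimizing the data misfit. The linear model and numbers are illustrative assumptions, not any of the paper's methods or data.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)

# Training data from the (unknown) causal model Y = g(X) + error.
X_train = rng.uniform(0, 1, size=(100, 2))           # source characteristics
Y_train = np.column_stack([
    2.0 * X_train[:, 0] + 0.5 * X_train[:, 1],
    X_train[:, 0] - X_train[:, 1],
]) + 0.02 * rng.normal(size=(100, 2))                 # measured observables

g_hat = LinearRegression().fit(X_train, Y_train)      # fitted forward model

# Inverse prediction: find x minimizing ||y_obs - g_hat(x)||^2.
y_obs = np.array([1.3, 0.1])
res = minimize(lambda x: np.sum((y_obs - g_hat.predict(x[None, :])[0]) ** 2),
               x0=np.array([0.5, 0.5]))
print("predicted source characteristics:", res.x)
```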
2015-12-15
Distinguishing an object of interest from innocuous items is the main problem that the UXO community is facing currently. This inverse problem demands fast and accurate representation of
2014-01-01
Background: The fatigue that users suffer when using steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs) can cause a number of serious problems such as signal quality degradation and system performance deterioration, users' discomfort and even the risk of photosensitive epileptic seizures, posing heavy restrictions on the applications of SSVEP-based BCIs. Towards alleviating the fatigue, a fundamental step is to measure and evaluate it, but most existing works adopt self-reported questionnaire methods, which are subjective, offline and memory dependent. This paper proposes an objective and real-time approach based on electroencephalography (EEG) spectral analysis to evaluate the fatigue in SSVEP-based BCIs. Methods: How the EEG indices (amplitudes in the δ, θ, α and β frequency bands), the selected ratio indices (θ/α and (θ + α)/β), and the SSVEP properties (amplitude and signal-to-noise ratio (SNR)) change with increasing fatigue level is investigated through two elaborate SSVEP-based BCI experiments: one mainly validates the effectiveness of the approach, and the other considers more practical situations. Meanwhile, a self-reported fatigue questionnaire is used to provide a subjective reference. ANOVA is employed to test the significance of the difference between the alert state and the fatigue state for each index. Results: Consistent results are obtained in the two experiments: significant increases in α and (θ + α)/β, as well as a decrease in θ/α, are found to be associated with increasing fatigue level, indicating that EEG spectral analysis can provide robust objective evaluation of the fatigue in SSVEP-based BCIs. Moreover, the results show that the amplitude and SNR of the elicited SSVEP are significantly affected by users' fatigue. Conclusions: The experimental results demonstrate the feasibility and effectiveness of the proposed method as an objective and real-time evaluation of the fatigue in SSVEP-based BCIs. This method would be helpful in understanding the fatigue problem and optimizing the system design to alleviate the fatigue in SSVEP-based BCIs. PMID:24621009
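For concreteness, here is a short sketch of how the band powers and the θ/α and (θ + α)/β ratio indices can be computed from a single EEG epoch using Welch's method; the sampling rate, band edges, and toy signal are assumptions, not the paper's settings.

```python
import numpy as np
from scipy.signal import welch

def band_power(f, pxx, lo, hi):
    # Integrate the power spectral density over the band [lo, hi) Hz.
    idx = (f >= lo) & (f < hi)
    return np.trapz(pxx[idx], f[idx])

fs = 250.0                                   # sampling rate (Hz), illustrative
t = np.arange(0, 10, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)   # toy alpha-dominated epoch

f, pxx = welch(eeg, fs=fs, nperseg=int(2 * fs))
theta = band_power(f, pxx, 4, 8)
alpha = band_power(f, pxx, 8, 13)
beta = band_power(f, pxx, 13, 30)

print("theta/alpha =", theta / alpha)
print("(theta+alpha)/beta =", (theta + alpha) / beta)
```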
Cross hole GPR traveltime inversion using a fast and accurate neural network as a forward model
NASA Astrophysics Data System (ADS)
Mejer Hansen, Thomas
2017-04-01
Probabilistically formulated inverse problems can be solved using Monte Carlo based sampling methods. In principle, both advanced prior information, such as that based on geostatistics, and complex non-linear forward physical models can be considered. However, in practice these methods can be associated with huge computational costs that limit their application. This is not least due to the computational requirements related to solving the forward problem, where the physical response of some earth model has to be evaluated. Here, it is suggested to replace a numerically complex evaluation of the forward problem with a trained neural network that can be evaluated very fast. This introduces a modeling error that is quantified probabilistically such that it can be accounted for during inversion. This allows a very fast and efficient Monte Carlo sampling of the solution to an inverse problem. We demonstrate the methodology for first-arrival traveltime inversion of cross hole ground-penetrating radar (GPR) data. An accurate forward model, based on 2D full-waveform modeling followed by automatic traveltime picking, is replaced by a fast neural network. This provides a sampling algorithm three orders of magnitude faster than using the full forward model, and considerably faster, and more accurate, than commonly used approximate forward models. The methodology has the potential to dramatically change the complexity of the types of inverse problems that can be solved using non-linear Monte Carlo sampling techniques.
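A toy illustration of the surrogate idea: train a neural network on forward-model evaluations, then use it inside a Metropolis sampler. The forward function, network size, and noise level are assumptions; unlike the paper, this sketch folds the surrogate's modelling error into a single assumed data-noise level rather than quantifying it explicitly.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(6)

def slow_forward(m):
    # Stand-in for an expensive forward model (e.g. full-waveform + travel-time picking).
    return np.array([np.sin(m[0]) + m[1] ** 2, m[0] * m[1]])

# 1. Train a fast surrogate on forward-model evaluations.
M = rng.uniform(-1, 1, size=(2000, 2))
D = np.array([slow_forward(m) for m in M])
surrogate = MLPRegressor(hidden_layer_sizes=(50, 50), max_iter=2000,
                         random_state=0).fit(M, D)

# 2. Metropolis sampling of p(m | d_obs) using the surrogate as forward model.
d_obs = slow_forward(np.array([0.3, -0.5])) + 0.01 * rng.normal(size=2)
sigma = 0.05                                   # assumed data-noise + surrogate-error level

def log_like(m):
    return -0.5 * np.sum((surrogate.predict(m[None, :])[0] - d_obs) ** 2) / sigma ** 2

m, ll = np.zeros(2), None
ll = log_like(m)
samples = []
for _ in range(5000):
    prop = m + 0.1 * rng.normal(size=2)
    ll_prop = log_like(prop)
    if np.log(rng.uniform()) < ll_prop - ll:   # Metropolis acceptance test
        m, ll = prop, ll_prop
    samples.append(m.copy())
print("posterior mean:", np.mean(samples[1000:], axis=0))
```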
A multi-frequency iterative imaging method for discontinuous inverse medium problem
NASA Astrophysics Data System (ADS)
Zhang, Lei; Feng, Lixin
2018-06-01
The inverse medium problem with a discontinuous refractive index is a challenging class of inverse problem. We employ primal-dual theory and a fast solution of integral equations, and propose a new iterative imaging method. The selection criterion for the regularization parameter is given by the method of generalized cross-validation. Based on multi-frequency measurements of the scattered field, a recursive linearization algorithm is presented that proceeds from low to high frequency. We also discuss the initial-guess selection strategy using semi-analytical approaches. Numerical experiments are presented to show the effectiveness of the proposed method.
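Generalized cross-validation can be illustrated on a generic Tikhonov-regularized linear problem, where GCV(λ) = m‖(I − A(λ))b‖² / tr(I − A(λ))² and A(λ) is the influence matrix. The sketch below is this textbook version, not the paper's integral-equation implementation; the matrix and data are random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(7)
m, n = 60, 40
K = rng.normal(size=(m, n))
x_true = rng.normal(size=n)
b = K @ x_true + 0.1 * rng.normal(size=m)

def gcv(lmbda):
    # Influence matrix A(lambda) = K (K^T K + lambda I)^-1 K^T.
    A = K @ np.linalg.solve(K.T @ K + lmbda * np.eye(n), K.T)
    resid = (np.eye(m) - A) @ b
    return m * resid @ resid / np.trace(np.eye(m) - A) ** 2

lambdas = np.logspace(-4, 2, 50)
best = lambdas[np.argmin([gcv(l) for l in lambdas])]
x_reg = np.linalg.solve(K.T @ K + best * np.eye(n), K.T @ b)
print("GCV-selected lambda:", best)
```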
NASA Astrophysics Data System (ADS)
Stritzel, J.; Melchert, O.; Wollweber, M.; Roth, B.
2017-09-01
The direct problem of optoacoustic signal generation in biological media consists of solving an inhomogeneous three-dimensional (3D) wave equation for an initial acoustic stress profile. In contrast, the more challenging inverse problem requires the reconstruction of the initial stress profile from a proper set of observed signals. In this article, we consider an effectively 1D approach, based on the assumption of a Gaussian transverse irradiation source profile and plane acoustic waves, in which the effects of acoustic diffraction are described in terms of a linear integral equation. The respective inverse problem along the beam axis can be cast into a Volterra integral equation of the second kind, for which we explore efficient numerical schemes in order to reconstruct initial stress profiles from observed signals, constituting methodical progress on computational aspects of optoacoustics. In this regard, we explore the validity as well as the limits of the inversion scheme via numerical experiments, with parameters geared toward actual optoacoustic problem instances. The considered inversion input consists of synthetic data, obtained in terms of the effectively 1D approach, and, more generally, a solution of the 3D optoacoustic wave equation. Finally, we also analyze the effect of noise and different detector-to-sample distances on the optoacoustic signal and the reconstructed pressure profiles.
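A generic trapezoidal-rule solver for a Volterra integral equation of the second kind, u(t) = f(t) + ∫₀ᵗ K(t,s) u(s) ds, is sketched below to show the type of numerical scheme involved; the kernel and data here are illustrative test quantities, not the optoacoustic diffraction kernel.

```python
import numpy as np

def solve_volterra2(f, K, t):
    """Solve u(t) = f(t) + int_0^t K(t,s) u(s) ds on the uniform grid t (trapezoidal rule)."""
    n = len(t)
    h = t[1] - t[0]
    u = np.empty(n)
    u[0] = f(t[0])
    for i in range(1, n):
        s = f(t[i]) + 0.5 * h * K(t[i], t[0]) * u[0]
        s += h * sum(K(t[i], t[j]) * u[j] for j in range(1, i))
        # The implicit u[i] term of the trapezoidal sum is moved to the left-hand side.
        u[i] = s / (1.0 - 0.5 * h * K(t[i], t[i]))
    return u

# Test problem with known solution u(t) = exp(t):  u(t) = 1 + int_0^t u(s) ds.
t = np.linspace(0, 1, 201)
u = solve_volterra2(lambda x: 1.0, lambda ti, s: 1.0, t)
print("max error vs exp(t):", np.max(np.abs(u - np.exp(t))))
```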
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Yun; Zhang, Yin
2016-06-08
The mass sensing superiority of a micro/nanomechanical resonator sensor over conventional mass spectrometry has been, or at least is being, firmly established. Because the sensing mechanism of a mechanical resonator sensor is the shift of resonant frequencies, how to link the shifts of resonant frequencies with the material properties of an analyte formulates an inverse problem. Besides the analyte/adsorbate mass, many other factors such as position and axial force can also cause shifts of the resonant frequencies. The in-situ measurement of the adsorbate position and axial force is extremely difficult if not impossible, especially when an adsorbate is as small as a molecule or an atom. Extra instruments are also required. In this study, an inverse problem of using three resonant frequencies to determine the mass, position and axial force is formulated and solved. The accuracy of the inverse problem solving method is demonstrated, and how the method can be used in the real application of a nanomechanical resonator is also discussed. Solving the inverse problem helps the development and application of mechanical resonator sensors in two ways: reducing extra experimental equipment and achieving better mass sensing by considering more factors.
Distorted Born iterative T-matrix method for inversion of CSEM data in anisotropic media
NASA Astrophysics Data System (ADS)
Jakobsen, Morten; Tveit, Svenn
2018-05-01
We present a direct iterative solution to the nonlinear controlled-source electromagnetic (CSEM) inversion problem in the frequency domain, which is based on a volume integral equation formulation of the forward modelling problem in anisotropic conductive media. Our vectorial nonlinear inverse scattering approach effectively replaces an ill-posed nonlinear inverse problem with a series of linear ill-posed inverse problems, for which efficient (regularized) solution methods already exist. The solution updates the dyadic Green's functions from the source to the scattering volume and from the scattering volume to the receivers after each iteration. The T-matrix approach of multiple scattering theory is used for efficient updating of all dyadic Green's functions after each linearized inversion step. This means that we have developed a T-matrix variant of the Distorted Born Iterative (DBI) method, which is often used in the acoustic and electromagnetic (medical) imaging communities as an alternative to contrast-source inversion. The main advantage of using the T-matrix approach in this context is that it eliminates the need to perform a full forward simulation at each iteration of the DBI method, which is known to be consistent with the Gauss-Newton method. The T-matrix allows for a natural domain decomposition, in the sense that a large model can be decomposed into an arbitrary number of domains that can be treated independently and in parallel. The T-matrix we use for efficient model updating is also independent of the source-receiver configuration, which could be an advantage when performing fast-repeat modelling and time-lapse inversion. The T-matrix is also compatible with the use of modern renormalization methods that can potentially help us to reduce the sensitivity of the CSEM inversion results to the starting model. To illustrate the performance and potential of our T-matrix variant of the DBI method for CSEM inversion, we performed numerical experiments based on synthetic CSEM data associated with 2D VTI and 3D orthorhombic model inversions. The results of our numerical experiments suggest that the DBIT method for inversion of CSEM data in anisotropic media is both accurate and efficient.
Viscoelastic material inversion using Sierra-SD and ROL
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walsh, Timothy; Aquino, Wilkins; Ridzal, Denis
2014-11-01
In this report we derive frequency-domain methods for inverse characterization of the constitutive parameters of viscoelastic materials. The inverse problem is cast in a PDE-constrained optimization framework with efficient computation of gradients and Hessian vector products through matrix free operations. The abstract optimization operators for first and second derivatives are derived from first principles. Various methods from the Rapid Optimization Library (ROL) are tested on the viscoelastic inversion problem. The methods described herein are applied to compute the viscoelastic bulk and shear moduli of a foam block model, which was recently used in experimental testing for viscoelastic property characterization.
An optimization method for the problems of thermal cloaking of material bodies
NASA Astrophysics Data System (ADS)
Alekseev, G. V.; Levin, V. A.
2016-11-01
Inverse heat-transfer problems related to constructing special thermal devices such as cloaking shells, thermal-illusion or thermal-camouflage devices, and heat-flux concentrators are studied. The heat-diffusion equation with a variable heat-conductivity coefficient is used as the initial heat-transfer model. An optimization method is used to reduce the above inverse problems to the respective control problems. The solvability of the control problem is proved, an optimality system that describes necessary extremum conditions is derived, and a numerical algorithm for solving the control problem is proposed.
Jing Jin; Dauwels, Justin; Cash, Sydney; Westover, M Brandon
2014-01-01
Detection of interictal discharges is a key element of interpreting EEGs during the diagnosis and management of epilepsy. Because interpretation of clinical EEG data is time-intensive and reliant on experts who are in short supply, there is a great need for automated spike detectors. However, attempts to develop general-purpose spike detectors have so far been severely limited by a lack of expert-annotated data. Huge databases of interictal discharges are therefore in great demand for the development of general-purpose detectors. Detailed manual annotation of interictal discharges is time consuming, which severely limits the willingness of experts to participate. To address such problems, a graphical user interface "SpikeGUI" was developed in our work for the purposes of EEG viewing and rapid interictal discharge annotation. "SpikeGUI" substantially speeds up the task of annotating interictal discharges using a custom-built algorithm based on a combination of template matching and online machine learning techniques. While the algorithm is currently tailored to annotation of interictal epileptiform discharges, it can easily be generalized to other waveforms and signal types.
Yao, Dezhong
2017-03-01
Currently, the average reference is one of the most widely adopted references in EEG and ERP studies. The theoretical assumption is that the surface potential integral of a volume conductor is zero, so the average of scalp potential recordings might be an approximation of the theoretically desired zero reference. However, such a zero-integral assumption has been proved only for a spherical surface. In this short communication, three counter-examples are given to show that the potential integral over the surface of a volume conductor containing a dipole may not be zero; it depends on the shape of the conductor and the orientation of the dipole. On the one hand, this fact means that the average reference is not a theoretical 'gold standard' reference; on the other hand, it reminds us that the practical accuracy of the average reference is determined not only by the well-known electrode array density and coverage but also, intrinsically, by the head shape. Reference selection thus remains a fundamental problem to be resolved in various EEG and ERP studies.
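For context, average referencing in practice is simply the subtraction, at every time sample, of the mean over all electrodes, as in the short sketch below (toy data); the note above concerns whether this operation actually approximates a true zero reference.

```python
import numpy as np

def average_reference(eeg):
    """Re-reference EEG to the average of all channels.

    eeg : array of shape (n_channels, n_samples), recorded against any common reference.
    """
    return eeg - eeg.mean(axis=0, keepdims=True)

rng = np.random.default_rng(8)
eeg = rng.normal(size=(32, 1000))          # 32 channels, 1000 samples (toy data)
reref = average_reference(eeg)
print(np.allclose(reref.mean(axis=0), 0))  # the channel mean is zero at every sample
```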
Pun, Thierry; Alecu, Teodor Iulian; Chanel, Guillaume; Kronegg, Julien; Voloshynovskiy, Sviatoslav
2006-06-01
This paper describes the work being conducted in the domain of brain-computer interaction (BCI) at the Multimodal Interaction Group, Computer Vision and Multimedia Laboratory, University of Geneva, Geneva, Switzerland. The application focus of this work is on multimodal interaction rather than on rehabilitation, that is how to augment classical interaction by means of physiological measurements. Three main research topics are addressed. The first one concerns the more general problem of brain source activity recognition from EEGs. In contrast with classical deterministic approaches, we studied iterative robust stochastic based reconstruction procedures modeling source and noise statistics, to overcome known limitations of current techniques. We also developed procedures for optimal electroencephalogram (EEG) sensor system design in terms of placement and number of electrodes. The second topic is the study of BCI protocols and performance from an information-theoretic point of view. Various information rate measurements have been compared for assessing BCI abilities. The third research topic concerns the use of EEG and other physiological signals for assessing a user's emotional status.
EEG-based driver fatigue detection using hybrid deep generic model.
Phyo Phyo San; Sai Ho Ling; Rifai Chai; Tran, Yvonne; Craig, Ashley; Hung Nguyen
2016-08-01
Classification of electroencephalography (EEG)-based applications is an important process in biomedical engineering. Driver fatigue is a major cause of traffic accidents worldwide and has been considered a significant problem in recent decades. In this paper, a hybrid deep generic model (DGM)-based support vector machine is proposed for accurate detection of driver fatigue. Traditionally, a probabilistic DGM with a deep architecture is quite good at learning invariant features, but it is not always optimal for classification because its trainable parameters lie in the middle layers. Alternatively, the support vector machine (SVM) itself is unable to learn complicated invariances, but produces a good decision surface when applied to well-behaved features. Consolidating unsupervised high-level feature extraction (DGM) with SVM classification makes the integrated framework stronger, with the two parts enhancing each other in feature extraction and classification. The experimental results showed that the proposed DGM-based driver fatigue monitoring system achieves a better testing accuracy of 73.29% with 91.10% sensitivity and 55.48% specificity. In short, the proposed hybrid DGM-based SVM is an effective method for the detection of driver fatigue from EEG.
Jiang, Yizhang; Wu, Dongrui; Deng, Zhaohong; Qian, Pengjiang; Wang, Jun; Wang, Guanjin; Chung, Fu-Lai; Choi, Kup-Sze; Wang, Shitong
2017-12-01
Recognition of epileptic seizures from offline EEG signals is very important in clinical diagnosis of epilepsy. Compared with manual labeling of EEG signals by doctors, machine learning approaches can be faster and more consistent. However, the classification accuracy is usually not satisfactory for two main reasons: the distributions of the data used for training and testing may be different, and the amount of training data may not be enough. In addition, most machine learning approaches generate black-box models that are difficult to interpret. In this paper, we integrate transductive transfer learning, semi-supervised learning and TSK fuzzy system to tackle these three problems. More specifically, we use transfer learning to reduce the discrepancy in data distribution between the training and testing data, employ semi-supervised learning to use the unlabeled testing data to remedy the shortage of training data, and adopt TSK fuzzy system to increase model interpretability. Two learning algorithms are proposed to train the system. Our experimental results show that the proposed approaches can achieve better performance than many state-of-the-art seizure classification algorithms.
EEG Responses to Auditory Stimuli for Automatic Affect Recognition
Hettich, Dirk T.; Bolinger, Elaina; Matuz, Tamara; Birbaumer, Niels; Rosenstiel, Wolfgang; Spüler, Martin
2016-01-01
Brain state classification for communication and control has been well established in the area of brain-computer interfaces over the last decades. Recently, the passive and automatic extraction of additional information regarding the psychological state of users from neurophysiological signals has gained increased attention in the interdisciplinary field of affective computing. We investigated how well specific emotional reactions, induced by auditory stimuli, can be detected in EEG recordings. We introduce an auditory emotion induction paradigm based on the International Affective Digitized Sounds 2nd Edition (IADS-2) database that is also suitable for disabled individuals. Stimuli are grouped in three valence categories: unpleasant, neutral, and pleasant. Significant differences in time-domain event-related potentials are found in the electroencephalogram (EEG) between unpleasant and neutral, as well as pleasant and neutral conditions over midline electrodes. Time-domain data were classified in three binary classification problems using a linear support vector machine (SVM) classifier. We discuss three classification performance measures in the context of affective computing and outline some strategies for conducting and reporting affect classification studies. PMID:27375410
Louis, A. K.
2006-01-01
Many algorithms applied in inverse scattering problems use source-field systems instead of the direct computation of the unknown scatterer. It is well known that the resulting source problem does not have a unique solution, since certain parts of the source totally vanish outside of the reconstruction area. This paper provides for the two-dimensional case special sets of functions, which include all radiating and all nonradiating parts of the source. These sets are used to solve an acoustic inverse problem in two steps. The problem under discussion consists of determining an inhomogeneous obstacle supported in a part of a disc, from data, known for a subset of a two-dimensional circle. In a first step, the radiating parts are computed by solving a linear problem. The second step is nonlinear and consists of determining the nonradiating parts. PMID:23165060
Reconstruction of local perturbations in periodic surfaces
NASA Astrophysics Data System (ADS)
Lechleiter, Armin; Zhang, Ruming
2018-03-01
This paper concerns the inverse scattering problem of reconstructing a local perturbation in a periodic structure. Unlike in purely periodic problems, the periodicity of the scattered field no longer holds, so classical methods, which reduce quasi-periodic fields to one periodic cell, are no longer available. Based on the Floquet-Bloch transform, a numerical method has been developed to solve the direct problem, which opens the possibility of designing an algorithm for the inverse problem. The numerical method introduced in this paper contains two steps. The first step is initialization, that is, locating the support of the perturbation by a simple method; this step reduces the inverse problem in an infinite domain to one periodic cell. The second step is to apply the Newton-CG method to solve the associated optimization problem, with the perturbation approximated in a finite spline basis. Numerical examples are given at the end of this paper, showing the efficiency of the numerical method.
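As a generic illustration of the second, Newton-CG step, the sketch below minimizes a Tikhonov-regularized least-squares objective with scipy's Newton-CG, supplying the gradient and a Hessian-vector product; the linear "forward operator" is a toy stand-in for the scattering map, and all names and sizes are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(9)
m, n = 40, 25
F = rng.normal(size=(m, n))                 # toy linear stand-in for the forward operator
q_true = rng.normal(size=n)                 # spline coefficients of the perturbation
d = F @ q_true + 0.01 * rng.normal(size=m)  # synthetic measured data
alpha = 1e-3                                # Tikhonov regularization weight

def J(q):
    r = F @ q - d
    return 0.5 * r @ r + 0.5 * alpha * q @ q

def grad(q):
    return F.T @ (F @ q - d) + alpha * q

def hessp(q, p):
    # Hessian-vector product, which is what Newton-CG actually needs.
    return F.T @ (F @ p) + alpha * p

res = minimize(J, np.zeros(n), jac=grad, hessp=hessp, method="Newton-CG")
print("relative error:", np.linalg.norm(res.x - q_true) / np.linalg.norm(q_true))
```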
Linear System of Equations, Matrix Inversion, and Linear Programming Using MS Excel
ERIC Educational Resources Information Center
El-Gebeily, M.; Yushau, B.
2008-01-01
In this note, we demonstrate with illustrations two different ways that MS Excel can be used to solve Linear Systems of Equation, Linear Programming Problems, and Matrix Inversion Problems. The advantage of using MS Excel is its availability and transparency (the user is responsible for most of the details of how a problem is solved). Further, we…
[Expressive language disorder and focal paroxysmal activity].
Valdizán, José R; Rodríguez-Mena, Diego; Díaz-Sardi, Mauricio
2011-03-01
In cases of expressive language disorder (ELD), the child is unable to put his or her thoughts into words. There is comorbidity with difficulties in repeating, imitating or naming. Unlike in phonological disorder, there are no problems with pronunciation; ELD may present before the age of three years and is critical between four and seven years of age. Electroencephalogram (EEG) studies have been carried out not only in ELD, but also in clinical pictures where the language disorder was the main symptom or was associated with another neurodevelopmental pathology. We conducted a retrospective study involving a review of 100 patient records, with patients (25 girls and 75 boys) aged between two and six years old who had been diagnosed with ELD (according to the Diagnostic and Statistical Manual of Mental Disorders, fourth edition, text revised) and were free of seizures and not receiving treatment. They underwent an EEG and received treatment with valproic acid if the EEG findings were positive. Only six patients (males) presented localised spike-wave paroxysmal EEG activity in the frontotemporal region. This 6% is higher than the rate found in the normal child population (2%), but lower than the value indicated in the literature for language disorders, which ranges between 20% and 50%. These patients responded positively to the treatment, and both expressive language and EEG findings improved. It is possible that in ELD without paroxysms there may be a dysfunction in the circuit made up of the motor cortex and neostriatum prior to grammatical learning, whereas if there are paroxysms this would point to neuronal hyperactivity in cortical areas, perhaps associated with this dysfunction.
Brankack, J; Stewart, M; Fox, S E
1993-07-02
Single-electrode depth profiles of the hippocampal EEG were made in urethane-anesthetized rats and rats trained in an alternating running/drinking task. Current source density (CSD) was computed from the voltage as a function of depth. A problem inherent to AC-coupled profiles was eliminated by incorporating sustained potential components of the EEG. 'AC' profiles force phasic current sinks to alternate with current sources at each lamina, changing the magnitude and even the sign of the computed membrane current. It was possible to include DC potentials in the profiles from anesthetized rats by using glass micropipettes for recording. A method of 'subtracting' profiles of the non-theta EEG from theta profiles was developed as an approach to including sustained potentials in recordings from freely-moving animals implanted with platinum electrodes. 'DC' profiles are superior to 'AC' profiles for analysis of EEG activity because 'DC'-CSD values can be considered correct in sign and more closely represent the actual membrane current magnitudes. Since hippocampal inputs are laminated, CSD analysis leads to straightforward predictions of the afferents involved. Theta-related activity in afferents from entorhinal neurons, hippocampal interneurons and ipsi- and contralateral hippocampal pyramids all appear to contribute to sources and sinks in CA1 and the dentate area. The largest theta-related generator was a sink at the fissure, having both phasic and tonic components. This sink may reflect activity in afferents from the lateral entorhinal cortex. The phase of the dentate mid-molecular sink suggests that medial entorhinal afferents drive the theta-related granule and pyramidal cell firing. The sustained components may be simply due to different average rates of firing during theta rhythm than during non-theta EEG in afferents whose firing rates are also phasically modulated.
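For reference, the one-dimensional CSD computed from a voltage depth profile is essentially the negative, conductivity-scaled second spatial derivative, approximated here by a central finite difference; the electrode spacing, conductivity, and toy profile are assumptions, not values from the study.

```python
import numpy as np

def csd_1d(phi, dz, sigma=0.3):
    """One-dimensional current source density from a voltage depth profile.

    phi   : potentials at equally spaced depths (V), shape (n_depths,) or (n_depths, n_times)
    dz    : electrode spacing (m)
    sigma : extracellular conductivity (S/m), an assumed constant
    CSD = -sigma * d^2(phi)/dz^2, estimated with a central finite difference.
    """
    d2phi = (phi[:-2] + phi[2:] - 2 * phi[1:-1]) / dz ** 2
    return -sigma * d2phi            # one value per interior depth

depths = np.arange(0, 16) * 50e-6                       # 16 sites, 50 micron spacing (toy)
profile = np.exp(-((depths - 400e-6) / 150e-6) ** 2)    # toy voltage depth profile (V)
print(csd_1d(profile, dz=50e-6))
```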
Buchner, H; Ferbert, A
2016-02-01
In principle, the fourth update of the rules for the procedure to determine the irreversible cessation of function of the cerebrum, the cerebellum and the brainstem confirms the importance of the electroencephalogram (EEG), somatosensory evoked potentials (SEP) and brainstem auditory evoked potentials (BAEP). This paper presents the reliability and validity of the electrophysiological diagnosis, discusses the amendments in the fourth version of the guidelines and introduces the practical application, problems and sources of error. The EEG is the best established supplementary diagnostic method for determining the irreversibility of clinical brain death syndrome. It should be noted that residual brain activity can often persist for many hours after the onset of brain death syndrome, particularly in patients with primary brainstem lesions. The recording and analysis of an EEG require a high level of expertise to safely distinguish artefacts from primary brain activity, and the registration of EEGs to demonstrate the irreversibility of clinical brain death syndrome is extremely time consuming. The BAEPs can only be used to confirm the irreversibility of brain death syndrome in serial examinations or in the rare cases of a sustained wave I or sustained waves I and II. Very often, the investigation cannot be reliably performed because of existing sound conduction disturbances or failure of all potentials even before the onset of clinical brain death syndrome. This explains why BAEPs are only used in exceptional cases. The SEPs of the median nerve can be derived very reliably, are technically simple and have few sources of error. A serial investigation is not required and the examination time is short. For these reasons, SEPs are given preference over EEGs and BAEPs for establishing the irreversibility of clinical brain death syndrome.
Khodayari-Rostamabad, Ahmad; Reilly, James P; Hasey, Gary M; de Bruin, Hubert; Maccrimmon, Duncan J
2013-10-01
The problem of identifying, in advance, the most effective treatment agent for various psychiatric conditions remains an elusive goal. To address this challenge, we investigate the performance of the proposed machine learning (ML) methodology (based on the pre-treatment electroencephalogram (EEG)) for prediction of response to treatment with a selective serotonin reuptake inhibitor (SSRI) medication in subjects suffering from major depressive disorder (MDD). A relatively small number of most discriminating features are selected from a large group of candidate features extracted from the subject's pre-treatment EEG, using a machine learning procedure for feature selection. The selected features are fed into a classifier, which was realized as a mixture of factor analysis (MFA) model, whose output is the predicted response in the form of a likelihood value. This likelihood indicates the extent to which the subject belongs to the responder vs. non-responder classes. The overall method was evaluated using a "leave-n-out" randomized permutation cross-validation procedure. A list of discriminating EEG biomarkers (features) was found. The specificity of the proposed method is 80.9% while sensitivity is 94.9%, for an overall prediction accuracy of 87.9%. There is a 98.76% confidence that the estimated prediction rate is within the interval [75%, 100%]. These results indicate that the proposed ML method holds considerable promise in predicting the efficacy of SSRI antidepressant therapy for MDD, based on a simple and cost-effective pre-treatment EEG. The proposed approach offers the potential to improve the treatment of major depression and to reduce health care costs. Copyright © 2013 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
Automatic identification of epileptic seizures from EEG signals using linear programming boosting.
Hassan, Ahnaf Rashik; Subasi, Abdulhamit
2016-11-01
Computerized epileptic seizure detection is essential for expediting epilepsy diagnosis and research and for assisting medical professionals. Moreover, the implementation of an epilepsy monitoring device that has low power and is portable requires a reliable and successful seizure detection scheme. In this work, the problem of automated epileptic seizure detection using single-channel EEG signals has been addressed. At first, segments of EEG signals are decomposed using a newly proposed signal processing scheme, namely complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN). Six spectral moments are extracted from the CEEMDAN mode functions, and train and test matrices are formed afterward. These matrices are fed into the classifier to identify epileptic seizures from EEG signal segments. In this work, we implement an ensemble learning based machine learning algorithm, namely linear programming boosting (LPBoost), to perform classification. The efficacy of spectral features in the CEEMDAN domain is validated by graphical and statistical analyses. The performance of CEEMDAN is compared to those of its predecessors to further inspect its suitability. The effectiveness and the appropriateness of LPBoost are demonstrated as opposed to the commonly used classification models. Resubstitution and 10-fold cross-validation error analyses confirm the superior algorithmic performance of the proposed scheme. The algorithmic performance of our epilepsy seizure identification scheme is also evaluated against state-of-the-art works in the literature. Experimental outcomes manifest that the proposed seizure detection scheme performs better than the existing works in terms of accuracy, sensitivity, specificity, and Cohen's Kappa coefficient. It can be anticipated that owing to its use of only one channel of EEG signal, the proposed method will be suitable for device implementation, eliminate the onus on clinicians of analyzing a large bulk of data manually, and expedite epilepsy diagnosis. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Automated model selection in covariance estimation and spatial whitening of MEG and EEG signals.
Engemann, Denis A; Gramfort, Alexandre
2015-03-01
Magnetoencephalography and electroencephalography (M/EEG) measure non-invasively the weak electromagnetic fields induced by post-synaptic neural currents. The estimation of the spatial covariance of the signals recorded on M/EEG sensors is a building block of modern data analysis pipelines. Such covariance estimates are used in brain-computer interfaces (BCI) systems, in nearly all source localization methods for spatial whitening as well as for data covariance estimation in beamformers. The rationale for such models is that the signals can be modeled by a zero mean Gaussian distribution. While maximizing the Gaussian likelihood seems natural, it leads to a covariance estimate known as empirical covariance (EC). It turns out that the EC is a poor estimate of the true covariance when the number of samples is small. To address this issue the estimation needs to be regularized. The most common approach downweights off-diagonal coefficients, while more advanced regularization methods are based on shrinkage techniques or generative models with low rank assumptions: probabilistic PCA (PPCA) and factor analysis (FA). Using cross-validation all of these models can be tuned and compared based on Gaussian likelihood computed on unseen data. We investigated these models on simulations, one electroencephalography (EEG) dataset as well as magnetoencephalography (MEG) datasets from the most common MEG systems. First, our results demonstrate that different models can be the best, depending on the number of samples, heterogeneity of sensor types and noise properties. Second, we show that the models tuned by cross-validation are superior to models with hand-selected regularization. Hence, we propose an automated solution to the often overlooked problem of covariance estimation of M/EEG signals. The relevance of the procedure is demonstrated here for spatial whitening and source localization of MEG signals. Copyright © 2015 Elsevier Inc. All rights reserved.
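The model-comparison idea above can be mimicked on toy data with scikit-learn: fit several covariance models on a small training set and score them by the Gaussian log-likelihood of held-out samples. This is a schematic stand-in, not the authors' M/EEG implementation, and all sizes are illustrative.

```python
import numpy as np
from sklearn.covariance import EmpiricalCovariance, LedoitWolf
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(10)
n_channels, rank = 60, 5
A = rng.normal(size=(n_channels, rank))
cov_true = A @ A.T + 0.5 * np.eye(n_channels)        # low-rank + noise covariance
L = np.linalg.cholesky(cov_true)
X_train = (L @ rng.normal(size=(n_channels, 80))).T  # few samples: empirical covariance is poor
X_test = (L @ rng.normal(size=(n_channels, 500))).T

models = {
    "empirical": EmpiricalCovariance().fit(X_train),
    "ledoit-wolf shrinkage": LedoitWolf().fit(X_train),
    "factor analysis (rank 5)": FactorAnalysis(n_components=5).fit(X_train),
}
for name, model in models.items():
    # score() returns the Gaussian log-likelihood of the unseen test data.
    print(f"{name:>26s}: {model.score(X_test):10.2f}")
```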
Frechet derivatives for shallow water ocean acoustic inverse problems
NASA Astrophysics Data System (ADS)
Odom, Robert I.
2003-04-01
For any inverse problem, finding a model fitting the data is only half the problem. Most inverse problems of interest in ocean acoustics yield nonunique model solutions, and involve inevitable trade-offs between model and data resolution and variance. Problems of uniqueness and resolution and variance trade-offs can be addressed by examining the Frechet derivatives of the model-data functional with respect to the model variables. Tarantola [Inverse Problem Theory (Elsevier, Amsterdam, 1987), p. 613] published analytical formulas for the basic derivatives, e.g., derivatives of pressure with respect to elastic moduli and density. Other derivatives of interest, such as the derivative of transmission loss with respect to attenuation, can be easily constructed using the chain rule. For a range independent medium the analytical formulas involve only the Green's function and the vertical derivative of the Green's function for the medium. A crucial advantage of the analytical formulas for the Frechet derivatives over numerical differencing is that they can be computed with a single pass of any program which supplies the Green's function. Various derivatives of interest in shallow water ocean acoustics are presented and illustrated by an application to the sensitivity of measured pressure to shallow water sediment properties. [Work supported by ONR.]
NASA Astrophysics Data System (ADS)
Irving, J.; Koepke, C.; Elsheikh, A. H.
2017-12-01
Bayesian solutions to geophysical and hydrological inverse problems are dependent upon a forward process model linking subsurface parameters to measured data, which is typically assumed to be known perfectly in the inversion procedure. However, in order to make the stochastic solution of the inverse problem computationally tractable using, for example, Markov-chain-Monte-Carlo (MCMC) methods, fast approximations of the forward model are commonly employed. This introduces model error into the problem, which has the potential to significantly bias posterior statistics and hamper data integration efforts if not properly accounted for. Here, we present a new methodology for addressing the issue of model error in Bayesian solutions to hydrogeophysical inverse problems that is geared towards the common case where these errors cannot be effectively characterized globally through some parametric statistical distribution or locally based on interpolation between a small number of computed realizations. Rather than focusing on the construction of a global or local error model, we instead work towards identification of the model-error component of the residual through a projection-based approach. In this regard, pairs of approximate and detailed model runs are stored in a dictionary that grows at a specified rate during the MCMC inversion procedure. At each iteration, a local model-error basis is constructed for the current test set of model parameters using the K-nearest-neighbour entries in the dictionary, which is then used to separate the model error from the other error sources before computing the likelihood of the proposed set of model parameters. We demonstrate the performance of our technique on the inversion of synthetic crosshole ground-penetrating radar traveltime data for three different subsurface parameterizations of varying complexity. The synthetic data are generated using the eikonal equation, whereas a straight-ray forward model is assumed in the inversion procedure. In each case, the developed model-error approach makes it possible to remove posterior bias and obtain a more realistic characterization of uncertainty.
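A compact sketch of the dictionary/projection idea described above, under toy assumptions: the K nearest dictionary entries (in parameter space) supply a local basis for the model error, and the residual component lying in that basis is removed before the likelihood is evaluated. The forward operators, dictionary construction, and sizes here are all illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(11)
n_par, n_data, n_dict, K = 4, 30, 200, 8

G_true = rng.normal(size=(n_data, n_par))                 # "detailed" forward model (toy, linear)
G_apx = G_true + 0.3 * rng.normal(size=(n_data, n_par))   # "approximate" forward model

# Dictionary of parameter sets with paired detailed-minus-approximate responses.
dict_params = rng.normal(size=(n_dict, n_par))
dict_err = dict_params @ (G_true - G_apx).T               # model-error realizations

def corrected_loglike(theta, d_obs, sigma=0.05):
    # 1. K nearest dictionary entries in parameter space.
    idx = np.argsort(np.linalg.norm(dict_params - theta, axis=1))[:K]
    # 2. Orthonormal local model-error basis built from those entries.
    Q, _ = np.linalg.qr(dict_err[idx].T)
    # 3. Remove the model-error component of the residual before the likelihood.
    r = d_obs - G_apx @ theta
    r_clean = r - Q @ (Q.T @ r)
    return -0.5 * np.sum(r_clean ** 2) / sigma ** 2

theta_true = rng.normal(size=n_par)
d_obs = G_true @ theta_true + 0.05 * rng.normal(size=n_data)
print(corrected_loglike(theta_true, d_obs))
```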
NASA Astrophysics Data System (ADS)
Marinoni, Marianna; Delay, Frederick; Ackerer, Philippe; Riva, Monica; Guadagnini, Alberto
2016-08-01
We investigate the effect of considering reciprocal drawdown curves for the characterization of hydraulic properties of aquifer systems through inverse modeling based on interference well testing. Reciprocity implies that drawdown observed in a well B when pumping takes place from well A should strictly coincide with the drawdown observed in A when pumping in B with the same flow rate as in A. In this context, a critical point related to applications of hydraulic tomography is the assessment of the number of available independent drawdown data and their impact on the solution of the inverse problem. The issue arises when inverse modeling relies upon mathematical formulations of the classical single-continuum approach to flow in porous media grounded on Darcy's law. In these cases, introducing reciprocal drawdown curves in the database of an inverse problem is equivalent to duplicate some information, to a certain extent. We present a theoretical analysis of the way a Least-Square objective function and a Levenberg-Marquardt minimization algorithm are affected by the introduction of reciprocal information in the inverse problem. We also investigate the way these reciprocal data, eventually corrupted by measurement errors, influence model parameter identification in terms of: (a) the convergence of the inverse model, (b) the optimal values of parameter estimates, and (c) the associated estimation uncertainty. Our theoretical findings are exemplified through a suite of computational examples focused on block-heterogeneous systems with increased complexity level. We find that the introduction of noisy reciprocal information in the objective function of the inverse problem has a very limited influence on the optimal parameter estimates. Convergence of the inverse problem improves when adding diverse (nonreciprocal) drawdown series, but does not improve when reciprocal information is added to condition the flow model. The uncertainty on optimal parameter estimates is influenced by the strength of measurement errors and it is not significantly diminished or increased by adding noisy reciprocal information.
Rakshasbhuvankar, Abhijeet; Rao, Shripada; Palumbo, Linda; Ghosh, Soumya; Nagarajan, Lakshmi
2017-08-01
This diagnostic accuracy study compared the accuracy of seizure detection by amplitude-integrated electroencephalography with the criterion standard conventional video EEG in term and near-term infants at risk of seizures. Simultaneous recording of amplitude-integrated EEG (2-channel amplitude-integrated EEG with raw trace) and video EEG was done for 24 hours for each infant. Amplitude-integrated EEG was interpreted by a neonatologist; video EEG was interpreted by a neurologist independently. Thirty-five infants were included in the analysis. In the 7 infants with seizures on video EEG, there were 169 seizure episodes on video EEG, of which only 57 were identified by amplitude-integrated EEG. Amplitude-integrated EEG had a sensitivity of 33.7% for individual seizure detection. Amplitude-integrated EEG had an 86% sensitivity for detection of babies with seizures; however, it was nonspecific, in that 50% of infants with seizures detected by amplitude-integrated EEG did not have true seizures by video EEG. In conclusion, our study suggests that amplitude-integrated EEG is a poor screening tool for neonatal seizures.
Anticonvulsant serotonergic and deep brain stimulation in anterior thalamus.
Mirski, Marek A; Ziai, Wendy C; Chiang, Jason; Hinich, Melvin; Sherman, David
2009-01-01
Anterior thalamus (AN) has been shown to mediate seizures in both focal and generalized models. A specific regional increase in AN serotonergic activity was observed following AN-DBS in our pentylenetetrazol (PTZ) rodent model of acute seizures, and this increase may inhibit seizures and contribute to the mechanism of anticonvulsant DBS. Anesthetized rats with AN-directed dialysis cannulae and scalp/depth EEG were infused with PTZ at 5.5 mg/(kg min) until an EEG seizure occurred. Eight experimental groups of AN-dialysis infusion were evaluated: controls (dialysate-only); 10 and 100 microM of the serotonin 5-HT(7) agonist 5-carboxamidotryptamine (5-CT); 1, 10 and 100 microM of the serotonin antagonist methysergide (METH); AN-DBS; and 100 microM METH+AN-DBS. Latency to seizure in control animals was 3120+/-770 s (S.D.); AN-DBS delayed onset to 5018+/-1100 s (p<0.01). AN-directed 5-CT increased latency in a dose-dependent fashion: 3890+/-430 s and 4247+/-528 s (p<0.05). Methysergide had an unexpected protective effect at low dose (3908+/-550 s, p<0.05) but not at 100 microM (2687+/-1079 s). The anticonvulsant action of AN-DBS was blocked by prior dialysis with 100 microM METH. Surface EEG burst counts and nonlinear analysis (H-statistic) revealed significantly (p<0.05) increased pre-ictal epileptiform bursts in the 5-CT and methysergide groups, but not in the DBS group, compared to control. Increased serotonergic activity in AN raised the PTZ seizure threshold, similar to DBS, but without preventing cortical bursting. 5-Carboxamidotryptamine, a 5-HT(7) agonist, demonstrated dose-dependent seizure inhibition. Methysergide proved to have an inverse, dose-dependent agonist property, antagonizing the action of AN-DBS at the highest dose. Anticonvulsant AN-DBS may in part act by selectively altering serotonin neurotransmission to raise the seizure threshold.
Łacka, Katarzyna; Florczak, Jolanta; Gradecka-Kubik, Ilona; Rajewska, Justyna; Junik, Roman
2010-03-01
Lack of thyroid hormones in utero and in the first years of life causes changes in the nervous system and mental retardation. The aim of the study was to assess changes in the peripheral and central nervous system in adult patients with primary congenital hypothyroidism (PCH), depending on the cause of the disease and the regularity of L-thyroxine treatment. The analysis was performed in 29 adult patients with PCH (16 women, 13 men) on the basis of neurological examination, EEG, and single photon emission computed tomography (SPECT) of the brain. Changes in the nervous system were found in 72% of the patients. Patients in whom L-thyroxine replacement therapy was started after 12 months of age showed the most neurological disorders. Abnormalities were found in cranial nerves III, IV, VI and IX. Cerebellar symptoms were revealed in 34% of the patients, while pyramidal and extrapyramidal symptoms were found in 10% and 3%, respectively. EEG showed changes in brain bioelectrical activity in the entire study group. Significant asymmetry in regional cerebral blood flow (rCBF) was found in 83% of the patients, with foci of hypoperfusion located mainly in the occipital regions. The presence of changes in the nervous system was inversely related to the time of treatment initiation and the regularity of therapy. With respect to the cause of PCH, most nervous system abnormalities were found in patients with athyreosis. Brain SPECT in these patients confirmed organic changes in brain development. CONCLUSIONS: The presence and extent of changes in the peripheral and central nervous system depend on the cause of PCH and on the timing and regularity of L-thyroxine treatment. Brain SPECT and EEG studies confirmed the existence of developmental changes of the brain in patients with PCH.
Brain, music, and non-Poisson renewal processes
NASA Astrophysics Data System (ADS)
Bianco, Simone; Ignaccolo, Massimiliano; Rider, Mark S.; Ross, Mary J.; Winsor, Phil; Grigolini, Paolo
2007-06-01
In this paper we show that both music composition and brain function, as revealed by the electroencephalogram (EEG) analysis, are renewal non-Poisson processes living in the nonergodic dominion. To reach this important conclusion we process the data with the minimum spanning tree method, so as to detect significant events, thereby building a sequence of times, which is the time series to analyze. Then we show that in both cases, EEG and music composition, these significant events are the signature of a non-Poisson renewal process. This conclusion is reached using a technique of statistical analysis recently developed by our group, the aging experiment (AE). First, we find that in both cases the distances between two consecutive events are described by nonexponential histograms, thereby proving the non-Poisson nature of these processes. The corresponding survival probabilities Ψ(t) are well fitted by stretched exponentials, Ψ(t) ∝ exp(-(γt)^α) with 0.5 < α < 1. The second step rests on the adoption of AE, which shows that these are renewal processes. We show that the stretched exponential, due to its renewal character, is the emerging tip of an iceberg, whose underwater part has slow tails with an inverse power-law structure with power index μ = 1 + α. Adopting the AE procedure we find that both EEG and music composition yield μ < 2. On the basis of the recently discovered complexity matching effect, according to which a complex system S with μ_S < 2 responds only to a complex driving signal P with μ_P ≤ μ_S, we conclude that the results of our analysis may explain the influence of music on the human brain.
An approach to quantum-computational hydrologic inverse analysis.
O'Malley, Daniel
2018-05-02
Making predictions about flow and transport in an aquifer requires knowledge of the heterogeneous properties of the aquifer such as permeability. Computational methods for inverse analysis are commonly used to infer these properties from quantities that are more readily observable such as hydraulic head. We present a method for computational inverse analysis that utilizes a type of quantum computer called a quantum annealer. While quantum computing is in an early stage compared to classical computing, we demonstrate that it is sufficiently developed that it can be used to solve certain subsurface flow problems. We utilize a D-Wave 2X quantum annealer to solve 1D and 2D hydrologic inverse problems that, while small by modern standards, are similar in size and sometimes larger than hydrologic inverse problems that were solved with early classical computers. Our results and the rapid progress being made with quantum computing hardware indicate that the era of quantum-computational hydrology may not be too far in the future.
Coupling of Large Amplitude Inversion with Other States
NASA Astrophysics Data System (ADS)
Pearson, John; Yu, Shanshan
2016-06-01
The coupling of a large amplitude motion with a small amplitude vibration remains one of the least well characterized problems in molecular physics. Molecular inversion poses a few unique and not intuitively obvious challenges to the large amplitude motion problem. In spite of several decades of theoretical work, numerous challenges persist in the calculation of transition frequencies and, more importantly, intensities. The most challenging aspect of this problem is that the inversion coordinate is a unique function of the overall vibrational state, including both the large and small amplitude modes. As a result, the r-axis system and the meaning of the K quantum number in the rotational basis set are unique to each vibrational state of large or small amplitude motion. This unfortunate reality has profound consequences for the calculation of intensities and for the coupling of nearly degenerate vibrational states. The cases of NH3 inversion and of inversion through a plane of symmetry in alcohols will be examined to find a general path forward.
NASA Astrophysics Data System (ADS)
Fukuda, Jun'ichi; Johnson, Kaj M.
2010-06-01
We present a unified theoretical framework and solution method for probabilistic, Bayesian inversions of crustal deformation data. The inversions involve multiple data sets with unknown relative weights, model parameters that are related linearly or non-linearly through theoretical models to observations, prior information on model parameters, and regularization priors to stabilize underdetermined problems. To efficiently handle non-linear inversions in which some of the model parameters are linearly related to the observations, this method combines both analytical least-squares solutions and a Monte Carlo sampling technique. In this method, model parameters that are linearly and non-linearly related to observations, relative weights of multiple data sets, and relative weights of prior information and regularization priors are determined in a unified Bayesian framework. In this paper, we define the mixed linear-non-linear inverse problem, outline the theoretical basis for the method, provide a step-by-step algorithm for the inversion, validate the inversion method using synthetic data and apply the method to two real data sets. We apply the method to inversions of multiple geodetic data sets with unknown relative data weights for interseismic fault slip and locking depth. We also apply the method to the problem of estimating the spatial distribution of coseismic slip on faults with unknown fault geometry, relative data weights and smoothing regularization weight.
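As a hedged illustration of the mixed linear-non-linear structure described above (the symbols below are editorial and serve only to fix ideas), the observations can be written as

\[ \mathbf{d} = G(\boldsymbol{\psi})\,\mathbf{b} + \boldsymbol{\epsilon}, \]

where b collects the parameters related linearly to the data (e.g. slip), ψ the non-linearly related parameters (e.g. locking depth or fault geometry), and ε the observation errors. For each Monte Carlo sample of ψ, the conditionally optimal b follows analytically from weighted least squares, which is what makes a combined analytical/sampling scheme efficient.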
Generalizations of the subject-independent feature set for music-induced emotion recognition.
Lin, Yuan-Pin; Chen, Jyh-Horng; Duann, Jeng-Ren; Lin, Chin-Teng; Jung, Tzyy-Ping
2011-01-01
Electroencephalogram (EEG)-based emotion recognition has been an intensely growing field. Yet, how to achieve acceptable accuracy with a practical system using as few electrodes as possible has received less attention. This study evaluates a set of subject-independent features, based on differential power asymmetry of symmetric electrode pairs [1], with emphasis on its applicability to subject variability in the music-induced emotion classification problem. The results of this study validate the feasibility of using subject-independent EEG features to classify four emotional states with acceptable accuracy at second-scale temporal resolution. These features could be generalized across subjects to detect emotion induced by music excerpts not limited to the music database that was used to derive the emotion-specific features.
NASA Astrophysics Data System (ADS)
Khachaturov, R. V.
2014-06-01
A mathematical model of X-ray reflection and scattering by multilayered nanostructures in the quasi-optical approximation is proposed. X-ray propagation and the electric field distribution inside the multilayered structure are considered with allowance for refraction, which is taken into account via the second derivative with respect to the depth of the structure. This model is used to demonstrate the possibility of solving inverse problems in order to determine the characteristics of irregularities not only over the depth (as in the one-dimensional problem) but also over the length of the structure. An approximate combinatorial method for system decomposition and composition is proposed for solving the inverse problems.
A Novel Discrete Optimal Transport Method for Bayesian Inverse Problems
NASA Astrophysics Data System (ADS)
Bui-Thanh, T.; Myers, A.; Wang, K.; Thiery, A.
2017-12-01
We present the Augmented Ensemble Transform (AET) method for generating approximate samples from a high-dimensional posterior distribution as a solution to Bayesian inverse problems. Solving large-scale inverse problems is critical for some of the most relevant and impactful scientific endeavors of our time. Therefore, constructing novel methods for solving the Bayesian inverse problem in more computationally efficient ways can have a profound impact on the science community. This research derives the novel AET method for exploring a posterior by solving a sequence of linear programming problems, resulting in a series of transport maps which map prior samples to posterior samples, allowing for the computation of moments of the posterior. We show both theoretical and numerical results, indicating this method can offer superior computational efficiency when compared to other SMC methods. Most of this efficiency is derived from matrix scaling methods to solve the linear programming problem and derivative-free optimization for particle movement. We use this method to determine inter-well connectivity in a reservoir and the associated uncertainty related to certain parameters. The attached file shows the difference between the true parameter and the AET parameter in an example 3D reservoir problem. The error is within the Morozov discrepancy allowance with lower computational cost than other particle methods.
Seismic waveform inversion best practices: regional, global and exploration test cases
NASA Astrophysics Data System (ADS)
Modrak, Ryan; Tromp, Jeroen
2016-09-01
Reaching the global minimum of a waveform misfit function requires careful choices about the nonlinear optimization, preconditioning and regularization methods underlying an inversion. Because waveform inversion problems are susceptible to erratic convergence associated with strong nonlinearity, one or two test cases are not enough to reliably inform such decisions. We identify best practices, instead, using four seismic near-surface problems, one regional problem and two global problems. To make meaningful quantitative comparisons between methods, we carry out hundreds of inversions, varying one aspect of the implementation at a time. Comparing nonlinear optimization algorithms, we find that limited-memory BFGS provides computational savings over nonlinear conjugate gradient methods in a wide range of test cases. Comparing preconditioners, we show that a new diagonal scaling derived from the adjoint of the forward operator provides better performance than two conventional preconditioning schemes. Comparing regularization strategies, we find that projection, convolution, Tikhonov regularization and total variation regularization are effective in different contexts. Besides questions of one strategy or another, reliability and efficiency in waveform inversion depend on close numerical attention and care. Implementation details involving the line search and restart conditions have a strong effect on computational cost, regardless of the chosen nonlinear optimization algorithm.
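The quasi-Newton updates compared above can be driven, in the simplest setting, with an off-the-shelf limited-memory BFGS routine. The sketch below is not the authors' code: the quadratic misfit function is a placeholder standing in for an adjoint-based misfit and gradient evaluation, and the option values are arbitrary.

import numpy as np
from scipy.optimize import minimize

def misfit(m):
    # Placeholder quadratic misfit; in waveform inversion this would run the
    # forward and adjoint solvers and return the data misfit and its gradient.
    d_obs = np.ones_like(m)
    r = m - d_obs
    return 0.5 * r @ r, r

m0 = np.zeros(100)  # starting model
res = minimize(misfit, m0, jac=True, method="L-BFGS-B",
               options={"maxcor": 5, "maxiter": 50})
print(res.fun, res.nit)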
Tamburro, Gabriella; Fiedler, Patrique; Stone, David; Haueisen, Jens; Comani, Silvia
2018-01-01
EEG may be affected by artefacts hindering the analysis of brain signals. Data-driven methods like independent component analysis (ICA) are successful approaches to remove artefacts from the EEG. However, the ICA-based methods developed so far are often affected by limitations, such as: the need for visual inspection of the separated independent components (subjectivity problem) and, in some cases, for the independent and simultaneous recording of the inspected artefacts to identify the artefactual independent components; a potentially heavy manipulation of the EEG signals; the use of linear classification methods; the use of simulated artefacts to validate the methods; no testing in dry electrode or high-density EEG datasets; applications limited to specific conditions and electrode layouts. Our fingerprint method automatically identifies EEG ICs containing eyeblinks, eye movements, myogenic artefacts and cardiac interference by evaluating 14 temporal, spatial, spectral, and statistical features composing the IC fingerprint. Sixty-two real EEG datasets containing cued artefacts are recorded with wet and dry electrodes (128 wet and 97 dry channels). For each artefact, 10 nonlinear SVM classifiers are trained on fingerprints of expert-classified ICs. Training groups include randomly chosen wet and dry datasets decomposed in 80 ICs. The classifiers are tested on the IC-fingerprints of different datasets decomposed into 20, 50, or 80 ICs. The SVM performance is assessed in terms of accuracy, False Omission Rate (FOR), Hit Rate (HR), False Alarm Rate (FAR), and sensitivity ( p ). For each artefact, the quality of the artefact-free EEG reconstructed using the classification of the best SVM is assessed by visual inspection and SNR. The best SVM classifier for each artefact type achieved average accuracy of 1 (eyeblink), 0.98 (cardiac interference), and 0.97 (eye movement and myogenic artefact). Average classification sensitivity (p) was 1 (eyeblink), 0.997 (myogenic artefact), 0.98 (eye movement), and 0.48 (cardiac interference). Average artefact reduction ranged from a maximum of 82% for eyeblinks to a minimum of 33% for cardiac interference, depending on the effectiveness of the proposed method and the amplitude of the removed artefact. The performance of the SVM classifiers did not depend on the electrode type, whereas it was better for lower decomposition levels (50 and 20 ICs). Apart from cardiac interference, SVM performance and average artefact reduction indicate that the fingerprint method has an excellent overall performance in the automatic detection of eyeblinks, eye movements and myogenic artefacts, which is comparable to that of existing methods. Being also independent from simultaneous artefact recording, electrode number, type and layout, and decomposition level, the proposed fingerprint method can have useful applications in clinical and experimental EEG settings.
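A minimal sketch of the SVM classification step described above, assuming the 14-feature IC fingerprints and expert labels have already been computed and stored. The file names and the binary eyeblink/other labelling are hypothetical (the study trains ten classifiers per artefact type), so this illustrates the approach rather than reproducing the authors' implementation.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical files holding 14-dimensional IC fingerprints and expert labels.
X_train = np.load("ic_fingerprints_train.npy")   # shape (n_ICs, 14)
y_train = np.load("ic_labels_train.npy")         # 1 = eyeblink IC, 0 = other

# Nonlinear (RBF-kernel) SVM trained on standardized fingerprint features.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_train, y_train)

X_test = np.load("ic_fingerprints_test.npy")
artifact_ics = np.flatnonzero(clf.predict(X_test) == 1)  # ICs flagged for removal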
NASA Astrophysics Data System (ADS)
Köpke, Corinna; Irving, James; Elsheikh, Ahmed H.
2018-06-01
Bayesian solutions to geophysical and hydrological inverse problems are dependent upon a forward model linking subsurface physical properties to measured data, which is typically assumed to be perfectly known in the inversion procedure. However, to make the stochastic solution of the inverse problem computationally tractable using methods such as Markov-chain-Monte-Carlo (MCMC), fast approximations of the forward model are commonly employed. This gives rise to model error, which has the potential to significantly bias posterior statistics if not properly accounted for. Here, we present a new methodology for dealing with the model error arising from the use of approximate forward solvers in Bayesian solutions to hydrogeophysical inverse problems. Our approach is geared towards the common case where this error cannot be (i) effectively characterized through some parametric statistical distribution; or (ii) estimated by interpolating between a small number of computed model-error realizations. To this end, we focus on identification and removal of the model-error component of the residual during MCMC using a projection-based approach, whereby the orthogonal basis employed for the projection is derived in each iteration from the K-nearest-neighboring entries in a model-error dictionary. The latter is constructed during the inversion and grows at a specified rate as the iterations proceed. We demonstrate the performance of our technique on the inversion of synthetic crosshole ground-penetrating radar travel-time data considering three different subsurface parameterizations of varying complexity. Synthetic data are generated using the eikonal equation, whereas a straight-ray forward model is assumed for their inversion. In each case, our developed approach enables us to remove posterior bias and obtain a more realistic characterization of uncertainty.
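A compact sketch of the dictionary-based projection idea described above, under editorial assumptions about the data structures: the function names, the QR orthogonalization, and the independent Gaussian likelihood are illustrative choices, not the authors' implementation.

import numpy as np
from scipy.spatial import cKDTree

def local_error_basis(dict_params, dict_errors, theta, k=20):
    # dict_params: stored parameter vectors; dict_errors: detailed-minus-approximate
    # data differences for those runs. Returns an orthonormal local error basis.
    tree = cKDTree(dict_params)
    _, idx = tree.query(theta, k=min(k, len(dict_params)))
    E = dict_errors[np.atleast_1d(idx)].T      # columns = nearby error realizations
    Q, _ = np.linalg.qr(E)
    return Q

def corrected_log_likelihood(d_obs, d_approx, Q, sigma):
    # Project the model-error component out of the residual before evaluating
    # an independent Gaussian likelihood with noise level sigma.
    r = d_obs - d_approx
    r_perp = r - Q @ (Q.T @ r)
    return -0.5 * np.sum((r_perp / sigma) ** 2)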
Control and System Theory, Optimization, Inverse and Ill-Posed Problems
1988-09-14
AFOSR-87-0350, 1987-1988. The report summarizes a considerable variety of research investigations within the grant areas (control and system theory, optimization, and ill-posed problems).
Towards adjoint-based inversion for rheological parameters in nonlinear viscous mantle flow
NASA Astrophysics Data System (ADS)
Worthen, Jennifer; Stadler, Georg; Petra, Noemi; Gurnis, Michael; Ghattas, Omar
2014-09-01
We address the problem of inferring mantle rheological parameter fields from surface velocity observations and instantaneous nonlinear mantle flow models. We formulate this inverse problem as an infinite-dimensional nonlinear least squares optimization problem governed by nonlinear Stokes equations. We provide expressions for the gradient of the cost functional of this optimization problem with respect to two spatially varying rheological parameter fields: the viscosity prefactor and the exponent of the second invariant of the strain rate tensor. The adjoint (linearized) Stokes equations, which are characterized by a 4th-order anisotropic viscosity tensor, facilitate efficient computation of the gradient. A quasi-Newton method for the solution of this optimization problem is presented, which requires the repeated solution of both nonlinear forward Stokes and linearized adjoint Stokes equations. For the solution of the nonlinear Stokes equations, we find that Newton's method is significantly more efficient than a Picard fixed point method. Spectral analysis of the inverse operator given by the Hessian of the optimization problem reveals that the numerical eigenvalues collapse rapidly to zero, suggesting a high degree of ill-posedness of the inverse problem. To overcome this ill-posedness, we employ Tikhonov regularization (favoring smooth parameter fields) or total variation (TV) regularization (favoring piecewise-smooth parameter fields). Solutions of two- and three-dimensional finite-element-based model inverse problems show that a constant parameter in the constitutive law can be recovered well from surface velocity observations. Inverting for a spatially varying parameter field leads to its reasonable recovery, in particular close to the surface. When inferring two spatially varying parameter fields, only an effective viscosity field and the total viscous dissipation are recoverable. Finally, a model of a subducting plate shows that a localized weak zone at the plate boundary can be partially recovered, especially with TV regularization.
Deconvolution using a neural network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lehman, S.K.
1990-11-15
Viewing one-dimensional deconvolution as a matrix inversion problem, we compare a neural network backpropagation matrix inverse with LMS and the pseudo-inverse. This is largely an exercise in understanding how our neural network code works. 1 ref.
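The matrix-inversion view of 1-D deconvolution mentioned above can be reproduced in a few lines. The sketch below (not from the report) builds a Toeplitz convolution matrix for an assumed blur kernel and compares the pseudo-inverse solution with a simple LMS-style gradient descent.

import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(0)
h = np.array([0.2, 0.5, 0.2])                                    # assumed blur kernel
x_true = rng.normal(size=64)
H = toeplitz(np.r_[h, np.zeros(61)], np.r_[h[0], np.zeros(63)])  # convolution matrix
y = H @ x_true + 0.01 * rng.normal(size=64)                      # blurred, noisy data

x_pinv = np.linalg.pinv(H) @ y                                   # pseudo-inverse solution

x_lms = np.zeros(64)                                             # LMS-style gradient descent
mu = 0.5 / np.linalg.norm(H, 2) ** 2                             # step size below stability limit
for _ in range(2000):
    x_lms += mu * H.T @ (y - H @ x_lms)

print(np.linalg.norm(x_pinv - x_true), np.linalg.norm(x_lms - x_true))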
Genetics Home Reference: Koolen-de Vries syndrome
... of Koolen-de Vries syndrome, has undergone an inversion. An inversion involves two breaks in a chromosome; the resulting ... lineage have no health problems related to the inversion. However, genetic material can be lost or duplicated ...
Three-dimensional inversion of multisource array electromagnetic data
NASA Astrophysics Data System (ADS)
Tartaras, Efthimios
Three-dimensional (3-D) inversion is increasingly important for the correct interpretation of geophysical data sets in complex environments. To this effect, several approximate solutions have been developed that allow the construction of relatively fast inversion schemes. One such method that is fast and provides satisfactory accuracy is the quasi-linear (QL) approximation. It has, however, the drawback that it is source-dependent and, therefore, impractical in situations where multiple transmitters in different positions are employed. I have, therefore, developed a localized form of the QL approximation that is source-independent. This so-called localized quasi-linear (LQL) approximation can have a scalar, a diagonal, or a full tensor form. Numerical examples of its comparison with the full integral equation solution, the Born approximation, and the original QL approximation are given. The objective behind developing this approximation is to use it in a fast 3-D inversion scheme appropriate for multisource array data such as those collected in airborne surveys, cross-well logging, and other similar geophysical applications. I have developed such an inversion scheme using the scalar and diagonal LQL approximation. It reduces the original nonlinear inverse electromagnetic (EM) problem to three linear inverse problems. The first of these problems is solved using a weighted regularized linear conjugate gradient method, whereas the last two are solved in the least squares sense. The algorithm I developed provides the option of obtaining either smooth or focused inversion images. I have applied the 3-D LQL inversion to synthetic 3-D EM data that simulate a helicopter-borne survey over different earth models. The results demonstrate the stability and efficiency of the method and show that the LQL approximation can be a practical solution to the problem of 3-D inversion of multisource array frequency-domain EM data. I have also applied the method to helicopter-borne EM data collected by INCO Exploration over the Voisey's Bay area in Labrador, Canada. The results of the 3-D inversion successfully delineate the shallow massive sulfides and show that the method can produce reasonable results even in areas of complex geology and large resistivity contrasts.
NASA Astrophysics Data System (ADS)
Horesh, L.; Haber, E.
2009-09-01
The ℓ1 minimization problem has been studied extensively in the past few years. Recently, there has been a growing interest in its application to inverse problems. Most studies have concentrated on devising ways for sparse representation of a solution using a given prototype dictionary. Very few studies have addressed the more challenging problem of optimal dictionary construction, and even these were primarily devoted to the simplistic sparse coding application. In this paper, a sensitivity analysis of the inverse solution with respect to the dictionary is presented. This analysis reveals some of the salient features and intrinsic difficulties which are associated with the dictionary design problem. Equipped with these insights, we propose an optimization strategy that alleviates these hurdles while utilizing the derived sensitivity relations for the design of a locally optimal dictionary. Our optimality criterion is based on local minimization of the Bayesian risk, given a set of training models. We present a mathematical formulation and an algorithmic framework to achieve this goal. The proposed framework offers the design of dictionaries for inverse problems that incorporate non-trivial, non-injective observation operators, where the data and the recovered parameters may reside in different spaces. We test our algorithm and show that it yields improved dictionaries for a diverse set of inverse problems in geophysics and medical imaging.
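For orientation, a generic form of the dictionary-based sparse inverse problem discussed above can be written as follows (editorial notation, intended only to fix the setting):

\[ \hat{\mathbf{x}} = \arg\min_{\mathbf{x}} \tfrac{1}{2}\,\bigl\| \mathbf{d} - K D\,\mathbf{x} \bigr\|_{2}^{2} + \lambda \,\|\mathbf{x}\|_{1}, \qquad \hat{\mathbf{m}} = D\,\hat{\mathbf{x}}, \]

where K is the (possibly non-injective) observation operator, D the dictionary, x the sparse coefficient vector, and m the recovered model. The dictionary-design problem then seeks the D that minimizes an expected (Bayesian) reconstruction risk over a set of training models.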
NASA Astrophysics Data System (ADS)
Wu, Jianping; Geng, Xianguo
2017-12-01
The inverse scattering transform of the coupled modified Korteweg-de Vries equation is studied by the Riemann-Hilbert approach. In the direct scattering process, the spectral analysis of the Lax pair is performed, from which a Riemann-Hilbert problem is established for the equation. In the inverse scattering process, by solving Riemann-Hilbert problems corresponding to the reflectionless cases, three types of multi-soliton solutions are obtained. The multi-soliton classification is based on the zero structures of the Riemann-Hilbert problem. In addition, some figures are given to illustrate the soliton characteristics of the coupled modified Korteweg-de Vries equation.
Individual differences in children's understanding of inversion and arithmetical skill.
Gilmore, Camilla K; Bryant, Peter
2006-06-01
In order to develop arithmetic expertise, children must understand arithmetic principles, such as the inverse relationship between addition and subtraction, in addition to learning calculation skills. We report two experiments that investigate children's understanding of the principle of inversion and the relationship between their conceptual understanding and arithmetical skills. A group of 127 children from primary schools took part in the study. The children were from two age groups (6-7 and 8-9 years). Children's accuracy on inverse and control problems in a variety of presentation formats and in canonical and non-canonical forms was measured. Tests of general arithmetic ability were also administered. Children consistently performed better on inverse than control problems, which indicates that they could make use of the inverse principle. Presentation format affected performance: picture presentation allowed children to apply their conceptual understanding flexibly regardless of the problem type, while word problems restricted their ability to use their conceptual knowledge. Cluster analyses revealed three subgroups with different profiles of conceptual understanding and arithmetical skill. Children in the 'high ability' and 'low ability' groups showed conceptual understanding that was in line with their arithmetical skill, whilst a third group of children had more advanced conceptual understanding than arithmetical skill. The three subgroups may represent different points along a single developmental path or distinct developmental paths. The discovery of the existence of the three groups has important consequences for education. It demonstrates the importance of considering the pattern of individual children's conceptual understanding and problem-solving skills.
NASA Astrophysics Data System (ADS)
Kamynin, V. L.; Bukharova, T. I.
2017-01-01
We prove estimates of stability with respect to perturbations of the input data for the solutions of inverse problems for degenerate parabolic equations with unbounded coefficients. An important feature of these estimates is that the constants in them are written out explicitly in terms of the input data of the problem.
THE SUCCESSIVE LINEAR ESTIMATOR: A REVISIT. (R827114)
This paper examines the theoretical basis of the successive linear estimator (SLE) that has been developed for the inverse problem in subsurface hydrology. We show that the SLE algorithm is a non-linear iterative estimator to the inverse problem. The weights used in the SLE al...
ON THE GEOSTATISTICAL APPROACH TO THE INVERSE PROBLEM. (R825689C037)
The geostatistical approach to the inverse problem is discussed with emphasis on the importance of structural analysis. Although the geostatistical approach is occasionally misconstrued as mere cokriging, in fact it consists of two steps: estimation of statist...
On a local solvability and stability of the inverse transmission eigenvalue problem
NASA Astrophysics Data System (ADS)
Bondarenko, Natalia; Buterin, Sergey
2017-11-01
We prove a local solvability and stability of the inverse transmission eigenvalue problem posed by McLaughlin and Polyakov (1994 J. Diff. Equ. 107 351-82). In particular, this result establishes the minimality of the data used therein. The proof is constructive.
Cosandier-Rimélé, D; Ramantani, G; Zentner, J; Schulze-Bonhage, A; Dümpelmann, M
2017-10-01
Electrical source localization (ESL) deriving from scalp EEG and, in recent years, from intracranial EEG (iEEG), is an established method in epilepsy surgery workup. We aimed to validate the distributed ESL derived from scalp EEG and iEEG, particularly regarding the spatial extent of the source, using a realistic epileptic spike activity simulator. ESL was applied to the averaged scalp EEG and iEEG spikes of two patients with drug-resistant structural epilepsy. The ESL results for both patients were used to outline the location and extent of epileptic cortical patches, which served as the basis for designing a spatiotemporal source model. EEG signals for both modalities were then generated for different anatomic locations and spatial extents. ESL was subsequently performed on simulated signals with sLORETA, a commonly used distributed algorithm. ESL accuracy was quantitatively assessed for iEEG and scalp EEG. The source volume was overestimated by sLORETA at both EEG scales, with the error increasing with source size, particularly for iEEG. For larger sources, ESL accuracy drastically decreased, and reconstruction volumes shifted to the center of the head for iEEG, while remaining stable for scalp EEG. Overall, the mislocalization of the reconstructed source was more pronounced for iEEG. We present a novel multiscale framework for the evaluation of distributed ESL, based on realistic multiscale EEG simulations. Our findings support that reconstruction results for scalp EEG are often more accurate than for iEEG, owing to the superior 3D coverage of the head. Particularly the iEEG-derived reconstruction results for larger, widespread generators should be treated with caution.
NLSE: Parameter-Based Inversion Algorithm
NASA Astrophysics Data System (ADS)
Sabbagh, Harold A.; Murphy, R. Kim; Sabbagh, Elias H.; Aldrin, John C.; Knopp, Jeremy S.
Chapter 11 introduced us to the notion of an inverse problem and gave us some examples of the value of this idea to the solution of realistic industrial problems. The basic inversion algorithm described in Chap. 11 was based upon the Gauss-Newton theory of nonlinear least-squares estimation and is called NLSE in this book. In this chapter we will develop the mathematical background of this theory more fully, because this algorithm will be the foundation of inverse methods and their applications during the remainder of this book. We hope, thereby, to introduce the reader to the application of sophisticated mathematical concepts to engineering practice without introducing excessive mathematical sophistication.
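For reference, the standard Gauss-Newton update underlying such a nonlinear least-squares inversion can be written as follows (textbook form, not a quotation from the chapter):

\[ \mathbf{p}_{k+1} = \mathbf{p}_{k} + \left( J_{k}^{\mathsf{T}} J_{k} \right)^{-1} J_{k}^{\mathsf{T}} \left( \mathbf{d} - F(\mathbf{p}_{k}) \right), \qquad J_{k} = \left. \frac{\partial F}{\partial \mathbf{p}} \right|_{\mathbf{p}_{k}}, \]

where d is the measured data, F(p) the forward-model prediction, and J_k the Jacobian at the current parameter estimate; a Levenberg-Marquardt damping term is commonly added when J_k^T J_k is ill-conditioned.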
Zhukovsky, K
2014-01-01
We present a general method of operational nature to analyze and obtain solutions for a variety of equations of mathematical physics and related mathematical problems. We construct inverse differential operators and produce operational identities, involving inverse derivatives and families of generalised orthogonal polynomials, such as the Hermite and Laguerre polynomial families. We develop the methodology of inverse and exponential operators, employing them for the study of partial differential equations. Advantages of the operational technique, combined with the use of integral transforms, generating functions with exponentials and their integrals, for solving a wide class of partial differential equations related to heat, wave, and transport problems, are demonstrated.
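Two standard identities of this operational calculus illustrate the idea; they are textbook relations given here for orientation rather than formulas quoted from the paper:

\[ \hat{D}_{x}^{-1} f(x) = \int_{0}^{x} f(\xi)\,\mathrm{d}\xi, \qquad e^{\,t\,\partial_{x}^{2}} f(x) = \frac{1}{2\sqrt{\pi t}} \int_{-\infty}^{\infty} e^{-(x-\xi)^{2}/(4t)}\, f(\xi)\,\mathrm{d}\xi, \]

so that u(x,t) = e^{t ∂_x^2} f(x) solves the heat problem ∂_t u = ∂_x^2 u with u(x,0) = f(x), which is the sense in which exponential operators yield solutions of heat-type equations.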
A comparison of continuous video-EEG monitoring and 30-minute EEG in an ICU.
Khan, Omar I; Azevedo, Christina J; Hartshorn, Alendia L; Montanye, Justin T; Gonzalez, Juan C; Natola, Mark A; Surgenor, Stephen D; Morse, Richard P; Nordgren, Richard E; Bujarski, Krzysztof A; Holmes, Gregory L; Jobst, Barbara C; Scott, Rod C; Thadani, Vijay M
2014-12-01
To determine whether there is added benefit in detecting electrographic abnormalities from 16-24 hours of continuous video-EEG in adult medical/surgical ICU patients, compared to a 30-minute EEG. This was a prospectively enroled non-randomized study of 130 consecutive ICU patients for whom EEG was requested. For 117 patients, a 30-minute EEG was requested for altered mental state and/or suspected seizures; 83 patients continued with continuous video-EEG for 16-24 hours and 34 patients had only the 30-minute EEG. For 13 patients with prior seizures, continuous video-EEG was requested and was carried out for 16-24 hours. We gathered EEG data prospectively, and reviewed the medical records retrospectively to assess the impact of continuous video-EEG. A total of 83 continuous video-EEG recordings were performed for 16-24 hours beyond 30 minutes of routine EEG. All were slow, and 34% showed epileptiform findings in the first 30 minutes, including 2% with seizures. Over 16-24 hours, 14% developed new or additional epileptiform abnormalities, including 6% with seizures. In 8%, treatment was changed based on continuous video-EEG. Among the 34 EEGs limited to 30 minutes, almost all were slow and 18% showed epileptiform activity, including 3% with seizures. Among the 13 patients with known seizures, continuous video-EEG was slow in all and 69% had epileptiform abnormalities in the first 30 minutes, including 31% with seizures. An additional 8% developed epileptiform abnormalities over 16-24 hours. In 46%, treatment was changed based on continuous video-EEG. This study indicates that if continuous video-EEG is not available, a 30-minute EEG in the ICU has a substantial diagnostic yield and will lead to the detection of the majority of epileptiform abnormalities. In a small percentage of patients, continuous video-EEG will lead to the detection of additional epileptiform abnormalities. In a sub-population, with a history of seizures prior to the initiation of EEG recording, the benefits of continuous video-EEG in monitoring seizure activity and influencing treatment may be greater.
Solving inversion problems with neural networks
NASA Technical Reports Server (NTRS)
Kamgar-Parsi, Behzad; Gualtieri, J. A.
1990-01-01
A class of inverse problems in remote sensing can be characterized by Q = F(x), where F is a nonlinear and noninvertible (or hard to invert) operator, and the objective is to infer the unknowns, x, from the observed quantities, Q. Since the number of observations is usually greater than the number of unknowns, these problems are formulated as optimization problems, which can be solved by a variety of techniques. The feasibility of neural networks for solving such problems is presently investigated. As an example, the problem of finding the atmospheric ozone profile from measured ultraviolet radiances is studied.
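Written out explicitly (editorial notation), the overdetermined formulation referred to above is

\[ \hat{\mathbf{x}} = \arg\min_{\mathbf{x}\in\mathbb{R}^{N}} \sum_{i=1}^{M} \bigl[\,Q_{i} - F_{i}(\mathbf{x})\,\bigr]^{2}, \qquad M > N, \]

which is the objective handed to the neural network, or to any other optimizer, in such retrieval problems.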
Approximation of the ruin probability using the scaled Laplace transform inversion
Mnatsakanov, Robert M.; Sarkisian, Khachatur; Hakobyan, Artak
2015-01-01
The problem of recovering the ruin probability in the classical risk model based on the scaled Laplace transform inversion is studied. It is shown how to overcome the problem of evaluating the ruin probability at large values of an initial surplus process. Comparisons of proposed approximations with the ones based on the Laplace transform inversions using a fixed Talbot algorithm as well as on the ones using the Trefethen–Weideman–Schmelzer and maximum entropy methods are presented via a simulation study. PMID:26752796
Inverse problems and coherence
NASA Astrophysics Data System (ADS)
Baltes, H. P.; Ferwerda, H. A.
1981-03-01
A summary of current inverse problems of statistical optics is presented together with a short guide to the pertinent review-type literature. The retrieval of structural information from the far-zone degree of coherence and the average intensity distribution of radiation scattered by a superposition of random and periodic scatterers is discussed.
Inverse transport problems in quantitative PAT for molecular imaging
NASA Astrophysics Data System (ADS)
Ren, Kui; Zhang, Rongting; Zhong, Yimin
2015-12-01
Fluorescence photoacoustic tomography (fPAT) is a molecular imaging modality that combines photoacoustic tomography with fluorescence imaging to obtain high-resolution imaging of fluorescence distributions inside heterogeneous media. The objective of this work is to study inverse problems in the quantitative step of fPAT where we intend to reconstruct physical coefficients in a coupled system of radiative transport equations using internal data recovered from ultrasound measurements. We derive uniqueness and stability results on the inverse problems and develop some efficient algorithms for image reconstructions. Numerical simulations based on synthetic data are presented to validate the theoretical analysis. The results we present here complement those in Ren K and Zhao H (2013 SIAM J. Imaging Sci. 6 2024-49) on the same problem but in the diffusive regime.
Application of quasi-distributions for solving inverse problems of neutron and γ-ray transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pogosbekyan, L.R.; Lysov, D.A.
The considered inverse problems deal with the calculation of unknown parameters of nuclear installations by means of known (goal) functionals of neutron/γ-ray distributions. Examples of such problems include the calculation of automatic control rod positions as a function of neutron sensor readings, or the calculation of experimentally corrected values of cross-sections, isotope concentrations, and fuel enrichment via the measured functionals. The authors have developed a new method to solve the inverse problem. It finds the flux density as a quasi-solution of the particle-conservation linear system adjoined to the equalities for the functionals. The method is more effective than the one based on the classical perturbation theory. It is suitable for vectorization and can be used successfully in optimization codes.
SIAM conference on inverse problems: Geophysical applications. Final technical report
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1995-12-31
This conference was the second in a series devoted to a particular area of inverse problems. The theme of this series is to discuss problems of major scientific importance in a specific area from a mathematical perspective. The theme of this symposium was geophysical applications. In putting together the program we tried to include a wide range of mathematical scientists and to interpret geophysics in as broad a sense as possible. Our speakers came from industry, government laboratories, and diverse departments in academia. We managed to attract a geographically diverse audience with participation from five continents. There were talks devoted to seismology, hydrology, determination of the earth's interior on a global scale, as well as oceanographic and atmospheric inverse problems.
Inverse problems and optimal experiment design in unsteady heat transfer processes identification
NASA Technical Reports Server (NTRS)
Artyukhin, Eugene A.
1991-01-01
Experimental-computational methods for estimating characteristics of unsteady heat transfer processes are analyzed. The methods are based on the principles of distributed parameter system identification. The theoretical basis of such methods is the numerical solution of nonlinear ill-posed inverse heat transfer problems and optimal experiment design problems. Numerical techniques for solving problems are briefly reviewed. The results of the practical application of identification methods are demonstrated when estimating effective thermophysical characteristics of composite materials and thermal contact resistance in two-layer systems.
Inverse problem of the vibrational band gap of periodically supported beam
NASA Astrophysics Data System (ADS)
Shi, Xiaona; Shu, Haisheng; Dong, Fuzhen; Zhao, Lei
2017-04-01
Research on periodic structures has a long history, with the main focus confined to the forward problem. In this paper, the inverse problem is considered and an overall framework is proposed which includes two main stages, i.e., the band gap criterion and its optimization. As a preliminary investigation, the inverse problem of the flexural vibrational band gap of a periodically supported beam is analyzed. According to existing knowledge of its forward problem, the band gap criterion is given in implicit form. Then, two cases with three independent parameters, namely the double-supported case and the triple-supported one, are studied in detail and explicit expressions for the feasible domain are constructed by numerical fitting. Finally, the parameter optimization of the double-supported case with three variables is conducted using a genetic algorithm, aiming for the best mean attenuation within a specified frequency band.
2013-01-01
There has been a dramatic change in hospital care of cardiac arrest survivors in recent years, including the use of target temperature management (hypothermia). Clinical signs of recovery or deterioration, which previously could be observed, are now concealed by sedation, analgesia, and muscle paralysis. Seizures are common after cardiac arrest, but few centers can offer high-quality electroencephalography (EEG) monitoring around the clock. This is due primarily to its complexity and lack of resources but also to uncertainty regarding the clinical value of monitoring EEG and of treating post-ischemic electrographic seizures. Thanks to technical advances in recent years, EEG monitoring has become more available. Large amounts of EEG data can be linked within a hospital or between neighboring hospitals for expert opinion. Continuous EEG (cEEG) monitoring provides dynamic information and can be used to assess the evolution of EEG patterns and to detect seizures. cEEG can be made more simple by reducing the number of electrodes and by adding trend analysis to the original EEG curves. In our version of simplified cEEG, we combine a reduced montage, displaying two channels of the original EEG, with amplitude-integrated EEG trend curves (aEEG). This is a convenient method to monitor cerebral function in comatose patients after cardiac arrest but has yet to be validated against the gold standard, a multichannel cEEG. We recently proposed a simplified system for interpreting EEG rhythms after cardiac arrest, defining four major EEG patterns. In this topical review, we will discuss cEEG to monitor brain function after cardiac arrest in general and how a simplified cEEG, with a reduced number of electrodes and trend analysis, may facilitate and improve care. PMID:23876221
USDA-ARS?s Scientific Manuscript database
To determine the influence of a morning meal on complex mental functions in children (8-11 y), time-frequency analyses were applied to electroencephalographic (EEG) activity recorded while children solved simple addition problems after an overnight fast and again after having either eaten or skipped...
USDA-ARS?s Scientific Manuscript database
Are there effects of morning nutrition on brain functions important for learning and performance in children? We used time-frequency analyses of EEG activity recorded while children solved simple math problems to study how brain processes were influenced by eating or skipping breakfast. Participants...