Preconditioned Krylov subspace methods for eigenvalue problems
Wu, Kesheng; Saad, Y.; Stathopoulos, A.
1996-12-31
The Lanczos algorithm is a commonly used method for finding a few extreme eigenvalues of symmetric matrices. It is effective if the wanted eigenvalues have large relative separations. If the separations are small, several alternatives are often used, including the shift-invert Lanczos method, the preconditioned Lanczos method, and the Davidson method. The shift-invert Lanczos method requires a direct factorization of the matrix, which is often impractical if the matrix is large. In these cases preconditioned schemes are preferred. Many applications require the solution of hundreds or thousands of eigenvalues of large sparse matrices, which poses serious challenges for both the iterative eigenvalue solver and the preconditioner. In this paper we explore several preconditioned eigenvalue solvers and identify the ones best suited for finding a large number of eigenvalues. The methods discussed in this paper make up the core of a preconditioned eigenvalue toolkit under construction.
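As background for the Lanczos-based discussion above, here is a minimal, self-contained sketch (not the toolkit described in the abstract) of how plain Lanczos recovers a well-separated extreme eigenvalue in few steps; the diagonal test matrix and all parameters are illustrative choices:

```python
import numpy as np

def lanczos_ritz(A, m=20, seed=0):
    """Plain Lanczos with full reorthogonalization: build an orthonormal
    basis of span{v, Av, ..., A^(m-1)v} and return the Ritz values
    (eigenvalue estimates) of the tridiagonal projection T."""
    n = A.shape[0]
    v = np.random.default_rng(seed).standard_normal(n)
    v /= np.linalg.norm(v)
    V = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(m - 1)
    V[:, 0] = v
    w = A @ v
    alpha[0] = v @ w
    w -= alpha[0] * v
    for j in range(1, m):
        beta[j - 1] = np.linalg.norm(w)
        V[:, j] = w / beta[j - 1]
        w = A @ V[:, j] - beta[j - 1] * V[:, j - 1]
        alpha[j] = V[:, j] @ w
        w -= alpha[j] * V[:, j]
        w -= V[:, :j + 1] @ (V[:, :j + 1].T @ w)   # reorthogonalize
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return np.linalg.eigvalsh(T)

# A large relative separation (10 -> 100) makes the extreme Ritz value
# converge in very few steps, as the abstract notes.
A = np.diag(np.concatenate([np.linspace(1.0, 10.0, 199), [100.0]]))
ritz = lanczos_ritz(A, m=20)
print(ritz[-1])  # ~100, the well-separated extreme eigenvalue
```

With a small separation at the top of the spectrum, the same 20 steps would leave a visible error, which is precisely the regime where the preconditioned alternatives above are preferred.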
Preserving Symmetry in Preconditioned Krylov Subspace Methods
NASA Technical Reports Server (NTRS)
Chan, Tony F.; Chow, E.; Saad, Y.; Yeung, M. C.
1996-01-01
We consider the problem of solving a linear system Ax = b when A is nearly symmetric and when the system is preconditioned by a symmetric positive definite matrix M. In the symmetric case, one can recover symmetry by using M-inner products in the conjugate gradient (CG) algorithm. This idea can also be used in the nonsymmetric case, and near symmetry can be preserved similarly. Like CG, the new algorithms are mathematically equivalent to split preconditioning, but do not require M to be factored. Better robustness in a specific sense can also be observed. When combined with truncated versions of iterative methods, tests show that this is more effective than the common practice of forfeiting near-symmetry altogether.
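In the symmetric case, the M-inner-product idea above reduces to the familiar preconditioned CG recurrences, which apply M^{-1} but never factor M. A minimal sketch follows (generic PCG, not the paper's nonsymmetric variants; the tridiagonal test matrix and the Jacobi choice of M are illustrative):

```python
import numpy as np

def pcg(A, b, Minv, tol=1e-8, maxit=1000):
    """Preconditioned CG: CG in the M-inner product, mathematically
    equivalent to split preconditioning with M = L L^T, but only the
    action of M^{-1} is needed -- M itself is never factored."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = Minv(r)
    p = z.copy()
    rz = r @ z
    for k in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, k + 1
        z = Minv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit

# SPD test problem: 1-D Laplacian plus a varying diagonal shift,
# preconditioned by its own diagonal (Jacobi) as the SPD matrix M.
n = 100
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1) \
    + np.diag(np.linspace(0.0, 5.0, n))
b = np.ones(n)
d = np.diag(A)
x, its = pcg(A, b, lambda r: r / d)
print(its, np.linalg.norm(A @ x - b))
```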
Pipelined Flexible Krylov Subspace Methods
NASA Astrophysics Data System (ADS)
Sanan, Patrick; Schnepp, Sascha M.; May, Dave A.
2015-04-01
State-of-the-art geophysical forward models expend most of their computational resources solving large, sparse linear systems. To date, preconditioned Krylov subspace methods have proven to be the only algorithmically scalable approach to solving these systems. However, at `extreme scale', the global reductions required by the inner products within these algorithms become a computational bottleneck, and it becomes advantageous to use pipelined Krylov subspace methods. These allow overlap of global reductions with other work, at the expense of using more storage and local computational effort, including overhead required to synchronize overlapping work. An impediment to using currently-available pipelined solvers for relevant geophysical forward modeling is that they are not `flexible', meaning that they cannot support nonlinear or varying preconditioners. Such preconditioners are effective for solving challenging linear systems, notably those arising from modelling of Stokes flow with highly heterogeneous viscosity structure. To this end, we introduce, for the first time, Krylov subspace methods which are both pipelined and flexible. We implement and demonstrate pipelined, flexible Conjugate Gradient, GMRES, and Conjugate Residual methods, which will be made publicly available via the open source PETSc library. Our algorithms are nontrivial modifications of the flexible methods they are based on (that is, they are not equivalent in exact arithmetic), so we analyze them mathematically and through a number of numerical experiments employing multi-level preconditioners. We highlight the benefits of these algorithms by solving variable viscosity Stokes problems directly relevant to lithospheric dynamics.
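For reference, the "flexible" property discussed above means the preconditioner may change at every iteration, which forces the preconditioned directions Z to be stored alongside the Arnoldi basis. Below is a minimal flexible GMRES sketch in the spirit of Saad's FGMRES; the pipelining itself (the paper's contribution) is not shown, and the matrix and the alternating scalings that stand in for a varying preconditioner are illustrative:

```python
import numpy as np

def fgmres(A, b, precs, m):
    """Flexible GMRES: unlike right-preconditioned GMRES, the
    preconditioner may differ at every iteration, so the preconditioned
    vectors Z must be stored. `precs(j, v)` applies the j-th
    (possibly varying) preconditioner to v."""
    n = len(b)
    V = np.zeros((n, m + 1))
    Z = np.zeros((n, m))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    V[:, 0] = b / beta
    for j in range(m):
        Z[:, j] = precs(j, V[:, j])
        w = A @ Z[:, j]
        for i in range(j + 1):          # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m + 1)
    e1[0] = beta
    y, *_ = np.linalg.lstsq(H, e1, rcond=None)  # minimal-residual step
    return Z @ y                                # x = x0 + Z y, x0 = 0

# Varying preconditioner: alternate between two Jacobi-like scalings.
rng = np.random.default_rng(1)
n = 60
A = np.eye(n) * 4 + rng.standard_normal((n, n)) * 0.1
b = rng.standard_normal(n)
x = fgmres(A, b, lambda j, v: v / (4.0 + 0.01 * (j % 2)), m=40)
print(np.linalg.norm(A @ x - b))
```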
Krylov subspace methods on supercomputers
NASA Technical Reports Server (NTRS)
Saad, Youcef
1988-01-01
A short survey of recent research on Krylov subspace methods with emphasis on implementation on vector and parallel computers is presented. Conjugate gradient methods have proven very useful on traditional scalar computers, and their popularity is likely to increase as three-dimensional models gain importance. A conservative approach to derive effective iterative techniques for supercomputers has been to find efficient parallel/vector implementations of the standard algorithms. The main source of difficulty in the incomplete factorization preconditionings is in the solution of the triangular systems at each step. A few approaches consisting of implementing efficient forward and backward triangular solutions are described in detail. Polynomial preconditioning as an alternative to standard incomplete factorization techniques is also discussed. Another efficient approach is to reorder the equations so as to improve the structure of the matrix to achieve better parallelism or vectorization. An overview of these and other ideas and their effectiveness or potential for different types of architectures is given.
NASA Astrophysics Data System (ADS)
Li, Liang; Huang, Ting-Zhu; Jing, Yan-Fei; Zhang, Yong
2010-02-01
The incomplete Cholesky (IC) factorization preconditioning technique is applied to Krylov subspace methods for solving the large systems of linear equations resulting from the use of the edge-based finite element method (FEM). The construction of the preconditioner is based on the fact that the coefficient matrix is represented in an upper triangular compressed sparse row (CSR) form. An efficient implementation of the IC factorization is described in detail for complex symmetric matrices. With some ordering schemes, our IC algorithm can greatly reduce the memory requirement as well as the number of iterations. Numerical tests on harmonic analysis for plane wave scattering from a metallic plate and a metallic sphere coated by a lossy dielectric layer show the efficiency of this method.
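To illustrate the effect of incomplete-factorization preconditioning on a Krylov solver, here is a sketch using SciPy's incomplete LU (`spilu`) as a stand-in for incomplete Cholesky on a 2-D Laplacian; the paper's complex symmetric FEM matrices and its CSR-based IC implementation are not reproduced:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# 2-D Laplacian on a 20 x 20 grid as a simple SPD test matrix.
nx = 20
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(nx, nx))
A = sp.kronsum(T, T).tocsc()
b = np.ones(A.shape[0])

# Incomplete LU with a drop tolerance, wrapped as a preconditioner.
ilu = spla.spilu(A, drop_tol=1e-3, fill_factor=5)
M = spla.LinearOperator(A.shape, ilu.solve)

count = {"plain": 0, "prec": 0}
x1, info1 = spla.bicgstab(
    A, b, callback=lambda xk: count.__setitem__("plain", count["plain"] + 1))
x2, info2 = spla.bicgstab(
    A, b, M=M, callback=lambda xk: count.__setitem__("prec", count["prec"] + 1))
print(count)  # the preconditioned solve takes far fewer iterations
```

For a real symmetric or complex symmetric matrix, a true IC factor would roughly halve the factorization storage, but the preconditioning mechanics are the same.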
Krylov subspace methods - Theory, algorithms, and applications
NASA Technical Reports Server (NTRS)
Saad, Youcef
1990-01-01
Projection methods based on Krylov subspaces for solving various types of scientific problems are reviewed. The main idea of this class of methods, when applied to a linear system Ax = b, is to generate in some manner an approximate solution to the original problem from the so-called Krylov subspace span{v, Av, A^2 v, ..., A^(m-1)v}. Thus, the original problem of size N is approximated by one of dimension m, typically much smaller than N. Krylov subspace methods have been very successful in solving linear systems and eigenvalue problems and are now becoming popular for solving nonlinear equations. The main ideas in Krylov subspace methods are shown, and their use in solving linear systems, eigenvalue problems, parabolic partial differential equations, Liapunov matrix equations, and nonlinear systems of equations is discussed.
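As a concrete instance of the projection idea just described, the sketch below builds an orthonormal basis of the Krylov subspace span{v, Av, ..., A^(m-1)v} with the Arnoldi process and replaces the N-dimensional system by a small (m+1) x m least-squares problem, a GMRES-style minimal-residual projection (matrix and sizes are illustrative):

```python
import numpy as np

def arnoldi(A, v, m):
    """Orthonormal basis V of span{v, Av, ..., A^(m-1)v} plus the
    (m+1) x m Hessenberg matrix H satisfying A V_m = V_{m+1} H."""
    n = len(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):          # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

# Project Ax = b (size N) onto the Krylov subspace (size m << N):
# minimizing ||b - A V_m y|| is a small least-squares problem in y.
rng = np.random.default_rng(0)
N, m = 500, 60
A = np.eye(N) * 3 + rng.standard_normal((N, N)) / np.sqrt(N)
b = rng.standard_normal(N)
V, H = arnoldi(A, b, m)
e1 = np.zeros(m + 1)
e1[0] = np.linalg.norm(b)
y, *_ = np.linalg.lstsq(H, e1, rcond=None)
x = V[:, :m] @ y
print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))  # small relative residual
```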
Overview of Krylov subspace methods with applications to control problems
NASA Technical Reports Server (NTRS)
Saad, Youcef
1989-01-01
An overview of projection methods based on Krylov subspaces is given, with emphasis on their application to solving matrix equations that arise in control problems. The main idea of Krylov subspace methods is to generate a basis of the Krylov subspace span{v, Av, ..., A^(m-1)v} and to seek an approximate solution to the original problem from this subspace. Thus, the original matrix problem of size N is approximated by one of dimension m, typically much smaller than N. Krylov subspace methods have been very successful in solving linear systems and eigenvalue problems and are now becoming popular for solving nonlinear equations. It is shown how they can be used to solve partial pole placement problems, Sylvester's equation, and Lyapunov's equation.
Krylov-subspace acceleration of time periodic waveform relaxation
Lumsdaine, A.
1994-12-31
In this paper the author uses Krylov-subspace techniques to accelerate the convergence of waveform relaxation applied to solving systems of first order time periodic ordinary differential equations. He considers the problem in the frequency domain and presents frequency dependent waveform GMRES (FDWGMRES), a member of a new class of frequency dependent Krylov-subspace techniques. FDWGMRES exhibits many desirable properties, including finite termination independent of the number of timesteps and, for certain problems, a convergence rate which is bounded from above by the convergence rate of GMRES applied to the static matrix problem corresponding to the linear time-invariant ODE.
An adaptation of Krylov subspace methods to path following
Walker, H.F.
1996-12-31
Krylov subspace methods at present constitute a very well known and highly developed class of iterative linear algebra methods. These have been effectively applied to nonlinear system solving through Newton-Krylov methods, in which Krylov subspace methods are used to solve the linear systems that characterize steps of Newton's method (the Newton equations). Here, we will discuss the application of Krylov subspace methods to path following problems, in which the object is to track a solution curve as a parameter varies. Path following methods are typically of predictor-corrector form, in which a point near the solution curve is "predicted" by some easy but relatively inaccurate means, and then a series of Newton-like corrector iterations is used to return approximately to the curve. The analogue of the Newton equation is underdetermined, and an additional linear condition must be specified to determine corrector steps uniquely. This is typically done by requiring that the steps be orthogonal to an approximate tangent direction. Augmenting the underdetermined system with this orthogonality condition in a straightforward way typically works well if direct linear algebra methods are used, but Krylov subspace methods are often ineffective with this approach. We will discuss recent work in which this orthogonality condition is imposed directly as a constraint on the corrector steps in a certain way. The means of doing this preserves problem conditioning, allows the use of preconditioners constructed for the fixed-parameter case, and has certain other advantages. Experiments on standard PDE continuation test problems indicate that this approach is effective.
Krylov subspace iteration for eigenvalue response matrix calculations
Roberts, J. A.; Forget, B.
2012-07-01
Recent work has revisited the eigenvalue response matrix method as an approach for reactor core analyses. In its most straightforward form, the method consists of a two-level eigenproblem: an outer Picard iteration updates the k-eigenvalue, while the inner eigenproblem imposes current continuity between coarse meshes. In this paper, several eigensolvers are evaluated for this inner problem, using several 2-D diffusion benchmarks as test cases. The results indicate that both the explicitly-restarted Arnoldi and the Krylov-Schur methods are up to an order of magnitude more efficient than power iteration. This increased efficiency makes the nested eigenvalue formulation more effective than the ILU-preconditioned Newton-Krylov formulation previously studied.
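The gap between power iteration and restarted Arnoldi reported above can be reproduced on a toy matrix: SciPy's `eigs` wraps ARPACK's implicitly restarted Arnoldi method. A symmetric nonnegative sparse test matrix is used here for simplicity; it is not one of the paper's diffusion benchmarks:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def power_iteration(A, tol=1e-10, maxit=5000):
    """Classic power iteration for the dominant eigenvalue."""
    v = np.ones(A.shape[0])
    lam = 0.0
    for k in range(maxit):
        w = A @ v
        lam_new = np.linalg.norm(w)
        v = w / lam_new
        if abs(lam_new - lam) < tol * lam_new:
            return lam_new, k + 1
        lam = lam_new
    return lam, maxit

# Symmetric nonnegative sparse matrix: the Perron eigenvalue dominates,
# so power iteration converges and can be compared against ARPACK.
B = sp.random(300, 300, density=0.05, random_state=0)
A = (B + B.T) * 0.5 + sp.diags(np.linspace(1.0, 2.0, 300))
lam_pi, its = power_iteration(A)
lam_iram = float(np.real(
    spla.eigs(A, k=1, which="LM", return_eigenvectors=False)[0]))
print(its, lam_pi, lam_iram)  # both agree on the dominant eigenvalue
```

When the dominance ratio approaches unity the iteration count of the power method blows up, while the Arnoldi iteration count grows far more slowly, which is the effect the abstract describes.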
Application of Block Krylov Subspace Spectral Methods to Maxwell's Equations
Lambers, James V.
2009-10-08
Ever since its introduction by Kane Yee over forty years ago, the finite-difference time-domain (FDTD) method has been a widely-used technique for solving the time-dependent Maxwell's equations. This paper presents an alternative approach to these equations in the case of spatially-varying electric permittivity and/or magnetic permeability, based on Krylov subspace spectral (KSS) methods. These methods have previously been applied to the variable-coefficient heat equation and wave equation, and have demonstrated high-order accuracy, as well as stability characteristic of implicit time-stepping schemes, even though KSS methods are explicit. KSS methods for scalar equations compute each Fourier coefficient of the solution using techniques developed by Gene Golub and Gerard Meurant for approximating elements of functions of matrices by Gaussian quadrature in the spectral, rather than physical, domain. We show how they can be generalized to coupled systems of equations, such as Maxwell's equations, by choosing appropriate basis functions that, while induced by this coupling, still allow efficient and robust computation of the Fourier coefficients of each spatial component of the electric and magnetic fields. We also discuss the implementation of appropriate boundary conditions for simulation on infinite computational domains, and how discontinuous coefficients can be handled.
A review of block Krylov subspace methods for multisource electromagnetic modelling
NASA Astrophysics Data System (ADS)
Puzyrev, Vladimir; Cela, José María
2015-08-01
Practical applications of controlled-source electromagnetic (EM) modelling require solutions for multiple sources at several frequencies, leading to a dramatic increase in computational cost. In this paper, we present an approach using block Krylov subspace solvers, iterative methods especially designed for problems with multiple right-hand sides (RHS). Their main advantage is the shared subspace for approximate solutions; hence, these methods are expected to converge in fewer iterations than the corresponding standard solver applied to each linear system. Block solvers also share the same preconditioner, which is constructed only once. Simultaneously computed block operations also utilize cache better, because the system matrix is accessed less frequently. In this paper, we implement two different block solvers for sparse matrices resulting from finite-difference and finite-element discretizations, discuss the computational cost of the algorithms, and study their dependence on the number of RHS given at once. The effectiveness of the proposed methods is demonstrated on two EM survey scenarios, including a large marine model. As the results of the simulations show, when powerful preconditioning is employed, block methods are faster than standard iterative techniques in terms of both iterations and time.
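A minimal illustration of the shared-subspace idea described above, using a simple block Arnoldi-Galerkin sketch rather than the paper's solvers (the dense test matrix and sizes are illustrative):

```python
import numpy as np

def block_arnoldi_solve(A, B, m):
    """Galerkin solve of A X = B for several right-hand sides at once,
    from one shared block Krylov subspace span{B, AB, ..., A^(m-1)B}."""
    Q, _ = np.linalg.qr(B)
    V = [Q]
    for _ in range(m - 1):
        W = A @ V[-1]
        for Vi in V:                     # block modified Gram-Schmidt
            W -= Vi @ (Vi.T @ W)
        Q, _ = np.linalg.qr(W)
        V.append(Q)
    Vm = np.hstack(V)                    # shared orthonormal basis
    # One small projected system serves every right-hand side column.
    Y = np.linalg.solve(Vm.T @ (A @ Vm), Vm.T @ B)
    return Vm @ Y

rng = np.random.default_rng(0)
n, s = 400, 6                            # 6 right-hand sides share the basis
A = np.eye(n) * 4 + rng.standard_normal((n, n)) / np.sqrt(n)
B = rng.standard_normal((n, s))
X = block_arnoldi_solve(A, B, m=15)
print(np.linalg.norm(A @ X - B) / np.linalg.norm(B))
```

Each block step touches the system matrix once for all six right-hand sides, which is the cache and iteration-count advantage the abstract points to.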
Domain decomposed preconditioners with Krylov subspace methods as subdomain solvers
Pernice, M.
1994-12-31
Domain decomposed preconditioners for nonsymmetric partial differential equations typically require the solution of problems on the subdomains. Most implementations employ exact solvers to obtain these solutions. Consequently, work and storage requirements for the subdomain problems grow rapidly with the size of the subdomain problems. Subdomain solves constitute the single largest computational cost of a domain decomposed preconditioner, and improving the efficiency of this phase of the computation will have a significant impact on the performance of the overall method. The small local memory available on the nodes of most message-passing multicomputers motivates consideration of the use of an iterative method for solving subdomain problems. For large-scale systems of equations that are derived from three-dimensional problems, memory considerations alone may dictate the need for using iterative methods for the subdomain problems. In addition to reduced storage requirements, use of an iterative solver on the subdomains allows flexibility in specifying the accuracy of the subdomain solutions. Substantial savings in solution time are possible if the quality of the domain decomposed preconditioner is not degraded too much by relaxing the accuracy of the subdomain solutions. While some work in this direction has been conducted for symmetric problems, similar studies for nonsymmetric problems appear not to have been pursued. This work represents a first step in this direction, and explores the effectiveness of performing subdomain solves using several transpose-free Krylov subspace methods: GMRES, transpose-free QMR, CGS, and a smoothed version of CGS. Depending on the difficulty of the subdomain problem and the convergence tolerance used, a reduction in solution time is possible in addition to the reduced memory requirements. The domain decomposed preconditioner is a Schur complement method in which the interface operators are approximated using interface probing.
Recycling Krylov subspaces for CFD applications and a new hybrid recycling solver
NASA Astrophysics Data System (ADS)
Amritkar, Amit; de Sturler, Eric; Świrydowicz, Katarzyna; Tafti, Danesh; Ahuja, Kapil
2015-12-01
We focus on robust and efficient iterative solvers for the pressure Poisson equation in incompressible Navier-Stokes problems. Preconditioned Krylov subspace methods are popular for these problems, with BiCGStab and GMRES(m) most frequently used for nonsymmetric systems. BiCGStab is popular because it has cheap iterations, but it may fail for stiff problems, especially early on when the initial guess is far from the solution. Restarted GMRES is more robust in this phase, but restarting may lead to very slow convergence. Therefore, we evaluate the rGCROT method for these systems. This method recycles a selected subspace of the search space (called the recycle space) after a restart. This generally improves the convergence drastically compared with GMRES(m). Recycling subspaces is also advantageous for subsequent linear systems, if the matrix changes slowly or is constant. However, rGCROT iterations are still expensive in memory and computation time compared with those of BiCGStab. Hence, we propose a new, hybrid approach that combines the cheap iterations of BiCGStab with the robustness of rGCROT. For the first few time steps the algorithm uses rGCROT and builds an effective recycle space, and then it recycles that space in the rBiCGStab solver. We evaluate rGCROT on a turbulent channel flow problem, and we evaluate both rGCROT and the new, hybrid combination of rGCROT and rBiCGStab on a porous medium flow problem. We see substantial performance gains for both problems.
Druskin, V.; Lee, Ping; Knizhnerman, L.
1996-12-31
There is now growing interest in using Krylov subspace approximations to compute the actions of matrix functions. The main application of this approach is the solution of ODE systems obtained after discretization of partial differential equations by the method of lines. When computing the action of the matrix inverse is relatively inexpensive, it is sometimes attractive to solve the ODE using extended Krylov subspaces, generated by the actions of both positive and negative matrix powers. Examples of such problems can be found frequently in computational electromagnetics.
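The "actions of matrix functions" idea can be sketched for the model problem u' = -Au, whose solution is exp(-tA) φ: project A onto a Krylov subspace and exponentiate only the small projected matrix. This is a generic polynomial-Krylov sketch; the extended subspaces with negative powers mentioned above are not shown, and the test matrix is illustrative:

```python
import numpy as np
from scipy.linalg import expm

def arnoldi(A, v, m):
    """Arnoldi with full orthogonalization; returns the orthonormal basis
    V (n x m) and the projected matrix H = V^T A V (m x m)."""
    n = len(v)
    V = np.zeros((n, m))
    H = np.zeros((m, m))
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(m - 1):
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    w = A @ V[:, m - 1]
    for i in range(m):
        H[i, m - 1] = V[:, i] @ w
    return V, H

# Krylov approximation of u(t) = exp(-tA) phi: exponentiate the small
# m x m matrix H instead of the large n x n matrix A.
rng = np.random.default_rng(0)
n, m, t = 300, 30, 0.1
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(np.linspace(0.0, 50.0, n)) @ Q.T   # SPD "stiffness" matrix
phi = rng.standard_normal(n)
V, H = arnoldi(A, phi, m)
u_krylov = np.linalg.norm(phi) * (V @ expm(-t * H)[:, 0])
u_exact = expm(-t * A) @ phi                        # dense reference
print(np.linalg.norm(u_krylov - u_exact))
```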
Krylov-subspace recycling via the POD-augmented conjugate-gradient method
Carlberg, Kevin Thomas; Forstall, Virginia; Tuminaro, Raymond S.
2016-01-01
This paper presents a new Krylov-subspace-recycling method for efficiently solving sequences of linear systems of equations characterized by varying right-hand sides and symmetric-positive-definite matrices. As opposed to typical truncation strategies used in recycling such as deflation, we propose a truncation method inspired by goal-oriented proper orthogonal decomposition (POD) from model reduction. This idea is based on the observation that model reduction aims to compute a low-dimensional subspace that contains an accurate solution; as such, we expect the proposed method to generate a low-dimensional subspace that is well suited for computing solutions that can satisfy inexact tolerances. In particular, we propose specific goal-oriented POD 'ingredients' that align the optimality properties of POD with the objective of Krylov-subspace recycling. To compute solutions in the resulting 'augmented' POD subspace, we propose a hybrid direct/iterative three-stage method that leverages 1) the optimal ordering of POD basis vectors, and 2) well-conditioned reduced matrices. Numerical experiments performed on solid-mechanics problems highlight the benefits of the proposed method over existing approaches for Krylov-subspace recycling.
Linear multifrequency-grey acceleration recast for preconditioned Krylov iterations
Morel, Jim E.; Yang, T.-Y. Brian; Warsa, James S.
2007-11-10
The linear multifrequency-grey acceleration (LMFGA) technique is used to accelerate the iterative convergence of multigroup thermal radiation diffusion calculations in high energy density simulations. Although it is effective and efficient in one-dimensional calculations, the LMFGA method has recently been observed to significantly degrade under certain conditions in multidimensional calculations with large discontinuities in material properties. To address this deficiency, we recast the LMFGA method in terms of a preconditioned system that is solved with a Krylov method (LMFGK). Results are presented demonstrating that the new LMFGK method always requires fewer iterations than the original LMFGA method. The reduction in iteration count increases with both the size of the time step and the inhomogeneity of the problem. However, for reasons later explained, the LMFGK method can cost more per iteration than the LMFGA method, resulting in lower but comparable efficiency in problems with small time steps and weak inhomogeneities. In problems with large time steps and strong inhomogeneities, the LMFGK method is significantly more efficient than the LMFGA method.
A subspace preconditioning algorithm for eigenvector/eigenvalue computation
Bramble, J.H.; Knyazev, A.V.; Pasciak, J.E.
1996-12-31
We consider the problem of computing a modest number of the smallest eigenvalues, along with orthogonal bases for the corresponding eigenspaces, of a symmetric positive definite matrix. In our applications, the dimension of the matrix is large and the cost of inverting it is prohibitive. In this paper, we develop an effective parallelizable technique for computing these eigenvalues and eigenvectors utilizing subspace iteration and preconditioning. Estimates are provided which show that the preconditioned method converges linearly and uniformly in the matrix dimension when used with a uniform preconditioner, under the assumption that the approximating subspace is close enough to the span of the desired eigenvectors.
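The flavor of preconditioned subspace iteration can be sketched as follows: correct the current block by a preconditioned residual, then re-extract Ritz pairs. This is a generic PINVIT-style sketch with a Jacobi preconditioner, not the authors' method or analysis; the matrix and parameters are illustrative:

```python
import numpy as np

def preconditioned_subspace_iteration(A, Tinv, p, tol=1e-8, maxit=500):
    """Subspace iteration for the p smallest eigenpairs of SPD A without
    inverting A: correct the block by the preconditioned residual
    T^{-1}(A X - X Lambda), then Rayleigh-Ritz on the new block."""
    n = A.shape[0]
    X, _ = np.linalg.qr(np.random.default_rng(0).standard_normal((n, p)))
    lam = np.zeros(p)
    for _ in range(maxit):
        AX = A @ X
        lam, S = np.linalg.eigh(X.T @ AX)   # Rayleigh-Ritz on span(X)
        X, AX = X @ S, AX @ S
        R = AX - X * lam                    # block residual
        if np.linalg.norm(R) < tol:
            break
        X, _ = np.linalg.qr(X - Tinv(R))
    return lam, X

# SPD test matrix: dominant diagonal plus weak symmetric coupling;
# the diagonal serves as the (cheap) preconditioner T.
n = 500
S = np.random.default_rng(1).standard_normal((n, n))
d = np.linspace(1.0, 100.0, n)
A = np.diag(d) + 0.01 * (S + S.T) / 2
lam, X = preconditioned_subspace_iteration(A, lambda R: R / d[:, None], p=3)
print(lam)  # approximations to the three smallest eigenvalues
```

No inverse of A appears anywhere: only products with A and applications of T^{-1}, which is the setting the abstract targets.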
A hierarchical Krylov-Bayes iterative inverse solver for MEG with physiological preconditioning
NASA Astrophysics Data System (ADS)
Calvetti, D.; Pascarella, A.; Pitolli, F.; Somersalo, E.; Vantaggi, B.
2015-12-01
The inverse problem of MEG aims at estimating electromagnetic cerebral activity from measurements of the magnetic fields outside the head. After formulating the problem within the Bayesian framework, a hierarchical conditionally Gaussian prior model is introduced, including a physiologically inspired prior model that takes into account the preferred directions of the source currents. The hyperparameter vector consists of prior variances of the dipole moments, assumed to follow a non-conjugate gamma distribution with variable scaling and shape parameters. A point estimate of both dipole moments and their variances can be computed using an iterative alternating sequential updating algorithm, which is shown to be globally convergent. The numerical solution is based on computing an approximation of the dipole moments using a Krylov subspace iterative linear solver equipped with statistically inspired preconditioning and a suitable termination rule. The shape parameters of the model are shown to control the focality, and furthermore, using an empirical Bayes argument, it is shown that the scaling parameters can be naturally adjusted to provide a statistically well justified depth sensitivity scaling. The validity of this interpretation is verified through computed numerical examples. Also, a computed example showing the applicability of the algorithm to analyze realistic time series data is presented.
Krylov subspace algorithms for computing GeneRank for the analysis of microarray data mining.
Wu, Gang; Zhang, Ying; Wei, Yimin
2010-04-01
GeneRank is a new engine technology for the analysis of microarray experiments. It combines gene expression information with a network structure derived from gene annotations or expression profile correlations. Using matrix decomposition techniques, we first give a matrix analysis of the GeneRank model. We reformulate the GeneRank vector as a linear combination of three parts in the general case when the matrix in question is non-diagonalizable. We then propose two Krylov subspace methods for computing GeneRank. Numerical experiments show that, when the GeneRank problem is very large, the new algorithms are appropriate choices.
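For orientation, GeneRank leads to a PageRank-like linear system. One common statement of the model in the literature (an assumption here, as is the hypothetical random network standing in for real microarray data) is (I - d W D^{-1}) r = (1 - d) ex, with W a symmetric adjacency matrix, D its degree diagonal, ex the nonnegative expression changes, and d a damping factor; a Krylov solver such as GMRES handles it directly:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Hypothetical gene network: symmetric 0/1 adjacency matrix W.
rng = np.random.default_rng(0)
n = 1000
W = sp.random(n, n, density=0.01, random_state=0)
W = ((W + W.T) > 0).astype(float)
deg = np.asarray(W.sum(axis=0)).ravel()
deg[deg == 0] = 1.0                      # guard isolated genes
ex = rng.random(n)                       # nonnegative expression changes
d = 0.85                                 # damping factor, as in PageRank

# M-matrix system (I - d W D^{-1}) r = (1 - d) ex, solved with GMRES.
M = sp.eye(n) - d * (W @ sp.diags(1.0 / deg))
r, info = spla.gmres(M, (1 - d) * ex)
print(info, r.min())  # info 0 means the Krylov solve converged
```

Because W D^{-1} is column-substochastic and d < 1, the system matrix is a nonsingular M-matrix and the resulting ranks are nonnegative (up to the solver tolerance).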
Generalization of the residual cutting method based on the Krylov subspace
NASA Astrophysics Data System (ADS)
Abe, Toshihiko; Sekine, Yoshihito; Kikuchi, Kazuo
2016-06-01
The residual cutting (RC) method has been reported to have superior converging characteristics in numerically solving elliptic partial differential equations. However, its application is limited to linear problems with diagonal-dominant matrices in general, for which convergence of a relaxation method such as SOR is guaranteed. In this study, we propose the generalized residual cutting (GRC) method, which is based on the Krylov subspace and applicable to general unsymmetric linear problems. Also, we perform numerical experiments with various coefficient matrices, and show that the GRC method has some desirable properties such as convergence characteristics and memory usage, in comparison to the conventional RC, BiCGSTAB and GMRES methods. (A corrigendum was issued on 22 June 2016, at the authors' request, to correct an error in Eq. (2) and Eq. (3).)
Krylov subspace iterations for the calculation of k-eigenvalues with SN transport codes
Warsa, J. S.; Wareing, T. A.; Morel, J. E.; McGhee, J. M.; Lehoucq, R. B.
2002-01-01
We apply the Implicitly Restarted Arnoldi Method (IRAM), a Krylov subspace iterative method, to the calculation of k-eigenvalues for criticality problems. We show that the method can be implemented with only modest changes to existing power iteration schemes in an SN transport code. Numerical results on three-dimensional unstructured tetrahedral meshes are shown. Although we only compare the IRAM to unaccelerated power iteration, the results indicate that the IRAM is a potentially efficient and powerful technique, especially for problems with dominance ratios approaching unity. Key Words: criticality eigenvalues, Implicitly Restarted Arnoldi Method (IRAM), deterministic transport methods
A General Algorithm for Reusing Krylov Subspace Information. I. Unsteady Navier-Stokes
NASA Technical Reports Server (NTRS)
Carpenter, Mark H.; Vuik, C.; Lucas, Peter; vanGijzen, Martin; Bijl, Hester
2010-01-01
A general algorithm is developed that reuses available information to accelerate the iterative convergence of linear systems with multiple right-hand sides Ax = b^(i), which are commonly encountered in steady or unsteady simulations of nonlinear equations. The algorithm is based on the classical GMRES algorithm with eigenvector enrichment but also includes a Galerkin projection preprocessing step and several novel Krylov subspace reuse strategies. The new approach is applied to a set of test problems, including an unsteady turbulent airfoil, and is shown in some cases to provide significant improvement in computational efficiency relative to baseline approaches.
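The Galerkin projection preprocessing step mentioned above can be sketched as follows: solutions of earlier right-hand sides span a subspace from which a cheap initial guess is projected before the Krylov solve starts. This is a generic sketch with an illustrative random sequence, not the paper's aerodynamic systems or its full reuse strategies:

```python
import numpy as np
import scipy.sparse.linalg as spla

def galerkin_guess(A, b, U):
    """Initial guess x0 = U y with (U^T A U) y = U^T b, i.e. a Galerkin
    projection of the new system onto the span of earlier solutions U."""
    y = np.linalg.solve(U.T @ (A @ U), U.T @ b)
    return U @ y

rng = np.random.default_rng(0)
n = 200
A = np.eye(n) * 2 + rng.standard_normal((n, n)) / np.sqrt(n)

b0 = rng.standard_normal(n)
solutions = []
x0 = np.zeros(n)
for i in range(3):                  # slowly varying right-hand sides b^(i)
    b = b0 + 0.01 * rng.standard_normal(n)
    if solutions:
        x0 = galerkin_guess(A, b, np.column_stack(solutions))
    x, info = spla.gmres(A, b, x0=x0, restart=30, maxiter=50)
    solutions.append(x)

r_cold = np.linalg.norm(b)                  # residual of the zero guess
r_warm = np.linalg.norm(b - A @ x0)         # residual of the projected guess
print(r_warm / r_cold)  # the recycled guess starts much closer
```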
Druskin, V.; Knizhnerman, L.
1994-12-31
The authors solve the Cauchy problem for an ODE system Au + ∂u/∂t = 0, u|_{t=0} = φ, where A is a square real nonnegative definite symmetric matrix of order N and φ is a vector from R^N. The stiffness matrix A is obtained from semi-discretization of a parabolic equation or system with time-independent coefficients. The authors are particularly interested in large stiff 3-D problems for the scalar diffusion and vectorial Maxwell's equations. First they consider an explicit method in which the solution on a whole time interval is projected on a Krylov subspace generated by A. Then they suggest another Krylov subspace with better approximating properties, using powers of an implicit transition operator. These Krylov subspace methods generate polynomial approximations for the solution of the ODE that are optimal in a spectral sense, similar to the conjugate gradient method for systems of linear equations.
3D-marine tCSEM inversion using model reduction in the Rational Krylov subspace
NASA Astrophysics Data System (ADS)
Sommer, M.; Jegen, M. D.
2014-12-01
Computationally, the most expensive part of a 3-D time-domain CSEM inversion is the computation of the Jacobian matrix in every Gauss-Newton step. Another problem is its size for large data sets. We use a model reduction method (Zaslavsky et al., 2013) that compresses the Jacobian by projecting it onto a rational Krylov subspace (RKS). It also reduces the runtime drastically compared with the most common adjoint approach, and was implemented on GPU. It depends on an analytic derivation of the implicit ansatz function, which solves Maxwell's diffusion equation in the eigenspace, giving a Jacobian that depends on the eigenpairs of the forward problem and their derivatives. The eigenpairs are approximated by Ritz pairs in the rational Krylov subspace. Determination of the derived Ritz pairs is the most time-consuming step and was fully GPU-optimized. Furthermore, the number of inversion cells is reduced by using octree meshes. The gridding allows for the incorporation of complicated survey geometries, as they are encountered in marine CSEM data sets. As a first result, the Jacobian computation is, even on a desktop, faster for realistic data sets than the most common adjoint approach on a supercomputer. We will present careful benchmarking and accuracy tests of the new method and show how it can be applied to a real marine scenario.
Investigation of continuous-time quantum walk by using Krylov subspace-Lanczos algorithm
NASA Astrophysics Data System (ADS)
Jafarizadeh, M. A.; Sufiani, R.; Salimi, S.; Jafarizadeh, S.
2007-09-01
In papers [Jafarizadeh and Salimi, Ann. Phys. 322, 1005 (2007) and J. Phys. A: Math. Gen. 39, 13295 (2006)], the amplitudes of continuous-time quantum walk (CTQW) on graphs possessing quantum decomposition (QD graphs) were calculated by a new method based on the spectral distribution associated with their adjacency matrix. Here in this paper, it is shown that the CTQW on any arbitrary graph can be investigated by the spectral analysis method, simply by using the Krylov subspace-Lanczos algorithm to generate orthonormal bases of the Hilbert space of the quantum walk isomorphic to orthogonal polynomials. Also, a new type of graph possessing generalized quantum decomposition (GQD) is introduced, which is achieved simply by relaxing some of the constraints imposed on QD graphs, and it is shown that in both QD and GQD graphs the unit vectors of strata are identical with the orthonormal basis produced by the Lanczos algorithm. Moreover, it is shown that the probability amplitude of observing the walk at a given vertex is proportional to its coefficient in the corresponding unit vector of its stratum, and it can be written in terms of the amplitude of its stratum. The capability of the Lanczos-based algorithm for evaluation of CTQW on graphs (GQD or non-QD types) has been tested by calculating the probability amplitudes of the quantum walk on some interesting finite (infinite) graphs of GQD type and finite (infinite) path graphs of non-GQD type, where the asymptotic behavior of the probability amplitudes in the limit of a large number of vertices is in agreement with the central limit theorem of [Phys. Rev. E 72, 026113 (2005)]. At the end, some applications of the method, such as implementation of quantum search algorithms, calculating the resistance between two nodes in regular networks, and applications in solid state and condensed matter physics, are discussed, where in all of them the Lanczos algorithm reduces the Hilbert space to some smaller subspaces and the problem is
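As a small illustration of the Lanczos machinery relied on above, the following NumPy sketch (illustrative code, not the authors') tridiagonalizes the adjacency matrix of a short path graph, one of the non-GQD examples mentioned in the abstract, starting from the unit vector of the initial vertex; the orthonormal basis it generates plays the role of the stratum vectors.

```python
import numpy as np

def lanczos(A, v0, m):
    """Minimal Lanczos tridiagonalization: returns an orthonormal basis V
    and the tridiagonal coefficients (alpha, beta) of the Krylov space
    span{v0, A v0, ..., A^(m-1) v0}. No reorthogonalization."""
    n = len(v0)
    V = np.zeros((n, m))
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(m):
        w = A @ V[:, j]
        alpha[j] = V[:, j] @ w
        w -= alpha[j] * V[:, j]
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    return V, alpha, beta

# Adjacency matrix of a path graph P_8 (a non-GQD example in the abstract)
n = 8
A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
e0 = np.zeros(n); e0[0] = 1.0          # walk starts at vertex 0
V, a, b = lanczos(A, e0, n)
T = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)
# the tridiagonal matrix T reproduces the spectrum of A
print(np.allclose(np.sort(np.linalg.eigvalsh(T)), np.sort(np.linalg.eigvalsh(A))))
```

Because the walk starts at a single vertex, the Krylov space generated by `e0` spans the full space for the path graph, and the Ritz values match the graph spectrum exactly.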
Radio astronomical image formation using constrained least squares and Krylov subspaces
NASA Astrophysics Data System (ADS)
Mouri Sardarabadi, Ahmad; Leshem, Amir; van der Veen, Alle-Jan
2016-04-01
Aims: Image formation for radio astronomy can be defined as estimating the spatial intensity distribution of celestial sources throughout the sky, given an array of antennas. One of the challenges with image formation is that the problem becomes ill-posed as the number of pixels becomes large. The introduction of constraints that incorporate a priori knowledge is crucial. Methods: In this paper we show that in addition to non-negativity, the magnitude of each pixel in an image is also bounded from above. Indeed, the classical "dirty image" is an upper bound, but a much tighter upper bound can be formed from the data using array processing techniques. This formulates image formation as a least squares optimization problem with inequality constraints. We propose to solve this constrained least squares problem using active set techniques, and the steps needed to implement it are described. It is shown that the least squares part of the problem can be efficiently implemented with Krylov-subspace-based techniques. We also propose a method for correcting for the possible mismatch between source positions and the pixel grid. This correction improves both the detection of sources and their estimated intensities. The performance of these algorithms is evaluated using simulations. Results: Based on parametric modeling of the astronomical data, a new imaging algorithm based on convex optimization, active sets, and Krylov-subspace-based solvers is presented. The relation between the proposed algorithm and sequential source removing techniques is explained, and it gives a better mathematical framework for analyzing existing algorithms. We show that by using the structure of the algorithm, an efficient implementation that allows massive parallelism and storage reduction is feasible. Simulations are used to compare the new algorithm to classical CLEAN. Results illustrate that for a discrete point model, the proposed algorithm is capable of detecting the correct number of sources
Chen, G.; Chacón, L.; Leibs, C.A.; Knoll, D.A.; Taitano, W.
2014-02-01
A recent proof-of-principle study proposes an energy- and charge-conserving, nonlinearly implicit electrostatic particle-in-cell (PIC) algorithm in one dimension [9]. The algorithm in the reference employs an unpreconditioned Jacobian-free Newton–Krylov method, which ensures nonlinear convergence at every timestep (resolving the dynamical timescale of interest). Kinetic enslavement, which is one key component of the algorithm, not only enables fully implicit PIC as a practical approach, but also allows preconditioning the kinetic solver with a fluid approximation. This study proposes such a preconditioner, in which the linearized moment equations are closed with moments computed from particles. Effective acceleration of the linear GMRES solve is demonstrated, on both uniform and non-uniform meshes. The algorithm performance is largely insensitive to the electron–ion mass ratio. Numerical experiments are performed on a 1D multi-scale ion acoustic wave test problem.
NASA Astrophysics Data System (ADS)
Kuprov, Ilya
2008-11-01
We extend the recently proposed state-space restriction (SSR) technique for quantum spin dynamics simulations [Kuprov et al., J. Magn. Reson. 189 (2007) 241-250] to include on-the-fly detection and elimination of unpopulated dimensions from the system density matrix. Further improvements in spin dynamics simulation speed, frequently by several orders of magnitude, are demonstrated. The proposed zero track elimination (ZTE) procedure is computationally inexpensive, reversible, numerically stable and easy to add to any existing simulation code. We demonstrate that it belongs to the same family of Krylov subspace techniques as the well-known Lanczos basis pruning procedure. The combined SSR + ZTE algorithm is recommended for simulations of NMR, EPR and Spin Chemistry experiments on systems containing between 10 and 10^4 coupled spins.
NASA Astrophysics Data System (ADS)
Recuero, Antonio M.; Escalona, José L.
2013-09-01
This paper presents a procedure that makes use of a particular formulation based on the trajectory coordinate system (TCS) approach, which is specific to ground vehicles, to describe the track deformation by means of a suitable set of mode shapes. The inertia terms of the track elastic displacements are derived using the TCS arc length to couple the system dynamics. The selection of the track modes of deformation is carried out from a finite element model by using Krylov subspaces as the model-order reduction technique. The modes of deformation move along the track fixed to the TCS using the moving modes method (MMM), avoiding the issue concerning the spatial convergence of the load (wheels) on the track and preserving their vertical frequency content, whose accuracy can be chosen beforehand. An unsuspended wheelset with an induced hunting motion moving on flexible and rigid tangent tracks and a vehicle model are simulated using rail defects as excitation sources, such that the performance of this procedure using a fully 3D contact algorithm is shown and analyzed.
ODE System Solver W. Krylov Iteration & Rootfinding
1991-09-09
LSODKR is a new initial value ODE solver for stiff and nonstiff systems. It is a variant of the LSODPK and LSODE solvers, intended mainly for large stiff systems. The main differences between LSODKR and LSODE are the following: (a) for stiff systems, LSODKR uses a corrector iteration composed of Newton iteration and one of four preconditioned Krylov subspace iteration methods, with the user supplying routines for the preconditioning operations; (b) within the corrector iteration, LSODKR does automatic switching between functional (fixpoint) iteration and modified Newton iteration; (c) LSODKR includes the ability to find roots of given functions of the solution during the integration.
Luanjing Guo; Chuan Lu; Hai Huang; Derek R. Gaston
2012-06-01
Systems of multicomponent reactive transport in porous media that are large, highly nonlinear, and tightly coupled due to complex nonlinear reactions and strong solution-media interactions are often described by a system of coupled nonlinear partial differential algebraic equations (PDAEs). A preconditioned Jacobian-Free Newton-Krylov (JFNK) solution approach is applied to solve the PDAEs in a fully coupled, fully implicit manner. The advantage of the JFNK method is that it avoids explicitly computing and storing the Jacobian matrix during Newton nonlinear iterations, for computational efficiency. This solution approach is also enhanced by physics-based block preconditioning and a multigrid algorithm for efficient inversion of the preconditioners. Based on this solution approach, we have developed a reactive transport simulator named RAT. Numerical results are presented to demonstrate the efficiency and massive scalability of the simulator for reactive transport problems involving strong solution-mineral interactions and fast kinetics. It has been applied to study the highly nonlinearly coupled reactive transport system of a promising in situ environmental remediation that involves urea hydrolysis and calcium carbonate precipitation.
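The Jacobian-free mechanism described above can be sketched in a few lines. In the sketch below (a minimal illustration; the two-equation test system, the tolerances, and the absence of any physics-based preconditioner are all assumptions of the example, not the paper's setup), the Jacobian-vector product needed by GMRES is replaced by a directional finite difference of the residual, so the Jacobian is never formed or stored.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def jfnk(F, x0, tol=1e-10, max_newton=20, eps=1e-7):
    """Jacobian-free Newton-Krylov: each Newton step solves J(x) dx = -F(x)
    with GMRES, where J(x) v is approximated by a finite difference of F,
    so the Jacobian matrix is never formed or stored."""
    x = x0.copy()
    for _ in range(max_newton):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        def Jv(v):
            return (F(x + eps * v) - Fx) / eps   # directional difference
        J = LinearOperator((len(x), len(x)), matvec=Jv, dtype=float)
        dx, _ = gmres(J, -Fx, atol=1e-12)
        x = x + dx
    return x

# hypothetical two-equation test problem (not from the paper); root at (1, 1)
F = lambda x: np.array([x[0]**2 + x[1]**2 - 2.0, x[0] - x[1]])
x = jfnk(F, np.array([1.5, 0.5]))
print(np.allclose(x, [1.0, 1.0]))
```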
HyeongKae Park; Robert R. Nourgaliev; Richard C. Martineau; Dana A. Knoll
2008-09-01
We present high-order accurate spatiotemporal discretization of all-speed flow solvers using Jacobian-free Newton Krylov framework. One of the key developments in this work is the physics-based preconditioner for the all-speed flow, which makes use of traditional semi-implicit schemes. The physics-based preconditioner is developed in the primitive variable form, which allows a straightforward separation of physical phenomena. Numerical examples demonstrate that the developed preconditioner effectively reduces the number of the Krylov iterations, and the efficiency is independent of the Mach number and mesh sizes under a fixed CFL condition.
Luanjing Guo; Hai Huang; Derek Gaston; Cody Permann; David Andrs; George Redden; Chuan Lu; Don Fox; Yoshiko Fujita
2013-03-01
Modeling large multicomponent reactive transport systems in porous media is particularly challenging when the governing partial differential algebraic equations (PDAEs) are highly nonlinear and tightly coupled due to complex nonlinear reactions and strong solution-media interactions. Here we present a preconditioned Jacobian-Free Newton-Krylov (JFNK) solution approach to solve the governing PDAEs in a fully coupled and fully implicit manner. A well-known advantage of the JFNK method is that it does not require explicitly computing and storing the Jacobian matrix during Newton nonlinear iterations. Our approach further enhances the JFNK method by utilizing physics-based, block preconditioning and a multigrid algorithm for efficient inversion of the preconditioner. This preconditioning strategy accounts for self- and optionally, cross-coupling between primary variables using diagonal and off-diagonal blocks of an approximate Jacobian, respectively. Numerical results are presented demonstrating the efficiency and massive scalability of the solution strategy for reactive transport problems involving strong solution-mineral interactions and fast kinetics. We found that the physics-based, block preconditioner significantly decreases the number of linear iterations, directly reducing computational cost; and the strongly scalable algebraic multigrid algorithm for approximate inversion of the preconditioner leads to excellent parallel scaling performance.
Starke, G.
1994-12-31
For nonselfadjoint elliptic boundary value problems which are preconditioned by a substructuring method, i.e., nonoverlapping domain decomposition, the author introduces and studies the concept of subspace orthogonalization. In subspace orthogonalization variants of Krylov methods the computation of inner products and vector updates, and the storage of basis elements is restricted to a (presumably small) subspace, in this case the edge and vertex unknowns with respect to the partitioning into subdomains. The author investigates subspace orthogonalization for two specific iterative algorithms, GMRES and the full orthogonalization method (FOM). This is intended to eliminate certain drawbacks of the Arnoldi-based Krylov subspace methods mentioned above. Above all, the length of the Arnoldi recurrences grows linearly with the iteration index which is therefore restricted to the number of basis elements that can be held in memory. Restarts become necessary and this often results in much slower convergence. The subspace orthogonalization methods, in contrast, require the storage of only the edge and vertex unknowns of each basis element which means that one can iterate much longer before restarts become necessary. Moreover, the computation of inner products is also restricted to the edge and vertex points which avoids the disturbance of the computational flow associated with the solution of subdomain problems. The author views subspace orthogonalization as an alternative to restarting or truncating Krylov subspace methods for nonsymmetric linear systems of equations. Instead of shortening the recurrences, one restricts them to a subset of the unknowns which has to be carefully chosen in order to be able to extend this partial solution to the entire space. The author discusses the convergence properties of these iteration schemes and its advantages compared to restarted or truncated versions of Krylov methods applied to the full preconditioned system.
Globally convergent techniques in nonlinear Newton-Krylov
NASA Technical Reports Server (NTRS)
Brown, Peter N.; Saad, Youcef
1989-01-01
Some convergence theory is presented for nonlinear Krylov subspace methods. The basic idea of these methods is to use variants of Newton's iteration in conjunction with a Krylov subspace method for solving the Jacobian linear systems. These methods are variants of inexact Newton methods where the approximate Newton direction is taken from a subspace of small dimensions. The main focus is to analyze these methods when they are combined with global strategies such as linesearch techniques and model trust region algorithms. Most of the convergence results are formulated for projection onto general subspaces rather than just Krylov subspaces.
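A minimal sketch of the combination analyzed here, a Krylov-computed Newton direction safeguarded by a backtracking linesearch on the residual norm (the test problem and the sufficient-decrease constant are invented for illustration, not taken from the paper):

```python
import numpy as np
from scipy.sparse.linalg import gmres

def newton_gmres_linesearch(F, J, x0, tol=1e-10, max_iter=50):
    """Inexact Newton: direction from GMRES on J(x) d = -F(x), then a
    backtracking linesearch on ||F|| to enforce global convergence."""
    x = x0.copy()
    for _ in range(max_iter):
        Fx = F(x)
        nFx = np.linalg.norm(Fx)
        if nFx < tol:
            break
        d, _ = gmres(J(x), -Fx, atol=1e-12)
        t = 1.0
        # backtrack until sufficient decrease of the residual norm
        while np.linalg.norm(F(x + t * d)) > (1 - 1e-4 * t) * nFx and t > 1e-10:
            t *= 0.5
        x = x + t * d
    return x

# Toy system F(x) = [atan(x0); x1^3 + x1 - 2]: plain Newton from x0 = 3
# overshoots on the arctangent component; the linesearch damps the step.
F = lambda x: np.array([np.arctan(x[0]), x[1]**3 + x[1] - 2.0])
J = lambda x: np.array([[1.0 / (1.0 + x[0]**2), 0.0],
                        [0.0, 3.0 * x[1]**2 + 1.0]])
x = newton_gmres_linesearch(F, J, np.array([3.0, 0.0]))
print(np.allclose(x, [0.0, 1.0]))
```

Without the linesearch, the arctangent component diverges from this starting point; with it, the damped steps recover the convergence region and the iteration then converges quadratically.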
Implicit Newton-Krylov methods for modeling blast furnace stoves
Howse, J.W.; Hansen, G.A.; Cagliostro, D.J.; Muske, K.R.
1998-03-01
In this paper the authors discuss the use of an implicit Newton-Krylov method to solve a set of partial differential equations representing a physical model of a blast furnace stove. The blast furnace stove is an integral part of the iron making process in the steel industry. These stoves are used to heat air which is then used in the blast furnace to chemically reduce iron ore to iron metal. The solution technique used to solve the discrete representations of the model and control PDEs must be robust to linear systems with disparate eigenvalues, and must converge rapidly without using tuning parameters. The disparity in eigenvalues is created by the different time scales for convection in the gas and conduction in the brick, combined with a difference between the scaling of the model and control PDEs. A preconditioned implicit Newton-Krylov solution technique was employed. The procedure employs Newton's method, where the update to the current solution at each stage is computed by solving a linear system. This linear system is obtained by linearizing the discrete approximation to the PDEs, using a numerical approximation for the Jacobian of the discretized system. This linear system is then solved for the needed update using a preconditioned Krylov subspace projection method.
McHugh, P.R.
1995-10-01
Fully coupled, Newton-Krylov algorithms are investigated for solving strongly coupled, nonlinear systems of partial differential equations arising in the field of computational fluid dynamics. Primitive variable forms of the steady incompressible and compressible Navier-Stokes and energy equations that describe the flow of a laminar Newtonian fluid in two dimensions are specifically considered. Numerical solutions are obtained by first integrating over discrete finite volumes that compose the computational mesh. The resulting system of nonlinear algebraic equations is linearized using Newton's method. Preconditioned Krylov subspace based iterative algorithms then solve these linear systems on each Newton iteration. Selected Krylov algorithms include the Arnoldi-based Generalized Minimal RESidual (GMRES) algorithm, and the Lanczos-based Conjugate Gradients Squared (CGS), Bi-CGSTAB, and Transpose-Free Quasi-Minimal Residual (TFQMR) algorithms. Both Incomplete Lower-Upper (ILU) factorization and domain-based additive and multiplicative Schwarz preconditioning strategies are studied. Numerical techniques such as mesh sequencing, adaptive damping, pseudo-transient relaxation, and parameter continuation are used to improve the solution efficiency, while algorithm implementation is simplified using a numerical Jacobian evaluation. The capabilities of standard Newton-Krylov algorithms are demonstrated via solutions to both incompressible and compressible flow problems. Incompressible flow problems include natural convection in an enclosed cavity, and mixed/forced convection past a backward facing step.
Combined incomplete LU and strongly implicit procedure preconditioning
Meese, E.A.
1996-12-31
For the solution of large sparse linear systems of equations, the Krylov-subspace methods have gained great merit. Their efficiency is, however, largely dependent upon preconditioning of the equation system. A family of matrix factorisations often used for preconditioning is obtained from a truncated Gaussian elimination, ILU(p). Less common, supposedly due to its restriction to certain sparsity patterns, are factorisations generated by the strongly implicit procedure (SIP). The ideas from ILU(p) and SIP are used in this paper to construct a generalized strongly implicit procedure, applicable to matrices with any sparsity pattern. The new algorithm has been run on some test equations, and efficiency improvements over ILU(p) were found.
Krylov subspace acceleration of waveform relaxation
Lumsdaine, A.; Wu, Deyun
1996-12-31
Standard solution methods for numerically solving time-dependent problems typically begin by discretizing the problem on a uniform time grid and then sequentially solving for successive time points. The initial time discretization imposes a serialization to the solution process and limits parallel speedup to the speedup available from parallelizing the problem at any given time point. This bottleneck can be circumvented by the use of waveform methods in which multiple time-points of the different components of the solution are computed independently. With the waveform approach, a problem is first spatially decomposed and distributed among the processors of a parallel machine. Each processor then solves its own time-dependent subsystem over the entire interval of interest using previous iterates from other processors as inputs. Synchronization and communication between processors take place infrequently, and communication consists of large packets of information - discretized functions of time (i.e., waveforms).
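The waveform idea can be illustrated on a toy coupled pair of ODEs (a hypothetical example, not from the paper): each component is integrated over the entire time interval using the other component's previous waveform, and the two integrations only exchange data between sweeps.

```python
import numpy as np

# Jacobi waveform relaxation for the coupled pair x' = -x + y, y' = -y + x,
# x(0) = 1, y(0) = 0, on [0, T]. Each equation is integrated over the WHOLE
# interval using the other component's previous iterate ("waveform") as
# input; in a parallel setting the two integrations would run on different
# processors and exchange waveforms only between sweeps.
T, n = 1.0, 2000
t = np.linspace(0.0, T, n + 1)
h = T / n

def euler(forcing, u0):
    """Explicit Euler for u' = -u + forcing(t), forcing given as samples."""
    u = np.empty(n + 1)
    u[0] = u0
    for i in range(n):
        u[i + 1] = u[i] + h * (-u[i] + forcing[i])
    return u

x, y = np.ones(n + 1), np.zeros(n + 1)   # initial guess waveforms
for sweep in range(12):                  # waveform (Picard-type) sweeps
    x, y = euler(y, 1.0), euler(x, 0.0)  # both use the previous iterates

exact = 0.5 * (1.0 + np.exp(-2.0 * t))   # closed-form x(t) for this pair
print(np.max(np.abs(x - exact)) < 5e-3)
```

On a bounded interval the waveform iteration converges superlinearly, so after a dozen sweeps the remaining error is dominated by the Euler discretization.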
Application of Krylov exponential propagation to fluid dynamics equations
NASA Technical Reports Server (NTRS)
Saad, Y.; Semeraro, B. D.
1991-01-01
This paper presents an application of matrix exponentiation via Krylov subspace projection, to the solution of fluid dynamics problems. The main idea is to approximate the operation exp(A)v by means of a projection-like process onto a Krylov subspace. This results in a computation of an exponential matrix vector product similar to the one above but of a much smaller size. Time integration schemes can then be devised to exploit this basic computational kernel. The motivation of this approach is to provide time-integration schemes that are essentially of an explicit nature but which have good stability properties.
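The basic kernel exp(A)v can be sketched with an Arnoldi projection (an illustrative NumPy/SciPy sketch, not the authors' implementation): build an orthonormal basis V_m of the Krylov subspace together with the small Hessenberg matrix H_m, then approximate exp(A)v by ||v|| V_m exp(H_m) e_1, so only the small matrix exponential is ever computed.

```python
import numpy as np
from scipy.linalg import expm

def expm_krylov(A, v, m):
    """Approximate exp(A) @ v by Arnoldi projection onto an m-dimensional
    Krylov subspace: exp(A) v ~= ||v|| * V_m @ expm(H_m) @ e1."""
    n = len(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(v)
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):                # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] > 1e-12:
            V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m); e1[0] = 1.0
    return beta * V[:, :m] @ (expm(H[:m, :m]) @ e1)

# 1D diffusion operator (tridiagonal Laplacian); m = 20 << n = 200
n = 200
A = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
rng = np.random.default_rng(0)
v = rng.standard_normal(n)
err = np.linalg.norm(expm_krylov(A, v, 20) - expm(A) @ v)
print(err < 1e-8)
```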
NASA Astrophysics Data System (ADS)
Jia, Jinhong; Wang, Hong
2015-10-01
Numerical methods for fractional differential equations generate full stiffness matrices, which were traditionally solved via Gaussian type direct solvers that require O(N^3) computational work and O(N^2) memory to store, where N is the number of spatial grid points in the discretization. We develop a preconditioned fast Krylov subspace iterative method for the efficient and faithful solution of finite volume schemes defined on a locally refined composite mesh for fractional differential equations to resolve boundary layers of the solutions. Numerical results are presented to show the utility of the method.
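The enabling ingredient of such fast Krylov solvers is a fast matrix-vector product: the dense stiffness matrices of many fractional-diffusion discretizations are Toeplitz-like, so the standard circulant-embedding FFT matvec brings each Krylov iteration down to O(N log N). A minimal sketch of that trick (illustrative, not the authors' scheme):

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    """O(N log N) product T @ x for a Toeplitz matrix with first column c
    and first row r, by embedding T in a 2N-point circulant and using FFT.
    This fast matvec is what makes Krylov iterations on the dense stiffness
    matrices of fractional-diffusion schemes affordable."""
    n = len(x)
    # first column of the embedding circulant: [c, 0, reversed tail of r]
    col = np.concatenate([c, [0.0], r[:0:-1]])
    y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(x, 2 * n))
    return y[:n].real

# check against a dense Toeplitz product
n = 64
rng = np.random.default_rng(1)
c, r = rng.standard_normal(n), rng.standard_normal(n)
r[0] = c[0]                    # consistent diagonal entry
T = np.array([[c[i - j] if i >= j else r[j - i] for j in range(n)]
              for i in range(n)])
x = rng.standard_normal(n)
print(np.allclose(toeplitz_matvec(c, r, x), T @ x))
```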
Approximate inverse preconditioning of iterative methods for nonsymmetric linear systems
Benzi, M.; Tuma, M.
1996-12-31
A method for computing an incomplete factorization of the inverse of a nonsymmetric matrix A is presented. The resulting factorized sparse approximate inverse is used as a preconditioner in the iterative solution of Ax = b by Krylov subspace methods.
Portable, parallel, reusable Krylov space codes
Smith, B.; Gropp, W.
1994-12-31
Krylov space accelerators are an important component of many algorithms for the iterative solution of linear systems. Each Krylov space method has its own particular advantages and disadvantages, therefore it is desirable to have a variety of them available, all with an identical, easy to use interface. A common complaint application programmers have with available software libraries for the iterative solution of linear systems is that they require the programmer to use the data structures provided by the library. The library is not able to work with the data structures of the application code. Hence, application programmers find themselves constantly recoding the Krylov space algorithms. The Krylov space package (KSP) is a data-structure-neutral implementation of a variety of Krylov space methods including preconditioned conjugate gradient, GMRES, BiCG-Stab, transpose-free QMR and CGS. Unlike all other software libraries for linear systems that the authors are aware of, KSP will work with any application code's data structures, in Fortran or C. Due to its data-structure-neutral design, KSP runs unchanged on both sequential and parallel machines. KSP has been tested on workstations, the Intel i860 and Paragon, Thinking Machines CM-5 and the IBM SP1.
Block-Krylov component synthesis method for structural model reduction
NASA Technical Reports Server (NTRS)
Craig, Roy R., Jr.; Hale, Arthur L.
1988-01-01
A new analytical method is presented for generating component shape vectors, or Ritz vectors, for use in component synthesis. Based on the concept of a block-Krylov subspace, easily derived recurrence relations generate blocks of Ritz vectors for each component. The subspace spanned by the Ritz vectors is called a block-Krylov subspace. The synthesis uses the new Ritz vectors rather than component normal modes to reduce the order of large, finite-element component models. An advantage of the Ritz vectors is that they involve significantly less computation than component normal modes. Both 'free-interface' and 'fixed-interface' component models are derived. They yield block-Krylov formulations paralleling the concepts of free-interface and fixed-interface component modal synthesis. Additionally, block-Krylov reduced-order component models are shown to have special disturbability/observability properties. Consequently, the method is attractive in active structural control applications, such as large space structures. The new fixed-interface methodology is demonstrated by a numerical example. The accuracy is found to be comparable to that of fixed-interface component modal synthesis.
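The block recurrence idea can be sketched as follows (an illustrative toy with invented matrices and parameters, not the paper's exact formulation): starting from a load block B, blocks of Ritz vectors are generated by repeatedly applying K^{-1}M, orthonormalized, and used to project the stiffness and mass matrices onto a small reduced model whose lowest eigenvalues track those of the full model.

```python
import numpy as np

# Block-Krylov model reduction sketch for the pencil (K, M): a spring-chain
# stiffness K with a lumped unit mass matrix M (both assumptions for the
# example). The reduced model keeps only the span of a few Ritz blocks.
rng = np.random.default_rng(2)
n, b, nblocks = 100, 2, 8
M = np.eye(n)                                   # lumped unit mass (assumption)
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # spring-chain stiffness
B = rng.standard_normal((n, b))                 # load/interface block

blocks = [np.linalg.solve(K, B)]                # static response block
for _ in range(nblocks - 1):
    # block-Krylov recurrence: Q_{j+1} = K^{-1} M Q_j
    blocks.append(np.linalg.solve(K, M @ blocks[-1]))
V, _ = np.linalg.qr(np.hstack(blocks))          # orthonormalized Ritz basis

Kr, Mr = V.T @ K @ V, V.T @ M @ V               # reduced-order component model
full = np.sort(np.linalg.eigvalsh(K))[:3]       # three lowest full eigenvalues
red = np.sort(np.linalg.eigvals(np.linalg.solve(Mr, Kr)).real)[:3]
print(np.allclose(full, red, rtol=1e-2))
```

Because the recurrence applies K^{-1}, the generated subspace is rich in the low-frequency modes that dominate structural response, which is why a 16-vector basis reproduces the lowest eigenvalues of a 100-degree-of-freedom chain.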
Schwarz Preconditioners for Krylov Methods: Theory and Practice
Szyld, Daniel B.
2013-05-10
Several numerical methods were produced and analyzed. The main thrust of the work relates to inexact Krylov subspace methods for the solution of linear systems of equations arising from the discretization of partial differential equations. These are iterative methods, i.e., an approximation is obtained and improved at each step. Usually, a matrix-vector product is needed at each iteration. In the inexact methods, this product (or the application of a preconditioner) can be done inexactly. Schwarz methods, based on domain decompositions, are excellent preconditioners for these systems. We contributed towards their understanding from an algebraic point of view, developed new ones, and studied their performance in the inexact setting. We also worked on combinatorial problems to help define the algebraic partition of the domains, with the needed overlap, as well as PDE-constrained optimization using the above-mentioned inexact Krylov subspace methods.
Newton-Krylov methods applied to nonequilibrium radiation diffusion
Knoll, D.A.; Rider, W.J.; Olsen, G.L.
1998-03-10
The authors present results of applying a matrix-free Newton-Krylov method to a nonequilibrium radiation diffusion problem. Here, there is no use of operator splitting, and Newton's method is used to converge the nonlinearities within a time step. Since the nonlinear residual is formed, it is used to monitor convergence. It is demonstrated that a simple Picard-based linearization produces a sufficient preconditioning matrix for the Krylov method, thus alleviating the need to form or store a Jacobian matrix for Newton's method. They discuss the possibility that the Newton-Krylov approach may allow larger time steps, without loss of accuracy, as compared to an operator split approach where nonlinearities are not converged within a time step.
Accelerating molecular property calculations with nonorthonormal Krylov space methods
NASA Astrophysics Data System (ADS)
Furche, Filipp; Krull, Brandon T.; Nguyen, Brian D.; Kwon, Jake
2016-05-01
We formulate Krylov space methods for large eigenvalue problems and linear equation systems that take advantage of decreasing residual norms to reduce the cost of matrix-vector multiplication. The residuals are used as subspace basis without prior orthonormalization, which leads to generalized eigenvalue problems or linear equation systems on the Krylov space. These nonorthonormal Krylov space (nKs) algorithms are favorable for large matrices with irregular sparsity patterns whose elements are computed on the fly, because fewer operations are necessary as the residual norm decreases as compared to the conventional method, while errors in the desired eigenpairs and solution vectors remain small. We consider real symmetric and symplectic eigenvalue problems as well as linear equation systems and Sylvester equations as they appear in configuration interaction and response theory. The nKs method can be implemented in existing electronic structure codes with minor modifications and yields speed-ups of 1.2-1.8 in typical time-dependent Hartree-Fock and density functional applications without accuracy loss. The algorithm can compute entire linear subspaces simultaneously which benefits electronic spectra and force constant calculations requiring many eigenpairs or solution vectors. The nKs approach is related to difference density methods in electronic ground state calculations and particularly efficient for integral direct computations of exchange-type contractions. By combination with resolution-of-the-identity methods for Coulomb contractions, three- to fivefold speed-ups of hybrid time-dependent density functional excited state and response calculations are achieved.
Lattice QCD computations: Recent progress with modern Krylov subspace methods
Frommer, A.
1996-12-31
Quantum chromodynamics (QCD) is the fundamental theory of the strong interaction of matter. In order to compare the theory with results from experimental physics, the theory has to be reformulated as a discrete problem of lattice gauge theory using stochastic simulations. The computational challenge consists in solving several hundreds of very large linear systems with several right hand sides. A considerable part of the world`s supercomputer time is spent in such QCD calculations. This paper presents results on solving systems for the Wilson fermions. Recent progress is reviewed on algorithms obtained in cooperation with partners from theoretical physics.
A multigrid Newton-Krylov method for flux-limited radiation diffusion
Rider, W.J.; Knoll, D.A.; Olson, G.L.
1998-09-01
The authors focus on the integration of radiation diffusion including flux-limited diffusion coefficients. The nonlinear integration is accomplished with a Newton-Krylov method preconditioned with a multigrid Picard linearization of the governing equations. They investigate the efficiency of the linear and nonlinear iterative techniques.
Conformal mapping and convergence of Krylov iterations
Driscoll, T.A.; Trefethen, L.N.
1994-12-31
Connections between conformal mapping and matrix iterations have been known for many years. The idea underlying these connections is as follows. Suppose the spectrum of a matrix or operator A is contained in a Jordan region E in the complex plane with 0 ∉ E. Let φ(z) denote a conformal map of the exterior of E onto the exterior of the unit disk, with φ(∞) = ∞. Then 1/|φ(0)| is an upper bound for the optimal asymptotic convergence factor of any Krylov subspace iteration. This idea can be made precise in various ways, depending on the matrix iterations, on whether A is finite or infinite dimensional, and on what bounds are assumed on the non-normality of A. This paper explores these connections for a variety of matrix examples, making use of a new MATLAB Schwarz-Christoffel Mapping Toolbox developed by the first author. Unlike the earlier Fortran Schwarz-Christoffel package SCPACK, the new toolbox computes exterior as well as interior Schwarz-Christoffel maps, making it easy to experiment with spectra that are not necessarily symmetric about an axis.
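For a spectrum enclosed by a disk the exterior map is available in closed form, so the bound 1/|φ(0)| can be evaluated directly. A small sketch (disk parameters chosen arbitrarily for illustration):

```python
import numpy as np

# Exterior conformal map for a disk E = {z : |z - c| <= r} with 0 outside E (r < c).
c, r = 2.0, 1.0
phi = lambda z: (z - c) / r   # maps the exterior of E onto the exterior of the
                              # unit disk, with phi(inf) = inf

# Predicted optimal asymptotic convergence factor of a Krylov iteration: 1/|phi(0)|.
rho = 1.0 / abs(phi(0.0))     # = r / c = 0.5 for this disk

# Sanity check: the boundary of E maps onto the unit circle.
boundary = c + r * np.exp(2j * np.pi * np.linspace(0, 1, 8, endpoint=False))
on_unit_circle = np.allclose(np.abs(phi(boundary)), 1.0)
```

For non-circular regions E no closed form exists in general, which is where the Schwarz-Christoffel toolbox described in the abstract comes in.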
Krylov methods for compressible flows
NASA Technical Reports Server (NTRS)
Tidriri, M. D.
1995-01-01
We investigate the application of Krylov methods to compressible flows, and the effect of implicit boundary conditions on the implicit solution of nonlinear problems. Two defect-correction procedures, namely approximate factorization (AF) for structured grids and ILU/GMRES for general grids, are considered. Also considered are matrix-free Newton-Krylov methods, which we combine with mixed discretization schemes in the implicitly defined Jacobian and its preconditioner. Numerical experiments that show the performance of our approaches are then presented.
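The matrix-free ingredient of such Newton-Krylov methods is the directional-difference Jacobian action, Jv ≈ (F(u+εv) − F(u))/ε, which lets a Krylov solver run without ever forming the Jacobian. A minimal SciPy sketch on a hypothetical componentwise nonlinear system (not a flow solver):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

# Stand-in nonlinear system F(u) = 0; the true Jacobian is diagonal, 3u^2 + 2.
def F(u):
    return u**3 + 2.0 * u - 1.0

n = 50
u = np.zeros(n)
for _ in range(20):                      # Newton iterations
    Fu = F(u)
    if np.linalg.norm(Fu) < 1e-10:
        break
    eps = 1e-7
    # Matrix-free Jacobian action via directional differencing.
    J = LinearOperator((n, n), matvec=lambda v: (F(u + eps * v) - Fu) / eps)
    du, info = gmres(J, -Fu)             # Krylov solve for the Newton correction
    u = u + du

residual = np.linalg.norm(F(u))          # each component converges to the root of
                                         # u^3 + 2u - 1 = 0, about 0.4534
```

The mixed-discretization idea in the abstract amounts to using one (cheaper, more dissipative) discretization inside the preconditioner and another in the matrix-free Jacobian action.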
Application of nonlinear Krylov acceleration to radiative transfer problems
Till, A. T.; Adams, M. L.; Morel, J. E.
2013-07-01
The iterative solution technique used for radiative transfer is normally nested, with outer thermal iterations and inner transport iterations. We implement a nonlinear Krylov acceleration (NKA) method in the PDT code for radiative transfer problems that breaks nesting, resulting in more thermal iterations but significantly fewer total inner transport iterations. Using the metric of total inner transport iterations, we investigate a crooked-pipe-like problem and a pseudo-shock-tube problem. Using only sweep preconditioning, we compare NKA against a typical inner/outer method employing GMRES/Newton and find NKA to be comparable or superior. Finally, we demonstrate the efficacy of applying diffusion-based preconditioning to grey problems in conjunction with NKA.
Harris, D B
2006-07-11
Broadband subspace detectors are introduced for seismological applications that require the detection of repetitive sources that produce similar, yet significantly variable seismic signals. Like correlation detectors, of which they are a generalization, subspace detectors often permit remarkably sensitive detection of small events. The subspace detector derives its name from the fact that it projects a sliding window of data drawn from a continuous stream onto a vector signal subspace spanning the collection of signals expected to be generated by a particular source. Empirical procedures are presented for designing subspaces from clusters of events characterizing a source. Furthermore, a solution is presented for the problem of selecting the dimension of the subspace to maximize the probability of detecting repetitive events at a fixed false alarm rate. An example illustrates subspace design and detection using events in the 2002 San Ramon, California earthquake swarm.
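The sliding-window projection that gives the subspace detector its name reduces to a simple statistic: the fraction of windowed energy captured by an orthonormal basis of the signal subspace. A small NumPy sketch with hypothetical templates and noise levels (not the report's empirical design procedure):

```python
import numpy as np

rng = np.random.default_rng(1)

# Signal subspace of dimension d = 2: two hypothetical template waveforms of length w.
w, d = 64, 2
t = np.arange(w)
templates = np.stack([np.sin(2 * np.pi * 3 * t / w), np.sin(2 * np.pi * 5 * t / w)])
U, _ = np.linalg.qr(templates.T)      # w x d orthonormal basis of the signal subspace

# Continuous stream: background noise with one embedded event (a mix of the templates).
stream = 0.1 * rng.standard_normal(1000)
onset = 400
stream[onset:onset + w] += 0.7 * templates[0] + 0.5 * templates[1]

# Detection statistic: energy of the window's projection onto the subspace,
# normalized by the window's total energy (between 0 and 1).
stat = np.empty(len(stream) - w + 1)
for i in range(len(stat)):
    x = stream[i:i + w]
    stat[i] = np.sum((U.T @ x) ** 2) / np.sum(x ** 2)

detected_onset = int(np.argmax(stat))  # peaks at the embedded event
```

Thresholding `stat` at a level set by the desired false alarm rate turns this into the detector; the dimension-selection problem the abstract mentions is choosing d so the basis spans the source's variability without admitting noise.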
Improvements in Block-Krylov Ritz Vectors and the Boundary Flexibility Method of Component Synthesis
NASA Technical Reports Server (NTRS)
Carney, Kelly Scott
1997-01-01
A method of dynamic substructuring is presented which utilizes a set of static Ritz vectors as a replacement for normal eigenvectors in component mode synthesis. This set of Ritz vectors is generated in a recurrence relationship, proposed by Wilson, which has the form of a block-Krylov subspace. The initial seed to the recurrence algorithm is based upon the boundary flexibility vectors of the component. Improvements have been made in the formulation of the initial seed to the Krylov sequence through the use of block-filtering. A method to shift the Krylov sequence to create Ritz vectors that represent the dynamic behavior of the component at target frequencies, the target frequencies being determined by the applied forcing functions, has been developed. A method to terminate the Krylov sequence has also been developed. Various orthonormalization schemes have been developed and evaluated, including the Cholesky/QR method. Several auxiliary theorems and proofs which illustrate issues in component mode synthesis and loss of orthogonality in the Krylov sequence have also been presented. The resulting methodology is applicable to both fixed- and free-interface boundary components, and results in a general component model appropriate for any type of dynamic analysis. The accuracy is found to be comparable to that of component synthesis based upon normal modes, using fewer generalized coordinates. In addition, the block-Krylov recurrence algorithm is a series of static solutions and so requires significantly less computation than solving the normal eigenspace problem. The requirement for fewer vectors to form the component, coupled with the lower computational expense of calculating these Ritz vectors, combine to create a method more efficient than traditional component mode synthesis.
NASA Astrophysics Data System (ADS)
Bisetti, Fabrizio
2012-06-01
Recent trends in hydrocarbon fuel research indicate that the number of species and reactions in chemical kinetic mechanisms is rapidly increasing in an effort to provide predictive capabilities for fuels of practical interest. In order to cope with the computational cost associated with the time integration of stiff, large chemical systems, a novel approach is proposed. The approach combines an exponential integrator and Krylov subspace approximations to the exponential function of the Jacobian matrix. The components of the approach are described in detail and applied to the ignition of stoichiometric methane-air and iso-octane-air mixtures, here described by two widely adopted chemical kinetic mechanisms. The approach is found to be robust even at relatively large time steps and the global error displays a nominal third-order convergence. The performance of the approach is improved by utilising an adaptive algorithm for the selection of the Krylov subspace size, which guarantees an approximation to the matrix exponential within user-defined error tolerance. The Krylov projection of the Jacobian matrix onto a low-dimensional space is interpreted as a local model reduction with a well-defined error control strategy. Finally, the performance of the approach is discussed with regard to the optimal selection of the parameters governing the accuracy of its individual components.
Projection preconditioning for Lanczos-type methods
Bielawski, S.S.; Mulyarchik, S.G.; Popov, A.V.
1996-12-31
We show how auxiliary subspaces and related projectors may be used for preconditioning nonsymmetric systems of linear equations. It is shown that a system preconditioned in this way (or projected) is better conditioned than the original system (at least if the coefficient matrix of the system to be solved is symmetrizable). Two approaches for solving the projected system are outlined. The first implies straightforward computation of the projected matrix and subsequent use of some direct or iterative method. The second approach is projection preconditioning of a conjugate gradient-type solver. The latter approach is developed here in the context of the biconjugate gradient iteration and some related Lanczos-type algorithms. Some possible particular choices of auxiliary subspaces are discussed. It is shown that one of them is equivalent to using colorings. Some results of numerical experiments are reported.
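The conditioning claim is easy to see on a toy example: projecting out an auxiliary subspace W that spans the troublesome eigendirections removes them from the effective spectrum. A NumPy sketch (a hypothetical diagonal matrix, which makes the effect explicit; this is one common deflation-style construction, not necessarily the authors' exact projector):

```python
import numpy as np

# Hypothetical diagonal test matrix whose smallest eigenvalues dominate cond(A) = 100.
n = 100
A = np.diag(np.arange(1.0, n + 1))

# Auxiliary subspace W: eigenvectors of the k smallest eigenvalues
# (coordinate vectors here, since A is diagonal).
k = 5
W = np.eye(n)[:, :k]

AW = A @ W
coarse_inv = np.linalg.inv(W.T @ AW)     # small k x k "coarse" matrix
P = np.eye(n) - AW @ coarse_inv @ W.T    # projector annihilating the W-components

PA = P @ A
eigs = np.sort(np.linalg.eigvalsh((PA + PA.T) / 2))  # PA is symmetric in this example
nonzero = eigs[eigs > 1e-8]              # discard the k deflated zero eigenvalues
cond_projected = nonzero[-1] / nonzero[0]  # 100/6, versus cond(A) = 100
```

On the complement of W the projected operator's spectrum runs from 6 to 100 instead of 1 to 100, so any Krylov iteration on the projected system converges faster.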
Implementation of the block-Krylov boundary flexibility method of component synthesis
NASA Technical Reports Server (NTRS)
Carney, Kelly S.; Abdallah, Ayman A.; Hucklebridge, Arthur A.
1993-01-01
A method of dynamic substructuring is presented which utilizes a set of static Ritz vectors as a replacement for normal eigenvectors in component mode synthesis. This set of Ritz vectors is generated in a recurrence relationship, which has the form of a block-Krylov subspace. The initial seed to the recurrence algorithm is based on the boundary flexibility vectors of the component. This algorithm is not load-dependent, is applicable to both fixed and free-interface boundary components, and results in a general component model appropriate for any type of dynamic analysis. This methodology was implemented in the MSC/NASTRAN normal modes solution sequence using DMAP. The accuracy is found to be comparable to that of component synthesis based upon normal modes. The block-Krylov recurrence algorithm is a series of static solutions and so requires significantly less computation than solving the normal eigenspace problem.
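The block-Krylov recurrence in these two papers is a sequence of static solutions, K X_{k+1} = M X_k, with each new block orthogonalized against the previous ones before a Rayleigh-Ritz projection of the pencil (K, M). A NumPy/SciPy sketch on a hypothetical 1-D chain model (random loads stand in for the boundary flexibility seed):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(2)
n, block = 60, 3

# Hypothetical SPD stiffness K (1-D chain stencil) and mass M matrices.
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
M = np.eye(n)

# Initial seed: static responses K X0 = F (random loads as a stand-in for the
# boundary flexibility vectors of a real component).
X = np.linalg.solve(K, rng.standard_normal((n, block)))
basis = [np.linalg.qr(X)[0]]

# Block-Krylov recurrence: each new block is a static solution K X_{k+1} = M X_k,
# orthogonalized against the blocks generated so far.
for _ in range(4):
    X = np.linalg.solve(K, M @ basis[-1])
    Q = np.column_stack(basis)
    X -= Q @ (Q.T @ X)                 # Gram-Schmidt against previous blocks
    basis.append(np.linalg.qr(X)[0])

V = np.column_stack(basis)             # Ritz basis: 5 blocks of 3 vectors

# Rayleigh-Ritz on the pencil (K, M); compare the lowest Ritz value to the exact one.
ritz = eigh(V.T @ K @ V, V.T @ M @ V, eigvals_only=True)
exact = np.sort(np.linalg.eigvalsh(K))
err = abs(ritz[0] - exact[0]) / exact[0]
```

Every step costs one back-substitution with an already-factored K, which is the source of the computational advantage over a full eigensolution that both abstracts report.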
Newton-Raphson preconditioner for Krylov type solvers on GPU devices.
Kushida, Noriyuki
2016-01-01
A new Newton-Raphson-based preconditioner for Krylov-type linear equation solvers on GPGPUs is developed, and its performance is investigated. Conventional preconditioners improve the convergence of Krylov-type solvers and perform well on CPUs. However, they do not perform well on GPGPUs, because of the complexity of implementing powerful preconditioners. The developed preconditioner is based on the BFGS Hessian matrix approximation technique, which is well known as a robust and fast nonlinear equation solver. Because the Hessian matrix in BFGS represents the coefficient matrix of a system of linear equations in some sense, the approximated Hessian matrix can serve as a preconditioner. On the other hand, BFGS requires storing dense matrices and inverting them, which should be avoided on modern computers and supercomputers. To overcome these disadvantages, we therefore introduce the limited-memory BFGS, which requires less memory space and less computational effort than BFGS. In addition, the limited-memory BFGS can be implemented with BLAS libraries, which are well optimized for target architectures. The Hessian matrix approximation improves as the Krylov solver iterations proceed; consequently, the preconditioning matrix varies across iterations, and only flexible Krylov solvers can work with the developed preconditioner. The GCR method, which is a flexible Krylov solver, is employed because of the prevalence of GCR as a Krylov solver with a variable preconditioner. The performance investigation indicates the following benefits of the new preconditioner: (1) it is robust, i.e., it converges where conventional preconditioners (diagonal scaling and SSOR) fail; (2) in the best-case scenarios, it is over 10 times faster than conventional preconditioners on a CPU; and (3) because it requires only simple operations, it performs well on a GPGPU.
NASA Astrophysics Data System (ADS)
Koldan, Jelena; Puzyrev, Vladimir; de la Puente, Josep; Houzeaux, Guillaume; Cela, José María
2014-06-01
We present an elaborate preconditioning scheme for Krylov subspace methods which has been developed to improve the performance and reduce the execution time of parallel node-based finite-element (FE) solvers for 3-D electromagnetic (EM) numerical modelling in exploration geophysics. This new preconditioner is based on algebraic multigrid (AMG) that uses different basic relaxation methods, such as Jacobi, symmetric successive over-relaxation (SSOR) and Gauss-Seidel, as smoothers, and the wave front algorithm to create the groups used for coarse-level generation. We have implemented and tested this new preconditioner within our parallel nodal FE solver for 3-D forward problems in EM induction geophysics. We have performed a series of experiments for several models with different conductivity structures and characteristics to test the performance of our AMG preconditioning technique when combined with the biconjugate gradient stabilized method. The results have shown that the more challenging the problem is in terms of conductivity contrasts, ratio between the sizes of grid elements and/or frequency, the more benefit is obtained by using this preconditioner. Compared to other preconditioning schemes, such as diagonal, SSOR and truncated approximate inverse, the AMG preconditioner greatly improves the convergence of the iterative solver for all tested models. Also, in cases where other preconditioners succeed in converging to a desired precision, AMG is able to considerably reduce the total execution time of the forward-problem code, by up to an order of magnitude. Furthermore, the tests have confirmed that our AMG scheme ensures a grid-independent rate of convergence, as well as improvement in convergence regardless of how large the local mesh refinements are. In addition, AMG is designed to be a black-box preconditioner, which makes it easy to use and combine with different iterative methods. Finally, it has proved to be very practical and efficient in the
Hwang, F-N Wei, Z-H Huang, T-M Wang Weichung
2010-04-20
We develop a parallel Jacobi-Davidson approach for finding a partial set of eigenpairs of large sparse polynomial eigenvalue problems with application in quantum dot simulation. A Jacobi-Davidson eigenvalue solver is implemented based on the Portable, Extensible Toolkit for Scientific Computation (PETSc). The eigensolver thus inherits PETSc's efficient and varied parallel operations, linear solvers, preconditioning schemes, and ease of use. The parallel eigenvalue solver is then used to solve higher-degree polynomial eigenvalue problems arising in numerical simulations of three-dimensional quantum dots governed by Schrödinger's equation. We find that the parallel restricted additive Schwarz preconditioner in conjunction with a parallel Krylov subspace method (e.g. GMRES) can solve the correction equations, the most costly step in the Jacobi-Davidson algorithm, very efficiently in parallel. Moreover, the overall performance is quite satisfactory. We have observed near-perfect superlinear speedup by using up to 320 processors. The parallel eigensolver can find all target interior eigenpairs of a quintic polynomial eigenvalue problem with more than 32 million variables within 12 minutes by using 272 Intel 3.0 GHz processors.
Acceleration of k-Eigenvalue / Criticality Calculations using the Jacobian-Free Newton-Krylov Method
Dana Knoll; HyeongKae Park; Chris Newman
2011-02-01
We present a new approach for the k-eigenvalue problem using a combination of classical power iteration and the Jacobian-free Newton-Krylov method (JFNK). The method poses the k-eigenvalue problem as a fully coupled nonlinear system, which is solved by JFNK with an effective block preconditioning consisting of the power iteration and algebraic multigrid. We demonstrate the effectiveness and algorithmic scalability of the method on a 1-D, one-group problem and two 2-D, two-group problems, and provide comparisons to other efforts using similar algorithmic approaches.
How to Compute Green's Functions for Entire Mass Trajectories Within Krylov Solvers
NASA Astrophysics Data System (ADS)
Glässner, Uwe; Güsken, Stephan; Lippert, Thomas; Ritzenhöfer, Gero; Schilling, Klaus; Frommer, Andreas
The availability of efficient Krylov subspace solvers plays a vital role in the solution of a variety of numerical problems in computational science. Here we consider lattice field theory. We present a new general numerical method to compute many Green's functions for complex non-singular matrices within one iteration process. Our procedure applies to matrices of structure A = D - m, with m proportional to the unit matrix, and can be integrated within any Krylov subspace solver. We can compute the derivatives x(n) of the solution vector x with respect to the parameter m and construct the Taylor expansion of x around m. We demonstrate the advantages of our method using a minimal residual solver. Here the procedure requires one intermediate vector for each Green's function to compute. As real-life example, we determine a mass trajectory of the Wilson fermion matrix for lattice QCD. Here we find that we can obtain Green's functions at all masses ≥ m at the price of one inversion at mass m.
Efficient solution of parabolic equations by Krylov approximation methods
NASA Technical Reports Server (NTRS)
Gallopoulos, E.; Saad, Y.
1990-01-01
Numerical techniques for solving parabolic equations by the method of lines are addressed. The main motivation for the proposed approach is the possibility of exploiting a high degree of parallelism in a simple manner. The basic idea of the method is to approximate the action of the evolution operator on a given state vector by means of a projection process onto a Krylov subspace. Thus, the resulting approximation consists of applying an evolution operator of a very small dimension to a known vector, which is, in turn, computed accurately by exploiting well-known rational approximations to the exponential. Because the rational approximation is only applied to a small matrix, the only operations required with the original large matrix are matrix-by-vector multiplications, and as a result the algorithm can easily be parallelized and vectorized. Some relevant approximation and stability issues are discussed. We present some numerical experiments with the method and compare its performance with a few explicit and implicit algorithms.
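The projection step described here (and used by the exponential integrator for chemical kinetics earlier in this listing) approximates exp(dt·A)v by running m Arnoldi steps and exponentiating only the small m×m Hessenberg matrix: exp(dt·A)v ≈ β V_m exp(dt·H_m) e_1. A self-contained sketch on a hypothetical 1-D diffusion stencil:

```python
import numpy as np
from scipy.linalg import expm

def krylov_expm(A, v, m=20, dt=1.0):
    """Approximate exp(dt*A) @ v from an m-dimensional Arnoldi Krylov subspace."""
    n = len(v)
    beta = np.linalg.norm(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):                 # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:                # happy breakdown: exact subspace
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    # Only a small m x m exponential is needed; the large matrix enters
    # through matrix-vector products alone.
    e1 = np.zeros(m); e1[0] = 1.0
    return beta * V[:, :m] @ (expm(dt * H[:m, :m]) @ e1)

# Test problem: 1-D diffusion stencil, compared against the dense exponential.
n = 100
A = np.eye(n, k=1) - 2 * np.eye(n) + np.eye(n, k=-1)
v = np.ones(n)
approx = krylov_expm(A, v, m=20, dt=0.1)
exact = expm(0.1 * A) @ v
err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
```

The parallelism claim follows from the structure: the only operation involving the large matrix is `A @ V[:, j]`, while the dense `expm` acts on a 20×20 matrix.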
NASA Astrophysics Data System (ADS)
Aliaga, José I.; Alonso, Pedro; Badía, José M.; Chacón, Pablo; Davidović, Davor; López-Blanco, José R.; Quintana-Ortí, Enrique S.
2016-03-01
We introduce a new iterative Krylov subspace-based eigensolver for the simulation of macromolecular motions on desktop multithreaded platforms equipped with multicore processors and, possibly, a graphics accelerator (GPU). The method consists of two stages: the original problem is first reduced into a simpler band-structured form by means of a high-performance compute-intensive procedure. This is followed by a memory-intensive but low-cost Krylov iteration, which is off-loaded to the GPU by means of an efficient data-parallel kernel. The experimental results reveal the performance of the new eigensolver. Concretely, when applied to the simulation of macromolecules with a few thousand degrees of freedom, and when the number of eigenpairs to be computed is small to moderate, the new solver outperforms other methods implemented as part of high-performance numerical linear algebra packages for multithreaded architectures.
An Inexact Newton–Krylov Algorithm for Constrained Diffeomorphic Image Registration*
Mang, Andreas; Biros, George
2016-01-01
We propose numerical algorithms for solving large deformation diffeomorphic image registration problems. We formulate the nonrigid image registration problem as a problem of optimal control. This leads to an infinite-dimensional partial differential equation (PDE) constrained optimization problem. The PDE constraint consists, in its simplest form, of a hyperbolic transport equation for the evolution of the image intensity. The control variable is the velocity field. Tikhonov regularization on the control ensures well-posedness. We consider standard smoothness regularization based on H1- or H2-seminorms. We augment this regularization scheme with a constraint on the divergence of the velocity field (control variable) rendering the deformation incompressible (Stokes regularization scheme) and thus ensuring that the determinant of the deformation gradient is equal to one, up to the numerical error. We use a Fourier pseudospectral discretization in space and a Chebyshev pseudospectral discretization in time. The latter allows us to reduce the number of unknowns and enables the time-adaptive inversion for nonstationary velocity fields. We use a preconditioned, globalized, matrix-free, inexact Newton–Krylov method for numerical optimization. A parameter continuation is designed to estimate an optimal regularization parameter. Regularity is ensured by controlling the geometric properties of the deformation field. Overall, we arrive at a black-box solver that exploits computational tools that are precisely tailored for solving the optimality system. We study spectral properties of the Hessian, grid convergence, numerical accuracy, computational efficiency, and deformation regularity of our scheme. We compare the designed Newton–Krylov methods with a globalized Picard method (preconditioned gradient descent). We study the influence of a varying number of unknowns in time. The reported results demonstrate excellent numerical accuracy, guaranteed local deformation
Notes on Newton-Krylov based Incompressible Flow Projection Solver
Robert Nourgaliev; Mark Christon; J. Bakosi
2012-09-01
The purpose of the present document is to formulate a Jacobian-free Newton-Krylov algorithm for the approximate projection method used in the Hydra-TH code. Hydra-TH is developed by Los Alamos National Laboratory (LANL) under the auspices of the Consortium for Advanced Simulation of Light-Water Reactors (CASL) for thermal-hydraulics applications ranging from grid-to-rod fretting (GTRF) to multiphase subcooled-boiling flow. Currently, Hydra-TH is based on the semi-implicit projection method, which provides an excellent platform for simulation of transient single-phase thermal-hydraulics problems. This algorithm, however, is not efficient when applied to very slow or steady-state problems, or to highly nonlinear multiphase problems relevant to nuclear reactor thermal-hydraulics with boiling and condensation. These applications require fully implicit, tightly coupled algorithms. The major technical contribution of the present report is the formulation of a fully implicit projection algorithm which fulfills this purpose. This includes the definition of the nonlinear residuals used for GMRES-based linear iterations, as well as physics-based preconditioning techniques.
Newton-Krylov-Schwarz methods in unstructured grid Euler flow
Keyes, D.E.
1996-12-31
Newton-Krylov methods and Krylov-Schwarz (domain decomposition) methods have begun to become established in computational fluid dynamics (CFD) over the past decade. The former employ a Krylov method inside of Newton's method in a Jacobian-free manner, through directional differencing. The latter employ an overlapping Schwarz domain decomposition to derive a preconditioner for the Krylov accelerator that relies primarily on local information, for data-parallel concurrency. They may be composed as Newton-Krylov-Schwarz (NKS) methods, which seem particularly well suited for solving nonlinear elliptic systems in high-latency, distributed-memory environments. We give a brief description of this family of algorithms, with an emphasis on domain decomposition iterative aspects. We then describe numerical simulations with Newton-Krylov-Schwarz methods on an aerodynamic application emphasizing comparisons with a standard defect-correction approach and subdomain preconditioner consistency.
Newton-Krylov-Schwarz: An implicit solver for CFD
NASA Technical Reports Server (NTRS)
Cai, Xiao-Chuan; Keyes, David E.; Venkatakrishnan, V.
1995-01-01
Newton-Krylov methods and Krylov-Schwarz (domain decomposition) methods have begun to become established in computational fluid dynamics (CFD) over the past decade. The former employ a Krylov method inside of Newton's method in a Jacobian-free manner, through directional differencing. The latter employ an overlapping Schwarz domain decomposition to derive a preconditioner for the Krylov accelerator that relies primarily on local information, for data-parallel concurrency. They may be composed as Newton-Krylov-Schwarz (NKS) methods, which seem particularly well suited for solving nonlinear elliptic systems in high-latency, distributed-memory environments. We give a brief description of this family of algorithms, with an emphasis on domain decomposition iterative aspects. We then describe numerical simulations with Newton-Krylov-Schwarz methods on aerodynamics applications emphasizing comparisons with a standard defect-correction approach, subdomain preconditioner consistency, subdomain preconditioner quality, and the effect of a coarse grid.
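The Schwarz ingredient of NKS, an overlapping domain-decomposition preconditioner built from local solves, can be sketched compactly in SciPy. A hypothetical 1-D Poisson matrix stands in for the linearized flow Jacobian, and exact subdomain solves replace the ILU-type local solvers used in practice (no coarse grid is included here):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

# 1-D Poisson matrix as a stand-in for a linearized Jacobian (illustration only).
n = 120
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)

# Overlapping subdomains: contiguous index blocks extended by a small overlap.
size, overlap = 30, 5
domains = [np.arange(max(0, s - overlap), min(n, s + size + overlap))
           for s in range(0, n, size)]
local_inv = [np.linalg.inv(A[np.ix_(d, d)]) for d in domains]

def additive_schwarz(r):
    # One additive Schwarz sweep: sum of exact local solves on the subdomains.
    z = np.zeros_like(r)
    for d, Ai in zip(domains, local_inv):
        z[d] += Ai @ r[d]
    return z

M = LinearOperator((n, n), matvec=additive_schwarz)
x, info = gmres(A, b, M=M)      # Krylov accelerator with the Schwarz preconditioner
err = np.linalg.norm(A @ x - b)
```

Each subdomain solve uses only local data, which is what gives the method its data-parallel, low-communication character on distributed-memory machines.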
Nonlinear Krylov acceleration of reacting flow codes
Kumar, S.; Rawat, R.; Smith, P.; Pernice, M.
1996-12-31
We are working on computational simulations of three-dimensional reactive flows in applications encompassing a broad range of chemical engineering problems. Examples of such processes are coal (pulverized and fluidized bed) and gas combustion, petroleum processing (cracking), and metallurgical operations such as smelting. These simulations involve an interplay of various physical and chemical factors such as fluid dynamics with turbulence, convective and radiative heat transfer, multiphase effects such as fluid-particle and particle-particle interactions, and chemical reaction. The governing equations resulting from modeling these processes are highly nonlinear and strongly coupled, thereby rendering their solution by traditional iterative methods (such as nonlinear line Gauss-Seidel methods) very difficult and sometimes impossible. Hence we are exploring the use of nonlinear Krylov techniques (such as GMRES and Bi-CGSTAB) to accelerate and stabilize the existing solver. This strategy allows us to take advantage of the problem-definition capabilities of the existing solver. The overall approach amounts to using the SIMPLE (Semi-Implicit Method for Pressure-Linked Equations) method and its variants as nonlinear preconditioners for the nonlinear Krylov method. We have also adapted a backtracking approach for inexact Newton methods to damp the Newton step in the nonlinear Krylov method. This will be a report on work in progress. Preliminary results with nonlinear GMRES have been very encouraging: in many cases the number of line Gauss-Seidel sweeps has been reduced by about a factor of 5, and increased robustness of the underlying solver has also been observed.
Subspace Detectors: Efficient Implementation
Harris, D B; Paik, T
2006-07-26
The optimum detector for a known signal in white Gaussian background noise is the matched filter, also known as a correlation detector [Van Trees, 1968]. Correlation detectors offer exquisite sensitivity (high probability of detection at a fixed false alarm rate), but require perfect knowledge of the signal. The sensitivity of correlation detectors is increased by the availability of multichannel data, something common in seismic applications due to the prevalence of three-component stations and arrays. When the signal is imperfectly known, an extension of the correlation detector, the subspace detector, may be able to capture much of the performance of a matched filter [Harris, 2006]. In order to apply a subspace detector, the signal to be detected must be known to lie in a signal subspace of dimension d ≥ 1, which is defined by a set of d linearly-independent basis waveforms. The basis is constructed to span the range of signals anticipated to be emitted by a source of interest. Correlation detectors operate by computing a running correlation coefficient between a template waveform (the signal to be detected) and the data from a window sliding continuously along a data stream. The template waveform and the continuous data stream may be multichannel, as would be true for a three-component seismic station or an array. In such cases, the appropriate correlation operation computes the individual correlations channel-for-channel and sums the result (Figure 1). Both the waveform matching that occurs when a target signal is present and the cross-channel stacking provide processing gain. For a three-component station processing gain occurs from matching the time-history of the signals and their polarization structure. The projection operation that is at the heart of the subspace detector can be expensive to compute if implemented in a straightforward manner, i.e. with direct-form convolutions. The purpose of this report is to indicate how the projection can be
A Newton-Krylov solution to the porous medium equations in the agree code
Ward, A. M.; Seker, V.; Xu, Y.; Downar, T. J.
2012-07-01
In order to improve the convergence of the AGREE code for porous media, a Newton-Krylov solver was developed for steady-state problems. The current three-equation system was expanded and then coupled using Newton's method. Theory predicts second-order convergence, while the actual behavior was highly nonlinear; the discontinuous derivatives found in both closure and empirical relationships prevented true second-order convergence. The difference between the current solution and the new exact Newton solution was well below the convergence criterion. While convergence time did not dramatically decrease, the required number of outer iterations was reduced by approximately an order of magnitude. GMRES was also used to solve the problem, with ILU without fill-in used to precondition the iterative solver; the performance was slightly slower than the direct solution.
Geometric mean for subspace selection.
Tao, Dacheng; Li, Xuelong; Wu, Xindong; Maybank, Stephen J
2009-02-01
Subspace selection approaches are powerful tools in pattern classification and data visualization. One of the most important subspace approaches is the linear dimensionality reduction step in Fisher's linear discriminant analysis (FLDA), which has been successfully employed in many fields such as biometrics, bioinformatics, and multimedia information management. However, the linear dimensionality reduction step in FLDA has a critical drawback: for a classification task with c classes, if the dimension of the projected subspace is strictly lower than c - 1, the projection to a subspace tends to merge those classes which are close together in the original feature space. If separate classes are sampled from Gaussian distributions, all with identical covariance matrices, then the linear dimensionality reduction step in FLDA maximizes the mean value of the Kullback-Leibler (KL) divergences between different classes. Based on this viewpoint, the geometric mean for subspace selection is studied in this paper. Three criteria are analyzed: 1) maximization of the geometric mean of the KL divergences, 2) maximization of the geometric mean of the normalized KL divergences, and 3) the combination of 1 and 2. Preliminary experimental results based on synthetic data, the UCI Machine Learning Repository, and handwritten digits show that the third criterion is a potential discriminative subspace selection method, which significantly reduces the class separation problem compared with the linear dimensionality reduction step in FLDA and its several representative extensions. PMID:19110492
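The class-merging effect and the geometric-mean remedy can be reproduced on a toy problem. For homoscedastic Gaussians, the KL divergence between projected classes is ½ dᵀ(WᵀΣW)⁻¹d with d = Wᵀ(m₁ − m₂); replacing the arithmetic mean of these divergences by the geometric mean penalizes any single small pairwise divergence. A NumPy sketch with hypothetical class means (criterion 1 of the paper only, not the full method):

```python
import numpy as np

# Three class means in 2-D with shared identity covariance; B and C are close.
means = np.array([[0.0, 0.0], [4.0, 0.0], [4.5, 3.0]])
pairs = [(i, j) for i in range(3) for j in range(i + 1, 3)]

def kl(m1, m2, W):
    # KL divergence between projected homoscedastic Gaussians.
    d = W.T @ (m1 - m2)
    return 0.5 * float(d.T @ np.linalg.solve(W.T @ W, d))

def criterion(W, geometric):
    vals = np.array([kl(means[i], means[j], W) for i, j in pairs])
    if geometric:
        return np.exp(np.log(np.maximum(vals, 1e-300)).mean())
    return vals.mean()   # FLDA-like arithmetic-mean objective

# Scan 1-D projection directions and compare the two selection rules.
dirs = [np.array([[np.cos(a)], [np.sin(a)]]) for a in np.linspace(0, np.pi, 361)]
best_geo = dirs[int(np.argmax([criterion(W, True) for W in dirs]))]
best_arith = dirs[int(np.argmax([criterion(W, False) for W in dirs]))]

# Worst-separated class pair under each chosen projection.
worst_geo = min(kl(means[i], means[j], best_geo) for i, j in pairs)
worst_arith = min(kl(means[i], means[j], best_arith) for i, j in pairs)
```

For these means, the geometric-mean direction keeps the close pair (B, C) noticeably better separated than the arithmetic-mean direction, which is the paper's motivating observation.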
Fattebert, J
2008-07-29
We describe an iterative algorithm to solve electronic structure problems in Density Functional Theory. The approach is presented as a Subspace Accelerated Inexact Newton (SAIN) solver for the non-linear Kohn-Sham equations. It is related to a class of iterative algorithms known as RMM-DIIS in the electronic structure community. The method is illustrated with examples of real applications using a finite difference discretization and multigrid preconditioning.
Molecular mechanism of preconditioning.
Das, Manika; Das, Dipak K
2008-04-01
During the last 20 years, since the appearance of the first publication on ischemic preconditioning (PC), our knowledge of this phenomenon has increased exponentially. PC is defined as an increased tolerance to ischemia and reperfusion induced by a previous sublethal period of ischemia. This is the most powerful mechanism known to date for limiting infarct size. This adaptation occurs in a biphasic pattern: (i) early preconditioning (lasting for 2-3 h) and (ii) late preconditioning (starting at 24 h and lasting until 72-96 h after the initial ischemia). Early preconditioning is more potent than delayed preconditioning in reducing infarct size. Late preconditioning attenuates myocardial stunning and requires genomic activation with de novo protein synthesis. Early preconditioning depends on adenosine, opioids and, to a lesser degree, on bradykinin and prostaglandins released during ischemia. These molecules activate G-protein-coupled receptors, initiate activation of the K(ATP) channel, generate oxygen-free radicals, and stimulate a series of protein kinases, which include protein kinase C, tyrosine kinase, and members of the MAP kinase family. Late preconditioning is triggered by a similar sequence of events, but in addition essentially depends on newly synthesized proteins, which comprise iNOS, COX-2, manganese superoxide dismutase, and possibly heat shock proteins. The final mechanism of PC is still not very clear. The present review focuses on the possible role of signaling molecules that regulate cardiomyocyte life and death during ischemia and reperfusion. PMID:18344203
NASA Astrophysics Data System (ADS)
Asgharzadeh, Hafez; Borazjani, Iman
2014-11-01
Time-step-size restrictions and low convergence rates are major bottlenecks for implicit solution of the Navier-Stokes equations in simulations involving complex geometries with moving boundaries. The Newton-Krylov method (NKM) combines a Newton-type method for super-linearly convergent solution of nonlinear equations with Krylov subspace methods for solving the Newton correction equations, which can theoretically address both bottlenecks. The efficiency of this method depends heavily on the Jacobian-forming scheme, e.g., automatic differentiation is very expensive and Jacobian-free methods slow down as the mesh is refined. A novel, computationally efficient analytical Jacobian for NKM was developed to solve the unsteady incompressible Navier-Stokes momentum equations on staggered curvilinear grids with immersed boundaries. The NKM was validated and verified against the Taylor-Green vortex and pulsatile flow in a 90-degree bend, and efficiently handles complex geometries such as an intracranial aneurysm with multiple overset grids, pulsatile inlet flow, and immersed boundaries. The NKM is shown to be more efficient than semi-implicit Runge-Kutta methods and Jacobian-free Newton-Krylov methods. We believe NKM can be applied to many CFD techniques to decrease the computational cost. This work was supported partly by the NIH Grant R03EB014860, and the computational resources were partly provided by the Center for Computational Research (CCR) at University at Buffalo.
Skyline View: Efficient Distributed Subspace Skyline Computation
NASA Astrophysics Data System (ADS)
Kim, Jinhan; Lee, Jongwuk; Hwang, Seung-Won
Skyline queries have gained much attention as alternative query semantics with pros (e.g., low query formulation overhead) and cons (e.g., limited control over result size). To overcome the cons, subspace skyline queries have been recently studied, where users iteratively specify relevant feature subspaces of the search space. However, existing works mainly focus on centralized databases. This paper aims to extend subspace skyline computation to distributed environments such as the Web, where the most important issue is to minimize the cost of accessing vertically distributed objects. Toward this goal, we exploit prior skylines whose subspaces overlap the given subspace. In particular, we develop algorithms for three scenarios: when the subspace of prior skylines is a superspace, a subspace, or neither. Our experimental results validate that our proposed algorithm shows significantly better performance than the state-of-the-art algorithms.
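For readers unfamiliar with skyline semantics, a brute-force single-site skyline (Pareto-dominance filter) can be written in a few lines of NumPy. The hotel data below is a hypothetical example; the distributed access and subspace-reuse machinery of the paper is not shown:

```python
import numpy as np

def skyline(points):
    """Return the points not dominated by any other (smaller is better in every dim)."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        dominated = any(np.all(q <= p) and np.any(q < p)
                        for j, q in enumerate(pts) if j != i)
        if not dominated:
            keep.append(i)
    return pts[keep]

# Hotels as (price, distance): the skyline keeps only the Pareto-optimal ones.
hotels = [(50, 8), (60, 2), (80, 1), (70, 3), (90, 9)]
print(skyline(hotels))
```

A subspace skyline simply applies the same filter after projecting the points onto the user-selected dimensions, which is why skylines computed on overlapping subspaces can be reused.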
An accelerated subspace iteration for eigenvector derivatives
NASA Technical Reports Server (NTRS)
Ting, Tienko
1991-01-01
An accelerated subspace iteration method for calculating eigenvector derivatives has been developed. Factors affecting the effectiveness and the reliability of the subspace iteration are identified, and effective strategies concerning these factors are presented. The method has been implemented, and the results of a demonstration problem are presented.
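The accelerated variant for eigenvector derivatives is not spelled out in the abstract, but the plain subspace (orthogonal) iteration it builds on, with a Rayleigh-Ritz projection, can be sketched in NumPy as follows:

```python
import numpy as np

def subspace_iteration(A, k, iters=200):
    """Orthogonal (subspace) iteration for the k dominant eigenpairs of symmetric A."""
    rng = np.random.default_rng(0)
    Q, _ = np.linalg.qr(rng.standard_normal((A.shape[0], k)))
    for _ in range(iters):
        Q, _ = np.linalg.qr(A @ Q)           # block power step + re-orthogonalization
    T = Q.T @ A @ Q                          # Rayleigh-Ritz projection
    w, V = np.linalg.eigh(T)
    return w[::-1], (Q @ V)[:, ::-1]         # eigenvalues in descending order

A = np.diag([10.0, 5.0, 2.0, 1.0, 0.5])
w, V = subspace_iteration(A, 2)
print(w)
```

The convergence rate per sweep is governed by the ratio of the (k+1)-th to the k-th eigenvalue, which is what acceleration strategies such as the paper's aim to improve.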
Discriminant Subspace Learning for Microcalcification Clusters Detection
NASA Astrophysics Data System (ADS)
Zhang, X.-S.; Xie, Hua
This paper presents a novel approach to microcalcification clusters (MCs) detection in mammograms based on discriminant subspace learning. The ground truth of MCs in mammograms is assumed to be known a priori. Several typical subspace learning algorithms, such as principal component analysis (PCA), linear discriminant analysis (LDA), tensor subspace analysis (TSA) and general tensor discriminant analysis (GTDA), are employed to extract subspace features. In the subspace feature domain, the MCs detection procedure is formulated as a supervised learning and classification problem, and an SVM is used as the classifier to decide on the presence or absence of MCs. A large number of experiments are carried out to evaluate and compare the performance of the proposed MCs detection algorithms. The experimental results suggest that discriminant subspace learning is a promising technique for MCs detection.
Cai, Y.; Navon, I.M.
1995-11-01
In this paper, the authors report their work on applying Krylov iterative methods, accelerated by parallelizable domain-decomposed (DD) preconditioners, to the solution of nonsymmetric linear algebraic equations arising from implicit time discretization of a finite element model of the shallow water equations on a limited-area domain. Two types of previously proposed DD preconditioners are employed and a novel one is advocated to accelerate, with post-preconditioning, the convergence of three popular and competitive Krylov iterative linear solvers. Performance sensitivities of these preconditioners to inexact subdomain solvers are also reported. Autotasking, the parallel processing capability representing the third phase of multitasking libraries on CRAY Y-MP, has been exploited and successfully applied to both loop and subroutine level parallelization. Satisfactory speedup results were obtained. On the other hand, automatic loop-level parallelization, made possible by the autotasking preprocessor, attained only a speedup smaller than a factor of two. 39 refs., 2 figs., 6 tabs.
Face recognition with L1-norm subspaces
NASA Astrophysics Data System (ADS)
Maritato, Federica; Liu, Ying; Colonnese, Stefania; Pados, Dimitris A.
2016-05-01
We consider the problem of representing individual faces by maximum L1-norm projection subspaces calculated from available face-image ensembles. In contrast to conventional L2-norm subspaces, L1-norm subspaces are seen to offer significant robustness to image variations, disturbances, and rank selection. Face recognition becomes then the problem of associating a new unknown face image to the "closest," in some sense, L1 subspace in the database. In this work, we also introduce the concept of adaptively allocating the available number of principal components to different face image classes, subject to a given total number/budget of principal components. Experimental studies included in this paper illustrate and support the theoretical developments.
Enhancing bilinear subspace learning by element rearrangement.
Xu, Dong; Yan, Shuicheng; Lin, Stephen; Huang, Thomas S; Chang, Shih-Fu
2009-10-01
The success of bilinear subspace learning heavily depends on reducing correlations among features along rows and columns of the data matrices. In this work, we study the problem of rearranging elements within a matrix in order to maximize these correlations so that information redundancy in matrix data can be more extensively removed by existing bilinear subspace learning algorithms. An efficient iterative algorithm is proposed to tackle this essentially integer programming problem. In each step, the matrix structure is refined with a constrained Earth Mover's Distance procedure that incrementally rearranges matrices to become more similar to their low-rank approximations, which have high correlation among features along rows and columns. In addition, we present two extensions of the algorithm for conducting supervised bilinear subspace learning. Experiments in both unsupervised and supervised bilinear subspace learning demonstrate the effectiveness of our proposed algorithms in improving data compression performance and classification accuracy.
Shadid, J. N.; Pawlowski, R. P.; Cyr, E. C.; Tuminaro, R. S.; Chacon, L.; Weber, P. D.
2016-02-10
Here, we discuss how the computational solution of the governing balance equations for mass, momentum, heat transfer and magnetic induction for resistive magnetohydrodynamics (MHD) systems can be extremely challenging. These difficulties arise from both the strong nonlinear, nonsymmetric coupling of fluid and electromagnetic phenomena, as well as the significant range of time- and length-scales that the interactions of these physical mechanisms produce. This paper explores the development of a scalable, fully-implicit stabilized unstructured finite element (FE) capability for 3D incompressible resistive MHD. The discussion considers the development of a stabilized FE formulation in the context of the variational multiscale (VMS) method, and describes the scalable implicit time integration and direct-to-steady-state solution capability. The nonlinear solver strategy employs Newton–Krylov methods, which are preconditioned using fully-coupled algebraic multilevel preconditioners. These preconditioners are shown to enable a robust, scalable and efficient solution approach for the large-scale sparse linear systems generated by the Newton linearization. Verification results demonstrate the expected order-of-accuracy for the stabilized FE discretization. The approach is tested on a variety of prototype problems, which include MHD duct flows, an unstable hydromagnetic Kelvin–Helmholtz shear layer, and a 3D island coalescence problem used to model magnetic reconnection. Initial results that explore the scaling of the solution methods are also presented on up to 128K processors for problems with up to 1.8B unknowns on a Cray XK7.
A Hybrid, Parallel Krylov Solver for MODFLOW using Schwarz Domain Decomposition
NASA Astrophysics Data System (ADS)
Sutanudjaja, E.; Verkaik, J.; Hughes, J. D.
2015-12-01
In order to support decision makers in solving hydrological problems, detailed high-resolution models are often needed. These models typically consist of a large number of computational cells and have large memory requirements and long run times. An efficient technique for obtaining realistic run times and memory requirements is parallel computing, where the problem is divided over multiple processor cores. The new Parallel Krylov Solver (PKS) for MODFLOW-USG is presented. It combines both distributed memory parallelization by the Message Passing Interface (MPI) and shared memory parallelization by Open Multi-Processing (OpenMP). PKS includes conjugate gradient and biconjugate gradient stabilized linear accelerators that are both preconditioned by an overlapping additive Schwarz preconditioner such that: a) subdomains are partitioned using the METIS library; b) each subdomain uses local memory only and communicates with other subdomains by MPI within the linear accelerator; c) the solver is fully integrated into the MODFLOW-USG code. PKS is based on the unstructured PCGU-solver, and supports OpenMP. Depending on the available hardware, PKS can run exclusively with MPI, exclusively with OpenMP, or with a hybrid MPI/OpenMP approach. Benchmarks were performed on the Cartesius Dutch supercomputer (https://userinfo.surfsara.nl/systems/cartesius) using up to 144 cores, for a synthetic test (~112 million cells) and the Indonesia groundwater model (~4 million 1km cells). The latter, which includes all islands in the Indonesian archipelago, was built using publicly available global datasets, and is an ideal test bed for evaluating the applicability of PKS parallelization techniques to a global groundwater model consisting of multiple continents and islands. Results show that run time reductions can be greatest with the hybrid parallelization approach for the problems tested.
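The additive Schwarz idea above, independent local subdomain solves combined into a preconditioner for a Krylov accelerator, can be illustrated with a serial, zero-overlap (block-Jacobi) sketch in SciPy. The MPI/OpenMP partitioning, METIS, and MODFLOW specifics are omitted:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# 1-D Poisson matrix split into two non-overlapping subdomains; each block is
# factored locally, mimicking one additive Schwarz application with zero
# overlap (block-Jacobi) as the preconditioner for conjugate gradients.
n = 200
A = sp.diags([-np.ones(n-1), 2*np.ones(n), -np.ones(n-1)], [-1, 0, 1], format="csc")
half = n // 2
blocks = [slice(0, half), slice(half, n)]
factors = [spla.splu(A[s, s].tocsc()) for s in blocks]

def schwarz(r):
    z = np.zeros_like(r)
    for s, lu in zip(blocks, factors):
        z[s] = lu.solve(r[s])      # independent local subdomain solve
    return z

M = spla.LinearOperator((n, n), schwarz)
b = np.ones(n)
x, info = spla.cg(A, b, M=M)
print(info, np.linalg.norm(A @ x - b))
```

In a distributed setting each block solve would live on its own MPI rank, with overlap between subdomains improving the preconditioner at the cost of extra communication.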
Bakhos, Tania; Saibaba, Arvind K.; Kitanidis, Peter K.
2015-10-15
We consider the problem of estimating parameters in large-scale weakly nonlinear inverse problems for which the underlying governing equation is a linear, time-dependent parabolic partial differential equation. A major challenge in solving these inverse problems using Newton-type methods is the computational cost associated with solving the forward problem and with repeated construction of the Jacobian, which represents the sensitivity of the measurements to the unknown parameters. Forming the Jacobian can be prohibitively expensive because it requires repeated solutions of the forward and adjoint time-dependent parabolic partial differential equations corresponding to multiple sources and receivers. We propose an efficient method based on a Laplace transform-based exponential time integrator combined with a flexible Krylov subspace approach to solve the resulting shifted systems of equations efficiently. Our proposed solver speeds up the computation of the forward and adjoint problems, thus yielding significant speedup in total inversion time. We consider an application from Transient Hydraulic Tomography (THT), which is an imaging technique to estimate hydraulic parameters related to the subsurface from pressure measurements obtained by a series of pumping tests. The algorithms discussed are applied to a synthetic example taken from THT to demonstrate the resulting computational gains of this proposed method.
Krylov-Projected Quantum Monte Carlo Method.
Blunt, N S; Alavi, Ali; Booth, George H
2015-07-31
We present an approach to the calculation of arbitrary spectral, thermal, and excited state properties within the full configuration interaction quantum Monte Carlo framework. This is achieved via an unbiased projection of the Hamiltonian eigenvalue problem into a space of stochastically sampled Krylov vectors, thus enabling the calculation of real-frequency spectral and thermal properties and avoiding explicit analytic continuation. We use this approach to calculate temperature-dependent properties and one- and two-body spectral functions for various Hubbard models, as well as isolated excited states in ab initio systems. PMID:26274406
Zou, Ling; Zhao, Haihua; Zhang, Hongbin
2016-08-24
This study presents a numerical investigation on using the Jacobian-free Newton–Krylov (JFNK) method to solve the two-phase flow four-equation drift flux model with realistic constitutive correlations ('closure models'). The drift flux model is based on Ishii and his collaborators' work. Additional constitutive correlations for vertical channel flow, such as two-phase flow pressure drop, flow regime map, wall boiling and interfacial heat transfer models, were taken from the RELAP5-3D Code Manual and included to complete the model. The staggered grid finite volume method and the fully implicit backward Euler method were used as the spatial discretization and time integration schemes, respectively. The Jacobian-free Newton–Krylov method shows no difficulty in solving the two-phase flow drift flux model with a discrete flow regime map. In addition to the Jacobian-free approach, the preconditioning matrix is obtained by using the default finite differencing method provided in the PETSc package, and consequently the labor-intensive implementation of a complex analytical Jacobian matrix is avoided. Extensive and successful numerical verification and validation have been performed to prove the correct implementation of the models and methods. Code-to-code comparison with RELAP5-3D has further demonstrated the successful implementation of the drift flux model.
Numerical solution of large nonsymmetric eigenvalue problems
NASA Technical Reports Server (NTRS)
Saad, Youcef
1988-01-01
Several methods are described that combine Krylov subspace techniques, deflation procedures and preconditionings for computing a small number of eigenvalues and eigenvectors or Schur vectors of large sparse matrices. The most effective techniques for solving realistic problems from applications are those methods based on some form of preconditioning and one of several Krylov subspace techniques, such as Arnoldi's method or the Lanczos procedure. Two forms of preconditioning are considered: shift-and-invert and polynomial acceleration. The latter presents some advantages for parallel/vector processing but may be ineffective if eigenvalues inside the spectrum are sought. Some algorithmic details are provided that improve the reliability and effectiveness of these techniques.
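Shift-and-invert, one of the two preconditioning forms considered, is directly available in SciPy's ARPACK wrapper: passing `sigma` makes the Lanczos/Arnoldi iteration operate on (A - sigma*I)^{-1}, so eigenvalues nearest the shift converge fastest. A small sketch on a matrix with known spectrum:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Interior eigenvalues via shift-and-invert: eigsh with sigma factors
# (A - sigma*I) once and runs the iteration on its inverse.
n = 1000
A = sp.diags(np.arange(1.0, n + 1.0)).tocsc()   # known spectrum 1..1000
vals = eigsh(A, k=3, sigma=500.3, return_eigenvectors=False)
print(np.sort(vals))    # the three eigenvalues closest to the shift 500.3
```

The one-time sparse factorization is exactly the cost that makes shift-and-invert impractical for very large matrices, motivating the polynomial-acceleration alternative discussed above.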
Random subspaces in quantum information theory
NASA Astrophysics Data System (ADS)
Hayden, Patrick
2005-03-01
The selection of random unitary transformations plays a role in quantum information theory analogous to the role of random hash functions in classical information theory. Recent applications have included protocols achieving the quantum channel capacity and methods for extending superdense coding from bits to qubits. In addition, the corresponding random subspaces have proved useful for studying the structure of bipartite and multipartite entanglement. In quantum information theory, we're fond of saying that Hilbert space is a big place, the implication being that there's room for the unexpected to occur. The goal of this talk is to further bolster this homespun wisdom. I'm going to present a number of results in quantum information theory that stem from the initially counterintuitive geometry of high-dimensional vector spaces, where subspaces with highly extremal properties are the norm rather than the exception. Peter Shor has shown, for example, that randomly selected subspaces can be used to send quantum information through a noisy quantum channel at the highest possible rate, that is, the quantum channel capacity. More recently, Debbie Leung, Andreas Winter and I demonstrated that a randomly chosen subspace of a bipartite quantum system will likely contain nothing but nearly maximally entangled states, even if the subspace is nearly as large as the original system in qubit terms. This observation has implications for communication, especially superdense coding.
Signal subspace integration for improved seizure localization
Stamoulis, Catherine; Fernández, Iván Sánchez; Chang, Bernard S.; Loddenkemper, Tobias
2012-01-01
A subspace signal processing approach is proposed for improved scalp EEG-based localization of broad-focus epileptic seizures, and estimation of the directions of source arrivals (DOA). Ictal scalp EEGs from adult and pediatric patients with broad-focus seizures were first decomposed into dominant signal modes, and signal and noise subspaces at each modal frequency, to improve the signal-to-noise ratio while preserving the original data correlation structure. Transformed (focused) modal signals were then resynthesized into wideband signals from which the number of sources and DOA were estimated. These were compared to denoised signals via principal components analysis (PCA). Coherent subspace processing performed better than PCA, significantly improved the localization of ictal EEGs and the estimation of distinct sources and corresponding DOAs. PMID:23366067
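The split into signal and noise subspaces can be illustrated with a generic truncated-SVD projection on synthetic multichannel data; this is the standard construction, not the authors' coherent focusing pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic multichannel recording: 2 latent sources mixed into 8 channels + noise.
t = np.linspace(0, 1, 500)
sources = np.vstack([np.sin(2*np.pi*7*t), np.sign(np.sin(2*np.pi*13*t))])
mixing = rng.standard_normal((8, 2))
X = mixing @ sources + 0.1 * rng.standard_normal((8, 500))

# Split into signal and noise subspaces by truncating the SVD.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
r = 2                                   # assumed number of sources
X_signal = U[:, :r] * s[:r] @ Vt[:r]    # rank-r signal-subspace projection

noise_power = np.linalg.norm(X - X_signal) / np.linalg.norm(X)
print(noise_power)
```

The energy left outside the rank-r projection is mostly sensor noise, which is what makes subspace methods improve the signal-to-noise ratio before source localization.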
Symmetric subspace learning for image analysis.
Papachristou, Konstantinos; Tefas, Anastasios; Pitas, Ioannis
2014-12-01
Subspace learning (SL) is one of the most useful tools for image analysis and recognition. A large number of such techniques have been proposed utilizing a priori knowledge about the data. In this paper, new subspace learning techniques are presented that use symmetry constraints in their objective functions. The rationale behind this idea is to exploit the a priori knowledge that geometrical symmetry appears in several types of data, such as images, objects, faces, and so on. Experiments on artificial, facial expression recognition, face recognition, and object categorization databases highlight the superiority and the robustness of the proposed techniques, in comparison with standard SL techniques.
Real Space DFT by Locally Optimal Block Preconditioned Conjugate Gradient Method
NASA Astrophysics Data System (ADS)
Michaud, Vincent; Guo, Hong
2012-02-01
Real space approaches solve the Kohn-Sham (KS) DFT problem as a system of partial differential equations (PDE) on real-space numerical grids. In such techniques, the Hamiltonian matrix is typically much larger but sparser than the matrix arising in state-of-the-art DFT codes, which are often based on directly minimizing the total energy functional. Evidence of good performance of real space methods - by Chebyshev filtered subspace iteration (CFSI) - was reported by Zhou, Saad, Tiago and Chelikowsky [1]. We found that the performance of the locally optimal block preconditioned conjugate gradient method (LOBPCG) introduced by Knyazev [2], when used in conjunction with CFSI, generally exceeds that of CFSI for solving the KS equations. We will present our implementation of the LOBPCG-based real space electronic structure calculator. [1] Y. Zhou, Y. Saad, M. L. Tiago, and J. R. Chelikowsky, ``Self-consistent-field calculations using Chebyshev-filtered subspace iteration,'' J. Comput. Phys., vol. 219, pp. 172-184, November 2006. [2] A. V. Knyazev, ``Toward the optimal preconditioned eigensolver: Locally optimal block preconditioned conjugate gradient method,'' SIAM J. Sci. Comput., vol. 23, pp. 517-541, 2001.
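SciPy exposes Knyazev's algorithm as `scipy.sparse.linalg.lobpcg`. Below is a minimal sketch on a 1-D Laplacian with a deliberately crude Jacobi preconditioner standing in for the multigrid preconditioning used in real-space DFT codes:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lobpcg

# Four smallest eigenpairs of a sparse SPD matrix (1-D Laplacian) via LOBPCG.
n = 100
A = sp.diags([-np.ones(n-1), 2*np.ones(n), -np.ones(n-1)], [-1, 0, 1]).tocsr()
M = sp.diags(1.0 / A.diagonal())          # Jacobi preconditioner (crude stand-in)
rng = np.random.default_rng(0)
X = rng.standard_normal((n, 4))           # random initial block of 4 vectors
w, V = lobpcg(A, X, M=M, largest=False, tol=1e-8, maxiter=500)

# Analytic eigenvalues of the discrete 1-D Laplacian for comparison.
exact = 4 * np.sin(np.arange(1, 5) * np.pi / (2 * (n + 1)))**2
print(np.sort(w), exact)
```

With a genuinely effective preconditioner (e.g., multigrid), the iteration count becomes nearly independent of the grid resolution, which is the attraction of LOBPCG for large real-space Hamiltonians.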
Subspace Identification with Multiple Data Sets
NASA Technical Reports Server (NTRS)
Duchesne, Laurent; Feron, Eric; Paduano, James D.; Brenner, Marty
1995-01-01
Most existing subspace identification algorithms assume that a single input to output data set is available. Motivated by a real life problem on the F18-SRA experimental aircraft, we show how these algorithms are readily adapted to handle multiple data sets. We show by means of an example the relevance of such an improvement.
Efficient and Portable Krylov Eigensolver on Many Core Architectures
NASA Astrophysics Data System (ADS)
Calvin, C.; Petiton, S.; Ye, F.; Boillod-Cerneux, F.
2014-06-01
We present in this article a highly parallel Krylov solver for large eigenvalue problems, the Explicitly Restarted Arnoldi Method (ERAM). Our ERAM implementation may be executed on many-core configurations, both homogeneous and heterogeneous, in order to take advantage of most present and future supercomputers. From these experiments, we propose our approach for designing efficient and portable algorithms on multi-core architectures. It is based on the design of generic algorithms using the TRILINOS approach and specialized implementations of elementary operations (matrix-matrix, matrix-vector, scalar product ...) on the accelerators mentioned above. Some results on large sparse and dense matrices on petascale-class machines using CPUs and GPUs, and some first results obtained on the Intel MIC processor, are presented and analysed.
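The restart structure of ERAM is compact enough to sketch in NumPy; this is a serial toy, without the TRILINOS layering or accelerator kernels the paper targets:

```python
import numpy as np

def arnoldi(A, v, m):
    """m-step Arnoldi factorization with modified Gram-Schmidt."""
    n = len(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

# Explicitly restarted Arnoldi: restart from the dominant Ritz vector.
rng = np.random.default_rng(0)
A = np.diag(np.arange(1.0, 101.0)) + 0.01 * rng.standard_normal((100, 100))
v = rng.standard_normal(100)
for _ in range(10):                        # a few explicit restarts
    V, H = arnoldi(A, v, 20)
    w, Y = np.linalg.eig(H[:20, :20])
    k = np.argmax(w.real)
    v = (V[:, :20] @ Y[:, k]).real         # new start vector = dominant Ritz vector
ritz = w.real[k]
print(ritz)
```

Restarting keeps the Krylov basis (and hence memory and orthogonalization cost) bounded, which is what makes the method attractive on accelerator hardware.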
Compressive Detection of Random Subspace Signals
NASA Astrophysics Data System (ADS)
Razavi, Alireza; Valkama, Mikko; Cabric, Danijela
2016-08-01
The problem of compressive detection of random subspace signals is studied. We consider signals modeled as $\mathbf{s} = \mathbf{H} \mathbf{x}$ where $\mathbf{H}$ is an $N \times K$ matrix with $K \le N$ and $\mathbf{x} \sim \mathcal{N}(\mathbf{0}_{K,1},\sigma_x^2 \mathbf{I}_K)$. We say that signal $\mathbf{s}$ lies in or leans toward a subspace if the largest eigenvalue of $\mathbf{H} \mathbf{H}^T$ is strictly greater than its smallest eigenvalue. We first design a measurement matrix $\mathbf{\Phi}=[\mathbf{\Phi}_s^T,\mathbf{\Phi}_o^T]^T$ comprising two sub-matrices $\mathbf{\Phi}_s$ and $\mathbf{\Phi}_o$, where $\mathbf{\Phi}_s$ projects the signals onto the strongest left-singular vectors, i.e., the left-singular vectors corresponding to the largest singular values, of the subspace matrix $\mathbf{H}$, and $\mathbf{\Phi}_o$ projects them onto the weakest left-singular vectors. We then propose two detectors which work based on the difference in energies of the samples measured by the two sub-matrices $\mathbf{\Phi}_s$ and $\mathbf{\Phi}_o$ and prove their optimality. Simplified versions of the proposed detectors for the case when the variance of noise is known are also provided. Furthermore, we study the performance of the detector when measurements are imprecise and show how imprecision can be compensated by employing more measurement devices. The problem is then re-formulated for the case when the signal lies in the union of a finite number of linear subspaces instead of a single linear subspace. Finally, we study the performance of the proposed methods by simulation examples.
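The energy-difference idea behind the detectors can be simulated in a few lines of NumPy, with a synthetic H, no additive noise, and hypothetical dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 64, 8
# Subspace matrix whose largest singular value clearly exceeds its smallest,
# so signals s = H x "lean toward" the strongest left-singular directions.
H = rng.standard_normal((N, K)) @ np.diag(np.linspace(3.0, 0.3, K))
U, sv, _ = np.linalg.svd(H, full_matrices=False)

Phi_s = U[:, :K // 2].T      # projection onto the strongest left-singular vectors
Phi_o = U[:, K // 2:].T      # projection onto the weakest left-singular vectors

trials = 2000
wins = 0
for _ in range(trials):
    s = H @ rng.standard_normal(K)            # x ~ N(0, I_K)
    wins += np.sum((Phi_s @ s) ** 2) > np.sum((Phi_o @ s) ** 2)
print(wins / trials)
```

Because the expected energy captured by each singular direction is proportional to the squared singular value, the strong-side measurement dominates in the vast majority of trials, which is the statistic the proposed detectors threshold.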
General purpose nonlinear system solver based on Newton-Krylov method.
2013-12-01
KINSOL is part of a software family called SUNDIALS: SUite of Nonlinear and Differential/Algebraic equation Solvers [1]. KINSOL is a general-purpose nonlinear system solver based on Newton-Krylov and fixed-point solver technologies [2].
Vecharynski, Eugene; Yang, Chao; Pask, John E.
2015-06-01
We present an iterative algorithm for computing an invariant subspace associated with the algebraically smallest eigenvalues of a large sparse or structured Hermitian matrix A. We are interested in the case in which the dimension of the invariant subspace is large (e.g., over several hundreds or thousands) even though it may still be small relative to the dimension of A. These problems arise from, for example, density functional theory (DFT) based electronic structure calculations for complex materials. The key feature of our algorithm is that it performs fewer Rayleigh–Ritz calculations compared to existing algorithms such as the locally optimal block preconditioned conjugate gradient or the Davidson algorithm. It is a block algorithm, and hence can take advantage of efficient BLAS3 operations and be implemented with multiple levels of concurrency. We discuss a number of practical issues that must be addressed in order to implement the algorithm efficiently on a high performance computer.
Analyzing hyperspectral images into multiple subspaces using Gaussian mixture models
NASA Astrophysics Data System (ADS)
Spence, Clay D.
2016-05-01
I argue that the spectra in a hyperspectral datacube will usually lie in several low-dimensional subspaces, and that these subspaces are more easily estimated from the data than the endmembers. I present an algorithm for finding the subspaces. The algorithm fits the data with a Gaussian mixture model, in which the means and covariance matrices are parameterized in terms of the subspaces. The locations of materials can be inferred from the fit of library spectra to the subspaces. The algorithm can be modified to perform material detection. This has better performance than standard algorithms such as ACE, and runs in real time.
HyeongKae Park; R. Nourgaliev; Richard C. Martineau; Dana A. Knoll
2008-09-01
Multidimensional, higher-order (2nd and higher) numerical methods have come to the forefront in recent years due to significant advances in computer technology and numerical algorithms, and have shown great potential as viable design tools for realistic applications. To achieve this goal, implicit high-order accurate coupling of the multiphysics simulations is a critical component. One of the issues that arises from multiphysics simulation is the necessity to resolve multiple time scales. For example, the dynamical time scales of neutron kinetics, fluid dynamics and heat conduction differ significantly (typically by a factor of >10^10), with the dominant (fastest) physical mode also changing during the course of a transient [Pope and Mousseau, 2007]. This leads to severe time step restrictions for stability in traditional multiphysics (i.e., operator-split, semi-implicit discretization) simulations. Lower-order methods suffer from undesirable numerical dissipation. Thus an implicit, higher-order accurate scheme is necessary to perform seamlessly-coupled multiphysics simulations that can be used to analyze the “what-if” regulatory accident scenarios, or to design and optimize engineering systems.
Exploiting Unsupervised and Supervised Constraints for Subspace Clustering.
Hu, Han; Feng, Jianjiang; Zhou, Jie
2015-08-01
Data in many image and video analysis tasks can be viewed as points drawn from multiple low-dimensional subspaces, with each subspace corresponding to one category or class. One basic task in processing such data is to separate the points according to their underlying subspaces, referred to as subspace clustering. Extensive studies have been made on this subject, and nearly all of them use unconstrained subspace models, meaning the points can be drawn from anywhere in a subspace, to represent the data. In this paper, we attempt subspace clustering under a constrained subspace assumption: the data are further restricted within the corresponding subspaces, e.g., belonging to a submanifold or satisfying a spatial regularity constraint. This assumption usually describes real data better, such as differently moving objects in a video scene or face images of different subjects under varying illumination. A unified integer linear programming optimization framework is used to approach subspace clustering, which can be efficiently solved by a branch-and-bound (BB) method. We also show that various kinds of supervised information, such as subspace number, outlier ratio, pairwise constraints, size priors, etc., can be conveniently incorporated into the proposed framework. Experiments on real data show that the proposed method significantly outperforms state-of-the-art algorithms in clustering accuracy. The effectiveness of the proposed method in exploiting supervised information is also demonstrated. PMID:26352994
Iterative methods for large scale nonlinear and linear systems. Final report, 1994--1996
Walker, H.F.
1997-09-01
The major goal of this research has been to develop improved numerical methods for the solution of large-scale systems of linear and nonlinear equations, such as occur almost ubiquitously in the computational modeling of physical phenomena. The numerical methods of central interest have been Krylov subspace methods for linear systems, which have enjoyed great success in many large-scale applications, and Newton-Krylov methods for nonlinear problems, which use Krylov subspace methods to solve approximately the linear systems that characterize Newton steps. Krylov subspace methods have undergone a remarkable development over the last decade or so and are now very widely used for the iterative solution of large-scale linear systems, particularly those that arise in the discretization of partial differential equations (PDEs) that occur in computational modeling. Newton-Krylov methods have enjoyed parallel success and are currently used in many nonlinear applications of great scientific and industrial importance. In addition to their effectiveness on important problems, Newton-Krylov methods also offer a nonlinear framework within which to transfer to the nonlinear setting any advances in Krylov subspace methods or preconditioning techniques, or new algorithms that exploit advanced machine architectures. This research has resulted in a number of improved Krylov and Newton-Krylov algorithms together with applications of these to important linear and nonlinear problems.
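As a concrete illustration of the Newton-Krylov approach described above, the sketch below uses SciPy's `newton_krylov` solver on a hypothetical 1D nonlinear reaction-diffusion residual; the problem and its parameters are invented for illustration and are not taken from the report:

```python
import numpy as np
from scipy.optimize import newton_krylov

# Residual F(u) = u'' - u^3 + 1 on a uniform interior grid with
# zero Dirichlet boundary values (an invented test problem).
def residual(u):
    h = 1.0 / (u.size + 1)
    upad = np.concatenate(([0.0], u, [0.0]))
    laplacian = (upad[:-2] - 2.0 * upad[1:-1] + upad[2:]) / h**2
    return laplacian - u**3 + 1.0

# Each Newton step's linear system is solved approximately by a
# Krylov method (LGMRES by default), without forming the Jacobian.
u = newton_krylov(residual, np.zeros(50), f_tol=1e-9)
print(np.max(np.abs(residual(u))))  # residual max-norm below 1e-9
```

The matrix-free character is the point: only residual evaluations are needed, which is what makes the Newton-Krylov framework attractive for large-scale PDE discretizations.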
Boundary-aware multidomain subspace deformation.
Yang, Yin; Xu, Weiwei; Guo, Xiaohu; Zhou, Kun; Guo, Baining
2013-10-01
In this paper, we propose a novel framework for multidomain subspace deformation using node-wise corotational elasticity. With the proper construction of subspaces based on knowledge of the boundary deformation, we can use the Lagrange multiplier technique to impose coupling constraints at the boundary without overconstraining. In our deformation algorithm, the number of constraint equations needed to couple two neighboring domains is not related to the number of nodes on the boundary, but equals the number of selected boundary deformation modes. The crack artifact is not present in our simulation results, and domain decompositions with loops can be easily handled. Experimental results show that the single-core implementation of our algorithm can achieve real-time performance in simulating deformable objects with around a quarter million tetrahedral elements. PMID:23929845
Spectral face recognition using orthogonal subspace bases
NASA Astrophysics Data System (ADS)
Wimberly, Andrew; Robila, Stefan A.; Peplau, Tansy
2010-04-01
We present an efficient method for facial recognition using hyperspectral imaging and orthogonal subspaces. Projecting the data into orthogonal subspaces has the advantage of compactness and reduction of redundancy. We focus on two approaches: Principal Component Analysis and Orthogonal Subspace Projection. Our work is separated in three stages. First, we designed an experimental setup that allowed us to create a hyperspectral image database of 17 subjects under different facial expressions and viewing angles. Second, we investigated approaches to employ spectral information for the generation of fused grayscale images. Third, we designed and tested a recognition system based on the methods described above. The experimental results show that spectral fusion leads to improvement of recognition accuracy when compared to regular imaging. The work expands on previous band extraction research and has the distinct advantage of being one of the first that combines spatial information (i.e. face characteristics) with spectral information. In addition, the techniques are general enough to accommodate differences in skin spectra.
Orderings for conjugate gradient preconditionings
NASA Technical Reports Server (NTRS)
Ortega, James M.
1991-01-01
The effect of orderings on the rate of convergence of the conjugate gradient method with SSOR or incomplete Cholesky preconditioning is examined. Some results also are presented that help to explain why red/black ordering gives an inferior rate of convergence.
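The iteration-count effects studied in this paper can be illustrated in miniature with SciPy. Since SciPy does not expose SSOR or incomplete Cholesky preconditioners directly, the sketch below uses an incomplete LU factorization (in the natural ordering) as the preconditioner for CG on a 2D Poisson matrix — an illustrative stand-in, not a reproduction of the paper's experiments:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# 2D Poisson matrix on a 32x32 grid (5-point stencil, natural ordering).
n = 32
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
A = (sp.kron(sp.identity(n), T) + sp.kron(T, sp.identity(n))).tocsc()
b = np.ones(A.shape[0])

# Count CG iterations via the callback.
counts = {}
def counter(key):
    counts[key] = 0
    def cb(xk):
        counts[key] += 1
    return cb

x_plain, info_plain = spla.cg(A, b, callback=counter("plain"))

# Incomplete LU factorization applied as a preconditioner M ~ A^{-1}.
ilu = spla.spilu(A, drop_tol=1e-4)
M = spla.LinearOperator(A.shape, matvec=ilu.solve)
x_prec, info_prec = spla.cg(A, b, M=M, callback=counter("ilu"))

print(counts)  # the preconditioned run takes far fewer iterations
```

The paper's point is subtler than this toy shows: for SSOR and incomplete Cholesky, the *ordering* of the unknowns changes the quality of the factorization itself, which is why red/black ordering can give an inferior convergence rate despite its parallelism advantages.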
NASA Astrophysics Data System (ADS)
Simmons, Alex; Yang, Qianqian; Moroney, Timothy
2015-04-01
The numerical solution of fractional partial differential equations poses significant computational challenges in regard to efficiency as a result of the spatial nonlocality of the fractional differential operators. The dense coefficient matrices that arise from spatial discretisation of these operators mean that even one-dimensional problems can be difficult to solve using standard methods on grids comprising thousands of nodes or more. In this work we address this issue of efficiency for one-dimensional, nonlinear space-fractional reaction-diffusion equations with fractional Laplacian operators. We apply variable-order, variable-stepsize backward differentiation formulas in a Jacobian-free Newton-Krylov framework to advance the solution in time. A key advantage of this approach is the elimination of any requirement to form the dense matrix representation of the fractional Laplacian operator. We show how a banded approximation to this matrix, which can be formed and factorised efficiently, can be used as part of an effective preconditioner that accelerates convergence of the Krylov subspace iterative solver. Our approach also captures the full contribution from the nonlinear reaction term in the preconditioner, which is crucial for problems that exhibit stiff reactions. Numerical examples are presented to illustrate the overall effectiveness of the solver.
The variational subspace valence bond method
Fletcher, Graham D.
2015-04-07
The variational subspace valence bond (VSVB) method based on overlapping orbitals is introduced. VSVB provides variational support against collapse for the optimization of overlapping linear combinations of atomic orbitals (OLCAOs) using modified orbital expansions, without recourse to orthogonalization. OLCAOs have the advantage of being naturally localized, chemically intuitive (to individually model bonds and lone pairs, for example), and transferable between different molecular systems. Such features are exploited to avoid key computational bottlenecks. Since the OLCAOs can be doubly occupied, VSVB can access very large problems, and calculations on systems with several hundred atoms are presented.
NASA Astrophysics Data System (ADS)
Borazjani, Iman; Asgharzadeh, Hafez
2015-11-01
Flow simulations involving complex geometries and moving boundaries suffer from time-step size restrictions and low convergence rates with explicit and semi-implicit schemes. Implicit schemes can be used to overcome these restrictions; however, implementing an implicit solver for nonlinear equations, including the Navier-Stokes equations, is not straightforward. Newton-Krylov subspace methods (NKMs) are among the most advanced iterative methods for solving nonlinear equations such as the implicit discretization of the Navier-Stokes equations. The efficiency of NKMs depends heavily on the Jacobian formation method: automatic differentiation is very expensive, and matrix-free methods slow down as the mesh is refined. An analytical Jacobian is an inexpensive alternative, but deriving the analytical Jacobian for the Navier-Stokes equations on a staggered grid is challenging. An NKM with a novel analytical Jacobian was developed and validated against the Taylor-Green vortex and pulsatile flow in a 90-degree bend. The developed method successfully handled complex geometries, such as an intracranial aneurysm with multiple overset grids, and immersed boundaries. It is shown that the NKM with an analytical Jacobian is 3 to 25 times faster than the fixed-point implicit Runge-Kutta method, and more than 100 times faster than automatic differentiation, depending on the grid (size) and the flow problem. The developed methods are fully parallelized, with parallel efficiency of 80-90% on the problems tested.
Preconditioning for traumatic brain injury
Yokobori, Shoji; Mazzeo, Anna T; Hosein, Khadil; Gajavelli, Shyam; Dietrich, W. Dalton; Bullock, M. Ross
2016-01-01
Traumatic brain injury (TBI) treatment is now focused on the prevention of primary injury and reduction of secondary injury. However, no single effective treatment is available as yet for the mitigation of traumatic brain damage in humans. Both chemical and environmental stresses applied before injury have been shown to induce consequent protection against post-TBI neuronal death. This concept, termed “preconditioning,” is achieved by exposure to different pre-injury stressors to induce “tolerance” to the effects of TBI. However, the precise mechanisms underlying this tolerance phenomenon are not fully understood in TBI, and therefore even less information is available about possible indications in clinical TBI patients. In this review we summarize TBI pathophysiology and discuss existing animal studies demonstrating the efficacy of preconditioning in diffuse and focal types of TBI. We also review other non-TBI preconditioning studies, including ischemic, environmental, and chemical preconditioning, which may be relevant to TBI. To date, no clinical studies exist in this field, and we speculate on possible future clinical situations in which pre-TBI preconditioning could be considered. PMID:24323189
Classes of Invariant Subspaces for Some Operator Algebras
NASA Astrophysics Data System (ADS)
Hamhalter, Jan; Turilova, Ekaterina
2014-10-01
New results are proved showing connections between structural properties of von Neumann algebras and order-theoretic properties of the structures of invariant subspaces given by them. We show that for any properly infinite von Neumann algebra M there is an affiliated subspace such that all important subspace classes living on it are different. Moreover, we show that it can be chosen such that the set of σ-additive measures on its subspace classes is empty. We generalize the measure-theoretic criterion for completeness of inner product spaces to affiliated subspaces corresponding to a Type I factor with finite-dimensional commutant. We summarize hitherto known results in this area, discuss their importance for the mathematical foundations of quantum theory, and outline perspectives for further research.
Vecharynski, Eugene; Yang, Chao; Pask, John E.
2015-02-25
Here, we present an iterative algorithm for computing an invariant subspace associated with the algebraically smallest eigenvalues of a large sparse or structured Hermitian matrix A. We are interested in the case in which the dimension of the invariant subspace is large (e.g., over several hundreds or thousands) even though it may still be small relative to the dimension of A. These problems arise from, for example, density functional theory (DFT) based electronic structure calculations for complex materials. The key feature of our algorithm is that it performs fewer Rayleigh–Ritz calculations compared to existing algorithms such as the locally optimal block preconditioned conjugate gradient or the Davidson algorithm. It is a block algorithm, and hence can take advantage of efficient BLAS3 operations and be implemented with multiple levels of concurrency. We discuss a number of practical issues that must be addressed in order to implement the algorithm efficiently on a high performance computer.
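One of the existing methods this paper benchmarks against, the locally optimal block preconditioned conjugate gradient algorithm (LOBPCG), is available in SciPy. The toy run below, on an invented diagonal test matrix with known spectrum, shows the basic block-iteration interface for computing a few algebraically smallest eigenvalues:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lobpcg

# Invented test matrix with known spectrum {1, 2, ..., 200}.
n = 200
A = sp.diags([np.arange(1.0, n + 1.0)], [0], format="csr")

# Block of 4 random starting vectors; largest=False requests the
# algebraically smallest eigenvalues.
rng = np.random.default_rng(0)
X = rng.standard_normal((n, 4))
vals, vecs = lobpcg(A, X, largest=False, tol=1e-8, maxiter=500)

print(np.sort(vals))  # approximately [1. 2. 3. 4.]
```

The block structure is what the paper's algorithm shares with LOBPCG (and what enables BLAS3-rich implementations); the paper's contribution is performing fewer Rayleigh-Ritz calculations per computed eigenpair.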
Scalable parallel Newton-Krylov solvers for discontinuous Galerkin discretizations
Persson, P.-O.
2008-12-31
We present techniques for implicit solution of discontinuous Galerkin discretizations of the Navier-Stokes equations on parallel computers. While a block-Jacobi method is simple and straight-forward to parallelize, its convergence properties are poor except for simple problems. Therefore, we consider Newton-GMRES methods preconditioned with block-incomplete LU factorizations, with optimized element orderings based on a minimum discarded fill (MDF) approach. We discuss the difficulties with the parallelization of these methods, but also show that with a simple domain decomposition approach, most of the advantages of the block-ILU over the block-Jacobi preconditioner are still retained. The convergence is further improved by incorporating the matrix connectivities into the mesh partitioning process, which aims at minimizing the errors introduced from separating the partitions. We demonstrate the performance of the schemes for realistic two- and three-dimensional flow problems.
Preconditioned iterations to calculate extreme eigenvalues
Brand, C.W.; Petrova, S.
1994-12-31
Common iterative algorithms to calculate a few extreme eigenvalues of a large, sparse matrix are Lanczos methods or power iterations. They converge at a rate proportional to the separation of the extreme eigenvalues from the rest of the spectrum. Appropriate preconditioning improves the separation of the eigenvalues. Davidson's method and its generalizations exploit this fact. The authors examine a preconditioned iteration that resembles a truncated version of Davidson's method with a different preconditioning strategy.
NASA Astrophysics Data System (ADS)
De Maio, Antonio; Orlando, Danilo
2016-04-01
This paper deals with adaptive radar detection of a subspace signal competing with two sources of interference. The former is Gaussian with unknown covariance matrix and accounts for the joint presence of clutter plus thermal noise. The latter is structured as a subspace signal and models coherent pulsed jammers impinging on the radar antenna. The problem is solved via the Principle of Invariance which is based on the identification of a suitable group of transformations leaving the considered hypothesis testing problem invariant. A maximal invariant statistic, which completely characterizes the class of invariant decision rules and significantly compresses the original data domain, as well as its statistical characterization are determined. Thus, the existence of the optimum invariant detector is addressed together with the design of practically implementable invariant decision rules. At the analysis stage, the performance of some receivers belonging to the new invariant class is established through the use of analytic expressions.
Tensor-Krylov methods for solving large-scale systems of nonlinear equations.
Bader, Brett William
2004-08-01
This paper develops and investigates iterative tensor methods for solving large-scale systems of nonlinear equations. Direct tensor methods for nonlinear equations have performed especially well on small, dense problems where the Jacobian matrix at the solution is singular or ill-conditioned, which may occur when approaching turning points, for example. This research extends direct tensor methods to large-scale problems by developing three tensor-Krylov methods that base each iteration upon a linear model augmented with a limited second-order term, which provides information lacking in a (nearly) singular Jacobian. The advantage of the new tensor-Krylov methods over existing large-scale tensor methods is their ability to solve the local tensor model to a specified accuracy, which produces a more accurate tensor step. The performance of these methods in comparison to Newton-GMRES and tensor-GMRES is explored on three Navier-Stokes fluid flow problems. The numerical results provide evidence that tensor-Krylov methods are generally more robust and more efficient than Newton-GMRES on some important and difficult problems. In addition, the results show that the new tensor-Krylov methods and tensor-GMRES each perform better in certain situations.
Robust video hashing via multilinear subspace projections.
Li, Mu; Monga, Vishal
2012-10-01
The goal of video hashing is to design hash functions that summarize videos by short fingerprints or hashes. While traditional applications of video hashing lie in database searches and content authentication, the emergence of websites such as YouTube and DailyMotion poses a challenging problem of anti-piracy video search. That is, hashes or fingerprints of an original video (provided to YouTube by the content owner) must be matched against those uploaded to YouTube by users to identify instances of "illegal" or undesirable uploads. Because the uploaded videos invariably differ from the original in their digital representation (owing to incidental or malicious distortions), robust video hashes are desired. We model videos as order-3 tensors and use multilinear subspace projections, such as a reduced rank parallel factor analysis (PARAFAC) to construct video hashes. We observe that, unlike most standard descriptors of video content, tensor-based subspace projections can offer excellent robustness while effectively capturing the spatio-temporal essence of the video for discriminability. We introduce randomization in the hash function by dividing the video into (secret key based) pseudo-randomly selected overlapping sub-cubes to prevent against intentional guessing and forgery. Detection theoretic analysis of the proposed hash-based video identification is presented, where we derive analytical approximations for error probabilities. Remarkably, these theoretic error estimates closely mimic empirically observed error probability for our hash algorithm. Furthermore, experimental receiver operating characteristic (ROC) curves reveal that the proposed tensor-based video hash exhibits enhanced robustness against both spatial and temporal video distortions over state-of-the-art video hashing techniques.
Ischemic preconditioning and clinical scenarios
Narayanan, Srinivasan V.; Dave, Kunjan R.; Perez-Pinzon, Miguel A.
2013-01-01
Purpose of review: Ischemic preconditioning (IPC) is gaining attention as a novel neuroprotective therapy and could provide an improved mechanistic understanding of tolerance to cerebral ischemia. The purpose of this article is to review the recent work in the field of IPC and its applications to clinical scenarios. Recent findings: The cellular signaling pathways that are activated following IPC are now better understood and have enabled investigators to identify several IPC mimetics. Most of these studies were performed in rodents, and efficacy of these mimetics remains to be evaluated in human patients. Additionally, remote ischemic preconditioning (RIPC) may have higher translational value than IPC. Repeated cycles of temporary ischemia in a remote organ can activate protective pathways in the target organ, including the heart and brain. Clinical trials are underway to test the efficacy of RIPC in protecting brain against subarachnoid hemorrhage. Summary: IPC, RIPC, and IPC mimetics have the potential to be therapeutic in various clinical scenarios. Further understanding of IPC-induced neuroprotection pathways and utilization of clinically relevant animal models are necessary to increase the translational potential of IPC in the near future. PMID:23197083
40 CFR 80.52 - Vehicle preconditioning.
Code of Federal Regulations, 2011 CFR
2011-07-01
... accordance with the “General vehicle handling requirements” per 40 CFR 86.132-96, up to and including the completion of the hot start exhaust test. (b) The preconditioning procedure prescribed at 40 CFR 86.132-96... 40 Protection of Environment 16 2011-07-01 2011-07-01 false Vehicle preconditioning. 80.52...
40 CFR 80.52 - Vehicle preconditioning.
Code of Federal Regulations, 2010 CFR
2010-07-01
... accordance with the “General vehicle handling requirements” per 40 CFR 86.132-96, up to and including the completion of the hot start exhaust test. (b) The preconditioning procedure prescribed at 40 CFR 86.132-96... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Vehicle preconditioning. 80.52...
40 CFR 1065.518 - Engine preconditioning.
Code of Federal Regulations, 2014 CFR
2014-07-01
..., such as with a diesel engine that relies on urea-based selective catalytic reduction. Note that § 1065... catalyst. Perform preconditioning as follows, noting that the specific cycles for preconditioning are the... cycle specified in 40 CFR 1039.505(b)(1), the second half of the cycle consists of modes three...
Ischemic preconditioning protects against ischemic brain injury
Ma, Xiao-meng; Liu, Mei; Liu, Ying-ying; Ma, Li-li; Jiang, Ying; Chen, Xiao-hong
2016-01-01
In this study, we hypothesized that an increase in integrin αvβ3 and its co-activator vascular endothelial growth factor play important neuroprotective roles in ischemic injury. We performed ischemic preconditioning with bilateral common carotid artery occlusion for 5 minutes in C57BL/6J mice. This was followed by ischemic injury with bilateral common carotid artery occlusion for 30 minutes. The time interval between ischemic preconditioning and lethal ischemia was 48 hours. Histopathological analysis showed that ischemic preconditioning substantially diminished damage to neurons in the hippocampus 7 days after ischemia. Evans Blue dye assay showed that ischemic preconditioning reduced damage to the blood-brain barrier 24 hours after ischemia. This demonstrates the neuroprotective effect of ischemic preconditioning. Western blot assay revealed a significant reduction in protein levels of integrin αvβ3, vascular endothelial growth factor and its receptor in mice given ischemic preconditioning compared with mice not given ischemic preconditioning 24 hours after ischemia. These findings suggest that the neuroprotective effect of ischemic preconditioning is associated with lower integrin αvβ3 and vascular endothelial growth factor levels in the brain following ischemia. PMID:27335560
Manifold learning-based subspace distance for machinery damage assessment
NASA Astrophysics Data System (ADS)
Sun, Chuang; Zhang, Zhousuo; He, Zhengjia; Shen, Zhongjie; Chen, Binqiang
2016-03-01
Damage assessment is essential for maintaining the safety and reliability of machinery components, and vibration analysis is an effective way to carry it out. In this paper, a damage index is designed by performing manifold distance analysis on vibration signals. To calculate the index, a vibration signal is collected first, and feature extraction is carried out to obtain statistical features that capture the signal characteristics comprehensively. Then, a manifold learning algorithm is utilized to decompose the feature matrix into a subspace, that is, a manifold subspace. The manifold learning algorithm seeks to preserve the local relationships of the feature matrix, which is more meaningful for damage assessment. Finally, the Grassmann distance between manifold subspaces is defined as a damage index. The Grassmann distance, reflecting manifold structure, is a suitable metric for measuring distance between subspaces on the manifold. The defined damage index is applied to damage assessment of a rotor and a bearing, and the results validate its effectiveness for damage assessment of machinery components.
Identifying Optimal Measurement Subspace for the Ensemble Kalman Filter
Zhou, Ning; Huang, Zhenyu; Welch, Greg; Zhang, J.
2012-05-24
To reduce the computational load of the ensemble Kalman filter while maintaining its efficacy, an optimization algorithm based on the generalized eigenvalue decomposition method is proposed for identifying the most informative measurement subspace. When the number of measurements is large, the proposed algorithm can be used to make an effective tradeoff between computational complexity and estimation accuracy. This algorithm also can be extended to other Kalman filters for measurement subspace selection.
Application of Subspace Clustering in DNA Sequence Analysis.
Wallace, Tim; Sekmen, Ali; Wang, Xiaofei
2015-10-01
Identification and clustering of orthologous genes plays an important role in developing evolutionary models such as validating convergent and divergent phylogeny and predicting functional proteins in newly sequenced species of unverified nucleotide protein mappings. Here, we introduce an application of subspace clustering as applied to orthologous gene sequences and discuss the initial results. The working hypothesis is based upon the concept that genetic changes between nucleotide sequences coding for proteins among selected species and groups may lie within a union of subspaces for clusters of the orthologous groups. Estimates for the subspace dimensions were computed for a small population sample. A series of experiments was performed to cluster randomly selected sequences. The experimental design allows for both false positives and false negatives, and estimates for the statistical significance are provided. The clustering results are consistent with the main hypothesis. A simple random mutation binary tree model is used to simulate speciation events that show the interdependence of the subspace rank versus time and mutation rates. The simple mutation model is found to be largely consistent with the observed subspace clustering singular value results. Our study indicates that the subspace clustering method may be applied in orthology analysis. PMID:26162018
Study on Subspace Control Based on Modal Analysis
NASA Astrophysics Data System (ADS)
Sonobe, Motomichi; Kondou, Takahiro; Sowa, Nobuyuki; Matsuzaki, Kenichiro
A new control technique called the subspace control method is developed in an effort to carry out finely tuned control easily and efficiently for complicated, large-scale mechanical systems. In the subspace control method, the minimum and optimum subspace suited to the control specification is extracted from the entire state space by applying the concepts of modal analysis, and feedback control based on the modal coordinates is performed in that subspace. The subspace control method takes advantage of the dynamic characteristics of the controlled object in the design of the control system. In addition, reducing the dimension of the controlled object based on its dynamic characteristics leads to simplification of the control system design, reduction of mechanical overload caused by the control, and a reduction in consumed electric power. In the present study, in order to clarify the fundamental concept, the subspace control method is formulated for swing-up and stabilizing control of an inverted pendulum system. The effectiveness of the proposed method is verified by numerical simulations and experiments.
Neurophysiological preconditions of syntax acquisition.
Friederici, Angela D; Oberecker, Regine; Brauer, Jens
2012-03-01
Although the neural network for language processing in the adult brain is well specified, the neural underpinning of language acquisition is still underdetermined. Here, we define the milestones of syntax acquisition and discuss the possible neurophysiological preconditions thereof. Early language learning seems to be based on the bilateral temporal cortices. Subsequent syntax acquisition apparently primarily recruits a neural network involving the left frontal cortex and the temporal cortex connected by a ventrally located fiber system. The late developing ability to comprehend syntactically complex sentences appears to require a neural network that connects Broca's area to the left posterior temporal cortex via a dorsally located fiber pathway. Thus, acquisition of syntax requires the maturation of fiber bundles connecting the classical language-relevant brain regions. PMID:21706312
Newton-Krylov-Schwarz algorithms for the 2D full potential equation
Cai, Xiao-Chuan; Gropp, W.D.; Keyes, D.E.
1996-12-31
We study parallel two-level overlapping Schwarz algorithms for solving nonlinear finite element problems, in particular, for the full potential equation of aerodynamics discretized in two dimensions with bilinear elements. The main algorithm, Newton-Krylov-Schwarz (NKS), employs an inexact finite-difference Newton method and a Krylov space iterative method, with a two-level overlapping Schwarz method as a preconditioner. We demonstrate that NKS, combined with a density upwinding continuation strategy for problems with weak shocks, can be made robust for this class of mixed elliptic-hyperbolic nonlinear partial differential equations, with proper specification of several parameters. We study upwinding parameters, inner convergence tolerance, coarse grid density, subdomain overlap, and the level of fill-in in the incomplete factorization, and report favorable choices for numerical convergence rate and overall execution time on a distributed-memory parallel computer.
Parallel Newton-Krylov-Schwarz algorithms for the transonic full potential equation
NASA Technical Reports Server (NTRS)
Cai, Xiao-Chuan; Gropp, William D.; Keyes, David E.; Melvin, Robin G.; Young, David P.
1996-01-01
We study parallel two-level overlapping Schwarz algorithms for solving nonlinear finite element problems, in particular, for the full potential equation of aerodynamics discretized in two dimensions with bilinear elements. The overall algorithm, Newton-Krylov-Schwarz (NKS), employs an inexact finite-difference Newton method and a Krylov space iterative method, with a two-level overlapping Schwarz method as a preconditioner. We demonstrate that NKS, combined with a density upwinding continuation strategy for problems with weak shocks, is robust and economical for this class of mixed elliptic-hyperbolic nonlinear partial differential equations, with proper specification of several parameters. We study upwinding parameters, inner convergence tolerance, coarse grid density, subdomain overlap, and the level of fill-in in the incomplete factorization, and report their effect on numerical convergence rate, overall execution time, and parallel efficiency on a distributed-memory parallel computer.
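The Schwarz-preconditioner ingredient of NKS can be sketched in a few lines. The toy below builds a one-level additive Schwarz preconditioner with two overlapping subdomains for a 1D Poisson system and uses it inside GMRES; the paper's method is two-level and applied to the nonlinear full potential equation, so this linear 1D setup is purely illustrative:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# 1D Poisson system (invented toy problem).
n = 100
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# Two overlapping subdomains with exact local solves.
overlap = 5
idx1 = np.arange(0, n // 2 + overlap)
idx2 = np.arange(n // 2 - overlap, n)
lu1 = spla.splu(A[idx1, :][:, idx1].tocsc())
lu2 = spla.splu(A[idx2, :][:, idx2].tocsc())

def additive_schwarz(r):
    # Sum of local subdomain solves; contributions add in the overlap.
    z = np.zeros_like(r)
    z[idx1] += lu1.solve(r[idx1])
    z[idx2] += lu2.solve(r[idx2])
    return z

M = spla.LinearOperator((n, n), matvec=additive_schwarz)
x, info = spla.gmres(A, b, M=M)
print(info)  # info == 0 signals convergence
```

In the full NKS algorithm this preconditioned Krylov solve sits inside an inexact Newton iteration, and a coarse-grid (second-level) correction is added so that iteration counts stay bounded as the number of subdomains grows.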
A Newton-Krylov Solver for Implicit Solution of Hydrodynamics in Core Collapse Supernovae
Reynolds, D R; Swesty, F D; Woodward, C S
2008-06-12
This paper describes an implicit approach and nonlinear solver for solution of radiation-hydrodynamic problems in the context of supernovae and proto-neutron star cooling. The robust approach applies Newton-Krylov methods and overcomes the difficulties of discontinuous limiters in the discretized equations and scaling of the equations over wide ranges of physical behavior. We discuss these difficulties, our approach for overcoming them, and numerical results demonstrating accuracy and efficiency of the method.
Subspace model identification of guided wave propagation in metallic plates
NASA Astrophysics Data System (ADS)
Kim, Junhee; Kim, Kiyoung; Sohn, Hoon
2014-03-01
In this study, a data-driven subspace system identification approach is proposed for modeling guided wave propagation in plate media. In the data-driven approach, the subspace system identification estimates a mathematical model fitted to experimentally measured data, and the identified black-box model captures the dynamics of wave propagation. To demonstrate the versatility of the black-box model, wave motions in various shapes of aluminum plates are investigated in the study. In addition, a waveform predictor and temperature change indicator are proposed as applications of the black-box models, to further promote the modeling approach to guided wave propagation.
Selective control of the symmetric Dicke subspace in trapped ions
Lopez, C. E.; Retamal, J. C.; Solano, E.
2007-09-15
We propose a method of manipulating selectively the symmetric Dicke subspace in the internal degrees of freedom of N trapped ions. We show that the direct access to ionic-motional subspaces, based on a suitable tuning of motion-dependent ac Stark shifts, induces a two-level dynamics involving previously selected ionic Dicke states. In this manner, it is possible to produce, sequentially and unitarily, ionic Dicke states with increasing excitation number. Moreover, we propose a probabilistic technique to produce directly any ionic Dicke state assuming suitable initial conditions.
Preconditioning for first-order spectral discretization
NASA Technical Reports Server (NTRS)
Streett, C. L.; Macaraeg, M. G.
1986-01-01
Efficient solution of the equations from spectral discretizations is essential if the high-order accuracy of these methods is to be realized. Direct solution of these equations is rarely feasible, thus iterative techniques are required. A preconditioning scheme for first-order Chebyshev collocation operators is proposed herein, in which the central finite difference mesh is finer than the collocation mesh. Details of the proper techniques for transferring information between the meshes are given here, and the scheme is analyzed by examination of the eigenvalue spectra of the preconditioned operators. The effect of artificial viscosity required in the inversion of the finite difference operator is examined. A second preconditioning scheme, involving a high-order upwind finite difference operator of the van Leer type is also analyzed to provide a comparison with the present scheme. Finally, the performance of the present scheme is verified by application to several test problems.
Minimal residual method stronger than polynomial preconditioning
Faber, V.; Joubert, W.; Knill, E.
1994-12-31
Two popular methods for solving symmetric and nonsymmetric systems of equations are the minimal residual method, implemented by algorithms such as GMRES, and polynomial preconditioning methods. In this study results are given on the convergence rates of these methods for various classes of matrices. It is shown that for some matrices, such as normal matrices, the convergence rates for GMRES and for the optimal polynomial preconditioning are the same, and for other matrices such as the upper triangular Toeplitz matrices, it is at least assured that if one method converges then the other must converge. On the other hand, it is shown that matrices exist for which restarted GMRES always converges but any polynomial preconditioning of corresponding degree makes no progress toward the solution for some initial error. The implications of these results for these and other iterative methods are discussed.
Decoherence free subspaces of a quantum Markov semigroup
Agredo, Julián; Fagnola, Franco; Rebolledo, Rolando
2014-11-15
We give a full characterisation of decoherence free subspaces of a given quantum Markov semigroup with generator in a generalised Lindblad form which is valid also for infinite-dimensional systems. Our results, extending those available in the literature concerning finite-dimensional systems, are illustrated by some examples.
Computational Complexity of Subspace Detectors and Matched Field Processing
Harris, D B
2010-12-01
Subspace detectors implement a correlation type calculation on a continuous (network or array) data stream [Harris, 2006]. The difference between subspace detectors and correlators is that the former projects the data in a sliding observation window onto a basis of template waveforms that may have a dimension (d) greater than one, and the latter projects the data onto a single waveform template. A standard correlation detector can be considered to be a degenerate (d=1) form of a subspace detector. Figure 1 below shows a block diagram for the standard formulation of a subspace detector. The detector consists of multiple multichannel correlators operating on a continuous data stream. The correlation operations are performed with FFTs in an overlap-add approach that allows the stream to be processed in uniform, consecutive, contiguous blocks. Figure 1 is slightly misleading for a calculation of computational complexity, as it is possible, when treating all channels with the same weighting (as shown in the figure), to perform the indicated summations in the multichannel correlators before the inverse FFTs and to get by with a single inverse FFT and overlap add calculation per multichannel correlator. In what follows, we make this simplification.
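The projection step described above can be sketched in a few lines. This is an illustrative NumPy fragment, not code from Harris's report: the function name and the energy-capture statistic are assumptions, and the report's FFT-based overlap-add implementation is far more efficient than this direct loop.

```python
import numpy as np

def subspace_detector(stream, U):
    """Slide a length-n window over `stream` and compute the fraction of
    window energy captured by the orthonormal template basis U (n x d).
    With d = 1 this degenerates to a standard correlation detector."""
    n, d = U.shape
    stats = np.empty(len(stream) - n + 1)
    for i in range(len(stats)):
        w = stream[i:i + n]
        c = U.T @ w                    # project window onto the template basis
        stats[i] = (c @ c) / (w @ w)   # captured-energy fraction in [0, 1]
    return stats
```

A window containing a signal that lies in the span of the templates scores near 1, while a pure-noise window scores near d/n on average.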
Krylov-space algorithms for time-dependent Hartree-Fock and density functional computations
Chernyak, Vladimir; Schulz, Michael F.; Mukamel, Shaul; Tretiak, Sergei; Tsiper, Eugene V.
2000-07-01
A fast, low memory cost, Krylov-space-based algorithm is proposed for the diagonalization of large Hamiltonian matrices required in time-dependent Hartree-Fock (TDHF) and adiabatic time-dependent density-functional theory (TDDFT) computations of electronic excitations. A deflection procedure based on the symplectic structure of the TDHF equations is introduced and its capability to find higher eigenmodes of the linearized TDHF operator for a given numerical accuracy is demonstrated. The algorithm may be immediately applied to the formally-identical adiabatic TDDFT equations. (c) 2000 American Institute of Physics.
Solving Nonlinear Solid Mechanics Problems with the Jacobian-Free Newton Krylov Method
J. D. Hales; S. R. Novascone; R. L. Williamson; D. R. Gaston; M. R. Tonks
2012-06-01
The solution of the equations governing solid mechanics is often obtained via Newton's method. This approach can be problematic if the determination, storage, or solution cost associated with the Jacobian is high. These challenges are magnified for multiphysics applications with many coupled variables. Jacobian-free Newton-Krylov (JFNK) methods avoid many of the difficulties associated with the Jacobian by using a finite difference approximation. BISON is a parallel, object-oriented, nonlinear solid mechanics and multiphysics application that leverages JFNK methods. We overview JFNK, outline the capabilities of BISON, and demonstrate the effectiveness of JFNK for solid mechanics and solid mechanics coupled to other PDEs using a series of demonstration problems.
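The central JFNK trick, replacing Jacobian-vector products with a finite difference, can be illustrated as follows. This is a minimal sketch with an assumed step-size heuristic, not BISON's implementation:

```python
import numpy as np

def jfnk_matvec(F, u, v, eps=1e-7):
    """Jacobian-free approximation of J(u) @ v via a forward difference:
    J(u) v ~ (F(u + h v) - F(u)) / h.  No Jacobian is formed or stored."""
    nv = np.linalg.norm(v)
    if nv == 0.0:
        return np.zeros_like(v)
    h = eps * max(1.0, np.linalg.norm(u)) / nv   # simple step-size heuristic
    return (F(u + h * v) - F(u)) / h
```

A Krylov solver such as GMRES needs only such matrix-vector products to solve the Newton correction equation J(u) delta = -F(u) at each step.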
Using generalized Cayley transformations within an inexact rational Krylov sequence method.
Lehoucq, R. B.; Meerbergen, K.; Mathematics and Computer Science; Utrecht Univ.
1999-01-01
The rational Krylov sequence (RKS) method is a generalization of Arnoldi's method. It constructs an orthogonal reduction of a matrix pencil into an upper Hessenberg pencil. The RKS method is useful when the matrix pencil may be efficiently factored. This article considers approximately solving the resulting linear systems with iterative methods. We show that a Cayley transformation leads to a more efficient and robust eigensolver than the usual shift-invert transformation when the linear systems are solved inexactly within the RKS method. A relationship with the recently introduced Jacobi-Davidson method is also established.
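In standard notation (not necessarily the article's), the two spectral transformations being compared are:

```latex
% Shift-invert and Cayley transformations of the pencil (A, B):
T_{\mathrm{SI}} = (A - \sigma B)^{-1} B,
\qquad
T_{\mathrm{C}} = (A - \sigma B)^{-1} (A - \mu B).
% An eigenvalue \lambda of A x = \lambda B x is mapped to
% \theta_{\mathrm{SI}} = 1/(\lambda - \sigma) and
% \theta_{\mathrm{C}} = (\lambda - \mu)/(\lambda - \sigma), respectively.
```

Both transformations share eigenvectors with the original pencil; the Cayley variant's bounded mapping of eigenvalues far from the shift is what makes it better behaved when the inner solves are inexact.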
Preconditioned techniques for large eigenvalue problems
NASA Astrophysics Data System (ADS)
Wu, Kesheng
1997-11-01
This research focuses on finding a large number of eigenvalues and eigenvectors of a sparse symmetric or Hermitian matrix, for example, finding 1000 eigenpairs of a 100,000 × 100,000 matrix. These eigenvalue problems are challenging because the matrix size is too large for traditional QR based algorithms and the number of desired eigenpairs is too large for most common sparse eigenvalue algorithms. In this thesis, we approach this problem in two steps. First, we identify a sound preconditioned eigenvalue procedure for computing multiple eigenpairs. Second, we improve the basic algorithm through new preconditioning schemes and spectrum transformations. Through careful analysis, we see that both the Arnoldi and Davidson methods have an appropriate structure for computing a large number of eigenpairs with preconditioning. We also study three variations of these two basic algorithms. Without preconditioning, these methods are mathematically equivalent but they differ in numerical stability and complexity. However, the Davidson method is much more successful when preconditioned. Despite its success, the preconditioning scheme in the Davidson method is seen as flawed because the preconditioner becomes ill-conditioned near convergence. After comparison with other methods, we find that the effectiveness of the Davidson method is due to its preconditioning step being an inexact Newton method. We proceed to explore other Newton methods for eigenvalue problems to develop preconditioning schemes without the same flaws. We found that the simplest and most effective preconditioner is to use the Conjugate Gradient method to approximately solve equations generated by the Newton methods. Also, a different strategy of enhancing the performance of the Davidson method is to alternate between the regular Davidson iteration and a polynomial method for eigenvalue problems. To use these polynomials, the user must decide which intervals of the spectrum the polynomial should suppress. We…
Subspaces indexing model on Grassmann manifold for image search.
Wang, Xinchao; Li, Zhu; Tao, Dacheng
2011-09-01
Conventional linear subspace learning methods like principal component analysis (PCA), linear discriminant analysis (LDA) derive subspaces from the whole data set. These approaches have limitations in the sense that they are linear while the data distribution we are trying to model is typically nonlinear. Moreover, these algorithms fail to incorporate local variations of the intrinsic sample distribution manifold. Therefore, these algorithms are ineffective when applied to large-scale datasets. Kernel versions of these approaches can alleviate the problem to a certain degree but face a serious computational challenge when the data set is large, where the computing involves Eigen/QP problems of size N × N. When N is large, kernel versions are not computationally practical. To tackle the aforementioned problems and improve recognition/searching performance, especially on large scale image datasets, we propose a novel local subspace indexing model for image search termed Subspace Indexing Model on Grassmann Manifold (SIM-GM). SIM-GM partitions the global space into local patches with a hierarchical structure; the global model is, therefore, approximated by piece-wise linear local subspace models. By further applying the Grassmann manifold distance, SIM-GM is able to organize localized models into a hierarchy of indexed structure, and allow fast query selection of the optimal ones for classification. Our proposed SIM-GM enjoys a number of merits: 1) it is able to deal with a large number of training samples efficiently; 2) it is a query-driven approach, i.e., it is able to return an effective local space model, so the recognition performance could be significantly improved; 3) it is a common framework, which can incorporate many learning algorithms. Theoretical analysis and extensive experimental results confirm the validity of this model.
[STRESS AND INFARCT LIMITING EFFECTS OF EARLY HYPOXIC PRECONDITIONING].
Lishmanov, Yu B; Maslov, L N; Sementsov, A S; Naryzhnaya, N V; Tsibulnikov, S Yu
2015-09-01
It was established that early hypoxic preconditioning is an adaptive state different from eustress and distress. Hypoxic preconditioning has cross effects, increasing the tolerance of the heart to ischemia-reperfusion and providing an antiulcerogenic effect during immobilization stress.
Globalized Newton-Krylov-Schwarz Algorithms and Software for Parallel Implicit CFD
NASA Technical Reports Server (NTRS)
Gropp, W. D.; Keyes, D. E.; McInnes, L. C.; Tidriri, M. D.
1998-01-01
Implicit solution methods are important in applications modeled by PDEs with disparate temporal and spatial scales. Because such applications require high resolution with reasonable turnaround, "routine" parallelization is essential. The pseudo-transient matrix-free Newton-Krylov-Schwarz (Psi-NKS) algorithmic framework is presented as an answer. We show that, for the classical problem of three-dimensional transonic Euler flow about an M6 wing, Psi-NKS can simultaneously deliver: globalized, asymptotically rapid convergence through adaptive pseudo-transient continuation and Newton's method; reasonable parallelizability for an implicit method through deferred synchronization and favorable communication-to-computation scaling in the Krylov linear solver; and high per-processor performance through attention to distributed memory and cache locality, especially through the Schwarz preconditioner. Two discouraging features of Psi-NKS methods are their sensitivity to the coding of the underlying PDE discretization and the large number of parameters that must be selected to govern convergence. We therefore distill several recommendations from our experience and from our reading of the literature on various algorithmic components of Psi-NKS, and we describe a freely available, MPI-based portable parallel software implementation of the solver employed here.
Krylov iterative methods and synthetic acceleration for transport in binary statistical media
Fichtl, Erin D; Warsa, James S; Prinja, Anil K
2008-01-01
In particle transport applications there are numerous physical constructs in which heterogeneities are randomly distributed. The quantity of interest in these problems is the ensemble average of the flux, or the average of the flux over all possible material 'realizations.' The Levermore-Pomraning closure assumes Markovian mixing statistics and allows a closed, coupled system of equations to be written for the ensemble averages of the flux in each material. Generally, binary statistical mixtures are considered in which there are two (homogeneous) materials and corresponding coupled equations. The solution process is iterative, but convergence may be slow as either or both materials approach the diffusion and/or atomic mix limits. A three-part acceleration scheme is devised to expedite convergence, particularly in the atomic mix-diffusion limit where computation is extremely slow. The iteration is first divided into a series of 'inner' material and source iterations to attenuate the diffusion and atomic mix error modes separately. Secondly, atomic mix synthetic acceleration is applied to the inner material iteration and S2 synthetic acceleration to the inner source iterations to offset the cost of doing several inner iterations per outer iteration. Finally, a Krylov iterative solver is wrapped around each iteration, inner and outer, to further expedite convergence. A spectral analysis is conducted and iteration counts and computing cost for the new two-step scheme are compared against those for a simple one-step iteration, to which a Krylov iterative method can also be applied.
Health and Nutrition: Preconditions for Educational Achievement.
ERIC Educational Resources Information Center
Negussie, Birgit
This paper discusses the importance of maternal and infant health for children's educational achievement. Education, health, and nutrition are so closely related that changes in one cause changes in the others. Improvement of maternal and preschooler health and nutrition is a precondition for improved educational achievement. Although parental…
Revealing Preconditions for Trustful Collaboration in CSCL
ERIC Educational Resources Information Center
Gerdes, Anne
2010-01-01
This paper analyses preconditions for trust in virtual learning environments. The concept of trust is discussed with reference to cases reporting trust in cyberspace and through a philosophical clarification holding that trust in the form of self-surrender is a common characteristic of all human co-existence. In virtual learning environments,…
NASA Astrophysics Data System (ADS)
Hayes, Charles E.; McClellan, James H.; Scott, Waymond R.; Kerr, Andrew J.
2016-05-01
This work introduces two advances in wide-band electromagnetic induction (EMI) processing: a novel adaptive matched filter (AMF) and matched subspace detection methods. Both advances make use of recent work with a subspace SVD approach to separating the signal, soil, and noise subspaces of the frequency measurements. The proposed AMF provides a direct approach to removing the EMI self-response while improving the signal to noise ratio of the data. Unlike previous EMI adaptive downtrack filters, this new filter will not erroneously optimize the EMI soil response instead of the EMI target response because these two responses are projected into separate frequency subspaces. The EMI detection methods in this work elaborate on how the signal and noise subspaces in the frequency measurements are ideal for creating the matched subspace detection (MSD) and constant false alarm rate matched subspace detection (CFAR) metrics developed by Scharf. The CFAR detection metric has been shown to be the uniformly most powerful invariant detector.
Condition number estimation of preconditioned matrices.
Kushida, Noriyuki
2015-01-01
The present paper introduces a condition number estimation method for preconditioned matrices. The newly developed method provides reasonable results, while the conventional method, which is based on the Lanczos connection, gives meaningless results. The Lanczos connection based method provides the condition numbers of coefficient matrices of systems of linear equations with information obtained through the preconditioned conjugate gradient method. Estimating the condition number of preconditioned matrices is sometimes important when describing the effectiveness of new preconditioners or selecting adequate preconditioners. Operating a preconditioner on a coefficient matrix is the simplest method of estimation. However, this is not possible for large-scale computing, especially if computation is performed on distributed memory parallel computers. This is because the preconditioned matrices become dense, even if the original matrices are sparse. Although the Lanczos connection method can be used to calculate the condition number of preconditioned matrices, it is not considered to be applicable to large-scale problems because of its weakness with respect to numerical errors. Therefore, we have developed a robust and parallelizable method based on Hager's method. The feasibility studies are carried out for the diagonal scaling preconditioner and the SSOR preconditioner with a diagonal matrix, a tridiagonal matrix and Pei's matrix. As a result, the Lanczos connection method contains around 10% error in the results even with a simple problem. On the other hand, the new method contains negligible errors. In addition, the newly developed method returns reasonable solutions when the Lanczos connection method fails with Pei's matrix, and matrices generated with the finite element method.
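The core of Hager's method is a 1-norm estimator that needs only products with the operator and its transpose, which is exactly what makes it usable when the preconditioned matrix is never formed. The following is an illustrative serial sketch with assumed names; the paper's parallelizable variant is more elaborate.

```python
import numpy as np

def hager_norm1(matvec, rmatvec, n, itmax=5):
    """Hager's estimator for the 1-norm of an n x n linear operator,
    given only products with A (matvec) and A^T (rmatvec).  Returns a
    lower bound that is usually sharp; dividing a similar estimate for
    A^{-1} into it yields a condition number estimate."""
    x = np.full(n, 1.0 / n)
    est = 0.0
    for _ in range(itmax):
        y = matvec(x)
        est = np.linalg.norm(y, 1)
        z = rmatvec(np.sign(y))          # subgradient direction
        j = np.argmax(np.abs(z))
        if np.abs(z[j]) <= z @ x:        # stationary: estimate converged
            break
        x = np.zeros(n)
        x[j] = 1.0                       # move to the best unit vector
    return est
```

For a preconditioned matrix M^{-1}A, `matvec` would compose the preconditioner solve with the matrix product, so the dense product never needs to be stored.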
Smooth local subspace projection for nonlinear noise reduction
Chelidze, David
2014-03-15
Many nonlinear or chaotic time series exhibit an innate broad spectrum, which makes noise reduction difficult. Local projective noise reduction is one of the most effective tools. It is based on proper orthogonal decomposition (POD) and works for both map-like and continuously sampled time series. However, POD only looks at geometrical or topological properties of data and does not take into account the temporal characteristics of time series. Here, we present a new smooth projective noise reduction method. It uses smooth orthogonal decomposition (SOD) of bundles of reconstructed short-time trajectory strands to identify smooth local subspaces. Restricting trajectories to these subspaces imposes temporal smoothness on the filtered time series. It is shown that SOD-based noise reduction significantly outperforms the POD-based method for continuously sampled noisy time series.
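For context, the POD-based baseline that the SOD method improves on can be sketched as a rank-k reconstruction of a trajectory matrix followed by re-averaging of the overlapping windows. This is an illustrative sketch with assumed names, not the author's code, and it omits the smoothness constraint that distinguishes SOD.

```python
import numpy as np

def pod_denoise(x, window, k):
    """Rudimentary POD (SVD) noise reduction: embed the series in a
    trajectory matrix of sliding windows, keep the top-k singular
    directions, and average the overlapping windows back into a series."""
    n = len(x) - window + 1
    X = np.column_stack([x[i:i + window] for i in range(n)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xk = (U[:, :k] * s[:k]) @ Vt[:k]       # rank-k approximation
    y = np.zeros(len(x))
    cnt = np.zeros(len(x))
    for j in range(n):                     # re-average overlapping windows
        y[j:j + window] += Xk[:, j]
        cnt[j:j + window] += 1
    return y / cnt
```

Because this projection uses only the geometry of the embedded data, it can distort temporal structure; imposing smoothness on the local subspaces is the point of the SOD refinement.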
Low complex subspace minimum variance beamformer for medical ultrasound imaging.
Deylami, Ali Mohades; Asl, Babak Mohammadzadeh
2016-03-01
Minimum variance (MV) beamformer enhances the resolution and contrast in the medical ultrasound imaging at the expense of higher computational complexity with respect to the non-adaptive delay-and-sum beamformer. The major complexity arises from the estimation of the L×L array covariance matrix using spatial averaging, which is required for more accurate estimation of the covariance matrix of correlated signals, and inversion of it, which is required for calculating the MV weight vector; these cost as much as O(L^2) and O(L^3), respectively. Reducing the number of array elements decreases the computational complexity but degrades the imaging resolution. In this paper, we propose a subspace MV beamformer which preserves the advantages of the MV beamformer with lower complexity. The subspace MV neglects some rows of the array covariance matrix instead of reducing the array size. If we keep η rows of the array covariance matrix, which leads to a thin non-square matrix, the weight vector of the subspace beamformer can be achieved in the same way as the MV obtains its weight vector, with lower complexity as high as O(η^2 L). More calculations would be saved because an η×L covariance matrix must be estimated instead of an L×L one. We simulated a wire targets phantom and a cyst phantom to evaluate the performance of the proposed beamformer. The results indicate that we can keep about 16 from 43 rows of the array covariance matrix, which reduces the order of complexity to 14% while the image resolution is still comparable to that of the standard MV beamformer. We also applied the proposed method to an experimental RF data and showed that the subspace MV beamformer performs like the standard MV with lower computational complexity.
A basis in an invariant subspace of analytic functions
Krivosheev, A S; Krivosheeva, O A
2013-12-31
The existence problem for a basis in a differentiation-invariant subspace of analytic functions defined in a bounded convex domain in the complex plane is investigated. Conditions are found for the solvability of a certain special interpolation problem in the space of entire functions of exponential type with conjugate diagrams lying in a fixed convex domain. These underlie sufficient conditions for the existence of a basis in the invariant subspace. This basis consists of linear combinations of eigenfunctions and associated functions of the differentiation operator, whose exponents are combined into relatively small clusters. Necessary conditions for the existence of a basis are also found. Under a natural constraint on the number of points in the groups, these coincide with the sufficient conditions. That is, a criterion is found under this constraint that a basis constructed from relatively small clusters exists in an invariant subspace of analytic functions in a bounded convex domain in the complex plane. Bibliography: 25 titles.
Subspace-based Inverse Uncertainty Quantification for Nuclear Data Assessment
Khuwaileh, B.A. Abdel-Khalik, H.S.
2015-01-15
Safety analysis and design optimization depend on the accurate prediction of various reactor attributes. Predictions can be enhanced by reducing the uncertainty associated with the attributes of interest. An inverse problem can be defined and solved to assess the sources of uncertainty, and experimental effort can be subsequently directed to further improve the uncertainty associated with these sources. In this work a subspace-based algorithm for inverse sensitivity/uncertainty quantification (IS/UQ) has been developed to enable analysts to account for all sources of nuclear data uncertainties in support of target accuracy assessment-type analysis. An approximate analytical solution of the optimization problem is used to guide the search for the dominant uncertainty subspace. By limiting the search to a subspace, the degrees of freedom available for the optimization search are significantly reduced. A quarter PWR fuel assembly is modeled and the accuracy of the multiplication factor and the fission reaction rate are used as reactor attributes whose uncertainties are to be reduced. Numerical experiments are used to demonstrate the computational efficiency of the proposed algorithm. Our ongoing work is focusing on extending the proposed algorithm to account for various forms of feedback, e.g., thermal-hydraulics and depletion effects.
Zhu, Xiaofeng; Suk, Heung-Il; Lee, Seong-Whan; Shen, Dinggang
2016-03-01
The high feature-dimension and low sample-size problem is one of the major challenges in the study of computer-aided Alzheimer's disease (AD) diagnosis. To circumvent this problem, feature selection and subspace learning have been playing core roles in the literature. Generally, feature selection methods are preferable in clinical applications due to their ease for interpretation, but subspace learning methods can usually achieve more promising results. In this paper, we combine two different methodological approaches to discriminative feature selection in a unified framework. Specifically, we utilize two subspace learning methods, namely, linear discriminant analysis and locality preserving projection, which have proven their effectiveness in a variety of fields, to select class-discriminative and noise-resistant features. Unlike previous methods in neuroimaging studies that mostly focused on a binary classification, the proposed feature selection method is further applicable for multiclass classification in AD diagnosis. Extensive experiments on the Alzheimer's disease neuroimaging initiative dataset showed the effectiveness of the proposed method over other state-of-the-art methods. PMID:26276982
LESS: a model-based classifier for sparse subspaces.
Veenman, Cor J; Tax, David M J
2005-09-01
In this paper, we specifically focus on high-dimensional data sets for which the number of dimensions is an order of magnitude higher than the number of objects. From a classifier design standpoint, such small sample size problems have some interesting challenges. The first challenge is to find, from all hyperplanes that separate the classes, a separating hyperplane which generalizes well for future data. A second important task is to determine which features are required to distinguish the classes. To attack these problems, we propose the LESS (Lowest Error in a Sparse Subspace) classifier that efficiently finds linear discriminants in a sparse subspace. In contrast with most classifiers for high-dimensional data sets, the LESS classifier incorporates a (simple) data model. Further, by means of a regularization parameter, the classifier establishes a suitable trade-off between subspace sparseness and classification accuracy. In the experiments, we show how LESS performs on several high-dimensional data sets and compare its performance to related state-of-the-art classifiers like, among others, linear ridge regression with the LASSO and the Support Vector Machine. It turns out that LESS performs competitively while using fewer dimensions.
Physiology and pharmacology of myocardial preconditioning.
Raphael, Jacob
2010-03-01
Perioperative myocardial ischemia and infarction are not only major sources of morbidity and mortality in patients undergoing surgery but also important causes of prolonged hospital stay and resource utilization. Ischemic and pharmacological preconditioning and postconditioning have been known for more than two decades to provide protection against myocardial ischemia and reperfusion and limit myocardial infarct size in many experimental animal models, as well as in clinical studies (1-3). This paper will review the physiology and pharmacology of ischemic and drug-induced preconditioning and postconditioning of the myocardium with special emphasis on the mechanisms by which volatile anesthetics provide myocardial protection. Insights gained from animal and clinical studies will be presented and reviewed and recommendations for the use of perioperative anesthetics and medications will be given.
SKRYN: A fast semismooth-Krylov-Newton method for controlling Ising spin systems
NASA Astrophysics Data System (ADS)
Ciaramella, G.; Borzì, A.
2015-05-01
The modeling and control of Ising spin systems is of fundamental importance in NMR spectroscopy applications. In this paper, two computer packages, ReHaG and SKRYN, are presented. Their purpose is to set up and solve quantum optimal control problems governed by the Liouville master equation modeling Ising spin-1/2 systems with pointwise control constraints. In particular, the MATLAB package ReHaG allows one to compute a real matrix representation of the master equation. The MATLAB package SKRYN implements a new strategy resulting in a globalized semismooth matrix-free Krylov-Newton scheme. To discretize the real representation of the Liouville master equation, a norm-preserving modified Crank-Nicolson scheme is used. Results of numerical experiments demonstrate that the SKRYN code is able to provide fast and accurate solutions to the Ising spin quantum optimization problem.
Vecharynski, Eugene; Yang, Chao; Pask, John E.
2015-02-25
Here, we present an iterative algorithm for computing an invariant subspace associated with the algebraically smallest eigenvalues of a large sparse or structured Hermitian matrix A. We are interested in the case in which the dimension of the invariant subspace is large (e.g., over several hundreds or thousands) even though it may still be small relative to the dimension of A. These problems arise from, for example, density functional theory (DFT) based electronic structure calculations for complex materials. The key feature of our algorithm is that it performs fewer Rayleigh-Ritz calculations compared to existing algorithms such as the locally optimal block preconditioned conjugate gradient or the Davidson algorithm. It is a block algorithm, and hence can take advantage of efficient BLAS3 operations and be implemented with multiple levels of concurrency. We discuss a number of practical issues that must be addressed in order to implement the algorithm efficiently on a high performance computer.
M-step preconditioned conjugate gradient methods
NASA Technical Reports Server (NTRS)
Adams, L.
1983-01-01
Preconditioned conjugate gradient methods for solving sparse symmetric positive definite systems of linear equations are described. Necessary and sufficient conditions are given for when these preconditioners can be used and an analysis of their effectiveness is given. Efficient computer implementations of these methods are discussed and results on the CYBER 203 and the Finite Element Machine under construction at NASA Langley Research Center are included.
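For reference, the generic preconditioned CG iteration that m-step preconditioners plug into can be sketched as follows. This is an illustrative sketch, not the paper's implementation; `M_solve` stands in for the preconditioner application (e.g., m steps of a stationary iteration in an m-step scheme).

```python
import numpy as np

def pcg(A, b, M_solve, tol=1e-8, maxiter=200):
    """Textbook preconditioned conjugate gradients for SPD A.
    M_solve(r) applies the preconditioner inverse to a residual."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_solve(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = M_solve(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p   # M-orthogonal search direction update
        rz = rz_new
    return x
```

With `M_solve = lambda r: r` this reduces to plain CG; a better preconditioner trades work per iteration against iteration count.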
The macrophage mediates the renoprotective effects of endotoxin preconditioning.
Hato, Takashi; Winfree, Seth; Kalakeche, Rabih; Dube, Shataakshi; Kumar, Rakesh; Yoshimoto, Momoko; Plotkin, Zoya; Dagher, Pierre C
2015-06-01
Preconditioning is a preventative approach, whereby minimized insults generate protection against subsequent larger exposures to the same or even different insults. In immune cells, endotoxin preconditioning downregulates the inflammatory response and yet, preserves the ability to contain infections. However, the protective mechanisms of preconditioning at the tissue level in organs such as the kidney remain poorly understood. Here, we show that endotoxin preconditioning confers renal epithelial protection in various models of sepsis in vivo. We also tested the hypothesis that this protection results from direct interactions between the preconditioning dose of endotoxin and the renal tubules. This hypothesis is on the basis of our previous findings that endotoxin toxicity to nonpreconditioned renal tubules was direct and independent of immune cells. Notably, we found that tubular protection after preconditioning has an absolute requirement for CD14-expressing myeloid cells and particularly, macrophages. Additionally, an intact macrophage CD14-TRIF signaling pathway was essential for tubular protection. The preconditioned state was characterized by increased macrophage number and trafficking within the kidney as well as clustering of macrophages around S1 proximal tubules. These macrophages exhibited increased M2 polarization and upregulation of redox and iron-handling molecules. In renal tubules, preconditioning prevented peroxisomal damage and abolished oxidative stress and injury to S2 and S3 tubules. In summary, these data suggest that macrophages are essential mediators of endotoxin preconditioning and required for renal tissue protection. Preconditioning is, therefore, an attractive model to investigate novel protective pathways for the prevention and treatment of sepsis.
On polynomial preconditioning for indefinite Hermitian matrices
NASA Technical Reports Server (NTRS)
Freund, Roland W.
1989-01-01
The minimal residual method, combined with polynomial preconditioning, is studied for solving large linear systems (Ax = b) with indefinite Hermitian coefficient matrices (A). The standard approach for choosing the polynomial preconditioners leads to preconditioned systems which are positive definite. Here, a different strategy is studied which leaves the preconditioned coefficient matrix indefinite. More precisely, the polynomial preconditioner is designed to cluster the positive, resp. negative eigenvalues of A around 1, resp. around some negative constant. In particular, it is shown that such indefinite polynomial preconditioners can be obtained as the optimal solutions of a certain two-parameter family of Chebyshev approximation problems. Some basic results are established for these approximation problems and a Remez-type algorithm is sketched for their numerical solution. The problem of selecting the parameters such that the resulting indefinite polynomial preconditioner speeds up the convergence of the minimal residual method optimally is also addressed. An approach based on the concept of asymptotic convergence factors is proposed. Finally, some numerical examples of indefinite polynomial preconditioners are given.
Discriminative sparse subspace learning and its application to unsupervised feature selection.
Zhou, Nan; Cheng, Hong; Pedrycz, Witold; Zhang, Yong; Liu, Huaping
2016-03-01
In order to efficiently use the intrinsic data information, in this study a Discriminative Sparse Subspace Learning (DSSL) model has been investigated for unsupervised feature selection. First, the feature selection problem is formulated as a subspace learning problem. In order to efficiently learn the discriminative subspace, we investigate the discriminative information in the subspace learning process. Second, a two-step TDSSL algorithm and a joint modeling JDSSL algorithm are developed to incorporate the clusters' assignment as the discriminative information. Then, a convergence analysis of these two algorithms is provided. A kernelized discriminative sparse subspace learning (KDSSL) method is proposed to handle the nonlinear subspace learning problem. Finally, extensive experiments are conducted on real-world datasets to show the superiority of the proposed approaches over several state-of-the-art approaches. PMID:26803552
Video background tracking and foreground extraction via L1-subspace updates
NASA Astrophysics Data System (ADS)
Pierantozzi, Michele; Liu, Ying; Pados, Dimitris A.; Colonnese, Stefania
2016-05-01
We consider the problem of online foreground extraction from compressed-sensed (CS) surveillance videos. A technically novel approach is suggested and developed by which the background scene is captured by an L1-norm subspace sequence directly in the CS domain. In contrast to conventional L2-norm subspaces, L1-norm subspaces are seen to offer significant robustness to outliers, disturbances, and rank selection. Subtraction of the L1-subspace-tracked background then leads to effective foreground/moving object extraction. Experimental studies included in this paper illustrate and support the theoretical developments.
Updating Hawaii Seismicity Catalogs with Systematic Relocations and Subspace Detectors
NASA Astrophysics Data System (ADS)
Okubo, P.; Benz, H.; Matoza, R. S.; Thelen, W. A.
2015-12-01
We continue the systematic relocation of seismicity recorded in Hawai`i by the United States Geological Survey's (USGS) Hawaiian Volcano Observatory (HVO), with interests in adding to the products derived from the relocated seismicity catalogs published by Matoza et al., (2013, 2014). Another goal of this effort is updating the systematically relocated HVO catalog since 2009, when earthquake cataloging at HVO was migrated to the USGS Advanced National Seismic System Quake Management Software (AQMS) systems. To complement the relocation analyses of the catalogs generated from traditional STA/LTA event-triggered and analyst-reviewed approaches, we are also experimenting with subspace detection of events at Kilauea as a means to augment AQMS procedures for cataloging seismicity to lower magnitudes and during episodes of elevated volcanic activity. Our earlier catalog relocations have demonstrated the ability to define correlated or repeating families of earthquakes and provide more detailed definition of seismogenic structures, as well as the capability for improved automatic identification of diverse volcanic seismic sources. Subspace detectors have been successfully applied to cataloging seismicity in situations of low seismic signal-to-noise and have significantly increased catalog sensitivity to lower magnitude thresholds. We anticipate similar improvements using event subspace detections and cataloging of volcanic seismicity that include improved discrimination among not only evolving earthquake sequences but also diverse volcanic seismic source processes. Matoza et al., 2013, Systematic relocation of seismicity on Hawai`i Island from 1992 to 2009 using waveform cross correlation and cluster analysis, J. Geophys. Res., 118, 2275-2288, doi:10.1002/jgrb.580189 Matoza et al., 2014, High-precision relocation of long-period events beneath the summit region of Kīlauea Volcano, Hawai`i, from 1986 to 2009, Geophys. Res. Lett., 41, 3413-3421, doi:10.1002/2014GL059819
Accurate Excited State Geometries within Reduced Subspace TDDFT/TDA.
Robinson, David
2014-12-01
A method for the calculation of TDDFT/TDA excited state geometries within a reduced subspace of Kohn-Sham orbitals has been implemented and tested. Accurate geometries are found for all of the fluorophore-like molecules tested, with at most all valence occupied orbitals and half of the virtual orbitals included but for some molecules even fewer orbitals. Efficiency gains of between 15 and 30% are found for essentially the same level of accuracy as a standard TDDFT/TDA excited state geometry optimization calculation. PMID:26583218
Matrix preconditioning: a robust operation for optical linear algebra processors.
Ghosh, A; Paparao, P
1987-07-15
Analog electrooptical processors are best suited for applications demanding high computational throughput with tolerance for inaccuracies. Matrix preconditioning is one such application. Matrix preconditioning is a preprocessing step for reducing the condition number of a matrix and is used extensively with gradient algorithms for increasing the rate of convergence and improving the accuracy of the solution. In this paper, we describe a simple parallel algorithm for matrix preconditioning, which can be implemented efficiently on a pipelined optical linear algebra processor. From the results of our numerical experiments we show that the efficacy of the preconditioning algorithm is affected very little by the errors of the optical system.
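A compact numerical sketch of what matrix preconditioning accomplishes (plain NumPy, not the optical implementation described above); the Jacobi diagonal scaling used here is an assumed stand-in for the paper's parallel algorithm:

```python
import numpy as np

def jacobi_precondition(A):
    """Symmetric diagonal (Jacobi) scaling: D^{-1/2} A D^{-1/2} with D = diag(A)."""
    d = np.sqrt(np.diag(A))
    return A / np.outer(d, d)

# SPD test matrix whose poor conditioning comes from badly scaled rows/columns.
rng = np.random.default_rng(0)
C = rng.standard_normal((50, 50))
M = C @ C.T / 50 + np.eye(50)          # well-conditioned SPD core
D = np.diag(np.logspace(0, 3, 50))     # row/column scales spread over 3 orders of magnitude
A = D @ M @ D

cond_before = np.linalg.cond(A)        # dominated by the bad scaling
cond_after = np.linalg.cond(jacobi_precondition(A))
```

Since the iteration count of gradient-type solvers grows with the condition number, a cheap preprocessing step like this can pay for itself many times over.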
Hypoxic Preconditioning Alleviates Ethanol Neurotoxicity: the Involvement of Autophagy
Wang, Haiping; Bower, Kimberly A.; Frank, Jacqueline A.; Xu, Mei; Luo, Jia
2013-01-01
Ethanol is a neuroteratogen and neurodegeneration is the most devastating consequence of developmental exposure to ethanol. A sublethal preconditioning has been proposed as a neuroprotective strategy against several central nervous system (CNS) neurodegenerative diseases. We have recently demonstrated that autophagy is a protective response to alleviate ethanol toxicity. A modest hypoxic preconditioning (1% oxygen) did not cause neurotoxicity but induced autophagy (Tzeng et al., 2010). We therefore hypothesize that the modest hypoxic preconditioning may offer a protection against ethanol-induced neurotoxicity. We showed here that the modest hypoxic preconditioning (1% oxygen) for 8 hours significantly alleviated ethanol-induced death of SH-SY5Y neuroblastoma cells. Under the normoxia condition, cell viability in ethanol-exposed cultures (316 mg/dl for 48 hrs) was 49 ± 6% of untreated controls; however, with hypoxic preconditioning, cell viability in the ethanol-exposed group increased to 78 ± 7% of the controls (p < 0.05; n = 3). Bafilomycin A1, an inhibitor of autophagosome and lysosome fusion, blocked hypoxic preconditioning-mediated protection. Similarly, inhibition of autophagic initiation by wortmannin also eliminated hypoxic preconditioning-mediated protection. In contrast, activation of autophagy by rapamycin further enhanced neuroprotection caused by hypoxic preconditioning. Taken together, the results confirm that autophagy is a protective response against ethanol neurotoxicity and the modest hypoxic preconditioning can offer neuroprotection by activating autophagic pathways. PMID:23568540
Approximate polynomial preconditioning applied to biharmonic equations on vector supercomputers
NASA Technical Reports Server (NTRS)
Wong, Yau Shu; Jiang, Hong
1987-01-01
Applying a finite difference approximation to a biharmonic equation results in a very ill-conditioned system of equations. This paper examines the conjugate gradient method used in conjunction with the generalized and approximate polynomial preconditionings for solving such linear systems. An approximate polynomial preconditioning is introduced, and is shown to be more efficient than the generalized polynomial preconditionings. This new technique provides a simple but effective preconditioning polynomial, which is based on another coefficient matrix rather than the original matrix operator as commonly used.
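A minimal sketch of one classical polynomial preconditioning, a truncated Neumann series (not necessarily the generalized or approximate polynomials of the paper), applied to CG on an assumed 1-D model problem rather than the biharmonic system:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg, LinearOperator

# 1-D Laplacian, scaled so all eigenvalues lie in (0, 2) as the Neumann series requires.
n = 200
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)).tocsr() / 2.0

def neumann_preconditioner(m):
    """M^{-1} x = p_m(A) x with p_m(A) = sum_{k=0}^{m} (I - A)^k, approximating A^{-1}."""
    def mv(x):
        y = x.copy()
        for _ in range(m):          # Horner-style recursion: y <- x + (I - A) y
            y = x + y - A @ y
        return y
    return LinearOperator(A.shape, matvec=mv)

b = np.ones(n)

def cg_iterations(M=None):
    count = 0
    def cb(_):
        nonlocal count
        count += 1
    _, info = cg(A, b, M=M, maxiter=2000, callback=cb)
    assert info == 0
    return count

plain_its = cg_iterations()
poly_its = cg_iterations(neumann_preconditioner(8))
```

Each preconditioned iteration costs m extra matrix-vector products (degree m = 8 here), so the win depends on the saved iterations outweighing the costlier matvec.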
Domain-decomposed preconditionings for transport operators
NASA Technical Reports Server (NTRS)
Chan, Tony F.; Gropp, William D.; Keyes, David E.
1991-01-01
The performance of five different interface preconditionings for domain-decomposed convection-diffusion problems was tested, including a novel one known as the spectral probe, while varying mesh parameters, Reynolds number, ratio of subdomain diffusion coefficients, and domain aspect ratio. The preconditioners are representative of the range of practically computable possibilities that have appeared in the domain decomposition literature for the treatment of nonoverlapping subdomains. It is shown through a large number of numerical examples that no single preconditioner can be considered uniformly superior or uniformly inferior to the rest, but that knowledge of particulars, including the shape and strength of the convection, is important in selecting among them in a given problem.
H(curl) Auxiliary Mesh Preconditioning
Kolev, T V; Pasciak, J E; Vassilevski, P S
2006-08-31
This paper analyzes a two-level preconditioning scheme for H(curl) bilinear forms. The scheme utilizes an auxiliary problem on a related mesh that is more amenable for constructing optimal order multigrid methods. More specifically, we analyze the case when the auxiliary mesh only approximately covers the original domain. The latter assumption is important since it allows for easy construction of nested multilevel spaces on regular auxiliary meshes. Numerical experiments in both two and three space dimensions illustrate the optimal performance of the method.
Extremely Intense Magnetospheric Substorms : External Triggering? Preconditioning?
NASA Astrophysics Data System (ADS)
Tsurutani, Bruce; Echer, Ezequiel; Hajra, Rajkumar
2016-07-01
We study particularly intense substorms using a variety of near-Earth spacecraft data and ground observations. We will relate the solar cycle dependences of the events, determine whether the supersubstorms are externally or internally triggered, and examine their relationship to other factors such as magnetospheric preconditioning. If time permits, we will explore the details of the events and whether they are similar to regular (Akasofu, 1964) substorms or not. These intense substorms are an important feature of space weather since they may be responsible for power outages.
Towards bulk based preconditioning for quantum dot computations
Dongarra, Jack; Langou, Julien; Tomov, Stanimire; Channing, Andrew; Marques, Osni; Vomel, Christof; Wang, Lin-Wang
2006-05-25
This article describes how to accelerate the convergence of Preconditioned Conjugate Gradient (PCG) type eigensolvers for the computation of several states around the band gap of colloidal quantum dots. Our new approach uses the Hamiltonian of the bulk material constituting the quantum dot to design an efficient preconditioner for the folded spectrum PCG method. The technique described shows promising results when applied to CdSe quantum dot model problems. We show a decrease in the number of iteration steps by at least a factor of 4 compared to the previously used diagonal preconditioner.
Parallel preconditioning techniques for sparse CG solvers
Basermann, A.; Reichel, B.; Schelthoff, C.
1996-12-31
Conjugate gradient (CG) methods to solve sparse systems of linear equations play an important role in numerical methods for solving discretized partial differential equations. The large size and the condition of many technical or physical applications in this area result in the need for efficient parallelization and preconditioning techniques of the CG method. In particular for very ill-conditioned matrices, sophisticated preconditioners are necessary to obtain both acceptable convergence and accuracy of CG. Here, we investigate variants of polynomial and incomplete Cholesky preconditioners that markedly reduce the iteration counts of simply diagonally scaled CG and are shown to be well suited for massively parallel machines.
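SciPy ships no incomplete Cholesky, so the sketch below substitutes its incomplete LU (`spilu`) in the same preconditioning role, on an assumed 2-D Poisson system rather than the applications of the paper; for an SPD matrix the two factorizations play an equivalent part:

```python
import numpy as np
from scipy.sparse import diags, identity, kron
from scipy.sparse.linalg import cg, spilu, LinearOperator

# 2-D Poisson matrix (SPD, ill-conditioned) on a 30x30 grid via a Kronecker sum.
n = 30
T = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
A = (kron(identity(n), T) + kron(T, identity(n))).tocsc()
b = np.ones(A.shape[0])

def cg_iterations(M=None):
    count = 0
    def cb(_):
        nonlocal count
        count += 1
    _, info = cg(A, b, M=M, maxiter=5000, callback=cb)
    assert info == 0
    return count

# Incomplete LU stands in for incomplete Cholesky; its solve() applies M^{-1}.
ilu = spilu(A, drop_tol=1e-3, fill_factor=10)
M_ilu = LinearOperator(A.shape, matvec=ilu.solve)

plain_its = cg_iterations()
ilu_its = cg_iterations(M_ilu)
```

The trade-off the abstract alludes to: incomplete factorizations cut iterations sharply but their triangular solves are harder to parallelize than polynomial or diagonal preconditioners.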
LogDet Rank Minimization with Application to Subspace Clustering
Kang, Zhao; Peng, Chong; Cheng, Jie; Cheng, Qiang
2015-01-01
Low-rank matrix is desired in many machine learning and computer vision problems. Most of the recent studies use the nuclear norm as a convex surrogate of the rank operator. However, all singular values are simply added together by the nuclear norm, and thus the rank may not be well approximated in practical problems. In this paper, we propose using a log-determinant (LogDet) function as a smooth and closer, though nonconvex, approximation to rank for obtaining a low-rank representation in subspace clustering. Augmented Lagrange multipliers strategy is applied to iteratively optimize the LogDet-based nonconvex objective function on potentially large-scale data. By making use of the angular information of principal directions of the resultant low-rank representation, an affinity graph matrix is constructed for spectral clustering. Experimental results on motion segmentation and face clustering data demonstrate that the proposed method often outperforms state-of-the-art subspace clustering algorithms. PMID:26229527
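A toy comparison of the two rank surrogates; the paper's LogDet formulation has its own parameterization, so the form assumed here, log det(I + X^T X) = sum_i log(1 + sigma_i^2), is only a representative choice:

```python
import numpy as np

rng = np.random.default_rng(1)
# Exactly rank-2 matrix: one dominant direction and one moderate one.
X = 50.0 * np.outer(rng.standard_normal(40), rng.standard_normal(30)) \
    + np.outer(rng.standard_normal(40), rng.standard_normal(30))
s = np.linalg.svd(X, compute_uv=False)

nuclear = s.sum()                 # convex surrogate: adds all singular values
logdet = np.log1p(s**2).sum()     # smooth nonconvex surrogate: log det(I + X^T X)

# Share of each penalty contributed by the single dominant singular value.
share_nuclear = s[0] / nuclear
share_logdet = np.log1p(s[0]**2) / logdet
```

The nuclear norm is dominated by the largest singular value, while the logarithm compresses large magnitudes, so the LogDet surrogate weights the two nonzero directions far more evenly and tracks the rank (2 here) rather than the scale.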
Concurrent subspace width optimization method for RBF neural network modeling.
Yao, Wen; Chen, Xiaoqian; Zhao, Yong; van Tooren, Michel
2012-02-01
Radial basis function neural networks (RBFNNs) are widely used in nonlinear function approximation. One of the challenges in RBFNN modeling is determining how to effectively optimize width parameters to improve approximation accuracy. To solve this problem, a width optimization method, concurrent subspace width optimization (CSWO), is proposed based on a decomposition and coordination strategy. This method decomposes the large-scale width optimization problem into several subspace optimization (SSO) problems, each of which has a single optimization variable and smaller training and validation data sets so as to greatly simplify optimization complexity. These SSOs can be solved concurrently, thus computational time can be effectively reduced. With top-level system coordination, the optimization of SSOs can converge to a consistent optimum, which is equivalent to the optimum of the original width optimization problem. The proposed method is tested with four mathematical examples and one practical engineering approximation problem. The results demonstrate the efficiency and robustness of CSWO in optimizing width parameters over the traditional width optimization methods.
Conformal Laplace superintegrable systems in 2D: polynomial invariant subspaces
NASA Astrophysics Data System (ADS)
Escobar-Ruiz, M. A.; Miller, Willard, Jr.
2016-07-01
2nd-order conformal superintegrable systems in n dimensions are Laplace equations on a manifold with an added scalar potential and 2n-1 independent 2nd order conformal symmetry operators. They encode all the information about Helmholtz (eigenvalue) superintegrable systems in an efficient manner: there is a 1-1 correspondence between Laplace superintegrable systems and Stäckel equivalence classes of Helmholtz superintegrable systems. In this paper we focus on superintegrable systems in two-dimensions, n = 2, where there are 44 Helmholtz systems, corresponding to 12 Laplace systems. For each Laplace equation we determine the possible two-variate polynomial subspaces that are invariant under the action of the Laplace operator, thus leading to families of polynomial eigenfunctions. We also study the behavior of the polynomial invariant subspaces under a Stäckel transform. The principal new results are the details of the polynomial variables and the conditions on parameters of the potential corresponding to polynomial solutions. The hidden gl(3)-algebraic structure is exhibited for the exact and quasi-exact systems. For physically meaningful solutions, the orthogonality properties and normalizability of the polynomials are presented as well. Finally, for all Helmholtz superintegrable solvable systems we give a unified construction of one-dimensional (1D) and two-dimensional (2D) quasi-exactly solvable potentials possessing polynomial solutions, and a construction of new 2D PT-symmetric potentials is established.
Preconditioning, postconditioning and their application to clinical cardiology.
Kloner, Robert A; Rezkalla, Shereif H
2006-05-01
Ischemic preconditioning is a well-established phenomenon first described in experimental preparations in which brief episodes of ischemia/reperfusion applied prior to a longer coronary artery occlusion reduce myocardial infarct size. There are ample correlates of ischemic preconditioning in the clinical realm. Preconditioning mimetic agents that stimulate the biochemical pathways of ischemic preconditioning and protect the heart without inducing ischemia have been examined in numerous experimental studies. However, despite the effectiveness of ischemic preconditioning and preconditioning mimetics for protecting ischemic myocardium, there are no preconditioning-based therapies that are routinely used in clinical medicine at the current time. Part of the problem is the need to administer therapy prior to the known ischemic event. Other issues are that percutaneous coronary intervention technology has advanced so far (with the development of stents and drug-eluting stents) that ischemic preconditioning or preconditioning mimetics have not been needed in most interventional cases. Recent clinical trials such as AMISTAD I and II (Acute Myocardial Infarction STudy of ADenosine) suggest that some preconditioning mimetics may reduce myocardial infarct size when given along with reperfusion or, as in the IONA trial, have benefit on clinical events when administered chronically in patients with known coronary artery disease. It is possible that some of the benefit described for adenosine in the AMISTAD 1 and 2 trials represents a manifestation of the recently described postconditioning phenomenon. It is probable that postconditioning--in which reperfusion is interrupted with brief coronary occlusions and reperfusion sequences--is more likely than preconditioning to be feasible as a clinical application to patients undergoing percutaneous coronary intervention for acute myocardial infarction. PMID:16516180
ERIC Educational Resources Information Center
Wawro, Megan; Sweeney, George F.; Rabin, Jeffrey M.
2011-01-01
This paper reports on a study investigating students' ways of conceptualizing key ideas in linear algebra, with the particular results presented here focusing on student interactions with the notion of subspace. In interviews conducted with eight undergraduates, we found students' initial descriptions of subspace often varied substantially from…
NASA Astrophysics Data System (ADS)
Zhang, Xing; Wen, Gongjian
2015-10-01
Anomaly detection (AD) becomes increasingly important in hyperspectral imagery analysis with many practical applications. The local orthogonal subspace projection (LOSP) detector is a popular anomaly detector which exploits local endmembers/eigenvectors around the pixel under test (PUT) to construct a background subspace. However, this subspace takes advantage of the spectral information only, neglecting the spatial correlation of the background clutter, which leaves the anomaly detection result sensitive to the accuracy of the estimated subspace. In this paper, a local three-dimensional orthogonal subspace projection (3D-LOSP) algorithm is proposed. First, using both spectral and spatial information jointly, three directional background subspaces are created along the image height direction, the image width direction, and the spectral direction, respectively. Then, the three corresponding orthogonal subspaces are calculated. After that, each vector along the three directions of the local cube is projected onto the corresponding orthogonal subspace. Finally, a composite score is computed from the three directional operators. In 3D-LOSP, anomalies are redefined as targets that are not only spectrally different from the background but also spatially distinct. Thanks to the addition of the spatial information, the robustness of the anomaly detection result is greatly improved by the proposed 3D-LOSP algorithm. It is noteworthy that the proposed algorithm is an extension of LOSP, and this idea can inspire many other spectral-based anomaly detection methods. Experiments with real hyperspectral images demonstrate the stability of the detection result.
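3D-LOSP combines three directional subspaces; the sketch below shows only the basic orthogonal subspace projection score it builds on, with synthetic spectra standing in for real hyperspectral pixels:

```python
import numpy as np

def osp_score(x, B):
    """Anomaly score: energy of the pixel spectrum x outside the background
    subspace spanned by the columns of B (assumed full column rank)."""
    P_perp = np.eye(B.shape[0]) - B @ np.linalg.pinv(B)
    return np.linalg.norm(P_perp @ x)

rng = np.random.default_rng(2)
bands, endmembers = 100, 5
B = rng.standard_normal((bands, endmembers))            # local background basis
background_pixel = B @ rng.standard_normal(endmembers)  # lies in the subspace: score ~ 0
anomaly_pixel = background_pixel + rng.standard_normal(bands)  # off-subspace component
```

A pixel spanned by the background endmembers projects to (numerically) zero, while any off-subspace component survives the projection and raises the score.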
[STRESS AND INFARCT LIMITING EFFECTS OF EARLY HYPOXIC PRECONDITIONING].
Lishmanov, Yu B; Maslov, L N; Sementsov, A S; Naryzhnaya, N V; Tsibulnikov, S Yu
2015-09-01
It was established that early hypoxic preconditioning is an adaptive state different from eustress and distress. Hypoxic preconditioning has cross effects, increasing the tolerance of the heart to ischemia-reperfusion and providing an antiulcerogenic effect during immobilization stress. PMID:26672158
40 CFR 86.1232-96 - Vehicle preconditioning.
Code of Federal Regulations, 2011 CFR
2011-07-01
40 CFR, Protection of Environment; Control of Emissions from New and In-Use Highway Vehicles and Engines (Continued); Evaporative... Methanol-Fueled Heavy-Duty Vehicles; § 86.1232-96 Vehicle preconditioning. (a) Fuel tank cap(s) of...
40 CFR 86.532-78 - Vehicle preconditioning.
Code of Federal Regulations, 2011 CFR
2011-07-01
40 CFR, Protection of Environment; Control of Emissions from New and In-Use Highway Vehicles and Engines; Emission Regulations for 1978 and Later New Motorcycles; Test Procedures; § 86.532-78 Vehicle preconditioning. (a) The...
40 CFR 86.532-78 - Vehicle preconditioning.
Code of Federal Regulations, 2010 CFR
2010-07-01
40 CFR, Protection of Environment; Control of Emissions from New and In-Use Highway Vehicles and Engines; Emission Regulations for 1978 and Later New Motorcycles; Test Procedures; § 86.532-78 Vehicle preconditioning. (a) The...
40 CFR 1066.407 - Vehicle preparation and preconditioning.
Code of Federal Regulations, 2013 CFR
2013-07-01
...) Prepare the vehicle for testing as described in 40 CFR 86.131. (b) If testing will include measurement of refueling emissions, perform the vehicle preconditioning steps as described in 40 CFR 86.153. Otherwise, perform the vehicle preconditioning steps as described in 40 CFR 86.132....
40 CFR 1066.407 - Vehicle preparation and preconditioning.
Code of Federal Regulations, 2012 CFR
2012-07-01
...) Prepare the vehicle for testing as described in 40 CFR 86.131. (b) If testing will include measurement of refueling emissions, perform the vehicle preconditioning steps as described in 40 CFR 86.153. Otherwise, perform the vehicle preconditioning steps as described in 40 CFR 86.132....
Recovery Discontinuous Galerkin Jacobian-free Newton-Krylov Method for all-speed flows
HyeongKae Park; Robert Nourgaliev; Vincent Mousseau; Dana Knoll
2008-07-01
There is an increasing interest to develop the next generation simulation tools for advanced nuclear energy systems. These tools will utilize state-of-the-art numerical algorithms and computer science technology in order to maximize predictive capability, support advanced reactor designs, reduce uncertainty and increase safety margins. In analyzing nuclear energy systems, we are interested in compressible low-Mach number, high heat flux flows with a wide range of Re, Ra, and Pr numbers. Under these conditions, the focus is placed on turbulent heat transfer, in contrast to other industries whose main interest is in capturing turbulent mixing. Our objective is to develop single-point turbulence closure models for a large-scale engineering CFD code, using Direct Numerical Simulation (DNS) or Large Eddy Simulation (LES) tools, requiring very accurate and efficient numerical algorithms. The focus of this work is placed on a fully-implicit, high-order spatiotemporal discretization based on the discontinuous Galerkin method solving the conservative form of the compressible Navier-Stokes equations. The method utilizes a local reconstruction procedure derived from a weak formulation of the problem, which is inspired by the recovery diffusion flux algorithm of van Leer and Nomura [?] and by the piecewise parabolic reconstruction [?] in the finite volume method. The developed methodology is integrated into the Jacobian-free Newton-Krylov framework [?] to allow a fully-implicit solution of the problem.
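The heart of any Jacobian-free Newton-Krylov scheme is the finite-difference Jacobian-vector product, which lets the Krylov solver see J without ever forming it. A minimal sketch on an assumed toy 1-D nonlinear diffusion problem (not the recovery DG discretization of the paper), using SciPy's GMRES as the Krylov solver:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def residual(u):
    """Toy nonlinear residual: discrete -u'' + u^3 = 1 on [0, 1] with u = 0 at both ends."""
    h = 1.0 / (u.size - 1)
    r = np.empty_like(u)
    r[0], r[-1] = u[0], u[-1]
    r[1:-1] = -(u[:-2] - 2.0*u[1:-1] + u[2:]) / h**2 + u[1:-1]**3 - 1.0
    return r

def jfnk_solve(F, u0, newton_tol=1e-8, max_newton=50):
    """Jacobian-free Newton-Krylov: GMRES only needs the matvec
    J v ~ (F(u + h v) - F(u)) / h, so the Jacobian is never assembled."""
    u = u0.copy()
    for _ in range(max_newton):
        r = F(u)
        if np.linalg.norm(r) < newton_tol:
            break
        def jv(v, u=u, r=r):
            nv = np.linalg.norm(v)
            if nv == 0.0:
                return np.zeros_like(v)
            h = 1e-7 / nv          # step scaled by ||v|| to balance truncation vs round-off
            return (F(u + h*v) - r) / h
        J = LinearOperator((u.size, u.size), matvec=jv)
        du, info = gmres(J, -r, restart=40, maxiter=200)
        u = u + du
    return u

u = jfnk_solve(residual, np.zeros(32))
```

In production codes the unpreconditioned GMRES used here would be preconditioned (the hard part JFNK leaves open), but the matvec trick is exactly this simple.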
A Jacobian-free Newton Krylov method for mortar-discretized thermomechanical contact problems
NASA Astrophysics Data System (ADS)
Hansen, Glen
2011-07-01
Multibody contact problems are common within the field of multiphysics simulation. Applications involving thermomechanical contact scenarios are also quite prevalent. Such problems can be challenging to solve due to the likelihood of thermal expansion affecting contact geometry which, in turn, can change the thermal behavior of the components being analyzed. This paper explores a simple model of a light water reactor nuclear fuel rod, which consists of cylindrical pellets of uranium dioxide (UO2) fuel sealed within a Zircaloy cladding tube. The tube is initially filled with helium gas, which fills the gap between the pellets and cladding tube. The accurate modeling of heat transfer across the gap between fuel pellets and the protective cladding is essential to understanding fuel performance, including cladding stress and behavior under irradiated conditions, which are factors that affect the lifetime of the fuel. The thermomechanical contact approach developed here is based on the mortar finite element method, where Lagrange multipliers are used to enforce weak continuity constraints at participating interfaces. In this formulation, the heat equation couples to linear mechanics through a thermal expansion term. Lagrange multipliers are used to formulate the continuity constraints for both heat flux and interface traction at contact interfaces. The resulting system of nonlinear algebraic equations is cast in residual form for solution of the transient problem. A Jacobian-free Newton Krylov method is used to provide a fully-coupled solution of the coupled thermal contact and heat equations.
Parallel iterative methods for sparse linear and nonlinear equations
NASA Technical Reports Server (NTRS)
Saad, Youcef
1989-01-01
As three-dimensional models are gaining importance, iterative methods will become almost mandatory. Among these, preconditioned Krylov subspace methods have been viewed as the most efficient and reliable when solving linear as well as nonlinear systems of equations. There have been several different approaches taken to adapt iterative methods for supercomputers. Some of these approaches are discussed, and the methods that deal more specifically with general unstructured sparse matrices, such as those arising from finite element methods, are emphasized.
Inverse transport calculations in optical imaging with subspace optimization algorithms
Ding, Tian; Ren, Kui
2014-09-15
Inverse boundary value problems for the radiative transport equation play an important role in optics-based medical imaging techniques such as diffuse optical tomography (DOT) and fluorescence optical tomography (FOT). Despite the rapid progress in the mathematical theory and numerical computation of these inverse problems in recent years, developing robust and efficient reconstruction algorithms remains a challenging task and an active research topic. We propose here a robust reconstruction method that is based on subspace minimization techniques. The method splits the unknown transport solution (or a functional of it) into low-frequency and high-frequency components, and uses singular value decomposition to analytically recover part of low-frequency information. Minimization is then applied to recover part of the high-frequency components of the unknowns. We present some numerical simulations with synthetic data to demonstrate the performance of the proposed algorithm.
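A toy version of the splitting idea, with an assumed synthetic ill-conditioned forward operator in place of the transport map: the k best-conditioned singular directions of the unknown are recovered analytically from the SVD, leaving the remaining high-frequency components to iterative minimization (not shown):

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, k = 80, 60, 10                  # measurements, unknowns, "low-frequency" modes kept
U, _ = np.linalg.qr(rng.standard_normal((m, m)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.logspace(0, -6, n)             # rapidly decaying spectrum: an ill-posed forward map
K = (U[:, :n] * s) @ V.T              # synthetic forward operator K = U diag(s) V^T

x_true = rng.standard_normal(n)
y = K @ x_true + 1e-9 * rng.standard_normal(m)   # data with small noise

# Analytic recovery of the well-conditioned (low-frequency) component:
coeffs = (U[:, :k].T @ y) / s[:k]     # safe division: these singular values are not tiny
x_low = V[:, :k] @ coeffs             # projection of the unknown onto the stable subspace
```

Restricting the analytic inversion to the top-k singular directions keeps the division by singular values stable; only the poorly determined remainder needs regularized minimization.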
Universal quantum computation in waveguide QED using decoherence free subspaces
NASA Astrophysics Data System (ADS)
Paulisch, V.; Kimble, H. J.; González-Tudela, A.
2016-04-01
The interaction of quantum emitters with one-dimensional photon-like reservoirs induces strong and long-range dissipative couplings that give rise to the emergence of the so-called decoherence free subspaces (DFSs) which are decoupled from dissipation. When introducing weak perturbations on the emitters, e.g., driving, the strong collective dissipation enforces an effective coherent evolution within the DFS. In this work, we show explicitly how by introducing single-site resolved drivings, we can use the effective dynamics within the DFS to design a universal set of one and two-qubit gates within the DFS of an ensemble of two-level atom-like systems. Using Liouvillian perturbation theory we calculate the scaling with the relevant figures of merit of the systems, such as the Purcell factor and imperfect control of the drivings. Finally, we compare our results with previous proposals using atomic Λ systems in leaky cavities.
Discriminative Non-Linear Stationary Subspace Analysis for Video Classification.
Baktashmotlagh, Mahsa; Harandi, Mehrtash; Lovell, Brian C; Salzmann, Mathieu
2014-12-01
Low-dimensional representations are key to the success of many video classification algorithms. However, the commonly-used dimensionality reduction techniques fail to account for the fact that only part of the signal is shared across all the videos in one class. As a consequence, the resulting representations contain instance-specific information, which introduces noise in the classification process. In this paper, we introduce non-linear stationary subspace analysis: a method that overcomes this issue by explicitly separating the stationary parts of the video signal (i.e., the parts shared across all videos in one class), from its non-stationary parts (i.e., the parts specific to individual videos). Our method also encourages the new representation to be discriminative, thus accounting for the underlying classification problem. We demonstrate the effectiveness of our approach on dynamic texture recognition, scene classification and action recognition. PMID:26353144
A fast, preconditioned conjugate gradient Toeplitz solver
NASA Technical Reports Server (NTRS)
Pan, Victor; Schrieber, Robert
1989-01-01
A simple factorization is given of an arbitrary hermitian, positive definite matrix in which the factors are well-conditioned, hermitian, and positive definite. In fact, given knowledge of the extreme eigenvalues of the original matrix A, an optimal improvement can be achieved, making the condition numbers of each of the two factors equal to the square root of the condition number of A. This technique is then applied to the solution of hermitian, positive definite Toeplitz systems. Large linear systems with hermitian, positive definite Toeplitz matrices arise in some signal processing applications. A stable fast algorithm is given for solving these systems that is based on the preconditioned conjugate gradient method. The algorithm exploits Toeplitz structure to reduce the cost of an iteration to O(n log n) by applying the fast Fourier transform to compute matrix-vector products. The matrix factorization is used as a preconditioner.
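The FFT-based PCG scheme this abstract describes can be sketched in a few lines. Note the sketch below substitutes Strang's circulant preconditioner for the paper's factorization-based preconditioner (the abstract does not give the factorization's details), and restricts attention to real symmetric Toeplitz matrices; `toeplitz_matvec`, `strang_eigs`, and `pcg_toeplitz` are illustrative names.

```python
import numpy as np

def toeplitz_matvec(c, x):
    """Multiply the symmetric Toeplitz matrix with first column c by x,
    via circulant embedding and the FFT, in O(n log n)."""
    n = len(c)
    col = np.concatenate([c, [0.0], c[:0:-1]])      # 2n-point circulant
    y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(x, 2 * n))
    return y[:n].real

def strang_eigs(c):
    """Eigenvalues of Strang's circulant preconditioner for T."""
    n, m = len(c), len(c) // 2
    s = c.copy()
    s[m + 1:] = c[n - np.arange(m + 1, n)]          # wrap the far diagonals
    return np.fft.fft(s).real

def pcg_toeplitz(c, b, tol=1e-12, maxit=100):
    """Preconditioned conjugate gradients for T x = b, all FFT-based."""
    lam = strang_eigs(c)
    msolve = lambda r: np.fft.ifft(np.fft.fft(r) / lam).real
    x = np.zeros_like(b)
    r = b - toeplitz_matvec(c, x)
    z = msolve(r)
    p, rz = z.copy(), r @ z
    for _ in range(maxit):
        Ap = toeplitz_matvec(c, p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = msolve(r)
        rz, rz_old = r @ z, rz
        p = z + (rz / rz_old) * p
    return x

# Small SPD Toeplitz demo.
c = np.array([4.0, 1.0, 0.5, 0.25, 0.1])
b = np.ones(5)
x = pcg_toeplitz(c, b)
residual = np.linalg.norm(toeplitz_matvec(c, x) - b)
```

The circulant embedding makes each matvec a pair of length-2n FFTs, and applying the preconditioner is just a division by fixed eigenvalues in the frequency domain.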
Hyperbaric oxygen preconditioning protects rats against CNS oxygen toxicity.
Arieli, Yehuda; Kotler, Doron; Eynan, Mirit; Hochman, Ayala
2014-06-15
We examined the hypothesis that repeated exposure to non-convulsive hyperbaric oxygen (HBO) as preconditioning provides protection against central nervous system oxygen toxicity (CNS-OT). Four groups of rats were used in the study. Rats in the control and the negative control (Ctl-) groups were kept in normobaric air. Two groups of rats were preconditioned to non-convulsive HBO at 202 kPa for 1h once every other day for a total of three sessions. Twenty-four hours after preconditioning, one of the preconditioned groups and the control rats were exposed to convulsive HBO at 608 kPa, and latency to CNS-OT was measured. Ctl- rats and the second preconditioned group (PrC-) were not subjected to convulsive HBO exposure. Tissues harvested from the hippocampus and frontal cortex were evaluated for enzymatic activity and nitrotyrosine levels. In the group exposed to convulsive oxygen at 608 kPa, latency to CNS-OT increased from 12.8 to 22.4 min following preconditioning. A significant decrease in the activity of glutathione reductase and glucose-6-phosphate dehydrogenase, and a significant increase in glutathione peroxidase activity, was observed in the hippocampus of preconditioned rats. Nitrotyrosine levels were significantly lower in the preconditioned animals, the highest level being observed in the control rats. In the cortex of the preconditioned rats, a significant increase was observed in glutathione S-transferase and glutathione peroxidase activity. Repeated exposure to non-convulsive HBO provides protection against CNS-OT. The protective mechanism involves alterations in the enzymatic activity of the antioxidant system and lower levels of peroxynitrite, mainly in the hippocampus.
The Influence of Diabetes Mellitus in Myocardial Ischemic Preconditioning
Rezende, Paulo Cury; Rahmi, Rosa Maria
2016-01-01
Ischemic preconditioning (IP) is a powerful mechanism of protection discovered in the heart in which ischemia paradoxically protects the myocardium against other ischemic insults. Many factors such as diseases and medications may influence IP expression. Although diabetes poses higher cardiovascular risk, the physiopathology underlying this condition is uncertain. Moreover, although diabetes is believed to alter intracellular pathways related to myocardial protective mechanisms, it is still controversial whether diabetes may interfere with ischemic preconditioning and whether this might influence clinical outcomes. This review article looks at published reports with animal models and humans that tried to evaluate the possible influence of diabetes in myocardial ischemic preconditioning.
Preconditioning principles for preventing sports injuries in adolescents and children.
Dollard, Mark D; Pontell, David; Hallivis, Robert
2006-01-01
Preseason preconditioning can be accomplished well over a 4-week period with a mandatory period of rest as we have discussed. Athletic participation must be guided by a gradual increase of skills performance in the child assessed after a responsible preconditioning program applying physiologic parameters as outlined. Clearly, designing a preconditioning program is a dynamic process when accounting for all the variables in training discussed so far. Despite the physiologic demands of sport and training, we still need to acknowledge the psychologic maturity and welfare of the child so as to ensure that the sport environment is a wholesome and emotionally rewarding experience.
40 CFR 1065.516 - Sample system decontamination and preconditioning.
Code of Federal Regulations, 2014 CFR
2014-07-01
... Cycles § 1065.516 Sample system decontamination and preconditioning. This section describes how to manage... purified air or nitrogen. (3) When calculating zero emission levels, apply all applicable...
Preconditioning methods for improved convergence rates in iterative reconstructions
Clinthorne, N.H.; Chiao, Pingchun; Rogers, W.L. (Div. of Nuclear Medicine); Pan, T.S. (Dept. of Nuclear Medicine); Stamos, J.A. (Dept. of Nuclear Engineering)
1993-03-01
Because of the characteristics of the tomographic inversion problem, iterative reconstruction techniques often suffer from poor convergence rates--especially at high spatial frequencies. By using preconditioning methods, the convergence properties of most iterative methods can be greatly enhanced without changing their ultimate solution. To increase reconstruction speed, the authors have applied spatially-invariant preconditioning filters that can be designed using the tomographic system response and implemented using 2-D frequency-domain filtering techniques. In a sample application, the authors performed reconstructions from noiseless, simulated projection data, using preconditioned and conventional steepest-descent algorithms. The preconditioned methods demonstrated residuals that were up to a factor of 30 lower than the unassisted algorithms at the same iteration. Applications of these methods to regularized reconstructions from projection data containing Poisson noise showed similar, although not as dramatic, behavior.
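As a rough illustration of the idea (not the authors' tomographic system), the sketch below applies a spatially-invariant frequency-domain preconditioning filter inside steepest descent for a shift-invariant 1-D deblurring problem; the Gaussian kernel, regularization constant, and iteration count are arbitrary choices for the illustration.

```python
import numpy as np

n = 64
# Shift-invariant system response: a Gaussian blur acting by circular
# convolution (a stand-in for the tomographic system response).
h = np.exp(-0.5 * ((np.arange(n) - n // 2) / 2.0) ** 2)
h /= h.sum()
H = np.fft.fft(np.roll(h, -n // 2))

forward = lambda x: np.fft.ifft(np.fft.fft(x) * H).real           # A x
adjoint = lambda y: np.fft.ifft(np.fft.fft(y) * np.conj(H)).real  # A^T y

x_true = np.zeros(n)
x_true[20:30] = 1.0
b = forward(x_true)

# Spatially-invariant preconditioning filter: approximate inverse of the
# system response, regularized at frequencies the system barely passes.
P = 1.0 / (np.abs(H) ** 2 + 1e-3)

def steepest_descent(precondition, iters=50):
    x = np.zeros(n)
    for _ in range(iters):
        g = adjoint(forward(x) - b)        # gradient of 0.5*||Ax - b||^2
        if np.linalg.norm(g) < 1e-13:
            break
        d = np.fft.ifft(np.fft.fft(g) * P).real if precondition else g
        Ad = forward(d)
        x -= ((g @ d) / (Ad @ Ad)) * d     # exact line search along -d
    return x

err_pre = np.linalg.norm(forward(steepest_descent(True)) - b)
err_plain = np.linalg.norm(forward(steepest_descent(False)) - b)
```

Because the filter approximately equalizes the spectrum of the normal-equations operator, the preconditioned iteration reduces the residual far faster than the unassisted one at the same iteration count, mirroring the abstract's observation.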
Progress in Parallel Schur Complement Preconditioning for Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Chan, Tony F.; Tang, Wei-Pai; Chancellor, Marisa K. (Technical Monitor)
1997-01-01
We consider preconditioning methods for nonself-adjoint advective-diffusive systems based on a non-overlapping Schur complement procedure for arbitrary triangulated domains. The ultimate goal of this research is to develop scalable preconditioning algorithms for fluid flow discretizations on parallel computing architectures. In our implementation of the Schur complement preconditioning technique, the triangulation is first partitioned into a number of subdomains using the METIS multi-level k-way partitioning code. This partitioning induces a natural 2×2 partitioning of the p.d.e. discretization matrix. By considering various inverse approximations of the 2×2 system, we have developed a family of robust preconditioning techniques. A computer code based on these ideas has been developed and tested on the IBM SP2 and the SGI Power Challenge array using MPI message passing protocol. A number of example CFD calculations will be presented to illustrate and assess various Schur complement approximations.
Visual exploration of high-dimensional data through subspace analysis and dynamic projections
Liu, S.; Wang, B.; Thiagarajan, J. J.; Bremer, P. -T.; Pascucci, V.
2015-06-01
Here, we introduce a novel interactive framework for visualizing and exploring high-dimensional datasets based on subspace analysis and dynamic projections. We assume the high-dimensional dataset can be represented by a mixture of low-dimensional linear subspaces with mixed dimensions, and provide a method to reliably estimate the intrinsic dimension and linear basis of each subspace extracted from the subspace clustering. Subsequently, we use these bases to define unique 2D linear projections as viewpoints from which to visualize the data. To understand the relationships among the different projections and to discover hidden patterns, we connect these projections through dynamic projections that create smooth animated transitions between pairs of projections. We introduce the view transition graph, which provides flexible navigation among these projections to facilitate an intuitive exploration. Finally, we provide detailed comparisons with related systems, and use real-world examples to demonstrate the novelty and usability of our proposed framework.
Building Ultra-Low False Alarm Rate Support Vector Classifier Ensembles Using Random Subspaces
Chen, B Y; Lemmond, T D; Hanley, W G
2008-10-06
This paper presents the Cost-Sensitive Random Subspace Support Vector Classifier (CS-RS-SVC), a new learning algorithm that combines random subspace sampling and bagging with Cost-Sensitive Support Vector Classifiers to more effectively address detection applications burdened by unequal misclassification requirements. When compared to its conventional, non-cost-sensitive counterpart on a two-class signal detection application, random subspace sampling is shown to very effectively leverage the additional flexibility offered by the Cost-Sensitive Support Vector Classifier, yielding a more than four-fold increase in the detection rate at a false alarm rate (FAR) of zero. Moreover, the CS-RS-SVC is shown to be fairly robust to constraints on the feature subspace dimensionality, enabling reductions in computation time of up to 82% with minimal performance degradation.
Modulated Hebb-Oja learning rule--a method for principal subspace analysis.
Jankovic, Marko V; Ogawa, Hidemitsu
2006-03-01
This paper presents an analysis of the recently proposed modulated Hebb-Oja (MHO) method, which performs a linear mapping to a lower-dimensional subspace; the principal component subspace is the case analyzed here. Compared to some other well-known methods for yielding the principal component subspace (e.g., Oja's Subspace Learning Algorithm), the proposed method has one feature that could be seen as desirable from the biological point of view: the synaptic efficacy learning rule does not need explicit information about the values of the other efficacies to make an individual efficacy modification. The simplicity of the "neural circuits" that perform global computations, and the fact that their number does not depend on the number of input and output neurons, can also be seen as good features of the proposed method.
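Since the abstract contrasts MHO with Oja's Subspace Learning Algorithm, a minimal sketch of the latter (the classical subspace rule, not the MHO variant) may help fix ideas; the data dimensions, learning rate, and random seed below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

# Data in R^5 with a dominant 2-D principal subspace.
d, m, T = 5, 2, 4000
basis = np.linalg.qr(rng.standard_normal((d, m)))[0]
X = rng.standard_normal((T, m)) @ (3.0 * basis.T)     # strong subspace signal
X += 0.1 * rng.standard_normal((T, d))                # isotropic noise

W = 0.1 * rng.standard_normal((d, m))                 # synaptic efficacies
eta = 0.01
for x in X:
    y = W.T @ x                                       # linear unit outputs
    W += eta * (np.outer(x, y) - W @ np.outer(y, y))  # Oja subspace rule

# The learned span should match the principal subspace.
Q = np.linalg.qr(W)[0]
align = np.linalg.norm(basis.T @ Q) ** 2 / m          # mean squared cosine
```

Note that this rule converges to the principal *subspace* rather than to the individual principal components, which matches the distinction the abstract draws; unlike MHO, the update for each efficacy here does use the full weight matrix.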
Universal quantum computation in decoherence-free subspaces with hot trapped ions
Aolita, Leandro; Davidovich, Luiz; Kim, Kihwan; Haeffner, Hartmut
2007-05-15
We consider interactions that generate a universal set of quantum gates on logical qubits encoded in a collective-dephasing-free subspace, and discuss their implementations with trapped ions. This allows for the removal of the by-far largest source of decoherence in current trapped-ion experiments, collective dephasing. In addition, an explicit parametrization of all two-body Hamiltonians able to generate such gates without the system's state ever exiting the protected subspace is provided.
Supervised orthogonal discriminant subspace projects learning for face recognition.
Chen, Yu; Xu, Xiao-Hong
2014-02-01
In this paper, a new linear dimension reduction method called supervised orthogonal discriminant subspace projection (SODSP) is proposed, which addresses the high dimensionality of data and the small sample size problem. More specifically, given a set of data points in the ambient space, a novel weight matrix that describes the relationship between the data points is first built. In order to model the manifold structure, the class information is incorporated into the weight matrix. Based on the novel weight matrix, the local scatter matrix as well as the non-local scatter matrix is defined such that the neighborhood structure can be preserved. In order to enhance the recognition ability, we impose an orthogonal constraint on a graph-based maximum margin analysis, seeking a projection that maximizes the difference, rather than the ratio, between the non-local scatter and the local scatter. In this way, SODSP naturally avoids the singularity problem. Further, we develop an efficient and stable algorithm for implementing SODSP, especially on high-dimensional data sets. Moreover, the theoretical analysis shows that LPP is a special instance of SODSP under certain constraints. Experiments on the ORL, Yale, Extended Yale face database B and FERET face databases are performed to test and evaluate the proposed algorithm. The results demonstrate the effectiveness of SODSP.
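The maximize-the-difference idea admits a compact sketch: with orthonormal projection columns, maximizing trace(Pᵀ(S_nonlocal − S_local)P) is solved by the top eigenvectors of the scatter difference, with no matrix inversion (hence no singularity problem). The toy data, the same-class neighbor rule, and the nearest-centroid check below are illustrative assumptions, not the paper's exact weight matrix or experiments.

```python
import numpy as np

rng = np.random.default_rng(2)

n, dim, k, r = 120, 20, 5, 2
y = np.repeat([0, 1], n // 2)
X = rng.standard_normal((n, dim))
X[y == 1, :3] += 2.5                     # class structure in 3 coordinates

D = ((X[:, None] - X[None]) ** 2).sum(-1)
np.fill_diagonal(D, np.inf)
S_loc = np.zeros((dim, dim))
S_non = np.zeros((dim, dim))
for i in range(n):
    nbrs = set(np.argsort(D[i])[:k])
    for j in range(n):
        if j == i:
            continue
        diff = X[i] - X[j]
        if j in nbrs and y[j] == y[i]:   # local: same-class near neighbors
            S_loc += np.outer(diff, diff)
        else:                            # non-local: all remaining pairs
            S_non += np.outer(diff, diff)

# Orthogonal projection maximizing the scatter *difference*:
# top-r eigenvectors of the symmetric matrix S_non - S_loc.
evals, evecs = np.linalg.eigh(S_non - S_loc)
P = evecs[:, -r:]                        # orthonormal columns
Z = X @ P

# Sanity check: nearest-centroid accuracy in the reduced space.
c0, c1 = Z[y == 0].mean(0), Z[y == 1].mean(0)
pred = (np.linalg.norm(Z - c1, axis=1) < np.linalg.norm(Z - c0, axis=1))
acc = (pred.astype(int) == y).mean()
```

Because the objective is a difference rather than a ratio, the solution only requires a symmetric eigendecomposition, which is why the small-sample singularity of ratio-trace methods never arises.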
Independent vector analysis using subband and subspace nonlinearity
NASA Astrophysics Data System (ADS)
Na, Yueyue; Yu, Jian; Chai, Bianfang
2013-12-01
Independent vector analysis (IVA) is a recently proposed technique, an application of which is to solve the frequency domain blind source separation problem. Compared with the traditional complex-valued independent component analysis plus permutation correction approach, the largest advantage of IVA is that the permutation problem is directly addressed by IVA rather than resorting to the use of an ad hoc permutation resolving algorithm after a separation of the sources in multiple frequency bands. In this article, two updates for IVA are presented. First, a novel subband construction method is introduced, IVA will be conducted in subbands from high frequency to low frequency rather than in the full frequency band, the fact that the inter-frequency dependencies in subbands are stronger allows a more efficient approach to the permutation problem. Second, to improve robustness and against noise, the IVA nonlinearity is calculated only in the signal subspace, which is defined by the eigenvector associated with the largest eigenvalue of the signal correlation matrix. Different experiments were carried out on a software suite developed by us, and dramatic performance improvements were observed using the proposed methods. Lastly, as an example of real-world application, IVA with the proposed updates was used to separate vibration components from high-speed train noise data.
Steganalysis in high dimensions: fusing classifiers built on random subspaces
NASA Astrophysics Data System (ADS)
Kodovský, Jan; Fridrich, Jessica
2011-02-01
By working with high-dimensional representations of covers, modern steganographic methods are capable of preserving a large number of complex dependencies among individual cover elements and thus avoid detection using current best steganalyzers. Inevitably, steganalysis needs to start using high-dimensional feature sets as well. This brings two key problems - construction of good high-dimensional features and machine learning that scales well with respect to dimensionality. Depending on the classifier, high dimensionality may lead to problems with the lack of training data, infeasibly high complexity of training, degradation of generalization abilities, lack of robustness to cover source, and saturation of performance below its potential. To address these problems collectively known as the curse of dimensionality, we propose ensemble classifiers as an alternative to the much more complex support vector machines. Based on the character of the media being analyzed, the steganalyst first puts together a high-dimensional set of diverse "prefeatures" selected to capture dependencies among individual cover elements. Then, a family of weak classifiers is built on random subspaces of the prefeature space. The final classifier is constructed by fusing the decisions of individual classifiers. The advantage of this approach is its universality, low complexity, simplicity, and improved performance when compared to classifiers trained on the entire prefeature set. Experiments with the steganographic algorithms nsF5 and HUGO demonstrate the usefulness of this approach over current state of the art.
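The random-subspace-plus-fusion recipe in this abstract can be sketched as follows. The weak learner here is a ridge-regularized least-squares classifier rather than the paper's specific learners, and the synthetic data, subspace dimension, and ensemble size are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(7)

def make_data(n, dim, informative=20):
    """Toy two-class data: only the first few features carry signal."""
    y = rng.integers(0, 2, n)
    X = rng.standard_normal((n, dim))
    X[:, :informative] += (2 * y[:, None] - 1) * 0.4
    return X, y

def fit_lsq(X, y):
    """Weak learner: ridge-regularized least squares on labels in {-1, +1}."""
    A = np.hstack([X, np.ones((len(X), 1))])
    t = 2.0 * y - 1.0
    return np.linalg.solve(A.T @ A + 1e-3 * np.eye(A.shape[1]), A.T @ t)

def predict_lsq(w, X):
    return (np.hstack([X, np.ones((len(X), 1))]) @ w > 0).astype(int)

dim, k, L = 500, 40, 31                 # feature dim, subspace dim, ensemble
Xtr, ytr = make_data(400, dim)
Xte, yte = make_data(400, dim)

votes = np.zeros(len(yte))
for _ in range(L):
    S = rng.choice(dim, k, replace=False)   # one random feature subspace
    w = fit_lsq(Xtr[:, S], ytr)
    votes += predict_lsq(w, Xte[:, S])      # fuse by majority vote
pred = (votes > L / 2).astype(int)
acc = (pred == yte).mean()
```

Each weak classifier only inverts a (k+1)×(k+1) system, so training cost grows with the chosen subspace dimension rather than with the full feature dimensionality, which is the complexity advantage the abstract claims over a single classifier trained on the entire feature set.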
Progressive band processing of orthogonal subspace projection in hyperspectral imagery
NASA Astrophysics Data System (ADS)
Li, Hsiao-Chi; Li, Yao; Gao, Cheng; Song, Meiping; Chang, Chein-I.
2015-05-01
Progressive band processing (PBP) processes data band by band according to the Band SeQuential (BSQ) format acquired by a hyperspectral imaging sensor. It can be implemented in real time in the sense that data processing can be performed whenever bands become available, without waiting for the data to be completely collected. This is particularly important for satellite communication when data download is limited by bandwidth and transmission. This paper presents a new concept of processing a well-known technique, Orthogonal Subspace Projection (OSP), band by band, to be called PBP-OSP. Several benefits can be gained by PBP-OSP. One is band-processing capability, which allows different receiving ends to process data whenever bands are available. Second, it enables users to identify significant bands during data processing. Third, unlike band selection, which requires knowing the number of bands to be selected, or band prioritization, PBP-OSP can process arbitrary bands in real time with no need of such prior knowledge. Most importantly, PBP can locate and identify which bands are significant for data processing in a progressive manner. Such a progressive profile is the best advantage that PBP-OSP can offer and cannot be accomplished by any other OSP-like operators.
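For reference, the core (non-progressive) OSP operator the paper builds on can be written in a few lines: annihilate the undesired-signature subspace, then match the desired target. The random signatures below are stand-ins for real endmember spectra, and the progressive band-by-band machinery is omitted.

```python
import numpy as np

rng = np.random.default_rng(3)

bands = 50
d = rng.random(bands)                       # desired target signature
U = rng.random((bands, 3))                  # undesired/background signatures

P_perp = np.eye(bands) - U @ np.linalg.pinv(U)  # annihilates span(U)
w = P_perp @ d                                  # OSP detector vector

bg = U @ rng.random(3)                      # pure background mixture pixel
score_bg = w @ bg                           # ~0: background is rejected
score_tg = w @ (bg + 0.5 * d)               # >0: target abundance shows up
```

PBP-OSP would update `P_perp` and `w` each time a new band (row) arrives rather than forming them once from the full cube; the fixed-size version above shows what each incremental update is converging toward.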
Removing Ocular Movement Artefacts by a Joint Smoothened Subspace Estimator
Phlypo, Ronald; Boon, Paul; D'Asseler, Yves; Lemahieu, Ignace
2007-01-01
To cope with the severe masking of background cerebral activity in the electroencephalogram (EEG) by ocular movement artefacts, we present a method which combines lower-order, short-term and higher-order, long-term statistics. The joint smoothened subspace estimator (JSSE) calculates the joint information in both statistical models, subject to the constraint that the resulting estimated source should be sufficiently smooth in the time domain (i.e., has a large autocorrelation or self predictive power). It is shown that the JSSE is able to estimate a component from simulated data that is superior with respect to methodological artefact suppression to those of FastICA, SOBI, pSVD, or JADE/COM1 algorithms used for blind source separation (BSS). Interference and distortion suppression are of comparable order when compared with the above-mentioned methods. Results on patient data demonstrate that the method is able to suppress blinking and saccade artefacts in a fully automated way. PMID:18288258
Speciesism as a precondition to justice.
Barilan, Y Michael
2004-03-01
Over and above fairness, the concept of justice presupposes that in any community no one member's wellbeing or life plan is inexorably dependent on the consumption or exploitation of other members. Renunciation of such use of others constitutes moral sociability, without which moral considerability is useless and possibly meaningless. To know if a creature is morally sociable, we must know it in its community; we must know its ecological profile, its species. Justice can be blind to species no more than to circumstance. Speciesism, the recognition of rights on the basis of group membership rather than solely on the basis of moral considerations at the level of the individual creature, embodies this assertion but is often described as a variant of Nazi racism. I consider this description and find it unwarranted, most obviously because Nazi racism extolled the stronger and the abuser and condemned the weaker and the abused, be they species or individuals, humans or animals. To the contrary, I present an argument for speciesism as a precondition to justice.
Preconditioning stimuli that augment chromaffin cell secretion.
Tapia, Laura; García-Eguiagaray, Josefina; García, Antonio G; Gandía, Luis
2009-04-01
We have investigated here whether a preconditioning stimulation of nicotinic and muscarinic receptors augmented the catecholamine release responses elicited by supramaximal 3-s pulses of 100 μM acetylcholine (100ACh) or 100 mM K(+) (100K(+)) applied to fast-perifused bovine adrenal chromaffin cells. Threshold concentrations of nicotine (1-3 μM) that caused only a tiny secretion did, however, augment the responses elicited by 100ACh or 100K(+) by 2- to 3.5-fold. This effect was suppressed by mecamylamine and by Ca(2+) deprivation, developed with a half-time (t(1/2)) of 1 min, and was reversible. The nicotine effect was mimicked by threshold concentrations of ACh, choline, epibatidine, and oxotremorine-M but not by methacholine. Threshold concentrations of K(+) caused lesser potentiation of secretion compared with that of threshold nicotine. The data are compatible with a hypothesis implying 1) that continuous low-frequency sympathetic discharge places chromaffin cells at the adrenal gland in a permanent "hypersensitive" state; and 2) that this allows an explosive secretion of catecholamines by high-frequency sympathetic discharge during stress.
Responsive corneosurfametry following in vivo skin preconditioning.
Uhoda, E; Goffin, V; Pierard, G E
2003-12-01
Skin is subjected to many environmental threats, some of which alter the structure and function of the stratum corneum. Among them, surfactants are recognized factors that may influence irritant contact dermatitis. The present study was conducted to compare the variations in skin capacitance and corneosurfametry (CSM) reactivity before and after skin exposure to repeated subclinical injuries by 2 hand dishwashing liquids. A forearm immersion test was performed on 30 healthy volunteers. 2 daily soak sessions were performed for 5 days. At inclusion and the day following the last soak session, skin capacitance was measured and cyanoacrylate skin-surface strippings were harvested. The latter specimens were used for the ex vivo microwave CSM. Both types of assessments clearly differentiated the 2 hand dishwashing liquids. The forearm immersion test allowed the discriminant sensitivity of CSM to increase. Intact skin capacitance did not predict CSM data. By contrast, a significant correlation was found between the post-test conductance and the corresponding CSM data. In conclusion, a forearm immersion test under realistic conditions can discriminate the irritation potential between surfactant-based products by measuring skin conductance and performing CSM. In vivo skin preconditioning by surfactants increases CSM sensitivity to the same surfactants.
A Weakest Precondition Approach to Robustness
NASA Astrophysics Data System (ADS)
Balliu, Musard; Mastroeni, Isabella
With the increasing complexity of information management computer systems, security becomes a real concern. E-government, web-based financial transactions or military and health care information systems are only a few examples where large amounts of information can reside on different hosts distributed worldwide. It is clear that any disclosure or corruption of confidential information in these contexts can prove fatal. Information flow controls constitute an appealing and promising technology to protect both data confidentiality and data integrity. The certification of the security degree of a program that runs in untrusted environments still remains an open problem in the area of language-based security. Robustness asserts that an active attacker, who can modify program code in some fixed points (holes), is unable to disclose more private information than a passive attacker, who merely observes unclassified data. In this paper, we extend a method recently proposed for checking declassified non-interference in the presence of passive attackers only, in order to check robustness by means of weakest precondition semantics. In particular, this semantics simulates the kind of analysis that can be performed by an attacker, i.e., from public output towards private input. The choice of semantics allows us to distinguish between different attack models and to characterize the security of applications in different scenarios.
A multiple constrained signal subspace projection for target detection in hyperspectral images
NASA Astrophysics Data System (ADS)
Chang, Lena; Wu, Yen-Ting; Tang, Zay-Shing; Chang, Yang-Lang
2015-05-01
In this study, we develop a multiple-constrained signal subspace projection (SSP) approach to target detection. Instead of using a single constraint on target detection, we design an optimal filter with multiple constraints on desired targets by using SSP. The proposed SSP approach fully exploits the orthogonal property of two orthogonal subspaces: one, denoted the signal subspace, containing desired and undesired/background targets; the other, denoted the noise subspace, which is orthogonal to the signal subspace. By projecting the weights of the detection filter onto the signal subspace, the proposed SSP can reduce estimation errors in target signatures and alleviate the performance degradation caused by uncertainty in the target signature. The SSP approach can detect desired targets, suppress undesired targets and minimize the interference effects. In the experiments, we provide three methods for selecting multiple constraints on the desired target: K-means, principal eigenvectors and endmember extraction techniques. Simulation results show that the proposed SSP with multiple constraints selected by K-means has better detection performance. Furthermore, the proposed SSP with multiple constraints is a robust detection approach which can overcome the uncertainty of the desired target signature in real image data.
Subspace Leakage Analysis and Improved DOA Estimation With Small Sample Size
NASA Astrophysics Data System (ADS)
Shaghaghi, Mahdi; Vorobyov, Sergiy A.
2015-06-01
Classical methods of DOA estimation such as the MUSIC algorithm are based on estimating the signal and noise subspaces from the sample covariance matrix. For a small number of samples, such methods are exposed to performance breakdown, as the sample covariance matrix can largely deviate from the true covariance matrix. In this paper, the problem of DOA estimation performance breakdown is investigated. We consider the structure of the sample covariance matrix and the dynamics of the root-MUSIC algorithm. The performance breakdown in the threshold region is associated with subspace leakage, where some portion of the true signal subspace resides in the estimated noise subspace. In this paper, the subspace leakage is theoretically derived. We also propose a two-step method which improves the performance by modifying the sample covariance matrix such that the amount of subspace leakage is reduced. Furthermore, we introduce a phenomenon termed root-swap, which occurs in the root-MUSIC algorithm in the low-sample-size region and degrades the performance of the DOA estimation. A new method is then proposed to alleviate this problem. Numerical examples and simulation results are given for uncorrelated and correlated sources to illustrate the improvement achieved by the proposed methods. Moreover, the proposed algorithms are combined with the pseudo-noise resampling method to further improve the performance.
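The subspace-estimation step that the breakdown analysis concerns can be sketched with a plain MUSIC spectral search on a half-wavelength uniform linear array (root-MUSIC and the paper's two-step leakage correction are omitted); the array size, snapshot count, noise level, and source angle below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)

M, N = 8, 200                    # sensors, snapshots
theta_true = 20.0                # source direction (degrees)

def steering(theta_deg):
    """Half-wavelength ULA steering vector."""
    k = np.pi * np.sin(np.deg2rad(theta_deg))
    return np.exp(1j * k * np.arange(M))

s = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
noise = 0.3 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
X = np.outer(steering(theta_true), s) + noise

R = X @ X.conj().T / N                   # sample covariance matrix
_, V = np.linalg.eigh(R)                 # eigenvalues in ascending order
En = V[:, :-1]                           # noise subspace (one source)

grid = np.arange(-90.0, 90.0, 0.1)
spec = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2
                 for t in grid])         # MUSIC pseudospectrum
theta_hat = grid[np.argmax(spec)]
```

With many snapshots the estimated noise subspace is nearly orthogonal to the true steering vector and the peak is sharp; shrinking `N` toward `M` is precisely the regime where the subspace leakage analyzed in the paper sets in.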
Subspace learning of dynamics on a shape manifold: a generative modeling approach.
Yi, Sheng; Krim, Hamid
2014-11-01
In this paper, we propose a novel subspace learning algorithm for shape dynamics. Compared to previous works, our method is invertible and better characterizes the nonlinear geometry of a shape manifold while retaining good computational efficiency. Using a parallel moving frame on a shape manifold, each path of shape dynamics is uniquely represented in a subspace spanned by the moving frame, given an initial condition (the starting point and starting frame). Mathematically, such a representation may be formulated as solving a manifold-valued differential equation, which provides a generative model of high-dimensional shape dynamics in a lower-dimensional subspace. Given the parallelism and a path on a shape manifold, the parallel moving frame along the path is uniquely determined up to the choice of the starting frame. With an initial frame, we minimize the reconstruction error from the subspace to the shape manifold. Such an optimization characterizes the Riemannian geometry of the manifold well by imposing parallelism (equivalent to a Riemannian metric) constraints on the moving frame. The parallelism in this paper is defined by a Levi-Civita connection, which is consistent with the Riemannian metric of the shape manifold. In the experiments, the performance of the subspace learning is extensively evaluated using two scenarios: 1) how the high-dimensional geometry is characterized in the subspace and 2) how the reconstruction compares with the original shape dynamics. The results demonstrate and validate the theoretical advantages of the proposed approach. PMID:25248183
NASA Astrophysics Data System (ADS)
Sekihara, Kensuke; Kawabata, Yuya; Ushio, Shuta; Sumiya, Satoshi; Kawabata, Shigenori; Adachi, Yoshiaki; Nagarajan, Srikantan S.
2016-06-01
Objective. In functional electrophysiological imaging, signals are often contaminated by interference that can be of considerable magnitude compared to the signals of interest. This paper proposes a novel algorithm for removing such interference that does not require separate noise measurements. Approach. The algorithm is based on a dual definition of the signal subspace in the spatial and time domains. Since the algorithm makes use of this duality, it is named the dual signal subspace projection (DSSP). The DSSP algorithm first projects the columns of the measured data matrix onto the inside and outside of the spatial-domain signal subspace, creating a set of two preprocessed data matrices. The intersection of the row spans of these two matrices is estimated as the time-domain interference subspace. The original data matrix is projected onto the subspace that is orthogonal to this interference subspace. Main results. The DSSP algorithm is validated using computer simulations and two sets of real biomagnetic data: spinal cord evoked field data measured from a healthy volunteer and magnetoencephalography data from a patient with a vagus nerve stimulator. Significance. The proposed DSSP algorithm is effective for removing overlapping interference in a wide variety of biomagnetic measurements.
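Under strong simplifying assumptions (a known spatial signal-subspace basis, rank truncation by a singular-value threshold), the DSSP steps can be sketched as below; all function names and the interface are ours, not the authors' implementation.

```python
import numpy as np

def _rowspace(M, tol=1e-8):
    """Orthonormal basis (rows) of the numerically significant row space of M."""
    _, s, Vt = np.linalg.svd(M, full_matrices=False)
    return Vt[s > tol * s[0]]

def dssp_clean(B, Us, dim_int):
    """Sketch of the DSSP steps: split data B (channels x time) using the
    spatial signal-subspace basis Us, take the common row space of the two
    parts as the time-domain interference subspace, and project it out."""
    P = Us @ Us.T                      # projector onto the spatial signal subspace
    V1 = _rowspace(P @ B)              # time courses seen inside the subspace
    V2 = _rowspace(B - P @ B)          # time courses seen outside it
    # Principal vectors of the intersection, via the SVD of V1 V2^T.
    _, _, Wt = np.linalg.svd(V1 @ V2.T)
    V_int = Wt[:dim_int] @ V2          # interference time courses (orthonormal rows)
    return B - (B @ V_int.T) @ V_int   # project out the interference subspace
```

A time course appearing both inside and outside the spatial signal subspace is treated as interference and removed from every channel.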
Dana A. Knoll; H. Park; Kord Smith
2011-02-01
The use of the Jacobian-free Newton-Krylov (JFNK) method within the context of nonlinear diffusion acceleration (NDA) of source iteration is explored. The JFNK method is a synergistic combination of Newton's method as the nonlinear solver and Krylov methods as the linear solver. JFNK methods do not form or store the Jacobian matrix, and Newton's method is executed via probing the nonlinear discrete function to approximate the required matrix-vector products. Current application of NDA relies upon a fixed-point, or Picard, iteration to resolve the nonlinearity. We show that the JFNK method can be used to replace this Picard iteration with a Newton iteration. The Picard linearization is retained as a preconditioner. We show that the resulting JFNK-NDA capability provides benefit in some regimes. Furthermore, we study the effects of a two-grid approach, and the required intergrid transfers when the higher-order transport method is solved on a fine mesh compared to the low-order acceleration problem.
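The Jacobian-free matrix-vector product at the core of JFNK is a one-line finite difference. The sketch below uses one common step-size heuristic; the exact scaling used in the implementation described above may differ.

```python
import numpy as np

def jfnk_matvec(F, u, v, eps=1e-7):
    """Approximate the Jacobian-vector product J(u) @ v without forming J,
    by probing the nonlinear residual F with a first-order finite difference:
    J(u) v ~ (F(u + h v) - F(u)) / h."""
    nv = np.linalg.norm(v)
    if nv == 0.0:
        return np.zeros_like(u)
    h = eps * max(1.0, np.linalg.norm(u)) / nv   # a common step-size heuristic
    return (F(u + h * v) - F(u)) / h
```

A Krylov solver such as GMRES only needs this matvec, which is why the Jacobian is never formed or stored.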
Reduction in postsystolic wall thickening during late preconditioning.
Monnet, Xavier; Lucats, Laurence; Colin, Patrice; Derumeaux, Geneviève; Dubois-Rande, Jean-Luc; Hittinger, Luc; Ghaleh, Bijan; Berdeaux, Alain
2007-01-01
Brief coronary artery occlusion (CAO) and reperfusion induce myocardial stunning and late preconditioning. Postsystolic wall thickening (PSWT) also develops with CAO and reperfusion. However, the time course of PSWT during stunning and the regional function pattern of the preconditioned myocardium remain unknown. The goal of this study was to investigate the evolution of PSWT during myocardial stunning and its modifications during late preconditioning. Dogs were chronically instrumented to measure (sonomicrometry) systolic wall thickening (SWT), PSWT, total wall thickening (TWT = SWT + PSWT), and maximal rate of thickening (dWT/dt(max)). Two 10-min CAOs (circumflex artery) were performed 24 h apart (day 0 and day 1, n = 7). At day 0, CAO decreased SWT and increased PSWT. During the first hours of the subsequent stunning, the evolution of PSWT was symmetrical to that of SWT. At day 1, baseline SWT was similar to day 0, but PSWT was reduced (-66%), while dWT/dt(max) and the SWT/TWT ratio increased (+48 and +14%, respectively). After CAO at day 1, stunning was reduced, indicating late preconditioning. Compared with day 0, PSWT was significantly reduced, and dWT/dt(max) as well as the SWT/TWT ratio were increased, i.e., a greater part of TWT was devoted to ejection. A similar decrease in PSWT was observed with a nonischemic preconditioning stimulus (rapid ventricular pacing, n = 4). In conclusion, a major contractile adaptation occurs during late preconditioning, i.e., the rate of wall thickening is enhanced and PSWT is almost abolished. These phenotype adaptations represent potential approaches for characterizing stunning and late preconditioning with repetitive ischemia in humans.
Evaluating the utility of mid-infrared spectral subspaces for predicting soil properties
Sila, Andrew M.; Shepherd, Keith D.; Pokhariyal, Ganesh P.
2016-01-01
We propose four methods for finding local subspaces in large spectral libraries: (a) cosine angle spectral matching; (b) hit quality index spectral matching; (c) self-organizing maps; and (d) archetypal analysis. We then evaluate prediction accuracies for global and subspace calibration models. These methods were tested on a mid-infrared spectral library containing 1907 soil samples collected from 19 different countries under the Africa Soil Information Service project. Calibration models for pH, Mehlich-3 Ca, Mehlich-3 Al, total carbon and clay soil properties were developed for the whole library and for the subspaces. Root mean square error of prediction, computed using a one-third-holdout validation set, was used to evaluate the predictive performance of the subspace and global models. The effect of pretreating the spectra was tested for 1st and 2nd derivative Savitzky-Golay algorithms, multiplicative scatter correction, standard normal variate, and standard normal variate followed by detrending. In summary, the results show that the global models outperformed the subspace models. We therefore conclude that global models are more accurate than local models except in a few cases. For instance, sand and clay root mean square error values from local models built with the archetypal analysis method were 50% poorer than the global models, except for subspace models obtained using multiplicative scatter corrected spectra, which were 12% better. However, the subspace approach provides novel methods for discovering patterns that may exist in large spectral libraries. PMID:27110048
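Method (a), cosine angle spectral matching, amounts to ranking library spectra by their angle to a query spectrum and keeping the nearest ones as the local calibration set; a minimal sketch (function name and interface ours):

```python
import numpy as np

def cosine_angle_neighbors(library, query, k=10):
    """Select the k library spectra closest to a query spectrum by cosine
    angle, defining a local subspace for calibration."""
    L = library / np.linalg.norm(library, axis=1, keepdims=True)
    q = query / np.linalg.norm(query)
    cos = L @ q                       # cosine similarity to every library spectrum
    return np.argsort(cos)[::-1][:k]  # indices of the k best matches
```

A local calibration model is then fitted only on the returned rows of the library.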
Conjunctive patches subspace learning with side information for collaborative image retrieval.
Zhang, Lining; Wang, Lipo; Lin, Weisi
2012-08-01
Content-Based Image Retrieval (CBIR) has attracted substantial attention during the past few years for its potential practical applications to image management. A variety of Relevance Feedback (RF) schemes have been designed to bridge the semantic gap between the low-level visual features and the high-level semantic concepts for an image retrieval task. Various Collaborative Image Retrieval (CIR) schemes aim to utilize the user historical feedback log data with similar and dissimilar pairwise constraints to improve the performance of a CBIR system. However, existing subspace learning approaches with explicit label information cannot be applied for a CIR task, although the subspace learning techniques play a key role in various computer vision tasks, e.g., face recognition and image classification. In this paper, we propose a novel subspace learning framework, i.e., Conjunctive Patches Subspace Learning (CPSL) with side information, for learning an effective semantic subspace by exploiting the user historical feedback log data for a CIR task. The CPSL can effectively integrate the discriminative information of labeled log images, the geometrical information of labeled log images and the weakly similar information of unlabeled images together to learn a reliable subspace. We formally formulate this problem into a constrained optimization problem and then present a new subspace learning technique to exploit the user historical feedback log data. Extensive experiments on both synthetic data sets and a real-world image database demonstrate the effectiveness of the proposed scheme in improving the performance of a CBIR system by exploiting the user historical feedback log data.
Implementation of Preconditioned Dual-Time Procedures in OVERFLOW
NASA Technical Reports Server (NTRS)
Pandya, Shishir A.; Venkateswaran, Sankaran; Pulliam, Thomas H.; Kwak, Dochan (Technical Monitor)
2003-01-01
Preconditioning methods have become the method of choice for the solution of flowfields involving the simultaneous presence of low Mach and transonic regions. It is well known that these methods are important for ensuring accurate numerical discretization as well as convergence efficiency over various operating conditions such as low Mach numbers, low Reynolds numbers and high Strouhal numbers. For unsteady problems, the preconditioning is introduced within a dual-time framework wherein the physical time-derivatives are used to march the unsteady equations and the preconditioned time-derivatives are used for purposes of numerical discretization and iterative solution. In this paper, we describe the implementation of the preconditioned dual-time methodology in the OVERFLOW code. To demonstrate the performance of the method, we employ both simple and practical unsteady flowfields, including vortex propagation in a low Mach number flow, the flowfield of an impulsively started plate (Stokes' first problem) and a cylindrical jet in a low Mach number crossflow with ground effect. All the results demonstrate that the preconditioning algorithm is responsible for improvements to both numerical accuracy and convergence efficiency and, thereby, enables low Mach number unsteady computations to be performed at a fraction of the cost of traditional time-marching methods.
Heat shock proteins, end effectors of myocardium ischemic preconditioning?
Guisasola, María Concepcion; Desco, Maria del Mar; Gonzalez, Fernanda Silvana; Asensio, Fernando; Dulin, Elena; Suarez, Antonio; Garcia Barreno, Pedro
2006-01-01
The purpose of this study was to investigate (1) whether ischemia-reperfusion increased the content of heat shock protein 72 (Hsp72) transcripts and (2) whether the myocardial content of Hsp72 is increased by ischemic preconditioning, so that Hsp72 can be considered an end effector of preconditioning. Twelve male minipigs (8 protocol, 4 sham) were used, with the following ischemic preconditioning protocol: three alternating 5-minute cycles of ischemia and reperfusion, followed by a final reperfusion of 3 hours. Initial and final transmural biopsies (both in healthy and ischemic areas) were taken in all animals. Heat shock protein 72 messenger ribonucleic acid (mRNA) expression was measured by a semiquantitative reverse transcriptase-polymerase chain reaction (RT-PCR) method using complementary DNA normalized against the housekeeping gene cyclophilin. The identification of heat shock protein 72 was performed by immunoblot. In our “classic” preconditioning model, we found no changes in hsp72 mRNA levels or heat shock protein 72 content in the myocardium after 3 hours of reperfusion. Our experimental model is valid and the experimental techniques are appropriate, but the induction of heat shock protein 72 as an end effector of cardioprotection in ischemic preconditioning does not occur in the first hours after ischemia, but probably at least 24 hours after it, in the so-called “second protection window.” PMID:17009598
Subspace-Based Holistic Registration for Low-Resolution Facial Images
NASA Astrophysics Data System (ADS)
Boom, B. J.; Spreeuwers, L. J.; Veldhuis, R. N. J.
2010-12-01
Subspace-based holistic registration is introduced as an alternative to landmark-based face registration, which has a poor performance on low-resolution images, as obtained in camera surveillance applications. The proposed registration method finds the alignment by maximizing the similarity score between a probe and a gallery image. We use a novel probabilistic framework for both user-independent as well as user-specific face registration. The similarity is calculated using the probability that the face image is correctly aligned in a face subspace, but additionally we take the probability into account that the face is misaligned based on the residual error in the dimensions perpendicular to the face subspace. We perform extensive experiments on the FRGCv2 database to evaluate the impact that the face registration methods have on face recognition. Subspace-based holistic registration on low-resolution images can improve face recognition in comparison with landmark-based registration on high-resolution images. The performance of the tested face recognition methods after subspace-based holistic registration on a low-resolution version of the FRGC database is similar to that after manual registration.
NASA Astrophysics Data System (ADS)
Hu, Xingzhi; Parks, Geoffrey T.; Chen, Xiaoqian; Seshadri, Pranay
2016-03-01
Uncertainty quantification has recently been receiving much attention from the aerospace engineering community. With ever-increasing requirements for robustness and reliability, it is crucial to quantify multidisciplinary uncertainty in satellite system design, which dominates the overall design direction and cost. However, coupled disciplines and cross-propagation hamper the efficiency and accuracy of high-dimensional uncertainty analysis. In this study, an uncertainty quantification methodology based on active subspaces is established for satellite conceptual design. The active subspace effectively reduces the dimension and measures the contributions of input uncertainties. A comprehensive characterization of the associated uncertain factors is made and all subsystem models are built for uncertainty propagation. By integrating a system decoupling strategy, the multidisciplinary uncertainty effect is efficiently represented by a one-dimensional active subspace for each design. The identified active subspace is checked by bootstrap resampling for confidence intervals and verified by Monte Carlo propagation for accuracy. To show the performance of active subspaces, 18 uncertainty parameters of an Earth observation small satellite are exemplified and then another 5 design uncertainties are incorporated. The uncertainties that contribute the most to satellite mass and total cost are ranked, and the quantification of high-dimensional uncertainty is achieved with a relatively small number of support samples. The methodology, at considerably less cost, exhibits high accuracy and strong adaptability, which provides a potential template for tackling multidisciplinary uncertainty in practical satellite systems.
Improved Subspace Estimation for Low-Rank Model-Based Accelerated Cardiac Imaging
Hitchens, T. Kevin; Wu, Yijen L.; Ho, Chien; Liang, Zhi-Pei
2014-01-01
Sparse sampling methods have emerged as effective tools to accelerate cardiac magnetic resonance imaging (MRI). Low-rank model-based cardiac imaging uses a pre-determined temporal subspace for image reconstruction from highly under-sampled (k, t)-space data and has been demonstrated effective for high-speed cardiac MRI. The accuracy of the temporal subspace is a key factor in these methods, yet little work has been published on data acquisition strategies to improve subspace estimation. This paper investigates the use of non-Cartesian k-space trajectories to replace the Cartesian trajectories which are omnipresent but are highly sensitive to readout direction. We also propose “self-navigated” pulse sequences which collect both navigator data (for determining the temporal subspace) and imaging data after every RF pulse, allowing for even greater acceleration. We investigate subspace estimation strategies through analysis of phantom images and demonstrate in vivo cardiac imaging in rats and mice without the use of ECG or respiratory gating. The proposed methods achieved 3-D imaging of wall motion, first-pass myocardial perfusion, and late gadolinium enhancement in rats at 74 frames per second (fps), as well as 2-D imaging of wall motion in mice at 97 fps. PMID:24801352
Face Recognition Using Sparse Representation-Based Classification on K-Nearest Subspace
Mi, Jian-Xun; Liu, Jin-Xing
2013-01-01
The sparse representation-based classification (SRC) has been proven to be a robust face recognition method. However, its computational complexity is very high due to solving a complex ℓ1-minimization problem. To improve the calculation efficiency, we propose a novel face recognition method, called sparse representation-based classification on k-nearest subspace (SRC-KNS). Our method first exploits the distance between the test image and the subspace of each individual class to determine the k nearest subspaces and then performs SRC on the selected classes. SRC-KNS is thus able to greatly reduce the scale of the sparse representation problem, and the computation to determine the nearest subspaces is quite simple. Therefore, SRC-KNS has a much lower computational complexity than the original SRC. In order to recognize occluded face images well, we propose the modular SRC-KNS. For this modular method, face images are first partitioned into a number of blocks, and then we propose an indicator to remove the contaminated blocks and choose the nearest subspaces. Finally, SRC is used to classify the occluded test sample in the new feature space. Compared to the approach used in the original SRC work, our modular SRC-KNS can greatly reduce the computational load. A number of face recognition experiments show that our methods are at least five times faster than the original SRC, while achieving comparable or even better recognition rates. PMID:23555671
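The screening step of SRC-KNS, selecting the k classes whose training-image spans lie nearest to the test sample, can be sketched as below; the full method then runs SRC (ℓ1-minimization) on the selected classes only. Names and interface are ours.

```python
import numpy as np

def nearest_subspaces(class_bases, y, k):
    """Rank class subspaces by the residual of projecting test sample y onto
    each class's training-image span, keeping the k nearest classes."""
    residuals = []
    for A in class_bases:             # A: d x n_i matrix of class-i training images
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        residuals.append(np.linalg.norm(y - A @ coef))
    order = np.argsort(residuals)
    return order[:k]                  # indices of the k nearest classes
```

Each screening step is a small least-squares solve, which is why it is far cheaper than running ℓ1-minimization over all classes.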
Energy-Conservative Newton-Krylov Implicit Solver for the Fokker-Planck Equation
NASA Astrophysics Data System (ADS)
Knoll, D. A.; Barnes, D. C.; Chacón, L.
1998-11-01
Energy conservation in 1D Fokker-Planck problems has been addressed by Epperlein [J. Comp. Phys. 112, 291-297 (1994)], who proposed an implicit method that preserves energy exactly for any time step, provided the energy moment cancels exactly. Although this method can be generalized to several dimensions, standard discretization techniques in multidimensional geometries generally do not guarantee the numerical cancellation of the energy moment, hence precluding exact energy conservation. Furthermore, its numerical implementation is non-trivial, as it involves a dense, non-symmetric matrix of coefficients. It is the objective of this poster to describe the implementation of an implicit energy-conservative scheme for multidimensional Fokker-Planck problems. A new discretization procedure that ensures the numerical cancellation of the energy moment will be discussed. The dense algebraic problem that results from this formulation is solved efficiently by the multigrid-preconditioned matrix-free GMRES [Saad and Schultz, SIAM J. Sci. Stat. Comp. 7, 856-869 (1986)] iterative technique, which minimizes storage and runtime requirements, and allows implicit time steps of the order of the collisional time scale of the problem, τ. Results will show that the method preserves particles and energy exactly.
On adaptive weighted polynomial preconditioning for Hermitian positive definite matrices
NASA Technical Reports Server (NTRS)
Fischer, Bernd; Freund, Roland W.
1992-01-01
The conjugate gradient algorithm for solving Hermitian positive definite linear systems is usually combined with preconditioning in order to speed up convergence. In recent years, there has been a revival of polynomial preconditioning, motivated by the attractive features of the method on modern architectures. Standard techniques for choosing the preconditioning polynomial are based only on bounds for the extreme eigenvalues. Here a different approach is proposed, which aims at adapting the preconditioner to the eigenvalue distribution of the coefficient matrix. The technique is based on the observation that good estimates for the eigenvalue distribution can be derived after only a few steps of the Lanczos process. This information is then used to construct a weight function for a suitable Chebyshev approximation problem. The solution of this problem yields the polynomial preconditioner. In particular, we investigate the use of Bernstein-Szego weights.
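For orientation, the baseline this paper refines: a fixed-degree Chebyshev polynomial in A, built from extreme-eigenvalue bounds alone, applied to a vector as the preconditioning step. The adaptive weighted construction described above goes beyond this sketch; the function name and interface are ours.

```python
import numpy as np

def chebyshev_preconditioner(matvec, r, lam_min, lam_max, degree):
    """Apply a degree-`degree` Chebyshev polynomial in the SPD operator A to r,
    i.e. an approximation to A^{-1} r, using only the eigenvalue bounds
    [lam_min, lam_max]. Standard Chebyshev iteration started from zero."""
    theta = 0.5 * (lam_max + lam_min)   # center of the eigenvalue interval
    delta = 0.5 * (lam_max - lam_min)   # half-width of the interval
    sigma1 = theta / delta
    rho = 1.0 / sigma1
    d = r / theta
    x = d.copy()
    res = r - matvec(d)                 # residual after the first update
    for _ in range(degree - 1):
        rho_next = 1.0 / (2.0 * sigma1 - rho)
        d = rho_next * rho * d + (2.0 * rho_next / delta) * res
        x = x + d
        res = res - matvec(d)
        rho = rho_next
    return x
```

Because only bounds on the spectrum are needed (no factorization), this kind of polynomial preconditioner is attractive on architectures where matvecs are cheap, which is the motivation the abstract cites.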
Gpu Implementation of Preconditioning Method for Low-Speed Flows
NASA Astrophysics Data System (ADS)
Zhang, Jiale; Chen, Hongquan
2016-06-01
An improved preconditioning method for low-Mach-number flows is implemented on a GPU platform. The improved preconditioning method employs the fluctuation of the fluid variables to weaken the loss of accuracy caused by truncation error. The GPU parallel computing platform is used to accelerate the calculations. Both the improved preconditioning method and the GPU implementation technology are described in detail in this paper. A set of typical low-speed flow cases are then simulated for both validation and performance analysis of the resulting GPU solver. Numerical results show that speedups of dozens of times relative to a serial CPU implementation can be achieved on a single-GPU desktop platform, which demonstrates that the GPU desktop can serve as a cost-effective parallel computing platform to substantially accelerate CFD simulations of low-speed flows.
Operator-Based Preconditioning of Stiff Hyperbolic Systems
Reynolds, Daniel R.; Samtaney, Ravi; Woodward, Carol S.
2009-02-09
We introduce an operator-based scheme for preconditioning stiff components encountered in implicit methods for hyperbolic systems of partial differential equations posed on regular grids. The method is based on a directional splitting of the implicit operator, followed by a characteristic decomposition of the resulting directional parts. This approach allows for solution to any number of characteristic components, from the entire system to only the fastest, stiffness-inducing waves. We apply the preconditioning method to stiff hyperbolic systems arising in magnetohydrodynamics and gas dynamics. We then present numerical results showing that this preconditioning scheme works well on problems where the underlying stiffness results from the interaction of fast transient waves with slowly-evolving dynamics, scales well to large problem sizes and numbers of processors, and allows for additional customization based on the specific problems under study.
Joseph, Ilon
2014-05-27
Jacobian-free Newton-Krylov (JFNK) algorithms are a potentially powerful class of methods for solving the problem of coupling codes that address different physics models. As the communication capability between individual submodules varies, different choices of coupling algorithm are required. The more communication that is available, the more possible it becomes to exploit the simple sparsity pattern of the Jacobian, albeit of a large system. The less communication that is available, the denser the Jacobian matrices become, and new types of preconditioners must be sought to efficiently take large time steps. In general, methods that use constrained or reduced subsystems can offer a compromise in complexity. The specific problem of coupling a fluid plasma code to a kinetic neutrals code is discussed as an example.
Preconditioning boosts regenerative programmes in the adult zebrafish heart
de Preux Charles, Anne-Sophie; Bise, Thomas; Baier, Felix; Sallin, Pauline; Jaźwińska, Anna
2016-01-01
During preconditioning, exposure to a non-lethal harmful stimulus triggers a body-wide increase of survival and pro-regenerative programmes that enable the organism to better withstand the deleterious effects of subsequent injuries. This phenomenon was first described in the mammalian heart, where it leads to a reduction of infarct size and limits the dysfunction of the injured organ. Despite its important clinical outcome, the actual mechanisms underlying preconditioning-induced cardioprotection remain unclear. Here, we describe two independent models of cardiac preconditioning in the adult zebrafish. As noxious stimuli, we used either a thoracotomy procedure or an induction of sterile inflammation by intraperitoneal injection of immunogenic particles. Similar to mammalian preconditioning, the zebrafish heart displayed increased expression of cardioprotective genes in response to these stimuli. As zebrafish cardiomyocytes have an endogenous proliferative capacity, preconditioning further elevated re-entry into the cell cycle in the intact heart. This enhanced cycling activity led to a long-term modification of the myocardium architecture. Importantly, the protected phenotype brought beneficial effects for heart regeneration within one week after cryoinjury, such as more effective cell-cycle re-entry, enhanced reactivation of embryonic gene expression at the injury border, and improved cell survival shortly after injury. This study reveals that exposure to antecedent stimuli induces adaptive responses that render the fish more efficient in the activation of regenerative programmes following heart damage. Our results open a new field of research by providing the adult zebrafish as a model system to study remote cardiac preconditioning. PMID:27440423
Optimal bounds for solving tridiagonal systems with preconditioning
Zellini, P.
1988-10-01
Let (1) Tx = f be a linear tridiagonal system of n equations in the unknowns x_1, ..., x_n. It is proved that 3n-2 (nonscalar) multiplications/divisions are necessary to solve (1) in a straight-line program excluding divisions by elements of f. This bound is optimal if the cost of preconditioning T is not counted. Analogous results are obtained in the cases where (i) T is bidiagonal and (ii) T and f are both centrosymmetric. The existence of parallel algorithms to solve (1) with preconditioning and with minimal multiplicative redundancy is also discussed.
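For comparison with the 3n-2 lower bound, the standard sequential solve (the Thomas algorithm) uses roughly 3n multiplications/divisions; a minimal sketch, with the usual convention that a[0] and c[n-1] are unused:

```python
def solve_tridiagonal(a, b, c, f):
    """Thomas algorithm for T x = f, where b is the main diagonal and a, c
    the sub- and super-diagonals. Forward elimination then back substitution."""
    n = len(b)
    cp = [0.0] * n                     # modified super-diagonal
    fp = [0.0] * n                     # modified right-hand side
    cp[0] = c[0] / b[0]
    fp[0] = f[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]    # pivot after elimination
        cp[i] = c[i] / m if i < n - 1 else 0.0
        fp[i] = (f[i] - a[i] * fp[i - 1]) / m
    x = [0.0] * n
    x[-1] = fp[-1]
    for i in range(n - 2, -1, -1):     # back substitution
        x[i] = fp[i] - cp[i] * x[i + 1]
    return x
```

The preconditioning discussed in the abstract concerns transforming T beforehand so that the subsequent straight-line solve meets the multiplicative lower bound; the routine above is the unpreconditioned reference point.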
Incomplete block SSOR preconditionings for high order discretizations
Kolotilina, L.
1994-12-31
This paper considers the solution of linear algebraic systems Ax = b resulting from the p-version of the Finite Element Method (FEM) using PCG iterations. In contrast to the h-version, the p-version ensures the desired accuracy of a discretization not by refining an original finite element mesh but by introducing higher-degree polynomials as additional basis functions, which makes it possible to reduce the size of the resulting linear system as compared with the h-version. The suggested preconditionings are the so-called Incomplete Block SSOR (IBSSOR) preconditionings.
Ischemic preconditioning for cell-based therapy and tissue engineering.
Hsiao, Sarah T; Dilley, Rodney J; Dusting, Gregory J; Lim, Shiang Y
2014-05-01
Cell- and tissue-based therapies are innovative strategies to repair and regenerate injured hearts. Despite major advances achieved in optimizing these strategies in terms of cell source and delivery method, the clinical outcome of cell-based therapy remains unsatisfactory. The non-genetic approach of ischemic/hypoxic preconditioning to enhance cell- and tissue-based therapies has received much attention in recent years due to its non-invasive drug-free application. Here we discuss the current development of hypoxic/ischemic preconditioning to enhance stem cell-based cardiac repair and regeneration.
Fourier analysis of finite element preconditioned collocation schemes
NASA Technical Reports Server (NTRS)
Deville, Michel O.; Mund, Ernest H.
1990-01-01
The spectrum of the iteration operator of some finite element preconditioned Fourier collocation schemes is investigated. The first part of the paper analyses one-dimensional elliptic and hyperbolic model problems and the advection-diffusion equation. Analytical expressions of the eigenvalues are obtained with use of symbolic computation. The second part of the paper considers the set of one-dimensional differential equations resulting from Fourier analysis (in the transverse direction) of the 2-D Stokes problem. All results agree with previous conclusions on the numerical efficiency of finite element preconditioning schemes.
Choice of Variables and Preconditioning for Time Dependent Problems
NASA Technical Reports Server (NTRS)
Turkel, Eli; Vatsa, Veer N.
2003-01-01
We consider the use of low speed preconditioning for time dependent problems. These are solved using a dual time step approach. We consider the effect of this dual time step on the parameter of the low speed preconditioning. In addition, we compare the use of two sets of variables, conservation and primitive variables, to solve the system. We show the effect of these choices on both the convergence to a steady state and the accuracy of the numerical solutions for low Mach number steady state and time dependent flows.
Recent Progress in Parallel Schur Complement Preconditioning for Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Barth, Tim; Kwak, Dochan (Technical Monitor)
1997-01-01
We consider preconditioning methods for non-self-adjoint advective-diffusive systems based on a nonoverlapping Schur complement procedure for arbitrary triangulated domains. The triangulation is first partitioned using the METIS multi-level k-way partitioning code. This partitioning of the triangulation induces a natural 2x2 partitioning of the discretization matrix. By considering various inverse approximations of the 2x2 system we have developed a family of robust preconditioning techniques. The performance of these approximations will be discussed and numerous examples shown to illustrate the efficiency of the technique.
Multi-qubit non-adiabatic holonomic controlled quantum gates in decoherence-free subspaces
NASA Astrophysics Data System (ADS)
Hu, Shi; Cui, Wen-Xue; Guo, Qi; Wang, Hong-Fu; Zhu, Ai-Dong; Zhang, Shou
2016-09-01
Non-adiabatic holonomic quantum gates in decoherence-free subspaces are of great practical importance due to their built-in fault tolerance, coherence stabilization virtues, and short run-time. Here, we propose some compact schemes to implement two- and three-qubit controlled unitary quantum gates and the Fredkin gate. For the controlled unitary quantum gates, the unitary operator acting on the target qubit is an arbitrary single-qubit gate operation. The controlled quantum gates can be directly implemented by utilizing non-adiabatic holonomy in decoherence-free subspaces, and the resource required for the decoherence-free subspace encoding is minimal: only two neighboring physical qubits undergoing collective dephasing are used to encode a logical qubit.
A majorize-minimize strategy for subspace optimization applied to image restoration.
Chouzenoux, Emilie; Idier, Jérôme; Moussaoui, Saïd
2011-06-01
This paper proposes accelerated subspace optimization methods in the context of image restoration. Subspace optimization methods belong to the class of iterative descent algorithms for unconstrained optimization. At each iteration of such methods, a stepsize vector allowing the best combination of several search directions is computed through a multidimensional search. It is usually obtained by an inner iterative second-order method ruled by a stopping criterion that guarantees the convergence of the outer algorithm. As an alternative, we propose an original multidimensional search strategy based on the majorize-minimize principle. It leads to a closed-form stepsize formula that ensures the convergence of the subspace algorithm whatever the number of inner iterations. The practical efficiency of the proposed scheme is illustrated in the context of edge-preserving image restoration.
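The closed-form multidimensional stepsize can be sketched as follows. This is a minimal illustration, not the paper's half-quadratic majorant for edge-preserving restoration: we take a least-squares objective (whose Hessian A^T A is trivially a valid curvature bound) and memory-gradient directions, so the majorize-minimize step reduces to one small linear solve per iteration, with no inner iterative search:

```python
import numpy as np

def mm_subspace_step(x, D, grad, B):
    """One majorize-minimize step over a search subspace D.

    For the quadratic majorant q(s) = f(x) + grad^T D s + 0.5 s^T (D^T B D) s,
    with B a curvature (majorant Hessian) bound, the multidimensional
    stepsize has the closed form s = -(D^T B D)^+ D^T grad.
    """
    G = D.T @ B @ D
    s = -np.linalg.pinv(G) @ (D.T @ grad)
    return x + D @ s

# Illustration on f(x) = 0.5 ||Ax - b||^2 with memory-gradient directions
# [current gradient, previous step]; B = A^T A is an exact majorant here.
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
B = A.T @ A
x = np.zeros(5)
prev = np.zeros(5)
for _ in range(30):
    grad = A.T @ (A @ x - b)
    if np.linalg.norm(grad) < 1e-10:
        break
    D = np.column_stack([grad, prev]) if np.linalg.norm(prev) > 0 else grad[:, None]
    x_new = mm_subspace_step(x, D, grad, B)
    prev, x = x_new - x, x_new
```

Because each step minimizes the majorant exactly over the subspace, the objective decreases monotonically whatever the (absent) inner iteration count, which is the property the paper's convergence argument rests on.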
Estimation of direction of arrival of a moving target using subspace based approaches
NASA Astrophysics Data System (ADS)
Ghosh, Ripul; Das, Utpal; Akula, Aparna; Kumar, Satish; Sardana, H. K.
2016-05-01
In this work, array processing techniques based on subspace decomposition of signal have been evaluated for estimation of direction of arrival of moving targets using acoustic signatures. Three subspace based approaches - Incoherent Wideband Multiple Signal Classification (IWM), Least Squares-Estimation of Signal Parameters via Rotational Invariance Techniques (LS-ESPRIT) and Total Least Squares-ESPRIT (TLS-ESPRIT) - are considered. Their performance is compared with conventional time delay estimation (TDE) approaches such as Generalized Cross Correlation (GCC) and Average Square Difference Function (ASDF). Performance evaluation has been conducted on experimentally generated data consisting of acoustic signatures of four different types of civilian vehicles moving in defined geometrical trajectories. Mean absolute error and standard deviation of the DOA estimates w.r.t. ground truth are used as performance evaluation metrics. Lower statistical values of mean error confirm the superiority of subspace based approaches over TDE based techniques. Amongst the compared methods, LS-ESPRIT indicated better performance.
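A subspace DOA estimator of the kind compared above can be sketched with narrowband MUSIC on a synthetic uniform linear array. The geometry, SNR, and angles below are illustrative choices, not the vehicle data of the study (which also uses wideband and ESPRIT variants):

```python
import numpy as np

def music_doa(X, n_sources, d=0.5, grid=np.linspace(-90, 90, 721)):
    """Narrowband MUSIC for a uniform linear array (ULA).

    X: (n_sensors, n_snapshots) complex snapshots; d: element spacing in
    wavelengths. The sample covariance is eigendecomposed; steering
    vectors nearly orthogonal to the noise subspace produce peaks in the
    pseudospectrum at the source bearings.
    """
    m = X.shape[0]
    R = X @ X.conj().T / X.shape[1]
    _, V = np.linalg.eigh(R)               # eigenvalues ascending
    En = V[:, : m - n_sources]             # noise-subspace basis
    k = np.arange(m)
    spec = np.empty(grid.size)
    for i, theta in enumerate(grid):
        a = np.exp(2j * np.pi * d * k * np.sin(np.deg2rad(theta)))
        spec[i] = 1.0 / np.linalg.norm(En.conj().T @ a) ** 2
    return grid, spec

# Synthetic check: two uncorrelated sources at -20 and +30 degrees on an
# 8-element half-wavelength ULA.
rng = np.random.default_rng(2)
m, snaps = 8, 200
angles = np.array([-20.0, 30.0])
A = np.exp(2j * np.pi * 0.5 * np.outer(np.arange(m), np.sin(np.deg2rad(angles))))
S = rng.standard_normal((2, snaps)) + 1j * rng.standard_normal((2, snaps))
N = 0.1 * (rng.standard_normal((m, snaps)) + 1j * rng.standard_normal((m, snaps)))
grid, spec = music_doa(A @ S + N, 2)
# Crude peak picking: the two largest local maxima of the pseudospectrum.
locmax = np.flatnonzero((spec[1:-1] > spec[:-2]) & (spec[1:-1] > spec[2:])) + 1
peaks = np.sort(grid[locmax[np.argsort(spec[locmax])[-2:]]])
```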
Learning multiview face subspaces and facial pose estimation using independent component analysis.
Li, Stan Z; Lu, XiaoGuang; Hou, Xinwen; Peng, Xianhua; Cheng, Qiansheng
2005-06-01
An independent component analysis (ICA) based approach is presented for learning view-specific subspace representations of the face object from multiview face examples. ICA and its variants, namely independent subspace analysis (ISA) and topographic independent component analysis (TICA), take into account the higher order statistics needed for object view characterization. In contrast, principal component analysis (PCA), which de-correlates the second order moments, can hardly reveal good features for characterizing different views when the training data comprise a mixture of multiview examples and the learning is done in an unsupervised way with view-unlabeled data. We demonstrate that ICA, TICA, and ISA are able to learn view-specific basis components from the mixture data without supervision. We closely investigate the results learned by ISA in an unsupervised way, reveal some surprising findings, and thereby explain the underlying reasons for the emergent formation of view subspaces. Extensive experimental results are presented.
Entropic sampling via Wang-Landau random walks in dominant energy subspaces
NASA Astrophysics Data System (ADS)
Malakis, A.; Martinos, S. S.; Hadjiagapiou, I. A.; Fytas, N. G.; Kalozoumis, P.
2005-12-01
Dominant energy subspaces of statistical systems are defined with the help of restrictive conditions on various characteristics of the energy distribution, such as the probability density and the fourth order Binder’s cumulant. Our analysis generalizes the ideas of the critical minimum energy subspace (CRMES) technique, applied previously to study the specific heat’s finite-size scaling. Here, we illustrate alternatives that are useful for the analysis of further finite-size anomalies, and the behavior of the corresponding dominant subspaces is presented for the two-dimensional (2D) Baxter-Wu and the 2D and 3D Ising models. In order to show that a CRMES technique is adequate for the study of magnetic anomalies, we study and test simple methods which provide the means for an accurate determination of the energy-order-parameter (E,M) histograms via Wang-Landau random walks. The 2D Ising model is used as a test case and it is shown that high-level Wang-Landau sampling schemes yield excellent estimates for all magnetic properties. Our estimates compare very well with those of the traditional Metropolis method. The relevant dominant energy subspaces and dominant magnetization subspaces scale as expected with exponents α/ν and γ/ν, respectively. Using the Metropolis method we examine the time evolution of the corresponding dominant magnetization subspaces and we uncover the reasons behind the inadequacy of the Metropolis method to produce a reliable estimation scheme for the tail regime of the order-parameter distribution.
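The Wang-Landau random walk at the heart of the abstract can be sketched on a toy model. This is a minimal 1D Ising ring (not the Baxter-Wu or 2D/3D Ising systems studied in the paper), small enough that the density of states g(E) is known exactly: for L=8 the levels E = -8, -4, 0, 4, 8 have g = 2, 56, 140, 56, 2:

```python
import numpy as np

def wang_landau_1d_ising(L=8, f_final=1e-6, flatness=0.8, seed=3):
    """Wang-Landau random walk estimating ln g(E) for a 1D Ising ring.

    The L-spin ring has E = -sum_i s_i s_{i+1} in {-L, -L+4, ..., L}.
    Spin flips are accepted with min(1, g(E)/g(E')), which drives a flat
    histogram over energy -- the essence of entropic sampling.
    """
    rng = np.random.default_rng(seed)
    energies = np.arange(-L, L + 1, 4)
    index = {int(e): i for i, e in enumerate(energies)}
    lng = np.zeros(energies.size)          # running estimate of ln g(E)
    s = rng.choice([-1, 1], size=L)
    E = -int(np.sum(s * np.roll(s, 1)))
    lnf = 1.0                              # ln of the modification factor
    while lnf > f_final:
        hist = np.zeros(energies.size)
        while True:
            for _ in range(1000):
                i = int(rng.integers(L))
                dE = 2 * int(s[i]) * int(s[i - 1] + s[(i + 1) % L])
                Enew = E + dE
                if rng.random() < np.exp(min(0.0, lng[index[E]] - lng[index[Enew]])):
                    s[i] = -s[i]
                    E = Enew
                lng[index[E]] += lnf
                hist[index[E]] += 1
            if hist.min() > flatness * hist.mean():
                break                      # histogram flat: refine factor
        lnf /= 2.0
    return energies, lng - lng[0]          # normalize so ln g(-L) = 0
```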
Boundary regularity of Nevanlinna domains and univalent functions in model subspaces
NASA Astrophysics Data System (ADS)
Baranov, Anton D.; Fedorovskiy, Konstantin Yu
2011-12-01
In the paper we study boundary regularity of Nevanlinna domains, which have appeared in problems of uniform approximation by polyanalytic polynomials. A new method for constructing Nevanlinna domains with essentially irregular nonanalytic boundaries is suggested; this method is based on finding appropriate univalent functions in model subspaces, that is, in subspaces of the form K_Θ = H² ⊖ ΘH², where Θ is an inner function. To describe the irregularity of the boundaries of the domains obtained, recent results by Dolzhenko about boundary regularity of conformal mappings are used. Bibliography: 18 titles.
Recursive encoding and decoding of the noiseless subsystem and decoherence-free subspace
Li, Chi-Kwong; Nakahara, Mikio; Poon, Yiu-Tung; Sze, Nung-Sing; Tomita, Hiroyuki
2011-10-15
When an environmental disturbance to a quantum system has a wavelength much larger than the system size, all qubits in the system are under the action of the same error operator. The noiseless subsystem and decoherence-free subspace are immune to such collective noise. We construct simple quantum circuits that implement these error-avoiding codes for a small number n of physical qubits. A single logical qubit is encoded with n=3 and 4, while two and three logical qubits are encoded with n=5 and 7, respectively. Recursive relations among subspaces employed in these codes play essential roles in our implementation.
Experimental application of decoherence-free subspaces in an optical quantum-computing algorithm.
Mohseni, M; Lundeen, J S; Resch, K J; Steinberg, A M
2003-10-31
For a practical quantum computer to operate, it is essential to properly manage decoherence. One important technique for doing this is the use of "decoherence-free subspaces" (DFSs), which have recently been demonstrated. Here we present the first use of DFSs to improve the performance of a quantum algorithm. An optical implementation of the Deutsch-Jozsa algorithm can be made insensitive to a particular class of phase noise by encoding information in the appropriate subspaces; we observe a reduction of the error rate from 35% to 7%, essentially its value in the absence of noise.
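The mechanism exploited here is easy to verify numerically. The toy below (a minimal illustration, not the optical Deutsch-Jozsa setup of the experiment) applies collective dephasing to two qubits and checks that a state encoded in the span of |01> and |10> acquires only a global phase, while a state outside that subspace loses overlap with itself:

```python
import numpy as np

# Collective dephasing: each physical qubit in |1> acquires the same
# phase e^{i*phi}. On the basis {|00>,|01>,|10>,|11>} this is
# diag(1, e^{i phi}, e^{i phi}, e^{2 i phi}).
phi = 0.7
P = np.diag([np.exp(1j * phi * b.count("1")) for b in ["00", "01", "10", "11"]])

# Logical qubit encoded in the DFS span{|01>, |10>}: both basis states
# pick up the same phase, so every logical state is preserved.
ket01 = np.array([0, 1, 0, 0], dtype=complex)
ket10 = np.array([0, 0, 1, 0], dtype=complex)
psi = (ket01 + 1j * ket10) / np.sqrt(2)
overlap = abs(np.vdot(psi, P @ psi))        # unity: unchanged up to global phase

# Contrast: span{|00>, |11>} is not decoherence-free under this noise.
psi_bad = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
overlap_bad = abs(np.vdot(psi_bad, P @ psi_bad))   # |cos(phi)| < 1
```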
40 CFR 86.1232-96 - Vehicle preconditioning.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Methanol-Fueled Heavy-Duty Vehicles § 86.1232-96 Vehicle preconditioning. (a) Fuel tank cap(s) of gasoline... prevent entry of water or other contaminants into the fuel tank. During storage in the test area while awaiting testing, the fuel tank cap(s) may be in place. The vehicle shall be moved into the test area...
[Myocardial serotonin metabolism after local ischemia and ischemic precondition].
Naumenko, S E; Latysheva, T V; Gilinskiĭ, M A
2014-07-01
To determine the effect of ischemic preconditioning upon myocardial serotonin and 5-hydroxyindolacetic acid (5-HIAA) dynamics in myocardial ischemia and reperfusion, 28 male Wistar rats anesthetized with urethane were randomly divided into 2 groups. In the control group (n = 13) rats were subjected to 30 min coronary occlusion and subsequent 120 min reperfusion. In the experimental group (n = 15) ischemic preconditioning (3 x 3 min ischemia + 3 x 3 min reperfusion) before prolonged ischemia was used. Myocardial interstitial serotonin and 5-HIAA were measured using a microdialysis technique. Myocardial serotonin and 5-HIAA significantly increased after ischemic preconditioning (p = 0.00298; p = 0.00187). In prolonged ischemia the interstitial serotonin level was lower in the experimental group vs. control up to 20 min of ischemia (p < 0.05). We conclude that ischemic preconditioning increases interstitial myocardial serotonin but inhibits the serotonin increase in subsequent prolonged myocardial ischemia. After 20 minutes of reperfusion a lack of correlation between serotonin and 5-HIAA levels appeared, which may be evidence of serotonin uptake activation.
33 CFR 183.220 - Preconditioning for tests.
Code of Federal Regulations, 2011 CFR
2011-07-01
... (CONTINUED) BOATING SAFETY BOATS AND ASSOCIATED EQUIPMENT Flotation Requirements for Outboard Boats Rated for Engines of More Than 2 Horsepower General § 183.220 Preconditioning for tests. A boat must meet the... boat. (b) The boat must be loaded with a quantity of weight that, when submerged, is equal to the...
40 CFR 86.1774-99 - Vehicle preconditioning.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 19 2010-07-01 2010-07-01 false Vehicle preconditioning. 86.1774-99... (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES (CONTINUED) General Provisions for the Voluntary National Low Emission Vehicle Program for Light-Duty Vehicles and...
40 CFR 86.1774-99 - Vehicle preconditioning.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 19 2011-07-01 2011-07-01 false Vehicle preconditioning. 86.1774-99... (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES (CONTINUED) General Provisions for the Voluntary National Low Emission Vehicle Program for Light-Duty Vehicles and...
40 CFR 86.1232-96 - Vehicle preconditioning.
Code of Federal Regulations, 2012 CFR
2012-07-01
... system is stabilized. The additional preconditioning shall consist of an initial one hour minimum soak... step to avoid damage to the components and the integrity of the fuel system. A replacement canister may... system. A replacement canister may be temporarily installed during the soak period while the...
Nanoparticle Pre-Conditioning for Enhanced Thermal Therapies in Cancer
Shenoi, Mithun M.; Shah, Neha B.; Griffin, Robert J.; Vercellotti, Gregory M.; Bischof, John C.
2011-01-01
Nanoparticles show tremendous promise in the safe and effective delivery of molecular adjuvants to enhance local cancer therapy. One important form of local cancer treatment that suffers from local recurrence and distant metastases is thermal therapy. Here we review a new concept involving the use of nanoparticle delivered adjuvants to “pre-condition” or alter the vascular and immunological biology of the tumor to enhance its susceptibility to thermal therapy. To this end, a number of opportunities to combine nanoparticles with vascular and immunologically active agents are reviewed. One specific example of pre-conditioning involves a gold nanoparticle tagged with a vascular targeting agent (i.e. TNF-α). This nanoparticle embodiment demonstrates pre-conditioning through a dramatic reduction in tumor blood flow and induction of vascular damage which recruits a strong and sustained inflammatory infiltrate in the tumor. The ability of this nanoparticle pre-conditioning to enhance subsequent heat or cold thermal therapy in a variety of tumor models is reviewed. Finally, the potential for future clinical imaging to judge the extent of pre-conditioning and thus the optimal timing and extent of combinatorial thermal therapy is discussed. PMID:21542691
40 CFR 1066.405 - Vehicle preparation and preconditioning.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 33 2014-07-01 2014-07-01 false Vehicle preparation and preconditioning. 1066.405 Section 1066.405 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS VEHICLE-TESTING PROCEDURES Preparing Vehicles and Running an Exhaust...
40 CFR 86.132-00 - Vehicle preconditioning.
Code of Federal Regulations, 2013 CFR
2013-07-01
... conditioning (SC03) can be run immediately or up to 72 hours after the official FTP and/or evaporative test... than two hours, preconditioning consists of one full Urban Dynamometer Driving Cycle. Manufacturers, at... this section, the test vehicle is turned off, the vehicle cooling fan(s) is turned off, and the...
40 CFR 86.132-00 - Vehicle preconditioning.
Code of Federal Regulations, 2012 CFR
2012-07-01
... conditioning (SC03) can be run immediately or up to 72 hours after the official FTP and/or evaporative test... than two hours, preconditioning consists of one full Urban Dynamometer Driving Cycle. Manufacturers, at... this section, the test vehicle is turned off, the vehicle cooling fan(s) is turned off, and the...
40 CFR 86.132-00 - Vehicle preconditioning.
Code of Federal Regulations, 2010 CFR
2010-07-01
... conditioning (SC03) can be run immediately or up to 72 hours after the official FTP and/or evaporative test... than two hours, preconditioning consists of one full Urban Dynamometer Driving Cycle. Manufacturers, at... this section, the test vehicle is turned off, the vehicle cooling fan(s) is turned off, and the...
40 CFR 86.132-96 - Vehicle preconditioning.
Code of Federal Regulations, 2012 CFR
2012-07-01
..., on a dynamometer and operated through one Urban Dynamometer Driving Schedule (UDDS), specified in... additional preconditioning allows the vehicle to adapt to the new fuel before the next test run. (A) Purge.... (G) Following the dynamometer drive, the vehicle shall be turned off for 5 minutes, then...
40 CFR 86.132-00 - Vehicle preconditioning.
Code of Federal Regulations, 2014 CFR
2014-07-01
... conditioning (SC03) can be run immediately or up to 72 hours after the official FTP and/or evaporative test... than two hours, preconditioning consists of one full Urban Dynamometer Driving Cycle. Manufacturers, at... this section, the test vehicle is turned off, the vehicle cooling fan(s) is turned off, and the...
40 CFR 86.132-00 - Vehicle preconditioning.
Code of Federal Regulations, 2011 CFR
2011-07-01
... conditioning (SC03) can be run immediately or up to 72 hours after the official FTP and/or evaporative test... than two hours, preconditioning consists of one full Urban Dynamometer Driving Cycle. Manufacturers, at... this section, the test vehicle is turned off, the vehicle cooling fan(s) is turned off, and the...
40 CFR 86.132-96 - Vehicle preconditioning.
Code of Federal Regulations, 2014 CFR
2014-07-01
..., on a dynamometer and operated through one Urban Dynamometer Driving Schedule (UDDS), specified in... additional preconditioning allows the vehicle to adapt to the new fuel before the next test run. (A) Purge.... (G) Following the dynamometer drive, the vehicle shall be turned off for 5 minutes, then...
40 CFR 86.132-96 - Vehicle preconditioning.
Code of Federal Regulations, 2010 CFR
2010-07-01
..., on a dynamometer and operated through one Urban Dynamometer Driving Schedule (UDDS), specified in... additional preconditioning allows the vehicle to adapt to the new fuel before the next test run. (A) Purge.... (G) Following the dynamometer drive, the vehicle shall be turned off for 5 minutes, then...
Subspace scheduling and parallel implementation of non-systolic regular iterative algorithms
Roychowdhury, V.P.; Kailath, T.
1989-01-01
The study of Regular Iterative Algorithms (RIAs) was introduced in a seminal paper by Karp, Miller, and Winograd in 1967. In more recent years, the study of systolic architectures has led to a renewed interest in this class of algorithms, and the class of algorithms implementable on systolic arrays (as commonly understood) has been identified as a precise subclass of RIAs. Non-systolic RIAs include matrix pivoting algorithms and certain forms of numerically stable two-dimensional filtering algorithms. It has been shown that the so-called hyperplanar scheduling for systolic algorithms can no longer be used to schedule and implement non-systolic RIAs. Based on the analysis of a so-called computability tree we generalize the concept of hyperplanar scheduling and determine linear subspaces in the index space of a given RIA such that all variables lying on the same subspace can be scheduled at the same time. This subspace scheduling technique is shown to be asymptotically optimal, and formal procedures are developed for designing processor arrays that will be compatible with our scheduling schemes. Explicit formulas for the schedule of a given variable are determined whenever possible; subspace scheduling is also applied to obtain lower dimensional processor arrays for systolic algorithms.
Jankovic, Marko; Ogawa, Hidemitsu
2004-10-01
Principal Component Analysis (PCA) and Principal Subspace Analysis (PSA) are classic techniques in statistical data analysis, feature extraction and data compression. Given a set of multivariate measurements, PCA and PSA provide a smaller set of "basis vectors" with less redundancy, and a subspace spanned by them, respectively. Artificial neurons and neural networks have been shown to perform PSA and PCA when gradient ascent (descent) learning rules are used, which is related to the constrained maximization (minimization) of statistical objective functions. Due to their low complexity, such algorithms and their implementation in neural networks are potentially useful in cases of tracking slow changes of correlations in the input data or in updating eigenvectors with new samples. In this paper we propose a PCA learning algorithm that is fully homogeneous with respect to neurons. The algorithm is obtained by modification of one of the most famous PSA learning algorithms--the Subspace Learning Algorithm (SLA). The modification is based on the Time-Oriented Hierarchical Method (TOHM), which uses two distinct time scales. On a faster time scale, the PSA algorithm is responsible for the "behavior" of all output neurons. On a slower scale, output neurons compete for the fulfillment of their "own interests". On this scale, basis vectors in the principal subspace are rotated toward the principal eigenvectors. At the end of the paper we briefly analyze how (or why) the time-oriented hierarchical method can be used to transform any existing neural network PSA method into a PCA method.
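The baseline SLA rule that the paper modifies can be sketched directly. This is the plain homogeneous subspace rule on synthetic data, not the TOHM-modified two-timescale algorithm; it recovers only the principal subspace (an arbitrary rotation of the eigenvectors), which is exactly the limitation TOHM is designed to remove:

```python
import numpy as np

def subspace_learning(X, k, eta=0.01, epochs=20, seed=4):
    """Oja's Subspace Learning Algorithm (SLA).

    Every output neuron obeys the same homogeneous Hebbian update
        y = W^T x,   W <- W + eta * (x y^T - W y y^T),
    so W converges to an orthonormal basis of the k-dimensional
    principal subspace of the inputs (up to rotation).
    """
    rng = np.random.default_rng(seed)
    W = 0.1 * rng.standard_normal((X.shape[1], k))
    for _ in range(epochs):
        for x in X:
            y = W.T @ x
            W += eta * (np.outer(x, y) - W @ np.outer(y, y))
    return W
```

On data whose variance is concentrated in the first two coordinates, the learned span matches the top-2 principal subspace even though the individual columns of W are not the eigenvectors.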
Detecting and characterizing coal mine related seismicity in the Western U.S. using subspace methods
NASA Astrophysics Data System (ADS)
Chambers, Derrick J. A.; Koper, Keith D.; Pankow, Kristine L.; McCarter, Michael K.
2015-11-01
We present an approach for subspace detection of small seismic events that includes methods for estimating magnitudes and associating detections from multiple stations into unique events. The process is used to identify mining related seismicity from a surface coal mine and an underground coal mining district, both located in the Western U.S. Using a blasting log and a locally derived seismic catalogue as ground truth, we assess detector performance in terms of verified detections, false positives and failed detections. We are able to correctly identify over 95 per cent of the surface coal mine blasts and about 33 per cent of the events from the underground mining district, while keeping the number of potential false positives relatively low by requiring all detections to occur on two stations. We find that most of the potential false detections for the underground coal district are genuine events missed by the local seismic network, demonstrating the usefulness of regional subspace detectors in augmenting local catalogues. We note a trade-off in detection performance between stations at smaller source-receiver distances, which have increased signal-to-noise ratio, and stations at larger distances, which have greater waveform similarity. We also explore the increased detection capabilities of a single higher dimension subspace detector, compared to multiple lower dimension detectors, in identifying events that can be described as linear combinations of training events. We find, in our data set, that such an advantage can be significant, justifying the use of a subspace detection scheme over conventional correlation methods.
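The core detection statistic described above (projection of a sliding data window onto a signal subspace learned from training events) can be sketched as follows. The templates and stream below are synthetic stand-ins for the seismic waveforms of the study, and magnitude estimation and multi-station association are omitted:

```python
import numpy as np

def subspace_detector(templates, stream, dim):
    """Sliding-window waveform subspace detector.

    U holds the top-`dim` left singular vectors of the aligned training
    waveforms; the statistic at each lag is the fraction of window
    energy captured by projection onto span(U), in [0, 1]. A single
    higher-dimension detector therefore also covers linear combinations
    of the training events.
    """
    T = np.asarray(templates, dtype=float).T     # (n_samples, n_events)
    U, _, _ = np.linalg.svd(T, full_matrices=False)
    U = U[:, :dim]
    n = T.shape[0]
    stats = np.zeros(len(stream) - n + 1)
    for i in range(stats.size):
        w = stream[i : i + n]
        energy = float(w @ w)
        if energy > 0:
            proj = U.T @ w
            stats[i] = float(proj @ proj) / energy
    return stats
```

An event that is a mixture of the training waveforms, buried in noise, produces a statistic near 1 at its onset even though it matches neither template individually, which is the advantage over single-template correlation.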
Robust Subspace Clustering for Multi-View Data by Exploiting Correlation Consensus.
Wang, Yang; Lin, Xuemin; Wu, Lin; Zhang, Wenjie; Zhang, Qing; Huang, Xiaodi
2015-11-01
More often than not, multimedia data described by multiple features, such as color and shape, can be naturally decomposed into multiple views. Since these views provide complementary information to each other, great endeavors have been dedicated to leveraging multiple views instead of a single view to achieve better clustering performance. To effectively exploit data correlation consensus among multi-views, in this paper, we study subspace clustering for multi-view data while keeping individual views well encapsulated. For characterizing data correlations, we generate a similarity matrix in a way that high affinity values are assigned to data objects within the same subspace across views, while the correlations among data objects from distinct subspaces are minimized. Before generating this matrix, however, we should consider that multi-view data in practice might be corrupted by noise. The corrupted data will significantly downgrade clustering results. We first present a novel objective function coupled with an angular-based regularizer. By minimizing this function, multiple sparse vectors are obtained for each data object as its multiple representations. In fact, these sparse vectors result from reaching data correlation consensus on all views. For tackling noise corruption, we present a sparsity-based approach that refines the angular-based data correlation. Using this approach, a more ideal data similarity matrix is generated for multi-view data. Spectral clustering is then applied to the similarity matrix to obtain the final subspace clustering. Extensive experiments have been conducted to validate the effectiveness of our proposed approach.
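The last stage of the pipeline (build a consensus similarity matrix across views, then spectral-cluster it) can be sketched on toy data. The Gaussian-affinity average below is a crude stand-in for the paper's angular, sparsity-refined consensus, and the embedded k-means is deliberately tiny:

```python
import numpy as np

def multiview_spectral_clustering(views, k, sigma=1.0, iters=50):
    """Average per-view Gaussian affinities into one similarity matrix,
    then run normalized spectral clustering on it (k=2 path)."""
    n = views[0].shape[0]
    W = np.zeros((n, n))
    for V in views:
        d2 = ((V[:, None, :] - V[None, :, :]) ** 2).sum(-1)
        W += np.exp(-d2 / (2.0 * sigma ** 2))
    W /= len(views)                                  # consensus similarity
    d = np.sqrt(W.sum(axis=1))
    L = np.eye(n) - W / d[:, None] / d[None, :]      # normalized Laplacian
    _, vecs = np.linalg.eigh(L)
    U = vecs[:, :k]                                  # smallest-k eigenvectors
    U = U / (np.linalg.norm(U, axis=1, keepdims=True) + 1e-12)
    # k-means seeded with two mutually distant embedding points (k == 2)
    C = U[[0, int(((U - U[0]) ** 2).sum(1).argmax())]] if k == 2 else U[:k].copy()
    lbl = np.zeros(n, dtype=int)
    for _ in range(iters):
        lbl = ((U[:, None, :] - C[None]) ** 2).sum(-1).argmin(1)
        C = np.array([U[lbl == j].mean(0) if np.any(lbl == j) else C[j]
                      for j in range(k)])
    return lbl
```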
Locally indistinguishable subspaces spanned by three-qubit unextendible product bases
Duan Runyao; Ying Mingsheng; Xin Yu
2010-03-15
We study the local distinguishability of general multiqubit states and show that local projective measurements and classical communication are as powerful as the most general local measurements and classical communication. Remarkably, this indicates that the local distinguishability of multiqubit states can be decided efficiently. Another useful consequence is that a set of orthogonal n-qubit states is locally distinguishable only if the summation of their orthogonal Schmidt numbers is less than the total dimension 2^n. Employing these results, we show that any orthonormal basis of a subspace spanned by arbitrary three-qubit orthogonal unextendible product bases (UPB) cannot be exactly distinguishable by local operations and classical communication. This not only reveals another intrinsic property of three-qubit orthogonal UPB but also provides a class of locally indistinguishable subspaces with dimension 4. We also explicitly construct locally indistinguishable subspaces with dimensions 3 and 5, respectively. Similar to the bipartite case, these results on multipartite locally indistinguishable subspaces can be used to estimate the one-shot environment-assisted classical capacity of a class of quantum broadcast channels.
Subspace based non-parametric approach for hyperspectral anomaly detection in complex scenarios
NASA Astrophysics Data System (ADS)
Matteoli, Stefania; Acito, Nicola; Diani, Marco; Corsini, Giovanni
2014-10-01
Recent studies on global anomaly detection (AD) in hyperspectral images have focused on non-parametric approaches that seem particularly suitable to detect anomalies in complex backgrounds without the need of assuming any specific model for the background distribution. Among these, AD algorithms based on the kernel density estimator (KDE) benefit from the flexibility provided by KDE, which attempts to estimate the background probability density function (PDF) regardless of its specific form. The high computational burden associated with KDE requires KDE-based AD algorithms be preceded by a suitable dimensionality reduction (DR) procedure aimed at identifying the subspace where most of the useful signal lies. In most cases, this may lead to a degradation of the detection performance due to the leakage of some anomalous target components to the subspace orthogonal to the one identified by the DR procedure. This work presents a novel subspace-based AD strategy that combines the use of KDE with a simple parametric detector performed on the orthogonal complement of the signal subspace, in order to benefit from the non-parametric nature of KDE and, at the same time, avoid the performance loss that may occur due to the DR procedure. Experimental results indicate that the proposed AD strategy is promising and deserves further investigation.
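The hybrid score described here can be sketched in a few lines. The toy below uses an SVD subspace, a hand-rolled Gaussian KDE with a fixed bandwidth, and an RX-like residual-energy term in place of the paper's specific detector; all of those simplifications are illustrative choices:

```python
import numpy as np

def subspace_kde_anomaly(X, pixels, k, h=0.5):
    """Hybrid anomaly score: a nonparametric KDE term inside the signal
    subspace plus a parametric (RX-like) energy term in its orthogonal
    complement, so anomalous energy leaking out of the retained
    subspace still raises the score.
    """
    mu = X.mean(axis=0)
    Xc = X - mu
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    Us = Vt[:k].T                          # signal-subspace basis (k dims)
    Z = Xc @ Us                            # background, in-subspace coords
    resid_var = max(float(np.mean((Xc - Z @ Us.T) ** 2)), 1e-12)

    Pc = pixels - mu
    Zp = Pc @ Us
    # Gaussian-kernel density estimate in the subspace (bandwidth h).
    d2 = ((Zp[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    dens = np.exp(-d2 / (2 * h * h)).mean(axis=1)
    in_sub = -np.log(dens + 1e-300)        # KDE "surprise"
    resid = Pc - Zp @ Us.T                 # orthogonal-complement residual
    out_sub = (resid ** 2).sum(axis=1) / resid_var
    return in_sub + out_sub
```

A pixel that looks ordinary inside the subspace but carries energy orthogonal to it is flagged by the second term, which is precisely the leakage case the abstract worries about.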
Adaptive Projection Subspace Dimension for the Thick-Restart Lanczos Method
Yamazaki, Ichitaro; Bai, Zhaojun; Simon, Horst; Wang, Lin-Wang; Wu, K.
2008-10-01
The Thick-Restart Lanczos (TRLan) method is an effective method for solving large-scale Hermitian eigenvalue problems. However, its performance strongly depends on the dimension of the projection subspace. In this paper, we propose an objective function to quantify the effectiveness of a chosen subspace dimension, and then introduce an adaptive scheme to dynamically adjust the dimension at each restart. An open-source software package, nu-TRLan, which implements the TRLan method with this adaptive projection subspace dimension is available in the public domain. The numerical results of synthetic eigenvalue problems are presented to demonstrate that nu-TRLan achieves speedups of between 0.9 and 5.1 over the static method using a default subspace dimension. To demonstrate the effectiveness of nu-TRLan in a real application, we apply it to the electronic structure calculations of quantum dots. We show that nu-TRLan can achieve speedups of greater than 1.69 over the state-of-the-art eigensolver for this application, which is based on the Conjugate Gradient method with a powerful preconditioner.
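The projection step underlying TRLan can be sketched with a plain Lanczos iteration. This toy builds the full basis without restarting; a thick restart would keep the best Ritz vectors once the basis hits the chosen subspace dimension, and it is that dimension which nu-TRLan adapts at each restart:

```python
import numpy as np

def lanczos(A, m, seed=6):
    """Lanczos with full reorthogonalization.

    Builds an orthonormal basis V of the Krylov subspace and the
    tridiagonal projection T = V^T A V; eigenvalues of T (Ritz values)
    approximate extreme eigenvalues of the Hermitian matrix A.
    """
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(n)
    V = [v / np.linalg.norm(v)]
    alpha, beta = [], []
    for j in range(m):
        w = A @ V[j]
        alpha.append(float(V[j] @ w))
        w = w - alpha[j] * V[j]
        if j > 0:
            w = w - beta[j - 1] * V[j - 1]
        Vm = np.array(V)
        w = w - Vm.T @ (Vm @ w)          # full reorthogonalization
        b = float(np.linalg.norm(w))
        beta.append(b)
        if b < 1e-12:
            break                        # invariant subspace found
        V.append(w / b)
    kdim = len(alpha)
    T = (np.diag(alpha)
         + np.diag(beta[: kdim - 1], 1)
         + np.diag(beta[: kdim - 1], -1))
    return np.array(V[:kdim]).T, T
```

For a matrix with one well-separated extreme eigenvalue, the largest Ritz value converges long before the basis spans the whole space, which is why limiting (and tuning) the projection-subspace dimension pays off.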
Using spectral subspaces to improve infrared spectroscopy prediction of soil properties
NASA Astrophysics Data System (ADS)
Sila, Andrew; Shepherd, Keith D.; Pokhariyal, Ganesh P.; Towett, Erick; Weullow, Elvis; Nyambura, Mercy K.
2015-04-01
We propose a method for improving soil property predictions using local calibration models trained on datasets in spectral subspaces rather than in a global space. Previous studies have shown that local calibrations based on a subset of spectra selected by spectral similarity can improve model prediction performance where there is large population variance. Searching for relevant subspaces within a spectral collection to construct local models could result in models with high power and small prediction errors, but optimal methods for selection of local samples are not clear. Using a self-organizing map method (SOM) we obtained four mid-infrared subspaces for 1,907 soil sample spectra collected from 19 different countries by the Africa Soil Information Service. Subspace means for four subspaces and five selected soil properties were: pH, 6.0, 6.1, 6.0, 5.6; Mehlich-3 Al, 358, 974, 614, 1032 (mg/kg); Mehlich-3 Ca, 363, 1161, 526, 4276 (mg/kg); Total Carbon, 0.4, 1.1, 0.6, 2.3 (% by weight), and Clay (%), 16.8, 46.4, 27.7, 63.3. Spectral subspaces were also obtained using a cosine similarity method to calculate the angle between the entire sample spectra space and spectra of 10 pure soil minerals. We found the sample soil spectra to be similar to four pure minerals distributed as: Halloysite (n1=214), Illite (n2=743), Montmorillonite (n3=914) and Quartz (n4=32). Cross-validated partial least square regression models were developed using two-thirds of samples spectra from each subspace for the five soil properties. We evaluated prediction performance of the models using the root mean square error of prediction (RMSEP) for a one-third-holdout set. Local models significantly improved prediction performance compared with the global model. The SOM method reduced RMSEP for total carbon by 10 % (global RMSEP = 0.41), Mehlich-3 Ca by 17 % (global RMSEP = 1880), Mehlich-3 Al by 21 % (global RMSEP = 206), and clay content by 6 % (global RMSEP = 13.6), but not for pH. Individual SOM
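The local-versus-global comparison can be sketched on synthetic data. Ordinary least squares stands in for the paper's PLS models and the subspace labels are given rather than learned by a SOM, but the evaluation mirrors the abstract: fit one global calibration and one model per subspace, then compare RMSEP on a one-third holdout:

```python
import numpy as np

def local_vs_global_rmsep(X, y, labels, test_frac=1 / 3, seed=8):
    """Return (global_rmsep, local_rmsep) on a random holdout set,
    comparing one global linear calibration against per-subspace
    local models."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    cut = int(len(y) * (1 - test_frac))
    tr, te = idx[:cut], idx[cut:]

    def fit_predict(Xtr, ytr, Xte):
        # least squares with intercept
        A = np.column_stack([np.ones(len(Xtr)), Xtr])
        coef, *_ = np.linalg.lstsq(A, ytr, rcond=None)
        return np.column_stack([np.ones(len(Xte)), Xte]) @ coef

    yg = fit_predict(X[tr], y[tr], X[te])              # global model
    yl = np.empty(len(te))
    for c in np.unique(labels):                        # one local model per subspace
        trc = tr[labels[tr] == c]
        tec = np.flatnonzero(labels[te] == c)
        yl[tec] = fit_predict(X[trc], y[trc], X[te][tec])
    rmse = lambda p: float(np.sqrt(np.mean((p - y[te]) ** 2)))
    return rmse(yg), rmse(yl)
```

When the subspaces genuinely follow different calibration lines, the single global model is forced into a compromise and the local models win, which is the pattern the study reports for carbon, Ca, Al, and clay.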
Brunton, Steven L; Brunton, Bingni W; Proctor, Joshua L; Kutz, J Nathan
2016-01-01
In this work, we explore finite-dimensional linear representations of nonlinear dynamical systems by restricting the Koopman operator to an invariant subspace spanned by specially chosen observable functions. The Koopman operator is an infinite-dimensional linear operator that evolves functions of the state of a dynamical system. Dominant terms in the Koopman expansion are typically computed using dynamic mode decomposition (DMD). DMD uses linear measurements of the state variables, and it has recently been shown that this may be too restrictive for nonlinear systems. Choosing the right nonlinear observable functions to form an invariant subspace where it is possible to obtain linear reduced-order models, especially those that are useful for control, is an open challenge. Here, we investigate the choice of observable functions for Koopman analysis that enable the use of optimal linear control techniques on nonlinear problems. First, to include a cost on the state of the system, as in linear quadratic regulator (LQR) control, it is helpful to include these states in the observable subspace, as in DMD. However, we find that this is only possible when there is a single isolated fixed point, as systems with multiple fixed points or more complicated attractors are not globally topologically conjugate to a finite-dimensional linear system, and cannot be represented by a finite-dimensional linear Koopman subspace that includes the state. We then present a data-driven strategy to identify relevant observable functions for Koopman analysis by leveraging a new algorithm to determine relevant terms in a dynamical system by ℓ1-regularized regression of the data in a nonlinear function space; we also show how this algorithm is related to DMD. Finally, we demonstrate the usefulness of nonlinear observable subspaces in the design of Koopman operator optimal control laws for fully nonlinear systems using techniques from linear optimal control.
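Since the abstract leans on dynamic mode decomposition, a minimal exact-DMD sketch may help fix ideas. The snapshot matrices and rank below are illustrative assumptions, not the authors' code:

```python
import numpy as np

def dmd(X, Y, r):
    """Dynamic mode decomposition: given snapshot matrices X, Y with
    Y ≈ A X, return the leading r eigenvalues of the (unknown) linear
    operator A and the projected DMD modes."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    Atilde = U.conj().T @ Y @ Vh.conj().T / s   # A restricted to the POD basis
    eigvals, W = np.linalg.eig(Atilde)
    return eigvals, U @ W                       # projected DMD modes
```

Applied to snapshots of a linear system x_{k+1} = A x_k, the returned eigenvalues coincide with those of A; the paper's point is that for nonlinear systems one should apply the same machinery to well-chosen nonlinear observables of the state rather than to the state itself.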
On the convergence of (ensemble) Kalman filters and smoothers onto the unstable subspace
NASA Astrophysics Data System (ADS)
Bocquet, Marc
2016-04-01
The characteristics of the model dynamics are critical in the performance of (ensemble) Kalman filters and smoothers. In particular, as emphasised in the seminal work of Anna Trevisan and co-authors, the error covariance matrix is asymptotically supported by the unstable and neutral subspace only, i.e. it is spanned by the backward Lyapunov vectors with non-negative exponents. This behaviour is at the heart of algorithms known as Assimilation in the Unstable Subspace, although a formal proof was still missing. This convergence property, its analytic proof, meaning and implications for the design of efficient reduced-order data assimilation algorithms are the topics of this talk. The structure of the talk is as follows. Firstly, we provide the analytic proof of the convergence on the unstable and neutral subspace in the linear dynamics and linear observation operator case, along with rigorous results giving the rate of such convergence. The derivation is based on an expression that explicitly relates the covariance matrix at an arbitrary time to the initial error covariance. Numerical results are also shown to illustrate and support the mathematical claims. Secondly, we discuss how this neat picture is modified when the dynamics become nonlinear and chaotic and when it is not possible to derive analytic formulas. In this case an ensemble Kalman filter (EnKF) is used and the connection between the convergence properties on the unstable-neutral subspace and the EnKF covariance inflation is discussed. We also explain why, in the perfect model setting, the iterative ensemble Kalman smoother (IEnKS), as an efficient filtering and smoothing technique, has an error covariance matrix whose projection is more focused on the unstable-neutral subspace than that of the EnKF. This contribution results from collaborations with A. Carrassi, K. S. Gurumoorthy, A. Apte, C. Grudzien, and C. K. R. T. Jones.
Improved Detection of Local Earthquakes in the Vienna Basin (Austria), using Subspace Detectors
NASA Astrophysics Data System (ADS)
Apoloner, Maria-Theresia; Caffagni, Enrico; Bokelmann, Götz
2016-04-01
The Vienna Basin in Eastern Austria is densely populated and highly developed; it is also a region of low to moderate seismicity, yet seismological network coverage is relatively sparse. This demands that we improve our earthquake detection capability by testing new methods and enlarging the existing local earthquake catalogue, which contributes to imaging tectonic fault zones for a better understanding of seismic hazard, also through improved earthquake statistics (b-value, magnitude of completeness). Detection of low-magnitude earthquakes, or events whose highest amplitudes only slightly exceed the signal-to-noise ratio (SNR), may be possible with standard methods such as the short-term over long-term average (STA/LTA). However, due to sparse network coverage and high background noise, such a technique may not detect all potentially recoverable events. Yet earthquakes originating from the same source region, relatively close to each other, should produce similar seismic waveforms at a given station. This waveform similarity can be exploited with specific techniques such as correlation-template matching (also known as matched filtering) or subspace detection methods (based on subspace theory). Matching techniques require a reference or template event, usually characterized by high waveform coherence across the array receivers and high SNR, which is cross-correlated with the continuous data. Subspace detection methods, in contrast, avoid the need to define a single template event by using a subspace extracted from multiple events. This approach should in principle be more robust in detecting signals that exhibit strong variability (e.g. because of source mechanism or magnitude). In this study we scan the continuous data recorded in the Vienna Basin with a subspace detector to identify additional events. This will allow us to estimate the increase of the seismicity rate in the local earthquake catalogue
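A bare-bones version of the subspace detection statistic (the fraction of sliding-window energy captured by a basis spanning several demeaned template waveforms) might look as follows. The synthetic templates are assumptions for illustration, not the study's actual detector:

```python
import numpy as np

def subspace_detector(trace, templates, rank):
    """Slide a window along `trace` and compute the fraction of window
    energy captured by the leading `rank` left singular vectors of the
    (demeaned) template matrix; values near 1 flag candidate events."""
    T = np.array([t - t.mean() for t in templates]).T   # samples x events
    U = np.linalg.svd(T, full_matrices=False)[0][:, :rank]
    n = U.shape[0]
    stats = np.empty(len(trace) - n + 1)
    for i in range(len(stats)):
        w = trace[i:i + n] - trace[i:i + n].mean()
        energy = np.dot(w, w)
        proj = U.T @ w
        stats[i] = np.dot(proj, proj) / energy if energy > 0 else 0.0
    return stats
```

With `rank=1` and a single template this reduces to squared normalized cross-correlation, i.e. ordinary matched filtering; higher ranks accommodate the waveform variability the abstract describes.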
Finding Chemical Reaction Paths with a Multilevel Preconditioning Protocol
Kale, Seyit; Sode, Olaseni; Weare, Jonathan; Dinner, Aaron R.
2015-01-01
Finding transition paths for chemical reactions can be computationally costly owing to the level of quantum-chemical theory needed for accuracy. Here, we show that a multilevel preconditioning scheme that was recently introduced (Tempkin et al. J. Chem. Phys. 2014, 140, 184114) can be used to accelerate quantum-chemical string calculations. We demonstrate the method by finding minimum-energy paths for two well-characterized reactions: tautomerization of malonaldehyde and Claisen rearrangement of chorismate to prephenate. For these reactions, we show that preconditioning density functional theory (DFT) with a semiempirical method reduces the computational cost for reaching a converged path that is an optimum under DFT by severalfold. The approach also shows promise for free energy calculations when thermal noise can be controlled. PMID:25516726
Shape reanalysis and sensitivities utilizing preconditioned iterative boundary solvers
NASA Technical Reports Server (NTRS)
Guru Prasad, K.; Kane, J. H.
1992-01-01
The computational advantages associated with the utilization of preconditioned iterative equation solvers are quantified for the reanalysis of perturbed shapes using continuum structural boundary element analysis (BEA). Both single- and multi-zone three-dimensional problems are examined. Significant reductions in computer time are obtained by making use of previously computed solution vectors and preconditioners in subsequent analyses. The effectiveness of this technique is demonstrated for the computation of shape response sensitivities required in shape optimization. Computer times and accuracies achieved using the preconditioned iterative solvers are compared with those obtained via direct solvers and implicit differentiation of the boundary integral equations. It is concluded that this approach, employing preconditioned iterative equation solvers in reanalysis and sensitivity analysis, can be competitive with, if not superior to, approaches involving direct solvers.
Reconstructing Clusters for Preconditioned Short-term Load Forecasting
NASA Astrophysics Data System (ADS)
Itagaki, Tadahiro; Mori, Hiroyuki
This paper presents a new preconditioned method for short-term load forecasting that focuses on producing more accurate predicted values. In recent years, the deregulated and competitive power market has increased the degree of uncertainty. As a result, more sophisticated short-term load forecasting techniques are required to deal with more complicated load behavior. To alleviate this complexity, this paper presents a new preconditioned model in which clustering results are reconstructed to equalize the number of learning data points across clusters after clustering with a Kohonen-based neural network. This enhances the short-term load forecasting model at each reconstructed cluster. The proposed method is successfully applied to real data for one-step-ahead daily maximum load forecasting.
Investigation of Reperfusion Injury and Ischemic Preconditioning in Microsurgery
Wang, Wei Zhong
2008-01-01
Ischemia/reperfusion (I/R) is inevitable in many vascular and musculoskeletal traumas, diseases, free tissue transfers, and during time-consuming reconstructive surgeries in the extremities. Salvage of a prolonged ischemic extremity or flap still remains a challenge for the microvascular surgeon. One of the common complications after microsurgery is I/R-induced tissue death or I/R injury. In the twenty years since its discovery, ischemic preconditioning (IPC) has emerged as a powerful method for attenuating I/R injury in a variety of organs or tissues. However, its therapeutic expectations still need to be fulfilled. In this article, the author reviews important experimental evidence of I/R injury as well as preconditioning-induced protection in the fields relevant to microsurgery. PMID:18946882
Chemogenetic silencing of neurons in retrosplenial cortex disrupts sensory preconditioning.
Robinson, Siobhan; Todd, Travis P; Pasternak, Anna R; Luikart, Bryan W; Skelton, Patrick D; Urban, Daniel J; Bucci, David J
2014-08-13
An essential aspect of episodic memory is the formation of associations between neutral sensory cues in the environment. In light of recent evidence that this critical aspect of learning does not require the hippocampus, we tested the involvement of the retrosplenial cortex (RSC) in this process using a chemogenetic approach that allowed us to temporarily silence neurons along the entire rostrocaudal extent of the RSC. A viral vector containing the gene for a synthetic inhibitory G-protein-coupled receptor (hM4Di) was infused into RSC. When the receptor was later activated by systemic injection of clozapine-N-oxide, neural activity in RSC was transiently silenced (confirmed using a patch-clamp procedure). Rats expressing hM4Di and control rats were trained in a sensory preconditioning procedure in which a tone and light were paired on some trials and a white noise stimulus was presented alone on the other trials during the Preconditioning phase. Thus, rats were given the opportunity to form an association between a tone and a light in the absence of reinforcement. Later, the light was paired with food. During the test phase when the auditory cues were presented alone, controls exhibited more conditioned responding during presentation of the tone compared with the white noise reflecting the prior formation of a tone-light association. Silencing RSC neurons during the Preconditioning phase prevented the formation of an association between the tone and light and eliminated the sensory preconditioning effect. These findings indicate that RSC may contribute to episodic memory formation by linking essential sensory stimuli during learning.
Parallel Domain Decomposition Preconditioning for Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Chan, Tony F.; Tang, Wei-Pai; Kutler, Paul (Technical Monitor)
1998-01-01
This viewgraph presentation gives an overview of the parallel domain decomposition preconditioning for computational fluid dynamics. Details are given on some difficult fluid flow problems, stabilized spatial discretizations, and Newton's method for solving the discretized flow equations. Schur complement domain decomposition is described through basic formulation, simplifying strategies (including iterative subdomain and Schur complement solves, matrix element dropping, localized Schur complement computation, and supersparse computations), and performance evaluation.
Object-oriented design of preconditioned iterative methods
Bruaset, A.M.
1994-12-31
In this talk the author discusses how object-oriented programming techniques can be used to develop a flexible software package for preconditioned iterative methods. The ideas described have been used to implement the linear algebra part of Diffpack, which is a collection of C++ class libraries that provides high-level tools for the solution of partial differential equations. In particular, this software package is aimed at rapid development of PDE-based numerical simulators, primarily using finite element methods.
Preconditioned Alternating Projection Algorithms for Maximum a Posteriori ECT Reconstruction
Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng
2012-01-01
We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators arising from the convex functions that define the TV-norm and the constraint involved in the problem. The characterization of the solution via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We prove convergence of the preconditioned alternating projection algorithm. In numerical experiments, the performance of our algorithms, with an appropriately selected preconditioning matrix, is compared with that of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner significantly outperforms EM-TV in all aspects, including convergence speed, noise in the reconstructed images, and image quality. It also outperforms the nested EM-TV in convergence speed while providing comparable image quality. PMID:23271835
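The "alternating projection" idea underlying PAPA can be illustrated in miniature: alternately applying the projection (proximity) operators of two convex sets drives the iterate into their intersection. The sets below, the nonnegative orthant and the hyperplane sum(x) = 1, are stand-ins chosen for illustration, not the TV-related operators of the paper:

```python
import numpy as np

def alternating_projections(x0, proj_a, proj_b, n_iter=200):
    """von Neumann alternating projections: repeatedly apply the projection
    (proximity) operators of two convex sets to approach a point in their
    intersection."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        x = proj_b(proj_a(x))
    return x

# Example sets: A = nonnegative orthant, B = {x : sum(x) = 1}
proj_nonneg = lambda x: np.clip(x, 0.0, None)
proj_sum_one = lambda x: x + (1.0 - x.sum()) / x.size
```

The paper's preconditioning amounts to reweighting these projection steps with an EM-derived matrix so the iteration converges faster on the ill-conditioned ECT system.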
NASA Astrophysics Data System (ADS)
Lemieux, Jean-François; Price, Stephen F.; Evans, Katherine J.; Knoll, Dana; Salinger, Andrew G.; Holland, David M.; Payne, Antony J.
2011-07-01
We have implemented the Jacobian-free Newton-Krylov (JFNK) method for solving the first-order ice sheet momentum equation in order to improve the numerical performance of the Glimmer-Community Ice Sheet Model (Glimmer-CISM), the land ice component of the Community Earth System Model (CESM). Our JFNK implementation is based on significant re-use of existing code. For example, our physics-based preconditioner uses the original Picard linear solver in Glimmer-CISM. For several test cases spanning a range of geometries and boundary conditions, our JFNK implementation is 1.8-3.6 times more efficient than the standard Picard solver in Glimmer-CISM. Importantly, this computational gain of JFNK over the Picard solver increases when refining the grid. Global convergence of the JFNK solver has been significantly improved by rescaling the equation for the basal boundary condition and through the use of an inexact Newton method. While a diverse set of test cases show that our JFNK implementation is usually robust, for some problems it may fail to converge with increasing resolution (as does the Picard solver). Globalization through parameter continuation did not remedy this problem and future work to improve robustness will explore a combination of Picard and JFNK and the use of homotopy methods.
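The core trick of JFNK, approximating Jacobian-vector products with finite differences of the residual inside a Krylov solve so the Jacobian is never formed, can be sketched compactly. This toy version with a hand-rolled GMRES is only illustrative and omits the physics-based preconditioning, basal-equation rescaling, and inexact-Newton forcing terms the paper relies on:

```python
import numpy as np

def gmres_solve(matvec, b, m=30, tol=1e-12):
    """Minimal (unrestarted) GMRES via Arnoldi + least squares."""
    n = len(b)
    beta = np.linalg.norm(b)
    if beta == 0.0:
        return np.zeros(n)
    Q = np.zeros((n, m + 1)); H = np.zeros((m + 1, m))
    Q[:, 0] = b / beta
    y = np.zeros(0)
    for j in range(m):
        w = matvec(Q[:, j])
        for i in range(j + 1):                  # modified Gram-Schmidt
            H[i, j] = np.dot(Q[:, i], w)
            w = w - H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        e1 = np.zeros(j + 2); e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:j + 2, :j + 1], e1, rcond=None)
        res = np.linalg.norm(H[:j + 2, :j + 1] @ y - e1)
        if H[j + 1, j] < 1e-14 or res < tol * beta:
            return Q[:, :j + 1] @ y
        Q[:, j + 1] = w / H[j + 1, j]
    return Q[:, :m] @ y

def jfnk(F, x0, tol=1e-10, max_newton=30, eps=1e-7):
    """Jacobian-free Newton-Krylov: each GMRES matvec J v is replaced by a
    forward finite difference of F, so the Jacobian J is never assembled."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_newton):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        def jv(v, Fx=Fx, x=x):
            h = eps * max(1.0, np.linalg.norm(x)) / max(np.linalg.norm(v), 1e-30)
            return (F(x + h * v) - Fx) / h
        x = x + gmres_solve(jv, -Fx)
    return x
```

In Glimmer-CISM the analogous inner solves are preconditioned with the existing Picard operator, which is what makes the re-use of legacy code attractive.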
Preconditioning the bidomain model with almost linear complexity
NASA Astrophysics Data System (ADS)
Pierre, Charles
2012-01-01
The bidomain model is widely used in electro-cardiology to simulate spreading of excitation in the myocardium and electrocardiograms. It consists of a system of two parabolic reaction diffusion equations coupled with an ODE system. Its discretisation displays an ill-conditioned system matrix to be inverted at each time step: simulations based on the bidomain model therefore are associated with high computational costs. In this paper we propose a preconditioning for the bidomain model either for an isolated heart or in an extended framework including a coupling with the surrounding tissues (the torso). The preconditioning is based on a formulation of the discrete problem that is shown to be symmetric positive semi-definite. A block LU decomposition of the system together with a heuristic approximation (referred to as the monodomain approximation) are the key ingredients for the preconditioning definition. Numerical results are provided for two test cases: a 2D test case on a realistic slice of the thorax based on a segmented heart medical image geometry, and a 3D test case involving a small cubic slab of tissue with orthotropic anisotropy. The analysis of the resulting computational cost (both in terms of CPU time and of iteration number) shows an almost linear complexity with the problem size, i.e. of type n log^α(n) (for some constant α), which is the optimal complexity for such problems.
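As a concrete reminder of the kind of preconditioned Krylov iteration such solvers build on, here is a minimal preconditioned conjugate gradient sketch. The Jacobi preconditioner below is an illustrative stand-in for the block-LU/monodomain preconditioner described in the abstract:

```python
import numpy as np

def pcg(A, b, apply_Minv, tol=1e-10, maxit=500):
    """Preconditioned conjugate gradients for an SPD matrix A;
    `apply_Minv` applies an approximate inverse of A to a vector."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = apply_Minv(r)
    p = z.copy()
    rz = np.dot(r, z)
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / np.dot(p, Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = apply_Minv(r)
        rz_new = np.dot(r, z)
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```

A good preconditioner makes the iteration count nearly independent of problem size, which is exactly the near-linear n log^α(n) scaling the paper reports.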
Financial preconditions for successful community initiatives for the uninsured.
Song, Paula H; Smith, Dean G
2007-01-01
Community-based initiatives are increasingly being implemented as a strategy to address the health needs of the community, with a growing body of evidence on successes of various initiatives. This study addresses financial status indicators (preconditions) that might predict where community-based initiatives might have a better chance for success. We evaluated five community-based initiatives funded by the Communities in Charge (CIC) program sponsored by the Robert Wood Johnson Foundation. These initiatives focus on increasing access by easing financial barriers to care for the uninsured. At each site, we collected information on financial status indicators and interviewed key personnel from health services delivery and financing organizations. With full acknowledgment of the caveats associated with generalizations based on a small number of observations, we suggest four financial preconditions associated with successful initiation of CIC programs: (1) uncompensated care levels that negatively affect profitability, (2) reasonable financial stability of providers, (3) stable health insurance market, and (4) the potential to create new sources of funding. In general, sites that demonstrate successful program initiation are financially stressed enough by uncompensated care to gain the attention of local healthcare providers. However, they are not so strained and so concerned about revenue sources that they cannot afford to participate in the initiative. In addition to political and managerial indicators, we suggest that planning for community-based initiatives should include financial indicators of current health services delivery and financing organizations and consideration of whether they meet preconditions for success.
Cardioprotection acquired through exercise: the role of ischemic preconditioning.
Marongiu, Elisabetta; Crisafulli, Antonio
2014-11-01
A large body of evidence supports the concept that regular exercise training can reduce the incidence of coronary events and increase survival chances after myocardial infarction. These exercise-induced beneficial effects on the myocardium are achieved through the reduction of several cardiovascular risk factors, such as high cholesterol, hypertension, and obesity. Furthermore, it has been demonstrated that exercise can reproduce "ischemic preconditioning" (IP), which refers to the capacity of short periods of ischemia to render the myocardium more resistant to subsequent ischemic insult and to limit infarct size during prolonged ischemia. However, IP is a complex phenomenon which, along with infarct size reduction, can also provide protection against arrhythmia and myocardial stunning due to ischemia-reperfusion. Several lines of evidence indicate that preconditioning may be induced directly by exercise, producing a protective phenotype in the heart without the need to cause ischemia. Exercise appears to act as a physiological stress that induces beneficial myocardial adaptive responses at the cellular level. The purpose of the present paper is to review the latest data on the role played by exercise in triggering myocardial preconditioning.
Fan, Ran; Yu, Tao; Lin, Jia-Li; Ren, Guang-Dong; Li, Yi; Liao, Xiao-Xing; Huang, Zi-Tong; Jiang, Chong-Hui
2016-10-01
In this study, we investigated the effects of remote ischemic preconditioning on post resuscitation cerebral function in a rat model of cardiac arrest and resuscitation. The animals were randomized into six groups: 1) sham operation, 2) lateral ventricle injection and sham operation, 3) cardiac arrest induced by ventricular fibrillation, 4) lateral ventricle injection and cardiac arrest, 5) remote ischemic preconditioning initiated 90min before induction of ventricular fibrillation, and 6) lateral ventricle injection and remote ischemic preconditioning before cardiac arrest. The lateral ventricle injections consisted of neuroglobin antisense oligodeoxynucleotides, administered 24h before sham operation, cardiac arrest, or remote ischemic preconditioning. Remote ischemic preconditioning was induced by four cycles of 5min of limb ischemia, followed by 5min of reperfusion. Ventricular fibrillation was induced by current and lasted for 6min. Defibrillation was attempted after 6min of cardiopulmonary resuscitation. The animals were then monitored for 2h and observed for an additional maximum of 70h. Post resuscitation cerebral function was evaluated by neurologic deficit score at 72h after return of spontaneous circulation. Results showed that remote ischemic preconditioning increased neurologic deficit scores. To investigate the neuroprotective effects of remote ischemic preconditioning, we observed neuronal injury at 48 and 72h after return of spontaneous circulation and found that remote ischemic preconditioning significantly decreased the occurrence of neuronal apoptosis and necrosis. To further probe the mechanism of neuroprotection induced by remote ischemic preconditioning, we found that expression of neuroglobin was enhanced at 24h after return of spontaneous circulation. Furthermore, administration of neuroglobin antisense oligodeoxynucleotides before induction of remote ischemic preconditioning showed that the level of neuroglobin was decreased then partly abrogated
Universal holonomic quantum gates in decoherence-free subspace on superconducting circuits
NASA Astrophysics Data System (ADS)
Xue, Zheng-Yuan; Zhou, Jian; Wang, Z. D.
2015-08-01
To implement a set of universal quantum logic gates based on non-Abelian geometric phases, it is conventional wisdom that quantum systems beyond two levels are required, which is extremely difficult to fulfill for superconducting qubits and appears to be a main reason why only single-qubit gates were implemented in a recent experiment [A. A. Abdumalikov, Jr. et al., Nature (London) 496, 482 (2013), 10.1038/nature12010]. Here we propose to realize nonadiabatic holonomic quantum computation in decoherence-free subspace on circuit QED, where one can use only the two levels in transmon qubits, a usual interaction, and a minimal resource for the decoherence-free subspace encoding. In particular, our scheme not only overcomes the difficulties encountered in previous studies but also can still achieve considerably large effective coupling strength, such that high-fidelity quantum gates can be achieved. Therefore, the present scheme makes realizing robust holonomic quantum computation with superconducting circuits very promising.
Location of essential spectrum of intermediate Hamiltonians restricted to symmetry subspaces
NASA Astrophysics Data System (ADS)
Beattie, Christopher; Ruskai, Mary Beth
1988-10-01
A theorem is presented on the location of the essential spectrum of certain intermediate Hamiltonians used to construct lower bounds to bound-state energies of multiparticle atomic and molecular systems. This result is an analog of the Hunziker-Van Winter-Zhislin theorem for exact Hamiltonians, which implies that the continuum of an N-electron system begins at the ground-state energy for the corresponding system with N-1 electrons. The work presented here strengthens earlier results of Beattie [SIAM J. Math. Anal. 16, 492 (1985)] in that one may now consider Hamiltonians restricted to the symmetry subspaces appropriate to the permutational symmetry required by the Pauli exclusion principle, or to other physically relevant symmetry subspaces. The associated convergence theory is also given, guaranteeing that all bound-state energies can be approximated from below with arbitrary accuracy.
Experimental creation of quantum Zeno subspaces by repeated multi-spin projections in diamond
NASA Astrophysics Data System (ADS)
Kalb, N.; Cramer, J.; Twitchen, D. J.; Markham, M.; Hanson, R.; Taminiau, T. H.
2016-10-01
Repeated observations inhibit the coherent evolution of quantum states through the quantum Zeno effect. In multi-qubit systems this effect provides opportunities to control complex quantum states. Here, we experimentally demonstrate that repeatedly projecting joint observables of multiple spins creates quantum Zeno subspaces and simultaneously suppresses the dephasing caused by a quasi-static environment. We encode up to two logical qubits in these subspaces and show that the enhancement of the dephasing time with increasing number of projections follows a scaling law that is independent of the number of spins involved. These results provide experimental insight into the interplay between frequent multi-spin measurements and slowly varying noise and pave the way for tailoring the dynamics of multi-qubit systems through repeated projections.
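For intuition about how Zeno suppression scales with the number of measurements, a toy single-qubit calculation (not a model of the multi-spin diamond experiment above) is easy to write down:

```python
import numpy as np

def zeno_survival(n_projections, total_angle=np.pi):
    """Probability that a qubit starting in |0> is still found in |0> after
    a total Rabi rotation `total_angle` interrupted by n equally spaced
    projective measurements: each segment survives with cos^2(theta/2)."""
    theta = total_angle / n_projections
    return np.cos(theta / 2.0) ** (2 * n_projections)
```

A single uninterrupted pi-rotation flips the qubit completely, while frequent projections freeze it near its initial state, the hallmark of the quantum Zeno effect.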
A sub-space greedy search method for efficient Bayesian Network inference.
Zhang, Qing; Cao, Yong; Li, Yong; Zhu, Yanming; Sun, Samuel S M; Guo, Dianjing
2011-09-01
Bayesian networks (BNs) have been successfully used to infer the regulatory relationships of genes from microarray datasets. However, one major limitation of the BN approach is its computational cost, because the calculation time grows more than exponentially with the dimension of the dataset. In this paper, we propose a sub-space greedy search method for efficient Bayesian Network inference. In particular, this method limits the greedy search space by only selecting gene pairs with higher partial correlation coefficients. Using both synthetic and real data, we demonstrate that the proposed method achieves results comparable to the standard greedy search method while saving ∼50% of the computational time. We believe that the sub-space search method can be widely used for efficient BN inference in systems biology.
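The pair-selection step described above can be sketched with partial correlations computed from the precision matrix. This is a simplified illustration, not the paper's implementation; the threshold value and the toy data are assumptions.

```python
import numpy as np

def candidate_pairs(data, threshold=0.3):
    # Pre-screen gene pairs by partial correlation; only pairs above the
    # threshold enter the greedy structure search, shrinking its space.
    # (Simplified sketch of the selection step; the threshold is hypothetical.)
    prec = np.linalg.pinv(np.cov(data, rowvar=False))   # precision matrix
    d = np.sqrt(np.diag(prec))
    pcorr = -prec / np.outer(d, d)                      # partial correlations
    n = pcorr.shape[0]
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if abs(pcorr[i, j]) > threshold]

# toy data: gene 1 is a noisy copy of gene 0, gene 2 is independent
rng = np.random.default_rng(0)
g0 = rng.normal(size=200)
data = np.column_stack([g0,
                        g0 + 0.1 * rng.normal(size=200),
                        rng.normal(size=200)])
print(candidate_pairs(data))
```

Only the genuinely coupled pair (0, 1) survives the screen, so a greedy search would never need to score edges touching gene 2.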
Hyperspectral Image Kernel Sparse Subspace Clustering with Spatial Max Pooling Operation
NASA Astrophysics Data System (ADS)
Zhang, Hongyan; Zhai, Han; Liao, Wenzhi; Cao, Liqin; Zhang, Liangpei; Pižurica, Aleksandra
2016-06-01
In this paper, we present a kernel sparse subspace clustering with spatial max pooling operation (KSSC-SMP) algorithm for hyperspectral remote sensing imagery. Firstly, the feature points are mapped from the original space into a higher dimensional space with a kernel strategy. In particular, the sparse subspace clustering (SSC) model is extended to nonlinear manifolds, which can better explore the complex nonlinear structure of hyperspectral images (HSIs) and obtain a much more accurate representation coefficient matrix. Secondly, through the spatial max pooling operation, the spatial contextual information is integrated to obtain a smoother clustering result. Through experiments, it is verified that the KSSC-SMP algorithm is a competitive clustering method for HSIs and outperforms the state-of-the-art clustering methods.
[Preconditioning impact on coronary perfusion during ischemia and reperfusion of heart].
Maslov, L N; Lishmanov, Iu B; Oeltgen, P; Peĭ, J-M; Krylatov, A V; Barzakh, E I; Portnichenko, A G; Meshoulam, R
2012-04-01
Recent studies have confirmed that ischemic preconditioning prevents the appearance of reperfusion endothelial dysfunction. However, the impact of preconditioning on the no-reflow phenomenon remains unresolved. The receptor mechanisms involved in the cardioprotective and vasoprotective effects of preconditioning are different. The ability of preconditioning to prevent reperfusion endothelial dysfunction depends upon bradykinin B2-receptor activation and not upon adenosine receptor stimulation. The vasoprotective effect of preconditioning is mediated via mechanisms relying in part on activation of protein kinase C, NO-synthase, cyclooxygenase, mitochondrial K(ATP)-channel opening and an enhancement of the antioxidative protection of the heart. Delayed preconditioning also exerts an endothelium-protective effect. Peroxynitrite, NO* and O2* are the triggers of this effect, but a possible end-effector involves endothelial NO-synthase. PMID:22834333
NASA Astrophysics Data System (ADS)
Jefferson, Jennifer L.; Gilbert, James M.; Constantine, Paul G.; Maxwell, Reed M.
2016-05-01
Integrated hydrologic models coupled to land surface models require several input parameters to characterize the land surface and to estimate energy fluxes. Uncertainty of input parameter values is inherent in any model and the sensitivity of output to these uncertain parameters becomes an important consideration. To better understand these connections in the context of hydrologic models, we use the ParFlow-Common Land Model (PF-CLM) to estimate energy fluxes given variations in 19 vegetation and land surface parameters over a 144-hour period. Latent, sensible and ground heat fluxes from bare soil and grass vegetation were estimated using single column and tilted-v domains. Energy flux outputs, along with the corresponding input parameters, from each of the four scenario simulations were evaluated using active subspaces. The active subspace method considers parameter sensitivity by quantifying a weight for each parameter. The method also evaluates the potential for dimension reduction by identifying the input-output relationship through the active variable, a linear combination of input parameters. The aerodynamic roughness length was the most important parameter for bare soil energy fluxes. Multiple parameters were important for energy fluxes from vegetated surfaces and depended on the type of energy flux. Relationships between land surface inputs and output fluxes varied between latent, sensible and ground heat, but were consistent between domain setup (i.e., with or without lateral flow) and vegetation type. A quadratic polynomial was used to describe the input-output relationship for these energy fluxes. The reduced-dimension model of land surface dynamics can be compared to observations or used to solve the inverse problem. Considering this work as a proof-of-concept, the active subspace method can be applied and extended to a range of domain setups, land cover types and time periods to obtain a reduced-form representation of any output of interest.
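The active subspace machinery the abstract describes reduces to an eigendecomposition of the average outer product of gradients. The sketch below uses a hypothetical 5-parameter ridge function as a stand-in for the PF-CLM energy-flux response (the paper's models have 19 parameters and no closed-form gradient).

```python
import numpy as np

# Active subspace of f(x) = exp(0.7*x0 + 0.3*x1) in 5 dimensions, a
# hypothetical stand-in for an energy-flux response surface.
rng = np.random.default_rng(1)
m = 5
X = rng.uniform(-1, 1, size=(500, m))          # parameter samples

w = np.array([0.7, 0.3, 0.0, 0.0, 0.0])
def grad_f(x):
    return np.exp(w @ x) * w                   # analytic gradient of f

# C = E[grad f grad f^T]; its leading eigenvectors span the active subspace
C = np.mean([np.outer(g, g) for g in map(grad_f, X)], axis=0)
evals, evecs = np.linalg.eigh(C)
evals, evecs = evals[::-1], evecs[:, ::-1]     # sort descending

w1 = evecs[:, 0]                               # first active direction
y = X @ w1                                     # active variable, a 1-D summary
print(evals[0] / evals.sum())                  # gradient energy captured
```

Because the test function varies only along one direction, the leading eigenvalue captures essentially all the gradient energy, and fitting a polynomial in the scalar `y` (as the paper does with a quadratic) would recover the input-output map.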
Transfer and teleportation of quantum states encoded in decoherence-free subspace
Wei Hua; Deng Zhijao; Zhang Xiaolong; Feng Mang
2007-11-15
Quantum state transfer and teleportation, with qubits encoded in internal states of atoms in cavities, among spatially separated nodes of a quantum network in a decoherence-free subspace are proposed, based on a cavity-assisted interaction with single-photon pulses. We show in detail the implementation of a logic-qubit Hadamard gate and a two-logic-qubit conditional gate, and discuss the experimental feasibility of our scheme.
Subspace based adaptive denoising of surface EMG from neurological injury patients
NASA Astrophysics Data System (ADS)
Liu, Jie; Ying, Dongwen; Zev Rymer, William; Zhou, Ping
2014-10-01
Objective: After neurological injuries such as spinal cord injury, voluntary surface electromyogram (EMG) signals recorded from affected muscles are often corrupted by interferences, such as spurious involuntary spikes and background noises produced by physiological and extrinsic/accidental origins, imposing difficulties for signal processing. Such interferences are difficult to mitigate with conventional methods. The aim of this study was to develop a subspace-based denoising method to suppress involuntary background spikes contaminating voluntary surface EMG recordings. Approach: The Karhunen-Loeve transform was utilized to decompose a noisy signal into a signal subspace and a noise subspace. An optimal estimate of the EMG signal is derived from the signal subspace and the noise power. Specifically, this estimator is capable of making a tradeoff between interference reduction and signal distortion. Since the estimator partially relies on the estimate of noise power, an adaptive method was presented to sequentially track the variation of interference power. The proposed method was evaluated using both semi-synthetic and real surface EMG signals. Main results: The experiments confirmed that the proposed method can effectively suppress interferences while keeping the distortion of the voluntary EMG signal at a low level. The proposed method can greatly facilitate further signal processing, such as onset detection of voluntary muscle activity. Significance: The proposed method can provide a powerful tool for suppressing background spikes and noise contaminating voluntary surface EMG signals of paretic muscles after neurological injuries, which is of great importance for their multi-purpose applications.
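The Karhunen-Loeve signal/noise subspace split can be illustrated on a synthetic signal. This sketch assumes the noise power is known rather than adaptively tracked as in the paper, and uses a sinusoid as a stand-in for voluntary EMG.

```python
import numpy as np

# Subspace (Karhunen-Loeve) denoising sketch: frames of a noisy signal
# are projected onto the eigenvectors of their covariance; components
# whose energy falls below the noise power are suppressed.
rng = np.random.default_rng(2)
t = np.arange(4000)
clean = np.sin(2 * np.pi * t / 40)           # stand-in for voluntary EMG
noise_pow = 0.25
noisy = clean + rng.normal(scale=np.sqrt(noise_pow), size=t.size)

L = 40                                       # frame length
frames = noisy.reshape(-1, L)                # one frame per row
R = frames.T @ frames / frames.shape[0]      # sample covariance, L x L
evals, V = np.linalg.eigh(R)

# Wiener-type gain per eigen-component: (lambda - sigma^2)/lambda, floored at 0
gain = np.clip((evals - noise_pow) / evals, 0.0, 1.0)
denoised = (frames @ V * gain) @ V.T         # project, weight, reconstruct

err_noisy = np.mean((noisy - clean) ** 2)
err_den = np.mean((denoised.ravel() - clean) ** 2)
print(err_noisy, err_den)
```

The signal concentrates in a single eigen-direction, so most noise directions receive zero gain; this is the tradeoff between interference reduction and signal distortion that the abstract's estimator tunes.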
NASA Astrophysics Data System (ADS)
Zhou, Tao; Xie, Kai; Zhang, Junhao; Yang, Jie; He, Xiangjian
2015-05-01
It is a challenging task to develop an effective and robust object tracking method due to factors such as severe occlusion, background clutters, abrupt motion, illumination variation, and so on. A tracking algorithm based on weighted subspace reconstruction error is proposed. The discriminative weights are defined based on minimizing reconstruction error with a positive dictionary while maximizing reconstruction error with a negative dictionary. Then a confidence map for candidates is computed through the subspace reconstruction error. Finally, the location of the target object is estimated by maximizing the decision map which combines the discriminative weights and subspace reconstruction error. Furthermore, the new evaluation method based on a forward-backward tracking criterion is used to verify the proposed method and demonstrates its robustness in the updating stage and its effectiveness in the reduction of accumulated errors. Experimental results on 12 challenging video sequences show that the proposed algorithm performs favorably against 12 state-of-the-art methods in terms of accuracy and robustness.
Dong, Daoyi; Chen, Chunlin; Tarn, Tzyh-Jong; Pechen, Alexander; Rabitz, Herschel
2008-08-01
In this paper, an incoherent control scheme for accomplishing the state control of a class of quantum systems which have wavefunction-controllable subspaces is proposed. This scheme includes the following two steps: projective measurement on the initial state and learning control in the wavefunction-controllable subspace. The first step probabilistically projects the initial state into the wavefunction-controllable subspace. The probability of success is sensitive to the initial state; however, it can be greatly improved through multiple experiments on several identical initial states even in the case with a small probability of success for an individual measurement. The second step finds a local optimal control sequence via quantum reinforcement learning and drives the controlled system to the objective state through a set of suitable controls. In this strategy, the initial states can be unknown identical states, the quantum measurement is used as an effective control, and the controlled system is not necessarily unitarily controllable. This incoherent control scheme provides an alternative quantum engineering strategy for locally controllable quantum systems.
N-Screen Aware Multicriteria Hybrid Recommender System Using Weight Based Subspace Clustering
Ullah, Farman; Lee, Sungchang
2014-01-01
This paper presents a recommender system for N-screen services in which users have multiple devices with different capabilities. In N-screen services, a user can use various devices in different locations and time and can change a device while the service is running. N-screen aware recommendation seeks to improve the user experience with recommended content by considering the user N-screen device attributes such as screen resolution, media codec, remaining battery time, and access network and the user temporal usage pattern information that are not considered in existing recommender systems. For N-screen aware recommendation support, this work introduces a user device profile collaboration agent, manager, and N-screen control server to acquire and manage the user N-screen devices profile. Furthermore, a multicriteria hybrid framework is suggested that incorporates the N-screen devices information with user preferences and demographics. In addition, we propose an individual feature and subspace weight based clustering (IFSWC) to assign different weights to each subspace and each feature within a subspace in the hybrid framework. The proposed system improves the accuracy, precision, scalability, sparsity, and cold start issues. The simulation results demonstrate the effectiveness and prove the aforementioned statements. PMID:25152921
N-screen aware multicriteria hybrid recommender system using weight based subspace clustering.
Ullah, Farman; Sarwar, Ghulam; Lee, Sungchang
2014-01-01
This paper presents a recommender system for N-screen services in which users have multiple devices with different capabilities. In N-screen services, a user can use various devices in different locations and time and can change a device while the service is running. N-screen aware recommendation seeks to improve the user experience with recommended content by considering the user N-screen device attributes such as screen resolution, media codec, remaining battery time, and access network and the user temporal usage pattern information that are not considered in existing recommender systems. For N-screen aware recommendation support, this work introduces a user device profile collaboration agent, manager, and N-screen control server to acquire and manage the user N-screen devices profile. Furthermore, a multicriteria hybrid framework is suggested that incorporates the N-screen devices information with user preferences and demographics. In addition, we propose an individual feature and subspace weight based clustering (IFSWC) to assign different weights to each subspace and each feature within a subspace in the hybrid framework. The proposed system improves the accuracy, precision, scalability, sparsity, and cold start issues. The simulation results demonstrate the effectiveness and prove the aforementioned statements.
Identifying Subspace Gene Clusters from Microarray Data Using Low-Rank Representation
Cui, Yan; Zheng, Chun-Hou; Yang, Jian
2013-01-01
Identifying subspace gene clusters from the gene expression data is useful for discovering novel functional gene interactions. In this paper, we propose to use low-rank representation (LRR) to identify the subspace gene clusters from microarray data. LRR seeks the lowest-rank representation among all the candidates that can represent the genes as linear combinations of the bases in the dataset. The clusters can be extracted based on the block diagonal representation matrix obtained using LRR, and they can well capture the intrinsic patterns of genes with similar functions. Meanwhile, the parameter of LRR can balance the effect of noise so that the method is capable of extracting useful information from data with a high level of background noise. Compared with traditional methods, our approach can identify genes with similar functions yet without similar expression profiles. Also, it can assign one gene to different clusters. Moreover, our method is robust to the noise and can identify more biologically relevant gene clusters. When applied to three public datasets, the results show that the LRR based method is superior to existing methods for identifying subspace gene clusters. PMID:23527177
NASA Astrophysics Data System (ADS)
Kovalevsky, Louis; Gosselet, Pierre
2016-09-01
The Variational Theory of Complex Rays (VTCR) is an indirect Trefftz method designed to study systems governed by Helmholtz-like equations. It uses wave functions to represent the solution inside elements, which reduces the dispersion error compared to classical polynomial approaches but the resulting system is prone to be ill conditioned. This paper gives a simple and original presentation of the VTCR using the discontinuous Galerkin framework and it traces back the ill-conditioning to the accumulation of eigenvalues near zero for the formulation written in terms of wave amplitude. The core of this paper presents an efficient solving strategy that overcomes this issue. The key element is the construction of a search subspace where the condition number is controlled at the cost of a limited decrease of attainable precision. An augmented LSQR solver is then proposed to solve efficiently and accurately the complete system. The approach is successfully applied to different examples.
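The key element of the solving strategy, a search subspace on which the condition number is controlled at the cost of a bounded loss of attainable precision, can be sketched as follows. The subspace here is built from an SVD threshold, which is an assumption; the VTCR paper constructs it differently, and its augmented LSQR variant is not reproduced.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

# Restrict an ill-conditioned least-squares solve to a search subspace
# on which the condition number is capped, then solve with plain LSQR.
rng = np.random.default_rng(3)
A = rng.normal(size=(60, 40)) @ np.diag(10.0 ** -np.arange(40))  # ill-conditioned
x_true = rng.normal(size=40)
b = A @ x_true                                 # consistent right-hand side

U, s, Vt = np.linalg.svd(A, full_matrices=False)
keep = s > 1e-6 * s[0]                         # cap the condition number at 1e6
Vk = Vt[keep].T                                # search subspace basis

Ak = A @ Vk                                    # well-conditioned restriction
y = lsqr(Ak, b, atol=1e-12, btol=1e-12, iter_lim=2000)[0]
x = Vk @ y                                     # lift back to the full space
print(np.linalg.cond(Ak))
print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))
```

Directions with singular values below the threshold are sacrificed (the "limited decrease of attainable precision"), but the restricted operator is numerically benign and the residual stays small.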
Preconditioned iterative methods for space-time fractional advection-diffusion equations
NASA Astrophysics Data System (ADS)
Zhao, Zhi; Jin, Xiao-Qing; Lin, Matthew M.
2016-08-01
In this paper, we propose practical numerical methods for solving a class of initial-boundary value problems of space-time fractional advection-diffusion equations. First, we propose an implicit method based on two-sided Grünwald formulae and discuss its stability and consistency. Then, we develop the preconditioned generalized minimal residual (preconditioned GMRES) method and preconditioned conjugate gradient normal residual (preconditioned CGNR) method with easily constructed preconditioners. Importantly, because resulting systems are Toeplitz-like, fast Fourier transform can be applied to significantly reduce the computational cost. We perform numerical experiments to demonstrate the efficiency of our preconditioners, even in cases with variable coefficients.
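The FFT-based application of a circulant preconditioner to a Toeplitz-like system can be sketched generically. This uses the classic Strang circulant, an assumption for illustration; the paper constructs preconditioners tailored to the fractional advection-diffusion discretization.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.sparse.linalg import LinearOperator, gmres

# Circulant-preconditioned GMRES for a Toeplitz system; the
# preconditioner solve costs O(n log n) via the FFT.
n = 256
col = np.zeros(n); col[:2] = [2.5, -1.0]       # first column (t_0, t_1, ...)
row = np.zeros(n); row[:2] = [2.5, -1.2]       # first row (t_0, t_-1, ...)
A = toeplitz(col, row)                         # diagonally dominant

# Strang circulant: keep the central diagonals of A, wrap the rest around
c = np.zeros(n)
c[: n // 2 + 1] = col[: n // 2 + 1]
c[n // 2 + 1:] = row[1: n // 2][::-1]
eig = np.fft.fft(c)                            # eigenvalues of the circulant

def apply_Cinv(v):                             # C^{-1} v via two FFTs
    return np.real(np.fft.ifft(np.fft.fft(v) / eig))

M = LinearOperator((n, n), matvec=apply_Cinv, dtype=float)
b = np.ones(n)
x, info = gmres(A, b, M=M)
print(info, np.linalg.norm(A @ x - b))
```

Because the preconditioned spectrum clusters near one for banded Toeplitz matrices, GMRES converges in a handful of iterations while every preconditioner application stays at O(n log n).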
Pyruvate-dependent preconditioning and cardioprotection in murine myocardium.
Flood, Amanda; Hack, Benjamin D; Headrick, John P
2003-03-01
1. Whether pyruvate inhibits or can actually initiate myocardial preconditioning is unclear and whether pyruvate provides protection via its action as a 'cosubstrate' with glucose or via alternative mechanisms also remains controversial. We examined effects of a high concentration of pyruvate (10 mmol/L) alone or with 15 mmol/L glucose in mouse hearts subjected to 20 min ischaemia and 30 min reperfusion. 2. Provision of 10 mmol/L pyruvate alone or as a cosubstrate markedly reduced ischaemic contracture and enhanced postischaemic recovery. Time to contracture was increased from approximately 3 min to over 8 min, peak contracture was reduced from 90 mmHg to less than 60 mmHg and postischaemic pressure development was also improved. Effects on contracture were independent of the presence of pyruvate during ischaemia and improved postischaemic recovery was evident with pre-ischaemic pyruvate perfusion. 3. Cardioprotection did not require the presence of pyruvate during ischaemia or reperfusion and effects of pyruvate pretreatment could be mimicked by pretreatment with 1 mmol/L dichloroacetate (DCA), an activator of pyruvate dehydrogenase. 4. Myocardial adenosine efflux and Ca2+ content were elevated (by 215 and 65%, respectively) following pretreatment with pyruvate, potentially triggering a preconditioned state. A role for adenosine A1 receptors is supported by lack of added protection with pyruvate in hearts transgenically overexpressing adenosine A1 receptors. 5. Collectively, these observations demonstrate that pre-ischaemic treatment with pyruvate or DCA provides a beneficial preconditioning-like effect in ischaemic and postischaemic myocardium. The response appears unrelated to glycolytic inhibition, but may be mediated via transient changes in adenosine levels and/or cellular Ca2+.
Kerfriden, P.; Gosselet, P.; Adhikari, S.; Bordas, S.
2013-01-01
This article describes a bridge between POD-based model order reduction techniques and classical Newton/Krylov solvers. This bridge is used to derive an efficient algorithm to correct, “on-the-fly”, the reduced order modelling of highly nonlinear problems undergoing strong topological changes. Damage initiation problems are addressed and tackled via a corrected hyperreduction method. It is shown that the relevance of the reduced order model can be significantly improved with reasonable additional costs when using this algorithm, even when strong topological changes are involved. PMID:27076688
Weighted graph based ordering techniques for preconditioned conjugate gradient methods
NASA Technical Reports Server (NTRS)
Clift, Simon S.; Tang, Wei-Pai
1994-01-01
We describe the basis of a matrix ordering heuristic for improving the incomplete factorization used in preconditioned conjugate gradient techniques applied to anisotropic PDEs. Several new matrix ordering techniques, derived from well-known algorithms in combinatorial graph theory, which attempt to implement this heuristic, are described. These ordering techniques are tested against a number of matrices arising from linear anisotropic PDEs, and compared with other matrix ordering techniques. A variation of the reverse Cuthill-McKee (RCM) ordering is shown to generally improve the quality of incomplete factorization preconditioners.
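The effect of an RCM-style reordering on matrix profile can be shown on a small model problem. This is a generic sketch of the baseline ordering, not the paper's anisotropy-weighted heuristics.

```python
import numpy as np
from scipy.sparse import diags, identity, kron
from scipy.sparse.csgraph import reverse_cuthill_mckee

# Reverse Cuthill-McKee reordering before incomplete factorization:
# a banded profile keeps incomplete factors closer to the exact ones.

def bandwidth(A):
    i, j = A.nonzero()
    return int(np.max(np.abs(i - j)))

m = 15
T = diags([-1, 2, -1], [-1, 0, 1], shape=(m, m))
A = (kron(identity(m), T) + kron(T, identity(m))).tocsr()  # 2-D Laplacian

rperm = np.random.default_rng(4).permutation(m * m)
B = A[rperm][:, rperm]                        # scrambled node numbering
perm = reverse_cuthill_mckee(B, symmetric_mode=True)
C = B[perm][:, perm]                          # RCM-reordered
print(bandwidth(B), "->", bandwidth(C))
```

The scrambled grid Laplacian has a bandwidth near the matrix dimension; RCM recovers a narrow band close to the grid width, which is exactly the profile an incomplete factorization exploits.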
Ischemic preconditioning enhances integrity of coronary endothelial tight junctions
Li, Zhao; Jin, Zhu-Qiu
2012-08-31
Highlights: ► Cardiac tight junctions are present between coronary endothelial cells. ► Ischemic preconditioning preserves the structural and functional integrity of tight junctions. ► Myocardial edema is prevented in hearts subjected to ischemic preconditioning. ► Ischemic preconditioning enhances translocation of ZO-2 from cytosol to cytoskeleton. -- Abstract: Ischemic preconditioning (IPC) is one of the most effective procedures known to protect hearts against ischemia/reperfusion (IR) injury. Tight junction (TJ) barriers occur between coronary endothelial cells. TJs provide barrier function to maintain the homeostasis of the inner environment of tissues. However, the effect of IPC on the structure and function of cardiac TJs remains unknown. We tested the hypothesis that myocardial IR injury ruptures the structure of TJs and impairs endothelial permeability whereas IPC preserves the structural and functional integrity of TJs in the blood-heart barrier. Langendorff hearts from C57BL/6J mice were prepared and perfused with Krebs-Henseleit buffer. Cardiac function, creatine kinase release, and myocardial edema were measured. Cardiac TJ function was evaluated by measuring Evans blue-conjugated albumin (EBA) content in the extravascular compartment of hearts. Expression and translocation of zonula occludens (ZO)-2 in IR and IPC hearts were detected with Western blot. A subset of hearts was processed for the observation of ultra-structure of cardiac TJs with transmission electron microscopy. There were clear TJs between coronary endothelial cells of mouse hearts. IR caused the collapse of TJs whereas IPC sustained the structure of TJs. IR increased extravascular EBA content in the heart and myocardial edema but decreased the expression of ZO-2 in the cytoskeleton. IPC maintained the structure of TJs. Cardiac EBA content and edema were reduced in IPC hearts. IPC
Incomplete block factorization preconditioning for indefinite elliptic problems
Guo, Chun-Hua
1996-12-31
The application of the finite difference method to approximate the solution of an indefinite elliptic problem produces a linear system whose coefficient matrix is block tridiagonal and symmetric indefinite. Such a linear system can be solved efficiently by a conjugate residual method, particularly when combined with a good preconditioner. We show that a specific incomplete block factorization exists for the indefinite matrix if the mesh size is reasonably small, and that this factorization can serve as an efficient preconditioner. Some effort is made to estimate the eigenvalues of the preconditioned matrix. Numerical results are also given.
Chiueh, C.C. E-mail: chiueh@tmu.edu.tw; Andoh, Tsugunobu; Chock, P. Boon
2005-09-01
Hormesis, a stress tolerance, can be induced by ischemic preconditioning stress. In addition to preconditioning, it may be induced by other means, such as gas anesthetics. Preconditioning mechanisms, which may be mediated by reprogramming survival genes and proteins, are obscure. A known neurotoxicant, 1-Methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP), causes less neurotoxicity in the mice that are preconditioned. Pharmacological evidences suggest that the signaling pathway of ·NO-cGMP-PKG (protein kinase G) may mediate the preconditioning phenomenon. We developed a human SH-SY5Y cell model for investigating the ·NO-mediated signaling pathway, gene regulation, and protein expression following a sublethal preconditioning stress caused by a brief 2-h serum deprivation. Preconditioned human SH-SY5Y cells are more resistant against severe oxidative stress and apoptosis caused by lethal serum deprivation and 1-methyl-4-phenylpyridinium (MPP+). Both sublethal and lethal oxidative stress caused by serum withdrawal increased neuronal nitric oxide synthase (nNOS/NOS1) expression and ·NO levels to a similar extent. In addition to free radical scavengers, inhibition of nNOS, guanylyl cyclase, and PKG blocks hormesis induced by preconditioning. S-nitrosothiols and 6-Br-cGMP produce a cytoprotection mimicking the action of preconditioning tolerance. There are two distinct cGMP-mediated survival pathways: (i) the up-regulation of a redox protein thioredoxin (Trx) for elevating mitochondrial levels of antioxidant protein Mn superoxide dismutase (MnSOD) and antiapoptotic protein Bcl-2, and (ii) the activation of mitochondrial ATP-sensitive potassium channels [K(ATP)]. Preconditioning induction of Trx increased tolerance against MPP+, which was blocked by Trx mRNA antisense oligonucleotide and Trx reductase inhibitor. It is concluded that Trx plays a pivotal role in ·NO-dependent preconditioning hormesis against
Preconditioning 2D Integer Data for Fast Convex Hull Computations.
Cadenas, José Oswaldo; Megson, Graham M; Luengo Hendriks, Cris L
2016-01-01
In order to accelerate computing the convex hull on a set of n points, a heuristic procedure is often applied to reduce the number of points to a set of s points, s ≤ n, which also contains the same hull. We present an algorithm to precondition 2D data with integer coordinates bounded by a box of size p × q before building a 2D convex hull, with three distinct advantages. First, we prove that under the condition min(p, q) ≤ n the algorithm executes in time within O(n); second, no explicit sorting of data is required; and third, the reduced set of s points forms a simple polygonal chain and thus can be directly pipelined into an O(n) time convex hull algorithm. This paper empirically evaluates and quantifies the speed up gained by preconditioning a set of points by a method based on the proposed algorithm before using common convex hull algorithms to build the final hull. A speedup factor of at least four is consistently found from experiments on various datasets when the condition min(p, q) ≤ n holds; the smaller the ratio min(p, q)/n is in the dataset, the greater the speedup factor achieved. PMID:26938221
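A single-pass preconditioning heuristic in the same spirit (not the authors' exact algorithm) keeps, for each of the p integer x-columns, only the lowest and highest y. Every hull vertex is a column extreme, so the hull is preserved; the dictionary pass is O(n), and the final sort touches at most 2p kept points, consistent with the min(p, q) ≤ n regime.

```python
import numpy as np

def precondition(points):
    # Keep only the min- and max-y point of each integer x-column.
    lo, hi = {}, {}
    for x, y in points:                        # one O(n) pass
        if x not in lo or y < lo[x]:
            lo[x] = y
        if x not in hi or y > hi[x]:
            hi[x] = y
    return sorted({(int(x), int(y)) for x in lo for y in (lo[x], hi[x])})

def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def hull(pts):                                 # Andrew's monotone chain
    pts = sorted(set(map(tuple, pts)))
    if len(pts) <= 2:
        return pts
    def half(seq):
        out = []
        for q in seq:
            while len(out) >= 2 and cross(out[-2], out[-1], q) <= 0:
                out.pop()
            out.append(q)
        return out
    return half(pts)[:-1] + half(pts[::-1])[:-1]

rng = np.random.default_rng(5)
pts = rng.integers(0, 64, size=(5000, 2))      # n = 5000 points, p = q = 64
small = precondition(pts)
print(len(small), "of", len(pts))
```

The reduced set has at most 2p points yet yields exactly the same convex hull, so any downstream hull algorithm runs on a far smaller input.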
Cardioprotection Acquired Through Exercise: The Role of Ischemic Preconditioning
Marongiu, Elisabetta; Crisafulli, Antonio
2014-01-01
A great bulk of evidence supports the concept that regular exercise training can reduce the incidence of coronary events and increase survival chances after myocardial infarction. These exercise-induced beneficial effects on the myocardium are reached by means of the reduction of several risk factors relating to cardiovascular disease, such as high cholesterol, hypertension, and obesity. Furthermore, it has been demonstrated that exercise can reproduce the “ischemic preconditioning” (IP), which refers to the capacity of short periods of ischemia to render the myocardium more resistant to subsequent ischemic insult and to limit infarct size during prolonged ischemia. However, IP is a complex phenomenon which, along with infarct size reduction, can also provide protection against arrhythmia and myocardial stunning due to ischemia-reperfusion. Several lines of evidence demonstrate that preconditioning may be directly induced by exercise, thus inducing a protective phenotype at the heart level without the necessity of causing ischemia. Exercise appears to act as a physiological stress that induces beneficial myocardial adaptive responses at cellular level. The purpose of the present paper is to review the latest data on the role played by exercise in triggering myocardial preconditioning. PMID:24720421
Sevoflurane Preconditioning Confers Neuroprotection via Anti-apoptosis Effects.
Wang, Hailian; Shi, Hong; Yu, Qiong; Chen, Jun; Zhang, Feng; Gao, Yanqin
2016-01-01
Neuroprotection against cerebral ischemia afforded by volatile anesthetic preconditioning (APC) has been demonstrated both in vivo and in vitro, yet the underlying mechanism is poorly understood. We previously reported that repeated sevoflurane APC reduced infarct size in rats after focal ischemia. In this study, we investigated whether inhibition of apoptotic signaling cascades contributes to sevoflurane APC-induced neuroprotection. Male Sprague-Dawley rats were exposed to ambient air or 2.4 % sevoflurane for 30 min per day for 4 consecutive days and then subjected to occlusion of the middle cerebral artery (MCAO) for 60 min at 24 h after the last sevoflurane intervention. APC with sevoflurane markedly decreased apoptotic cell death in rat brains, which was accompanied by decreased caspase-3 cleavage and cytochrome c release. The apoptotic suppression was associated with increased ratios of anti-apoptotic Bcl-2 family proteins over pro-apoptotic proteins and with decreased activation of JNK and p53 pathways. Thus, our data suggest that suppression of apoptotic cell death contributes to the neuroprotection against ischemic brain injury conferred by sevoflurane preconditioning. PMID:26463923
Parallelizable approximate solvers for recursions arising in preconditioning
Shapira, Y.
1996-12-31
For the recursions used in the Modified Incomplete LU (MILU) preconditioner, namely, the incomplete decomposition, forward elimination and back substitution processes, a parallelizable approximate solver is presented. The present analysis shows that the solutions of the recursions depend only weakly on their initial conditions and may be interpreted to indicate that the inexact solution is close, in some sense, to the exact one. The method is based on a domain decomposition approach, suitable for parallel implementations with message passing architectures. It requires a fixed number of communication steps per preconditioned iteration, independently of the number of subdomains or the size of the problem. The overlapping subdomains are either cubes (suitable for mesh-connected arrays of processors) or constructed by the data-flow rule of the recursions (suitable for line-connected arrays with possibly SIMD or vector processors). Numerical examples show that, in both cases, the overhead in the number of iterations required for convergence of the preconditioned iteration is small relative to the speed-up gained.
Preconditioning 2D Integer Data for Fast Convex Hull Computations
2016-01-01
In order to accelerate computing the convex hull on a set of n points, a heuristic procedure is often applied to reduce the number of points to a set of s points, s ≤ n, which also contains the same hull. We present an algorithm to precondition 2D data with integer coordinates bounded by a box of size p × q before building a 2D convex hull, with three distinct advantages. First, we prove that under the condition min(p, q) ≤ n the algorithm executes in time within O(n); second, no explicit sorting of data is required; and third, the reduced set of s points forms a simple polygonal chain and thus can be directly pipelined into an O(n) time convex hull algorithm. This paper empirically evaluates and quantifies the speed up gained by preconditioning a set of points by a method based on the proposed algorithm before using common convex hull algorithms to build the final hull. A speedup factor of at least four is consistently found from experiments on various datasets when the condition min(p, q) ≤ n holds; the smaller the ratio min(p, q)/n is in the dataset, the greater the speedup factor achieved. PMID:26938221
Stress Preconditioning of Spreading Depression in the Locust CNS
Rodgers, Corinne I.; Armstrong, Gary A. B.; Shoemaker, Kelly L.; LaBrie, John D.; Moyes, Christopher D.; Robertson, R. Meldrum
2007-01-01
Cortical spreading depression (CSD) is closely associated with important pathologies including stroke, seizures and migraine. The mechanisms underlying SD in its various forms are still incompletely understood. Here we describe SD-like events in an invertebrate model, the ventilatory central pattern generator (CPG) of locusts. Using K+-sensitive microelectrodes, we measured extracellular K+ concentration ([K+]o) in the metathoracic neuropile of the CPG while monitoring CPG output electromyographically from muscle 161 in the second abdominal segment to investigate the role of K+ in failure of neural circuit operation induced by various stressors. Failure of ventilation in response to different stressors (hyperthermia, anoxia, ATP depletion, Na+/K+ ATPase impairment, K+ injection) was associated with a disturbance of CNS ion homeostasis that shares the characteristics of CSD and SD-like events in vertebrates. Hyperthermic failure was preconditioned by prior heat shock (3 h, 45°C) and induced thermotolerance was associated with an increase in the rate of clearance of extracellular K+ that was not linked to changes in ATP levels or total Na+/K+ ATPase activity. Our findings suggest that SD-like events in locusts are adaptive to terminate neural network operation and conserve energy during stress and that they can be preconditioned by experience. We propose that they share mechanisms with CSD in mammals suggesting a common evolutionary origin. PMID:18159249
40 CFR 85.2220 - Preconditioned two speed idle test-EPA 91.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 18 2010-07-01 2010-07-01 false Preconditioned two speed idle test-EPA... Warranty Short Tests § 85.2220 Preconditioned two speed idle test—EPA 91. (a) General requirements—(1...-speed mode followed immediately by a first-chance idle mode. (ii) The second-chance test as...
Sensory Preconditioning in Newborn Rabbits: From Common to Distinct Odor Memories
ERIC Educational Resources Information Center
Coureaud, Gerard; Tourat, Audrey; Ferreira, Guillaume
2013-01-01
This study evaluated whether olfactory preconditioning is functional in newborn rabbits and based on joined or independent memory of odorants. First, after exposure to odorants A+B, the conditioning of A led to high responsiveness to odorant B. Second, responsiveness to B persisted after amnesia of A. Third, preconditioning was also functional…
NASA Astrophysics Data System (ADS)
Muthuvalu, Mohana Sundaram
2016-06-01
In this paper, a performance analysis of the preconditioned Gauss-Seidel iterative methods for solving the dense linear systems arising from Fredholm integral equations of the second kind is investigated. The formulation and implementation of the preconditioned Gauss-Seidel methods are presented. Numerical results are included in order to verify the performance of the methods.
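For illustration, the kind of dense system the abstract describes can be produced by a Nyström (trapezoidal) discretization of a second-kind Fredholm equation and solved with plain Gauss-Seidel sweeps; the specific preconditioners studied in the paper are not reproduced here:

```python
import numpy as np

def gauss_seidel(A, b, tol=1e-10, maxit=500):
    """Plain Gauss-Seidel sweeps on a dense system Ax = b."""
    n = len(b)
    x = np.zeros(n)
    for it in range(maxit):
        x_old = x.copy()
        for i in range(n):
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x, it + 1

# Nystrom (trapezoidal) discretization of the second-kind equation
#   x(s) - 0.5 * int_0^1 exp(s - t) x(t) dt = f(s)
# gives the dense, diagonally dominant system (I - K W) x = f.
n = 64
s = np.linspace(0.0, 1.0, n)
w = np.full(n, 1.0 / (n - 1)); w[0] = w[-1] = 0.5 / (n - 1)
A = np.eye(n) - 0.5 * np.exp(s[:, None] - s[None, :]) * w[None, :]
x_true = np.sin(np.pi * s)          # manufactured solution
b = A @ x_true
x, iters = gauss_seidel(A, b)
```

Because the discretized operator is strictly diagonally dominant here, the sweeps converge without preconditioning; the paper's preconditioners aim to cut the iteration count further.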
Lyamina, N P; Karpova, E S; Kotel'nikova, E V; Bizyaeva, E A
2015-01-01
A comprehensive analysis of the efficiency of different variants of preconditioning is currently of special importance, since realizing the potential of endogenous protective effects extends the possibilities for anti-ischemic protection of the myocardium at different stages of CHD. Today, the main principles of preconditioning are purposefully applied in the development of therapeutic strategies for the treatment of CHD. The modalities most widely used in clinical practice are local and remote preconditioning, as well as preconditioning by physical exercise, whose well-known protective effects are exploited in cardiac surgery and routine clinical practice. Elaboration of rehabilitative and preventive programs that take into account the vaso- and cardioprotective effects of preconditioning may significantly increase the effectiveness of rehabilitative treatment in CHD patients with poor organic coronary and myocardial reserve.
Kamon, M.; Phillips, J.R.
1994-12-31
In this paper techniques are presented for preconditioning equations generated by discretizing constrained vector integral equations associated with magnetoquasistatic analysis. Standard preconditioning approaches often fail on these problems. The authors present a specialized preconditioning technique and prove convergence bounds independent of the constraint equations and electromagnetic excitation frequency. Computational results from analyzing several electronic packaging examples are given to demonstrate that the new preconditioning approach can sometimes reduce the number of GMRES iterations by more than an order of magnitude.
Subspace-based additive fuzzy systems for classification and dimension reduction
NASA Astrophysics Data System (ADS)
Jauch, Thomas W.
1997-10-01
In classification tasks, the appearance of high dimensional feature vectors and small datasets is a common problem. It is well known that these two characteristics usually result in an oversized model with poor generalization power. In this contribution, a new way to cope with such tasks is presented, based on the assumption that in high dimensional problems almost all data points are located in a low dimensional subspace. A way is proposed to design a fuzzy system on a unified framework, and to use it to develop a new model for classification tasks. It is shown that the new model can be understood as an additive fuzzy system with parameter-based basis functions. Different parts of the model are only defined in a subspace of the whole feature space. The subspaces are not defined a priori but are subject to an optimization procedure, as are all other parameters of the model. The new model has the capability to cope with high feature dimensions. The model has similarities to projection pursuit and to the mixture-of-experts architecture. The model is trained in a supervised manner via conjugate gradients and logistic regression, or backfitting and conjugate gradients, to handle classification tasks. An efficient initialization procedure is also presented. In addition, a technique based on oblique projections is presented which extends the model's ability to use data with missing features. It is possible to use data with missing features in both the training and the classification phase. Based on the design of the model, certain basis functions can be pruned with an OLS (orthogonal least squares) based technique in order to reduce the model size. Results are presented on an artificial example and an application example.
A fast algorithm for the recursive calculation of dominant singular subspaces
NASA Astrophysics Data System (ADS)
Mastronardi, N.; van Barel, M.; Vandebril, R.
2008-09-01
In many engineering applications it is required to compute the dominant subspace of a matrix A of dimension m×n, with m ≫ n. Often the matrix A is produced incrementally, so all the columns are not available simultaneously. This problem arises, e.g., in image processing, where each column of the matrix A represents an image of a given sequence leading to a singular value decomposition-based compression [S. Chandrasekaran, B.S. Manjunath, Y.F. Wang, J. Winkeler, H. Zhang, An eigenspace update algorithm for image analysis, Graphical Models and Image Process. 59 (5) (1997) 321-332]. Furthermore, the so-called proper orthogonal decomposition approximation uses the left dominant subspace of a matrix A where a column consists of a time instance of the solution of an evolution equation, e.g., the flow field from a fluid dynamics simulation. Since these flow fields tend to be very large, only a small number can be stored efficiently during the simulation, and therefore an incremental approach is useful [P. Van Dooren, Gramian based model reduction of large-scale dynamical systems, in: Numerical Analysis 1999, Chapman & Hall, CRC Press, London, Boca Raton, FL, 2000, pp. 231-247]. In this paper an algorithm for computing an approximation of the left dominant subspace of size k of A, with k ≪ m, n, is proposed, requiring at each iteration O(mk + k²) floating point operations. Moreover, the proposed algorithm exhibits a lot of parallelism that can be exploited for a suitable implementation on a parallel computer.
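A textbook incremental-SVD update illustrates the general idea: each arriving column triggers one small core SVD plus O(mk) work. This is a generic sketch, not the authors' exact recursion:

```python
import numpy as np

def incremental_dominant_subspace(columns, k):
    """Rank-k tracking of the left dominant subspace as columns arrive
    one at a time. Each update needs one small SVD plus O(mk) work;
    a textbook incremental-SVD sketch, not the paper's algorithm."""
    cols = iter(columns)
    a0 = np.asarray(next(cols), dtype=float)
    S = np.array([np.linalg.norm(a0)])
    U = a0[:, None] / S[0]
    for a in cols:
        p = U.T @ a                       # coefficients in current basis
        r = a - U @ p                     # residual outside span(U)
        rho = np.linalg.norm(r)
        q = r / rho if rho > 1e-10 else np.zeros_like(r)
        # small core matrix mixing old singular values with the new column
        m = len(S)
        K = np.zeros((m + 1, m + 1))
        K[:m, :m] = np.diag(S)
        K[:m, -1] = p
        K[-1, -1] = rho
        Uk, Sk, _ = np.linalg.svd(K)
        U = (np.hstack([U, q[:, None]]) @ Uk)[:, :k]   # rotate and truncate
        S = Sk[:k]
    return U, S

# feed the columns of an exactly rank-3 matrix one at a time
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 30))
U, S = incremental_dominant_subspace(A.T, k=3)
```

When the data have rank at most k, no truncation error is incurred and the tracked subspace matches the true left dominant subspace.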
Time-varying subspace dimensionality: Useful as a seismic signal detection method?
NASA Astrophysics Data System (ADS)
Rowe, C. A.; Stead, R. J.; Begnaud, M. L.
2012-12-01
We explore the application of dimensional analysis to the problem of anomaly detection in multichannel time series. These techniques, which have been used for real-time load management in large computer systems, revolve around the time-varying dimensionality estimates of the signal subspace. Our application is to multiple channels of incoming seismic waveform data, as from a large array or across a network. Subspace analysis has been applied to seismic data before, but the routine use of the method is for the identification of a particular signal type, and requires a priori information about the range of signals for which the algorithm is searching. In this paradigm, a known but variable source (such as a mining region or aftershock sequence) provides known waveforms that are assumed to span the space occupied by incoming events of interest. Singular value decomposition or principal components analysis of the identified waveforms will allow for the selection of basis vectors that define the subspace onto which incoming signals are projected, to determine whether they belong to the source population of interest. In our application we do not seek to compare incoming signals to previously identified waveforms, but instead we seek to detect anomalies from the background behavior across an array or network. The background seismic levels will describe a signal space whose dimension may change markedly when an earthquake or other signal of interest occurs. We explore a variety of means by which we can evaluate the time-varying dimensionality of the signal space, and we compare the detection performance to other standard event detection methods.
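As a toy illustration of the idea (not the authors' exact statistic), the effective dimension of a sliding window of multichannel data can be tracked with an SVD-based energy criterion; a jump in dimension flags a candidate event:

```python
import numpy as np

def effective_dimension(window, energy=0.95):
    """Number of singular values needed to capture `energy` of the
    variance of a (channels x samples) window."""
    w = window - window.mean(axis=1, keepdims=True)
    s2 = np.linalg.svd(w, compute_uv=False) ** 2
    return int(np.searchsorted(np.cumsum(s2) / s2.sum(), energy) + 1)

def dimension_track(data, win=100, step=50, energy=0.95):
    """Time-varying signal-subspace dimensionality over a sliding
    window; a jump marks a candidate detection."""
    return np.array([effective_dimension(data[:, t:t + win], energy)
                     for t in range(0, data.shape[1] - win + 1, step)])

# background: one shared source across 8 channels (dimension ~1);
# an 8-dimensional "event" is injected between samples 400 and 600
rng = np.random.default_rng(3)
gains = rng.standard_normal(8)
data = np.outer(gains, rng.standard_normal(1000))
data += 0.01 * rng.standard_normal((8, 1000))
data[:, 400:600] += 5.0 * rng.standard_normal((8, 200))
dims = dimension_track(data)
```

The background windows report dimension 1, while windows overlapping the injected event report a sharply higher dimension.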
NASA Astrophysics Data System (ADS)
Xu, Y.; Tuttas, S.; Heogner, L.; Stilla, U.
2016-06-01
This paper presents an approach for the classification of photogrammetric point clouds of scaffolding components on a construction site, as a preparatory step toward automatic monitoring of the site by reconstructing an as-built Building Information Model (as-built BIM). Points belonging to tubes and toeboards of scaffolds are distinguished via a subspace clustering process and the principal component analysis (PCA) algorithm. The overall workflow includes four essential processing steps. Initially, the spherical support region of each point is selected. In the second step, the normalized cut algorithm based on spectral clustering theory is introduced for the subspace clustering, so as to select suitable subspace clusters of points and avoid outliers. Then, in the third step, the feature of each point is calculated by measuring distances between points and the plane of the local reference frame defined by PCA within the cluster. Finally, the types of points are distinguished and labelled through a supervised classification method using a random forest algorithm. The effectiveness and applicability of the proposed steps are investigated on both simulated test data and a real scenario. The results of the two experiments reveal that the proposed approach is well suited to the classification of points belonging to linear-shaped objects with different cross-sections. For the tests using a synthetic point cloud, the classification accuracy reaches 80%, even when the data are contaminated by noise and outliers. For the application in the real scenario, our method also achieves a classification accuracy of better than 63%, without using any information about the normal vector of the local surface.
Support vector machine classifiers for large data sets.
Gertz, E. M.; Griffin, J. D.
2006-01-31
This report concerns the generation of support vector machine classifiers for solving the pattern recognition problem in machine learning. Several methods are proposed based on interior point methods for convex quadratic programming. Software implementations are developed by adapting the object-oriented package OOQP to the problem structure and by using the software package PETSc to perform time-intensive computations in a distributed setting. Linear systems arising from classification problems with moderately large numbers of features are solved by using two techniques--one a parallel direct solver, the other a Krylov-subspace method incorporating novel preconditioning strategies. Numerical results are provided, and computational experience is discussed.
Argon Induces Protective Effects in Cardiomyocytes during the Second Window of Preconditioning
Mayer, Britta; Soppert, Josefin; Kraemer, Sandra; Schemmel, Sabrina; Beckers, Christian; Bleilevens, Christian; Rossaint, Rolf; Coburn, Mark; Goetzenich, Andreas; Stoppe, Christian
2016-01-01
Increasing evidence indicates that argon has organoprotective properties. So far, the underlying mechanisms remain poorly understood. Therefore, we investigated the effect of argon preconditioning in cardiomyocytes within the first and second window of preconditioning. Primary isolated cardiomyocytes from neonatal rats were subjected to 50% argon for 1 h, and subsequently exposed to a sublethal dosage of hypoxia (<1% O2) for 5 h either within the first (0–3 h) or second window (24–48 h) of preconditioning. Subsequently, cell viability and proliferation were measured. The argon-induced effects were assessed by evaluation of mRNA and protein expression after preconditioning. Argon preconditioning did not show any cardioprotective effects in the early window of preconditioning, whereas it led to a significant increase of cell viability 24 h after preconditioning compared to untreated cells (p = 0.015), independent of proliferation. Argon preconditioning significantly increased the mRNA expression of heat shock protein (HSP) B1 (HSP27) (p = 0.048), superoxide dismutase 2 (SOD2) (p = 0.001), vascular endothelial growth factor (VEGF) (p < 0.001) and inducible nitric oxide synthase (iNOS) (p = 0.001). No difference was found with respect to activation of pro-survival kinases in the early and late window of preconditioning. The findings provide the first evidence of argon-induced effects on the survival of cardiomyocytes during the second window of preconditioning, which may be mediated through the induction of HSP27, SOD2, VEGF and iNOS. PMID:27447611
A frequency dependent preconditioned wavelet method for atmospheric tomography
NASA Astrophysics Data System (ADS)
Yudytskiy, Mykhaylo; Helin, Tapio; Ramlau, Ronny
2013-12-01
Atmospheric tomography, i.e. the reconstruction of the turbulence in the atmosphere, is a main task for the adaptive optics systems of the next generation telescopes. For extremely large telescopes, such as the European Extremely Large Telescope, this problem becomes overly complex and an efficient algorithm is needed to reduce numerical costs. Recently, a conjugate gradient method based on wavelet parametrization of turbulence layers was introduced [5]. An iterative algorithm can only be numerically efficient when the number of iterations required for a sufficient reconstruction is low. A way to achieve this is to design an efficient preconditioner. In this paper we propose a new frequency-dependent preconditioner for the wavelet method. In the context of a multi conjugate adaptive optics (MCAO) system simulated on the official end-to-end simulation tool OCTOPUS of the European Southern Observatory we demonstrate robustness and speed of the preconditioned algorithm. We show that three iterations are sufficient for a good reconstruction.
Parallel preconditioning for the solution of nonsymmetric banded linear systems
Amodio, P.; Mazzia, F.
1994-12-31
Many computational techniques require the solution of banded linear systems. Common examples derive from the solution of partial differential equations and of boundary value problems. In particular the authors are interested in the parallel solution of block Hessenberg linear systems Gx = f, arising from the solution of ordinary differential equations by means of boundary value methods (BVMs), although the preconditioning considered may be applied to any block banded linear system. BVMs have been extensively investigated in the last few years and their stability properties give promising results. A new class of BVMs called Reverse Adams, which are BV-A-stable for orders up to 6, and BV-A{sub 0}-stable for orders up to 9, have been studied.
Can endurance exercise preconditioning prevent disuse muscle atrophy?
Wiggs, Michael P.
2015-01-01
Emerging evidence suggests that exercise training can provide a level of protection against disuse muscle atrophy. Endurance exercise training imposes oxidative, metabolic, and heat stress on skeletal muscle, which activates a variety of cellular signaling pathways that ultimately lead to the increased expression of proteins that have been demonstrated to protect muscle from inactivity-induced atrophy. This review will highlight the effect of exercise-induced oxidative stress on endogenous enzymatic antioxidant capacity (i.e., superoxide dismutase, glutathione peroxidase, and catalase), the role of oxidative and metabolic stress on PGC1-α, and the effect of heat stress and HSP70 induction. Finally, this review will discuss the supporting scientific evidence that these proteins can attenuate muscle atrophy through exercise preconditioning. PMID:25814955
Nitrite as a mediator of ischemic preconditioning and cytoprotection
Murillo, Daniel; Kamga, Christelle; Mo, Li; Shiva, Sruti
2011-01-01
Ischemia/reperfusion (IR) injury is a central component in the pathogenesis of several diseases and is a leading cause of morbidity and mortality in the western world. Subcellularly, mitochondrial dysfunction, characterized by depletion of ATP, calcium-induced opening of the mitochondrial permeability transition pore, and exacerbated reactive oxygen species (ROS) formation, plays an integral role in the progression of IR injury. Nitric oxide (NO) and more recently nitrite (NO2-) are known to modulate mitochondrial function, mediate cytoprotection after IR and have been implicated in the signaling of the highly protective ischemic preconditioning (IPC) program. Here, we review what is known about the role of NO and nitrite in cytoprotection after IR and consider the putative role of nitrite in IPC. Focus is placed on the potential cytoprotective mechanisms involving NO and nitrite-dependent modulation of mitochondrial function. PMID:21277988
Preconditioned Mixed Spectral Element Methods for Elasticity and Stokes Problems
NASA Technical Reports Server (NTRS)
Pavarino, Luca F.
1996-01-01
Preconditioned iterative methods for the indefinite systems obtained by discretizing the linear elasticity and Stokes problems with mixed spectral elements in three dimensions are introduced and analyzed. The resulting stiffness matrices have the structure of saddle point problems with a penalty term, which is associated with the Poisson ratio for elasticity problems or with stabilization techniques for Stokes problems. The main results of this paper show that the convergence rate of the resulting algorithms is independent of the penalty parameter and the number of spectral elements N, and mildly dependent on the spectral degree η via the inf-sup constant. The preconditioners proposed for the whole indefinite system are block-diagonal and block-triangular. Numerical experiments presented in the final section show that these algorithms are a practical and efficient strategy for the iterative solution of the indefinite problems arising from mixed spectral element discretizations of elliptic systems.
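The penalty-independence result can be illustrated on a toy saddle-point system (a numerical sketch, not the paper's spectral-element setting): with the exact block-diagonal preconditioner diag(A, S), where S is the Schur complement, the preconditioned spectrum stays inside fixed intervals no matter how small the penalty parameter is.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 40, 15                          # "velocity" and "pressure" sizes

# a toy SPD stiffness block and a full-rank constraint block
Q = rng.standard_normal((n, n))
A = Q @ Q.T + n * np.eye(n)
B = rng.standard_normal((m, n))

def preconditioned_spectrum(eps):
    """Saddle-point matrix with penalty term -eps*I, preconditioned by
    the block-diagonal matrix diag(A, S) with exact Schur complement
    S = B A^{-1} B^T + eps*I (Murphy-Golub-Wathen style)."""
    K = np.block([[A, B.T], [B, -eps * np.eye(m)]])
    S = B @ np.linalg.solve(A, B.T) + eps * np.eye(m)
    P = np.block([[A, np.zeros((n, m))], [np.zeros((m, n)), S]])
    return np.sort(np.real(np.linalg.eigvals(np.linalg.solve(P, K))))

# the eigenvalues lie in [-1, (1-sqrt(5))/2] U {1} U (1, (1+sqrt(5))/2]
# for every penalty parameter eps > 0
lam_big, lam_tiny = preconditioned_spectrum(1e-2), preconditioned_spectrum(1e-8)
```

A short calculation shows each nontrivial eigenvalue solves (μ+ε)λ² − μλ − (μ+ε) = 0 for an eigenvalue μ of BA⁻¹Bᵀ, so |λ| is bounded away from 0 and above by the golden ratio, independently of ε.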
Can anaerobic performance be improved by remote ischemic preconditioning?
Lalonde, François; Curnier, Daniel Y
2015-01-01
Remote ischemic preconditioning (RIPC) provides a substantial benefit for heart protection during surgery. Recent literature on RIPC reveals its potential to enhance sports performance as well. The aim of this study was to investigate the effect of RIPC on anaerobic performance. Seventeen healthy participants who practice regular physical activity participated in the project (9 women and 8 men, mean age 28 ± 8 years). The participants were randomly assigned to an RIPC intervention (four 5-minute cycles of ischemia, each followed by 5 minutes of reperfusion, applied with a pressure cuff) or a SHAM intervention in a crossover design. After the intervention, the participants were tested for alactic anaerobic performance (6 seconds of effort) followed by a Wingate test (lactic system) on an electromagnetic cycle ergometer. The following parameters were evaluated: average power (in watts), peak power (in watts), the scale of perceived exertion, fatigue index (in watts per second), time to reach peak power (in seconds), minimum power (in watts), the average power-to-weight ratio (in watts per kilogram), and the maximum power-to-weight ratio (in watts per kilogram). The peak power for the Wingate test was 794 W for RIPC and 777 W for the control group (p = 0.208). The average power was 529 W (RIPC) vs. 520 W for controls (p = 0.079). Perceived effort for RIPC was 9/10 on the Borg scale vs. 10/10 for the control group (p = 0.123). Remote ischemic preconditioning does not offer any significant benefits for anaerobic performance.
Preconditioned iterative methods for inhomogeneous acoustic scattering applications
NASA Astrophysics Data System (ADS)
Sifuentes, Josef
This thesis develops and analyzes efficient iterative methods for solving discretizations of the Lippmann--Schwinger integral equation for inhomogeneous acoustic scattering. Analysis and numerical illustrations of the spectral properties of the scattering problem demonstrate that a significant portion of the spectrum is approximated well on coarse grids. To exploit this, I develop a novel restarted GMRES method with adaptive deflation preconditioning based on spectral approximations on multiple grids. Much of the literature in this field is based on exact deflation, which is not feasible for most practical computations. This thesis provides an analytical framework for general approximate deflation methods and suggests a way to rigorously study a host of inexactly-applied preconditioners. Approximate deflation algorithms are implemented for scattering through thin inhomogeneities in photonic band gap problems. I also develop a short term recurrence for solving the one dimensional version of the problem that exploits the observation that the integral operator is a low rank perturbation of a self-adjoint operator. This method is based on strategies for solving Schur complement problems, and provides an alternative to a recent short term recurrence algorithm for matrices with such structure that we show to be numerically unstable for this application. The restarted GMRES method with adaptive deflation preconditioning over multiple grids, as well as the short term recurrence method for operators with low rank skew-adjoint parts, are very effective for reducing both the computational time and computer memory required to solve acoustic scattering problems. Furthermore, the methods are sufficiently general to be applicable to a wide class of problems.
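The effect of deflation can be sketched with a minimal GMRES and an exact-deflation preconditioner M⁻¹ = I + W(T⁻¹ − I)Wᵀ that maps a few troublesome eigenvalues to 1. The thesis's point is precisely that exact deflation is usually infeasible, so this is only the idealized baseline the approximate methods are measured against:

```python
import numpy as np

def gmres(op, b, tol=1e-10, maxit=150):
    """Minimal full GMRES (Arnoldi + least squares); returns the
    iterate and the iteration count at convergence."""
    n = len(b)
    Q = np.zeros((n, maxit + 1))
    H = np.zeros((maxit + 1, maxit))
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    for j in range(maxit):
        w = op(Q[:, j])
        for i in range(j + 1):                 # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ w
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        e1 = np.zeros(j + 2); e1[0] = beta
        y = np.linalg.lstsq(H[:j + 2, :j + 1], e1, rcond=None)[0]
        if np.linalg.norm(H[:j + 2, :j + 1] @ y - e1) < tol * beta:
            return Q[:, :j + 1] @ y, j + 1
        Q[:, j + 1] = w / H[j + 1, j]
    return Q[:, :maxit] @ y, maxit

# symmetric test matrix: a tight cluster near 1 plus three tiny outliers
rng = np.random.default_rng(5)
n = 100
V = np.linalg.qr(rng.standard_normal((n, n)))[0]
eigs = np.concatenate([[1e-3, 2e-3, 5e-3], np.linspace(0.9, 1.1, n - 3)])
A = (V * eigs) @ V.T
b = rng.standard_normal(n)

# deflation preconditioner built from an (exact, for illustration) basis
# W of the outlying eigenspace; it maps those eigenvalues of M^{-1}A to 1
W, T = V[:, :3], np.diag(eigs[:3])
Tinv = np.linalg.inv(T)
def minv(v):
    c = W.T @ v
    return v + W @ (Tinv @ c - c)

x_plain, it_plain = gmres(lambda v: A @ v, b)
x_defl, it_defl = gmres(lambda v: minv(A @ v), minv(b))
```

With the three small outliers deflated, the remaining spectrum is a single tight cluster and the iteration count drops sharply.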
Stochastic subspace identification for operational modal analysis of an arch bridge
NASA Astrophysics Data System (ADS)
Loh, Chin-Hsiung; Chen, Ming-Che; Chao, Shu-Hsien
2012-04-01
In this paper, the application of an output-only system identification technique, known as Stochastic Subspace Identification (SSI), to civil infrastructure is carried out. The ability of covariance-driven stochastic subspace identification (SSI-COV) is demonstrated through the analysis of ambient data from an arch bridge under operational conditions. A newly developed signal processing technique, Singular Spectrum Analysis (SSA), capable of smoothing noisy signals, is adopted for pre-processing the recorded data before the SSI. The conjunction of SSA and SSI-COV provides a useful criterion for determining the system order. With the aim of estimating accurate modal parameters of the structure in off-line analysis, a stabilization diagram is constructed by plotting the identified poles of the system while increasing the size of the data Hankel matrix. The identification task for a real structure, the Guandu Bridge, is carried out to identify the system natural frequencies and mode shapes. The uncertainty of the identified modal parameters from output-only measurements of the bridge under operational conditions, such as temperature and traffic loading, is discussed.
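A bare-bones SSI-COV pipeline (output covariances, block Hankel matrix, SVD, shift invariance of the observability matrix) can be sketched as follows; the SSA pre-smoothing and stabilization diagram used in the paper are omitted:

```python
import numpy as np

def ssi_cov(Y, i, order, dt):
    """Covariance-driven SSI sketch: covariances -> block Hankel ->
    SVD -> observability matrix -> state matrix A -> frequencies (Hz).
    Illustrative only; not the paper's full pipeline."""
    l, N = Y.shape
    R = [Y[:, k:] @ Y[:, :N - k].T / (N - k) for k in range(2 * i)]
    H = np.block([[R[p + q + 1] for q in range(i)] for p in range(i)])
    U, s, _ = np.linalg.svd(H)
    O = U[:, :order] * np.sqrt(s[:order])          # observability matrix
    A = np.linalg.pinv(O[:-l]) @ O[l:]             # shift invariance of O
    lam = np.linalg.eigvals(A).astype(complex)
    mu = np.log(lam) / dt                          # continuous-time poles
    return np.unique(np.round(np.abs(mu) / (2 * np.pi), 3))

# two noisy "structural modes" measured on four channels
rng = np.random.default_rng(2)
dt = 0.01
t = np.arange(4000) * dt
modes = np.vstack([np.cos(2 * np.pi * 1.5 * t + 0.3),
                   np.cos(2 * np.pi * 3.2 * t + 1.1)])
Y = rng.standard_normal((4, 2)) @ modes + 1e-3 * rng.standard_normal((4, 4000))
freqs = ssi_cov(Y, i=20, order=4, dt=dt)
```

Each physical mode contributes a conjugate pair of poles of A, so a model order of 4 recovers the two simulated frequencies.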
A Tensor-Based Subspace Approach for Bistatic MIMO Radar in Spatial Colored Noise
Wang, Xianpeng; Wang, Wei; Li, Xin; Wang, Junxiang
2014-01-01
In this paper, a new tensor-based subspace approach is proposed to estimate the direction of departure (DOD) and the direction of arrival (DOA) for bistatic multiple-input multiple-output (MIMO) radar in the presence of spatial colored noise. Firstly, the received signals can be packed into a third-order measurement tensor by exploiting the inherent structure of the matched filter. Then, the measurement tensor can be divided into two sub-tensors, and a cross-covariance tensor is formulated to eliminate the spatial colored noise. Finally, the signal subspace is constructed by utilizing the higher-order singular value decomposition (HOSVD) of the cross-covariance tensor, and the DOD and DOA can be obtained through the estimation of signal parameters via rotational invariance technique (ESPRIT) algorithm, which are paired automatically. Since the multidimensional inherent structure and the cross-covariance tensor technique are used, the proposed method provides better angle estimation performance than Chen's method, the ESPRIT algorithm and the multi-SVD method. Simulation results confirm the effectiveness and the advantage of the proposed method. PMID:24573313
Adaptive subspace detection of extended target in white Gaussian noise using sinc basis
NASA Astrophysics Data System (ADS)
Zhang, Xiao-Wei; Li, Ming; Qu, Jian-She; Yang, Hui
2016-01-01
For high resolution radar (HRR), the problem of detecting an extended target is considered in this paper. Based on a single observation, a new two-step detection based on sparse representation (TSDSR) method is proposed to detect the extended target in the presence of Gaussian noise with unknown covariance. In the new method, a Sinc dictionary is introduced to sparsely represent the high resolution range profile (HRRP). Meanwhile, adaptive subspace pursuit (ASP) is presented to recover the HRRP embedded in the Gaussian noise and estimate the noise covariance matrix. Based on the Sinc dictionary and the estimated noise covariance matrix, a one-step subspace detector (OSSD) for the first-order Gaussian (FOG) model without secondary data is adopted to perform the extended target detection. Finally, the proposed TSDSR method is applied to raw HRR data. Experimental results demonstrate that HRRPs of different targets can be sparsely represented very well with the Sinc dictionary. Moreover, the new method can estimate the noise power with small error and achieves good detection performance.
NASA Astrophysics Data System (ADS)
Oweiss, Karim G.; Anderson, David J.
2006-12-01
We investigate a new approach for the problem of source separation in correlated multichannel signal and noise environments. The framework targets the specific case when nonstationary correlated signal sources contaminated by additive correlated noise impinge on an array of sensors. Existing techniques targeting this problem usually assume signal sources to be independent, and the contaminating noise to be spatially and temporally white, thus enabling orthogonal signal and noise subspaces to be separated using conventional eigendecomposition. In our context, we propose a solution to the problem when the sources are nonorthogonal, and the noise is correlated with an unknown temporal and spatial covariance. The approach is based on projecting the observations onto a nested set of multiresolution spaces prior to eigendecomposition. An inherent invariance property of the signal subspace is observed in a subset of the multiresolution spaces that depends on the degree of approximation expressed by the orthogonal basis. This feature, among others revealed by the algorithm, is eventually used to separate the signal sources in the context of "best basis" selection. The technique shows robustness to source nonstationarities as well as anisotropic properties of the unknown signal propagation medium under no constraints on the array design, and with minimal assumptions about the underlying signal and noise processes. We illustrate the high performance of the technique on simulated and experimental multichannel neurophysiological data measurements.
NASA Astrophysics Data System (ADS)
Tonkin, Matthew; Doherty, John
2009-12-01
We describe a subspace Monte Carlo (SSMC) technique that reduces the burden of calibration-constrained Monte Carlo when undertaken with highly parameterized models. When Monte Carlo methods are used to evaluate the uncertainty in model outputs, ensuring that parameter realizations reproduce the calibration data requires many model runs to condition each realization. In the new SSMC approach, the model is first calibrated using a subspace regularization method, ideally the hybrid Tikhonov-TSVD "superparameter" approach described by Tonkin and Doherty (2005). Sensitivities calculated with the calibrated model are used to define the calibration null-space, which is spanned by parameter combinations that have no effect on simulated equivalents to available observations. Next, a stochastic parameter generator is used to produce parameter realizations, and for each a difference is formed between the stochastic parameters and the calibrated parameters. This difference is projected onto the calibration null-space and added to the calibrated parameters. If the model is no longer calibrated, parameter combinations that span the calibration solution space are reestimated while retaining the null-space projected parameter differences as additive values. The recalibration can often be undertaken using existing sensitivities, so that conditioning requires only a small number of model runs. Using synthetic and real-world model applications we demonstrate that the SSMC approach is general (it is not limited to any particular model or any particular parameterization scheme) and that it can rapidly produce a large number of conditioned parameter sets.
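The null-space projection at the heart of the SSMC scheme described above can be sketched for a linear model (a toy random Jacobian stands in for the model sensitivities; sizes are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
n_par, n_obs = 10, 4                      # more parameters than observations
J = rng.standard_normal((n_obs, n_par))   # sensitivity (Jacobian) matrix

# Calibration null-space: right singular vectors with zero singular
# values; parameter combinations in this space have no effect on the
# simulated equivalents of the observations.
U, s, Vt = np.linalg.svd(J)
V_null = Vt[n_obs:].T                     # n_par x (n_par - n_obs)

p_cal = rng.standard_normal(n_par)        # "calibrated" parameters
p_sto = rng.standard_normal(n_par)        # stochastic realization
# Project the difference onto the null-space and add it back.
d = p_sto - p_cal
p_cond = p_cal + V_null @ (V_null.T @ d)

# For a linear model the conditioned realization reproduces the
# calibrated outputs exactly; nonlinearity is what forces the
# occasional recalibration described in the abstract.
print(np.linalg.norm(J @ p_cond - J @ p_cal))  # ≈ 0
```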
Multicomponent dynamics of coupled quantum subspaces and field-induced molecular ionizations
Nguyen-Dang, Thanh-Tung; Viau-Trudel, Jérémy
2013-12-28
To describe successive ionization steps of a many-electron atom or molecule driven by an ultrashort, intense laser pulse, we introduce a hierarchy of successive two-subspace Feshbach partitions of the N-electron Hilbert space, and solve the partitioned time-dependent Schrödinger equation by a short-time unitary algorithm. The partitioning scheme allows one to use different levels of theory to treat the many-electron dynamics in different subspaces. We illustrate the procedure on a simple two-active-electron model molecular system subjected to a few-cycle extreme ultraviolet (XUV) pulse to study channel-resolved photoelectron spectra as a function of the pulse's central frequency and duration. We observe how the momentum and kinetic-energy distributions of photoelectrons accompanying the formation of the molecular cation in a given electronic state (channel) change as the XUV few-cycle pulse's width is varied, from a form characteristic of an impulsive ionization regime, corresponding to the limit of a delta-function pulse, to a form characteristic of multiphoton above-threshold ionization, often associated with a continuous-wave, infinitely long pulse.
Abdel-Khalik, Hany S.; Zhang, Qiong
2014-05-20
The development of hybrid Monte-Carlo-Deterministic (MC-DT) approaches, taking place over the past few decades, has primarily focused on shielding and detection applications where the analysis requires a small number of responses, i.e. at the detector location(s). This work further develops a recently introduced global variance reduction approach, denoted the SUBSPACE approach, which is designed to allow the use of MC simulation, currently limited to benchmarking calculations, for routine engineering calculations. By way of demonstration, the SUBSPACE approach is applied to assembly-level calculations used to generate the few-group homogenized cross-sections. These models are typically expensive and need to be executed on the order of 10^{3} - 10^{5} times to properly characterize the few-group cross-sections for downstream core-wide calculations. Applicability to k-eigenvalue core-wide models is also demonstrated in this work. Given the favorable results obtained in this work, we believe the applicability of the MC method for reactor analysis calculations could be realized in the near future.
Ye, Jinzuo; Chi, Chongwei; Xue, Zhenwen; Wu, Ping; An, Yu; Xu, Han; Zhang, Shuang; Tian, Jie
2014-02-01
Fluorescence molecular tomography (FMT), as a promising imaging modality, can three-dimensionally locate the specific tumor position in small animals. However, it remains challenging to reconstruct the fluorescent probe distribution in animals effectively and robustly. In this paper, we present a novel method based on sparsity adaptive subspace pursuit (SASP) for FMT reconstruction. Several innovative strategies, including subspace projection, a bottom-up sparsity adaptive approach, and a backtracking technique, are associated with the SASP method, which guarantees the accuracy, efficiency, and robustness of FMT reconstruction. Three numerical experiments based on a mouse-mimicking heterogeneous phantom have been performed to validate the feasibility of the SASP method. The results show that the proposed SASP method can achieve satisfactory source localization with a bias of less than 1 mm; the method is much faster than mainstream reconstruction methods; and the approach is robust even under quite ill-posed conditions. Furthermore, we have applied this method to an in vivo mouse model, and the results demonstrate the feasibility of the practical FMT application with the SASP method.
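A plain (non-adaptive) subspace pursuit iteration can be sketched as follows; the dictionary, sparsity level, and signal are hypothetical, and the SASP method described above adds sparsity adaptivity and backtracking on top of a loop like this one:

```python
import numpy as np

def subspace_pursuit(D, y, K, iters=10):
    """Sketch of plain subspace pursuit: keep a working support of size
    K, merge it with the K atoms best correlated with the residual,
    re-fit by least squares, then prune back to the K largest."""
    support = np.argsort(np.abs(D.T @ y))[-K:]
    for _ in range(iters):
        x, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        r = y - D[:, support] @ x
        cand = np.union1d(support, np.argsort(np.abs(D.T @ r))[-K:])
        xc, *_ = np.linalg.lstsq(D[:, cand], y, rcond=None)
        support = cand[np.argsort(np.abs(xc))[-K:]]
    x, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
    return support, x

rng = np.random.default_rng(2)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)
true = np.array([5, 40, 100])
y = D[:, true] @ np.array([1.0, -0.8, 0.6])
S, x = subspace_pursuit(D, y, K=3)
print(np.sort(S))        # recovers the true support with high probability
```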
3D deformable image matching: a hierarchical approach over nested subspaces
NASA Astrophysics Data System (ADS)
Musse, Olivier; Heitz, Fabrice; Armspach, Jean-Paul
2000-06-01
This paper presents a fast hierarchical method to perform dense deformable inter-subject matching of 3D MR images of the brain. To recover the complex morphological variations in neuroanatomy, a hierarchy of 3D deformation fields is estimated by minimizing a global energy function over a sequence of nested subspaces. The nested subspaces, generated from a single scaling function, consist of deformation fields constrained at different scales. The highly nonlinear energy function, describing the interactions between the target and the source images, is minimized using a coarse-to-fine continuation strategy over this hierarchy. The resulting deformable matching method shows low sensitivity to local minima and is able to track large nonlinear deformations with moderate computational load. The performance of the approach is assessed both on simulated 3D transformations and on a real database of 3D brain MR images from different individuals. The method has proven efficient in putting into correspondence the principal anatomical structures of the brain. An application to atlas-based MRI segmentation, by transporting a labeled segmentation map onto patient data, is also presented.
Renaut, R.; He, Q.
1994-12-31
A new parallel iterative algorithm for unconstrained optimization by multisplitting is proposed. In this algorithm the original problem is split into a set of small optimization subproblems which are solved using well-known sequential algorithms. These algorithms are iterative in nature, e.g. the DFP variable metric method. Here the authors use sequential algorithms based on an inexact subspace search, which is an extension of the usual idea of an inexact line search. Essentially, the idea of the inexact line search for nonlinear minimization is that at each iteration one only finds an approximate minimum in the line search direction. Hence by inexact subspace search they mean that, instead of finding the minimum of the subproblem at each iteration, they do an incomplete downhill search to give an approximate minimum. Some convergence and numerical results for this algorithm will be presented. Further, the original theory will be generalized to the situation with a singular Hessian. Applications to nonlinear least squares problems will be presented. Experimental results will be presented for implementations on an Intel iPSC/860 Hypercube with 64 nodes as well as on the Intel Paragon.
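The inexact subspace search idea can be illustrated on a quadratic (a serial sketch under an assumed block splitting, not the authors' parallel implementation): each block subproblem gets only a few descent steps rather than an exact minimization.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 12
Q = rng.standard_normal((n, n)); Q = Q @ Q.T + n * np.eye(n)  # SPD Hessian
b = rng.standard_normal(n)
f = lambda x: 0.5 * x @ Q @ x - b @ x
grad = lambda x: Q @ x - b

x = np.zeros(n)
# Multisplitting: variables partitioned into blocks (subspaces).
blocks = [np.arange(0, 4), np.arange(4, 8), np.arange(8, 12)]
for sweep in range(20):
    for blk in blocks:
        for _ in range(3):                 # inexact: only 3 steps per block
            g = grad(x)[blk]
            # Exact line search along -g restricted to this block.
            step = (g @ g) / (g @ Q[np.ix_(blk, blk)] @ g + 1e-30)
            x[blk] -= step * g
print(f(x) - f(np.linalg.solve(Q, b)))     # ≈ 0 (gap to the true minimum)
```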
A subspace model-based approach to face relighting under unknown lighting and poses.
Shim, Hyunjung; Luo, Jiebo; Chen, Tsuhan
2008-08-01
We present a new approach to face relighting by jointly estimating the pose, reflectance functions, and lighting from as few as one image of a face. Upon such estimation, we can synthesize the face image under any prescribed new lighting condition. In contrast to commonly used face shape models or shape-dependent models, we neither recover nor assume the 3-D face shape during the estimation process. Instead, we train a pose- and pixel-dependent subspace model of the reflectance function using a face database that contains samples of pose and illumination for a large number of individuals (e.g., the CMU PIE database and the Yale database). Using this subspace model, we can estimate the pose, the reflectance functions, and the lighting condition of any given face image. Our approach lends itself to practical applications thanks to many desirable properties, including the preservation of the non-Lambertian skin reflectance properties and facial hair, as well as reproduction of various shadows on the face. Extensive experiments show that, compared to recent representative face relighting techniques, our method successfully produces better results, in terms of subjective and objective quality, without reconstructing a 3-D shape. PMID:18632343
Wavelet subspace decomposition of thermal infrared images for defect detection in artworks
NASA Astrophysics Data System (ADS)
Ahmad, M. Z.; Khan, A. A.; Mezghani, S.; Perrin, E.; Mouhoubi, K.; Bodnar, J. L.; Vrabie, V.
2016-07-01
The health of ancient artworks must be routinely monitored for their adequate preservation. Faults in these artworks may develop over time and must be identified as precisely as possible. The classical acoustic testing techniques, being invasive, risk causing permanent damage during periodic inspections. Infrared thermometry offers a promising solution for mapping faults in artworks. It involves heating the artwork and recording its thermal response with an infrared camera. A novel strategy based on the pseudo-random binary excitation principle is used in this work to suppress the risks associated with prolonged heating. The objective of this work is to develop an automatic scheme for detecting faults in the captured images. An efficient scheme based on wavelet subspace decomposition is developed which favors identification of the otherwise invisible, weaker faults. Two major problems addressed in this work are the selection of the optimal wavelet basis and the selection of the subspace level. A novel criterion based on regional mutual information is proposed for the latter. A new contrast enhancement metric is developed to demonstrate the quantitative efficiency of the algorithm. The algorithm is successfully deployed on both laboratory-based samples and real artworks.
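One level of a wavelet subspace decomposition of the kind described above might look like this Haar sketch (the smooth "thermal" background and the weak defect are synthetic; the paper's basis and level selection criteria are not reproduced):

```python
import numpy as np

def haar2d(img):
    """One level of the orthonormal 2-D Haar transform: LL is a coarse
    approximation; LH/HL/HH are detail subspaces in which weak,
    localized defects stand out against a smooth background."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    LL = (a + b + c + d) / 2
    LH = (a - b + c - d) / 2
    HL = (a + b - c - d) / 2
    HH = (a - b - c + d) / 2
    return LL, LH, HL, HH

img = np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))  # smooth field
img[30, 30] += 0.5                                            # weak defect
LL, LH, HL, HH = haar2d(img)
# The transform is orthonormal, so energy is preserved across subspaces,
# while the defect concentrates in the detail bands at (15, 15).
e_in = np.sum(img**2)
e_out = sum(np.sum(s**2) for s in (LL, LH, HL, HH))
print(abs(e_in - e_out))   # ≈ 0
```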
Quantum probabilities as Dempster-Shafer probabilities in the lattice of subspaces
Vourdas, A.
2014-08-15
The orthocomplemented modular lattice of subspaces L[H(d)], of a quantum system with d-dimensional Hilbert space H(d), is considered. A generalized additivity relation which holds for Kolmogorov probabilities is violated by quantum probabilities in the full lattice L[H(d)] (it is only valid within the Boolean subalgebras of L[H(d)]). This suggests the use of more general (than Kolmogorov) probability theories, and here the Dempster-Shafer probability theory is adopted. An operator D(H{sub 1},H{sub 2}), which quantifies deviations from Kolmogorov probability theory is introduced, and it is shown to be intimately related to the commutator of the projectors P(H{sub 1}),P(H{sub 2}), to the subspaces H{sub 1}, H{sub 2}. As an application, it is shown that the proof of the inequalities of Clauser, Horne, Shimony, and Holt for a system of two spin 1/2 particles is valid for Kolmogorov probabilities, but it is not valid for Dempster-Shafer probabilities. The violation of these inequalities in experiments supports the interpretation of quantum probabilities as Dempster-Shafer probabilities.
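As a hedged sketch of the additivity relation at stake (notation reconstructed from the abstract, not quoted from the paper; Pi(H) is assumed to denote the projector onto the subspace H):

```latex
% Kolmogorov additivity on a Boolean lattice of events:
%   p(A \vee B) + p(A \wedge B) = p(A) + p(B).
% A deviation operator of the following form vanishes when the two
% projectors commute:
\[
  \mathcal{D}(H_1, H_2) \;=\;
  \Pi(H_1 \vee H_2) + \Pi(H_1 \wedge H_2) - \Pi(H_1) - \Pi(H_2),
\]
% since for commuting projectors \Pi(H_1 \vee H_2) =
% \Pi(H_1) + \Pi(H_2) - \Pi(H_1)\Pi(H_2) and
% \Pi(H_1 \wedge H_2) = \Pi(H_1)\Pi(H_2).  Per the abstract, this
% deviation is intimately related to the commutator
% [\Pi(H_1), \Pi(H_2)], so within commuting (Boolean) subalgebras the
% Kolmogorov relation is recovered, while noncommuting subspaces force
% the more general Dempster-Shafer treatment.
```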
Portnichenko, A G; Lapikova-Briginskaia, T Iu; Vasilenko, M I; Portnichenko, G V; Maslov, L N; Moĭbenko, A A
2013-01-01
Activation of Akt-dependent mechanisms may play a significant role in the cellular response under hypoxic preconditioning and myocardial remodeling. The impact of hypoxic preconditioning and remodeling on the expression of Akt kinase in the heart ventricles was investigated. Wistar male rats, residents of plains or middle altitude (2100 m above sea level), were exposed to hypoxic preconditioning by "lifting" in a barochamber to a "height" of 5,600 m for 3 h. In the right and left ventricles of the heart, Akt protein expression was determined by Western blotting. It was shown that hypoxic preconditioning causes the induction of Akt kinase in the ventricles during the period of delayed cardioprotection (1-3 days after preconditioning). Myocardial remodeling induced by chronic hypoxia at middle altitude was associated with elevated Akt expression in the myocardium, more pronounced in the left ventricle. Progression of hypoxic myocardial remodeling, found in a subset of the animals, was accompanied by a reduction of cellular hypoxic reactivity, including Akt induction in response to preconditioning. Thus, Akt kinase is involved in the mechanisms of hypoxia-induced late preconditioning and of myocardial remodeling in chronic hypoxia. An inhibitory regulatory mechanism was found to limit the induction of Akt in the myocardium after remodeling.
Effect of ozone oxidative preconditioning in preventing early radiation-induced lung injury in rats.
Bakkal, B H; Gultekin, F A; Guven, B; Turkcu, U O; Bektas, S; Can, M
2013-09-01
Ionizing radiation causes its biological effects mainly through oxidative damage induced by reactive oxygen species. Previous studies showed that ozone oxidative preconditioning attenuated pathophysiological events mediated by reactive oxygen species. As inhalation of ozone induces lung injury, the aim of this study was to examine whether ozone oxidative preconditioning potentiates or attenuates the effects of irradiation on the lung. Rats were subjected to total body irradiation, with or without treatment with ozone oxidative preconditioning (0.72 mg/kg). Serum proinflammatory cytokine levels, oxidative damage markers, and histopathological analysis were compared at 6 and 72 h after total body irradiation. Irradiation significantly increased lung malondialdehyde levels as an end-product of lipoperoxidation. Irradiation also significantly decreased lung superoxide dismutase activity, which is an indicator of the generation of oxidative stress and an early protective response to oxidative damage. Ozone oxidative preconditioning plus irradiation significantly decreased malondialdehyde levels and increased the activity of superoxide dismutase, which might indicate protection of the lung from radiation-induced lung injury. Serum tumor necrosis factor alpha and interleukin-1 beta levels, which increased significantly following total body irradiation, were decreased with ozone oxidative preconditioning. Moreover, ozone oxidative preconditioning was able to ameliorate radiation-induced lung injury assessed by histopathological evaluation. In conclusion, ozone oxidative preconditioning, repeated low-dose intraperitoneal administration of ozone, did not exacerbate radiation-induced lung injury, and, on the contrary, it provided protection against radiation-induced lung damage.
NASA Astrophysics Data System (ADS)
Warner, Dennis B.
1984-02-01
Recognition of the socioeconomic preconditions for successful rural water-supply and sanitation projects in developing countries is the key to identifying a new project. Preconditions are the social, economic and technical characteristics defining the project environment. There are two basic types of preconditions: those existing at the time of the initial investigation and those induced by subsequent project activities. Successful project identification is dependent upon an accurate recognition of existing constraints and a carefully tailored package of complementary investments intended to overcome the constraints. This paper discusses the socioeconomic aspects of preconditions in the context of a five-step procedure for project identification. The procedure includes: (1) problem identification; (2) determination of socioeconomic status; (3) technology selection; (4) utilization of support conditions; and (5) benefit estimation. Although the establishment of specific preconditions should be based upon the types of projects likely to be implemented, the paper outlines a number of general relationships regarding favourable preconditions in water and sanitation planning. These relationships are used within the above five-step procedure to develop a set of general guidelines for the application of preconditions in the identification of rural water-supply and sanitation projects.
Studies on effect of stress preconditioning in restrain stress-induced behavioral alterations.
Kaur, Rajneet; Jaggi, Amteshwar Singh; Singh, Nirmal
2010-02-01
Stress preconditioning has been documented to confer gastroprotective effects against stress-induced gastric ulcerations. However, the effects of prior exposure to stress preconditioning episodes on stress-induced behavioral changes have not been explored yet. Therefore the present study was designed to investigate the ameliorative effects of stress preconditioning on immobilization stress-induced behavioral alterations in rats. The rats were subjected to restrain stress by placing them in a restrainer (5.5 cm in diameter and 18 cm in length) for 3.5 h. Stress preconditioning was induced by subjecting the rats to two cycles of restraint and restraint-free periods of 15 min each. Furthermore, a similar type of stress preconditioning was induced using longer cycles of 30 and 45 min. The extent and severity of the stress-induced behavioral alterations were assessed using different behavioral tests, namely the hole-board test, social interaction test, open field test, and actophotometer. Restrain stress resulted in decreased locomotor activity, reduced frequency of head dips and rearing in the hole-board test, reduced line crossing and rearing in the open field, and decreased following and increased avoidance in the social interaction test. Stress preconditioning with two cycles of 15, 30, or 45 min did not attenuate the stress-induced behavioral changes to any extent. It may be concluded that stress preconditioning does not seem to confer any protective effect in modulating restrain stress-induced behavioral alterations.
NASA Astrophysics Data System (ADS)
Wang, De-jun; Li, Feng-hua
2010-09-01
It has been proved theoretically that two incompletely correlated sources can be identified by linear signal processing methods. However, this is difficult in practice. A new method to separate two wideband sources with one vector sensor is presented in this paper. The method combines subspace rotation and spatial matched filtering. Simulations show that this method is insensitive to the initial azimuth error, independent of the signal spectrum, and better than wideband focusing subspace methods at low SNR. A sea trial was performed, and the experimental results show that the proposed method is effective in separating and tracking two wideband sources in the underwater environment.
Schatzmann, L; Brunner, P; Stäubli, H U
1998-01-01
Preconditioning of soft tissues has become a common procedure in tensile testing to assess the history dependence of these viscoelastic materials. To our knowledge, this is the first study comparing tensile properties of soft tissues before and after cyclic preconditioning with high loads. Sixteen quadriceps tendon-bone (QT-B) complexes and 16 patellar ligament-bone (PL-B) complexes from a young population (mean age 24.9 +/- 4.4 years) were loaded to failure at a deformation rate of 1 mm/s. Half of the QT-B and PL-B complexes underwent 200 uniaxial preconditioning cycles from 75 to 800 N at 0.5 Hz before ultimate failure loading. High-load preconditioning was made possible by the development of a highly reliable and easy-to-use cryofixation device to attach the free tendon end. PL-B complexes were more influenced by preconditioning than QT-B complexes. Ultimate failure load, stiffness at 200 N, and stiffness at 800 N were significantly higher for PL-B complexes after preconditioning, while the structural properties of QT-B complexes exhibited no significant alterations. The values of mechanical properties such as Young's modulus at 200 N and 800 N were much higher for both preconditioned specimen groups. In addition, ultimate stress was augmented by preconditioning for PL-B complexes. Hysteresis and creep effects were highest during the first few loading cycles. More than 160 cycles were needed to reach a steady state. Beyond 160 cycles there was no further creep, and hysteresis was almost constant. Creep values were 2.2% of the initial testing length for the QT-B and 3.2% for the PL-B complexes. The effect of cyclic preconditioning seems to be caused by progressive fiber recruitment and by alterations of the interstitial fluid milieu.
Stetler, R. Anne; Leak, Rehana K.; Gan, Yu; Li, Peiying; Hu, Xiaoming; Jing, Zheng; Chen, Jun; Zigmond, Michael J.; Gao, Yanqin
2014-01-01
Preconditioning is a phenomenon in which brief episodes of a sublethal insult induce robust protection against subsequent lethal injuries. Preconditioning has been observed in multiple organisms and can occur in the brain as well as other tissues. Extensive animal studies suggest that the brain can be preconditioned to resist acute injuries, such as ischemic stroke, neonatal hypoxia/ischemia, trauma, and agents that are used in models of neurodegenerative diseases, such as Parkinson’s disease and Alzheimer’s disease. Effective preconditioning stimuli are numerous and diverse, ranging from transient ischemia, hypoxia, hyperbaric oxygen, hypothermia and hyperthermia, to exposure to neurotoxins and pharmacological agents. The phenomenon of “cross-tolerance,” in which a sublethal stress protects against a different type of injury, suggests that different preconditioning stimuli may confer protection against a wide range of injuries. Research conducted over the past few decades indicates that brain preconditioning is complex, involving multiple effectors such as metabolic inhibition, activation of extra- and intracellular defense mechanisms, a shift in the neuronal excitatory/inhibitory balance, and reduction in inflammatory sequelae. An improved understanding of brain preconditioning should help us identify innovative therapeutic strategies that prevent or at least reduce neuronal damage in susceptible patients. In this review, we focus on the experimental evidence of preconditioning in the brain and systematically survey the models used to develop paradigms for neuroprotection, and then discuss the clinical potential of brain preconditioning. In the subsequent component of this two-part series, we will discuss the cellular and molecular events that are likely to underlie these phenomena. PMID:24389580
Characteristic time-stepping or local preconditioning of the Euler equations
NASA Technical Reports Server (NTRS)
Van Leer, Bram; Lee, Wen-Tzong; Roe, Philip L.
1991-01-01
A derivation is presented of a local preconditioning matrix for the multidimensional Euler equations that reduces the spread of the characteristic speeds to the lowest attainable value. Numerical experiments with this preconditioning matrix, applied to an explicit upwind discretization of the two-dimensional Euler equations, show that the matrix significantly increases the rate of convergence to a steady solution. It is predicted that local preconditioning will also simplify convergence-acceleration boundary procedures such as the Karni (1991) procedure for the far field and the Mazaheri and Roe (1991) procedure for a solid wall.
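The general mechanism can be sketched in a generic form (this is not the specific Van Leer-Lee-Roe matrix, whose construction is the subject of the paper):

```latex
% Generic local preconditioning of the 2-D Euler equations: the update
% is premultiplied, cell by cell, by a matrix P(u),
\[
  \frac{\partial u}{\partial t}
  + P(u)\left( A \,\frac{\partial u}{\partial x}
             + B \,\frac{\partial u}{\partial y} \right) = 0 ,
\]
% with P chosen so that the characteristic speeds (the eigenvalues of
% the preconditioned flux Jacobians) are clustered as tightly as
% possible.  The transient is distorted, but the steady states
% \partial u / \partial t = 0 are unchanged, so convergence of an
% explicit scheme to a steady solution is accelerated.
```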
Eigenmode Analysis of Boundary Conditions for One-Dimensional Preconditioned Euler Equations
NASA Technical Reports Server (NTRS)
Darmofal, David L.
1998-01-01
An analysis of the effect of local preconditioning on boundary conditions for the subsonic, one-dimensional Euler equations is presented. Decay rates for the eigenmodes of the initial boundary value problem are determined for different boundary conditions. Riemann invariant boundary conditions based on the unpreconditioned Euler equations are shown to be reflective with preconditioning, and, at low Mach numbers, disturbances do not decay. Other boundary conditions are investigated which are non-reflective with preconditioning and numerical results are presented confirming the analysis.
Chrenek, Micah A; Sellers, Jana T; Lawson, Eric C; Cunha, Priscila P; Johnson, Jessica L; Girardot, Preston E; Kendall, Cristina; Han, Moon K; Hanif, Adam; Ciavatta, Vincent T; Gogniat, Marissa A; Nickerson, John M; Pardue, Machelle T; Boatright, Jeffrey H
2016-01-01
To compare patterns of gene expression following preconditioning by cyclic light rearing versus preconditioning by aerobic exercise. BALB/C mice were preconditioned either by rearing in 800 lx 12:12 h cyclic light for 8 days or by running on treadmills for 9 days, exposed to toxic levels of light to cause light-induced retinal degeneration (LIRD), then sacrificed and retinal tissue harvested. Subsets of mice were maintained for an additional 2 weeks for assessment of retinal function by electroretinography (ERG). Both preconditioning protocols partially but significantly preserved retinal function and morphology and induced a similar leukemia inhibitory factor (LIF) gene expression pattern. The data demonstrate that exercise preconditioning and cyclic light preconditioning protect photoreceptors against LIRD and evoke a similar pattern of retinal LIF gene expression. It may be that similar stress response pathways mediate the protection provided by the two preconditioning modalities.
Bernsen, Erik; Dijkstra, Henk A.; Thies, Jonas; Wubs, Fred W.
2010-10-20
In present-day forward time stepping ocean-climate models, capturing both the wind-driven and thermohaline components, a substantial amount of CPU time is needed in a so-called spin-up simulation to determine an equilibrium solution. In this paper, we present a methodology based on Jacobian-Free Newton-Krylov methods to reduce the computational time for such a spin-up problem. We apply the method to an idealized configuration of a state-of-the-art ocean model, the Modular Ocean Model version 4 (MOM4). It is shown that a typical speed-up of a factor 10-25 with respect to the original MOM4 code can be achieved and that this speed-up increases with increasing horizontal resolution.
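A minimal Jacobian-free Newton-Krylov solver can be sketched with finite-difference Jacobian-vector products and a small Arnoldi (GMRES-like) inner solve; the steady-state problem below is a toy stand-in, not MOM4, and all tolerances are illustrative:

```python
import numpy as np

def jfnk(F, x0, newton_iters=20, m=20, fd_eps=1e-7, tol=1e-10):
    """Newton's method where each linear solve J dx = -F(x) uses only
    finite-difference Jacobian-vector products (no explicit Jacobian)."""
    x = x0.copy()
    for _ in range(newton_iters):
        f0 = F(x)
        if np.linalg.norm(f0) < tol:
            break
        Jv = lambda v: (F(x + fd_eps * v) - f0) / fd_eps
        n = x.size
        Q = np.zeros((n, m + 1)); H = np.zeros((m + 1, m))
        beta = np.linalg.norm(f0)
        Q[:, 0] = -f0 / beta                  # Krylov seed: the RHS -F(x)
        k_used = m
        for k in range(m):                    # Arnoldi orthogonalization
            w = Jv(Q[:, k])
            for i in range(k + 1):
                H[i, k] = Q[:, i] @ w
                w -= H[i, k] * Q[:, i]
            H[k + 1, k] = np.linalg.norm(w)
            if H[k + 1, k] < 1e-14:           # invariant subspace found
                k_used = k + 1
                break
            Q[:, k + 1] = w / H[k + 1, k]
        k = k_used
        e1 = np.zeros(k + 1); e1[0] = beta    # GMRES least-squares step
        y, *_ = np.linalg.lstsq(H[:k + 1, :k], e1, rcond=None)
        x = x + Q[:, :k] @ y
    return x

# "Spin-up" analogue: solve F(u) = 0 for the steady state of a toy
# system du/dt = F(u) instead of time stepping to equilibrium.
b = np.linspace(1.0, 2.0, 50)
F = lambda u: -u - 0.1 * u**3 + b
u = jfnk(F, np.zeros(50))
print(np.linalg.norm(F(u)))   # ≈ 0
```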
EEG Subspace Analysis and Classification Using Principal Angles for Brain-Computer Interfaces
NASA Astrophysics Data System (ADS)
Ashari, Rehab Bahaaddin
Brain-Computer Interfaces (BCIs) help paralyzed people who have lost some or all of their ability to communicate and control the outside environment from loss of voluntary muscle control. Most BCIs are based on the classification of multichannel electroencephalography (EEG) signals recorded from users as they respond to external stimuli or perform various mental activities. The classification process is fraught with difficulties caused by electrical noise, signal artifacts, and nonstationarity. One approach to reducing the effects of similar difficulties in other domains is the use of principal angles between subspaces, which has been applied mostly to video sequences. This dissertation studies and examines different ideas using principal angles and subspace concepts. It introduces a novel mathematical approach for comparing sets of EEG signals for use in new BCI technology. The presented results show that principal angles are also a useful approach to the classification of EEG signals that are recorded during a BCI typing application. In this application, the appearance of a subject's desired letter is detected by identifying a P300-wave within a one-second window of EEG following the flash of a letter. Smoothing the signals before using them is the only preprocessing step that was implemented in this study. The smoothing process, based on minimizing the second derivative in time, is implemented to increase the classification accuracy instead of using a bandpass filter that relies on assumptions about the frequency content of EEG. This study examines four different ways of removing outliers that are based on the principal angles and shows that the outlier removal methods did not help in the presented situations. One of the concepts that this dissertation focused on is the effect of the number of trials on the classification accuracies. The achievement of the good classification results by using a small number of trials starting from two trials only
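The principal-angle computation itself is standard (the Björck-Golub construction); a minimal sketch, with two toy subspaces of R^4 standing in for sets of EEG trials:

```python
import numpy as np

def principal_angles(X, Y):
    """Principal angles between the column spans of X and Y:
    orthonormalize each set, then the singular values of Qx^T Qy are
    the cosines of the angles (Bjorck-Golub)."""
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)
    s = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return np.arccos(np.clip(s, -1.0, 1.0))

# Two 2-D subspaces sharing one direction: angles are 0 and 90 degrees.
e = np.eye(4)
X = e[:, [0, 1]]
Y = e[:, [0, 2]]
print(np.rad2deg(principal_angles(X, Y)))   # → [ 0. 90.]
```

For classification, the vector of principal angles (or a function of it, such as the smallest angle) serves as the distance between a new trial's subspace and each class's subspace.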
Subspace Dimensionality: A Tool for Automated QC in Seismic Array Processing
NASA Astrophysics Data System (ADS)
Rowe, C. A.; Stead, R. J.; Begnaud, M. L.
2013-12-01
Because of the great resolving power of seismic arrays, the application of automated processing to array data is critically important in treaty verification work. A significant problem in array analysis is the inclusion of bad sensor channels in the beamforming process. We are testing an approach to automated, on-the-fly quality control (QC) to aid in the identification of poorly performing sensor channels prior to beamforming in routine event detection or location processing. The idea stems from methods used for large computer servers, where monitoring traffic at enormous numbers of nodes is impractical on a node-by-node basis, so the dimensionality of the node traffic is instead monitored for anomalies that could represent malware, cyber-attacks or other problems. The technique relies upon the use of subspace dimensionality or principal components of the overall system traffic. The subspace technique is not new to seismology, but its most common application has been limited to comparing waveforms to an a priori collection of templates for detecting highly similar events in a swarm or seismic cluster. In the established template application, a detector functions in a manner analogous to waveform cross-correlation, applying a statistical test to assess the similarity of the incoming data stream to known templates for events of interest. In our approach, we seek not to detect matching signals, but instead, we examine the signal subspace dimensionality in much the same way that the method addresses node traffic anomalies in large computer systems. Signal anomalies recorded on seismic arrays affect the dimensional structure of the array-wide time-series. We have shown previously that this observation is useful in identifying real seismic events, either by looking at the raw signal or derivatives thereof (entropy, kurtosis), but here we explore the effects of malfunctioning channels on the dimension of the data and its derivatives, and how to leverage this effect for
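A toy version of this dimensionality screen (synthetic array data; the coherence model and the variance threshold are hypothetical): a coherent wavefront keeps the array-wide subspace dimension low, and a malfunctioning channel raises it.

```python
import numpy as np

rng = np.random.default_rng(5)
n_ch, n_samp = 16, 2000
# A coherent wavefront across the array: rank ~1 plus weak sensor noise.
wave = np.sin(2 * np.pi * 5 * np.linspace(0, 1, n_samp))
gains = 1.0 + 0.05 * rng.standard_normal(n_ch)
clean = gains[:, None] * wave[None, :] + 0.01 * rng.standard_normal((n_ch, n_samp))

bad = clean.copy()
bad[7] = rng.standard_normal(n_samp)      # channel 7 malfunctions

def effective_dim(X, frac=0.95):
    """Number of principal components capturing `frac` of the array-wide
    variance -- a cheap proxy for subspace dimensionality."""
    s = np.linalg.svd(X - X.mean(axis=1, keepdims=True), compute_uv=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    return int(np.searchsorted(energy, frac) + 1)

print(effective_dim(clean), effective_dim(bad))  # bad channel raises the dimension
```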
Kanwal, Abhinav; Kasetti, Sujatha; Putcha, Uday Kumar; Asthana, Shailendra; Banerjee, Sanjay K
2016-01-01
The concept of cardioprotection through preconditioning against ischemia–reperfusion (I/R) injury is well known and established. However, among the different proposed mechanisms of ischemic preconditioning, protein kinase C (PKC)-mediated cardioprotection plays a key role in myocardial I/R injury. Thus, this study was designed to find the relationship between PKC and sodium glucose transporter 1 (SGLT1) in preconditioning-induced cardioprotection, a relationship that has been poorly characterized to date. By applying a multifaceted approach, we demonstrated that PKC activates SGLT1, which curbed oxidative stress and apoptosis against I/R injury. PKC activation enhances cardiac glucose uptake through SGLT1 and seems essential in preventing I/R-induced cardiac injury, indicating a possible cross-talk between PKC and SGLT1. PMID:27695290
NASA Technical Reports Server (NTRS)
Tweedt, Daniel L.; Chima, Rodrick V.; Turkel, Eli
1997-01-01
A preconditioning scheme has been implemented into a three-dimensional viscous computational fluid dynamics code for turbomachine blade rows. The preconditioning allows the code, originally developed for simulating compressible flow fields, to be applied to nearly-incompressible, low Mach number flows. A brief description is given of the compressible Navier-Stokes equations for a rotating coordinate system, along with the preconditioning method employed. Details about the conservative formulation of artificial dissipation are provided, and different artificial dissipation schemes are discussed and compared. The preconditioned code was applied to a well-documented case involving the NASA large low-speed centrifugal compressor for which detailed experimental data are available for comparison. Performance and flow field data are compared for the near-design operating point of the compressor, with generally good agreement between computation and experiment. Further, significant differences between computational results for the different numerical implementations, revealing different levels of solution accuracy, are discussed.
Cai, Yunfeng; Bai, Zhaojun; Pask, John E.; Sukumar, N.
2013-12-15
The iterative diagonalization of a sequence of large ill-conditioned generalized eigenvalue problems is a computational bottleneck in quantum mechanical methods employing a nonorthogonal basis for ab initio electronic structure calculations. We propose a hybrid preconditioning scheme to effectively combine global and locally accelerated preconditioners for rapid iterative diagonalization of such eigenvalue problems. In partition-of-unity finite-element (PUFE) pseudopotential density-functional calculations, employing a nonorthogonal basis, we show that the hybrid preconditioned block steepest descent method is a cost-effective eigensolver, outperforming current state-of-the-art global preconditioning schemes, and comparably efficient for the ill-conditioned generalized eigenvalue problems produced by PUFE as the locally optimal block preconditioned conjugate-gradient method for the well-conditioned standard eigenvalue problems produced by planewave methods.
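As a rough illustration of the preconditioned steepest descent idea underlying the eigensolvers above (not the authors' hybrid PUFE scheme), the following sketch minimizes the Rayleigh quotient over a two-dimensional search space, span{x, M⁻¹r}, at each step via a small Rayleigh-Ritz problem. The diagonal test matrix and Jacobi preconditioner are assumptions chosen for the demo:

```python
import numpy as np

def preconditioned_steepest_descent(A, M_inv, x, iters=200):
    """Smallest eigenpair of symmetric A by preconditioned steepest
    descent: each step does a Rayleigh-Ritz projection onto the
    2-dimensional space span{x, M^{-1} r}."""
    x = x / np.linalg.norm(x)
    for _ in range(iters):
        mu = x @ A @ x                      # Rayleigh quotient
        r = A @ x - mu * x                  # eigen-residual
        p = M_inv @ r                       # preconditioned search direction
        S, _ = np.linalg.qr(np.column_stack([x, p]))
        T = S.T @ A @ S                     # 2x2 projected problem
        w, V = np.linalg.eigh(T)
        x = S @ V[:, 0]                     # Ritz vector for smallest value
    return x @ A @ x, x

# Ill-conditioned diagonal test matrix with a Jacobi preconditioner
n = 100
d = np.linspace(1.0, 1e4, n)
A = np.diag(d)
M_inv = np.diag(1.0 / d)
lam, x = preconditioned_steepest_descent(A, M_inv, np.ones(n))
print(lam)  # converges to the smallest eigenvalue, 1.0
```

Block versions iterate several vectors simultaneously with the same projected-problem structure, which is the form used for the generalized eigenvalue problems discussed in the abstract.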
Pehlevan, Cengiz; Hu, Tao; Chklovskii, Dmitri B
2015-07-01
Neural network models of early sensory processing typically reduce the dimensionality of streaming input data. Such networks learn the principal subspace, in the sense of principal component analysis, by adjusting synaptic weights according to activity-dependent learning rules. When derived from a principled cost function, these rules are nonlocal and hence biologically implausible. At the same time, biologically plausible local rules have been postulated rather than derived from a principled cost function. Here, to bridge this gap, we derive a biologically plausible network for subspace learning on streaming data by minimizing a principled cost function. In a departure from previous work, where cost was quantified by the representation, or reconstruction, error, we adopt a multidimensional scaling cost function for streaming data. The resulting algorithm relies only on biologically plausible Hebbian and anti-Hebbian local learning rules. In a stochastic setting, synaptic weights converge to a stationary state, which projects the input data onto the principal subspace. If the data are generated by a nonstationary distribution, the network can track the principal subspace. Thus, our result makes a step toward an algorithmic theory of neural computation.
Gao, Heng-zhen; Wan, Jian-wei; Zhu, Zhen-zhen; Wang, Li-bao; Nian, Yong-jian
2011-05-01
The present paper proposes a novel hyperspectral image classification algorithm based on the LS-SVM (least squares support vector machine). The LS-SVM uses features extracted from subspaces of bands (SOBs). The maximum noise fraction (MNF) method is adopted for feature extraction. The spectral correlations of the hyperspectral image are used to divide the feature space into several SOBs, and the MNF is then used to extract characteristic features from each SOB. The extracted features are combined into the feature vector for classification, so strong inter-band correlations are avoided and spectral redundancy is reduced. The LS-SVM classifier is adopted, which replaces the inequality constraints of the standard SVM with equality constraints; this reduces computational cost and improves learning performance. The proposed method optimizes spectral information by feature extraction and reduces spectral noise, improving classifier performance. Experimental results show the superiority of the proposed algorithm.
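A minimal sketch of the LS-SVM training step mentioned above: replacing the inequality constraints of a standard SVM with equality constraints reduces training to a single linear system. The RBF kernel, the labels-as-regression-targets formulation, and the toy data standing in for SOB feature vectors are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

def rbf_kernel(X, Z, sigma):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """LS-SVM training: one bordered linear system instead of a QP."""
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma       # ridge term from equality constraints
    rhs = np.concatenate([[0.0], y])
    sol = np.linalg.solve(A, rhs)
    return sol[1:], sol[0]                   # alpha, bias b

def lssvm_predict(X_train, alpha, b, X_test, sigma=1.0):
    return np.sign(rbf_kernel(X_test, X_train, sigma) @ alpha + b)

# Toy two-class problem standing in for SOB feature vectors
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1, 0.3, (20, 2)), rng.normal(1, 0.3, (20, 2))])
y = np.concatenate([-np.ones(20), np.ones(20)])
alpha, b = lssvm_train(X, y)
acc = (lssvm_predict(X, alpha, b, X) == y).mean()
print(acc)
```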
Application of a Subspace-Based Fault Detection Method to Industrial Structures
NASA Astrophysics Data System (ADS)
Mevel, L.; Hermans, L.; van der Auweraer, H.
1999-11-01
Early detection and localization of damage allow increased expectations of reliability, safety and reduction of the maintenance cost. This paper deals with the industrial validation of a technique to monitor the health of a structure in operating conditions (e.g. rotating machinery, civil constructions subject to ambient excitations, etc.) and to detect slight deviations in a modal model derived from in-operation measured data. In this paper, a statistical local approach based on covariance-driven stochastic subspace identification is proposed. The capabilities and limitations of the method with respect to health monitoring and damage detection are discussed and it is explained how the method can be practically used in industrial environments. After the successful validation of the proposed method on a few laboratory structures, its application to a sports car is discussed. The example illustrates that the method allows the early detection of a vibration-induced fatigue problem of a sports car.
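The covariance-driven stochastic subspace identification step underlying this monitoring approach can be sketched as follows. This is a bare-bones illustration (the names and the simulated single-mode system are ours, not the paper's industrial data): a block Hankel matrix of output covariances is factored by SVD into an observability matrix, from which the state-space pair (A, C) — and hence the modal model — is recovered:

```python
import numpy as np

def ssi_cov(Y, order, nblk=10):
    """Covariance-driven SSI: SVD a block Hankel of output covariances,
    take C as the first block row of the observability matrix and A
    from its shift-invariance."""
    l, N = Y.shape
    R = [Y[:, i:] @ Y[:, :N - i].T / (N - i) for i in range(2 * nblk + 1)]
    H = np.block([[R[p + q + 1] for q in range(nblk)] for p in range(nblk)])
    U, s, _ = np.linalg.svd(H)
    O = U[:, :order] * np.sqrt(s[:order])      # observability matrix
    C = O[:l]
    A = np.linalg.pinv(O[:-l]) @ O[l:]         # O shifted down one block row
    return A, C

# Simulate one lightly damped mode observed on three channels
rng = np.random.default_rng(2)
rho, th = 0.95, 0.5                            # pole radius and angle
A_true = rho * np.array([[np.cos(th), -np.sin(th)],
                         [np.sin(th),  np.cos(th)]])
C_true = rng.standard_normal((3, 2))
x = np.zeros(2)
Y = np.empty((3, 20000))
for k in range(Y.shape[1]):
    x = A_true @ x + rng.standard_normal(2)    # ambient (process noise) excitation
    Y[:, k] = C_true @ x
A_id, C_id = ssi_cov(Y, order=2)
poles = np.linalg.eigvals(A_id)
print(np.abs(poles), np.angle(poles))          # compare with rho and ±th
```

Damage detection in the statistical local approach then amounts to testing whether newly measured covariances remain consistent with the (A, C) model identified in the healthy state.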
Cho, Soojin; Park, Jong-Woong; Sim, Sung-Han
2015-04-08
Wireless sensor networks (WSNs) facilitate a new paradigm to structural identification and monitoring for civil infrastructure. Conventional structural monitoring systems based on wired sensors and centralized data acquisition systems are costly for installation as well as maintenance. WSNs have emerged as a technology that can overcome such difficulties, making deployment of a dense array of sensors on large civil structures both feasible and economical. However, as opposed to wired sensor networks in which centralized data acquisition and processing is common practice, WSNs require decentralized computing algorithms to reduce data transmission due to the limitation associated with wireless communication. In this paper, the stochastic subspace identification (SSI) technique is selected for system identification, and SSI-based decentralized system identification (SDSI) is proposed to be implemented in a WSN composed of Imote2 wireless sensors that measure acceleration. The SDSI is tightly scheduled in the hierarchical WSN, and its performance is experimentally verified in a laboratory test using a 5-story shear building model.
Engineering of a quantum state by time-dependent decoherence-free subspaces
NASA Astrophysics Data System (ADS)
Wu, S. L.
2015-03-01
We apply the time-dependent decoherence-free subspace theory to a Markovian open quantum system in order to present a proposal for a quantum-state engineering program. By quantifying the purity of the quantum state, we verify that the quantum-state engineering process designed via our method is completely unitary within any total engineering time. Even though the controls on the open quantum system are not perfect, the asymptotic purity is still robust. Owing to its ability to completely resist decoherence and the lack of restraint on the total engineering time, our proposal is suitable for multitask quantum-state engineering programs. Therefore, this proposal is not only useful for realizing quantum-state engineering experimentally, it may also help in building quantum simulation and quantum information equipment.
Accelerating the weighted histogram analysis method by direct inversion in the iterative subspace
Zhang, Cheng; Lai, Chun-Liang; Pettitt, B. Montgomery
2016-01-01
The weighted histogram analysis method (WHAM) for free energy calculations is a valuable tool for producing free energy differences with minimal statistical error. Given multiple simulations, WHAM obtains from the distribution overlaps the optimal statistical estimator of the density of states, from which the free energy differences can be computed. The WHAM equations are often solved by an iterative procedure. In this work, we use a well-known linear algebra algorithm which allows for more rapid convergence to the solution. We find that the computational complexity of the iterative solution to WHAM and the closely related multiple Bennett acceptance ratio (MBAR) method can be improved by using the method of direct inversion in the iterative subspace. We give examples from a lattice model, a simple liquid and an aqueous protein solution. PMID:27453632
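Direct inversion in the iterative subspace (DIIS) applies to any fixed-point iteration x ← g(x), which is the form the WHAM self-consistency equations take. The sketch below is generic (the toy linear problem merely stands in for the WHAM update, whose exact form is not reproduced here): the last few iterates are combined with coefficients summing to one that minimize the norm of the combined residual:

```python
import numpy as np

def diis_solve(g, x0, mmax=5, tol=1e-10, iters=100):
    """Accelerate x <- g(x) by DIIS: extrapolate over the last mmax
    iterates, choosing coefficients c (sum(c)=1) that minimize the
    norm of the combined residual."""
    xs, rs = [], []
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        gx = g(x)
        xs.append(gx)
        rs.append(gx - x)
        if len(xs) > mmax:
            xs.pop(0)
            rs.pop(0)
        m = len(xs)
        # Lagrangian system for: min c^T B c  subject to  sum(c) = 1
        A = np.zeros((m + 1, m + 1))
        A[:m, :m] = [[ri @ rj for rj in rs] for ri in rs]
        A[:m, m] = A[m, :m] = 1.0
        rhs = np.zeros(m + 1)
        rhs[m] = 1.0
        c = np.linalg.lstsq(A, rhs, rcond=None)[0][:m]  # robust to singular B
        x = sum(ci * xi for ci, xi in zip(c, xs))
        if np.linalg.norm(g(x) - x) < tol:
            break
    return x

# Toy self-consistency problem standing in for the WHAM update
M = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 5.0]])
b = np.array([1.0, 2.0, 3.0])
g = lambda x: x + 0.2 * (b - M @ x)      # damped fixed-point step
x = diis_solve(g, np.zeros(3))
print(np.linalg.norm(M @ x - b))         # residual driven near zero
```

For WHAM or MBAR, g(x) would be the standard free-energy update and x the vector of per-window free energies.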
Robust Quantum-Network Memory Using Decoherence-Protected Subspaces of Nuclear Spins
NASA Astrophysics Data System (ADS)
Reiserer, Andreas; Kalb, Norbert; Blok, Machiel S.; van Bemmelen, Koen J. M.; Taminiau, Tim H.; Hanson, Ronald; Twitchen, Daniel J.; Markham, Matthew
2016-04-01
The realization of a network of quantum registers is an outstanding challenge in quantum science and technology. We experimentally investigate a network node that consists of a single nitrogen-vacancy center electronic spin hyperfine coupled to nearby nuclear spins. We demonstrate individual control and readout of five nuclear spin qubits within one node. We then characterize the storage of quantum superpositions in individual nuclear spins under repeated application of a probabilistic optical internode entangling protocol. We find that the storage fidelity is limited by dephasing during the electronic spin reset after failed attempts. By encoding quantum states into a decoherence-protected subspace of two nuclear spins, we show that quantum coherence can be maintained for over 1000 repetitions of the remote entangling protocol. These results and insights pave the way towards remote entanglement purification and the realization of a quantum repeater using nitrogen-vacancy center quantum-network nodes.
Joint DOA and multi-pitch estimation based on subspace techniques
NASA Astrophysics Data System (ADS)
Xi Zhang, Johan; Christensen, Mads Græsbøll; Jensen, Søren Holdt; Moonen, Marc
2012-12-01
In this article, we present a novel method for high-resolution joint direction-of-arrival (DOA) and multi-pitch estimation based on subspaces decomposed from a spatio-temporal data model. The resulting estimator is termed multi-channel harmonic MUSIC (MC-HMUSIC). It is capable of resolving sources under adverse conditions, unlike traditional methods, for example when multiple sources are impinging on the array from approximately the same angle or with similar pitches. The effectiveness of the method is demonstrated on simulated anechoic array recordings with source signals from real recorded speech and clarinet. Furthermore, statistical evaluation with synthetic signals shows the increased robustness in DOA and fundamental frequency estimation, compared with a state-of-the-art reference method.
Bayesian estimation of Karhunen-Loève expansions; A random subspace approach
NASA Astrophysics Data System (ADS)
Chowdhary, Kenny; Najm, Habib N.
2016-08-01
One of the most widely-used procedures for dimensionality reduction of high dimensional data is Principal Component Analysis (PCA). More broadly, low-dimensional stochastic representation of random fields with finite variance is provided via the well known Karhunen-Loève expansion (KLE). The KLE is analogous to a Fourier series expansion for a random process, where the goal is to find an orthogonal transformation for the data such that the projection of the data onto this orthogonal subspace is optimal in the L2 sense, i.e., which minimizes the mean square error. In practice, this orthogonal transformation is determined by performing an SVD (Singular Value Decomposition) on the sample covariance matrix or on the data matrix itself. Sampling error is typically ignored when quantifying the principal components, or, equivalently, basis functions of the KLE. Furthermore, it is exacerbated when the sample size is much smaller than the dimension of the random field. In this paper, we introduce a Bayesian KLE procedure, allowing one to obtain a probabilistic model on the principal components, which can account for inaccuracies due to limited sample size. The probabilistic model is built via Bayesian inference, from which the posterior becomes the matrix Bingham density over the space of orthonormal matrices. We use a modified Gibbs sampling procedure to sample on this space and then build probabilistic Karhunen-Loève expansions over random subspaces to obtain a set of low-dimensional surrogates of the stochastic process. We illustrate this probabilistic procedure with a finite dimensional stochastic process inspired by Brownian motion.
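The non-Bayesian baseline described above — a truncated KLE obtained from an SVD of the centered data matrix — can be sketched as follows; the Bayesian procedure then places a matrix Bingham posterior over the orthogonal modes rather than treating them as fixed. Function names are ours, and the Brownian-motion demo mirrors the abstract's closing example:

```python
import numpy as np

def kle(samples, n_modes):
    """Truncated Karhunen-Loève expansion from realizations of a random
    field: the SVD of the centered data matrix gives orthonormal modes;
    projections onto them give uncorrelated KL coefficients."""
    mean = samples.mean(axis=0)
    X = samples - mean
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    modes = Vt[:n_modes]                       # orthonormal basis functions
    coeffs = X @ modes.T                       # KL coefficients per sample
    energy = (s ** 2)[:n_modes].sum() / (s ** 2).sum()
    return mean, modes, coeffs, energy

# Brownian-motion-like realizations (cumulative sums of white noise)
rng = np.random.default_rng(3)
samples = np.cumsum(rng.standard_normal((500, 100)), axis=1)
mean, modes, coeffs, energy = kle(samples, n_modes=5)
print(energy)   # fraction of variance captured by 5 modes; large here
```

Because the modes minimize the mean-square truncation error, a handful of them capture most of the variance of a smooth process like Brownian motion; sampling error in `modes` is exactly what the Bayesian treatment quantifies.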
Global spatial sensitivity of runoff to subsurface permeability using the active subspace method
NASA Astrophysics Data System (ADS)
Gilbert, James M.; Jefferson, Jennifer L.; Constantine, Paul G.; Maxwell, Reed M.
2016-06-01
Hillslope scale runoff is generated as a result of interacting factors that include water influx rate, surface and subsurface properties, and antecedent saturation. Heterogeneity of these factors affects the existence and characteristics of runoff. This heterogeneity becomes an increasingly relevant consideration as hydrologic models are extended and employed to capture greater detail in runoff generating processes. We investigate the impact of one type of heterogeneity - subsurface permeability - on runoff using the integrated hydrologic model ParFlow. Specifically, we examine the sensitivity of runoff to variation in three-dimensional subsurface permeability fields for scenarios dominated by either Hortonian or Dunnian runoff mechanisms. Ten thousand statistically consistent subsurface permeability fields are parameterized using a truncated Karhunen-Loève (KL) series and used as inputs to 48-h simulations of integrated surface-subsurface flow in an idealized 'tilted-v' domain. Coefficients of the spatial modes of the KL permeability fields provide the parameter space for analysis using the active subspace method. The analysis shows that for Dunnian-dominated runoff conditions the cumulative runoff volume is sensitive primarily to the first spatial mode, corresponding to permeability values in the center of the three-dimensional model domain. In the Hortonian case, runoff volume is sensitive to multiple smaller-scale spatial modes and the locus of that sensitivity is in the near-surface zone upslope from the domain outlet. Variation in runoff volume resulting from random heterogeneity configurations can be expressed as an approximately univariate function of the active variable, a weighted combination of spatial parameterization coefficients computed through the active subspace method. However, this relationship between the active variable and runoff volume is better defined for Dunnian runoff than for the Hortonian scenario.
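The core computation of the active subspace method can be sketched independently of the ParFlow application: eigendecompose the average outer product of gradient samples, C = E[∇f ∇fᵀ], and keep the leading eigenvectors as the directions along which the output varies most on average. The quadratic test function and all names below are illustrative assumptions:

```python
import numpy as np

def active_subspace(grad_samples, k=1):
    """Estimate C = E[grad f grad f^T] from gradient samples and return
    its top-k eigenpairs; the eigenvectors span the active subspace."""
    C = grad_samples.T @ grad_samples / len(grad_samples)
    w, V = np.linalg.eigh(C)
    idx = np.argsort(w)[::-1]
    return w[idx], V[:, idx[:k]]               # eigenvalues, active directions

# f(x) = (a . x)^2 has a one-dimensional active subspace along a
rng = np.random.default_rng(4)
a = np.array([3.0, 1.0, 0.5, 0.1])
a /= np.linalg.norm(a)
X = rng.standard_normal((2000, 4))             # parameter samples
grads = (2 * (X @ a))[:, None] * a             # gradient of (a.x)^2 at each sample
w, W = active_subspace(grads, k=1)
print(np.abs(W[:, 0] @ a))                     # alignment with the true direction
```

The "active variable" reported in the abstract is then the projection Wᵀx of the KL coefficients onto these leading directions, against which runoff volume is plotted as an approximately univariate function.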
Application of the adaptive subspace detector to Raman spectra for biological threat detection
NASA Astrophysics Data System (ADS)
Russell, Thomas A.; Borchardt, Steven; Anderson, Richard; Treado, Patrick; Neiss, Jason
2006-10-01
Effective application of point detectors in the field to monitor the air for biological attack imposes a challenging set of requirements on threat detection algorithms. Raman spectra exhibit features that discriminate between threats and non-threats, and such spectra can be collected quickly, offering a potential solution given the appropriate algorithm. The algorithm must attempt to match to known threat signatures, while suppressing the background clutter in order to produce acceptable Receiver Operating Characteristic (ROC) curves. The radar space-time adaptive processing (STAP) community offers a set of tools appropriate to this problem, and these have recently crossed over into hyperspectral imaging (HSI) applications. The Adaptive Subspace Detector (ASD) is the Generalized Likelihood Ratio Test (GLRT) detector for structured backgrounds (which we expect for Raman background spectra) and mixed pixels, and supports the necessary adaptation to varying background environments. The structured background model reduces the training required for that adaptation, and the number of statistical assumptions required. We applied the ASD to large Raman spectral databases collected by ChemImage, developed spectral libraries of threat signatures and several backgrounds, and tested the algorithm against individual and mixture spectra, including in blind tests. The algorithm was successful in detecting threats, however, in order to maintain the desired false alarm rate, it was necessary to shift the decision threshold so as to give up some detection sensitivity. This was due to excess spread of the detector histograms, apparently related to variability in the signatures not captured by the subspaces, and evidenced by non-Gaussian residuals. We present here performance modeling, test data, algorithm and sensor performance results, and model validation conclusions.
Huang, Jian; Yuen, Pong C; Chen, Wen-Sheng; Lai, Jian Huang
2007-08-01
This paper addresses the problem of automatically tuning multiple kernel parameters for the kernel-based linear discriminant analysis (LDA) method. The kernel approach has been proposed to solve face recognition problems under complex distribution by mapping the input space to a high-dimensional feature space. Some recognition algorithms such as the kernel principal components analysis, kernel Fisher discriminant, generalized discriminant analysis, and kernel direct LDA have been developed in the last five years. The experimental results show that the kernel-based method is a good and feasible approach to tackle the pose and illumination variations. One of the crucial factors in the kernel approach is the selection of kernel parameters, which highly affects the generalization capability and stability of the kernel-based learning methods. In view of this, we propose an eigenvalue-stability-bounded margin maximization (ESBMM) algorithm to automatically tune the multiple parameters of the Gaussian radial basis function kernel for the kernel subspace LDA (KSLDA) method, which is developed based on our previously developed subspace LDA method. The ESBMM algorithm improves the generalization capability of the kernel-based LDA method by maximizing the margin maximization criterion while maintaining the eigenvalue stability of the kernel-based LDA method. An in-depth investigation on the generalization performance on pose and illumination dimensions is performed using the YaleB and CMU PIE databases. The FERET database is also used for benchmark evaluation. Compared with the existing PCA-based and LDA-based methods, our proposed KSLDA method, with the ESBMM kernel parameter estimation algorithm, gives superior performance.
do Amaral e Silva Müller, Gabrielle; Vandresen-Filho, Samuel; Tavares, Carolina Pereira; Menegatti, Angela C O; Terenzi, Hernán; Tasca, Carla Inês; Severino, Patricia Cardoso
2013-05-01
Preconditioning induced by N-methyl-D-aspartate (NMDA) has been used as a therapeutic tool against later neuronal insults. NMDA preconditioning affords neuroprotection against convulsions and cellular damage induced by the NMDA receptor agonist, quinolinic acid (QA) with time-window dependence. This study aimed to evaluate the molecular alterations promoted by NMDA and to compare these alterations in different periods of time that are related to the presence or lack of neuroprotection. Putative mechanisms related to NMDA preconditioning were evaluated via a proteomic analysis by using a time-window study. After a subconvulsant and protective dose of NMDA administration mice, hippocampi were removed (1, 24 or 72 h) and total protein analyzed by 2DE gels and identified by MALDI-TOF. Differential protein expression among the time induction of NMDA preconditioning was observed. In the hippocampus of protected mice (24 h), four proteins: HSP70(B), aspartyl-tRNA synthetase, phosphatidylethanolamine binding protein and creatine kinase were found to be up-regulated. Two other proteins, HSP70(A) and V-type proton ATPase were found down-regulated. Proteomic analysis showed that the neuroprotection induced by NMDA preconditioning altered signaling pathways, cell energy maintenance and protein synthesis and processing. These events may occur in a sense to attenuate the excitotoxicity process during the activation of neuroprotection promoted by NMDA preconditioning.
Synergistic effect of high-affinity binding and flow preconditioning on endothelial cell adhesion.
Mathur, Anshu B; Truskey, George A; Reichert, William M
2003-01-01
The current study examined whether the combined introduction of high-affinity avidin-biotin bonds and fibronectin-integrin bonds (i.e., dual ligand treatment) would further augment the adhesion of flow-preconditioned endothelial cells to model substrates via contributions to the actin cytoskeleton and the formation of focal contacts. Human umbilical vein endothelial cells (HUVEC) were grown under static conditions or exposed to a flow-preconditioning regimen for 24 h. Cell retention was determined by exposure to 75 dynes/cm(2). The combination of flow preconditioning and the dual ligand treatment yielded higher cell retention under flow compared to the cells adherent via fibronectin-integrin bonds only. This increase in adhesion strength correlated with a greater focal contact area. Elongation of the HUVEC occurred after exposure to flow preconditioning; however, orientation of dual ligand adherent cells was restricted due to the presence of the high-affinity ligand. Flow-preconditioned cells showed increased stress fiber formation compared to nonconditioned cells although the stress fibers per cell for flow-preconditioned cells were the same on both the ligand systems employed. The results indicate that enhanced adhesion strength is due to a combination of increased focal contact area, stress fiber formation, and cell alignment. PMID:12483708
Intestinal ischemic preconditioning reduces liver ischemia reperfusion injury in rats
XUE, TONG-MIN; TAO, LI-DE; ZHANG, JIE; ZHANG, PEI-JIAN; LIU, XIA; CHEN, GUO-FENG; ZHU, YI-JIA
2016-01-01
The aim of the current study was to investigate whether intestinal ischemic preconditioning (IP) reduces damage to the liver during hepatic ischemia reperfusion (IR). Sprague Dawley rats were used to model liver IR injury, and were divided into the sham operation (SO), IR and IP groups. The results indicated that IR significantly increased Bax, caspase 3 and NF-κBp65 expression levels, with reduced expression of Bcl-2 compared with the IP group. Compared with the IR group, the levels of AST, ALT, MPO, MDA, TNF-α and IL-1 were significantly reduced in the IP group. Immunohistochemistry for Bcl-2 and Bax indicated that Bcl-2 expression in the IP group was significantly increased compared with the IR group. In addition, IP reduced Bax expression compared with the IR group. Liver injury was more severe in the IR group and reduced in the IP group, as indicated by the morphological evaluation of liver tissues. The present study suggested that IP may alleviate apoptosis, reduce the release of pro-inflammatory cytokines, ameliorate reductions in liver function and reduce liver tissue injury. To conclude, IP provided protection against hepatic IR injury. PMID:26821057
Effects of hypoxic preconditioning on synaptic ultrastructure in mice.
Liu, Yi; Sun, Zhishan; Sun, Shufeng; Duan, Yunxia; Shi, Jingfei; Qi, Zhifeng; Meng, Ran; Sun, Yongxin; Zeng, Xianwei; Chui, Dehua; Ji, Xunming
2015-01-01
Hypoxic preconditioning (HPC) elicits resistance to more drastic subsequent insults, which potentially provide neuroprotective therapeutic strategy, but the underlying mechanisms remain to be fully elucidated. Here, we examined the effects of HPC on synaptic ultrastructure in olfactory bulb of mice. Mice underwent up to five cycles of repeated HPC treatments, and hypoxic tolerance was assessed with a standard gasp reflex assay. As expected, HPC induced an increase in tolerance time. To assess synaptic responses, Western blots were used to quantify protein levels of representative markers for glia, neuron, and synapse, and transmission electron microscopy was used to examine synaptic ultrastructure and mitochondrial density. HPC did not significantly alter the protein levels of astroglial marker (GFAP), neuron-specific markers (GAP43, Tuj-1, and OMP), synaptic number markers (synaptophysin and SNAP25) or the percentage of excitatory synapses versus inhibitory synapses. However, HPC significantly affected synaptic curvature and the percentage of synapses with presynaptic mitochondria, which showed concomitant change pattern. These findings demonstrate that HPC is associated with changes in synaptic ultrastructure. PMID:25155519
Kochen-Specker Theorem as a Precondition for Quantum Computing
NASA Astrophysics Data System (ADS)
Nagata, Koji; Nakamura, Tadao
2016-08-01
We study the relation between the Kochen-Specker theorem (the KS theorem) and quantum computing. The KS theorem rules out a realistic theory of the KS type. We consider a realistic theory of the KS type in which the results of measurements are either +1 or -1. We discuss an inconsistency between the realistic theory of the KS type and the controllability of quantum computing: we must give up controllability if we accept the realistic theory of the KS type. We likewise discuss an inconsistency between the realistic theory of the KS type and the observability of quantum computing, using the double-slit experiment, the most basic experiment in quantum mechanics, which can serve as a simple detector for a Pauli observable. We cannot accept the realistic theory of the KS type when simulating the double-slit experiment in a significant specific case; the realistic theory of the KS type cannot describe a quantum detector. In short, we must give up both observability and controllability if we accept the realistic theory of the KS type. Therefore, the KS theorem is a precondition for quantum computing, i.e., the realistic theory of the KS type should be ruled out.
Protective effects of remote ischemic preconditioning in isolated rat hearts
Teng, Xiao; Yuan, Xin; Tang, Yue; Shi, Jingqian
2015-01-01
We used the Langendorff model to investigate whether remote ischemic preconditioning (RIPC) attenuates post-ischemic mechanical dysfunction in isolated rat hearts, and to explore possible mechanisms. SD rats were randomly divided into a RIPC group, a RIPC + norepinephrine (NE) depletion group, a RIPC + pertussis toxin (PTX) pretreatment group, an ischemia/reperfusion group without treatment (ischemia group) and a time control (TC) group. RIPC was achieved through interrupted occlusion of the anterior mesenteric artery. Then, the Langendorff model was established using routine methods. Heart function was tested; immunohistochemistry and ELISA methods were used to detect various indices related to myocardial injury. Compared with the ischemia group, in which the hemodynamic parameters deteriorated significantly, heart function recovered to a certain degree in the RIPC, RIPC + NE depletion, and RIPC + PTX groups (P<0.05). More apoptotic nuclei were observed in the ischemia group than in the other three groups (P<0.05); more apoptotic nuclei were detected in the NE depletion and PTX groups than in the RIPC group (P<0.05). However, there was no significant difference between the NE depletion and PTX groups. In conclusion, RIPC protection of I/R myocardium extends to the period after hearts are isolated. NE and PTX-sensitive inhibitory G protein might play a role in the protection process. PMID:26550168
Hayakawa, Kentaro; Okazaki, Rentaro; Morioka, Kazuhito; Nakamura, Kozo; Tanaka, Sakae; Ogata, Toru
2014-12-01
The inflammatory response following spinal cord injury (SCI) has both harmful and beneficial effects; however, it can be modulated for therapeutic benefit. Endotoxin/lipopolysaccharide (LPS) preconditioning, a well-established method for modifying the immune reaction, has been shown to attenuate damage induced by stroke and brain trauma in rodent models. Although such effects likely are conveyed by tissue-repairing functions of the inflammatory response, the mechanisms that control the effects have not yet been elucidated. The present study preconditioned C57BL6/J mice with 0.05 mg/kg of LPS 48 hr before inducing contusion SCI to investigate the effect of LPS preconditioning on the activation of macrophages/microglia. We found that LPS preconditioning promotes the polarization of M1/M2 macrophages/microglia toward an M2 phenotype in the injured spinal cord on quantitative real-time polymerase chain reaction, enzyme-linked immunosorbent assay, and immunohistochemical analyses. Flow cytometric analyses reveal that LPS preconditioning facilitates M2 activation in resident microglia but not in infiltrating macrophages. Augmented M2 activation was accompanied by vascularization around the injured lesion, resulting in improvement in both tissue reorganization and functional recovery. Furthermore, we found that M2 activation induced by LPS preconditioning is regulated by interleukin-10 gene expression, which was preceded by the transcriptional activation of interferon regulatory factor (IRF)-3, as demonstrated by Western blotting and an IRF-3 binding assay. Altogether, our findings demonstrate that LPS preconditioning has a therapeutic effect on SCI through the modulation of M1/M2 polarization of resident microglia. The present study suggests that controlling M1/M2 polarization through endotoxin signal transduction could become a promising therapeutic strategy for various central nervous system diseases. © 2014 Wiley Periodicals, Inc.
Calik, Michael W.; Shankarappa, Sahadev A.; Langert, Kelly A.; Stubbs, Evan B.
2015-01-01
A short-term exposure to moderately intense physical exercise affords a novel measure of protection against autoimmune-mediated peripheral nerve injury. Here, we investigated the mechanism by which forced exercise attenuates the development and progression of experimental autoimmune neuritis (EAN), an established animal model of Guillain–Barré syndrome. Adult male Lewis rats remained sedentary (control) or were preconditioned with forced exercise (1.2 km/day × 3 weeks) prior to P2-antigen induction of EAN. Sedentary rats developed a monophasic course of EAN beginning on postimmunization day 12.3 ± 0.2 and reaching peak severity on day 17.0 ± 0.3 (N = 12). By comparison, forced-exercise preconditioned rats exhibited a similar monophasic course but with significant (p < .05) reduction of disease severity. Analysis of popliteal lymph nodes revealed a protective effect of exercise preconditioning on leukocyte composition and egress. Compared with sedentary controls, forced exercise preconditioning promoted a sustained twofold retention of P2-antigen responsive leukocytes. The percentage distribution of pro-inflammatory (Th1) lymphocytes retained in the nodes from sedentary EAN rats (5.1 ± 0.9%) was significantly greater than that present in nodes from forced-exercise preconditioned EAN rats (2.9 ± 0.6%) or from adjuvant controls (2.0 ± 0.3%). In contrast, the percentage of anti-inflammatory (Th2) lymphocytes (7–10%) and that of cytotoxic T lymphocytes (∼20%) remained unaltered by forced exercise preconditioning. These data do not support an exercise-inducible shift in Th1:Th2 cell bias. Rather, preconditioning with forced exercise elicits a sustained attenuation of EAN severity, in part, by altering the composition and egress of autoreactive proinflammatory (Th1) lymphocytes from draining lymph nodes. PMID:26186926
Meclizine Preconditioning Protects the Kidney Against Ischemia-Reperfusion Injury.
Kishi, Seiji; Campanholle, Gabriela; Gohil, Vishal M; Perocchi, Fabiana; Brooks, Craig R; Morizane, Ryuji; Sabbisetti, Venkata; Ichimura, Takaharu; Mootha, Vamsi K; Bonventre, Joseph V
2015-09-01
Global or local ischemia contributes to the pathogenesis of acute kidney injury (AKI). Currently there are no specific therapies to prevent AKI. Potentiation of glycolytic metabolism and attenuation of mitochondrial respiration may decrease cell injury and reduce reactive oxygen species generation from the mitochondria. Meclizine, an over-the-counter anti-nausea and -dizziness drug, was identified in a 'nutrient-sensitized' chemical screen. Pretreatment with 100 mg/kg of meclizine, 17 h prior to ischemia protected mice from IRI. Serum creatinine levels at 24 h after IRI were 0.13 ± 0.06 mg/dl (sham, n = 3), 1.59 ± 0.10 mg/dl (vehicle, n = 8) and 0.89 ± 0.11 mg/dl (meclizine, n = 8). Kidney injury was significantly decreased in meclizine treated mice compared with vehicle group (p < 0.001). Protection was also seen when meclizine was administered 24 h prior to ischemia. Meclizine reduced inflammation, mitochondrial oxygen consumption, oxidative stress, mitochondrial fragmentation, and tubular injury. Meclizine preconditioned kidney tubular epithelial cells, exposed to blockade of glycolytic and oxidative metabolism with 2-deoxyglucose and NaCN, had reduced LDH and cytochrome c release. Meclizine upregulated glycolysis in glucose-containing media and reduced cellular ATP levels in galactose-containing media. Meclizine inhibited the Kennedy pathway and caused rapid accumulation of phosphoethanolamine. Phosphoethanolamine recapitulated meclizine-induced protection both in vitro and in vivo. PMID:26501107
Glaciations in response to climate variations preconditioned by evolving topography.
Pedersen, Vivi Kathrine; Egholm, David Lundbek
2013-01-10
Landscapes modified by glacial erosion show a distinct distribution of surface area with elevation (hypsometry). In particular, the height of these regions is influenced by climatic gradients controlling the altitude where glacial and periglacial processes are the most active, and as a result, surface area is focused just below the snowline altitude. Yet the effect of this distinct glacial hypsometric signature on glacial extent and therefore on continued glacial erosion has not previously been examined. Here we show how this topographic configuration influences the climatic sensitivity of Alpine glaciers, and how the development of a glacial hypsometric distribution influences the intensity of glaciations on timescales of more than a few glacial cycles. We find that the relationship between variations in climate and the resulting variation in areal extent of glaciation changes drastically with the degree of glacial modification in the landscape. First, in landscapes with novel glaciations, a nearly linear relationship between climate and glacial area exists. Second, in previously glaciated landscapes with extensive area at a similar elevation, highly nonlinear and rapid glacial expansions occur with minimal climate forcing, once the snowline reaches the hypsometric maximum. Our results also show that erosion associated with glaciations before the mid-Pleistocene transition at around 950,000 years ago probably preconditioned the landscape--producing glacial landforms and hypsometric maxima--such that ongoing cooling led to a significant change in glacial extent and erosion, resulting in more extensive glaciations and valley deepening in the late Pleistocene epoch. We thus provide a mechanism that explains previous observations from exposure dating and low-temperature thermochronology in the European Alps, and suggest that there is a strong topographic control on the most recent Quaternary period glaciations.
Effects of ischemic preconditioning on short-duration cycling performance.
Cruz, Rogério Santos de Oliveira; de Aguiar, Rafael Alves; Turnes, Tiago; Salvador, Amadeo Félix; Caputo, Fabrizio
2016-08-01
It has been demonstrated that ischemic preconditioning (IPC) improves endurance performance. However, the potential benefits during anaerobic events and the mechanism(s) underlying these benefits remain unclear. Fifteen recreational cyclists were assessed to evaluate the effects of IPC of the upper thighs on anaerobic performance, skeletal muscle activation, and metabolic responses during a 60-s sprint performance. After an incremental test and a familiarization visit, subjects were randomly submitted in visits 3 and 4 to a performance protocol preceded by intermittent bilateral cuff inflation (4 × (5 min of blood flow restriction + 5 min reperfusion)) at either 220 mm Hg (IPC) or 20 mm Hg (control). To increase data reliability, each intervention was replicated, which was also in a random manner. In addition to the mean power output, the pulmonary oxygen uptake, blood lactate kinetics, and quadriceps electromyograms (EMGs) were analyzed during performance and throughout 45 min of passive recovery. After IPC, performance was improved by 2.1% compared with control (95% confidence intervals of 0.8% to 3.3%, P = 0.001), followed by increases in (i) the accumulated oxygen deficit, (ii) the amplitude of blood lactate kinetics, (iii) the total amount of oxygen consumed during recovery, and (iv) the overall EMG amplitude (P < 0.05). In addition, the ratio between EMG and power output was higher during the final third of performance after IPC (P < 0.05). These results suggest an increased skeletal muscle activation and a higher anaerobic contribution as the ultimate responses of IPC on short-term exercise performance.
Analysis and modeling of neural processes underlying sensory preconditioning.
Matsumoto, Yukihisa; Hirashima, Daisuke; Mizunami, Makoto
2013-03-01
Sensory preconditioning (SPC) is a procedure to demonstrate learning to associate between relatively neutral sensory stimuli in the absence of an external reinforcing stimulus, the underlying neural mechanisms of which have remained obscure. We address basic questions about neural processes underlying SPC, including whether neurons that mediate reward or punishment signals in reinforcement learning participate in association between neutral sensory stimuli. In crickets, we have suggested that octopaminergic (OA-ergic) or dopaminergic (DA-ergic) neurons participate in memory acquisition and retrieval in appetitive or aversive conditioning, respectively. Crickets that had been trained to associate an odor (CS2) with a visual pattern (CS1) (phase 1) and then to associate CS1 with water reward or quinine punishment (phase 2) exhibited a significantly increased or decreased preference for CS2 that had never been paired with the US, demonstrating successful SPC. Injection of an OA or DA receptor antagonist at different phases of the SPC training and testing showed that OA-ergic or DA-ergic neurons do not participate in learning of CS2-CS1 association in phase 1, but that OA-ergic neurons participate in learning in phase 2 and memory retrieval after appetitive SPC training. We also obtained evidence suggesting that association between CS2 and US, which should underlie conditioned response of crickets to CS2, is formed in phase 2, contrary to the standard theory of SPC assuming that it occurs in the final test. We propose models of SPC to account for these findings, by extending our model of classical conditioning.
Remote ischemic preconditioning in hemodialysis: a pilot study.
Park, Jongha; Ann, Soe Hee; Chung, Hyun Chul; Lee, Jong Soo; Kim, Shin-Jae; Garg, Scot; Shin, Eun-Seok
2014-01-01
Hemodialysis (HD)-induced myocardial ischemia is associated with an elevated cardiac troponin T, and is common in asymptomatic patients undergoing conventional HD. Remote ischemic preconditioning (RIPC) has a protective effect against myocardial ischemia-reperfusion injury. We hypothesized that RIPC also has a protective effect on HD-induced myocardial injury. Chronic HD patients were randomized to the control group or the RIPC group. RIPC was induced by transient occlusion of blood flow to the arm with a blood-pressure cuff for 5 min, followed by 5 min of deflation. Three cycles of inflation and deflation were undertaken before every HD session for 1 month (total 12 times). The primary outcome was the change in cardiac troponin T (cTnT) level at day 28 from baseline. Demographic and baseline laboratory values were not different between the control (n = 17) and the RIPC groups (n = 17). cTnT levels tended to decrease from day 2 in the RIPC group through to 28 days, in contrast to no change in the control group. There were significant differences in the change of cTnT level at day 28 from baseline [Control, median; -0.002 ng/ml (interquartile range -0.008 to 0.018) versus RIPC, median; -0.015 ng/ml (interquartile range -0.055 to 0.004), P = 0.012]. RIPC reduced cTnT release in chronic conventional HD patients, suggesting that this simple, cheap, safe, and well-tolerated procedure has a protective effect against HD-induced ischemia.
Human amniotic fluid stem cell preconditioning improves their regenerative potential.
Rota, Cinzia; Imberti, Barbara; Pozzobon, Michela; Piccoli, Martina; De Coppi, Paolo; Atala, Anthony; Gagliardini, Elena; Xinaris, Christodoulos; Benedetti, Valentina; Fabricio, Aline S C; Squarcina, Elisa; Abbate, Mauro; Benigni, Ariela; Remuzzi, Giuseppe; Morigi, Marina
2012-07-20
Human amniotic fluid stem (hAFS) cells, a novel class of broadly multipotent stem cells that share characteristics of both embryonic and adult stem cells, have been regarded as a promising candidate for cell therapy. Taking advantage of the well-established murine model of acute kidney injury (AKI), we studied the proregenerative effect of hAFS cells in immunodeficient mice injected with the nephrotoxic drug cisplatin. Infusion of hAFS cells in cisplatin-treated mice improved renal function and limited tubular damage, although not to control levels, and prolonged animal survival. Human AFS cells engrafted the injured kidney predominantly in the peritubular region without acquiring tubular epithelial markers. Human AFS cells exerted an antiapoptotic effect, activated Akt, and stimulated proliferation of tubular cells, possibly via local release of factors, including interleukin-6, vascular endothelial growth factor, and stromal cell-derived factor-1, which we documented in vitro to be produced by hAFS cells. The therapeutic potential of hAFS cells was enhanced by cell pretreatment with glial cell line-derived neurotrophic factor (GDNF), which markedly ameliorated renal function and tubular injury by increasing stem cell homing to the tubulointerstitial compartment. In vitro, GDNF increased hAFS cell production of growth factors, motility, and expression of receptors involved in cell homing and survival. These findings indicate that hAFS cells can promote functional recovery and contribute to renal regeneration in AKI mice via local production of mitogenic and prosurvival factors. The effects of hAFS cells can be remarkably enhanced by GDNF preconditioning.
An overview of NSPCG: A nonsymmetric preconditioned conjugate gradient package
NASA Astrophysics Data System (ADS)
Oppe, Thomas C.; Joubert, Wayne D.; Kincaid, David R.
1989-05-01
The most recent research-oriented software package developed as part of the ITPACK Project is called "NSPCG" since it contains many nonsymmetric preconditioned conjugate gradient procedures. It is designed to solve large sparse systems of linear algebraic equations by a variety of different iterative methods. One of the main purposes for the development of the package is to provide a common modular structure for research on iterative methods for nonsymmetric matrices. Another purpose for the development of the package is to investigate the suitability of several iterative methods for vector computers. Since the vectorizability of an iterative method depends greatly on the matrix structure, NSPCG allows great flexibility in the operator representation. The coefficient matrix can be passed in one of several different matrix data storage schemes. These sparse data formats allow matrices with a wide range of structures from highly structured ones such as those with all nonzeros along a relatively small number of diagonals to completely unstructured sparse matrices. Alternatively, the package allows the user to call the accelerators directly with user-supplied routines for performing certain matrix operations. In this case, one can use the data format from an application program and not be required to copy the matrix into one of the package formats. This is particularly advantageous when memory space is limited. Some of the basic preconditioners that are available are point methods such as Jacobi, Incomplete LU Decomposition and Symmetric Successive Overrelaxation as well as block and multicolor preconditioners. The user can select from a large collection of accelerators such as Conjugate Gradient (CG), Chebyshev (SI, for semi-iterative), Generalized Minimal Residual (GMRES), Biconjugate Gradient Squared (BCGS) and many others. The package is modular so that almost any accelerator can be used with almost any preconditioner.
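NSPCG itself is a Fortran package; purely as a minimal illustration of the accelerator-plus-preconditioner structure it provides, the following sketch pairs a conjugate gradient accelerator with point-Jacobi preconditioning (the function name `pcg` and the test matrix are ours, not the package's interface):

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=200):
    """Left-preconditioned conjugate gradient: solve A x = b, where
    M_inv is a callable applying the preconditioner inverse to a vector
    (point-Jacobi here: elementwise division by diag(A))."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Jacobi (point) preconditioning, one of the point methods NSPCG offers
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = pcg(A, b, lambda r: r / np.diag(A))
```

Passing the preconditioner as a callable mirrors the package's design point of letting the user supply matrix operations in their own data format.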
NASA Technical Reports Server (NTRS)
Pak, Chan-gi; Lung, Shu
2009-01-01
Modern airplane design is a multidisciplinary task combining several disciplines such as structures, aerodynamics, flight controls, and sometimes heat transfer. Historically, analytical and experimental investigations concerning the interaction of the elastic airframe with aerodynamic and inertia loads have been conducted during the design phase to determine the existence of aeroelastic instabilities, so-called flutter. With the advent and increased usage of flight control systems, there is also a likelihood of instabilities caused by the interaction of the flight control system and the aeroelastic response of the airplane, known as aeroservoelastic instabilities. An in-house code, MPASES (Ref. 1), modified from PASES (Ref. 2), is a general-purpose digital computer program for the analysis of the closed-loop stability problem. This program used subroutines from the International Mathematical and Statistical Library (IMSL) (Ref. 3) to compute all of the real and/or complex conjugate pairs of eigenvalues of the Hessenberg matrix. For high-fidelity configurations, these aeroelastic system matrices are large, and computing all eigenvalues is time-consuming. A subspace iteration method (Ref. 4) for complex eigenvalue problems with nonsymmetric matrices has been formulated and incorporated into the modified program for aeroservoelastic stability (the MPASES code). The subspace iteration method solves for only the lowest p eigenvalues and corresponding eigenvectors for aeroelastic and aeroservoelastic analysis. In general, p ranges from 10 for a wing flutter analysis to 50 for an entire-aircraft flutter analysis. The application of this newly incorporated code is an experiment known as the Aerostructures Test Wing (ATW), which was designed by the National Aeronautics and Space Administration (NASA) Dryden Flight Research Center, Edwards, California, to research aeroelastic instabilities. Specifically, this experiment was used to study an instability
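The core of subspace iteration can be sketched in a few lines; this is a simplified real-arithmetic illustration, not the MPASES implementation, and it finds the dominant eigenvalues (targeting the lowest p, as in flutter analysis, would apply the same loop to a shift-inverted operator):

```python
import numpy as np

def subspace_iteration(A, p, iters=500, seed=0):
    """Orthogonal (subspace) iteration: approximate the p dominant
    eigenvalues of A by repeated multiply-and-orthonormalize steps,
    followed by a Rayleigh-Ritz projection onto the converged subspace."""
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.standard_normal((A.shape[0], p)))
    for _ in range(iters):
        Q, _ = np.linalg.qr(A @ Q)   # power step + re-orthonormalization
    H = Q.T @ A @ Q                  # small p-by-p Rayleigh-Ritz matrix
    return np.linalg.eigvals(H)

# diagonal test matrix: the two dominant eigenvalues are 5 and 3
A = np.diag([5.0, 3.0, 1.0, 0.5])
vals = subspace_iteration(A, 2)
```

The payoff described in the abstract comes from the fact that only a p-dimensional eigenproblem is ever solved densely, however large the system matrix is.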
Zhang, Qichun; Bian, Huimin; Guo, Liwei; Zhu, Huaxu
2016-01-01
Pharmacologic preconditioning is an intriguing and emerging approach to preventing ischemia/reperfusion injury. Neuroprotection is the cardinal effect among berberine's pleiotropic actions. Here we investigated whether berberine could also act as a preconditioning stimulus to attenuate hypoxia-induced neuronal death. Male Sprague-Dawley rats subjected to middle cerebral artery occlusion (MCAO) and rat primary cortical neurons undergoing oxygen and glucose deprivation (OGD) were preconditioned with berberine (40 mg/kg for 24 h in vivo and 10(-6) mol/L for 2 h in vitro, respectively). The neurological deficits and cerebral water contents of MCAO rats were evaluated. Autophagy and apoptosis were further determined in primary neurons in vitro. Berberine preconditioning (BP) was shown to ameliorate the neurological deficits, decrease cerebral water content, and promote neurogenesis in MCAO rats. Decreased LDH release from OGD-treated neurons was observed with BP, an effect blocked by LY294002 (20 µmol/L), GSK690693 (10 µmol/L), or YC-1 (25 µmol/L). Furthermore, BP stimulated autophagy and inhibited apoptosis via modulation of the autophagy-associated proteins LC3, Beclin-1, and p62 and the apoptosis-modulating proteins caspase 3, caspase 8, caspase 9, PARP, and Bcl-2/Bax. In conclusion, berberine acts as a preconditioning stimulus that confers neuroprotection by promoting autophagy and decreasing anoxia-induced apoptosis. PMID:27158406
Gauss-Newton inspired preconditioned optimization in large deformation diffeomorphic metric mapping.
Hernandez, Monica
2014-10-21
In this work, we propose a novel preconditioned optimization method in the paradigm of Large Deformation Diffeomorphic Metric Mapping (LDDMM). The preconditioned update scheme is formulated for the non-stationary and the stationary parameterizations of diffeomorphisms, yielding three different LDDMM methods. The preconditioning matrices are inspired by the Hessian approximation used in the Gauss-Newton method. The derivatives are computed using Fréchet differentials. Thus, optimization is performed in a Sobolev space, in contrast to the optimization in L(2) commonly used in the non-rigid registration literature. The proposed LDDMM methods have been evaluated and compared with their respective implementations of gradient descent optimization. Evaluation has been performed using real and simulated images from the Non-rigid Image Registration Evaluation Project (NIREP). The experiments conducted in this work showed that our preconditioned LDDMM methods achieved performance similar or superior to the well-established gradient descent non-stationary LDDMM in the great majority of cases. Moreover, preconditioned optimization showed a substantial reduction in execution time with an affordable increase in memory usage per iteration. Additional experiments indicated that optimization using Fréchet differentials should be preferred to optimization using L(2) differentials.
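The Gauss-Newton-preconditioned update the abstract describes has the generic form x ← x - (JᵀJ + εI)⁻¹ g. As a toy least-squares illustration of that update scheme only (not the LDDMM registration code; all names are ours):

```python
import numpy as np

def gn_preconditioned_descent(J_fn, r_fn, x0, reg=1e-3, iters=50):
    """Minimize 0.5*||r(x)||^2 with steps preconditioned by the
    Gauss-Newton Hessian approximation H ~ J^T J + reg*I, analogous in
    spirit to the preconditioned LDDMM update (toy problem, not image
    registration)."""
    x = x0.astype(float).copy()
    for _ in range(iters):
        J, r = J_fn(x), r_fn(x)
        g = J.T @ r                        # gradient of the objective
        H = J.T @ J + reg * np.eye(len(x)) # Gauss-Newton preconditioner
        x -= np.linalg.solve(H, g)         # preconditioned step
    return x

# toy linear residual r(x) = A x - b, so J is constant
A = np.array([[2.0, 0.0], [1.0, 3.0], [0.0, 1.0]])
b = np.array([2.0, 7.0, 2.0])
x = gn_preconditioned_descent(lambda x: A, lambda x: A @ x - b, np.zeros(2))
```

Compared with a plain gradient step, the solve against H rescales the search direction by local curvature, which is what buys the faster convergence reported in the paper.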
NASA Astrophysics Data System (ADS)
Russakoff, Arthur; Li, Yonghui; He, Shenglai; Varga, Kalman
2016-05-01
Time-dependent Density Functional Theory (TDDFT) has become successful for its balance of economy and accuracy. However, the application of TDDFT to large systems or long time scales remains prohibitively expensive computationally. In this paper, we investigate the numerical stability and accuracy of two subspace propagation methods for solving the time-dependent Kohn-Sham equations with finite and periodic boundary conditions. The bases considered are the Lanczos basis and the adiabatic eigenbasis. The results are compared to a benchmark fourth-order Taylor expansion of the time propagator. Our results show that it is possible to use larger time steps with the subspace methods, leading to computational speedups by a factor of 2-3 over Taylor propagation. Accuracy is found to be maintained for certain energy regimes and small time scales.
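The Lanczos-subspace propagation referred to above can be sketched for a generic Hermitian Hamiltonian: build a small Krylov basis, project the propagator, and exponentiate the small tridiagonal matrix exactly. This is an illustrative sketch, not the authors' TDDFT implementation:

```python
import numpy as np

def lanczos_expm_apply(H, psi, dt, m=8):
    """Apply exp(-1j*H*dt) to psi via an m-step Lanczos (Krylov) subspace.
    H: Hermitian matrix; psi: state vector. Illustrative sketch only."""
    nrm = np.linalg.norm(psi)
    n = psi.size
    m = min(m, n)
    V = np.zeros((n, m), dtype=complex)   # Lanczos basis vectors
    T = np.zeros((m, m))                  # real tridiagonal projection of H
    V[:, 0] = psi / nrm
    k = m
    for j in range(m):
        w = H @ V[:, j]
        T[j, j] = np.vdot(V[:, j], w).real
        w = w - T[j, j] * V[:, j]
        if j > 0:
            w = w - T[j, j - 1] * V[:, j - 1]
        if j < m - 1:
            b = np.linalg.norm(w)
            if b < 1e-12:                 # invariant subspace: exact in k dims
                k = j + 1
                break
            T[j + 1, j] = T[j, j + 1] = b
            V[:, j + 1] = w / b
    # Exponentiate the small tridiagonal matrix via its eigendecomposition.
    evals, Q = np.linalg.eigh(T[:k, :k])
    small = Q @ (np.exp(-1j * evals * dt) * Q[0])   # exp(-i T dt) @ e1
    return nrm * (V[:, :k] @ small)
```

For m equal to the full dimension the result matches exact propagation; the speedup in the paper comes from m being much smaller than the grid size.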
Jankovic, Marko; Ogawa, Hidemitsu
2003-08-01
This paper presents one possible implementation of a transformation that performs a linear mapping to a lower-dimensional subspace; the principal component subspace is the one analyzed. The idea implemented in this paper is a generalization of the recently proposed infinity OH neural method for principal component extraction. The calculations in the newly proposed method are performed locally, a feature usually considered desirable from the biological point of view. Compared to some other well-known methods, the proposed synaptic efficacy learning rule requires less information about the values of the other efficacies to make a single efficacy modification. Synaptic efficacies are modified by implementing a Modulated Hebb-type (MH) learning rule. A slightly modified MH algorithm, named the Modulated Hebb-Oja (MHO) algorithm, is also introduced. The structural similarity of the proposed network to part of the retinal circuit is also presented. PMID:12964209
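The family of local Hebbian principal-component rules discussed in this entry can be illustrated with the classic Oja rule, a close relative of the MHO algorithm; this sketch is not the paper's exact rule:

```python
import numpy as np

def oja_first_pc(X, lr=0.002, epochs=50, seed=0):
    """Estimate the first principal component of zero-mean data X
    (n_samples x d) with Oja's local Hebbian rule:
        y = w @ x;  w += lr * y * (x - y * w)
    Each update uses only the presynaptic input x, the output y, and
    the weight itself -- the locality property highlighted above."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in X:
            y = w @ x
            w += lr * y * (x - y * w)   # Hebbian growth + implicit normalization
    return w / np.linalg.norm(w)
```

The `-y*w` decay term keeps the weight vector bounded, so the rule converges to the dominant eigenvector of the data covariance without any explicit normalization step.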
NASA Astrophysics Data System (ADS)
Pagnacco, E.; de Cursi, E. Souza; Sampaio, R.
2016-07-01
This study concerns the computation of frequency responses of linear stochastic mechanical systems through a modal analysis. A new strategy, based on transposing standard deterministic deflated and subspace inverse power methods into a stochastic framework, is introduced via polynomial chaos representation. The applicability and effectiveness of the proposed schemes are demonstrated through three simple application examples and one realistic application example. It is shown that null and repeated-eigenvalue situations are addressed successfully.
Ma, Zuheng; Moruzzi, Noah; Catrina, Sergiu-Bogdan; Hals, Ingrid; Oberholzer, José
2013-01-01
Objective Beta cells of pancreatic islets are susceptible to functional deficits and damage by hypoxia. Here we aimed to characterize such effects and to test pharmacological means of alleviating the negative impact of hypoxia. Methods and Design Rat and human pancreatic islets were subjected to 5.5 h of hypoxia, after which functional and viability parameters were measured subsequent to the hypoxic period and/or following a 22 h re-oxygenation period. Preconditioning with diazoxide or other agents was usually done during a 22 h period prior to hypoxia. Results Insulin contents decreased by 23% after 5.5 h of hypoxia and by 61% after a re-oxygenation period. Preconditioning with diazoxide time-dependently alleviated these hypoxia effects in rat and human islets. Hypoxia reduced proinsulin biosynthesis (3H-leucine incorporation into proinsulin) by 35%. Preconditioning counteracted this decrease by 91%. Preconditioning reduced hypoxia-induced necrosis by 40%, attenuated the lowering of proteins of mitochondrial complexes I–IV and enhanced stimulation of HIF-1-alpha and phosphorylated AMPK proteins. Preconditioning by diazoxide was abolished by co-exposure to tolbutamide or elevated potassium (i.e. conditions which increase Ca2+ inflow). Preconditioning with nifedipine, a calcium channel blocker, partly reproduced the effects of diazoxide. Both diazoxide and nifedipine moderately reduced basal glucose oxidation, whereas glucose-induced oxygen consumption (tested with diazoxide) was unaffected. Preconditioning with diazoxide enhanced insulin contents in transplants of rat islets to non-diabetic rats and lowered hyperglycemia vs. non-preconditioned islets in streptozotocin-diabetic rats. Preconditioning of human islet transplants lowered hyperglycemia in streptozotocin-diabetic nude mice. Conclusions 1) Prior blocking of Ca2+ inflow associates with lesser hypoxia-induced damage, 2) preconditioning affects basal mitochondrial metabolism and accelerates activation of hypoxia
Harada, Yuhei; Noda, Junpei; Yatabe, Rui; Ikezaki, Hidekazu; Toko, Kiyoshi
2016-01-01
A taste sensor that uses lipid/polymer membranes can evaluate aftertastes felt by humans using Change in membrane Potential caused by Adsorption (CPA) measurements. The sensor membrane for evaluating bitterness, which is caused by acidic bitter substances such as the iso-alpha acid contained in beer, needs an immersion process in monosodium glutamate (MSG) solution, called "MSG preconditioning". However, what happens to the lipid/polymer membrane during MSG preconditioning is not clear. Therefore, we carried out three experiments to investigate the changes in the lipid/polymer membrane caused by MSG preconditioning: measurements with the taste sensor, measurements of the amount of the bitterness substance adsorbed onto the membrane, and measurements of the contact angle of the membrane surface. The CPA values increased as the preconditioning process progressed, and became stable after 3 d of preconditioning. The response potentials to the reference solution showed the same tendency as the CPA value change during the preconditioning period. The contact angle of the lipid/polymer membrane surface decreased after 7 d of MSG preconditioning; in short, the surface of the lipid/polymer membrane became hydrophilic during MSG preconditioning. The amount of adsorbed iso-alpha acid increased until 5 d of preconditioning, and then it decreased. In this study, we revealed that the CPA values increased with the progress of MSG preconditioning in spite of the decrease in the amount of iso-alpha acid adsorbed onto the lipid/polymer membrane, indicating that the CPA values increase because the sensor sensitivity is improved by MSG preconditioning. PMID:26891299
Wu, Xiao; Shen, Jiong; Li, Yiguo; Lee, Kwang Y
2014-05-01
This paper develops a novel data-driven fuzzy modeling strategy and predictive controller for a boiler-turbine unit using fuzzy clustering and subspace identification (SID) methods. To deal with the nonlinear behavior of the boiler-turbine unit, fuzzy clustering is used to provide an appropriate division of the operation region and develop the structure of the fuzzy model. Then, by combining the input data with the corresponding fuzzy membership functions, the SID method is extended to extract the local state-space model parameters. Owing to the advantages of both methods, the resulting fuzzy model can represent the boiler-turbine unit very closely, and a fuzzy model predictive controller is designed based on this model. As an alternative approach, a direct data-driven fuzzy predictive control is also developed following the same clustering and subspace methods, where the intermediate subspace matrices developed during the identification procedure are utilized directly as the predictor. Simulation results show the advantages and effectiveness of the proposed approach. PMID:24559835
Xu, Y; Li, N
2014-09-01
Biological species have produced many simple but efficient rules in their complex and critical survival activities such as hunting and mating. A common feature observed in several biological motion strategies is that the predator only moves along paths in a carefully selected or iteratively refined subspace (or manifold), which might be able to explain why these motion strategies are effective. In this paper, a unified linear algebraic formulation representing such a predator-prey relationship is developed to simplify the construction and refinement process of the subspace (or manifold). Specifically, the following three motion strategies are studied and modified: motion camouflage, constant absolute target direction and local pursuit. The framework constructed based on this varying subspace concept could significantly reduce the computational cost in solving a class of nonlinear constrained optimal trajectory planning problems, particularly for the case with severe constraints. Two non-trivial examples, a ground robot and a hypersonic aircraft trajectory optimization problem, are used to show the capabilities of the algorithms in this new computational framework.
NASA Astrophysics Data System (ADS)
Ren, W. X.; Lin, Y. Q.; Fang, S. E.
2011-11-01
One of the key issues in vibration-based structural health monitoring is to extract the damage-sensitive but environment-insensitive features from sampled dynamic response measurements and to carry out the statistical analysis of these features for structural damage detection. A new damage feature is proposed in this paper by using the system matrices of the forward innovation model based on the covariance-driven stochastic subspace identification of a vibrating system. To overcome the variations of the system matrices, a non-singularity transposition matrix is introduced so that the system matrices are normalized to their standard forms. For reducing the effects of modeling errors, noise and environmental variations on measured structural responses, a statistical pattern recognition paradigm is incorporated into the proposed method. The Mahalanobis and Euclidean distance decision functions of the damage feature vector are adopted by defining a statistics-based damage index. The proposed structural damage detection method is verified against one numerical signal and two numerical beams. It is demonstrated that the proposed statistics-based damage index is sensitive to damage and shows some robustness to the noise and false estimation of the system ranks. The method is capable of locating damage of the beam structures under different types of excitations. The robustness of the proposed damage detection method to the variations in environmental temperature is further validated in a companion paper by a reinforced concrete beam tested in the laboratory and a full-scale arch bridge tested in the field.
Coherent control and entanglement in a decoherence-free subspace of two multi-level atoms
NASA Astrophysics Data System (ADS)
Kiffner, Martin; Evers, Jörg; Keitel, Christoph H.
2007-06-01
Decoherence-free subspaces (DFS) in a system of two dipole-dipole interacting multi-level atoms are investigated theoretically. The ground state of each atom is an S0 singlet state, and the excited state multiplet is a P1 triplet. Since we consider arbitrary geometrical alignments of the atoms, all Zeeman sublevels of the atomic multiplets have to be taken into account [1]. It is shown that the collective state space of the two dipole-dipole interacting four-level atoms contains a four-dimensional DFS [2]. We describe a method that allows the antisymmetric states of the DFS to be populated by means of a laser field. These antisymmetric states are identified as long-lived entangled states. Further, we show that any single-qubit operation between two states of the DFS can be induced by means of a microwave field. Typical operation times of these qubit rotations can be significantly shorter than for a nuclear spin system. [1] M. Kiffner, J. Evers, and C. H. Keitel, arXiv:quant-ph/0611071. [2] M. Kiffner, J. Evers, and C. H. Keitel, Phys. Rev. A, in print (arXiv:quant-ph/0611084).
A unified classifier for robust face recognition based on combining multiple subspace algorithms
NASA Astrophysics Data System (ADS)
Ijaz Bajwa, Usama; Ahmad Taj, Imtiaz; Waqas Anwar, Muhammad
2012-10-01
Face recognition, the fastest growing biometric technology, has expanded manifold in the last few years. Various new algorithms and commercial systems have been proposed and developed. However, none of the proposed or developed algorithms is a complete solution, because an algorithm may work very well on one set of images with, say, illumination changes, but may not work properly on another set of image variations, such as expression variations. This study is motivated by the fact that no single classifier can claim generally better performance against all facial image variations. To overcome this shortcoming and achieve generality, combining several classifiers using various strategies has been studied extensively, also incorporating the question of the suitability of any classifier for this task. The study is based on the outcome of a comprehensive comparative analysis conducted on a combination of six subspace extraction algorithms and four distance metrics on three facial databases. The analysis leads to the selection of the most suitable classifiers, each of which performs better on one task or the other. These classifiers are then combined into an ensemble classifier by two different strategies, weighted sum and re-ranking. The results of the ensemble classifier show that these strategies can be effectively used to construct a single classifier that can successfully handle varying facial image conditions of illumination, aging and facial expressions.
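The weighted-sum fusion strategy mentioned in this entry can be sketched minimally; the score matrices, weights, and normalization below are hypothetical illustrations, not the study's actual classifiers:

```python
import numpy as np

def weighted_sum_fusion(score_mats, weights):
    """Combine per-classifier similarity score matrices (each gallery x probe)
    by a weighted sum. Scores are min-max normalized per classifier so that
    classifiers with different score ranges are comparable; the fused matrix
    is decided by argmax over gallery identities for each probe."""
    fused = sum(w * (S - S.min()) / (S.max() - S.min() + 1e-12)
                for w, S in zip(weights, score_mats))
    return fused.argmax(axis=0)
```

With two toy 3-identity score matrices, both classifiers favor gallery 0 for probe 0 and gallery 1 for probe 1, and the fused decision agrees.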
Discriminative Transfer Subspace Learning via Low-Rank and Sparse Representation.
Xu, Yong; Fang, Xiaozhao; Wu, Jian; Li, Xuelong; Zhang, David
2016-02-01
In this paper, we address the problem of unsupervised domain transfer learning in which no labels are available in the target domain. We use a transformation matrix to transfer both the source and target data to a common subspace, where each target sample can be represented by a combination of source samples such that the samples from different domains can be well interlaced. In this way, the discrepancy of the source and target domains is reduced. By imposing joint low-rank and sparse constraints on the reconstruction coefficient matrix, the global and local structures of data can be preserved. To enlarge the margins between different classes as much as possible and provide more freedom to diminish the discrepancy, a flexible linear classifier (projection) is obtained by learning a non-negative label relaxation matrix that allows the strict binary label matrix to relax into a slack variable matrix. Our method can avoid a potentially negative transfer by using a sparse matrix to model the noise and, thus, is more robust to different types of noise. We formulate our problem as a constrained low-rankness and sparsity minimization problem and solve it by the inexact augmented Lagrange multiplier method. Extensive experiments on various visual domain adaptation tasks show the superiority of the proposed method over the state-of-the-art methods. The MATLAB code of our method will be publicly available at http://www.yongxu.org/lunwen.html. PMID:26701675
Data processing in subspace identification and modal parameter identification of an arch bridge
NASA Astrophysics Data System (ADS)
Fan, Jiangling; Zhang, Zhiyi; Hua, Hongxing
2007-05-01
A data-processing method concerning subspace identification is presented to improve the identification of modal parameters from measured response data only. The identification procedure of this method consists of two phases, first estimating frequencies and damping ratios and then extracting mode shapes. Elements of Hankel matrices are specially rearranged to enhance the identifiability of weak characteristics and the robustness to noise contamination. Furthermore, an alternative stabilisation diagram in combination with component energy index is adopted to effectively separate spurious and physical modes. On the basis of identified frequencies, mode shapes are extracted from the signals obtained by filtering measured data with a series of band-pass filters. The proposed method was tested with a concrete-filled steel tubular arch bridge, which was subjected to ambient excitation. Gabor representation was also employed to process measured signals before conducting parameter identification. Identified results show that the proposed method can give a reliable separation of spurious and physical modes as well as accurate estimates of weak modes only from response signals.
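The covariance-driven SSI procedure underlying the entry above can be sketched in its simplest form: estimate output covariances, stack them in a Hankel-type matrix, take an SVD, and read frequencies off the eigenvalues of the recovered state matrix. This is a single-output illustration of the general idea, not the paper's Hankel rearrangement or band-pass scheme:

```python
import numpy as np

def ssi_cov_frequencies(y, dt, order, i=20):
    """Covariance-driven stochastic subspace identification (sketch).
    y: 1-D output signal; dt: sample period; order: model order.
    Returns identified natural frequencies in Hz (conjugate pairs repeat)."""
    N = len(y)
    # Output covariance estimates R_k up to lag 2*i
    R = np.array([y[:N - k] @ y[k:] / (N - k) for k in range(2 * i + 1)])
    # Hankel matrix of covariances (scalar-output case)
    H = np.array([[R[p + q + 1] for q in range(i)] for p in range(i)])
    U, S, Vt = np.linalg.svd(H)
    O = U[:, :order] * np.sqrt(S[:order])      # observability matrix estimate
    A = np.linalg.pinv(O[:-1]) @ O[1:]         # shift invariance -> state matrix
    mu = np.linalg.eigvals(A)                  # discrete-time poles
    lam = np.log(mu) / dt                      # continuous-time poles
    f = np.abs(lam.imag) / (2 * np.pi)
    return np.sort(f[f > 1e-6])
```

Real applications add the stabilization-diagram and spurious-mode separation steps the paper discusses; the sketch recovers the frequencies of a clean two-mode signal.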
Hunter, David W.; Hibbard, Paul B.
2016-01-01
An influential theory of mammalian vision, known as the efficient coding hypothesis, holds that early stages in the visual cortex attempt to form an efficient coding of ecologically valid stimuli. Although numerous authors have successfully modelled some aspects of early vision mathematically, closer inspection has found substantial discrepancies between the predictions of some of these models and observations of neurons in the visual cortex. In particular, analysis of linear-non-linear models of simple cells using Independent Component Analysis has found a strong bias towards features on the horopter. In order to investigate the link between the information content of binocular images, mathematical models of complex cells and physiological recordings, we applied Independent Subspace Analysis to binocular image patches in order to learn a set of complex-cell-like models. We found that these complex-cell-like models exhibited a wide range of binocular disparity-discriminability, although only a minority exhibited high binocular discrimination scores. However, in common with the linear-non-linear model case, we found that feature detection was limited to the horopter, suggesting that current mathematical models are limited in their ability to explain the functionality of the visual cortex. PMID:26982184
A note on the optimality of decomposable entanglement witnesses and completely entangled subspaces
NASA Astrophysics Data System (ADS)
Augusiak, R.; Tura, J.; Lewenstein, M.
2011-05-01
Entanglement witnesses (EWs) constitute one of the most important entanglement detectors in quantum systems. Nevertheless, their complete characterization, in particular with respect to the notion of optimality, is still missing, even in the decomposable case. Here we show that for any qubit-qunit decomposable EW (DEW) W, the three statements are equivalent: (i) the set of product vectors obeying ⟨e, f|W|e, f⟩ = 0 spans the corresponding Hilbert space, (ii) W is optimal, and (iii) W = QΓ, with Q denoting a positive operator supported on a completely entangled subspace (CES) and Γ standing for the partial transposition. While the implications (i)⇒(ii) and (ii)⇒(iii) are known, here we prove that (iii) implies (i). This is a consequence of a more general fact saying that product vectors orthogonal to any CES in ℂ²⊗ℂⁿ span, after partial conjugation, the whole space. On the other hand, already in the case of the ℂ³⊗ℂ³ Hilbert space, there exist DEWs for which (iii) does not imply (i). Consequently, either (i) does not imply (ii) or (ii) does not imply (iii), and the above transparent characterization, obeyed by qubit-qunit DEWs, does not hold in general.
NASA Astrophysics Data System (ADS)
Cherng, An-Pan
2003-03-01
Placing vibration sensors at appropriate locations plays an important role in experimental modal analysis. It is known that maximising the determinant of the Fisher information matrix (FIM) yields an optimal configuration of sensors from a set of candidate locations. Some methods have already been proposed in the literature, such as maximising the determinant of the diagonal elements of the mode shape correlation matrix, ranking sensor contributions by Hankel singular values (HSVs), and using perturbation theory to achieve minimum variance of estimation. The objectives of this work were to systematically analyse existing methods and to propose methods that either improve their performance or accelerate the searching process for modal parameter identification. The approach used in this article is based on the analytical formulation of the singular value decomposition (SVD) of a candidate-blocked Hankel matrix using signal subspace correlation (SSC) techniques developed earlier by the author. The SSC accounts for factors that contribute to the estimated results, such as mode shapes, damping ratios, sampling rate and matrix size (or number of data used). With the aid of SSC, it is shown that using information from mode shapes and using singular values are equivalent under certain conditions. The results of this work are not only consistent with those of existing methods, but also demonstrate a more general viewpoint on the optimisation problem. Consequently, insight into the sensor placement problem is clearly conveyed. Finally, two modified methods that inherit the merits of existing methods are proposed, and their effectiveness is demonstrated by numerical examples.
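Maximising the FIM determinant over candidate locations, as this entry describes, is commonly approximated greedily. A minimal sketch of that det-FIM heuristic follows (illustrative only, not the author's SSC-based method):

```python
import numpy as np

def greedy_fim_placement(Phi, n_sensors, ridge=1e-9):
    """Greedily select sensor rows of the mode-shape matrix Phi
    (n_dof x n_modes) to maximize det of the Fisher information matrix
    Phi_s^T Phi_s built from the selected rows. A small ridge keeps the
    early, rank-deficient determinants comparable."""
    chosen = []
    for _ in range(n_sensors):
        remaining = [j for j in range(Phi.shape[0]) if j not in chosen]

        def score(j):
            P = Phi[chosen + [j]]
            return np.linalg.det(P.T @ P + ridge * np.eye(Phi.shape[1]))

        chosen.append(max(remaining, key=score))
    return chosen
```

On a toy mode-shape matrix where two rows align cleanly with the two modes, the greedy rule picks exactly those rows and beats a naive choice of the first two candidate locations.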
Characterizing two-timescale nonlinear dynamics using finite-time Lyapunov exponents and subspaces
NASA Astrophysics Data System (ADS)
Mease, K. D.; Topcu, U.; Aykutluğ, E.; Maggia, M.
2016-07-01
Finite-time Lyapunov exponents and subspaces are used to define and diagnose boundary-layer type, two-timescale behavior in the tangent linear dynamics and to determine the associated manifold structure in the flow of a finite-dimensional nonlinear autonomous dynamical system. Two-timescale behavior is characterized by a slow-fast splitting of the tangent bundle for a state space region. The slow-fast splitting is defined using finite-time Lyapunov exponents and vectors, guided by the asymptotic theory of partially hyperbolic sets, with important modifications for the finite-time case; for example, finite-time Lyapunov analysis relies more heavily on the Lyapunov vectors due to their relatively fast convergence compared to that of the corresponding exponents. The splitting is used to characterize and locate points approximately on normally hyperbolic center manifolds via tangency conditions for the vector field. Determining manifolds from tangent bundle structure is more generally applicable than approaches, such as the singular perturbation method, that require special normal forms or other a priori knowledge. The use, features, and accuracy of the approach are illustrated via several detailed examples.
Robust multipixel matched subspace detection with signal-dependent background power
NASA Astrophysics Data System (ADS)
Golikov, Victor; Rodriguez-Blanco, Marco; Lebedeva, Olga
2016-01-01
A modified matched subspace detector (MSD) has been recently proposed for detecting a barely discernible object in an additive Gaussian background clutter using a single pixel in a sequence of digital images. In contrast to this detector designed for the subpixel object, we developed a generalized likelihood ratio approach to the detection of a multipixel object of unknown shape, size, and position in an additive signal-dependent Gaussian background and noise. The proposed detector modifies the MSD by adding the additional term proportional to the square of the difference between the background variances under two statistical hypotheses. The performances of these detectors are evaluated for the example scenario of two multipixel floating objects on the agitated sea surface. The crucial characteristic of the proposed detector is that prior knowledge of the target size, shape, and position is not required. Computer simulation and experimental results have shown that the proposed detector outperforms the MSD, especially in the case of weak and poorly contrasted objects of unknown shape, size, and position.
Subspace Compressive GLRT Detector for MIMO Radar in the Presence of Clutter
Bolisetti, Siva Karteek; Patwary, Mohammad; Ahmed, Khawza; Soliman, Abdel-Hamid; Abdel-Maguid, Mohamed
2015-01-01
The problem of optimising the target detection performance of MIMO radar in the presence of clutter is considered. The increased false alarm rate which is a consequence of the presence of clutter returns is known to seriously degrade the target detection performance of the radar target detector, especially under low SNR conditions. In this paper, a mathematical model is proposed to optimise the target detection performance of a MIMO radar detector in the presence of clutter. The number of samples that are required to be processed by a radar target detector regulates the amount of processing burden while achieving a given detection reliability. While Subspace Compressive GLRT (SSC-GLRT) detector is known to give optimised radar target detection performance with reduced computational complexity, it however suffers a significant deterioration in target detection performance in the presence of clutter. In this paper we provide evidence that the proposed mathematical model for SSC-GLRT detector outperforms the existing detectors in the presence of clutter. The performance analysis of the existing detectors and the proposed SSC-GLRT detector for MIMO radar in the presence of clutter are provided in this paper. PMID:26495422
Vrugt, Jasper A; Hyman, James M; Robinson, Bruce A; Higdon, Dave; Ter Braak, Cajo J F; Diks, Cees G H
2008-01-01
Markov chain Monte Carlo (MCMC) methods have found widespread use in many fields of study to estimate the average properties of complex systems, and for posterior inference in a Bayesian framework. Existing theory and experiments prove convergence of well-constructed MCMC schemes to the appropriate limiting distribution under a variety of different conditions. In practice, however, this convergence is often observed to be disturbingly slow. This is frequently caused by an inappropriate selection of the proposal distribution used to generate trial moves in the Markov chain. Here we show that significant improvements to the efficiency of MCMC simulation can be made by using a self-adaptive Differential Evolution learning strategy within a population-based evolutionary framework. This scheme, entitled DiffeRential Evolution Adaptive Metropolis or DREAM, runs multiple different chains simultaneously for global exploration, and automatically tunes the scale and orientation of the proposal distribution in randomized subspaces during the search. Ergodicity of the algorithm is proved, and various examples involving nonlinearity, high dimensionality, and multimodality show that DREAM is generally superior to other adaptive MCMC sampling approaches. The DREAM scheme significantly enhances the applicability of MCMC simulation to complex, multi-modal search problems.
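The differential-evolution proposal at the heart of DREAM can be sketched in its simplest population form (one DE-MC sweep, without DREAM's randomized-subspace crossover or self-adaptation):

```python
import numpy as np

def de_mc_sweep(chains, log_post, gamma=None, eps=1e-6, rng=None):
    """One sweep of Differential Evolution Markov Chain sampling: each
    chain proposes a jump along the difference of two other randomly
    chosen chains, accepted by a Metropolis rule. chains: (N, d) array;
    log_post maps a length-d state to its log posterior density."""
    if rng is None:
        rng = np.random.default_rng()
    chains = chains.copy()
    N, d = chains.shape
    if gamma is None:
        gamma = 2.38 / np.sqrt(2 * d)   # standard DE-MC jump scale
    for i in range(N):
        # two distinct other chains form the difference vector
        r1, r2 = rng.choice([j for j in range(N) if j != i], size=2, replace=False)
        prop = chains[i] + gamma * (chains[r1] - chains[r2]) + eps * rng.standard_normal(d)
        # Metropolis accept/reject
        if np.log(rng.random()) < log_post(prop) - log_post(chains[i]):
            chains[i] = prop
    return chains
```

Running repeated sweeps on a standard normal target shows the population settling into the correct mean and spread, which is the behavior DREAM accelerates on harder posteriors.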
Zeng, Hong; Song, Aiguo; Yan, Ruqiang; Qin, Hongyun
2013-01-01
Ocular contamination of EEG data is an important and very common problem in the diagnosis of neurobiological events. An effective approach is proposed in this paper to remove ocular artifacts from the raw EEG recording. First, it conducts blind source separation on the raw EEG recording by stationary subspace analysis, which can concentrate artifacts in fewer components than representative blind source separation methods. Next, to recover the neural information that has leaked into the artifactual components, the adaptive signal decomposition technique EMD is applied to denoise the components. Finally, the artifact-only components are projected back and subtracted from the EEG signals to obtain the clean EEG data. The experimental results on both artificially contaminated EEG data and publicly available real EEG data have demonstrated the effectiveness of the proposed method, in particular for cases where a limited number of electrodes is used for the recording, as well as when the artifact-contaminated signal is highly non-stationary and the underlying sources cannot be assumed to be independent or uncorrelated. PMID:24189330
Removal of EOG Artifacts from EEG Recordings Using Stationary Subspace Analysis
Zeng, Hong; Song, Aiguo
2014-01-01
An effective approach is proposed in this paper to remove ocular artifacts from the raw EEG recording. The proposed approach first conducts the blind source separation on the raw EEG recording by the stationary subspace analysis (SSA) algorithm. Unlike the classic blind source separation algorithms, SSA is explicitly tailored to the understanding of distribution changes, where both the mean and the covariance matrix are taken into account. In addition, neither independence nor uncorrelatedness is required among the sources by SSA. Thereby, it can concentrate artifacts in fewer components than the representative blind source separation methods. Next, the components that are determined to be related to the ocular artifacts are projected back to be subtracted from EEG signals, producing the clean EEG data eventually. The experimental results on both the artificially contaminated EEG data and real EEG data have demonstrated the effectiveness of the proposed method, in particular for the cases where a limited number of electrodes is used for the recording, as well as when the artifact contaminated signal is highly nonstationary and the underlying sources cannot be assumed to be independent or uncorrelated. PMID:24550696
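The project-back-and-subtract step shared by both SSA papers above can be sketched in a few lines, assuming a mixing matrix has already been obtained from a source-separation step such as SSA (the separation itself is not implemented here, and all names are illustrative):

```python
import numpy as np

def remove_artifact_components(X, A, artifact_idx):
    """Subtract artifact-only components projected back to sensor space.

    X: (channels, samples) raw EEG; A: (channels, components) mixing matrix
    from a prior source-separation step (e.g. SSA); artifact_idx: indices of
    the components judged to be ocular artifacts.
    """
    S = np.linalg.pinv(A) @ X          # estimated source activations
    artifact = np.zeros_like(S)
    artifact[artifact_idx] = S[artifact_idx]  # keep only artifact sources
    return X - A @ artifact            # cleaned EEG in sensor space
```

Keeping the non-artifact components untouched is what preserves the underlying neural signal.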
Cho, Soojin; Park, Jong-Woong; Sim, Sung-Han
2015-01-01
Wireless sensor networks (WSNs) facilitate a new paradigm to structural identification and monitoring for civil infrastructure. Conventional structural monitoring systems based on wired sensors and centralized data acquisition systems are costly for installation as well as maintenance. WSNs have emerged as a technology that can overcome such difficulties, making deployment of a dense array of sensors on large civil structures both feasible and economical. However, as opposed to wired sensor networks in which centralized data acquisition and processing is common practice, WSNs require decentralized computing algorithms to reduce data transmission due to the limitation associated with wireless communication. In this paper, the stochastic subspace identification (SSI) technique is selected for system identification, and SSI-based decentralized system identification (SDSI) is proposed to be implemented in a WSN composed of Imote2 wireless sensors that measure acceleration. The SDSI is tightly scheduled in the hierarchical WSN, and its performance is experimentally verified in a laboratory test using a 5-story shear building model. PMID:25856325
Subspace mapping of the three-dimensional spectral receptive field of macaque MT neurons.
Inagaki, Mikio; Sasaki, Kota S; Hashimoto, Hajime; Ohzawa, Izumi
2016-08-01
Neurons in the middle temporal (MT) visual area are thought to represent the velocity (direction and speed) of motion. Previous studies suggest the importance of both excitation and suppression for creating velocity representation in MT; however, details of the organization of excitation and suppression at the MT stage are not understood fully. In this article, we examine how excitatory and suppressive inputs are pooled in individual MT neurons by measuring their receptive fields in a three-dimensional (3-D) spatiotemporal frequency domain. We recorded the activity of single MT neurons from anesthetized macaque monkeys. To achieve both quality and resolution of the receptive field estimations, we applied a subspace reverse correlation technique in which a stimulus sequence of superimposed multiple drifting gratings was cross-correlated with the spiking activity of neurons. Excitatory responses tended to be organized in a manner representing a specific velocity independent of the spatial pattern of the stimuli. Conversely, suppressive responses tended to be distributed broadly over the 3-D frequency domain, supporting a hypothesis of response normalization. Despite the nonspecific distributed profile, the total summed strength of suppression was comparable to that of excitation in many MT neurons. Furthermore, suppressive responses reduced the bandwidth of velocity tuning, indicating that suppression improves the reliability of velocity representation. Our results suggest that both well-organized excitatory inputs and broad suppressive inputs contribute significantly to the invariant and reliable representation of velocity in MT. PMID:27193321
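The cross-correlation at the heart of a reverse-correlation receptive-field estimate can be illustrated with a generic spike-triggered average over stimulus features. This is a simplified stand-in for the subspace reverse-correlation technique described above (which works with superimposed drifting gratings in a 3-D frequency domain); the function and variable names are assumptions for illustration.

```python
import numpy as np

def spike_triggered_average(stimuli, spike_counts, lag):
    """Estimate a receptive field by spike-triggered averaging.

    stimuli: (frames, features) stimulus sequence (e.g. grating coefficients);
    spike_counts: (frames,) spikes observed per frame. The stimulus is
    cross-correlated with the response at a fixed lag: frames are weighted
    by the spikes they evoked `lag` frames later, then averaged.
    """
    n = len(spike_counts)
    s = stimuli[: n - lag] if lag else stimuli
    r = spike_counts[lag:]
    return (r[:, None] * s).sum(axis=0) / r.sum()
```

With a neuron that fires one frame after a specific feature appears, the average recovers that feature.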
Human detection by quadratic classification on subspace of extended histogram of gradients.
Satpathy, Amit; Jiang, Xudong; Eng, How-Lung
2014-01-01
This paper proposes a quadratic classification approach on the subspace of Extended Histogram of Gradients (ExHoG) for human detection. By investigating the limitations of Histogram of Gradients (HG) and Histogram of Oriented Gradients (HOG), ExHoG is proposed as a new feature for human detection. ExHoG alleviates the problem of discrimination between a dark object against a bright background and vice versa inherent in HG. It also resolves an issue of HOG whereby gradients of opposite directions in the same cell are mapped into the same histogram bin. We reduce the dimensionality of ExHoG using Asymmetric Principal Component Analysis (APCA) for improved quadratic classification. APCA also addresses the asymmetry issue in training sets of human detection, where there are far fewer human samples than non-human samples. Our proposed approach is tested on three established benchmarking data sets (INRIA, Caltech, and Daimler) using a modified Minimum Mahalanobis distance classifier. Results indicate that the proposed approach outperforms current state-of-the-art human detection methods. PMID:23708804
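For reference, the standard (unmodified) minimum Mahalanobis-distance rule that the classifier above builds on can be sketched as follows; the paper's modified variant operating on the APCA subspace is not reproduced here, and all names are illustrative.

```python
import numpy as np

def mahalanobis_classify(x, means, covs):
    """Assign x to the class with the smallest Mahalanobis distance.

    Each class contributes (x - mu)^T C^{-1} (x - mu); this is the quadratic
    decision rule that makes class covariance, not just distance, matter.
    """
    dists = []
    for mu, cov in zip(means, covs):
        d = x - mu
        dists.append(d @ np.linalg.solve(cov, d))  # avoids explicit inverse
    return int(np.argmin(dists))
```

With identity covariances this reduces to nearest-mean classification; differing covariances bend the decision boundary into a quadratic surface.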
Ma, Junshui; Bayram, Sevinç; Tao, Peining; Svetnik, Vladimir
2011-03-15
After a review of the ocular artifact reduction literature, a high-throughput method designed to reduce the ocular artifacts in multichannel continuous EEG recordings acquired at clinical EEG laboratories worldwide is proposed. The proposed method belongs to the category of component-based methods, and does not rely on any electrooculography (EOG) signals. Based on a concept that all ocular artifact components exist in a signal component subspace, the method can uniformly handle all types of ocular artifacts, including eye-blinks, saccades, and other eye movements, by automatically identifying ocular components from decomposed signal components. This study also proposes an improved strategy to objectively and quantitatively evaluate artifact reduction methods. The evaluation strategy uses real EEG signals to synthesize realistic simulated datasets with different amounts of ocular artifacts. The simulated datasets enable us to objectively demonstrate that the proposed method outperforms some existing methods when no high-quality EOG signals are available. Moreover, the results of the simulated datasets improve our understanding of the involved signal decomposition algorithms, and provide us with insights into the inconsistency regarding the performance of different methods in the literature. The proposed method was also applied to two independent clinical EEG datasets involving 28 volunteers and over 1000 EEG recordings. This effort further confirms that the proposed method can effectively reduce ocular artifacts in large clinical EEG datasets in a high-throughput fashion.
NASA Astrophysics Data System (ADS)
Asavaskulkiet, Krissada
2014-01-01
This paper proposes a novel face super-resolution reconstruction (hallucination) technique for the YCbCr color space. The underlying idea is to learn with an error regression model combined with multi-linear principal component analysis (MPCA). Within the hallucination framework, color face images are represented in YCbCr space, and to reduce the time complexity of color face hallucination they are naturally described as tensors (multi-linear arrays). Error regression analysis is used to obtain an error estimate from the existing low-resolution (LR) images in tensor space. The learning stage derives this estimate from the errors made when reconstructing the training face images with MPCA, and regression analysis then captures the relationship between input and error. The hallucination stage applies the usual MPCA back-projection, after which the result is corrected with the estimated error. We show that the proposed technique is suitable for color face images in both RGB and YCbCr space. By combining the MPCA subspace with the error regression model, photorealistic color face images can be generated. The approach is demonstrated by extensive experiments producing high-quality hallucinated color faces; comparison with existing algorithms shows the effectiveness of the proposed method.
NASA Astrophysics Data System (ADS)
Song, Xue-Ke; Zhang, Hao; Ai, Qing; Qiu, Jing; Deng, Fu-Guo
2016-02-01
By using the transitionless quantum driving algorithm (TQDA), we present an efficient scheme for shortcuts to holonomic quantum computation (HQC). It works in a decoherence-free subspace (DFS) and the adiabatic process can be sped up in the shortest possible time. More interestingly, we give a physical implementation of our shortcuts to HQC with nitrogen-vacancy centers in diamonds dispersively coupled to a whispering-gallery-mode microsphere cavity. It can be efficiently realized by appropriately controlling the frequencies of the external laser pulses. Also, our scheme has good scalability with more qubits. Different from previous works, we first use TQDA to realize universal HQC in a DFS, including not only two noncommuting accelerated single-qubit holonomic gates but also an accelerated two-qubit holonomic controlled-phase gate, which provides the necessary shortcuts for the complete set of gates required for universal quantum computation. Moreover, our experimentally realizable shortcuts require only two-body interactions, not four-body ones, and they work in the dispersive regime, which greatly relaxes the difficulty of their physical implementation in experiment. Our numerical calculations show that the present scheme is robust against decoherence with current experimental parameters.
NASA Astrophysics Data System (ADS)
Reynders, Edwin; Maes, Kristof; Lombaert, Geert; De Roeck, Guido
2016-01-01
Identified modal characteristics are often used as a basis for the calibration and validation of dynamic structural models, for structural control, for structural health monitoring, etc. It is therefore important to know their accuracy. In this article, a method for estimating the (co)variance of modal characteristics that are identified with the stochastic subspace identification method is validated for two civil engineering structures. The first structure is a damaged prestressed concrete bridge for which acceleration and dynamic strain data were measured in 36 different setups. The second structure is a mid-rise building for which acceleration data were measured in 10 different setups. There is a good quantitative agreement between the predicted levels of uncertainty and the observed variability of the eigenfrequencies and damping ratios between the different setups. The method can therefore be used with confidence for quantifying the uncertainty of the identified modal characteristics, also when some or all of them are estimated from a single batch of vibration data. Furthermore, the method is seen to yield valuable insight into the variability of the estimation accuracy from mode to mode and from setup to setup: the more informative a setup is regarding an estimated modal characteristic, the smaller is the estimated variance.
Wide-field fluorescence molecular tomography with compressive sensing based preconditioning
Yao, Ruoyang; Pian, Qi; Intes, Xavier
2015-01-01
Wide-field optical tomography based on structured light illumination and detection strategies enables efficient tomographic imaging of large tissues at very fast acquisition speeds. However, the optical inverse problem based on such an instrumental approach is still ill-conditioned. Herein, we investigate the benefit of employing compressive sensing-based preconditioning to wide-field structured illumination and detection approaches. We assess the performances of Fluorescence Molecular Tomography (FMT) when using such preconditioning methods both in silico and with experimental data. Additionally, we demonstrate that such methodology could be used to select the subset of patterns that provides optimal reconstruction performances. Lastly, we compare preconditioning data collected using a normal base that offers good experimental SNR against that directly acquired with an optimally designed base. An experimental phantom study is provided to validate the proposed technique. PMID:26713202
Efficient Multi-Stage Time Marching for Viscous Flows via Local Preconditioning
NASA Technical Reports Server (NTRS)
Kleb, William L.; Wood, William A.; vanLeer, Bram
1999-01-01
A new method has been developed to accelerate the convergence of explicit time-marching, laminar, Navier-Stokes codes through the combination of local preconditioning and multi-stage time marching optimization. Local preconditioning is a technique to modify the time-dependent equations so that all information moves or decays at nearly the same rate, thus relieving the stiffness for a system of equations. Multi-stage time marching can be optimized by modifying its coefficients to account for the presence of viscous terms, allowing larger time steps. We show it is possible to optimize the time marching scheme for a wide range of cell Reynolds numbers for the scalar advection-diffusion equation, and local preconditioning allows this optimization to be applied to the Navier-Stokes equations. Convergence acceleration of the new method is demonstrated through numerical experiments with circular advection and laminar boundary-layer flow over a flat plate.
Heyser, C J; Chen, W J; Miller, J; Spear, N E; Spear, L P
1990-12-01
Offspring derived from Sprague-Dawley dams that received daily subcutaneous injection of 40 mg/kg.3 cc-1 cocaine hydrochloride (C40) or saline (LC) from Gestational Days 8-20 were tested for first-order Pavlovian conditioning and sensory preconditioning at Postnatal Days 8 (P8), P12, and P21. Although C40 dams gained significantly less weight than LC dams, pup body weights did not differ between the two groups. Significant sensory preconditioning was obtained at P8 and P12 (but not at P21) in LC offspring, confirming previous reports of decline in performance in this task during ontogeny. In contrast, C40 offspring failed to exhibit sensory preconditioning at any test age. In addition, C40 pups tested at P8 did not display significant first-order conditioning. Taken together these results suggest a more general deficit in cognitive functioning rather than a delay in cognitive development in prenatally cocaine-exposed offspring.
Maslov, L N; Lishmanov, Iu B; Kolar, F; Portnichenko, A G; Podoksenov, Iu K; Khaliulin, I G; Wang, H; Pei, J M
2010-12-01
The work covers the problem of hypoxic preconditioning (HP) carried out in isolated cardiomyocytes. Papers on delayed HP in vivo are comparatively few, and only a few single works are devoted to early preconditioning in vivo. It has been established that HP limits necrosis and apoptosis of cardiomyocytes and improves contractility of the isolated heart after ischemia (hypoxia) and reperfusion (reoxygenation). It was found that adenosine was a trigger of HP in vitro. It was proved that NO was a trigger of HP both in vitro and in vivo. It was shown that reactive oxygen species also were triggers of hypoxic preconditioning, and that ERK1/2 and p38 kinase played an important role in delayed HP in vitro. PMID:21473105
Preconditioning for the Navier-Stokes equations with finite-rate chemistry
NASA Technical Reports Server (NTRS)
Godfrey, Andrew G.; Walters, Robert W.; Van Leer, Bram
1993-01-01
The preconditioning procedure for generalized finite-rate chemistry and the proper preconditioning for the one-dimensional Navier-Stokes equations are presented. Eigenvalue stiffness is resolved and convergence-rate acceleration is demonstrated over the entire Mach-number range from the incompressible to the hypersonic. Specific benefits are realized at low and transonic flow speeds. The extended preconditioning matrix accounts for thermal and chemical non-equilibrium and its implementation is explained for both explicit and implicit time marching. The effect of higher-order spatial accuracy and various flux splittings is investigated. Numerical analysis reveals the possible theoretical improvements from using preconditioning at all Mach numbers. Numerical results confirm the expectations from the numerical analysis. Representative test cases include flows with previously troublesome embedded high-condition-number regions.
Analysis of a Lipid/Polymer Membrane for Bitterness Sensing with a Preconditioning Process
Yatabe, Rui; Noda, Junpei; Tahara, Yusuke; Naito, Yoshinobu; Ikezaki, Hidekazu; Toko, Kiyoshi
2015-01-01
It is possible to evaluate the taste of foods or medicines using a taste sensor. The taste sensor converts information on taste into an electrical signal using several lipid/polymer membranes. A lipid/polymer membrane for bitterness sensing can evaluate aftertaste after immersion in monosodium glutamate (MSG), which is called “preconditioning”. However, we have not yet analyzed the change in the surface structure of the membrane as a result of preconditioning. Thus, we analyzed the change in the surface by performing contact angle and surface zeta potential measurements, Fourier transform infrared spectroscopy (FTIR), X-ray photon spectroscopy (XPS) and gas cluster ion beam time-of-flight secondary ion mass spectrometry (GCIB-TOF-SIMS). After preconditioning, the concentrations of MSG and tetradodecylammonium bromide (TDAB) contained in the lipid membrane were found to be higher in the surface region than in the bulk region. The effect of preconditioning was revealed by the above analysis methods. PMID:26404301
Nitroglycerine and sodium trioxodinitrate: from the discovery to the preconditioning effect.
Pagliaro, Pasquale; Gattullo, Donatella; Penna, Claudia
2013-10-01
The history began in the 19th century with Ascanio Sobrero (1812-1888), the discoverer of glycerol trinitrate (nitroglycerine, NTG), and with Angelo Angeli (1864-1931), the discoverer of sodium trioxodinitrate (Angeli's salt). It is likely that Angeli and Sobrero never met, but their two histories will join each other more than a century later. In fact, it has been discovered that both NTG and Angeli's salt are able to induce a preconditioning effect. As NTG has a long history as an antianginal drug its newly discovered property as a preconditioning agent has also been tested in humans. Angeli's salt properties as a preconditioning and inotropic agent have only been tested in animals so far.
Minocycline-Preconditioned Neural Stem Cells Enhance Neuroprotection after Ischemic Stroke in Rats
Sakata, Hiroyuki; Niizuma, Kuniyasu; Yoshioka, Hideyuki; Kim, Gab Seok; Jung, Joo Eun; Katsu, Masataka; Narasimhan, Purnima; Maier, Carolina M.; Nishiyama, Yasuhiro; Chan, Pak H.
2012-01-01
Transplantation of neural stem cells (NSCs) offers a novel therapeutic strategy for stroke; however, massive grafted-cell death following transplantation, possibly due to a hostile host-brain environment, lessens the effectiveness of this approach. Here, we have investigated whether reprogramming NSCs with minocycline, a broadly-used antibiotic also known to possess cytoprotective properties, enhances survival of grafted cells and promotes neuroprotection in ischemic stroke. NSCs harvested from the subventricular zone of fetal rats were preconditioned with minocycline in vitro and transplanted into rat brains 6 h after transient middle cerebral artery occlusion. Histological and behavioral tests were examined from days 0–28 after stroke. For in vitro experiments, NSCs were subjected to oxygen-glucose deprivation and reoxygenation. Cell viability and antioxidant gene expression were analyzed. Minocycline preconditioning protected the grafted NSCs from ischemic reperfusion injury via up-regulation of Nrf2 and Nrf2-regulated antioxidant genes. Additionally, preconditioning with minocycline induced the NSCs to release paracrine factors, including brain-derived neurotrophic factor, nerve growth factor, glial cell-derived neurotrophic factor, and vascular endothelial growth factor. Moreover, transplantation of the minocycline-preconditioned NSCs significantly attenuated infarct size and improved neurological performance, compared with non-preconditioned NSCs. Minocycline-induced neuroprotection was abolished by transfecting the NSCs with Nrf2-small interfering RNA before transplantation. Thus, preconditioning with minocycline, which reprograms NSCs to tolerate oxidative stress after ischemic reperfusion injury and to express higher levels of paracrine factors through Nrf2 up-regulation, is a simple and safe approach to enhance the effectiveness of transplantation therapy in ischemic stroke. PMID:22399769
Stathopoulos, A.; Fischer, C.F.; Saad, Y.
1994-12-31
The solution of the large, sparse, symmetric eigenvalue problem, Ax = λx, is central to many scientific applications. Among many iterative methods that attempt to solve this problem, the Lanczos and the Generalized Davidson (GD) are the most widely used methods. The Lanczos method builds an orthogonal basis for the Krylov subspace, from which the required eigenvectors are approximated through a Rayleigh-Ritz procedure. Each Lanczos iteration is economical to compute but the number of iterations may grow significantly for difficult problems. The GD method can be considered a preconditioned version of Lanczos. In each step the Rayleigh-Ritz procedure is solved and explicit orthogonalization of the preconditioned residual, (M - λI)^(-1)(A - λI)x, is performed. Therefore, the GD method attempts to improve convergence and robustness at the expense of a more complicated step.
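The Lanczos-plus-Rayleigh-Ritz procedure described above can be sketched compactly. The version below uses full reorthogonalization for numerical safety (plain Lanczos only orthogonalizes against the two previous vectors) and is a teaching sketch, not the toolkit's implementation; all names are illustrative.

```python
import numpy as np

def lanczos_ritz(A, k, rng=None):
    """Build a k-dim Krylov basis of symmetric A and return the Ritz values.

    The Ritz values (eigenvalues of the small projected matrix T = V^T A V)
    approximate the extreme eigenvalues of A.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = A.shape[0]
    V = np.zeros((n, k))
    T = np.zeros((k, k))
    v = rng.standard_normal(n)
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(k):
        w = A @ V[:, j]
        for i in range(j + 1):            # full reorthogonalization
            T[i, j] = V[:, i] @ w
            w -= T[i, j] * V[:, i]
        if j + 1 < k:
            beta = np.linalg.norm(w)
            T[j + 1, j] = beta
            V[:, j + 1] = w / beta
    return np.linalg.eigvalsh((T + T.T) / 2)  # Ritz values, sorted
```

When k equals the matrix dimension, the Ritz values reproduce the full spectrum; for k much smaller than n, the extreme eigenvalues converge first.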
Woodward, Carol S.; Gardner, David J.; Evans, Katherine J.
2015-01-01
Efficient solutions of global climate models require effectively handling disparate length and time scales. Implicit solution approaches allow time integration of the physical system with a step size governed by accuracy of the processes of interest rather than by stability of the fastest time scales present. Implicit approaches, however, require the solution of nonlinear systems within each time step. Usually, a Newton's method is applied to solve these systems. Each iteration of the Newton's method, in turn, requires the solution of a linear model of the nonlinear system. This model employs the Jacobian of the problem-defining nonlinear residual, but this Jacobian can be costly to form. If a Krylov linear solver is used for the solution of the linear system, the action of the Jacobian matrix on a given vector is required. In the case of spectral element methods, the Jacobian is not calculated but only implemented through matrix-vector products. The matrix-vector multiply can also be approximated by a finite difference approximation which may introduce inaccuracy in the overall nonlinear solver. In this paper, we review the advantages and disadvantages of finite difference approximations of these matrix-vector products for climate dynamics within the spectral element shallow water dynamical core of the Community Atmosphere Model.
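The finite-difference Jacobian-vector product discussed above is the matrix-free kernel of Jacobian-free Newton-Krylov methods: the Krylov solver only ever needs J(u) @ v, which can be approximated from two residual evaluations without forming J. A minimal sketch (the step-size heuristic is one common choice, not the one used in the paper; names are illustrative):

```python
import numpy as np

def jac_vec_fd(F, u, v, eps=None):
    """Approximate the Jacobian-vector product J(u) @ v by finite differences.

    J(u) @ v ~ (F(u + eps*v) - F(u)) / eps, so only residual evaluations of
    F are needed; the Jacobian matrix is never formed.
    """
    if eps is None:
        # common heuristic: step scaled by the magnitudes of u and v
        nv = np.linalg.norm(v)
        eps = np.sqrt(np.finfo(float).eps) * (1.0 + np.linalg.norm(u)) / (nv or 1.0)
    return (F(u + eps * v) - F(u)) / eps
```

The choice of eps is the crux of the accuracy trade-off the paper examines: too large and the truncation error of the difference dominates; too small and floating-point cancellation does.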
Zou, Ling; Zhao, Haihua; Zhang, Hongbin
2016-03-09
This work represents a first-of-its-kind successful application to employ advanced numerical methods in solving realistic two-phase flow problems with the two-fluid six-equation two-phase flow model. These advanced numerical methods include a high-resolution spatial discretization scheme with staggered grids, high-order fully implicit time integration schemes, and the Jacobian-free Newton–Krylov (JFNK) method as the nonlinear solver. The computer code developed in this work has been extensively validated with existing experimental flow boiling data in vertical pipes and rod bundles, which cover wide ranges of experimental conditions, such as pressure, inlet mass flux, wall heat flux and exit void fraction. Additional code-to-code benchmark with the RELAP5-3D code further verifies the correct code implementation. The combined methods employed in this work exhibit strong robustness in solving two-phase flow problems even when phase appearance (boiling) and realistic discrete flow regimes are considered. Transitional flow regimes used in existing system analysis codes, normally introduced to overcome numerical difficulty, were completely removed in this work. As a result, this in turn provides the possibility to utilize more sophisticated flow regime maps in the future to further improve simulation accuracy.
Gardner, David; Woodward, Carol S.; Evans, Katherine J
2015-01-01
Efficient solution of global climate models requires effectively handling disparate length and time scales. Implicit solution approaches allow time integration of the physical system with a time step dictated by accuracy of the processes of interest rather than by stability governed by the fastest of the time scales present. Implicit approaches, however, require the solution of nonlinear systems within each time step. Usually, a Newton's method is applied for these systems. Each iteration of Newton's method, in turn, requires the solution of a linear model of the nonlinear system. This model employs the Jacobian of the problem-defining nonlinear residual, but this Jacobian can be costly to form. If a Krylov linear solver is used for the solution of the linear system, the action of the Jacobian matrix on a given vector is required. In the case of spectral element methods, the Jacobian is not calculated but only implemented through matrix-vector products. The matrix-vector multiply can also be approximated by a finite difference, which may show a loss of accuracy in the overall nonlinear solver. In this paper, we review the advantages and disadvantages of finite-difference approximations of these matrix-vector products for climate dynamics within the spectral-element based shallow-water dynamical-core of the Community Atmosphere Model (CAM).
Zou, Ling; Zhao, Haihua; Zhang, Hongbin
2015-09-01
The majority of the existing reactor system analysis codes were developed using low-order numerical schemes in both space and time. In many nuclear thermal–hydraulics applications, it is desirable to use higher-order numerical schemes to reduce numerical errors. High-resolution spatial discretization schemes provide high order spatial accuracy in smooth regions and capture sharp spatial discontinuity without nonphysical spatial oscillations. In this work, we adapted an existing high-resolution spatial discretization scheme on staggered grids in two-phase flow applications. Fully implicit time integration schemes were also implemented to reduce numerical errors from operator-splitting types of time integration schemes. The resulting nonlinear system has been successfully solved using the Jacobian-free Newton–Krylov (JFNK) method. The high-resolution spatial discretization and high-order fully implicit time integration numerical schemes were tested and numerically verified for several two-phase test problems, including a two-phase advection problem, a two-phase advection with phase appearance/disappearance problem, and the water faucet problem. Numerical results clearly demonstrated the advantages of using such high-resolution spatial and high-order temporal numerical schemes to significantly reduce numerical diffusion and therefore improve accuracy. Our study also demonstrated that the JFNK method is stable and robust in solving two-phase flow problems, even when phase appearance/disappearance exists.
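The "high-resolution" spatial discretization mentioned above is typically a limited linear (MUSCL-type) reconstruction: second-order accurate in smooth regions, yet generating no new extrema at discontinuities. A generic 1-D periodic sketch with a minmod limiter follows; this illustrates the idea only and is not the paper's staggered-grid two-phase scheme (names are illustrative).

```python
import numpy as np

def minmod(a, b):
    """Minmod limiter: the smaller-magnitude slope, or zero at extrema."""
    return np.where(a * b > 0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def muscl_faces(u):
    """Limited linear reconstruction of left/right cell-face values.

    For each cell, a slope is chosen by comparing the backward and forward
    differences; the cell average is then extrapolated half a cell to each
    face. Periodic boundaries via np.roll.
    """
    du_left = u - np.roll(u, 1)     # backward difference
    du_right = np.roll(u, -1) - u   # forward difference
    slope = minmod(du_left, du_right)
    return u - 0.5 * slope, u + 0.5 * slope
```

On smooth (locally linear) data the reconstruction recovers the exact slope; across a jump the limiter returns zero slope, which is what suppresses the nonphysical oscillations the abstract refers to.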
Salinger, Andy; Evans, Katherine J; Lemieux, Jean-Francois; Holland, David; Payne, Tony; Price, Stephen; Knoll, Dana
2011-01-01
We have implemented the Jacobian-free Newton-Krylov (JFNK) method for solving the first-order ice sheet momentum equation in order to improve the numerical performance of the Community Ice Sheet Model (CISM), the land ice component of the Community Earth System Model (CESM). Our JFNK implementation is based on significant re-use of existing code. For example, our physics-based preconditioner uses the original Picard linear solver in CISM. For several test cases spanning a range of geometries and boundary conditions, our JFNK implementation is 1.84-3.62 times more efficient than the standard Picard solver in CISM. Importantly, this computational gain of JFNK over the Picard solver increases when refining the grid. Global convergence of the JFNK solver has been significantly improved by rescaling the equation for the basal boundary condition and through the use of an inexact Newton method. While a diverse set of test cases shows that our JFNK implementation is usually robust, for some problems it may fail to converge with increasing resolution (as does the Picard solver). Globalization through parameter continuation did not remedy this problem and future work to improve robustness will explore a combination of Picard and JFNK and the use of homotopy methods.
Preconditioning electromyographic data for an upper extremity model using neural networks
NASA Technical Reports Server (NTRS)
Roberson, D. J.; Fernjallah, M.; Barr, R. E.; Gonzalez, R. V.
1994-01-01
A back-propagation neural network has been employed to precondition the electromyographic signal (EMG) that drives a computational model of the human upper extremity. This model is used to determine the complex relationship between EMG and muscle activation, and generates an optimal muscle activation scheme that simulates the actual activation. While the experimental and model-predicted results of the ballistic muscle movement are very similar, the activation function between the start and the finish is not. This neural network preconditions the signal in an attempt to more closely model the actual activation function over the entire course of the muscle movement.
Preconditioning: can nature's shield be raised against surgical ischemic-reperfusion injury?
Perrault, L P; Menasché, P
1999-11-01
Endogenous myocardial protection refers to the natural defense mechanisms available to the heart to withstand an ischemic injury. So far, these mechanisms have been shown to encompass two phenomena most likely interrelated: ischemic preconditioning and stress protein synthesis. Ischemic preconditioning can be defined as the adaptive mechanism induced by a brief period of reversible ischemia increasing the heart's resistance to a subsequent longer period of ischemia. The therapeutic exploitation of these natural adaptive mechanisms in cardiac surgery is an appealing prospect, as preconditioning could be used before aortic cross-clamping to enhance the current methods of myocardial protection. Two major conclusions emerge from the bulk of experimental data on preconditioning: First, the adaptive phenomenon reduces infarct size after regional ischemia in animal preparations across a wide variety of species but its effects on arrhythmias and on preservation of function after global ischemia are less consistent. This is relevant to cardiac surgery where postbypass pump failure is more often due to stunning than to discrete necrosis. Second, regardless of the various components of the intracellular signaling pathway elicited by the preconditioning stimulus, it seems that the major mechanisms by which this pathway leads to a cardioprotective effect are a slowing of adenosine triphosphate depletion and a limitation of acidosis during the protracted period of ischemia. If the latter is true, then it can reasonably be predicted that these energy-sparing and acidosis-limiting effects may become redundant to those of cardioplegia. From these observations, it can be inferred that preconditioning may find an elective indication in situations where the potential for suboptimal protection increases the risk of necrosis (extensive coronary artery disease, severe left ventricular hypertrophy, long ischemic time, and beating heart operations where occlusion of the target vessels
Nakajima, Sadahiko
2005-06-01
Sensory preconditioning and the Espinet effect illustrate that animals can reason about event relations. In sensory preconditioning, a combination of positive A-B and B-C relations yields a positive A-C relation. In the Espinet effect, a combination of a negative A-B relation and a positive B-C relation yields a negative A-C relation. Using analogies of Heider's balance theory of human attitudes, we predict that nonhuman animals would also infer a positive A-C relation from negative A-B and B-C relations.
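The sign algebra behind these predictions can be made explicit (a toy illustration of the balance-theory analogy; the encoding of relations as +1/-1 is our own reading of the abstract, not the authors' formalism):

```python
def inferred_relation(ab, bc):
    """Sign of the inferred A-C relation as the product of the A-B and
    B-C relation signs (+1 = positive relation, -1 = negative relation).

    This mirrors Heider-style balance: two negatives combine to a positive.
    """
    return ab * bc

# Sensory preconditioning: positive A-B and B-C yield a positive A-C.
assert inferred_relation(+1, +1) == +1
# Espinet effect: negative A-B and positive B-C yield a negative A-C.
assert inferred_relation(-1, +1) == -1
# The paper's prediction: negative A-B and B-C yield a positive A-C.
assert inferred_relation(-1, -1) == +1
```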
Propulsion-related flowfields using the preconditioned Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Venkateswaran, S.; Weiss, J. M.; Merkle, C. L.; Choi, Y.-H.
1992-01-01
A previous time-derivative preconditioning procedure for solving the Navier-Stokes equations is extended to the chemical species equations. The scheme is implemented using both the implicit ADI and the explicit Runge-Kutta algorithms. A new definition of the time step is proposed to enable grid-independent convergence. Several examples of both reacting and non-reacting propulsion-related flowfields are considered. In all cases, convergence that is superior to conventional methods is demonstrated. Accuracy is verified using the example of a backward-facing step. These results demonstrate that preconditioning can enhance the capability of density-based methods over a wide range of Mach and Reynolds numbers.
Preconditioning for the Navier-Stokes equations with finite-rate chemistry
NASA Technical Reports Server (NTRS)
Godfrey, Andrew G.
1993-01-01
The extension of Van Leer's preconditioning procedure to generalized finite-rate chemistry is discussed. Application to viscous flow is begun with the proper preconditioning matrix for the one-dimensional Navier-Stokes equations. Eigenvalue stiffness is resolved and convergence-rate acceleration is demonstrated over the entire Mach-number range from nearly stagnant flow to hypersonic. Specific benefits are realized at the low and transonic flow speeds typical of complete propulsion-system simulations. The extended preconditioning matrix necessarily accounts for both thermal and chemical nonequilibrium. Numerical analysis reveals the possible theoretical improvements from using a preconditioner for all Mach number regimes. Numerical results confirm the expectations from the numerical analysis. Representative test cases include flows with previously troublesome embedded high-condition-number areas. Van Leer, Lee, and Roe recently developed an optimal, analytic preconditioning technique to reduce eigenvalue stiffness over the full Mach-number range. By multiplying the flux-balance residual with the preconditioning matrix, the acoustic wave speeds are scaled so that all waves propagate at the same rate, an essential property to eliminate inherent eigenvalue stiffness. This session discusses a synthesis of the thermochemical nonequilibrium flux-splitting developed by Grossman and Cinnella and the characteristic wave preconditioning of Van Leer into a powerful tool for implicitly solving two and three-dimensional flows with generalized finite-rate chemistry. For finite-rate chemistry, the state vector of unknowns is variable in length. Therefore, the preconditioning matrix extended to generalized finite-rate chemistry must accommodate a flexible system of moving waves. Fortunately, no new kind of wave appears in the system. The only existing waves are entropy and vorticity waves, which move with the fluid, and acoustic waves, which propagate in Mach number dependent
Preconditioned upwind methods to solve 3-D incompressible Navier-Stokes equations for viscous flows
NASA Technical Reports Server (NTRS)
Hsu, C.-H.; Chen, Y.-M.; Liu, C. H.
1990-01-01
A computational method for calculating low-speed viscous flowfields is developed. The method uses the implicit upwind-relaxation finite-difference algorithm with a nonsingular eigensystem to solve the preconditioned, three-dimensional, incompressible Navier-Stokes equations in curvilinear coordinates. The technique of local time stepping is incorporated to accelerate the rate of convergence to a steady-state solution. An extensive study of optimizing the preconditioned system is carried out for two viscous flow problems. Computed results are compared with analytical solutions and experimental data.
Application of Subspace Detection to the 6 November 2011 M5.6 Prague, Oklahoma Aftershock Sequence
NASA Astrophysics Data System (ADS)
McMahon, N. D.; Benz, H.; Johnson, C. E.; Aster, R. C.; McNamara, D. E.
2015-12-01
Subspace detection is a powerful tool for the identification of small seismic events. Subspace detectors improve upon single-event matched filtering techniques by using multiple orthogonal waveform templates whose linear combinations characterize a range of observed signals from previously identified earthquakes. Subspace detectors running on multiple stations can significantly increase the number of locatable events, lowering the catalog's magnitude of completeness and thus providing extraordinary detail on the kinematics of the aftershock process. The 6 November 2011 M5.6 earthquake near Prague, Oklahoma is the largest earthquake instrumentally recorded in Oklahoma history and the largest earthquake resulting from deep wastewater injection. A M4.8 foreshock on 5 November 2011 and the M5.6 mainshock triggered tens of thousands of detectable aftershocks along a 20 km splay of the Wilzetta Fault Zone known as the Meeker-Prague fault. In response to this unprecedented earthquake, 21 temporary seismic stations were deployed surrounding the seismic activity. We utilized a catalog of 767 previously located aftershocks to construct subspace detectors for the 21 temporary and 10 closest permanent seismic stations. Subspace detection identified more than 500,000 new arrival-time observations, which associated into more than 20,000 locatable earthquakes. The associated earthquakes were relocated using the Bayesloc multiple-event locator, resulting in ~7,000 earthquakes with hypocentral uncertainties of less than 500 m. The relocated seismicity provides unique insight into the spatio-temporal evolution of the aftershock sequence along the Wilzetta Fault Zone and its associated structures. We find that the crystalline basement and overlying sedimentary Arbuckle formation accommodate the majority of aftershocks. While we observe aftershocks along the entire 20 km length of the Meeker-Prague fault, the vast majority of earthquakes were confined to a 9 km wide by 9 km deep
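The multi-template detection idea can be sketched numerically: project a data window onto the subspace spanned by orthonormalized templates and threshold the fraction of energy captured (a generic illustration with synthetic data; function and variable names are ours, not tied to the deployment described above):

```python
import numpy as np

def detection_statistic(window, U):
    """Fraction of window energy captured by the template subspace.

    U : (n, d) matrix with orthonormal columns, e.g. obtained from
        aligned waveforms of previously located earthquakes. The statistic
        lies in [0, 1]; values near 1 indicate a window well explained by
        a linear combination of the templates.
    """
    proj = U @ (U.T @ window)          # orthogonal projection onto span(U)
    return float(proj @ proj / (window @ window))

# Orthonormalize two synthetic 'templates' with a QR factorization.
rng = np.random.default_rng(0)
templates = rng.standard_normal((64, 2))
U, _ = np.linalg.qr(templates)

signal = U @ np.array([1.0, -0.5])     # lies exactly in the subspace
noise = rng.standard_normal(64)        # generic noise window

stat_signal = detection_statistic(signal, U)   # close to 1
stat_noise = detection_statistic(noise, U)     # small for random noise
```

A single matched filter corresponds to d = 1; using d > 1 orthogonal templates is what lets the detector absorb waveform variability across the aftershock sequence.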
Bickler, Philip E.; Fahlman, Christian S.
2012-01-01
Neurons preconditioned with non-injurious hypoxia or the anesthetic isoflurane express different genes but are equally protected against severe hypoxia/ischemia. We hypothesized that neuroprotection would be augmented when preconditioning with isoflurane and hypoxic preconditioning are combined. We also tested if preconditioning requires intracellular Ca2+ and the inositol triphosphate receptor, and if gene expression is similar in single agent and combined preconditioning. Hippocampal slice cultures prepared from 9 day-old rats were preconditioned with hypoxia (95% N2, 5% CO2 for 15 min, HPC), 1% isoflurane for 15 min (APC) or their combination (CPC) for 15 min. A day later cultures were deprived of O2 and glucose (OGD) to produce neuronal injury. Cell death was assessed 48 hr after OGD. mRNA encoding 119 signal transduction genes was quantified with cDNA micro arrays. Intracellular Ca2+ in CA1 region was measured with fura-2 during preconditioning. The cell-permeable Ca2+ buffer BAPTA-AM, the IP3 receptor antagonist Xestospongin C and RNA silencing were used to investigate preconditioning mechanisms. CPC decreased CA1, CA3 and dentate region death by 64–86% following OGD, more than HPC or APC alone (P<0.01). Gene expression following CPC was an amalgam of gene expression in HPC and APC, with simultaneous increases in growth/development and survival/apoptosis regulation genes. Intracellular Ca2+ chelation and RNA silencing of IP3 receptors prevented preconditioning neuroprotection and gene responses. We conclude that combined isoflurane-hypoxia preconditioning augments neuroprotection compared to single agents in immature rat hippocampal slice cultures. The mechanism involves genes for growth, development, apoptosis regulation and cell survival as well as IP3 receptors and intracellular Ca2+. PMID:20434434
Bader, Andreas Matthäus; Klose, Kristin; Bieback, Karen; Korinth, Dirk; Schneider, Maria; Seifert, Martina; Choi, Yeong-Hoon; Kurtz, Andreas; Falk, Volkmar; Stamm, Christof
2015-01-01
Hypoxic preconditioning was shown to improve the therapeutic efficacy of bone marrow-derived multipotent mesenchymal stromal cells (MSCs) upon transplantation in ischemic tissue. Given the interest in clinical applications of umbilical cord blood-derived MSCs, we developed a specific hypoxic preconditioning protocol and investigated its anti-apoptotic and pro-angiogenic effects on cord blood MSCs undergoing simulated ischemia in vitro by subjecting them to hypoxia and nutrient deprivation with or without preceding hypoxic preconditioning. Cell number, metabolic activity, surface marker expression, chromosomal stability, apoptosis (caspases-3/7 activity) and necrosis were determined, and phosphorylation, mRNA expression and protein secretion of selected apoptosis and angiogenesis-regulating factors were quantified. Then, human umbilical vein endothelial cells (HUVEC) were subjected to simulated ischemia in co-culture with hypoxically preconditioned or naïve cord blood MSCs, and HUVEC proliferation was measured. Migration, proliferation and nitric oxide production of HUVECs were determined in presence of cord blood MSC-conditioned medium. Cord blood MSCs proved least sensitive to simulated ischemia when they were preconditioned for 24 h, while their basic behavior, immunophenotype and karyotype in culture remained unchanged. Here, “post-ischemic” cell number and metabolic activity were enhanced and caspase-3/7 activity and lactate dehydrogenase release were reduced as compared to non-preconditioned cells. Phosphorylation of AKT and BAD, mRNA expression of BCL-XL, BAG1 and VEGF, and VEGF protein secretion were higher in preconditioned cells. Hypoxically preconditioned cord blood MSCs enhanced HUVEC proliferation and migration, while nitric oxide production remained unchanged. We conclude that hypoxic preconditioning protects cord blood MSCs by activation of anti-apoptotic signaling mechanisms and enhances their angiogenic potential. Hence, hypoxic preconditioning
Discriminative local subspaces in gene expression data for effective gene function prediction
Gutiérrez, Rodrigo A.; Soto, Alvaro
2012-01-01
Motivation: Massive amounts of genome-wide gene expression data have become available, motivating the development of computational approaches that leverage this information to predict gene function. Among successful approaches, supervised machine learning methods, such as Support Vector Machines (SVMs), have shown superior prediction accuracy. However, these methods lack the simple biological intuition provided by co-expression networks (CNs), limiting their practical usefulness. Results: In this work, we present Discriminative Local Subspaces (DLS), a novel method that combines supervised machine learning and co-expression techniques with the goal of systematically predicting genes involved in specific biological processes of interest. Unlike traditional CNs, DLS uses the knowledge available in Gene Ontology (GO) to generate informative training sets that guide the discovery of expression signatures: expression patterns that are discriminative for genes involved in the biological process of interest. By linking genes co-expressed with these signatures, DLS is able to construct a discriminative CN that links both known and previously uncharacterized genes for the selected biological process. This article focuses on the algorithm behind DLS and shows its predictive power using an Arabidopsis thaliana dataset and a representative set of 101 GO terms from the Biological Process Ontology. Our results show that DLS has a higher average accuracy than both SVMs and CNs. Thus, DLS is able to provide the prediction accuracy of supervised learning methods while maintaining the intuitive understanding of CNs. Availability: A MATLAB® implementation of DLS is available at http://virtualplant.bio.puc.cl/cgi-bin/Lab/tools.cgi Contact: tfpuelma@uc.cl Supplementary Information: Supplementary data are available at http://bioinformatics.mpimp-golm.mpg.de/. PMID:22820203
Improved system for object detection and star/galaxy classification via local subspace analysis.
Liu, Zhi-Yong; Chiu, Kai-Chun; Xu, Lei
2003-01-01
The two traditional tasks of object detection and star/galaxy classification in astronomy can be automated by neural networks because the nature of the problems is that of pattern recognition. A typical existing system can be further improved by using one of the local Principal Component Analysis (PCA) models. Our analysis in the context of object detection and star/galaxy classification reveals that local PCA is not only superior to global PCA in feature extraction, but is also superior to Gaussian mixtures in clustering analysis. Unlike global PCA, which performs PCA for the whole data set, local PCA applies PCA individually to each cluster of data. As a result, local PCA often outperforms global PCA for multi-modal data. Moreover, since local PCA can effectively avoid the trouble of having to specify a large number of free elements in each covariance matrix of a Gaussian mixture, it can give a better description of the local subspace structure of each cluster when applied to high-dimensional data with small sample size. In this paper, the local PCA model proposed by Xu [IEEE Trans. Neural Networks 12 (2001) 822] under the general framework of Bayesian Ying Yang (BYY) normalization learning will be adopted. Endowed with the automatic model selection ability of BYY learning, the BYY normalization learning-based local PCA model can cope with object detection and star/galaxy classification tasks of unknown model complexity. A detailed algorithm for implementation of the local PCA model will be proposed, and experimental results using both synthetic and real astronomical data will be demonstrated.
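The global-versus-local distinction can be sketched in a few lines: local PCA fits a separate principal subspace within each cluster instead of one subspace for all data (a generic illustration with cluster labels assumed given; in the paper the clustering itself is learned within the BYY framework):

```python
import numpy as np

def local_pca(X, labels, n_components=1):
    """Fit PCA separately within each cluster ('local' PCA).

    X      : (N, D) data matrix.
    labels : cluster assignment per row (assumed known here).
    Returns a dict mapping cluster id -> (mean, principal components).
    """
    models = {}
    for k in np.unique(labels):
        Xk = X[labels == k]
        mu = Xk.mean(axis=0)
        # Principal directions from the SVD of the centered cluster data.
        _, _, Vt = np.linalg.svd(Xk - mu, full_matrices=False)
        models[k] = (mu, Vt[:n_components])
    return models

# Two clusters elongated along different axes: global PCA would blur them,
# local PCA recovers each cluster's own principal direction.
X0 = np.array([[t, 0.0] for t in (-2.0, -1.0, 0.0, 1.0, 2.0)])
X1 = np.array([[10.0, 10.0 + t] for t in (-2.0, -1.0, 0.0, 1.0, 2.0)])
X = np.vstack([X0, X1])
labels = np.array([0] * 5 + [1] * 5)
models = local_pca(X, labels)
```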
A Subspace Pursuit–based Iterative Greedy Hierarchical Solution to the Neuromagnetic Inverse Problem
Babadi, Behtash; Obregon-Henao, Gabriel; Lamus, Camilo; Hämäläinen, Matti S.; Brown, Emery N.; Purdon, Patrick L.
2013-01-01
Magnetoencephalography (MEG) is an important non-invasive method for studying activity within the human brain. Source localization methods can be used to estimate spatiotemporal activity from MEG measurements with high temporal resolution, but the spatial resolution of these estimates is poor due to the ill-posed nature of the MEG inverse problem. Recent developments in source localization methodology have emphasized temporal as well as spatial constraints to improve source localization accuracy, but these methods can be computationally intense. Solutions emphasizing spatial sparsity hold tremendous promise, since the underlying neurophysiological processes generating MEG signals are often sparse in nature, whether in the form of focal sources, or distributed sources representing large-scale functional networks. Recent developments in the theory of compressed sensing (CS) provide a rigorous framework to estimate signals with sparse structure. In particular, a class of CS algorithms referred to as greedy pursuit algorithms can provide both high recovery accuracy and low computational complexity. Greedy pursuit algorithms are difficult to apply directly to the MEG inverse problem because of the high-dimensional structure of the MEG source space and the high spatial correlation in MEG measurements. In this paper, we develop a novel greedy pursuit algorithm for sparse MEG source localization that overcomes these fundamental problems. This algorithm, which we refer to as the Subspace Pursuit-based Iterative Greedy Hierarchical (SPIGH) inverse solution, exhibits very low computational complexity while achieving very high localization accuracy. We evaluate the performance of the proposed algorithm using comprehensive simulations, as well as the analysis of human MEG data during spontaneous brain activity and somatosensory stimuli. These studies reveal substantial performance gains provided by the SPIGH algorithm in terms of computational complexity, localization accuracy
Defining ATR solutions using affine transformations on a union of subspaces model
NASA Astrophysics Data System (ADS)
Hester, Charles F.; Risko, Kelly K. D.
2012-05-01
The ability to recognize a target in an image is an important problem for machine vision, surveillance systems, and military weapons. There are many "solutions" to an automatic target recognition (ATR) problem proposed by practitioners. Often the definition of the problem leads to multiple solutions due to the incompleteness of the definition. Solutions are also made approximate due to resource limitations. Issues concerning the "best" solution and solution performance are very open, since problem definitions and solutions are ill-defined. Indeed, from information-based physical measurement theory, such as found in the Minimum Description Length (MDL) principle, the exact solution is intractable [1]. Generating some clarity by defining problems on restricted sets seems an appropriate approach for reducing this vagueness in ATR definitions and solutions. Given that a one-to-one relationship between a physical system and the MDL exists, this uniqueness allows a solution to be defined by its description and a norm assigned to that description. Moreover, the solution can be characterized by a set of metrics that are based on the algorithmic information of the physical measurements. The MDL, however, is not a constructive theory, but solutions can be defined by concise problem descriptions. This limits the scope of the problem, and we will take this approach here. The paper will start with a definition of an ATR problem, followed by our proposal of a descriptive solution using a union-of-subspaces model of images, as described below, based on Lu and Do [2]. This solution implicitly uses the concept of informative representations [3], which we review briefly. Then we will present some metrics to be used to characterize the solution(s), which we will demonstrate with a simple example. In the discussion following the example we will suggest how this fits in the context of present and future work.
40 CFR 85.2219 - Idle test with loaded preconditioning-EPA 91.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 18 2010-07-01 2010-07-01 false Idle test with loaded preconditioning-EPA 91. 85.2219 Section 85.2219 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONTROL OF AIR POLLUTION FROM MOBILE SOURCES Emission Control...
40 CFR 85.2218 - Preconditioned idle test-EPA 91.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 19 2012-07-01 2012-07-01 false Preconditioned idle test-EPA 91. 85.2218 Section 85.2218 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONTROL OF AIR POLLUTION FROM MOBILE SOURCES Emission Control System Performance Warranty...
Analysis of Off-Board Powered Thermal Preconditioning in Electric Drive Vehicles: Preprint
Barnitt, R. A.; Brooker, A. D.; Ramroth, L.; Rugh, J.; Smith, K. A.
2010-12-01
Following a hot or cold thermal soak, vehicle climate control systems (air conditioning or heat) are required to quickly attain a cabin temperature comfortable to the vehicle occupants. In a plug-in hybrid electric or electric vehicle (PEV) equipped with electric climate control systems, the traction battery is the sole on-board power source. Depleting the battery for immediate climate control results in reduced charge-depleting (CD) range and additional battery wear. PEV cabin and battery thermal preconditioning using off-board power supplied by the grid or a building can mitigate the impacts of climate control. This analysis shows that climate control loads can reduce CD range up to 35%. However, cabin thermal preconditioning can increase CD range up to 19% when compared to no thermal preconditioning. In addition, this analysis shows that while battery capacity loss over time is driven by ambient temperature rather than climate control loads, concurrent battery thermal preconditioning can reduce capacity loss up to 7% by reducing pack temperature in a high ambient temperature scenario.
40 CFR 1065.590 - PM sampling media (e.g., filters) preconditioning and tare weighing.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 33 2011-07-01 2011-07-01 false PM sampling media (e.g., filters... Specified Duty Cycles § 1065.590 PM sampling media (e.g., filters) preconditioning and tare weighing. Before an emission test, take the following steps to prepare PM sampling media (e.g., filters) and...
40 CFR 92.125 - Pre-test procedures and preconditioning.
Code of Federal Regulations, 2010 CFR
2010-07-01
... PROGRAMS (CONTINUED) CONTROL OF AIR POLLUTION FROM LOCOMOTIVES AND LOCOMOTIVE ENGINES Test Procedures § 92.125 Pre-test procedures and preconditioning. (a) Locomotive testing. (1) Determine engine lubricating... is in compliance with the specifications of § 92.113. (3) Install instrumentation, engine...
Brandhorst, Daniel; Brandhorst, Heide; Maataoui, Vidya; Maataoui, Adel; Johnson, Paul R V
2013-06-01
Human islet isolation is associated with adverse conditions inducing apoptosis and necrosis. The aim of the present study was to assess whether antiapoptotic preconditioning can improve in vitro and posttransplant function of isolated human islets. A dose-finding study demonstrated that 200 μmol/L of the caspase-3 inhibitor Ac-DEVD-CMK was most efficient to reduce the expression of activated caspase-3 in isolated human islets exposed to severe heat shock. Ac-DEVD-CMK-pretreated or sham-treated islets were transplanted into immunocompetent or immunodeficient diabetic mice and subjected to static glucose incubation to measure insulin and proinsulin secretion. Antiapoptotic pretreatment significantly deteriorated graft function resulting in elevated nonfasting serum glucose when compared to sham-treated islets transplanted into diabetic nude mice (p < 0.01) and into immunocompetent mice (p < 0.05). Ac-DEVD-CMK pretreatment did not significantly change basal and glucose-stimulated insulin release compared to sham-treated human islets but increased the proinsulin release at high glucose concentrations (20 mM) thus reducing the insulin-to-proinsulin ratio in preconditioned islets (p < 0.05). This study demonstrates that the caspase-3 inhibitor Ac-DEVD-CMK interferes with proinsulin conversion in preconditioned islets reducing their potency to cure diabetic mice. The mechanism behind this phenomenon is unclear so far but may be related to the ketone CMK linked to the Ac-DEVD molecule. Further studies are required to identify biocompatible caspase inhibitors suitable for islet preconditioning.
ERIC Educational Resources Information Center
Preßler, Anna-Lena; Könen, Tanja; Hasselhorn, Marcus; Krajewski, Kristin
2014-01-01
The aim of the present study was to empirically disentangle the interdependencies of the impact of nonverbal intelligence, working memory capacities, and phonological processing skills on early reading decoding and spelling within a latent variable approach. In a sample of 127 children, these cognitive preconditions were assessed before the onset…
Preconditioned iterative methods for nonselfadjoint or indefinite elliptic boundary value problems
Bramble, J.H.; Pasciak, J.E.
1984-01-01
We consider a Galerkin finite element approximation to a general linear elliptic boundary value problem which may be nonselfadjoint or indefinite. We show how to precondition the equations so that the resulting systems of linear algebraic equations lead to iteration procedures whose convergence rates are independent of the number of unknowns in the solution.
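The size-independent convergence that preconditioning buys can be illustrated, for the simpler symmetric positive definite case, with a preconditioned conjugate gradient loop. This is a minimal sketch using an illustrative 1-D Laplacian and a Jacobi (diagonal) preconditioner; the names and problem are ours, not the authors' Galerkin scheme:

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, maxit=500):
    """Preconditioned conjugate gradient for a symmetric positive
    definite A.  M_inv applies the inverse of the preconditioner M;
    the better M approximates A, the fewer iterations are needed."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Illustrative problem: 1-D discrete Laplacian with Jacobi preconditioner
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = pcg(A, b, lambda r: r / np.diag(A))
```

With a preconditioner that captures the essential behavior of A, the iteration count stays roughly constant as the number of unknowns grows, which is the property the abstract describes.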
NASA Astrophysics Data System (ADS)
Guo, Shiguang; Zhang, Bo; Wang, Qing; Cabrales-Vargas, Alejandro; Marfurt, Kurt J.
2016-08-01
Conventional Kirchhoff migration often suffers from artifacts such as aliasing and acquisition footprint, which come from sub-optimal seismic acquisition. The footprint can mask faults and fractures, while aliased noise can focus into false coherent events which affect interpretation and contaminate amplitude variation with offset, amplitude variation with azimuth, and elastic inversion. Preconditioned least-squares migration minimizes these artifacts. We implement least-squares migration by minimizing the difference between the original data and the modeled demigrated data using an iterative conjugate gradient scheme. Unpreconditioned least-squares migration better estimates the subsurface amplitude, but does not suppress aliasing. In this work, we precondition the results by applying a 3D prestack structure-oriented LUM (lower–upper–middle) filter to each common-offset and common-azimuth gather at each iteration. The preconditioning algorithm not only suppresses aliasing of both signal and noise, but also improves the convergence rate. We apply the new preconditioned least-squares migration to the Marmousi model and demonstrate how it can improve the seismic image compared with conventional migration, and then apply it to a survey acquired over a new resource play in the Mid-Continent, USA. The acquisition footprint over the targets is attenuated and the signal-to-noise ratio is enhanced. To demonstrate the impact on interpretation, we generate a suite of seismic attributes to image the Mississippian limestone, and show that the karst-enhanced fractures in the Mississippian limestone can be better illuminated.
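The workflow described (conjugate-gradient least squares with a model-space filter applied as a preconditioner at each iteration) can be sketched generically. Everything below is an illustrative stand-in, not the authors' implementation: a dense random matrix plays the role of the demigration operator, and a simple triangular smoothing filter replaces the 3D structure-oriented LUM filter:

```python
import numpy as np

def smooth(x):
    """Triangular smoothing filter: a crude, symmetric positive
    definite stand-in for the structure-oriented LUM filter."""
    return np.convolve(x, np.array([1.0, 2.0, 1.0]) / 4.0, mode="same")

def preconditioned_lsm(L, d, precond, tol=1e-8, maxit=200):
    """Least-squares migration sketch: preconditioned CG on the
    normal equations (L^T L) m = L^T d.  Here L plays the role of
    the demigration (modeling) operator and L^T that of migration;
    the filter is applied to the residual at every iteration."""
    A = L.T @ L
    b = L.T @ d
    m = np.zeros(L.shape[1])
    r = b - A @ m
    z = precond(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        m = m + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = precond(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return m

# Tiny synthetic example: recover a smooth "reflectivity" model
rng = np.random.default_rng(0)
L = rng.standard_normal((80, 40))
m_true = smooth(rng.standard_normal(40))
d = L @ m_true
m_est = preconditioned_lsm(L, d, smooth)
```

Because the smoothing filter damps the oscillatory components that carry aliased noise, using it as the preconditioner both regularizes the model update and, as the abstract notes, speeds up convergence.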
al Makdessi, S; Brändle, M; Ehrt, M; Sweidan, H; Jacob, R
1995-04-12
The aim of this study was to investigate (1) whether preconditioning modifies the fatty acid (FA) composition of myocardial phospholipids (PL), (2) whether a prior modification of membrane PL composition by the administration of coconut oil or fish oil influences the preconditioning, and (3) how the protective effects of preconditioning compare to those of dietary fish oil. To this end, three groups of rats were given for 10 weeks either a standard diet, a standard diet + 10% coconut oil, or a standard diet + 10% fish oil. Preconditioning was performed in situ in anesthetized open-chest rats by 2 cycles of 3 min left anterior descending coronary artery occlusion and 10 min reperfusion, followed by 40 min of ischemia and 60 min of reperfusion. The ECG was recorded and used for a continuous count of salvos of extrasystoles, ventricular flutter, and fibrillation; these rhythm disturbances were subsequently summed and evaluated as total arrhythmias. The FA of tissue PL were analyzed in a sample of the ischemic zone, whose size was determined by means of malachite green. The coconut oil diet (rich in saturated FA) slightly modified the myocardial PL by increasing oleic acid and decreasing linoleic acid, and resulted in the highest incidence of arrhythmias. The fish oil diet had the opposite effect, drastically modifying the PL FA (replacement of the n-6 FA by the n-3 FA) and significantly minimizing the arrhythmias in comparison with the standard diet group.(ABSTRACT TRUNCATED AT 250 WORDS)