Preconditioned Krylov subspace methods for eigenvalue problems
Wu, Kesheng; Saad, Y.; Stathopoulos, A.
1996-12-31
The Lanczos algorithm is a commonly used method for finding a few extreme eigenvalues of a symmetric matrix. It is effective if the wanted eigenvalues have large relative separations. If the separations are small, several alternatives are often used, including the shift-invert Lanczos method, the preconditioned Lanczos method, and the Davidson method. The shift-invert Lanczos method requires a direct factorization of the matrix, which is often impractical if the matrix is large; in such cases preconditioned schemes are preferred. Many applications require the computation of hundreds or thousands of eigenvalues of large sparse matrices, which poses serious challenges for both the iterative eigenvalue solver and the preconditioner. In this paper we explore several preconditioned eigenvalue solvers and identify the ones best suited for finding a large number of eigenvalues. The methods discussed in this paper make up the core of a preconditioned eigenvalue toolkit under construction.
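A minimal pure-Python sketch of the shift-invert idea motivating this abstract: eigenvalues of (A - σI)^{-1} are 1/(λ - σ), so eigenvalues of A close to the shift σ become well separated. For brevity, plain power iteration stands in for the Lanczos process, and the 2x2 matrix and shift are illustrative assumptions, not the paper's test problems.

```python
# Shift-invert toy example: find the eigenvalue of A nearest a shift sigma
# by power iteration on B = (A - sigma*I)^{-1}. A has eigenvalues 1.0 and
# 1.1 (poorly separated); under shift-invert they map to 10 and 5.

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def inv_2x2(M):
    a, b = M[0]; c, d = M[1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[1.05, 0.05],
     [0.05, 1.05]]               # symmetric, eigenvalues 1.0 and 1.1
sigma = 0.9                      # shift near the wanted eigenvalue 1.0
B = inv_2x2([[A[0][0] - sigma, A[0][1]],
             [A[1][0], A[1][1] - sigma]])   # B = (A - sigma*I)^{-1}

v = [1.0, 0.3]
for _ in range(50):              # power iteration on B
    w = mat_vec(B, v)
    norm = sum(x * x for x in w) ** 0.5
    v = [x / norm for x in w]

Bv = mat_vec(B, v)
mu = sum(v[i] * Bv[i] for i in range(2))    # Rayleigh quotient on B
lam = sigma + 1.0 / mu           # eigenvalue of A nearest the shift
```

In a real code the factorization of A - σI is exactly the expensive step the abstract points out, which is why preconditioned schemes are preferred for large matrices.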
Preserving Symmetry in Preconditioned Krylov Subspace Methods
NASA Technical Reports Server (NTRS)
Chan, Tony F.; Chow, E.; Saad, Y.; Yeung, M. C.
1996-01-01
We consider the problem of solving a linear system Ax = b when A is nearly symmetric and when the system is preconditioned by a symmetric positive definite matrix M. In the symmetric case, one can recover symmetry by using M-inner products in the conjugate gradient (CG) algorithm. This idea can also be used in the nonsymmetric case, and near symmetry can be preserved similarly. Like CG, the new algorithms are mathematically equivalent to split preconditioning, but do not require M to be factored. Better robustness in a specific sense can also be observed. When combined with truncated versions of iterative methods, tests show that this is more effective than the common practice of forfeiting near-symmetry altogether.
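The M-inner-product idea above can be illustrated with a standard preconditioned conjugate gradient sketch, which applies M^{-1} without ever factoring M. This is a minimal pure-Python version with an illustrative 2x2 SPD system and a Jacobi (diagonal) preconditioner, not the authors' test cases.

```python
# Preconditioned CG: mathematically equivalent to split preconditioning
# with M = L L^T, but only applications of M^{-1} are needed.

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def pcg(A, b, apply_Minv, tol=1e-12, maxit=100):
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # r = b - A x with x = 0
    z = apply_Minv(r)             # preconditioned residual
    p = z[:]
    rz = dot(r, z)
    for _ in range(maxit):
        Ap = mat_vec(A, p)
        alpha = rz / dot(p, Ap)
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if dot(r, r) ** 0.5 < tol:
            break
        z = apply_Minv(r)
        rz_new = dot(r, z)
        beta = rz_new / rz
        rz = rz_new
        p = [z[i] + beta * p[i] for i in range(n)]
    return x

A = [[4.0, 1.0], [1.0, 3.0]]      # SPD test matrix; exact solution is (1/11, 7/11)
b = [1.0, 2.0]
x = pcg(A, b, lambda r: [r[0] / 4.0, r[1] / 3.0])   # Jacobi preconditioner
```

The scalar products dot(r, z) are exactly the M^{-1}-inner products the abstract refers to; replacing them generalizes the scheme to the nearly symmetric case.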
Krylov subspace methods on supercomputers
NASA Technical Reports Server (NTRS)
Saad, Youcef
1988-01-01
A short survey of recent research on Krylov subspace methods with emphasis on implementation on vector and parallel computers is presented. Conjugate gradient methods have proven very useful on traditional scalar computers, and their popularity is likely to increase as three-dimensional models gain importance. A conservative approach to derive effective iterative techniques for supercomputers has been to find efficient parallel/vector implementations of the standard algorithms. The main source of difficulty in the incomplete factorization preconditionings is in the solution of the triangular systems at each step. A few approaches consisting of implementing efficient forward and backward triangular solutions are described in detail. Polynomial preconditioning as an alternative to standard incomplete factorization techniques is also discussed. Another efficient approach is to reorder the equations so as to improve the structure of the matrix to achieve better parallelism or vectorization. An overview of these and other ideas and their effectiveness or potential for different types of architectures is given.
Yang, Taiseung; Spilker, Robert L
2007-02-01
A study was conducted on combinations of preconditioned iterative methods with matrix reordering to solve the linear systems arising from a biphasic velocity-pressure (v-p) finite element formulation used to simulate soft hydrated tissues in the human musculoskeletal system. Krylov subspace methods were tested due to the symmetric indefiniteness of our systems, specifically the generalized minimal residual (GMRES), transpose-free quasi-minimal residual (TFQMR), and biconjugate gradient stabilized (BiCGSTAB) methods. Standard graph reordering techniques were used with incomplete LU (ILU) preconditioning. Performance of the methods was compared on the basis of convergence rate, computing time, and memory requirements. Our results indicate that performance is affected more significantly by the choice of reordering scheme than by the choice of Krylov method. Overall, BiCGSTAB with one-way dissection (OWD) reordering performed best for a test problem representative of a physiological tissue layer. The preferred methods were then used to simulate the contact of the humeral head and glenoid tissue layers in the glenohumeral joint of the shoulder, using a penetration-based method to approximate contact. The distribution of pressure and stress fields within the tissues shows significant through-thickness effects and demonstrates the importance of simulating soft hydrated tissues with a biphasic model.
Krylov subspace methods - Theory, algorithms, and applications
NASA Technical Reports Server (NTRS)
Saad, Youcef
1990-01-01
Projection methods based on Krylov subspaces for solving various types of scientific problems are reviewed. The main idea of this class of methods, when applied to a linear system Ax = b, is to generate in some manner an approximate solution to the original problem from the so-called Krylov subspace. Thus, the original problem of size N is approximated by one of dimension m, typically much smaller than N. Krylov subspace methods have been very successful in solving linear systems and eigenvalue problems and are now becoming popular for solving nonlinear equations. The main ideas in Krylov subspace methods are shown, and their use in solving linear systems, eigenvalue problems, parabolic partial differential equations, Liapunov matrix equations, and nonlinear systems of equations is discussed.
Multigrid and Krylov Subspace Methods for the Discrete Stokes Equations
1994-06-01
consider versions derived from two smoothing strategies: a variant of the distributed Gauss-Seidel method of Brandt and Dinar [6], and the technique based... factors of approximately 1.5 to 2, than the Krylov subspace methods and the distributed Gauss-Seidel method. The Krylov subspace methods are more widely
Overview of Krylov subspace methods with applications to control problems
NASA Technical Reports Server (NTRS)
Saad, Youcef
1989-01-01
An overview of projection methods based on Krylov subspaces is given, with emphasis on their application to solving matrix equations that arise in control problems. The main idea of Krylov subspace methods is to generate a basis of the Krylov subspace and to seek an approximate solution to the original problem from this subspace. Thus, the original matrix problem of size N is approximated by one of dimension m, typically much smaller than N. Krylov subspace methods have been very successful in solving linear systems and eigenvalue problems and are now becoming popular for solving nonlinear equations. It is shown how they can be used to solve partial pole placement problems, Sylvester's equation, and Lyapunov's equation.
Krylov-subspace acceleration of time periodic waveform relaxation
Lumsdaine, A.
1994-12-31
In this paper the author uses Krylov-subspace techniques to accelerate the convergence of waveform relaxation applied to solving systems of first order time periodic ordinary differential equations. He considers the problem in the frequency domain and presents frequency dependent waveform GMRES (FDWGMRES), a member of a new class of frequency dependent Krylov-subspace techniques. FDWGMRES exhibits many desirable properties, including finite termination independent of the number of timesteps and, for certain problems, a convergence rate which is bounded from above by the convergence rate of GMRES applied to the static matrix problem corresponding to the linear time-invariant ODE.
An adaptation of Krylov subspace methods to path following
Walker, H.F.
1996-12-31
Krylov subspace methods at present constitute a very well known and highly developed class of iterative linear algebra methods. These have been effectively applied to nonlinear system solving through Newton-Krylov methods, in which Krylov subspace methods are used to solve the linear systems that characterize steps of Newton's method (the Newton equations). Here, we will discuss the application of Krylov subspace methods to path following problems, in which the object is to track a solution curve as a parameter varies. Path following methods are typically of predictor-corrector form, in which a point near the solution curve is "predicted" by some easy but relatively inaccurate means, and then a series of Newton-like corrector iterations is used to return approximately to the curve. The analogue of the Newton equation is underdetermined, and an additional linear condition must be specified to determine corrector steps uniquely. This is typically done by requiring that the steps be orthogonal to an approximate tangent direction. Augmenting the underdetermined system with this orthogonality condition in a straightforward way typically works well if direct linear algebra methods are used, but Krylov subspace methods are often ineffective with this approach. We will discuss recent work in which this orthogonality condition is imposed directly as a constraint on the corrector steps in a certain way. The means of doing this preserves problem conditioning, allows the use of preconditioners constructed for the fixed-parameter case, and has certain other advantages. Experiments on standard PDE continuation test problems indicate that this approach is effective.
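The predictor-corrector structure described above can be sketched on a toy problem: tracking the solution curve of f(x, λ) = x² + λ² - 1 = 0 (the unit circle). The corrector solves the underdetermined Newton equation augmented with orthogonality to an approximate tangent; a dense 2x2 solve stands in for the Krylov solver, and the curve, step length, and iteration counts are illustrative assumptions.

```python
# Predictor-corrector path following on the unit circle.
import math

def f(x, lam):
    return x * x + lam * lam - 1.0

def grad(x, lam):                 # Jacobian of f: the 1x2 row [df/dx, df/dlam]
    return [2.0 * x, 2.0 * lam]

def solve_2x2(M, rhs):            # Cramer's rule for the augmented system
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(rhs[0] * M[1][1] - rhs[1] * M[0][1]) / det,
            (M[0][0] * rhs[1] - M[1][0] * rhs[0]) / det]

x, lam = 1.0, 0.0                 # start on the curve
h = 0.1                           # predictor step length
for _ in range(15):               # trace part of the circle
    g = grad(x, lam)
    t = [-g[1], g[0]]             # approximate tangent (orthogonal to gradient)
    tn = math.hypot(t[0], t[1])
    t = [t[0] / tn, t[1] / tn]
    # predictor: step along the tangent (cheap but leaves the curve)
    px, plam = x + h * t[0], lam + h * t[1]
    # corrector: Newton on [f = 0; step orthogonal to the tangent]
    for _ in range(10):
        g = grad(px, plam)
        dx, dlam = solve_2x2([g, t], [-f(px, plam), 0.0])
        px, plam = px + dx, plam + dlam
    x, lam = px, plam

residual = abs(f(x, lam))         # distance from the curve after correction
```

In the setting of the abstract, the 2x2 solve is replaced by a Krylov iteration on the augmented (or constrained) corrector system, which is where the conditioning issues discussed above arise.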
NASA Astrophysics Data System (ADS)
Gatsis, John
An investigation of preconditioning techniques is presented for a Newton-Krylov algorithm that is used for the computation of steady, compressible, high Reynolds number flows about airfoils. A second-order centred-difference method is used to discretize the compressible Navier-Stokes (NS) equations that govern the fluid flow. The one-equation Spalart-Allmaras turbulence model is used. The discretized equations are solved using Newton's method, and the generalized minimal residual (GMRES) Krylov subspace method is used to approximately solve the linear system. These preconditioning techniques are first applied to the solution of the discretized steady convection-diffusion equation. Various orderings, iterative block incomplete LU (BILU) preconditioning and multigrid preconditioning are explored. The baseline preconditioner is a BILU factorization of a lower-order discretization of the system matrix in the Newton linearization. An ordering based on the minimum discarded fill (MDF) ordering is developed and compared to the widely used reverse Cuthill-McKee (RCM) ordering. An evolutionary algorithm is used to investigate and enhance this ordering. For the convection-diffusion equation, the MDF-based ordering performs well, while RCM is superior for the NS equations. Experiments for inviscid, laminar, and turbulent cases are presented to show the effectiveness of iterative BILU preconditioning in terms of reducing the number of GMRES iterations, and hence the memory requirements of the Newton-Krylov algorithm. Multigrid preconditioning also reduces the number of GMRES iterations. The framework for the iterative BILU and BILU-smoothed multigrid preconditioning algorithms is presented in detail.
Application of Block Krylov Subspace Spectral Methods to Maxwell's Equations
NASA Astrophysics Data System (ADS)
Lambers, James V.
2009-10-01
Ever since its introduction by Kane Yee over forty years ago, the finite-difference time-domain (FDTD) method has been a widely-used technique for solving the time-dependent Maxwell's equations. This paper presents an alternative approach to these equations in the case of spatially-varying electric permittivity and/or magnetic permeability, based on Krylov subspace spectral (KSS) methods. These methods have previously been applied to the variable-coefficient heat equation and wave equation, and have demonstrated high-order accuracy, as well as stability characteristic of implicit time-stepping schemes, even though KSS methods are explicit. KSS methods for scalar equations compute each Fourier coefficient of the solution using techniques developed by Gene Golub and Gérard Meurant for approximating elements of functions of matrices by Gaussian quadrature in the spectral, rather than physical, domain. We show how they can be generalized to coupled systems of equations, such as Maxwell's equations, by choosing appropriate basis functions that, while induced by this coupling, still allow efficient and robust computation of the Fourier coefficients of each spatial component of the electric and magnetic fields. We also discuss the implementation of appropriate boundary conditions for simulation on infinite computational domains, and how discontinuous coefficients can be handled.
Krylov subspace methods for computing hydrodynamic interactions in Brownian dynamics simulations
Ando, Tadashi; Chow, Edmond; Saad, Yousef; Skolnick, Jeffrey
2012-01-01
Hydrodynamic interactions play an important role in the dynamics of macromolecules. The most common way to take into account hydrodynamic effects in molecular simulations is in the context of a Brownian dynamics simulation. However, the calculation of correlated Brownian noise vectors in these simulations is computationally very demanding and alternative methods are desirable. This paper studies methods based on Krylov subspaces for computing Brownian noise vectors. These methods are related to Chebyshev polynomial approximations, but do not require eigenvalue estimates. We show that only low accuracy is required in the Brownian noise vectors to accurately compute values of dynamic and static properties of polymer and monodisperse suspension models. With this level of accuracy, the computational time of Krylov subspace methods scales very nearly as O(N^2) for the number of particles N up to 10 000, which was the limit tested. The performance of the Krylov subspace methods, especially the “block” version, is slightly better than that of the Chebyshev method, even without taking into account the additional cost of eigenvalue estimates required by the latter. Furthermore, at N = 10 000, the Krylov subspace method is 13 times faster than the exact Cholesky method. Thus, Krylov subspace methods are recommended for performing large-scale Brownian dynamics simulations with hydrodynamic interactions. PMID: 22897254
Domain decomposed preconditioners with Krylov subspace methods as subdomain solvers
Pernice, M.
1994-12-31
Domain decomposed preconditioners for nonsymmetric partial differential equations typically require the solution of problems on the subdomains. Most implementations employ exact solvers to obtain these solutions. Consequently, work and storage requirements for the subdomain problems grow rapidly with the size of the subdomain problems. Subdomain solves constitute the single largest computational cost of a domain decomposed preconditioner, and improving the efficiency of this phase of the computation will have a significant impact on the performance of the overall method. The small local memory available on the nodes of most message-passing multicomputers motivates consideration of the use of an iterative method for solving subdomain problems. For large-scale systems of equations that are derived from three-dimensional problems, memory considerations alone may dictate the need for using iterative methods for the subdomain problems. In addition to reduced storage requirements, use of an iterative solver on the subdomains allows flexibility in specifying the accuracy of the subdomain solutions. Substantial savings in solution time are possible if the quality of the domain decomposed preconditioner is not degraded too much by relaxing the accuracy of the subdomain solutions. While some work in this direction has been conducted for symmetric problems, similar studies for nonsymmetric problems appear not to have been pursued. This work represents a first step in this direction, and explores the effectiveness of performing subdomain solves using several transpose-free Krylov subspace methods: GMRES, transpose-free QMR, CGS, and a smoothed version of CGS. Depending on the difficulty of the subdomain problem and the convergence tolerance used, a reduction in solution time is possible in addition to the reduced memory requirements. The domain decomposed preconditioner is a Schur complement method in which the interface operators are approximated using interface probing.
Druskin, V.; Lee, Ping; Knizhnerman, L.
1996-12-31
There is now a growing interest in using Krylov subspace approximations to compute the actions of matrix functions. The main application of this approach is the solution of ODE systems obtained after discretization of partial differential equations by the method of lines. When applying the matrix inverse is relatively inexpensive, it is sometimes attractive to solve the ODE using extended Krylov subspaces, generated by actions of both positive and negative matrix powers. Examples of such problems can be found frequently in computational electromagnetics.
Krylov-Subspace Recycling via the POD-Augmented Conjugate-Gradient Method
Carlberg, Kevin; Forstall, Virginia; Tuminaro, Ray
2016-01-01
This paper presents a new Krylov-subspace-recycling method for efficiently solving sequences of linear systems of equations characterized by varying right-hand sides and symmetric-positive-definite matrices. As opposed to typical truncation strategies used in recycling such as deflation, we propose a truncation method inspired by goal-oriented proper orthogonal decomposition (POD) from model reduction. This idea is based on the observation that model reduction aims to compute a low-dimensional subspace that contains an accurate solution; as such, we expect the proposed method to generate a low-dimensional subspace that is well suited for computing solutions that can satisfy inexact tolerances. In particular, we propose specific goal-oriented POD 'ingredients' that align the optimality properties of POD with the objective of Krylov-subspace recycling. To compute solutions in the resulting 'augmented' POD subspace, we propose a hybrid direct/iterative three-stage method that leverages 1) the optimal ordering of POD basis vectors, and 2) well-conditioned reduced matrices. Numerical experiments performed on solid-mechanics problems highlight the benefits of the proposed method over existing approaches for Krylov-subspace recycling.
A subspace preconditioning algorithm for eigenvector/eigenvalue computation
Bramble, J.H.; Knyazev, A.V.; Pasciak, J.E.
1996-12-31
We consider the problem of computing a modest number of the smallest eigenvalues, along with orthogonal bases for the corresponding eigenspaces, of a symmetric positive definite matrix. In our applications, the dimension of the matrix is large and the cost of inverting it is prohibitive. In this paper, we develop an effective parallelizable technique for computing these eigenvalues and eigenvectors utilizing subspace iteration and preconditioning. Estimates are provided which show that the preconditioned method converges linearly and uniformly in the matrix dimension when used with a uniform preconditioner, under the assumption that the approximating subspace is close enough to the span of the desired eigenvectors.
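A single-vector sketch in the spirit of the abstract: the preconditioned iteration x ← x - B(Ax - ρ(x)x), where B approximates A^{-1} (here a simple Jacobi preconditioner) and ρ is the Rayleigh quotient. The paper's method iterates a whole subspace with orthogonalization; this pure-Python toy version, with an illustrative 3x3 matrix, finds only the single smallest eigenvalue.

```python
# Preconditioned inverse-iteration-style sketch for the smallest eigenvalue,
# avoiding any factorization or inversion of A.

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def dot(a, b):
    return sum(p * q for p, q in zip(a, b))

A = [[2.0, -1.0, 0.0],
     [-1.0, 2.0, -1.0],
     [0.0, -1.0, 2.0]]            # SPD; smallest eigenvalue is 2 - sqrt(2)

x = [1.0, 0.0, 0.0]
for _ in range(200):
    Ax = mat_vec(A, x)
    rho = dot(x, Ax) / dot(x, x)  # Rayleigh quotient
    r = [Ax[i] - rho * x[i] for i in range(3)]          # eigen-residual
    x = [x[i] - r[i] / A[i][i] for i in range(3)]       # Jacobi-preconditioned step
    nrm = dot(x, x) ** 0.5
    x = [xi / nrm for xi in x]    # keep the iterate normalized

rho = dot(x, mat_vec(A, x))       # converged Rayleigh quotient (x has unit norm)
```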
Krylov Subspace Methods for Complex Non-Hermitian Linear Systems. Thesis
NASA Technical Reports Server (NTRS)
Freund, Roland W.
1991-01-01
We consider Krylov subspace methods for the solution of large sparse linear systems Ax = b with complex non-Hermitian coefficient matrices. Such linear systems arise in important applications, such as inverse scattering, numerical solution of time-dependent Schrödinger equations, underwater acoustics, eddy current computations, numerical computations in quantum chromodynamics, and numerical conformal mapping. Typically, the resulting coefficient matrices A exhibit special structures, such as complex symmetry, or they are shifted Hermitian matrices. In this paper, we first describe a Krylov subspace approach with iterates defined by a quasi-minimal residual property, the QMR method, for solving general complex non-Hermitian linear systems. Then, we study special Krylov subspace methods designed for the two families of complex symmetric and shifted Hermitian linear systems, respectively. We also include some results concerning the obvious approach to general complex linear systems by solving equivalent real linear systems for the real and imaginary parts of x. Finally, numerical experiments for linear systems arising from the complex Helmholtz equation are reported.
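The "equivalent real system" approach mentioned above can be sketched directly: a complex system Ax = b is rewritten as a real system of twice the size in the real and imaginary parts of x. The sizes and numbers below are tiny and illustrative, not from the paper's experiments.

```python
# Map a complex linear system to its equivalent real formulation.

def to_real_system(A, b):
    """Map complex A x = b to [[Re A, -Im A], [Im A, Re A]] [Re x; Im x] = [Re b; Im b]."""
    n = len(b)
    M = [[0.0] * (2 * n) for _ in range(2 * n)]
    rhs = [0.0] * (2 * n)
    for i in range(n):
        rhs[i] = b[i].real
        rhs[n + i] = b[i].imag
        for j in range(n):
            M[i][j] = A[i][j].real
            M[i][n + j] = -A[i][j].imag
            M[n + i][j] = A[i][j].imag
            M[n + i][n + j] = A[i][j].real
    return M, rhs

# 1x1 complex example: (2 + i) x = 3 + 4i, so x = (3 + 4i) / (2 + i) = 2 + i.
M, rhs = to_real_system([[2 + 1j]], [3 + 4j])
# Solve the resulting 2x2 real system by Cramer's rule.
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
xr = (rhs[0] * M[1][1] - rhs[1] * M[0][1]) / det
xi = (M[0][0] * rhs[1] - M[1][0] * rhs[0]) / det
```

The doubled real system is what a real-arithmetic Krylov solver would see; the paper's point is that its spectrum differs from that of A, which is why specialized complex-symmetric methods can be preferable.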
Multigrid and Krylov Subspace Methods for the Discrete Stokes Equations
NASA Technical Reports Server (NTRS)
Elman, Howard C.
1996-01-01
Discretization of the Stokes equations produces a symmetric indefinite system of linear equations. For stable discretizations, a variety of numerical methods have been proposed that have rates of convergence independent of the mesh size used in the discretization. In this paper, we compare the performance of four such methods: variants of the Uzawa, preconditioned conjugate gradient, preconditioned conjugate residual, and multigrid methods, for solving several two-dimensional model problems. The results indicate that where it is applicable, multigrid with smoothing based on incomplete factorization is more efficient than the other methods, but typically by no more than a factor of two. The conjugate residual method has the advantage of being both independent of iteration parameters and widely applicable.
A General Algorithm for Reusing Krylov Subspace Information. I. Unsteady Navier-Stokes
NASA Technical Reports Server (NTRS)
Carpenter, Mark H.; Vuik, C.; Lucas, Peter; vanGijzen, Martin; Bijl, Hester
2010-01-01
A general algorithm is developed that reuses available information to accelerate the iterative convergence of linear systems with multiple right-hand sides A x = b (sup i), which are commonly encountered in steady or unsteady simulations of nonlinear equations. The algorithm is based on the classical GMRES algorithm with eigenvector enrichment but also includes a Galerkin projection preprocessing step and several novel Krylov subspace reuse strategies. The new approach is applied to a set of test problems, including an unsteady turbulent airfoil, and is shown in some cases to provide significant improvement in computational efficiency relative to baseline approaches.
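The Galerkin projection preprocessing step mentioned above can be sketched as follows: before the Krylov iteration starts on a new right-hand side, project onto a subspace W retained from earlier solves, taking x0 = Wy with (WᵀAW)y = Wᵀb. The matrix, right-hand side, and retained basis below are illustrative assumptions; the algorithm in the paper retains richer subspaces.

```python
# Galerkin projection onto a retained (recycled) subspace as a preprocessing
# step: the resulting initial residual is orthogonal to that subspace.

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def dot(a, b):
    return sum(p * q for p, q in zip(a, b))

A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
b = [1.0, 0.0, 1.0]

# Retained one-dimensional subspace W = span{w}, e.g. a previous solution.
w = [0.2, -0.1, 0.55]
Aw = mat_vec(A, w)
y = dot(w, b) / dot(w, Aw)        # solve the 1x1 projected system (w^T A w) y = w^T b
x0 = [y * wi for wi in w]         # projected initial guess

Ax0 = mat_vec(A, x0)
r0 = [b[i] - Ax0[i] for i in range(3)]
orth = dot(w, r0)                 # Galerkin condition: ~0 up to rounding
```

The subsequent GMRES iteration then only has to resolve the residual components outside the recycled subspace, which is the source of the speedup reported.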
Druskin, V.; Knizhnerman, L.
1994-12-31
The authors solve the Cauchy problem for an ODE system Au + ∂u/∂t = 0, u|_{t=0} = φ, where A is a square real nonnegative definite symmetric matrix of order N and φ is a vector from R^N. The stiffness matrix A is obtained from the semi-discretization of a parabolic equation or system with time-independent coefficients. The authors are particularly interested in large stiff 3-D problems for the scalar diffusion and vectorial Maxwell's equations. First they consider an explicit method in which the solution on a whole time interval is projected on a Krylov subspace generated by A. Then they suggest another Krylov subspace with better approximating properties, using powers of an implicit transition operator. These Krylov subspace methods generate polynomial approximations for the solution of the ODE that are optimal in a spectral sense, similar to CG for systems of linear equations.
Krylov subspace iterative methods for boundary element method based near-field acoustic holography.
Valdivia, Nicolas; Williams, Earl G
2005-02-01
The reconstruction of the acoustic field for general surfaces is obtained from the solution of a matrix system that results from a boundary integral equation discretized using boundary element methods. The solution to the resultant matrix system is obtained using iterative regularization methods that counteract the effect of noise on the measurements. These methods do not require the calculation of the singular value decomposition, which can be expensive when the matrix system is considerably large. Krylov subspace methods are iterative methods that exhibit the phenomenon known as "semi-convergence": the optimal regularized solution is obtained after a few iterations, but if the iteration is not stopped, the method converges to a solution that is generally totally corrupted by errors in the measurements. For these methods the number of iterations plays the role of the regularization parameter. We focus our attention on the regularizing properties of Krylov subspace methods such as conjugate gradients, least squares QR, and the recently proposed Hybrid method. A discussion and comparison of the available stopping rules is included. A vibrating plate is considered as an example to validate our results.
A new Krylov-subspace method for symmetric indefinite linear systems
Freund, R.W.; Nachtigal, N.M.
1994-10-01
Many important applications involve the solution of large linear systems with symmetric, but indefinite coefficient matrices. For example, such systems arise in incompressible flow computations and as subproblems in optimization algorithms for linear and nonlinear programs. Existing Krylov-subspace iterations for symmetric indefinite systems, such as SYMMLQ and MINRES, require the use of symmetric positive definite preconditioners, which is a rather unnatural restriction when the matrix itself is highly indefinite with both many positive and many negative eigenvalues. In this note, the authors describe a new Krylov-subspace iteration for solving symmetric indefinite linear systems that can be combined with arbitrary symmetric preconditioners. The algorithm can be interpreted as a special case of the quasi-minimal residual method for general non-Hermitian linear systems, and like the latter, it produces iterates defined by a quasi-minimal residual property. The proposed method has the same work and storage requirements per iteration as SYMMLQ or MINRES; however, it usually converges in considerably fewer iterations. Results of numerical experiments are reported.
de la Torre Vega, E.; Cesar Suarez Arriaga, M.
1995-03-01
In geothermal simulation processes, MULKOM uses integrated finite differences to solve the corresponding partial differential equations. This method requires the efficient solution of large sparse nonsymmetric linear systems at each time step. The order of the system is usually greater than one thousand, and its solution can represent around 80% of the total CPU time. If the time spent solving this class of linear systems is reduced, the duration of the numerical simulation decreases notably. When the matrix is large (N ≥ 500) and sparse, it is inefficient to handle all of the system's entries, because the matrix is fully characterized by its nonzero elements, whose number is much smaller than N^2. In this setting, iterative methods have advantages over Gaussian elimination, because the latter fills in matrices that lack any special distribution of their nonzero elements, and because it makes no use of available estimates of the solution. The iterative methods of the conjugate gradient family, based on Krylov subspaces, have the advantage that their convergence speed can be improved by means of preconditioning techniques. The DIOMRES(k,m) method guarantees a continuous descent of the residual norm without incurring division by zero. This technique converges in at most N iterations if the system matrix is symmetric, does not require much memory to converge, and updates the approximation immediately by using incomplete orthogonalization and adequate restarting. A preconditioned version of DIOMRES was applied to problems involving nonsymmetric systems with 1000 unknowns and fewer than five terms per equation. We found that this technique can notably reduce the time needed to find the solution without increasing memory requirements. The coupling of this method to geothermal versions of MULKOM is in progress.
NASA Technical Reports Server (NTRS)
Sidi, Avram
1992-01-01
Let F(z) be a vector-valued function, F: C -> C^N, which is analytic at z=0 and meromorphic in a neighborhood of z=0, and let its Maclaurin series be given. We use vector-valued rational approximation procedures for F(z) that are based on its Maclaurin series, in conjunction with power iterations, to develop bona fide generalizations of the power method for an arbitrary N x N matrix that may or may not be diagonalizable. These generalizations can be used to obtain simultaneously several of the largest distinct eigenvalues and the corresponding invariant subspaces, and we present a detailed convergence theory for them. In addition, it is shown that the generalized power methods of this work are equivalent to some Krylov subspace methods, among them the methods of Arnoldi and Lanczos. Thus, the theory provides a set of completely new results and constructions for these Krylov subspace methods. The theory also suggests a new mode of usage for these Krylov subspace methods, which was observed to possess computational advantages over their common mode of usage.
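The baseline relationship the abstract builds on, the power method finding one dominant eigenvalue while a Krylov method (Arnoldi) extracts several at once from the same matrix-vector products, can be sketched as follows (the nonnegative random test matrix is an illustrative choice: by Perron-Frobenius its dominant eigenvalue is real and simple, so plain power iteration applies):

```python
import numpy as np
from scipy.sparse.linalg import eigs

rng = np.random.default_rng(0)
A = rng.random((50, 50))   # nonnegative: simple, real dominant eigenvalue

# Classical power iteration: one dominant eigenpair.
v = rng.random(50)
for _ in range(500):
    v = A @ v
    v /= np.linalg.norm(v)
lam_power = v @ A @ v      # Rayleigh quotient estimate

# Arnoldi, a Krylov subspace method, yields several dominant eigenvalues at once.
vals = eigs(A, k=3, return_eigenvectors=False)
lam_arnoldi = max(vals, key=abs)
print(lam_power, lam_arnoldi.real)
```

The two estimates of the dominant eigenvalue agree; the Krylov method additionally returns the next-largest eigenvalues "for free," which is the simultaneous-eigenvalue capability the generalized power methods of the paper formalize.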
Investigation of continuous-time quantum walk by using Krylov subspace-Lanczos algorithm
NASA Astrophysics Data System (ADS)
Jafarizadeh, M. A.; Sufiani, R.; Salimi, S.; Jafarizadeh, S.
2007-09-01
In papers [Jafarizadeh and Salimi, Ann. Phys. 322, 1005 (2007) and J. Phys. A: Math. Gen. 39, 13295 (2006)], the amplitudes of continuous-time quantum walk (CTQW) on graphs possessing quantum decomposition (QD graphs) were calculated by a new method based on the spectral distribution associated with their adjacency matrix. Here it is shown that the CTQW on any arbitrary graph can be investigated by the spectral analysis method, simply by using the Krylov subspace-Lanczos algorithm to generate orthonormal bases of the Hilbert space of the quantum walk isomorphic to orthogonal polynomials. A new type of graph possessing a generalized quantum decomposition (GQD) is also introduced; this is achieved simply by relaxing some of the constraints imposed on QD graphs, and it is shown that in both QD and GQD graphs the unit vectors of the strata are identical to the orthonormal basis produced by the Lanczos algorithm. Moreover, it is shown that the probability amplitude of observing the walk at a given vertex is proportional to its coefficient in the corresponding unit vector of its stratum, and it can be written in terms of the amplitude of its stratum. The capability of the Lanczos-based algorithm for evaluation of CTQW on graphs (GQD or non-QD types) has been tested by calculating the probability amplitudes of the quantum walk on some interesting finite (infinite) graphs of GQD type and finite (infinite) path graphs of non-GQD type, where the asymptotic behavior of the probability amplitudes in the limit of a large number of vertices is in agreement with the central limit theorem of [Phys. Rev. E 72, 026113 (2005)]. At the end, some applications of the method, such as implementation of quantum search algorithms, calculating the resistance between two nodes in regular networks, and applications in solid state and condensed matter physics, are discussed; in all of them, the Lanczos algorithm reduces the Hilbert space to some smaller subspaces and the problem is
Krylov methods preconditioned with incompletely factored matrices on the CM-2
NASA Technical Reports Server (NTRS)
Berryman, Harry; Saltz, Joel; Gropp, William; Mirchandaney, Ravi
1989-01-01
The performance of the components of the key iterative kernel of a preconditioned Krylov-space iterative linear system solver is measured. In some sense, these numbers can be regarded as best-case timings for these kernels. Sweeps over meshes, sparse triangular solves, and inner products were timed on a large 3-D model problem over a cube-shaped domain discretized with a seven-point template. The performance of the CM-2 is highly dependent on the use of very specialized programs. These programs mapped a regular problem domain onto the processor topology in a careful manner and used the optimized local NEWS communications network. A rather dramatic deterioration in performance is documented when these ideal conditions no longer apply. A synthetic workload generator was developed to produce and solve a parameterized family of increasingly irregular problems.
Chen, G.; Chacón, L.; Leibs, C.A.; Knoll, D.A.; Taitano, W.
2014-02-01
A recent proof-of-principle study proposes an energy- and charge-conserving, nonlinearly implicit electrostatic particle-in-cell (PIC) algorithm in one dimension [9]. The algorithm in the reference employs an unpreconditioned Jacobian-free Newton–Krylov method, which ensures nonlinear convergence at every timestep (resolving the dynamical timescale of interest). Kinetic enslavement, which is one key component of the algorithm, not only enables fully implicit PIC as a practical approach, but also allows preconditioning the kinetic solver with a fluid approximation. This study proposes such a preconditioner, in which the linearized moment equations are closed with moments computed from particles. Effective acceleration of the linear GMRES solve is demonstrated, on both uniform and non-uniform meshes. The algorithm performance is largely insensitive to the electron–ion mass ratio. Numerical experiments are performed on a 1D multi-scale ion acoustic wave test problem.
NASA Astrophysics Data System (ADS)
Singer, B. Sh.
2008-12-01
The paper presents a new code for modelling electromagnetic fields in complicated 3-D environments and provides examples of the code application. The code is based on an integral equation (IE) for the scattered electromagnetic field, presented in the form used by the Modified Iterative Dissipative Method (MIDM). This IE possesses contraction properties that allow it to be solved iteratively. As a result, for an arbitrary earth model and any source of the electromagnetic field, the sequence of approximations converges to the solution at any frequency. The system of linear equations that represents a finite-dimensional counterpart of the continuous IE is derived using a projection definition of the system matrix. According to this definition, the matrix is calculated by integrating the Green's function over the `source' and `receiver' cells of the numerical grid. Such a system preserves contraction properties of the continuous equation and can be solved using the same iterative technique. The condition number of the system matrix and, therefore, the convergence rate depends only on the physical properties of the model under consideration. In particular, these parameters remain independent of the numerical grid used for numerical simulation. Applied to the system of linear equations, the iterative perturbation approach generates a sequence of approximations, converging to the solution. The number of iterations is significantly reduced by finding the best possible approximant inside the Krylov subspace, which spans either all accumulated iterates or, if it is necessary to save the memory, only a limited number of the latest iterates. Optimization significantly reduces the number of iterates and weakens its dependence on the lateral contrast of the model. Unlike more traditional conjugate gradient approaches, the iterations are terminated when the approximate solution reaches the requested relative accuracy. The number of the required iterates, which for simple
NASA Astrophysics Data System (ADS)
Saadat, Amir; Khomami, Bamin
2014-05-01
Excluded volume and hydrodynamic interactions play a central role in macromolecular dynamics under equilibrium and non-equilibrium settings. The high computational cost of incorporating the influence of hydrodynamic interaction in meso-scale simulation of polymer dynamics has motivated much research on development of high fidelity and cost efficient techniques. Among them, the Chebyshev polynomial based techniques and the Krylov subspace methods are most promising. To this end, in this study we have developed a series of semi-implicit predictor-corrector Brownian dynamics algorithms for bead-spring chain micromechanical model of polymers that utilizes either the Chebyshev or the Krylov framework. The efficiency and fidelity of these new algorithms in equilibrium (radius of gyration and diffusivity) and non-equilibrium conditions (transient planar extensional flow) are demonstrated with particular emphasis on the new enhancements of the Chebyshev polynomial and the Krylov subspace methods. In turn, the algorithm with the highest efficiency and fidelity, namely, the Krylov subspace method, is used to simulate dilute solutions of high molecular weight polystyrene in uniaxial extensional flow. Finally, it is demonstrated that the bead-spring Brownian dynamics simulation with appropriate inclusion of excluded volume and hydrodynamic interactions can quantitatively predict the observed extensional hardening of polystyrene dilute solutions over a broad molecular weight range.
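The Krylov kernel at the heart of such Brownian dynamics codes is the application of the square root of an SPD (diffusion-like) matrix to a vector without ever forming the square root. A hedged sketch of that kernel, Lanczos approximation of A^(1/2) v, follows; the test matrix, subspace dimension, and spectrum are illustrative choices, not the paper's bead-spring model:

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal, sqrtm

def lanczos_sqrt_apply(A, v, m):
    """Approximate A^{1/2} v via an m-step Lanczos process (A SPD)."""
    n = len(v)
    Q = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(m - 1)
    Q[:, 0] = v / np.linalg.norm(v)
    for j in range(m):
        w = A @ Q[:, j]
        if j > 0:
            w -= beta[j - 1] * Q[:, j - 1]
        alpha[j] = Q[:, j] @ w
        w -= alpha[j] * Q[:, j]
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)   # full reorthogonalization
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            Q[:, j + 1] = w / beta[j]
    # f(A) v ~ ||v|| Q f(T) e1, with T the m x m Lanczos tridiagonal matrix
    theta, S = eigh_tridiagonal(alpha, beta)
    return np.linalg.norm(v) * (Q @ (S @ (np.sqrt(theta) * S[0, :])))

# SPD test matrix with a controlled spectrum; compare against dense sqrtm.
rng = np.random.default_rng(1)
V, _ = np.linalg.qr(rng.standard_normal((80, 80)))
A = V @ np.diag(rng.uniform(0.5, 10.0, 80)) @ V.T
x = rng.standard_normal(80)
err = np.linalg.norm(lanczos_sqrt_apply(A, x, 40) - sqrtm(A) @ x)
print(err)
```

Only matrix-vector products with `A` are needed, which is why the approach scales to the large dense diffusion tensors arising from hydrodynamic interactions.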
Numerical simulations of microwave heating of liquids: enhancements using Krylov subspace methods
NASA Astrophysics Data System (ADS)
Lollchund, M. R.; Dookhitram, K.; Sunhaloo, M. S.; Boojhawon, R.
2013-04-01
In this paper, we compare the performance of three iterative solvers for large sparse linear systems arising in the numerical computation of the incompressible Navier-Stokes (NS) equations. These equations are employed mainly in the simulation of microwave heating of liquids. The emphasis of this work is on the application of Krylov projection techniques, such as the Generalized Minimal Residual (GMRES) method, to solve the pressure Poisson equations that result from discretisation of the NS equations. The performance of the GMRES method is compared with the traditional Gauss-Seidel (GS) and point successive over-relaxation (PSOR) techniques through their application to simulate the dynamics of water housed inside a vertical cylindrical vessel subjected to microwave radiation. It is found that as the mesh size increases, GMRES gives the fastest convergence in terms of computational time and number of iterations.
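The gist of the comparison can be reproduced in a few lines on a model pressure-Poisson system; the 5-point-stencil matrix and the fixed sweep count below are illustrative, not the paper's setup:

```python
import numpy as np
from scipy.sparse import kron, identity, diags, tril
from scipy.sparse.linalg import gmres, spsolve_triangular

# 2D Poisson matrix (5-point stencil) on an m x m grid.
m = 20
T = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(m, m))
A = (kron(identity(m), T) + kron(T, identity(m))).tocsr()
b = np.ones(m * m)

# 100 Gauss-Seidel sweeps: x <- x + (D + L)^{-1} (b - A x)
DL = tril(A).tocsr()
x_gs = np.zeros_like(b)
for _ in range(100):
    x_gs += spsolve_triangular(DL, b - A @ x_gs, lower=True)

# Krylov solve of the same system.
x_gm, info = gmres(A, b, restart=100, maxiter=100)
print(np.linalg.norm(b - A @ x_gs), np.linalg.norm(b - A @ x_gm))
```

GMRES drives the residual far below what 100 Gauss-Seidel sweeps achieve on this mesh, and the gap widens as the grid is refined, which mirrors the paper's observation.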
NASA Astrophysics Data System (ADS)
Gillis, T.; Winckelmans, G.; Chatelain, P.
2017-10-01
We formulate the penalization problem inside a vortex particle-mesh method as a linear system. This system has to be solved at every enforcement of the wall boundary condition within a time step. Furthermore, because the underlying problem is a Poisson problem, the solution of this linear system is computationally expensive. For its solution, we here use a recycling iterative solver, rBiCGStab, in order to reduce the number of iterations and therefore decrease the computational cost of the penalization step. For the recycled subspace, we use the orthonormalized previous solutions, as only the right-hand side changes from one solve to the next. This method is validated against benchmark results: the impulsively started cylinder, with validation at low Reynolds number (Re = 550) and computational savings assessments at moderate Reynolds number (Re = 9500); then a flat-plate benchmark (Re = 1000). By improving the convergence behavior, the approach greatly reduces the computational cost of iterative penalization, at a moderate cost in memory overhead.
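The paper's recycling solver is rBiCGStab; as a simpler stand-in, the same idea of reusing orthonormalized previous solutions when only the right-hand side changes can be sketched with GMRES, projecting each new right-hand side onto the span of earlier solutions to build a good initial guess (matrix and drift model are illustrative):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres

n = 500
A = diags([-1.0, 2.4, -1.0], [-1, 0, 1], shape=(n, n)).tocsr()
rng = np.random.default_rng(1)
b0 = rng.standard_normal(n)

counts, sols = [], []
for step in range(5):
    b = b0 + 0.01 * step * rng.standard_normal(n)   # slowly varying RHS
    x0 = None
    if sols:
        # Best initial guess in the span of orthonormalized previous
        # solutions: minimize ||A Q y - b|| over y.
        Q, _ = np.linalg.qr(np.column_stack(sols))
        y = np.linalg.lstsq(A @ Q, b, rcond=None)[0]
        x0 = Q @ y
    k = [0]
    x, info = gmres(A, b, x0=x0,
                    callback=lambda r: k.__setitem__(0, k[0] + 1),
                    callback_type="pr_norm")
    counts.append(k[0])
    sols.append(x)
print(counts)   # later solves start closer and need fewer iterations
```

Because successive right-hand sides differ by only a few percent, the projected initial guess removes most of the residual before the Krylov iteration even starts, the same mechanism that makes recycling pay off in the penalization solver.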
Luanjing Guo; Chuan Lu; Hai Huang; Derek R. Gaston
2012-06-01
Systems of multicomponent reactive transport in porous media that are large, highly nonlinear, and tightly coupled due to complex nonlinear reactions and strong solution-media interactions are often described by a system of coupled nonlinear partial differential algebraic equations (PDAEs). A preconditioned Jacobian-Free Newton-Krylov (JFNK) solution approach is applied to solve the PDAEs in a fully coupled, fully implicit manner. The advantage of the JFNK method is that it avoids explicitly computing and storing the Jacobian matrix during Newton nonlinear iterations for computational efficiency considerations. This solution approach is also enhanced by physics-based block preconditioning and a multigrid algorithm for efficient inversion of the preconditioners. Based on the solution approach, we have developed a reactive transport simulator named RAT. Numerical results are presented to demonstrate the efficiency and massive scalability of the simulator for reactive transport problems involving strong solution-mineral interactions and fast kinetics. It has been applied to study the highly nonlinearly coupled reactive transport system of a promising in situ environmental remediation that involves urea hydrolysis and calcium carbonate precipitation.
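The core of JFNK, a Jacobian-vector product approximated by finite differences of the residual so that no Jacobian is ever formed or stored, fits in a few lines. A hedged sketch follows; the reaction-diffusion residual is an illustrative toy, not the RAT model:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def F(u):
    """Toy nonlinear reaction-diffusion residual (illustrative only)."""
    r = np.empty_like(u)
    r[0], r[-1] = u[0], u[-1]                                  # Dirichlet ends
    r[1:-1] = -(u[2:] - 2.0 * u[1:-1] + u[:-2]) + np.exp(u[1:-1]) - 1.0
    return r

def jfnk(F, u0, tol=1e-10, eps=1e-7):
    u = u0.copy()
    for _ in range(20):
        r = F(u)
        if np.linalg.norm(r) < tol:
            break
        # Jacobian-free matvec: J v ~ (F(u + eps v) - F(u)) / eps
        J = LinearOperator((len(u),) * 2,
                           matvec=lambda v: (F(u + eps * v) - r) / eps)
        du, _ = gmres(J, -r)          # inexact Newton: Krylov linear solve
        u += du
    return u

u = jfnk(F, np.full(50, 0.1))
print(np.linalg.norm(F(u)))
```

Each GMRES iteration costs one extra residual evaluation instead of a Jacobian assembly, which is exactly the memory/compute trade the abstract describes; preconditioning (physics-based blocks in the paper) would be supplied to `gmres` through its `M` argument.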
NASA Astrophysics Data System (ADS)
Hammond, G. E.; Valocchi, A. J.; Lichtner, P. C.
2005-04-01
Modern multicomponent geochemical transport models require the use of parallel computation for carrying out three-dimensional, field-scale simulations due to extreme memory and processing demands. However, to fully exploit the advanced computational power provided by today's supercomputers, innovative parallel algorithms are needed. We demonstrate the use of Jacobian-free Newton-Krylov (JFNK) within the Newton-Raphson method to reduce memory and processing requirements on high-performance computers. We also demonstrate the use of physics-based preconditioners, which are often necessary when using JFNK since no explicit Jacobian matrix is ever formed. We apply JFNK to simulate enhanced in situ bioremediation of a NAPL source zone, which entails highly coupled geochemical and biodegradation reactions. The algorithm's performance is evaluated and compared with conventional solvers and preconditioners. We found that JFNK provided substantial saving in memory (i.e. 30-60%) on problems utilizing up to 512 processors on LANL's ASCI Q. However, the performance based on wallclock time was less advantageous, coming out on par with conventional techniques. In addition, we illustrate deficiencies in physics-based preconditioner performance for biogeochemical transport problems with components that undergo significant sorption or form a local quasi-stationary state.
HyeongKae Park; Robert R. Nourgaliev; Richard C. Martineau; Dana A. Knoll
2008-09-01
We present a high-order accurate spatiotemporal discretization of all-speed flow solvers using the Jacobian-free Newton-Krylov framework. One of the key developments in this work is the physics-based preconditioner for all-speed flow, which makes use of traditional semi-implicit schemes. The physics-based preconditioner is developed in the primitive variable form, which allows a straightforward separation of physical phenomena. Numerical examples demonstrate that the developed preconditioner effectively reduces the number of Krylov iterations, and the efficiency is independent of the Mach number and mesh size under a fixed CFL condition.
Luanjing Guo; Hai Huang; Derek Gaston; Cody Permann; David Andrs; George Redden; Chuan Lu; Don Fox; Yoshiko Fujita
2013-03-01
Modeling large multicomponent reactive transport systems in porous media is particularly challenging when the governing partial differential algebraic equations (PDAEs) are highly nonlinear and tightly coupled due to complex nonlinear reactions and strong solution-media interactions. Here we present a preconditioned Jacobian-Free Newton-Krylov (JFNK) solution approach to solve the governing PDAEs in a fully coupled and fully implicit manner. A well-known advantage of the JFNK method is that it does not require explicitly computing and storing the Jacobian matrix during Newton nonlinear iterations. Our approach further enhances the JFNK method by utilizing physics-based, block preconditioning and a multigrid algorithm for efficient inversion of the preconditioner. This preconditioning strategy accounts for self- and optionally, cross-coupling between primary variables using diagonal and off-diagonal blocks of an approximate Jacobian, respectively. Numerical results are presented demonstrating the efficiency and massive scalability of the solution strategy for reactive transport problems involving strong solution-mineral interactions and fast kinetics. We found that the physics-based, block preconditioner significantly decreases the number of linear iterations, directly reducing computational cost; and the strongly scalable algebraic multigrid algorithm for approximate inversion of the preconditioner leads to excellent parallel scaling performance.
NASA Astrophysics Data System (ADS)
Park, Hyeongkae; Nourgaliev, Robert; Knoll, Dana
2007-11-01
The Discontinuous Galerkin (DG) method for compressible fluid flows is incorporated into the Jacobian-Free Newton-Krylov (JFNK) framework. Advantages of combining the DG with the JFNK are two-fold: a) enabling robust and efficient high-order-accurate modeling of all-speed flows on unstructured grids, opening the possibility for high-fidelity simulation of nuclear-power-industry-relevant flows; and b) ability to tightly, robustly and high-order-accurately couple with other relevant physics (neutronics, thermal-structural response of solids, etc.). In the present study, we focus on the physics-based preconditioning (PBP) of the Krylov method (GMRES), used as the linear solver in our implicit higher-order-accurate Runge-Kutta (ESDIRK) time discretization scheme, exploiting the compactness of the spatial discretization of the DG family. In particular, we utilize the Implicit Continuous-fluid Eulerian (ICE) method and investigate its efficacy as the PBP within the JFNK-DG method. Using eigenvalue analysis, it is found that the ICE collapses the complex components of all eigenvalues of the Jacobian matrix (associated with pressure waves) onto the real axis, thereby enabling at least an order of magnitude faster simulations in nearly-incompressible/weakly-compressible regimes with significant storage savings.
NASA Astrophysics Data System (ADS)
Viallet, M.; Goffrey, T.; Baraffe, I.; Folini, D.; Geroux, C.; Popov, M. V.; Pratt, J.; Walder, R.
2016-02-01
This work is a continuation of our efforts to develop an efficient implicit solver for multidimensional hydrodynamics for the purpose of studying important physical processes in stellar interiors, such as turbulent convection and overshooting. We present an implicit solver that results from the combination of a Jacobian-free Newton-Krylov method and a preconditioning technique tailored to the inviscid, compressible equations of stellar hydrodynamics. We assess the accuracy and performance of the solver for both 2D and 3D problems for Mach numbers down to 10^-6. Although our applications concern flows in stellar interiors, the method can be applied to general advection and/or diffusion-dominated flows. The method presented in this paper opens up new avenues in 3D modeling of realistic stellar interiors allowing the study of important problems in stellar structure and evolution.
Starke, G.
1994-12-31
For nonselfadjoint elliptic boundary value problems which are preconditioned by a substructuring method, i.e., nonoverlapping domain decomposition, the author introduces and studies the concept of subspace orthogonalization. In subspace orthogonalization variants of Krylov methods the computation of inner products and vector updates, and the storage of basis elements is restricted to a (presumably small) subspace, in this case the edge and vertex unknowns with respect to the partitioning into subdomains. The author investigates subspace orthogonalization for two specific iterative algorithms, GMRES and the full orthogonalization method (FOM). This is intended to eliminate certain drawbacks of the Arnoldi-based Krylov subspace methods mentioned above. Above all, the length of the Arnoldi recurrences grows linearly with the iteration index which is therefore restricted to the number of basis elements that can be held in memory. Restarts become necessary and this often results in much slower convergence. The subspace orthogonalization methods, in contrast, require the storage of only the edge and vertex unknowns of each basis element which means that one can iterate much longer before restarts become necessary. Moreover, the computation of inner products is also restricted to the edge and vertex points which avoids the disturbance of the computational flow associated with the solution of subdomain problems. The author views subspace orthogonalization as an alternative to restarting or truncating Krylov subspace methods for nonsymmetric linear systems of equations. Instead of shortening the recurrences, one restricts them to a subset of the unknowns which has to be carefully chosen in order to be able to extend this partial solution to the entire space. The author discusses the convergence properties of these iteration schemes and its advantages compared to restarted or truncated versions of Krylov methods applied to the full preconditioned system.
Globally convergent techniques in nonlinear Newton-Krylov
NASA Technical Reports Server (NTRS)
Brown, Peter N.; Saad, Youcef
1989-01-01
Some convergence theory is presented for nonlinear Krylov subspace methods. The basic idea of these methods is to use variants of Newton's iteration in conjunction with a Krylov subspace method for solving the Jacobian linear systems. These methods are variants of inexact Newton methods where the approximate Newton direction is taken from a subspace of small dimensions. The main focus is to analyze these methods when they are combined with global strategies such as linesearch techniques and model trust region algorithms. Most of the convergence results are formulated for projection onto general subspaces rather than just Krylov subspaces.
Implicit Newton-Krylov methods for modeling blast furnace stoves
Howse, J.W.; Hansen, G.A.; Cagliostro, D.J.; Muske, K.R.
1998-03-01
In this paper the authors discuss the use of an implicit Newton-Krylov method to solve a set of partial differential equations representing a physical model of a blast furnace stove. The blast furnace stove is an integral part of the iron making process in the steel industry. These stoves are used to heat air which is then used in the blast furnace to chemically reduce iron ore to iron metal. The solution technique used to solve the discrete representations of the model and control PDEs must be robust to linear systems with disparate eigenvalues, and must converge rapidly without using tuning parameters. The disparity in eigenvalues is created by the different time scales for convection in the gas and conduction in the brick, combined with a difference between the scaling of the model and control PDEs. A preconditioned implicit Newton-Krylov solution technique was employed. The procedure employs Newton's method, where the update to the current solution at each stage is computed by solving a linear system. This linear system is obtained by linearizing the discrete approximation to the PDEs, using a numerical approximation for the Jacobian of the discretized system. This linear system is then solved for the needed update using a preconditioned Krylov subspace projection method.
A Newton-Krylov Approach to Aerodynamic Shape Optimization in Three Dimensions
NASA Astrophysics Data System (ADS)
Leung, Timothy Man-Ming
A Newton-Krylov algorithm is presented for aerodynamic shape optimization in three dimensions using the Euler equations. An inexact-Newton method is used in the flow solver, a discrete-adjoint method to compute the gradient, and a quasi-Newton optimizer to find the optimum. A Krylov subspace method with approximate-Schur preconditioning is used to solve both the flow equation and the adjoint equation. Basis spline surfaces are used to parameterize the geometry, and a fast algebraic algorithm is used for grid movement. Accurate discrete-adjoint gradients can be obtained in approximately one-fourth the time required for a converged flow solution. Single- and multi-point lift-constrained drag minimization cases are presented for wing design at transonic speeds. In all cases, the optimizer is able to efficiently decrease the objective function and gradient for problems with hundreds of design variables.
McHugh, P.R.
1995-10-01
Fully coupled, Newton-Krylov algorithms are investigated for solving strongly coupled, nonlinear systems of partial differential equations arising in the field of computational fluid dynamics. Primitive variable forms of the steady incompressible and compressible Navier-Stokes and energy equations that describe the flow of a laminar Newtonian fluid in two dimensions are specifically considered. Numerical solutions are obtained by first integrating over discrete finite volumes that compose the computational mesh. The resulting system of nonlinear algebraic equations is linearized using Newton's method. Preconditioned Krylov-subspace-based iterative algorithms then solve these linear systems on each Newton iteration. Selected Krylov algorithms include the Arnoldi-based Generalized Minimal RESidual (GMRES) algorithm, and the Lanczos-based Conjugate Gradients Squared (CGS), Bi-CGSTAB, and Transpose-Free Quasi-Minimal Residual (TFQMR) algorithms. Both Incomplete Lower-Upper (ILU) factorization and domain-based additive and multiplicative Schwarz preconditioning strategies are studied. Numerical techniques such as mesh sequencing, adaptive damping, pseudo-transient relaxation, and parameter continuation are used to improve the solution efficiency, while algorithm implementation is simplified using a numerical Jacobian evaluation. The capabilities of standard Newton-Krylov algorithms are demonstrated via solutions to both incompressible and compressible flow problems. Incompressible flow problems include natural convection in an enclosed cavity, and mixed/forced convection past a backward facing step.
Combined incomplete LU and strongly implicit procedure preconditioning
Meese, E.A.
1996-12-31
For the solution of large sparse linear systems of equations, Krylov-subspace methods have gained great merit. Their efficiency is, however, largely dependent upon preconditioning of the equation system. A family of matrix factorisations often used for preconditioning is obtained from a truncated Gaussian elimination, ILU(p). Less common, supposedly due to its restriction to certain sparsity patterns, are the factorisations generated by the strongly implicit procedure (SIP). The ideas from ILU(p) and SIP are used in this paper to construct a generalized strongly implicit procedure, applicable to matrices with any sparsity pattern. The new algorithm has been run on some test equations, and efficiency improvements over ILU(p) were found.
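The effect of incomplete-factorization preconditioning on a Krylov iteration can be sketched with SciPy's threshold-based `spilu`, which plays the role of ILU(p)/SIP here (the convection-diffusion test matrix, drop tolerance, and fill factor are illustrative):

```python
import numpy as np
from scipy.sparse import kron, identity, diags
from scipy.sparse.linalg import gmres, spilu, LinearOperator

# Nonsymmetric 2D convection-diffusion-like operator.
m = 30
T = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(m, m))
C = diags([-1.0, 1.0], [-1, 1], shape=(m, m))     # skew part: convection
A = (kron(identity(m), T) + kron(T, identity(m))
     + 0.5 * kron(identity(m), C)).tocsc()
b = np.ones(m * m)

# Incomplete LU with a drop tolerance, applied as a preconditioner operator.
ilu = spilu(A, drop_tol=1e-4, fill_factor=10)
M = LinearOperator(A.shape, matvec=ilu.solve)

counts = {"plain": 0, "ilu": 0}
make_cb = lambda key: (lambda _r: counts.__setitem__(key, counts[key] + 1))
gmres(A, b, callback=make_cb("plain"), callback_type="pr_norm")
gmres(A, b, M=M, callback=make_cb("ilu"), callback_type="pr_norm")
print(counts)
```

The iteration counts drop by roughly an order of magnitude with the incomplete factorization in place, which is the dependence on preconditioning that the abstract emphasizes.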
Krylov subspace acceleration of waveform relaxation
Lumsdaine, A.; Wu, Deyun
1996-12-31
Standard solution methods for numerically solving time-dependent problems typically begin by discretizing the problem on a uniform time grid and then sequentially solving for successive time points. The initial time discretization imposes a serialization to the solution process and limits parallel speedup to the speedup available from parallelizing the problem at any given time point. This bottleneck can be circumvented by the use of waveform methods in which multiple time-points of the different components of the solution are computed independently. With the waveform approach, a problem is first spatially decomposed and distributed among the processors of a parallel machine. Each processor then solves its own time-dependent subsystem over the entire interval of interest using previous iterates from other processors as inputs. Synchronization and communication between processors take place infrequently, and communication consists of large packets of information - discretized functions of time (i.e., waveforms).
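The waveform idea, each "processor" integrating its own component over the whole time interval using the other's previous waveform, can be sketched on a coupled pair of ODEs (the system, step size, and sweep count are illustrative):

```python
import numpy as np

# Coupled pair u' = -2u + v, v' = -2v + u, u(0)=1, v(0)=0 on [0, T].
# Jacobi waveform relaxation: each subsystem is integrated over the WHOLE
# interval, with the other's previous waveform as input.
T, n = 1.0, 1000
t = np.linspace(0.0, T, n + 1)
dt = t[1] - t[0]

def integrate(coupling, y0):
    """Backward-Euler integrate y' = -2 y + g(t) over the full interval."""
    y = np.empty(n + 1)
    y[0] = y0
    for k in range(n):
        y[k + 1] = (y[k] + dt * coupling[k + 1]) / (1.0 + 2.0 * dt)
    return y

u = np.ones(n + 1)
v = np.zeros(n + 1)                      # initial waveform guesses
for sweep in range(30):                  # waveform (Jacobi) iterations
    u, v = integrate(v, 1.0), integrate(u, 0.0)

# Exact solution: u = (e^{-t} + e^{-3t})/2, v = (e^{-t} - e^{-3t})/2.
print(abs(u[-1] - 0.5 * (np.exp(-T) + np.exp(-3 * T))))
```

Each `integrate` call is independent of the other within a sweep, so the two subsystems could run on separate processors and exchange whole waveforms only once per sweep, the infrequent, large-packet communication pattern described above; the paper's contribution is accelerating the convergence of exactly this outer iteration with Krylov subspace techniques.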
Minimal Krylov Subspaces for Dimension Reduction
2013-01-01
Krylov subspace methods for the Dirac equation
NASA Astrophysics Data System (ADS)
Beerwerth, Randolf; Bauke, Heiko
2015-03-01
The Lanczos algorithm is evaluated for solving the time-independent as well as the time-dependent Dirac equation with arbitrary electromagnetic fields. We demonstrate that the Lanczos algorithm can yield very precise eigenenergies and allows very precise time propagation of relativistic wave packets. The unboundedness of the Dirac Hamiltonian does not hinder the applicability of the Lanczos algorithm. As the Lanczos algorithm requires only matrix-vector products and inner products, which both can be efficiently parallelized, it is an ideal method for large-scale calculations. The excellent parallelization capabilities are demonstrated by a parallel implementation of the Dirac Lanczos propagator utilizing the Message Passing Interface standard.
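A hedged sketch of the propagation kernel described here: one step of psi <- exp(-i H dt) psi computed in a small Krylov space built by the Lanczos process, requiring only matrix-vector and inner products. The random Hermitian test Hamiltonian and subspace size are illustrative, not the Dirac operator of the paper:

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal, expm

def lanczos_propagate(H, psi, dt, m=25):
    """One step psi <- exp(-i H dt) psi in an m-dimensional Krylov space."""
    n = len(psi)
    Q = np.zeros((n, m), dtype=complex)
    alpha = np.zeros(m)
    beta = np.zeros(m - 1)
    Q[:, 0] = psi / np.linalg.norm(psi)
    for j in range(m):
        w = H @ Q[:, j]
        if j > 0:
            w -= beta[j - 1] * Q[:, j - 1]
        alpha[j] = np.real(np.vdot(Q[:, j], w))
        w -= alpha[j] * Q[:, j]
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].conj().T @ w)  # reorthogonalize
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            Q[:, j + 1] = w / beta[j]
    # exp(-i T dt) e1 expressed in the eigenbasis of the tridiagonal T.
    theta, S = eigh_tridiagonal(alpha, beta)
    coef = S @ (np.exp(-1j * theta * dt) * S[0, :])
    return np.linalg.norm(psi) * (Q @ coef)

# Hermitian test Hamiltonian; compare against the dense matrix exponential.
rng = np.random.default_rng(2)
B = rng.standard_normal((80, 80)) + 1j * rng.standard_normal((80, 80))
H = (B + B.conj().T) / 2
psi0 = rng.standard_normal(80) + 0j
err = np.linalg.norm(lanczos_propagate(H, psi0, 0.1) - expm(-1j * 0.1 * H) @ psi0)
print(err)
```

Only `H @ q` and inner products appear, which is what makes the method easy to parallelize (e.g. with MPI, as in the paper's implementation).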
Application of Krylov exponential propagation to fluid dynamics equations
NASA Technical Reports Server (NTRS)
Saad, Youcef; Semeraro, David
1991-01-01
An application of matrix exponentiation via Krylov subspace projection to the solution of fluid dynamics problems is presented. The main idea is to approximate the operation exp(A)v by means of a projection-like process onto a Krylov subspace. This results in the computation of an exponential matrix-vector product similar to the one above, but of a much smaller size. Time integration schemes can then be devised to exploit this basic computational kernel. The motivation of this approach is to provide time-integration schemes that are essentially of an explicit nature but have good stability properties.
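For the nonsymmetric operators of fluid dynamics, the projection is carried out with the Arnoldi process, and only a small Hessenberg matrix is exponentiated. A hedged sketch (the test operator is an illustrative stand-in for a fluid-dynamics Jacobian):

```python
import numpy as np
from scipy.linalg import expm

def arnoldi_expv(A, v, m=30):
    """Approximate exp(A) v from an m-step Arnoldi factorization."""
    n = len(v)
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    Q[:, 0] = v / np.linalg.norm(v)
    for j in range(m):
        w = A @ Q[:, j]
        for i in range(j + 1):                 # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ w
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        Q[:, j + 1] = w / H[j + 1, j]
    # exp(A) v ~ ||v|| Q_m exp(H_m) e1: only an m x m exponential is needed.
    e1 = np.zeros(m)
    e1[0] = 1.0
    return np.linalg.norm(v) * (Q[:, :m] @ (expm(H[:m, :m]) @ e1))

# Nonsymmetric test operator of modest norm; compare with the dense result.
rng = np.random.default_rng(3)
A = -0.5 * np.eye(60) + 0.1 * rng.standard_normal((60, 60))
v = rng.standard_normal(60)
err = np.linalg.norm(arnoldi_expv(A, v) - expm(A) @ v)
print(err)
```

The cost per time step is m matrix-vector products plus a tiny dense exponential, which is why the resulting schemes behave like explicit methods while inheriting the stability of the exact exponential.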
NASA Astrophysics Data System (ADS)
Jia, Jinhong; Wang, Hong
2015-10-01
Numerical methods for fractional differential equations generate full stiffness matrices, which were traditionally solved via Gaussian-type direct solvers that require O(N^3) computational work and O(N^2) memory to store, where N is the number of spatial grid points in the discretization. We develop a preconditioned fast Krylov subspace iterative method for the efficient and faithful solution of finite volume schemes defined on a locally refined composite mesh for fractional differential equations, to resolve boundary layers of the solutions. Numerical results are presented to show the utility of the method.
Portable, parallel, reusable Krylov space codes
Smith, B.; Gropp, W.
1994-12-31
Krylov space accelerators are an important component of many algorithms for the iterative solution of linear systems. Each Krylov space method has its own particular advantages and disadvantages, so it is desirable to have a variety of them available, all with an identical, easy-to-use interface. A common complaint application programmers have with available software libraries for the iterative solution of linear systems is that they require the programmer to use the data structures provided by the library; the library is not able to work with the data structures of the application code. Hence, application programmers find themselves constantly recoding the Krylov space algorithms. The Krylov space package (KSP) is a data-structure-neutral implementation of a variety of Krylov space methods, including preconditioned conjugate gradient, GMRES, BiCG-Stab, transpose-free QMR, and CGS. Unlike all other software libraries for linear systems that the authors are aware of, KSP will work with any application code's data structures, in Fortran or C. Due to its data-structure-neutral design, KSP runs unchanged on both sequential and parallel machines. KSP has been tested on workstations, the Intel i860 and Paragon, Thinking Machines CM-5, and the IBM SP1.
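The data-structure-neutral idea survives in modern libraries: SciPy's Krylov solvers, like KSP, need only a user-supplied matvec, so the application keeps its own storage layout. A sketch (the stencil code below is an illustrative example, not KSP itself):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg, gmres

# The application stores its unknowns as a 2D array and applies a 5-point
# Laplacian stencil directly; the solver never sees this data structure.
m = 32

def apply_laplacian(x_flat):
    x = x_flat.reshape(m, m)
    y = 4.0 * x
    y[1:, :] -= x[:-1, :]
    y[:-1, :] -= x[1:, :]
    y[:, 1:] -= x[:, :-1]
    y[:, :-1] -= x[:, 1:]
    return y.ravel()

A = LinearOperator((m * m, m * m), matvec=apply_laplacian)
b = np.ones(m * m)

# Different Krylov methods swap in behind the identical matvec interface.
x_cg, _ = cg(A, b, maxiter=2000)
x_gm, _ = gmres(A, b, restart=50, maxiter=200)
print(np.linalg.norm(apply_laplacian(x_cg) - b),
      np.linalg.norm(apply_laplacian(x_gm) - b))
```

Because the solver interacts with the operator only through `matvec`, the same application code runs whether the underlying data lives in a dense array, a sparse matrix, or a distributed structure, which is the portability argument KSP makes.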
Approximate inverse preconditioning of iterative methods for nonsymmetric linear systems
Benzi, M.; Tuma, M.
1996-12-31
A method for computing an incomplete factorization of the inverse of a nonsymmetric matrix A is presented. The resulting factorized sparse approximate inverse is used as a preconditioner in the iterative solution of Ax = b by Krylov subspace methods.
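A minimal sketch of the general idea, assuming a Frobenius-norm SPAI construction on the pattern of A (the paper's method computes a factorized approximate inverse, which differs in detail):

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import gmres

def spai_columns(A):
    """Frobenius-norm sparse approximate inverse on the pattern of A (sketch).

    For each column j, minimize ||A m_j - e_j||_2 with m_j restricted to the
    sparsity pattern of A's j-th column.
    """
    A = sparse.csc_matrix(A)
    n = A.shape[0]
    M = sparse.lil_matrix((n, n))
    for j in range(n):
        pattern = A[:, j].nonzero()[0]        # allowed nonzeros of column j
        sub = A[:, pattern].toarray()         # dense least squares: fine for a demo
        e = np.zeros(n)
        e[j] = 1.0
        m, *_ = np.linalg.lstsq(sub, e, rcond=None)
        for i, val in zip(pattern, m):
            M[i, j] = val
    return sparse.csc_matrix(M)

# Nonsymmetric convection-diffusion-like test matrix.
n = 200
A = sparse.diags([-1.3, 3.0, -0.7], [-1, 0, 1], shape=(n, n), format='csc')
M = spai_columns(A)
b = np.ones(n)

its = {'plain': 0, 'spai': 0}
x0, _ = gmres(A, b, atol=1e-8,
              callback=lambda rn: its.__setitem__('plain', its['plain'] + 1),
              callback_type='pr_norm')
x1, _ = gmres(A, b, M=M, atol=1e-8,
              callback=lambda rn: its.__setitem__('spai', its['spai'] + 1),
              callback_type='pr_norm')
print(its, np.linalg.norm(A @ x1 - b))
```

Because M is applied as an explicit sparse matrix-vector product, the preconditioner is attractive on parallel hardware, which is one of the usual motivations for approximate-inverse methods.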
NASA Astrophysics Data System (ADS)
Hwang, Feng-Nan; Cai, Shang-Rong; Shao, Yun-Long; Wu, Jong-Shinn
2010-09-01
We investigate fully parallel Newton-Krylov-Schwarz (NKS) algorithms for solving the large sparse nonlinear systems of equations arising from the finite element discretization of the three-dimensional Poisson-Boltzmann equation (PBE), which is often used to describe the colloidal phenomena of an electric double layer around charged objects in colloidal and interfacial science. The NKS algorithm employs an inexact Newton method with backtracking (INB) as the nonlinear solver in conjunction with a Krylov subspace method as the linear solver for the corresponding Jacobian system. An overlapping Schwarz method is used as a preconditioner to accelerate the convergence of the linear solver. Two test cases, including two isolated charged particles and two colloidal particles in a cylindrical pore, are used as benchmark problems to validate the correctness of our parallel NKS-based PBE solver. In addition, a truly three-dimensional case, which models the interaction between two charged spherical particles within a rough charged micro-capillary, is simulated to demonstrate the applicability of our PBE solver to a problem with complex geometry. Finally, based on the results obtained from a PC cluster of parallel machines, we show numerically that NKS is quite suitable for the numerical simulation of interaction between colloidal particles, since NKS is robust in the sense that INB is able to converge within a small number of iterations regardless of the geometry, the mesh size, or the number of processors. With the help of an additive preconditioned Krylov subspace method, NKS achieves a parallel efficiency of 71% or better on up to a hundred processors for a 3D problem with 5 million unknowns.
Preconditioned conjugate gradient methods for the Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Ajmani, Kumud; Ng, Wing-Fai; Liou, Meng-Sing
1994-01-01
A preconditioned Krylov subspace method (GMRES) is used to solve the linear systems of equations formed at each time-integration step of the unsteady, two-dimensional, compressible Navier-Stokes equations of fluid flow. The Navier-Stokes equations are cast in an implicit, upwind finite-volume, flux-split formulation. Several preconditioning techniques are investigated to enhance the efficiency and convergence rate of the implicit solver based on the GMRES algorithm. The superiority of the new solver is established by comparisons with a conventional implicit solver, namely line Gauss-Seidel relaxation (LGSR). Computational test results for low-speed (incompressible flow over a backward-facing step at Mach 0.1), transonic flow (trailing edge flow in a transonic turbine cascade), and hypersonic flow (shock-on-shock interactions on a cylindrical leading edge at Mach 6.0) are presented. For the Mach 0.1 case, overall speedup factors of up to 17 (in terms of time-steps) and 15 (in terms of CPU time on a CRAY-YMP/8) are found in favor of the preconditioned GMRES solver, when compared with the LGSR solver. The corresponding speedup factors for the transonic flow case are 17 and 23, respectively. The hypersonic flow case shows slightly lower speedup factors of 9 and 13, respectively. The study of preconditioners conducted in this research reveals that a new LUSGS-type preconditioner is much more efficient than a conventional incomplete LU-type preconditioner.
Schwarz Preconditioners for Krylov Methods: Theory and Practice
Szyld, Daniel B.
2013-05-10
Several numerical methods were produced and analyzed. The main thrust of the work relates to inexact Krylov subspace methods for the solution of linear systems of equations arising from the discretization of partial differential equations. These are iterative methods, i.e., an approximation to the solution is obtained at each step. Usually, a matrix-vector product is needed at each iteration. In the inexact methods, this product (or the application of a preconditioner) can be performed inexactly. Schwarz methods, based on domain decompositions, are excellent preconditioners for these systems. We contributed towards their understanding from an algebraic point of view, developed new ones, and studied their performance in the inexact setting. We also worked on combinatorial problems to help define the algebraic partition of the domains, with the needed overlap, as well as on PDE-constrained optimization using the above-mentioned inexact Krylov subspace methods.
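A one-level additive Schwarz preconditioner of the kind described can be sketched as follows; the 1D test matrix, subdomain count, and overlap below are illustrative choices, not the report's configuration:

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import gmres, splu, LinearOperator

# Overlapping diagonal blocks of a 1D Laplacian-type matrix (shifted so the
# problem is well conditioned); each block is LU-factored once.
n, nsub, ovl = 300, 6, 5
A = sparse.diags([-1.0, 3.0, -1.0], [-1, 0, 1], shape=(n, n), format='csc')
cuts = np.linspace(0, n, nsub + 1, dtype=int)
doms = [np.arange(max(cuts[i] - ovl, 0), min(cuts[i + 1] + ovl, n))
        for i in range(nsub)]
lus = [splu(sparse.csc_matrix(A[d][:, d])) for d in doms]

def apply_schwarz(r):
    z = np.zeros_like(r)
    for d, lu in zip(doms, lus):
        z[d] += lu.solve(r[d])                # local solves, summed on overlaps
    return z

M = LinearOperator((n, n), matvec=apply_schwarz, dtype=float)
b = np.ones(n)
x, info = gmres(A, b, M=M, atol=1e-10, restart=50)
print(info, np.linalg.norm(A @ x - b))
```

Each subdomain solve is independent, which is what makes the preconditioner naturally parallel; an inexact variant would replace the exact local LU solves with approximate ones.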
NASA Astrophysics Data System (ADS)
Pereira, Rajesh; Paul-Paddock, Connor
2017-06-01
We extend the notion of anticoherent spin states to anticoherent subspaces. An anticoherent subspace of order t is a subspace whose unit vectors are all anticoherent states of order at least t. We use Klein's description of algebras of polynomials which are invariant under finite subgroups of SU(2) on C2 to provide constructions of anticoherent subspaces. We discuss applications of this idea to the entanglement of n qubit symmetric states. Furthermore, we show a connection between the existence of these subspaces and the properties of the higher-rank numerical range for certain products of spin observables. We also note that these constructions give us subspaces of spin states all of whose unit vectors have Majorana representations which are spherical designs of order at least t.
Newton-Krylov methods applied to nonequilibrium radiation diffusion
Knoll, D.A.; Rider, W.J.; Olsen, G.L.
1998-03-10
The authors present results of applying a matrix-free Newton-Krylov method to a nonequilibrium radiation diffusion problem. Here, there is no use of operator splitting, and Newton's method is used to converge the nonlinearities within a time step. Since the nonlinear residual is formed, it is used to monitor convergence. It is demonstrated that a simple Picard-based linearization produces a sufficient preconditioning matrix for the Krylov method, thus alleviating the need to form or store a Jacobian matrix for Newton's method. They discuss the possibility that the Newton-Krylov approach may allow larger time steps, without loss of accuracy, as compared to an operator-split approach where nonlinearities are not converged within a time step.
Accelerating molecular property calculations with nonorthonormal Krylov space methods
Furche, Filipp; Krull, Brandon T.; Nguyen, Brian D.; Kwon, Jake
2016-05-03
Here, we formulate Krylov space methods for large eigenvalue problems and linear equation systems that take advantage of decreasing residual norms to reduce the cost of matrix-vector multiplication. The residuals are used as subspace basis without prior orthonormalization, which leads to generalized eigenvalue problems or linear equation systems on the Krylov space. These nonorthonormal Krylov space (nKs) algorithms are favorable for large matrices with irregular sparsity patterns whose elements are computed on the fly, because fewer operations are necessary as the residual norm decreases as compared to the conventional method, while errors in the desired eigenpairs and solution vectors remain small. We consider real symmetric and symplectic eigenvalue problems as well as linear equation systems and Sylvester equations as they appear in configuration interaction and response theory. The nKs method can be implemented in existing electronic structure codes with minor modifications and yields speed-ups of 1.2-1.8 in typical time-dependent Hartree-Fock and density functional applications without accuracy loss. The algorithm can compute entire linear subspaces simultaneously which benefits electronic spectra and force constant calculations requiring many eigenpairs or solution vectors. The nKs approach is related to difference density methods in electronic ground state calculations, and particularly efficient for integral direct computations of exchange-type contractions. By combination with resolution-of-the-identity methods for Coulomb contractions, three- to fivefold speed-ups of hybrid time-dependent density functional excited state and response calculations are achieved.
A Krylov-Schwarz iterative solver for the shallow water equations
NASA Astrophysics Data System (ADS)
Goossens, Serge; Tan, Kian; Roose, Dirk
In the DELFT3D-FLOW software, time integration is done by an ADI method, in which the ordering of explicit and implicit steps at every time step leads to a system of equations for the water elevation. Until recently this system was solved by an ADI iteration process, which does not converge very well for large time steps and small mesh widths. We implemented a robust solver by using a Krylov subspace method with the ADI method acting as a preconditioner. This solver is used as the subdomain solver in a domain decomposition method, which is also accelerated by a Krylov subspace method. In this case certain vectors from the subspace, constructed during the solution process, can be reused in the solution of the subsequent linear systems, and this makes the method even more efficient. The adopted domain decomposition method is an additive preconditioner, so it is inherently parallel.
Lattice QCD computations: Recent progress with modern Krylov subspace methods
Frommer, A.
1996-12-31
Quantum chromodynamics (QCD) is the fundamental theory of the strong interaction of matter. In order to compare the theory with results from experimental physics, the theory has to be reformulated as a discrete problem of lattice gauge theory using stochastic simulations. The computational challenge consists in solving several hundred very large linear systems with several right-hand sides. A considerable part of the world's supercomputer time is spent in such QCD calculations. This paper presents results on solving systems for Wilson fermions. Recent progress is reviewed on algorithms obtained in cooperation with partners from theoretical physics.
Avoiding Communication in Two-Sided Krylov Subspace Methods
2011-08-16
In addition to TSQR, the communication-avoiding implementations require an additional kernel to compute the Gram-like matrix Ṽ^T V, where V and Ṽ are the two Krylov basis matrices. The basis is built from scaled and shifted polynomials defined by the recurrence P_0(z) = 1, P_1(z) = (1/(2g))(z − c), P_{k+1}(z) = (1/g)[(z − c)P_k(z) − (d²/(4g))P_{k−1}(z)], where the coefficients g, c and d serve to scale and shift the spectrum.
Preconditioning Newton-Krylov Methods for Variably Saturated Flow
Woodward, C.; Jones, J.
2000-01-07
In this paper, we compare the effectiveness of three preconditioning strategies in simulations of variably saturated flow. Using Richards' equation as our model, we solve the nonlinear system using a Newton-Krylov method. Since Krylov solvers can stagnate, resulting in slow convergence, we investigate different strategies of preconditioning the Jacobian system. Our work uses a multigrid method to solve the preconditioning systems, with three different approximations to the Jacobian matrix. One approximation lags the nonlinearities, the second results from discarding selected off-diagonal contributions, and the third matrix considered is the full Jacobian. Results indicate that although the Jacobian is more accurate, its usage as a preconditioning matrix should be limited, as it requires much more storage than the simpler approximations. Also, simply lagging the nonlinearities gives a preconditioning matrix that is almost as effective as the full Jacobian but much easier to compute.
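The trade-off can be sketched on a generic Newton-step system J du = rhs with a cubic stand-in nonlinearity (not Richards' equation): the lagged matrix is factored once and reused as the preconditioner, and is nearly as effective as preconditioning with the full Jacobian itself.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import gmres, splu, LinearOperator

# Newton-step system J du = rhs with J = A + 3 diag(u^2); precondition with an
# LU factorization of the lagged matrix A + 3 diag(u_old^2).
n = 400
A = sparse.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csc') * (n + 1) ** 2
u = np.linspace(0.0, 1.0, n) ** 2                 # current Newton iterate
u_old = u + 0.05                                  # previous (lagged) iterate
J = sparse.csc_matrix(A + 3.0 * sparse.diags(u ** 2))
M_lag = splu(sparse.csc_matrix(A + 3.0 * sparse.diags(u_old ** 2)))
M = LinearOperator((n, n), matvec=M_lag.solve, dtype=float)
rhs = np.ones(n)

its = {'plain': 0, 'lagged': 0}
x0, info0 = gmres(J, rhs, atol=1e-10, maxiter=200,
                  callback=lambda rn: its.__setitem__('plain', its['plain'] + 1),
                  callback_type='pr_norm')
x1, info1 = gmres(J, rhs, M=M, atol=1e-10, maxiter=200,
                  callback=lambda rn: its.__setitem__('lagged', its['lagged'] + 1),
                  callback_type='pr_norm')
print(its)
```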
Preconditioning techniques for the iterative solution of scattering problems
NASA Astrophysics Data System (ADS)
Egidi, Nadaniela; Maponi, Pierluigi
2008-09-01
We consider a time-harmonic electromagnetic scattering problem for an inhomogeneous medium. Some symmetry hypotheses on the refractive index of the medium and on the electromagnetic fields allow us to reduce this problem to a two-dimensional scattering problem. This boundary value problem is defined on an unbounded domain, so its numerical solution cannot be obtained by a straightforward application of usual methods such as finite difference methods and finite element methods. A possible way to overcome this difficulty is given by an equivalent integral formulation of this problem, where the scattered field can be computed from the solution of a Fredholm integral equation of the second kind. The numerical approximation of this problem usually produces large dense linear systems. We consider usual iterative methods for the solution of such linear systems, and we study some preconditioning techniques to improve the efficiency of these methods. We show some numerical results obtained with two well-known Krylov subspace methods, i.e., Bi-CGSTAB and GMRES.
Short Communication: A Parallel Newton-Krylov Method for Navier-Stokes Rotorcraft Codes
NASA Astrophysics Data System (ADS)
Ekici, Kivanc; Lyrintzis, Anastasios S.
2003-05-01
The application of Krylov subspace iterative methods to unsteady three-dimensional Navier-Stokes codes on massively parallel and distributed computing environments is investigated. Previously, the Euler mode of the Navier-Stokes flow solver Transonic Unsteady Rotor Navier-Stokes (TURNS) has been coupled with a Newton-Krylov scheme which uses two Conjugate-Gradient-like (CG) iterative methods. For the efficient implementation of Newton-Krylov methods to the Navier-Stokes mode of TURNS, efficient preconditioners must be used. Parallel implicit operators are used and compared as preconditioners. Results are presented for two-dimensional and three-dimensional viscous cases. The Message Passing Interface (MPI) protocol is used, because of its portability to various parallel architectures.
Accelerating molecular property calculations with nonorthonormal Krylov space methods
Furche, Filipp; Krull, Brandon T.; Nguyen, Brian D.; ...
2016-05-03
Here, we formulate Krylov space methods for large eigenvalue problems and linear equation systems that take advantage of decreasing residual norms to reduce the cost of matrix-vector multiplication. The residuals are used as subspace basis without prior orthonormalization, which leads to generalized eigenvalue problems or linear equation systems on the Krylov space. These nonorthonormal Krylov space (nKs) algorithms are favorable for large matrices with irregular sparsity patterns whose elements are computed on the fly, because fewer operations are necessary as the residual norm decreases as compared to the conventional method, while errors in the desired eigenpairs and solution vectors remainmore » small. We consider real symmetric and symplectic eigenvalue problems as well as linear equation systems and Sylvester equations as they appear in configuration interaction and response theory. The nKs method can be implemented in existing electronic structure codes with minor modifications and yields speed-ups of 1.2-1.8 in typical time-dependent Hartree-Fock and density functional applications without accuracy loss. The algorithm can compute entire linear subspaces simultaneously which benefits electronic spectra and force constant calculations requiring many eigenpairs or solution vectors. The nKs approach is related to difference density methods in electronic ground state calculations, and particularly efficient for integral direct computations of exchange-type contractions. By combination with resolution-of-the-identity methods for Coulomb contractions, three- to fivefold speed-ups of hybrid time-dependent density functional excited state and response calculations are achieved.« less
Conformal mapping and convergence of Krylov iterations
Driscoll, T.A.; Trefethen, L.N.
1994-12-31
Connections between conformal mapping and matrix iterations have been known for many years. The idea underlying these connections is as follows. Suppose the spectrum of a matrix or operator A is contained in a Jordan region E in the complex plane with 0 ∉ E. Let φ(z) denote a conformal map of the exterior of E onto the exterior of the unit disk, with φ(∞) = ∞. Then 1/|φ(0)| is an upper bound for the optimal asymptotic convergence factor of any Krylov subspace iteration. This idea can be made precise in various ways, depending on the matrix iterations, on whether A is finite or infinite dimensional, and on what bounds are assumed on the non-normality of A. This paper explores these connections for a variety of matrix examples, making use of a new MATLAB Schwarz-Christoffel Mapping Toolbox developed by the first author. Unlike the earlier Fortran Schwarz-Christoffel package SCPACK, the new toolbox computes exterior as well as interior Schwarz-Christoffel maps, making it easy to experiment with spectra that are not necessarily symmetric about an axis.
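For the simplest case — a spectrum filling a real interval E = [a, b] — the exterior map is known in closed form and 1/|φ(0)| reduces to the classical factor ρ = (√κ − 1)/(√κ + 1) with κ = b/a. A quick numerical check of this factor against CG's A-norm error bound:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

# Spectrum filling E = [a, b] on the real axis; the exterior conformal map of
# an interval gives 1/|phi(0)| = (sqrt(kappa) - 1)/(sqrt(kappa) + 1), the
# classical factor in CG's A-norm error bound ||e_k||_A <= 2 rho^k ||e_0||_A.
a, bnd = 1.0, 100.0
n = 500
lam = np.linspace(a, bnd, n)
A = diags(lam)
rhs = np.ones(n)
x_true = rhs / lam                            # exact solution of the diagonal system
rho = (np.sqrt(bnd / a) - 1.0) / (np.sqrt(bnd / a) + 1.0)

errs = []                                     # A-norm error after each iteration
cb = lambda xk: errs.append(np.sqrt((xk - x_true) @ (lam * (xk - x_true))))
x, info = cg(A, rhs, atol=1e-12, maxiter=60, callback=cb)
e0 = np.sqrt(x_true @ (lam * x_true))         # A-norm error of the zero initial guess
print(rho, errs[-1] / e0)
```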
Krylov methods for compressible flows
NASA Technical Reports Server (NTRS)
Tidriri, M. D.
1995-01-01
We investigate the application of Krylov methods to compressible flows, and the effect of implicit boundary conditions on the implicit solution of nonlinear problems. Two defect-correction procedures, namely, approximate factorization (AF) for structured grids and ILU/GMRES for general grids, are considered. Also considered here are Newton-Krylov matrix-free methods that we combined with the use of mixed discretization schemes in the implicitly defined Jacobian and its preconditioner. Numerical experiments that show the performance of our approaches are then presented.
Application of nonlinear Krylov acceleration to radiative transfer problems
Till, A. T.; Adams, M. L.; Morel, J. E.
2013-07-01
The iterative solution technique used for radiative transfer is normally nested, with outer thermal iterations and inner transport iterations. We implement a nonlinear Krylov acceleration (NKA) method in the PDT code for radiative transfer problems that breaks nesting, resulting in more thermal iterations but significantly fewer total inner transport iterations. Using the metric of total inner transport iterations, we investigate a crooked-pipe-like problem and a pseudo-shock-tube problem. Using only sweep preconditioning, we compare NKA against a typical inner/outer method employing GMRES/Newton and find NKA to be comparable or superior. Finally, we demonstrate the efficacy of applying diffusion-based preconditioning to grey problems in conjunction with NKA.
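NKA is closely related to Anderson acceleration of a fixed-point iteration; a minimal windowed least-squares sketch on a toy contractive map (not PDT's transport setting) illustrates the mechanism:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
d = rng.uniform(0.0, 0.9, n)                  # componentwise contraction factors
c = rng.standard_normal(n)
G = lambda x: c + d * x                       # fixed point x* = c / (1 - d)
x_star = c / (1.0 - d)

def picard(x, iters):
    for _ in range(iters):
        x = G(x)
    return x

def anderson(x, iters, m=5, tol=1e-12):
    """Anderson mixing with window m: an NKA-style least-squares update."""
    X, F = [], []
    for k in range(iters):
        f = G(x) - x                          # fixed-point residual
        if np.linalg.norm(f) < tol:
            break
        X.append(x.copy())
        F.append(f.copy())
        if len(X) > m + 1:                    # keep a sliding window
            X.pop(0)
            F.pop(0)
        if len(X) > 1:
            dX = np.column_stack([X[i + 1] - X[i] for i in range(len(X) - 1)])
            dF = np.column_stack([F[i + 1] - F[i] for i in range(len(F) - 1)])
            gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
            x = x + f - (dX + dF) @ gamma     # accelerated update
        else:
            x = x + f                         # plain Picard step to start
    return x, k

xa, ka = anderson(np.zeros(n), 100)
xp = picard(np.zeros(n), 100)
print(ka, np.linalg.norm(xa - x_star), np.linalg.norm(xp - x_star))
```

The accelerated iteration reuses a short history of residuals to extrapolate, which is what lets the un-nested scheme trade a few extra outer iterations for far fewer inner ones.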
Acceleration of GPU-based Krylov solvers via data transfer reduction
Anzt, Hartwig; Tomov, Stanimire; Luszczek, Piotr; ...
2015-04-08
Krylov subspace iterative solvers are often the method of choice when solving large sparse linear systems. At the same time, hardware accelerators such as graphics processing units continue to offer significant floating point performance gains for matrix and vector computations through easy-to-use libraries of computational kernels. However, as these libraries are usually composed of a well optimized but limited set of linear algebra operations, applications that use them often fail to reduce certain data communications, and hence fail to leverage the full potential of the accelerator. In this study, we target the acceleration of Krylov subspace iterative methods for graphics processing units, and in particular the Biconjugate Gradient Stabilized solver. We show that significant improvement can be achieved by reformulating the method to reduce data communications through application-specific kernels instead of using the generic BLAS kernels, e.g. as provided by NVIDIA's cuBLAS library, and by designing a graphics processing unit specific sparse matrix-vector product kernel that is able to more efficiently use the graphics processing unit's computing power. Furthermore, we derive a model estimating the performance improvement, and use experimental data to validate the expected runtime savings. Finally, considering that the derived implementation achieves significantly higher performance, we assert that similar optimizations addressing algorithm structure, as well as the sparse matrix-vector product, are crucial for the subsequent development of high-performance graphics processing unit accelerated Krylov subspace iterative methods.
Improvements in Block-Krylov Ritz Vectors and the Boundary Flexibility Method of Component Synthesis
NASA Technical Reports Server (NTRS)
Carney, Kelly Scott
1997-01-01
A method of dynamic substructuring is presented which utilizes a set of static Ritz vectors as a replacement for normal eigenvectors in component mode synthesis. This set of Ritz vectors is generated in a recurrence relationship, proposed by Wilson, which has the form of a block-Krylov subspace. The initial seed to the recurrence algorithm is based upon the boundary flexibility vectors of the component. Improvements have been made in the formulation of the initial seed to the Krylov sequence through the use of block-filtering. A method to shift the Krylov sequence to create Ritz vectors that represent the dynamic behavior of the component at target frequencies, the target frequency being determined by the applied forcing functions, has been developed. A method to terminate the Krylov sequence has also been developed. Various orthonormalization schemes have been developed and evaluated, including the Cholesky/QR method. Several auxiliary theorems and proofs which illustrate issues in component mode synthesis and loss of orthogonality in the Krylov sequence have also been presented. The resulting methodology is applicable to both fixed and free-interface boundary components, and results in a general component model appropriate for any type of dynamic analysis. The accuracy is found to be comparable to that of component synthesis based upon normal modes, using fewer generalized coordinates. In addition, the block-Krylov recurrence algorithm is a series of static solutions and so requires significantly less computation than solving the normal eigenspace problem. The requirement for fewer vectors to form the component, coupled with the lower computational expense of calculating these Ritz vectors, combine to create a method more efficient than traditional component mode synthesis.
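The recurrence itself — static solves X_{j+1} = K⁻¹ M X_j with M-orthonormalization of each block — can be sketched as follows. The 1D stiffness matrix and random seed block are illustrative; the method described above seeds with boundary flexibility vectors instead:

```python
import numpy as np
from scipy.linalg import cholesky, eigh, solve

rng = np.random.default_rng(4)
n, blk, nblocks = 120, 3, 5                    # dofs, block size, number of blocks
K = np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
K *= (n + 1) ** 2                              # 1D bar stiffness
M = np.eye(n)                                  # lumped (identity) mass for simplicity

def m_orth(X, Q=None):
    """M-orthogonalize X against Q, then M-orthonormalize via Cholesky/QR."""
    if Q is not None:
        X = X - Q @ (Q.T @ (M @ X))
    R = cholesky(X.T @ (M @ X))                # upper triangular, X^T M X = R^T R
    return solve(R.T, X.T).T                   # X @ inv(R)

X = m_orth(solve(K, rng.standard_normal((n, blk))))   # random static seed block
Q = X
for _ in range(nblocks - 1):
    X = m_orth(solve(K, M @ X), Q)             # next block: one static solve
    Q = np.column_stack([Q, X])

# Rayleigh-Ritz on the reduced model versus the exact modes.
lam_r = eigh(Q.T @ K @ Q, Q.T @ M @ Q, eigvals_only=True)
lam = eigh(K, M, eigvals_only=True)
print(lam_r[:4])
print(lam[:4])
```

Every new block costs one static (factorized) solve rather than an eigensolve, which is the efficiency argument made above; the Ritz values bound the exact frequencies from above.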
NASA Astrophysics Data System (ADS)
Bisetti, Fabrizio
2012-06-01
Recent trends in hydrocarbon fuel research indicate that the number of species and reactions in chemical kinetic mechanisms is rapidly increasing in an effort to provide predictive capabilities for fuels of practical interest. In order to cope with the computational cost associated with the time integration of stiff, large chemical systems, a novel approach is proposed. The approach combines an exponential integrator and Krylov subspace approximations to the exponential function of the Jacobian matrix. The components of the approach are described in detail and applied to the ignition of stoichiometric methane-air and iso-octane-air mixtures, here described by two widely adopted chemical kinetic mechanisms. The approach is found to be robust even at relatively large time steps and the global error displays a nominal third-order convergence. The performance of the approach is improved by utilising an adaptive algorithm for the selection of the Krylov subspace size, which guarantees an approximation to the matrix exponential within user-defined error tolerance. The Krylov projection of the Jacobian matrix onto a low-dimensional space is interpreted as a local model reduction with a well-defined error control strategy. Finally, the performance of the approach is discussed with regard to the optimal selection of the parameters governing the accuracy of its individual components.
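The Krylov approximation of exp(τJ)v — project J onto a small Krylov space and exponentiate the resulting Hessenberg matrix — can be sketched as follows; the stiff toy Jacobian is illustrative, not a chemical kinetic mechanism:

```python
import numpy as np
from scipy.linalg import expm

def expm_krylov(J, v, tau, m=40):
    """Arnoldi approximation of expm(tau*J) @ v from an m-dimensional Krylov space."""
    n = v.size
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(v)
    V[:, 0] = v / beta
    for j in range(m):
        w = J @ V[:, j]
        for i in range(j + 1):                 # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:                # happy breakdown
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    E = expm(tau * H[:m, :m])                  # exponential of a small matrix only
    return beta * (V[:, :m] @ E[:, 0])

# Stiff toy Jacobian: decay rates spanning four decades plus weak coupling.
rng = np.random.default_rng(2)
n = 400
J = -np.diag(np.logspace(0, 4, n)) + 0.1 * rng.standard_normal((n, n)) / n
v = rng.standard_normal(n)
tau = 1e-3
approx = expm_krylov(J, v, tau)
exact = expm(tau * J) @ v                      # dense reference, O(n^3)
print(np.linalg.norm(approx - exact) / np.linalg.norm(exact))
```

An adaptive variant, as described above, would grow or shrink m until the projected approximation meets a user-defined error tolerance.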
Harris, D B
2006-07-11
Broadband subspace detectors are introduced for seismological applications that require the detection of repetitive sources that produce similar, yet significantly variable seismic signals. Like correlation detectors, of which they are a generalization, subspace detectors often permit remarkably sensitive detection of small events. The subspace detector derives its name from the fact that it projects a sliding window of data drawn from a continuous stream onto a vector signal subspace spanning the collection of signals expected to be generated by a particular source. Empirical procedures are presented for designing subspaces from clusters of events characterizing a source. Furthermore, a solution is presented for the problem of selecting the dimension of the subspace to maximize the probability of detecting repetitive events at a fixed false alarm rate. An example illustrates subspace design and detection using events in the 2002 San Ramon, California earthquake swarm.
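A minimal sketch of the detector: build an orthonormal basis for the signal subspace from an SVD of template waveforms, then slide a window along the stream and compute the fraction of its energy captured by the subspace. The templates and stream here are synthetic illustrations, not the empirical design procedure of the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
L = 200                                        # template length in samples
t = np.arange(L)
templates = np.stack([np.sin(2 * np.pi * t / 25) * np.exp(-t / 80),
                      np.sin(2 * np.pi * t / 25 + 0.3) * np.exp(-t / 60)])
U, s, _ = np.linalg.svd(templates.T, full_matrices=False)
B = U[:, :2]                                   # orthonormal basis of the signal subspace

stream = 0.05 * rng.standard_normal(2000)      # noise-only data stream...
stream[700:700 + L] += templates[0]            # ...with one hidden event at sample 700

def subspace_stats(x, B):
    """Fraction of window energy captured by the subspace, for every window."""
    L = B.shape[0]
    stats = np.empty(x.size - L + 1)
    for k in range(stats.size):
        w = x[k:k + L]
        stats[k] = np.sum((B.T @ w) ** 2) / (w @ w)   # in [0, 1]
    return stats

stats = subspace_stats(stream, B)
print(stats.argmax(), stats.max())
```

A detection is declared wherever the statistic exceeds a threshold chosen for a target false-alarm rate; choosing the subspace dimension to maximize detection probability at that rate is the design problem solved in the paper.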
Implementation of the block-Krylov boundary flexibility method of component synthesis
NASA Astrophysics Data System (ADS)
Carney, Kelly S.; Abdallah, Ayman A.; Hucklebridge, Arthur A.
1993-05-01
A method of dynamic substructuring is presented which utilizes a set of static Ritz vectors as a replacement for normal eigenvectors in component mode synthesis. This set of Ritz vectors is generated in a recurrence relationship, which has the form of a block-Krylov subspace. The initial seed to the recurrence algorithm is based on the boundary flexibility vectors of the component. This algorithm is not load-dependent, is applicable to both fixed and free-interface boundary components, and results in a general component model appropriate for any type of dynamic analysis. This methodology was implemented in the MSC/NASTRAN normal modes solution sequence using DMAP. The accuracy is found to be comparable to that of component synthesis based upon normal modes. The block-Krylov recurrence algorithm is a series of static solutions and so requires significantly less computation than solving the normal eigenspace problem.
Newton-Raphson preconditioner for Krylov type solvers on GPU devices.
Kushida, Noriyuki
2016-01-01
A new Newton-Raphson method based preconditioner for Krylov type linear equation solvers for GPGPU is developed, and the performance is investigated. Conventional preconditioners improve the convergence of Krylov type solvers, and perform well on CPUs. However, they do not perform well on GPGPUs, because of the complexity of implementing powerful preconditioners. The developed preconditioner is based on the BFGS Hessian matrix approximation technique, which is well known as a robust and fast nonlinear equation solver. Because the Hessian matrix in the BFGS represents the coefficient matrix of a system of linear equations in some sense, the approximated Hessian matrix can serve as a preconditioner. On the other hand, BFGS requires storing dense matrices and inverting them, which should be avoided on modern computers and supercomputers. To overcome these disadvantages, we therefore introduce a limited memory BFGS, which requires less memory space and less computational effort than the BFGS. In addition, a limited memory BFGS can be implemented with BLAS libraries, which are well optimized for target architectures. The Hessian matrix approximation becomes better as the Krylov solver iteration continues, which has both advantages and disadvantages. The preconditioning matrix varies through Krylov solver iterations, and only flexible Krylov solvers can work well with the developed preconditioner. The GCR method, which is a flexible Krylov solver, is employed because of the prevalence of GCR as a Krylov solver with a variable preconditioner. As a result of the performance investigation, the new preconditioner shows the following benefits: (1) The new preconditioner is robust; i.e., it converges while conventional preconditioners (the diagonal scaling and the SSOR preconditioners) fail. (2) In the best case scenarios, it is over 10 times faster than conventional preconditioners on a CPU. (3) Because it requires only simple operations, it performs well on a GPGPU.
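The limited-memory BFGS application that the abstract builds on is the standard two-loop recursion over stored (s, y) update pairs. A minimal sketch of using it as a variable preconditioner follows; the class name is hypothetical and this is not the paper's GPU code, only an illustration of the underlying recursion:

```python
import numpy as np
from collections import deque

class LBFGSPreconditioner:
    """Sketch: apply a limited-memory BFGS inverse-Hessian
    approximation as a variable preconditioner z = M^-1 r."""
    def __init__(self, m=5):
        self.pairs = deque(maxlen=m)      # stored (s, y) pairs

    def update(self, s, y):
        if s @ y > 1e-12:                 # keep only curvature-positive pairs
            self.pairs.append((s, y))

    def apply(self, r):
        q = r.copy()
        alphas = []
        for s, y in reversed(self.pairs):            # first loop: newest to oldest
            rho = 1.0 / (y @ s)
            a = rho * (s @ q)
            q -= a * y
            alphas.append((a, rho))
        if self.pairs:                               # initial scaling H0 = (s.y)/(y.y) I
            s, y = self.pairs[-1]
            q *= (s @ y) / (y @ y)
        for (s, y), (a, rho) in zip(self.pairs, reversed(alphas)):
            b = rho * (y @ q)                        # second loop: oldest to newest
            q += (a - b) * s
        return q
```

Because `apply` changes every time `update` is called, the operator varies between Krylov iterations, which is why a flexible solver such as GCR is required.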
Projection preconditioning for Lanczos-type methods
Bielawski, S.S.; Mulyarchik, S.G.; Popov, A.V.
1996-12-31
We show how auxiliary subspaces and related projectors may be used for preconditioning nonsymmetric systems of linear equations. It is shown that a system preconditioned in such a way (or projected) is better conditioned than the original system (at least if the coefficient matrix of the system to be solved is symmetrizable). Two approaches for solving the projected system are outlined. The first implies straightforward computation of the projected matrix and subsequent use of some direct or iterative method. The second approach is the projection preconditioning of a conjugate gradient-type solver. The latter approach is developed here in the context of the biconjugate gradient iteration and some related Lanczos-type algorithms. Some possible particular choices of auxiliary subspaces are discussed. It is shown that one of them is equivalent to using colorings. Some results of numerical experiments are reported.
NASA Astrophysics Data System (ADS)
Koldan, Jelena; Puzyrev, Vladimir; de la Puente, Josep; Houzeaux, Guillaume; Cela, José María
2014-06-01
We present an elaborate preconditioning scheme for Krylov subspace methods which has been developed to improve the performance and reduce the execution time of parallel node-based finite-element (FE) solvers for 3-D electromagnetic (EM) numerical modelling in exploration geophysics. This new preconditioner is based on algebraic multigrid (AMG) that uses different basic relaxation methods, such as Jacobi, symmetric successive over-relaxation (SSOR) and Gauss-Seidel, as smoothers and the wave front algorithm to create groups, which are used for a coarse-level generation. We have implemented and tested this new preconditioner within our parallel nodal FE solver for 3-D forward problems in EM induction geophysics. We have performed series of experiments for several models with different conductivity structures and characteristics to test the performance of our AMG preconditioning technique when combined with the biconjugate gradient stabilized method. The results have shown that the more challenging the problem is in terms of conductivity contrasts, ratio between the sizes of grid elements and/or frequency, the more benefit is obtained by using this preconditioner. Compared to other preconditioning schemes, such as diagonal, SSOR and truncated approximate inverse, the AMG preconditioner greatly improves the convergence of the iterative solver for all tested models. Also, when it comes to cases in which other preconditioners succeed to converge to a desired precision, AMG is able to considerably reduce the total execution time of the forward-problem code, up to an order of magnitude. Furthermore, the tests have confirmed that our AMG scheme ensures grid-independent rate of convergence, as well as improvement in convergence regardless of how big local mesh refinements are. In addition, AMG is designed to be a black-box preconditioner, which makes it easy to use and combine with different iterative methods. Finally, it has proved to be very practical and efficient in the
Acceleration of k-Eigenvalue / Criticality Calculations using the Jacobian-Free Newton-Krylov Method
Dana Knoll; HyeongKae Park; Chris Newman
2011-02-01
We present a new approach for the $k$--eigenvalue problem using a combination of classical power iteration and the Jacobian--free Newton--Krylov method (JFNK). The method poses the $k$--eigenvalue problem as a fully coupled nonlinear system, which is solved by JFNK with an effective block preconditioning consisting of the power iteration and algebraic multigrid. We demonstrate effectiveness and algorithmic scalability of the method on a 1-D, one-group problem and two 2-D, two-group problems, and provide comparisons to other efforts using similar algorithmic approaches.
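The "Jacobian-free" kernel that JFNK methods rely on approximates the action of the Jacobian on a vector by a directional difference, so the Jacobian is never formed. A minimal sketch follows; the step-size heuristic shown is one common choice, not necessarily the one used in this work:

```python
import numpy as np

def jfnk_matvec(F, u, v, eps=None):
    """Jacobian-free approximation J(u) v ~ (F(u + eps*v) - F(u)) / eps,
    the directional-difference kernel of JFNK (illustrative sketch)."""
    if eps is None:
        # common heuristic: scale the step by the magnitudes of u and v
        eps = np.sqrt(np.finfo(float).eps) * (1.0 + np.linalg.norm(u)) \
              / max(np.linalg.norm(v), 1e-30)
    return (F(u + eps * v) - F(u)) / eps
```

A Krylov solver such as GMRES only ever needs these matrix-vector products, so this callback is all that couples the nonlinear residual F to the linear solver.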
NASA Astrophysics Data System (ADS)
Hwang, Feng-Nan; Wei, Zih-Hao; Huang, Tsung-Ming; Wang, Weichung
2010-04-01
We develop a parallel Jacobi-Davidson approach for finding a partial set of eigenpairs of large sparse polynomial eigenvalue problems with application in quantum dot simulation. A Jacobi-Davidson eigenvalue solver is implemented based on the Portable, Extensible Toolkit for Scientific Computation (PETSc). The eigensolver thus inherits PETSc's efficient and various parallel operations, linear solvers, preconditioning schemes, and ease of use. The parallel eigenvalue solver is then used to solve higher degree polynomial eigenvalue problems arising in numerical simulations of three dimensional quantum dots governed by Schrödinger's equations. We find that the parallel restricted additive Schwarz preconditioner in conjunction with a parallel Krylov subspace method (e.g. GMRES) can solve the correction equations, the most costly step in the Jacobi-Davidson algorithm, very efficiently in parallel. In addition, the overall performance is quite satisfactory. We have observed near-perfect superlinear speedup by using up to 320 processors. The parallel eigensolver can find all target interior eigenpairs of a quintic polynomial eigenvalue problem with more than 32 million variables within 12 minutes by using 272 Intel 3.0 GHz processors.
Subspace ensembles for classification
NASA Astrophysics Data System (ADS)
Sun, Shiliang; Zhang, Changshui
2007-11-01
Ensemble learning constitutes one of the principal current directions in machine learning and data mining. In this paper, we explore subspace ensembles for classification by manipulating different feature subspaces. Commencing with the nature of ensemble efficacy, we probe into the microcosmic meaning of ensemble diversity, and propose to use region partitioning and region weighting to implement effective subspace ensembles. Individual classifiers possessing eminent performance on a partitioned region, reflected by high neighborhood accuracies, are deemed to contribute largely to this region, and are assigned large weights in determining the labels of instances in this area. A robust algorithm “Sena” that embodies this mechanism is presented, which is insensitive to the number of nearest neighbors chosen to calculate neighborhood accuracies. The algorithm exhibits improved performance over the well-known ensembles of bagging, AdaBoost and random subspace. The variation of its effectiveness with different base classifiers is also investigated.
NASA Astrophysics Data System (ADS)
Ding, Longyun; Gao, Su
2008-04-01
We show that any infinite-dimensional Banach (or more generally, Fréchet) space contains linear subspaces of arbitrarily high Borel complexity which admit separable complete norms giving rise to the inherited Borel structure.
Efficient solution of parabolic equations by Krylov approximation methods
NASA Technical Reports Server (NTRS)
Gallopoulos, E.; Saad, Y.
1990-01-01
Numerical techniques for solving parabolic equations by the method of lines are addressed. The main motivation for the proposed approach is the possibility of exploiting a high degree of parallelism in a simple manner. The basic idea of the method is to approximate the action of the evolution operator on a given state vector by means of a projection process onto a Krylov subspace. Thus, the resulting approximation consists of applying an evolution operator of a very small dimension to a known vector which is, in turn, computed accurately by exploiting well-known rational approximations to the exponential. Because the rational approximation is only applied to a small matrix, the only operations required with the original large matrix are matrix-by-vector multiplications, and as a result the algorithm can easily be parallelized and vectorized. Some relevant approximation and stability issues are discussed. We present some numerical experiments with the method and compare its performance with a few explicit and implicit algorithms.
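The projection idea, approximating the action of the matrix exponential on a vector through a small Krylov-subspace problem, can be sketched for the symmetric case with the Lanczos process. This is an illustrative numpy sketch with no breakdown handling; it evaluates the small exponential by eigendecomposition, whereas the paper applies rational approximations to the small matrix:

```python
import numpy as np

def lanczos_expm_action(A, v, t, m=20):
    """Approximate exp(t A) v for symmetric A via an m-step Lanczos
    projection: exp(t A) v ~ ||v|| V_m exp(t T_m) e_1 (sketch)."""
    n = len(v)
    V = np.zeros((n, m)); alpha = np.zeros(m); beta = np.zeros(m - 1)
    beta0 = np.linalg.norm(v)
    V[:, 0] = v / beta0
    for j in range(m):
        w = A @ V[:, j]                          # only large-matrix op: a matvec
        alpha[j] = V[:, j] @ w
        w -= alpha[j] * V[:, j]
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        w -= V[:, :j + 1] @ (V[:, :j + 1].T @ w)  # full reorthogonalization
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    lam, Q = np.linalg.eigh(T)                   # small m-by-m problem
    expT_e1 = Q @ (np.exp(t * lam) * Q[0, :])    # exp(t T_m) e_1
    return beta0 * (V @ expT_e1)
```

Only matrix-vector products with the large A appear, which is what makes the scheme easy to parallelize and vectorize.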
Aliaga, José I.; Alonso, Pedro; Badía, José M.; Chacón, Pablo; Davidović, Davor; López-Blanco, José R.; Quintana-Ortí, Enrique S.
2016-03-15
We introduce a new iterative Krylov subspace-based eigensolver for the simulation of macromolecular motions on desktop multithreaded platforms equipped with multicore processors and, possibly, a graphics accelerator (GPU). The method consists of two stages, with the original problem first reduced into a simpler band-structured form by means of a high-performance compute-intensive procedure. This is followed by a memory-intensive but low-cost Krylov iteration, which is off-loaded to be computed on the GPU by means of an efficient data-parallel kernel. The experimental results reveal the performance of the new eigensolver. Concretely, when the method is applied to the simulation of macromolecules with a few thousand degrees of freedom and the number of eigenpairs to be computed is small to moderate, the new solver outperforms other methods implemented as part of high-performance numerical linear algebra packages for multithreaded architectures.
An Inexact Newton-Krylov Algorithm for Constrained Diffeomorphic Image Registration.
Mang, Andreas; Biros, George
We propose numerical algorithms for solving large deformation diffeomorphic image registration problems. We formulate the nonrigid image registration problem as a problem of optimal control. This leads to an infinite-dimensional partial differential equation (PDE) constrained optimization problem. The PDE constraint consists, in its simplest form, of a hyperbolic transport equation for the evolution of the image intensity. The control variable is the velocity field. Tikhonov regularization on the control ensures well-posedness. We consider standard smoothness regularization based on H1- or H2-seminorms. We augment this regularization scheme with a constraint on the divergence of the velocity field (control variable) rendering the deformation incompressible (Stokes regularization scheme) and thus ensuring that the determinant of the deformation gradient is equal to one, up to the numerical error. We use a Fourier pseudospectral discretization in space and a Chebyshev pseudospectral discretization in time. The latter allows us to reduce the number of unknowns and enables the time-adaptive inversion for nonstationary velocity fields. We use a preconditioned, globalized, matrix-free, inexact Newton-Krylov method for numerical optimization. A parameter continuation is designed to estimate an optimal regularization parameter. Regularity is ensured by controlling the geometric properties of the deformation field. Overall, we arrive at a black-box solver that exploits computational tools that are precisely tailored for solving the optimality system. We study spectral properties of the Hessian, grid convergence, numerical accuracy, computational efficiency, and deformation regularity of our scheme. We compare the designed Newton-Krylov methods with a globalized Picard method (preconditioned gradient descent). We study the influence of a varying number of unknowns in time. The reported results demonstrate excellent numerical accuracy, guaranteed local deformation
Notes on Newton-Krylov based Incompressible Flow Projection Solver
Robert Nourgaliev; Mark Christon; J. Bakosi
2012-09-01
The purpose of the present document is to formulate a Jacobian-free Newton-Krylov algorithm for the approximate projection method used in the Hydra-TH code. Hydra-TH is developed by Los Alamos National Laboratory (LANL) under the auspices of the Consortium for Advanced Simulation of Light-Water Reactors (CASL) for thermal-hydraulics applications ranging from grid-to-rod fretting (GTRF) to multiphase subcooled boiling flow. Currently, Hydra-TH is based on the semi-implicit projection method, which provides an excellent platform for simulation of transient single-phase thermal-hydraulics problems. This algorithm, however, is not efficient when applied to very slow or steady-state problems, or to highly nonlinear multiphase problems relevant to nuclear reactor thermal-hydraulics with boiling and condensation. These applications require fully implicit, tightly coupled algorithms. The major technical contribution of the present report is the formulation of a fully implicit projection algorithm which fulfills this purpose. This includes the definition of the nonlinear residuals used for GMRES-based linear iterations, as well as physics-based preconditioning techniques.
Newton-Krylov-Schwarz methods in unstructured grid Euler flow
Keyes, D.E.
1996-12-31
Newton-Krylov methods and Krylov-Schwarz (domain decomposition) methods have begun to become established in computational fluid dynamics (CFD) over the past decade. The former employ a Krylov method inside of Newton's method in a Jacobian-free manner, through directional differencing. The latter employ an overlapping Schwarz domain decomposition to derive a preconditioner for the Krylov accelerator that relies primarily on local information, for data-parallel concurrency. They may be composed as Newton-Krylov-Schwarz (NKS) methods, which seem particularly well suited for solving nonlinear elliptic systems in high-latency, distributed-memory environments. We give a brief description of this family of algorithms, with an emphasis on domain decomposition iterative aspects. We then describe numerical simulations with Newton-Krylov-Schwarz methods on an aerodynamic application emphasizing comparisons with a standard defect-correction approach and subdomain preconditioner consistency.
Newton-Krylov-Schwarz: An implicit solver for CFD
NASA Technical Reports Server (NTRS)
Cai, Xiao-Chuan; Keyes, David E.; Venkatakrishnan, V.
1995-01-01
Newton-Krylov methods and Krylov-Schwarz (domain decomposition) methods have begun to become established in computational fluid dynamics (CFD) over the past decade. The former employ a Krylov method inside of Newton's method in a Jacobian-free manner, through directional differencing. The latter employ an overlapping Schwarz domain decomposition to derive a preconditioner for the Krylov accelerator that relies primarily on local information, for data-parallel concurrency. They may be composed as Newton-Krylov-Schwarz (NKS) methods, which seem particularly well suited for solving nonlinear elliptic systems in high-latency, distributed-memory environments. We give a brief description of this family of algorithms, with an emphasis on domain decomposition iterative aspects. We then describe numerical simulations with Newton-Krylov-Schwarz methods on aerodynamics applications emphasizing comparisons with a standard defect-correction approach, subdomain preconditioner consistency, subdomain preconditioner quality, and the effect of a coarse grid.
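The Schwarz ingredient of NKS, a preconditioner assembled from local subdomain solves of the residual, can be sketched in its simplest one-level additive form on overlapping index sets. This is an illustrative dense sketch, not the restricted or multilevel variants used in production codes:

```python
import numpy as np

def additive_schwarz_apply(A, r, subdomains):
    """One application z = M^-1 r of a one-level additive Schwarz
    preconditioner built from (possibly overlapping) index sets (sketch)."""
    z = np.zeros_like(r)
    for idx in subdomains:
        # restrict, solve the local problem, prolong and sum
        Aii = A[np.ix_(idx, idx)]
        z[idx] += np.linalg.solve(Aii, r[idx])
    return z
```

Each local solve uses only local information, which is what gives the method its data-parallel concurrency; in a Krylov iteration this `apply` is called once per accelerator step.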
An Implicit Energy-Conservative 2D Fokker-Planck Algorithm. II. Jacobian-Free Newton-Krylov Solver
NASA Astrophysics Data System (ADS)
Chacón, L.; Barnes, D. C.; Knoll, D. A.; Miley, G. H.
2000-01-01
Energy-conservative implicit integration schemes for the Fokker-Planck transport equation in multidimensional geometries require inverting a dense, non-symmetric matrix (Jacobian), which is very expensive to store and solve using standard solvers. However, these limitations can be overcome with Newton-Krylov iterative techniques, since they can be implemented Jacobian-free (the Jacobian matrix from Newton's algorithm is never formed nor stored to proceed with the iteration), and their convergence can be accelerated by preconditioning the original problem. In this document, the efficient numerical implementation of an implicit energy-conservative scheme for multidimensional Fokker-Planck problems using multigrid-preconditioned Krylov methods is discussed. Results show that multigrid preconditioning is very effective in speeding convergence and decreasing CPU requirements, particularly in fine meshes. The solver is demonstrated on grids up to 128×128 points in a 2D cylindrical velocity space (vr, vp) with implicit time steps of the order of the collisional time scale of the problem, τ. The method preserves particles exactly, and energy conservation is improved over alternative approaches, particularly in coarse meshes. Typical errors in the total energy over a time period of 10τ remain below a percent.
Nonlinear Krylov acceleration of reacting flow codes
Kumar, S.; Rawat, R.; Smith, P.; Pernice, M.
1996-12-31
We are working on computational simulations of three-dimensional reactive flows in applications encompassing a broad range of chemical engineering problems. Examples of such processes are coal (pulverized and fluidized bed) and gas combustion, petroleum processing (cracking), and metallurgical operations such as smelting. These simulations involve an interplay of various physical and chemical factors such as fluid dynamics with turbulence, convective and radiative heat transfer, multiphase effects such as fluid-particle and particle-particle interactions, and chemical reaction. The governing equations resulting from modeling these processes are highly nonlinear and strongly coupled, thereby rendering their solution by traditional iterative methods (such as nonlinear line Gauss-Seidel methods) very difficult and sometimes impossible. Hence we are exploring the use of nonlinear Krylov techniques (such as GMRES and Bi-CGSTAB) to accelerate and stabilize the existing solver. This strategy allows us to take advantage of the problem-definition capabilities of the existing solver. The overall approach amounts to using the SIMPLE (Semi-Implicit Method for Pressure-Linked Equations) method and its variants as nonlinear preconditioners for the nonlinear Krylov method. We have also adapted a backtracking approach for inexact Newton methods to damp the Newton step in the nonlinear Krylov method. This will be a report on work in progress. Preliminary results with nonlinear GMRES have been very encouraging: in many cases the number of line Gauss-Seidel sweeps has been reduced by about a factor of 5, and increased robustness of the underlying solver has also been observed.
Photometric Study of NPA Rotator (5247) Krylov
NASA Astrophysics Data System (ADS)
Lee, Hee-Jae; Moon, Hong-Kyu; Kim, Myung-Jin; Kim, Chun-Hwey; Durech, Josef; Choi, Young-Jun; Oh, Young-Seok; Park, Jintae; Roh, Dong-Goo; Yim, Hong-Suh; Cha, Sang-Mok; Lee, Yongseok
2017-06-01
We conduct BVRI and R band photometric observations of asteroid (5247) Krylov from January 2016 to April 2016 for 51 nights using the Korea Microlensing Telescope Network (KMTNet). The color indices of (5247) Krylov at the light curve maxima are determined as B - V = 0.841 ± 0.035, V - R = 0.418 ± 0.031, and V - I = 0.871 ± 0.031, where the phase angle is 14.1°. They are acquired after the standardization of BVRI instrumental measurements using the ensemble normalization technique. Based on the color indices, (5247) Krylov is classified as an S-type asteroid. Double periods, that is, a primary period P_{1} = 82.188±0.013 h and a secondary period P_{2} = 67.13±0.20 h, are identified from period searches of its R band light curve. The light curve phases with P_{1}, which indicates that it is a typical Non-Principal Axis (NPA) asteroid. We discuss the possible causes of its NPA rotation.
Exponential-Krylov methods for ordinary differential equations
NASA Astrophysics Data System (ADS)
Tranquilli, Paul; Sandu, Adrian
2014-12-01
This paper develops a new family of exponential time discretization methods called exponential-Krylov (EXPK). The new schemes treat the time discretization and the Krylov-based approximation of exponential matrix-vector products as a single computational process. The classical order conditions theory developed herein accounts for both the temporal and the Krylov approximation errors. Unlike traditional exponential schemes, EXPK methods require the construction of only a single Krylov space at each timestep. The number of basis vectors that guarantee the temporal order of accuracy does not depend on the application at hand. Numerical results show favorable properties of EXPK methods when compared to current exponential schemes.
NASA Astrophysics Data System (ADS)
Jiang, Tian; Zhang, Yong-Tao
2016-04-01
Implicit integration factor (IIF) methods were developed in the literature for solving time-dependent stiff partial differential equations (PDEs). Recently, IIF methods were combined with weighted essentially non-oscillatory (WENO) schemes in Jiang and Zhang (2013) [19] to efficiently solve stiff nonlinear advection-diffusion-reaction equations. The methods can be designed for arbitrary order of accuracy. The stiffness of the system is resolved well and the methods are stable by using time step sizes which are just determined by the non-stiff hyperbolic part of the system. To efficiently calculate large matrix exponentials, Krylov subspace approximation is directly applied to the implicit integration factor (IIF) methods. So far, the IIF methods developed in the literature are multistep methods. In this paper, we develop Krylov single-step IIF-WENO methods for solving stiff advection-diffusion-reaction equations. The methods are designed carefully to avoid generating positive exponentials in the matrix exponentials, which is necessary for the stability of the schemes. We analyze the stability and truncation errors of the single-step IIF schemes. Numerical examples of both scalar equations and systems are shown to demonstrate the accuracy, efficiency and robustness of the new methods.
Kailkhura, Bhavya; Theagarajan, Lakshmi Narasimhan; Varshney, Pramod K.
2017-04-12
In this paper, we generalize the well-known index coding problem to exploit the structure in the source-data to improve system throughput. In many applications (e.g., multimedia), the data to be transmitted may lie (or can be well approximated) in a low-dimensional subspace. We exploit this low-dimensional structure of the data using an algebraic framework to solve the index coding problem (referred to as subspace-aware index coding) as opposed to the traditional index coding problem which is subspace-unaware. Also, we propose an efficient algorithm based on the alternating minimization approach to obtain near optimal index codes for both subspace-aware and -unaware cases. In conclusion, our simulations indicate that under certain conditions, a significant throughput gain (about 90%) can be achieved by subspace-aware index codes over conventional subspace-unaware index codes.
A Hybrid Chebyshev Krylov Subspace Algorithm for Solving Nonsymmetric Systems of Linear Equations.
1984-02-01
eigencomponents have been developed in slightly different contexts by Saad and Sameh [19] and by Jespersen and Buning [10]. 5. Numerical Experiments. In this… computing eigenelements of large unsymmetric matrices. Linear Algebra and its Applications 34:269-295, 1980. [19] Y. Saad and A. Sameh. Iterative Methods for
Recovery Discontinuous Galerkin Jacobian-Free Newton-Krylov Method for All-Speed Flows
HyeongKae Park; Robert Nourgaliev; Vincent Mousseau; Dana Knoll
2008-07-01
A novel numerical algorithm (rDG-JFNK) for all-speed fluid flows with heat conduction and viscosity is introduced. The rDG-JFNK combines the Discontinuous Galerkin spatial discretization with the implicit Runge-Kutta time integration under the Jacobian-free Newton-Krylov framework. We solve the fully-compressible Navier-Stokes equations without operator-splitting of hyperbolic, diffusion and reaction terms, which enables fully-coupled high-order temporal discretization. The stability constraint is removed due to the L-stable Explicit, Singly Diagonal Implicit Runge-Kutta (ESDIRK) scheme. The governing equations are solved in the conservative form, which allows one to accurately compute shock dynamics, as well as low-speed flows. For spatial discretization, we develop a “recovery” family of DG, exhibiting nearly-spectral accuracy. To precondition the Krylov-based linear solver (GMRES), we developed an “Operator-Split” (OS) Physics Based Preconditioner (PBP), in which we transform/simplify the fully-coupled system into a sequence of segregated scalar problems, each of which can be solved efficiently with a multigrid method. Each scalar problem is designed to target/cluster eigenvalues of the Jacobian matrix associated with a specific physics.
A Newton-Krylov solution to the porous medium equations in the agree code
Ward, A. M.; Seker, V.; Xu, Y.; Downar, T. J.
2012-07-01
In order to improve the convergence of the AGREE code for the porous medium equations, a Newton-Krylov solver was developed for steady state problems. The current three-equation system was expanded and then coupled using Newton's method. Theoretical behavior predicts second order convergence, while actual behavior was highly nonlinear. The difference between the current solution and the new exact Newton solution was well below the convergence criteria. While convergence time did not dramatically decrease, the required number of outer iterations was reduced by approximately an order of magnitude. GMRES was also used to solve the problem, where ILU without fill-in was used to precondition the iterative solver, and the performance was slightly slower than the direct solution. (authors)
Subspace Detectors: Efficient Implementation
Harris, D B; Paik, T
2006-07-26
The optimum detector for a known signal in white Gaussian background noise is the matched filter, also known as a correlation detector [Van Trees, 1968]. Correlation detectors offer exquisite sensitivity (high probability of detection at a fixed false alarm rate), but require perfect knowledge of the signal. The sensitivity of correlation detectors is increased by the availability of multichannel data, something common in seismic applications due to the prevalence of three-component stations and arrays. When the signal is imperfectly known, an extension of the correlation detector, the subspace detector, may be able to capture much of the performance of a matched filter [Harris, 2006]. In order to apply a subspace detector, the signal to be detected must be known to lie in a signal subspace of dimension d {ge} 1, which is defined by a set of d linearly-independent basis waveforms. The basis is constructed to span the range of signals anticipated to be emitted by a source of interest. Correlation detectors operate by computing a running correlation coefficient between a template waveform (the signal to be detected) and the data from a window sliding continuously along a data stream. The template waveform and the continuous data stream may be multichannel, as would be true for a three-component seismic station or an array. In such cases, the appropriate correlation operation computes the individual correlations channel-for-channel and sums the result (Figure 1). Both the waveform matching that occurs when a target signal is present and the cross-channel stacking provide processing gain. For a three-component station processing gain occurs from matching the time-history of the signals and their polarization structure. The projection operation that is at the heart of the subspace detector can be expensive to compute if implemented in a straightforward manner, i.e. with direct-form convolutions. The purpose of this report is to indicate how the projection can be
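The running subspace statistic described above, the fraction of sliding-window energy captured by the signal-subspace basis, can be sketched for a single channel as follows. This is an illustrative direct-form sketch; the report's point is precisely that this projection can be computed more cheaply than by such straightforward convolutions:

```python
import numpy as np

def subspace_detector(data, U, win):
    """Sliding-window subspace detection statistic (sketch):
    fraction of window energy captured by the orthonormal basis U
    (U has shape (win, d) with d >= 1 basis waveforms)."""
    stats = []
    for k in range(len(data) - win + 1):
        x = data[k:k + win]
        e = x @ x
        proj = U.T @ x                 # project the window onto the signal subspace
        stats.append((proj @ proj) / e if e > 0 else 0.0)
    return np.array(stats)
```

With d = 1 and U the normalized template, the statistic reduces to the squared running correlation coefficient of an ordinary correlation detector; larger d lets the detector absorb variability in the anticipated signals.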
An Inexact Newton–Krylov Algorithm for Constrained Diffeomorphic Image Registration*
Mang, Andreas; Biros, George
2016-01-01
We propose numerical algorithms for solving large deformation diffeomorphic image registration problems. We formulate the nonrigid image registration problem as a problem of optimal control. This leads to an infinite-dimensional partial differential equation (PDE) constrained optimization problem. The PDE constraint consists, in its simplest form, of a hyperbolic transport equation for the evolution of the image intensity. The control variable is the velocity field. Tikhonov regularization on the control ensures well-posedness. We consider standard smoothness regularization based on H1- or H2-seminorms. We augment this regularization scheme with a constraint on the divergence of the velocity field (control variable) rendering the deformation incompressible (Stokes regularization scheme) and thus ensuring that the determinant of the deformation gradient is equal to one, up to the numerical error. We use a Fourier pseudospectral discretization in space and a Chebyshev pseudospectral discretization in time. The latter allows us to reduce the number of unknowns and enables the time-adaptive inversion for nonstationary velocity fields. We use a preconditioned, globalized, matrix-free, inexact Newton–Krylov method for numerical optimization. A parameter continuation is designed to estimate an optimal regularization parameter. Regularity is ensured by controlling the geometric properties of the deformation field. Overall, we arrive at a black-box solver that exploits computational tools that are precisely tailored for solving the optimality system. We study spectral properties of the Hessian, grid convergence, numerical accuracy, computational efficiency, and deformation regularity of our scheme. We compare the designed Newton–Krylov methods with a globalized Picard method (preconditioned gradient descent). We study the influence of a varying number of unknowns in time. The reported results demonstrate excellent numerical accuracy, guaranteed local deformation
Protected subspace Ramsey spectroscopy
NASA Astrophysics Data System (ADS)
Ostermann, L.; Plankensteiner, D.; Ritsch, H.; Genes, C.
2014-11-01
We study a modified Ramsey spectroscopy technique employing slowly decaying states for quantum metrology applications using dense ensembles. While closely positioned atoms exhibit super-radiant collective decay and dipole-dipole induced frequency shifts, recent results [L. Ostermann, H. Ritsch, and C. Genes, Phys. Rev. Lett. 111, 123601 (2013), 10.1103/PhysRevLett.111.123601] suggest the possibility to suppress such detrimental effects and achieve an even better scaling of the frequency sensitivity with interrogation time than for noninteracting particles. Here we present an in-depth analysis of this "protected subspace Ramsey technique" using improved analytical modeling and numerical simulations including larger three-dimensional (3D) samples. Surprisingly we find that using subradiant states of N particles to encode the atomic coherence yields a scaling of the optimal sensitivity better than 1/√N. Applied to ultracold atoms in 3D optical lattices we predict a precision beyond the single atom linewidth.
Block Krylov-Schur method for large symmetric eigenvalue problems
NASA Astrophysics Data System (ADS)
Zhou, Yunkai; Saad, Yousef
2008-04-01
Stewart's Krylov-Schur algorithm offers two advantages over Sorensen's implicitly restarted Arnoldi (IRA) algorithm. The first is ease of deflation of converged Ritz vectors; the second is the avoidance of the potential forward instability of the QR algorithm. In this paper we develop a block version of the Krylov-Schur algorithm for symmetric eigenproblems. Details of this block algorithm are discussed, including how to handle rank-deficient cases and how to use varying block sizes. Numerical results on the efficiency of the block Krylov-Schur method are reported.
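The core of a block Rayleigh-Ritz procedure of this kind can be sketched in a few lines (a hypothetical dense-matrix illustration with full reorthogonalization; the actual Krylov-Schur algorithm adds implicit restarting, deflation, and the rank-deficiency handling discussed above):

```python
import numpy as np

def block_lanczos_ritz(A, X0, steps):
    # Build the block Krylov basis [X0, A X0, ..., A^steps X0] with full
    # reorthogonalization, then extract Ritz pairs from the projected matrix.
    V0, _ = np.linalg.qr(X0)
    blocks = [V0]
    for _ in range(steps):
        W = A @ blocks[-1]
        for U in blocks:              # full reorthogonalization
            W -= U @ (U.T @ W)
        Q, _ = np.linalg.qr(W)        # a rank-deficient W would need care here
        blocks.append(Q)
    V = np.hstack(blocks)
    theta, S = np.linalg.eigh(V.T @ A @ V)   # Rayleigh-Ritz step
    return theta, V @ S

rng = np.random.default_rng(0)
n = 200
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(np.linspace(1.0, 100.0, n)) @ Q.T   # symmetric, lambda_max = 100
theta, _ = block_lanczos_ritz(A, rng.standard_normal((n, 3)), steps=30)
print(theta[-1])   # largest Ritz value, close to 100
```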
Some experiences with Krylov vectors and Lanczos vectors
NASA Technical Reports Server (NTRS)
Craig, Roy R., Jr.; Su, Tzu-Jeng; Kim, Hyoung M.
1993-01-01
This paper illustrates the use of Krylov vectors and Lanczos vectors for reduced-order modeling in structural dynamics and for control of flexible structures. Krylov vectors and Lanczos vectors are defined and illustrated, and several applications that have been under study at The University of Texas at Austin are reviewed: model reduction for undamped structural dynamics systems, component mode synthesis using Krylov vectors, model reduction of damped structural dynamics systems, and one-sided and two-sided unsymmetric block-Lanczos model-reduction algorithms.
Timmerman, Marieke E; Ceulemans, Eva; De Roover, Kim; Van Leeuwen, Karla
2013-12-01
To achieve an insightful clustering of multivariate data, we propose subspace K-means. Its central idea is to model the centroids and cluster residuals in reduced spaces, which allows for dealing with a wide range of cluster types and yields rich interpretations of the clusters. We review the existing related clustering methods, including deterministic, stochastic, and unsupervised learning approaches. To evaluate subspace K-means, we performed a comparative simulation study, in which we manipulated the overlap of subspaces, the between-cluster variance, and the error variance. The study shows that the subspace K-means algorithm is sensitive to local minima but that the problem can be reasonably dealt with by using partitions of various cluster procedures as a starting point for the algorithm. Subspace K-means performs very well in recovering the true clustering across all conditions considered and appears to be superior to its competitor methods: K-means, reduced K-means, factorial K-means, mixtures of factor analyzers (MFA), and MCLUST. The best competitor method, MFA, showed a performance similar to that of subspace K-means in easy conditions but deteriorated in more difficult ones. Using data from a study on parental behavior, we show that subspace K-means analysis provides a rich insight into the cluster characteristics, in terms of both the relative positions of the clusters (via the centroids) and the shape of the clusters (via the within-cluster residuals).
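As a point of reference for the comparison above, ordinary K-means (Lloyd's algorithm) can be sketched as follows; subspace K-means generalizes this baseline by additionally modeling the centroids and within-cluster residuals in reduced spaces. The data here are synthetic and purely illustrative:

```python
import numpy as np

def lloyd(X, C, iters=50):
    # Plain K-means (Lloyd's algorithm) from given initial centroids C.
    for _ in range(iters):
        d = ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=-1)
        labels = d.argmin(axis=1)                     # assignment step
        C = np.array([X[labels == j].mean(axis=0)     # update step
                      for j in range(len(C))])
    return labels, C

rng = np.random.default_rng(0)
# two well-separated synthetic clusters in the plane
X = np.vstack([rng.normal(0.0, 0.3, size=(50, 2)),
               rng.normal(3.0, 0.3, size=(50, 2))])
labels, C = lloyd(X, C=X[[0, -1]].copy())   # one seed point from each cluster
```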
Quasi-splitting subspaces and Foulis-Randall subspaces
NASA Astrophysics Data System (ADS)
Buhagiar, D.; Chetcuti, E.; Dvurečenskij, A.
2011-12-01
For a pre-Hilbert space S, let F(S) denote the orthogonally closed subspaces, Eq(S) the quasi-splitting subspaces, E(S) the splitting subspaces, D(S) the Foulis-Randall subspaces, and R(S) the maximal Foulis-Randall subspaces, of S. It was an open problem whether the equalities D(S) = F(S) and E(S) = R(S) hold in general [Cattaneo, G. and Marino, G., "Spectral decomposition of pre-Hilbert spaces as regard to suitable classes of normal closed operators," Boll. Unione Mat. Ital. 6 1-B, 451-466 (1982); Cattaneo, G., Franco, G., and Marino, G., "Ordering of families of subspaces of pre-Hilbert spaces and Dacey pre-Hilbert spaces," Boll. Unione Mat. Ital. 71-B, 167-183 (1987); Dvurečenskij, A., Gleason's Theorem and Its Applications (Kluwer, Dordrecht, 1992), p. 243.]. We prove that the first equality is true and exhibit a pre-Hilbert space S for which the second equality fails. In addition, we characterize complete pre-Hilbert spaces as follows: S is a Hilbert space if, and only if, S has an orthonormal basis and Eq(S) admits a non-free charge.
NASA Astrophysics Data System (ADS)
Borgelt, Christian
In clustering we often face the situation that only a subset of the available attributes is relevant for forming clusters, even though this may not be known beforehand. In such cases it is desirable to have a clustering algorithm that automatically weights attributes or even selects a proper subset. In this paper I study such an approach for fuzzy clustering, which is based on the idea of transferring an alternative to the fuzzifier (Klawonn and Höppner, What is fuzzy about fuzzy clustering? Understanding and improving the concept of the fuzzifier, In: Proc. 5th Int. Symp. on Intelligent Data Analysis, 254-264, Springer, Berlin, 2003) to attribute-weighting fuzzy clustering (Keller and Klawonn, Int J Uncertain Fuzziness Knowl Based Syst 8:735-746, 2000). In addition, by reformulating Gustafson-Kessel fuzzy clustering, a scheme for weighting and selecting principal axes can be obtained. While in Borgelt (Feature weighting and feature selection in fuzzy clustering, In: Proc. 17th IEEE Int. Conf. on Fuzzy Systems, IEEE Press, Piscataway, NJ, 2008) I already presented such an approach for a global selection of attributes and principal axes, this paper extends it to a cluster-specific selection, thus arriving at a fuzzy subspace clustering algorithm (Parsons, Haque, and Liu, 2004).
A Parallel Newton-Krylov-Schur Algorithm for the Reynolds-Averaged Navier-Stokes Equations
NASA Astrophysics Data System (ADS)
Osusky, Michal
Aerodynamic shape optimization and multidisciplinary optimization algorithms have the potential not only to improve conventional aircraft, but also to enable the design of novel configurations. By their very nature, these algorithms generate and analyze a large number of unique shapes, resulting in high computational costs. In order to improve their efficiency and enable their use in the early stages of the design process, a fast and robust flow solution algorithm is necessary. This thesis presents an efficient parallel Newton-Krylov-Schur flow solution algorithm for the three-dimensional Navier-Stokes equations coupled with the Spalart-Allmaras one-equation turbulence model. The algorithm employs second-order summation-by-parts (SBP) operators on multi-block structured grids with simultaneous approximation terms (SATs) to enforce block interface coupling and boundary conditions. The discrete equations are solved iteratively with an inexact-Newton method, while the linear system at each Newton iteration is solved using the flexible Krylov subspace iterative method GMRES with an approximate-Schur parallel preconditioner. The algorithm is thoroughly verified and validated, highlighting the correspondence of the current algorithm with several established flow solvers. The solution for a transonic flow over a wing on a mesh of medium density (15 million nodes) shows good agreement with experimental results. Using 128 processors, deep convergence is obtained in under 90 minutes. The solution of transonic flow over the Common Research Model wing-body geometry with grids with up to 150 million nodes exhibits the expected grid convergence behavior. This case was completed as part of the Fifth AIAA Drag Prediction Workshop, with the algorithm producing solutions that compare favourably with several widely used flow solvers. The algorithm is shown to scale well on over 6000 processors. The results demonstrate the effectiveness of the SBP-SAT spatial discretization, which can
NASA Astrophysics Data System (ADS)
Asgharzadeh, Hafez; Borazjani, Iman
2014-11-01
Time step-size restrictions and low convergence rates are major bottlenecks for implicit solution of the Navier-Stokes equations in simulations involving complex geometries with moving boundaries. The Newton-Krylov method (NKM) is a combination of a Newton-type method for super-linearly convergent solution of nonlinear equations and Krylov subspace methods for solving the Newton correction equations, which can theoretically address both bottlenecks. The efficiency of this method depends heavily on the Jacobian-forming scheme; e.g., automatic differentiation is very expensive and Jacobian-free methods slow down as the mesh is refined. A novel, computationally efficient analytical Jacobian for NKM was developed to solve unsteady incompressible Navier-Stokes momentum equations on staggered curvilinear grids with immersed boundaries. The NKM was validated and verified against Taylor-Green vortex and pulsatile flow in a 90 degree bend and efficiently handles complex geometries such as an intracranial aneurysm with multiple overset grids, pulsatile inlet flow and immersed boundaries. The NKM method is shown to be more efficient than the semi-implicit Runge-Kutta methods and Jacobian-free Newton-Krylov methods. We believe NKM can be applied to many CFD techniques to decrease the computational cost. This work was supported partly by the NIH Grant R03EB014860, and the computational resources were partly provided by Center for Computational Research (CCR) at University at Buffalo.
Covariance Modifications to Subspace Bases
Harris, D B
2008-11-19
Adaptive signal processing algorithms that rely upon representations of signal and noise subspaces often require updates to those representations when new data become available. Subspace representations frequently are estimated from available data with singular value decompositions (SVDs). Subspace updates require modifications to these decompositions. Updates can be performed inexpensively provided they are low-rank. A substantial literature on SVD updates exists, frequently focusing on rank-1 updates (see e.g. [Karasalo, 1986; Comon and Golub, 1990; Badeau, 2004]). In these methods, data matrices are modified by addition or deletion of a row or column, or data covariance matrices are modified by addition of the outer product of a new vector. A recent paper by Brand [2006] provides a general and efficient method for arbitrary rank updates to an SVD. The purpose of this note is to describe a closely-related method for applications where right singular vectors are not required. This note also describes the application of SVD updates to a particular scenario of interest in seismic array signal processing. The particular application involves updating the wideband subspace representation used in seismic subspace detectors [Harris, 2006]. These subspace detectors generalize waveform correlation algorithms to detect signals that lie in a subspace of waveforms of dimension d ≥ 1. They potentially are of interest because they extend the range of waveform variation over which these sensitive detectors apply. Subspace detectors operate by projecting waveform data from a detection window into a subspace specified by a collection of orthonormal waveform basis vectors (referred to as the template). Subspace templates are constructed from a suite of normalized, aligned master event waveforms that may be acquired by a single sensor, a three-component sensor, an array of such sensors or a sensor network. The template design process entails constructing a data matrix whose columns contain the
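The projection statistic at the heart of such a detector can be sketched as follows (synthetic, hypothetical data; a real template would be built from aligned master event waveforms):

```python
import numpy as np

def subspace_detection_stat(x, U):
    # Fraction of the window's energy captured by span(U), with U having
    # orthonormal columns; the statistic lies in [0, 1].
    x = x / np.linalg.norm(x)
    c = U.T @ x
    return float(c @ c)

rng = np.random.default_rng(1)
masters = rng.standard_normal((256, 3))     # hypothetical master waveforms
U, _ = np.linalg.qr(masters)                # orthonormal template, d = 3
signal = U @ np.array([1.0, -0.5, 2.0])     # lies exactly in the subspace
noise = rng.standard_normal(256)
print(subspace_detection_stat(signal, U))   # ~1.0: declare a detection
print(subspace_detection_stat(noise, U))    # ~d/256, small: no detection
```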
Fattebert, J
2008-07-29
We describe an iterative algorithm to solve electronic structure problems in Density Functional Theory. The approach is presented as a Subspace Accelerated Inexact Newton (SAIN) solver for the non-linear Kohn-Sham equations. It is related to a class of iterative algorithms known as RMM-DIIS in the electronic structure community. The method is illustrated with examples of real applications using a finite difference discretization and multigrid preconditioning.
Optimal shape design of aerodynamic configurations: A Newton-Krylov approach
NASA Astrophysics Data System (ADS)
Nemec, Marian
Optimal shape design of aerodynamic configurations is a challenging problem due to the nonlinear effects of complex flow features such as shock waves, boundary layers, and separation. A Newton-Krylov algorithm is presented for aerodynamic design using gradient-based numerical optimization. The flow is governed by the two-dimensional compressible Navier-Stokes equations in conjunction with a one-equation turbulence model, which are discretized on multi-block structured grids. The discrete-adjoint method is applied to compute the objective function gradient. The adjoint equation is solved using the preconditioned generalized minimal residual (GMRES) method. A novel preconditioner is introduced, and together with a complete differentiation of the discretized Navier-Stokes and turbulence model equations, this results in an accurate and efficient evaluation of the gradient. The gradient is obtained in just one-fifth to one-half of the time required to converge a flow solution. Furthermore, fast flow solutions are obtained using the same preconditioned GMRES method in conjunction with an inexact-Newton approach. Optimization constraints are enforced through a penalty formulation, and the resulting unconstrained problem is solved via a quasi-Newton method. The performance of the new algorithm is demonstrated for several design examples that include lift enhancement, where the optimal position of a flap is determined within a high-lift configuration, lift-constrained drag minimization at multiple transonic operating points, and the computation of a Pareto front based on competing objectives. In all examples, the gradient is reduced by several orders of magnitude, indicating that a local minimum has been obtained. Overall, the results show that the new algorithm is among the fastest presently available for aerodynamic shape optimization and provides an effective approach for practical aerodynamic design.
Shadid, J. N.; Pawlowski, R. P.; Cyr, E. C.; Tuminaro, R. S.; Chacon, L.; Weber, P. D.
2016-02-10
Here, we discuss how the computational solution of the governing balance equations for mass, momentum, heat transfer and magnetic induction for resistive magnetohydrodynamics (MHD) systems can be extremely challenging. These difficulties arise from both the strong nonlinear, nonsymmetric coupling of fluid and electromagnetic phenomena, as well as the significant range of time- and length-scales that the interactions of these physical mechanisms produce. This paper explores the development of a scalable, fully-implicit stabilized unstructured finite element (FE) capability for 3D incompressible resistive MHD. The discussion considers the development of a stabilized FE formulation in the context of the variational multiscale (VMS) method, and describes the scalable implicit time integration and direct-to-steady-state solution capability. The nonlinear solver strategy employs Newton–Krylov methods, which are preconditioned using fully-coupled algebraic multilevel preconditioners. These preconditioners are shown to enable a robust, scalable and efficient solution approach for the large-scale sparse linear systems generated by the Newton linearization. Verification results demonstrate the expected order-of-accuracy for the stabilized FE discretization. The approach is tested on a variety of prototype problems that include MHD duct flows, an unstable hydromagnetic Kelvin–Helmholtz shear layer, and a 3D island coalescence problem used to model magnetic reconnection. Initial results that explore the scaling of the solution methods are also presented on up to 128K processors for problems with up to 1.8B unknowns on a Cray XK7.
Subspace Arrangement Codes and Cryptosystems
2011-05-09
Three-dimensional transient electromagnetic modelling using Rational Krylov methods
NASA Astrophysics Data System (ADS)
Börner, Ralph-Uwe; Ernst, Oliver G.; Güttel, Stefan
2015-09-01
A computational method is given for solving the forward modelling problem for transient electromagnetic exploration. Its key features are the discretization of the quasi-static Maxwell's equations in space using the first-kind family of curl-conforming Nédélec elements combined with time integration using rational Krylov methods. We show how rational Krylov methods can also be used to solve the same problem in the frequency domain followed by a synthesis of the transient solution using the fast Hankel transform, and we argue that the pure time-domain solution is more efficient. We also propose a new surrogate optimization approach for selecting the pole parameters of the rational Krylov method which leads to convergence within an a priori determined number of iterations independent of mesh size and conductivity structure. These poles are repeated in a cyclic fashion, which, in combination with direct solvers for the discrete problem, results in significantly faster solution times than previously proposed schemes.
An accelerated subspace iteration for eigenvector derivatives
NASA Technical Reports Server (NTRS)
Ting, Tienko
1991-01-01
An accelerated subspace iteration method for calculating eigenvector derivatives has been developed. Factors affecting the effectiveness and the reliability of the subspace iteration are identified, and effective strategies concerning these factors are presented. The method has been implemented, and the results of a demonstration problem are presented.
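Plain (unaccelerated) subspace iteration with a Rayleigh-Ritz step can be sketched as follows, as a baseline for the accelerated variants discussed above:

```python
import numpy as np

def subspace_iteration(A, p, iters=200, seed=0):
    # Repeatedly multiply a p-dimensional block by A and re-orthonormalize,
    # then apply a Rayleigh-Ritz step to extract eigenpair approximations.
    rng = np.random.default_rng(seed)
    V, _ = np.linalg.qr(rng.standard_normal((A.shape[0], p)))
    for _ in range(iters):
        V, _ = np.linalg.qr(A @ V)
    theta, S = np.linalg.eigh(V.T @ A @ V)
    return theta[::-1], V @ S[:, ::-1]      # dominant pairs first

A = np.diag(np.arange(1.0, 11.0))           # eigenvalues 1, 2, ..., 10
theta, X = subspace_iteration(A, p=2)
print(theta)   # close to [10., 9.]
```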
Transmission Subspace Tracking for MIMO Communications Systems
2001-11-01
This paper describes the benefits of transmission subspace tracking for multiple-input multiple-output (MIMO) communications systems.
Jacobi method for signal subspace computation
NASA Astrophysics Data System (ADS)
Paul, Steffen; Goetze, Juergen
1997-10-01
The Jacobi method for singular value decomposition is well-suited for parallel architectures. Its application to signal subspace computations is well known. Basically, the subspace spanned by singular vectors of large singular values is separated from the subspace spanned by those of small singular values. The Jacobi algorithm computes the singular values and the corresponding vectors in random order. This requires sorting the result after convergence of the algorithm to select the signal subspace. A modification of the Jacobi method based on a linear objective function merges the sorting into the SVD algorithm at little extra cost. In fact, the complexity of the diagonal processor cells in a triangular array gets slightly larger. In this paper we present these extensions, in particular the modified algorithm for computing the rotation angles, and give an example of its usefulness for subspace separation.
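The subspace-separation step itself is simple once singular values arrive sorted, which is what the modified Jacobi sweep builds in. A sketch with a library SVD (which already returns descending order) and a crude, purely illustrative rank threshold:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, r = 40, 60, 2
signal = rng.standard_normal((n, r)) @ rng.standard_normal((r, m))  # rank 2
X = signal + 0.01 * rng.standard_normal((n, m))                     # + noise

# np.linalg.svd returns singular values in descending order, so signal and
# noise subspaces split by a simple partition; the Jacobi SVD produces them
# in random order, which is why the modified sweep merges in the sorting.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = int(np.sum(s > 0.1 * s[0]))   # crude illustrative rank threshold
U_signal = U[:, :k]               # signal subspace basis
print(k)   # 2
```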
A Hybrid, Parallel Krylov Solver for MODFLOW using Schwarz Domain Decomposition
NASA Astrophysics Data System (ADS)
Sutanudjaja, E.; Verkaik, J.; Hughes, J. D.
2015-12-01
In order to support decision makers in solving hydrological problems, detailed high-resolution models are often needed. These models typically consist of a large number of computational cells and have large memory requirements and long run times. An efficient technique for obtaining realistic run times and memory requirements is parallel computing, where the problem is divided over multiple processor cores. The new Parallel Krylov Solver (PKS) for MODFLOW-USG is presented. It combines both distributed memory parallelization by the Message Passing Interface (MPI) and shared memory parallelization by Open Multi-Processing (OpenMP). PKS includes conjugate gradient and biconjugate gradient stabilized linear accelerators that are both preconditioned by an overlapping additive Schwarz preconditioner such that: (a) subdomains are partitioned using the METIS library; (b) each subdomain uses local memory only and communicates with other subdomains by MPI within the linear accelerator; and (c) the solver is fully integrated into the MODFLOW-USG code. PKS is based on the unstructured PCGU-solver, and supports OpenMP. Depending on the available hardware, PKS can run exclusively with MPI, exclusively with OpenMP, or with a hybrid MPI/OpenMP approach. Benchmarks were performed on the Cartesius Dutch supercomputer (https://userinfo.surfsara.nl/systems/cartesius) using up to 144 cores, for a synthetic test (~112 million cells) and the Indonesia groundwater model (~4 million 1km cells). The latter, which includes all islands in the Indonesian archipelago, was built using publicly available global datasets, and is an ideal test bed for evaluating the applicability of PKS parallelization techniques to a global groundwater model consisting of multiple continents and islands. Results show that run time reductions can be greatest with the hybrid parallelization approach for the problems tested.
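A serial sketch of the solver structure, conjugate gradients preconditioned by a non-overlapping additive Schwarz (block-Jacobi) preconditioner, where in PKS each subdomain solve would live on its own MPI rank or OpenMP thread (this toy example uses trivial contiguous partitions instead of METIS):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, cg

n, nblocks = 400, 4
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)).tocsc()  # 1D Laplacian
bs = n // nblocks
Ad = A.toarray()
# precompute the inverse of each diagonal (subdomain) block
local_inv = [np.linalg.inv(Ad[i*bs:(i+1)*bs, i*bs:(i+1)*bs])
             for i in range(nblocks)]

def apply_schwarz(r):
    # non-overlapping additive Schwarz: independent local solves, assembled;
    # in a parallel code each local solve runs on its own rank/thread
    z = np.empty_like(r)
    for i in range(nblocks):
        z[i*bs:(i+1)*bs] = local_inv[i] @ r[i*bs:(i+1)*bs]
    return z

M = LinearOperator((n, n), matvec=apply_schwarz)
x, info = cg(A, b=np.ones(n), M=M, atol=1e-10)
print(info, np.linalg.norm(A @ x - np.ones(n)))   # 0 and a small residual
```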
Zou, Ling; Zhao, Haihua; Zhang, Hongbin
2016-08-24
This study presents a numerical investigation on using the Jacobian-free Newton–Krylov (JFNK) method to solve the two-phase flow four-equation drift flux model with realistic constitutive correlations ('closure models'). The drift flux model is based on Ishii and his collaborators' work. Additional constitutive correlations for vertical channel flow, such as two-phase flow pressure drop, flow regime map, wall boiling and interfacial heat transfer models, were taken from the RELAP5-3D Code Manual and included to complete the model. The staggered grid finite volume method and the fully implicit backward Euler method were used for the spatial discretization and time integration schemes, respectively. The Jacobian-free Newton–Krylov method shows no difficulty in solving the two-phase flow drift flux model with a discrete flow regime map. In addition to the Jacobian-free approach, the preconditioning matrix is obtained by using the default finite differencing method provided in the PETSc package, and consequently the labor-intensive implementation of a complex analytical Jacobian matrix is avoided. Extensive and successful numerical verification and validation have been performed to prove the correct implementation of the models and methods. Code-to-code comparison with RELAP5-3D has further demonstrated the successful implementation of the drift flux model.
Bakhos, Tania; Saibaba, Arvind K.; Kitanidis, Peter K.
2015-10-15
We consider the problem of estimating parameters in large-scale weakly nonlinear inverse problems for which the underlying governing equation is a linear, time-dependent, parabolic partial differential equation. A major challenge in solving these inverse problems using Newton-type methods is the computational cost associated with solving the forward problem and with repeated construction of the Jacobian, which represents the sensitivity of the measurements to the unknown parameters. Forming the Jacobian can be prohibitively expensive because it requires repeated solutions of the forward and adjoint time-dependent parabolic partial differential equations corresponding to multiple sources and receivers. We propose an efficient method based on a Laplace transform-based exponential time integrator combined with a flexible Krylov subspace approach to solve the resulting shifted systems of equations efficiently. Our proposed solver speeds up the computation of the forward and adjoint problems, thus yielding significant speedup in total inversion time. We consider an application from Transient Hydraulic Tomography (THT), which is an imaging technique to estimate hydraulic parameters related to the subsurface from pressure measurements obtained by a series of pumping tests. The algorithms discussed are applied to a synthetic example taken from THT to demonstrate the resulting computational gains of this proposed method.
The Subspace Voyager: Exploring High-Dimensional Data along a Continuum of Salient 3D Subspaces.
Wang, Bing; Mueller, Klaus
2017-02-23
Analyzing high-dimensional data and finding hidden patterns is a difficult problem and has attracted numerous research efforts. Automated methods can be useful to some extent but bringing the data analyst into the loop via interactive visual tools can help the discovery process tremendously. An inherent problem in this effort is that humans lack the mental capacity to truly understand spaces exceeding three spatial dimensions. To keep within this limitation, we describe a framework that decomposes a high-dimensional data space into a continuum of generalized 3D subspaces. Analysts can then explore these 3D subspaces individually via the familiar trackball interface while using additional facilities to smoothly transition to adjacent subspaces for expanded space comprehension. Since the number of such subspaces suffers from combinatorial explosion, we provide a set of data-driven subspace selection and navigation tools which can guide users to interesting subspaces and views. A subspace trail map allows users to manage the explored subspaces, keep their bearings, and return to interesting subspaces and views. Both trackball and trail map are each embedded into a word cloud of attribute labels which aid in navigation. We demonstrate our system via several use cases in a diverse set of application areas - cluster analysis and refinement, information discovery, and supervised training of classifiers. We also report on a user study that evaluates the usability of the various interactions our system provides.
Face recognition with L1-norm subspaces
NASA Astrophysics Data System (ADS)
Maritato, Federica; Liu, Ying; Colonnese, Stefania; Pados, Dimitris A.
2016-05-01
We consider the problem of representing individual faces by maximum L1-norm projection subspaces calculated from available face-image ensembles. In contrast to conventional L2-norm subspaces, L1-norm subspaces are seen to offer significant robustness to image variations, disturbances, and rank selection. Face recognition becomes then the problem of associating a new unknown face image to the "closest," in some sense, L1 subspace in the database. In this work, we also introduce the concept of adaptively allocating the available number of principal components to different face image classes, subject to a given total number/budget of principal components. Experimental studies included in this paper illustrate and support the theoretical developments.
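The nearest-subspace classification rule can be sketched as follows; for simplicity this illustration builds the class subspaces by ordinary (L2/QR) orthogonalization rather than the L1-norm PCA the paper advocates for robustness:

```python
import numpy as np

def nearest_subspace(x, subspaces):
    # assign x to the class whose subspace captures the most of its energy
    scores = [np.linalg.norm(U.T @ x) for U in subspaces]
    return int(np.argmax(scores))

rng = np.random.default_rng(3)
# two hypothetical "face classes", each a 3-dimensional subspace of R^100
subspaces = [np.linalg.qr(rng.standard_normal((100, 3)))[0] for _ in range(2)]
# a test image: a point in class 1's subspace plus a small disturbance
x = subspaces[1] @ np.array([0.3, 1.0, -0.7]) + 0.05 * rng.standard_normal(100)
print(nearest_subspace(x, subspaces))   # 1
```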
Numerical considerations in computing invariant subspaces
Dongarra, J. J. (Dept. of Computer Science, Oak Ridge National Lab., TN); Hammarling, S.; Wilkinson, J. H.
1990-11-01
This paper describes two methods for computing the invariant subspace of a matrix. The first involves using transformations to interchange the eigenvalues; the second involves direct computation of the vectors. 10 refs.
Numerical solution of large nonsymmetric eigenvalue problems
NASA Technical Reports Server (NTRS)
Saad, Youcef
1988-01-01
Several methods are described that combine Krylov subspace techniques, deflation procedures, and preconditioning for computing a small number of eigenvalues and eigenvectors or Schur vectors of large sparse matrices. The most effective techniques for solving realistic problems from applications are those based on some form of preconditioning and one of several Krylov subspace techniques, such as Arnoldi's method or the Lanczos procedure. Two forms of preconditioning are considered: shift-and-invert and polynomial acceleration. The latter presents some advantages for parallel/vector processing but may be ineffective if eigenvalues inside the spectrum are sought. Some algorithmic details that improve the reliability and effectiveness of these techniques are provided.
NASA Astrophysics Data System (ADS)
Caplan, R. M.; Mikić, Z.; Linker, J. A.; Lionello, R.
2017-05-01
We explore the performance and advantages/disadvantages of using unconditionally stable explicit super time-stepping (STS) algorithms versus implicit schemes with Krylov solvers for integrating parabolic operators in thermodynamic MHD models of the solar corona. Specifically, we compare the second-order Runge-Kutta Legendre (RKL2) STS method with the implicit backward Euler scheme computed using the preconditioned conjugate gradient (PCG) solver with both a point-Jacobi and a non-overlapping domain decomposition ILU0 preconditioner. The algorithms are used to integrate anisotropic Spitzer thermal conduction and artificial kinematic viscosity at time-steps much larger than classic explicit stability criteria allow. A key component of the comparison is the use of an established MHD model (MAS) to compute a real-world simulation on a large HPC cluster. Special attention is placed on the parallel scaling of the algorithms. It is shown that, for a specific problem and model, the RKL2 method is comparable to or surpasses the implicit method with PCG solvers in performance and scaling, but suffers from some accuracy limitations. These limitations and the applicability of RKL methods are briefly discussed.
Preconditioned cues have no value.
Sharpe, Melissa J; Batchelor, Hannah M; Schoenbaum, Geoffrey
2017-09-19
Sensory preconditioning has been used to implicate midbrain dopamine in model-based learning, contradicting the view that dopamine transients reflect model-free value. However, it has been suggested that model-free value might accrue directly to the preconditioned cue through mediated learning. Here, building on previous work (Sadacca et al., 2016), we address this question by testing whether a preconditioned cue will support conditioned reinforcement in rats. We found that while both directly conditioned and second-order conditioned cues supported robust conditioned reinforcement, a preconditioned cue did not. These data show that the preconditioned cue in our procedure does not directly accrue model-free value and further suggest that the cue may not necessarily access value even indirectly in a model-based manner. If so, then phasic response of dopamine neurons to cues in this setting cannot be described as signaling errors in predicting value.
Subspace algorithms for noise reduction in cochlear implants
NASA Astrophysics Data System (ADS)
Loizou, Philipos C.; Lobo, Arthur; Hu, Yi
2005-11-01
A single-channel algorithm is proposed for noise reduction in cochlear implants. The proposed algorithm is based on subspace principles and projects the noisy speech vector onto "signal" and "noise" subspaces. An estimate of the clean signal is made by retaining only the components in the signal subspace. The performance of the subspace reduction algorithm is evaluated using 14 subjects wearing the Clarion device. Results indicated that the subspace algorithm produced significant improvements in sentence recognition scores compared to the subjects' daily strategy, at least in stationary noise. Further work is needed to extend the subspace algorithm to nonstationary noise environments.
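The projection idea behind such subspace methods can be sketched in a few lines. This is a generic eigendecomposition-based illustration under assumed inputs (the function name, rank parameter, and sample-covariance estimate are ours), not the implant algorithm evaluated above:

```python
import numpy as np

def subspace_denoise(noisy_frames, signal_rank):
    """Project noisy signal vectors onto an estimated 'signal' subspace.

    noisy_frames: (n_frames, dim) array of noisy observation vectors.
    signal_rank: number of principal directions retained as the signal subspace.
    """
    # Estimate the covariance of the noisy observations.
    mean = noisy_frames.mean(axis=0)
    centered = noisy_frames - mean
    cov = centered.T @ centered / len(noisy_frames)
    # Eigenvectors with the largest eigenvalues span the 'signal' subspace.
    eigvals, eigvecs = np.linalg.eigh(cov)
    basis = eigvecs[:, -signal_rank:]            # (dim, signal_rank)
    # Retain only the signal-subspace components of each frame.
    return mean + centered @ basis @ basis.T
```

Noise components orthogonal to the retained subspace are discarded, which is the source of the enhancement in stationary noise.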
Nonlinear Krylov and moving nodes in the method of lines
NASA Astrophysics Data System (ADS)
Miller, Keith
2005-11-01
We report on some successes and problem areas in the Method of Lines from our work with moving node finite element methods. First, we report on our "nonlinear Krylov accelerator" for the modified Newton's method on the nonlinear equations of our stiff ODE solver. Since 1990 it has been robust, simple, cheap, and automatic on all our moving node computations. We publicize further trials with it here because it should be of great general usefulness to all those solving evolutionary equations. Second, we discuss the need for reliable automatic choice of spatially variable time steps. Third, we discuss the need for robust and efficient iterative solvers for the difficult linearized equations (Jx=b) of our stiff ODE solver. Here, the 1997 thesis of Zulu Xaba has made significant progress.
NASA Astrophysics Data System (ADS)
Nocera, A.; Alvarez, G.
2016-11-01
Frequency-dependent correlations, such as the spectral function and the dynamical structure factor, help illustrate condensed matter experiments. Within the density matrix renormalization group (DMRG) framework, an accurate method for calculating spectral functions directly in frequency is the correction-vector method. The correction vector can be computed by solving a linear equation or by minimizing a functional. This paper proposes an alternative to calculate the correction vector: to use the Krylov-space approach. This paper then studies the accuracy and performance of the Krylov-space approach, when applied to the Heisenberg, the t-J, and the Hubbard models. The cases studied indicate that the Krylov-space approach can be more accurate and efficient than the conjugate gradient, and that the error of the former integrates best when a Krylov-space decomposition is also used for ground state DMRG.
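As an illustrative sketch of the Krylov-space idea, a correction vector of the form x = (omega + i*eta - H)^{-1} b can be approximated in a small Lanczos basis built from b. The dense-matrix code below is a generic sketch (the function name and step count m are ours, and this is not the DMRG implementation):

```python
import numpy as np

def krylov_correction_vector(H, b, omega, eta, m=50):
    """Approximate x = (omega + i*eta - H)^{-1} b in an m-step Lanczos basis
    of the Hermitian matrix H, started from b (full reorthogonalization)."""
    n = len(b)
    V = np.zeros((n, m), dtype=complex)
    alpha = np.zeros(m)
    beta = np.zeros(m - 1)
    beta0 = np.linalg.norm(b)
    V[:, 0] = b / beta0
    k = m
    for j in range(m):
        w = H @ V[:, j]
        alpha[j] = np.real(np.vdot(V[:, j], w))     # diagonal of tridiagonal T
        # Full reorthogonalization against all previous Lanczos vectors.
        w = w - V[:, :j + 1] @ (V[:, :j + 1].conj().T @ w)
        if j == m - 1:
            break
        beta[j] = np.linalg.norm(w)
        if beta[j] < 1e-12:                         # invariant subspace found
            k = j + 1
            break
        V[:, j + 1] = w / beta[j]
    # Solve the shifted small (tridiagonal) system and map back.
    T = np.diag(alpha[:k]) + np.diag(beta[:k - 1], 1) + np.diag(beta[:k - 1], -1)
    e1 = np.zeros(k)
    e1[0] = 1.0
    y = np.linalg.solve((omega + 1j * eta) * np.eye(k) - T, beta0 * e1)
    return V[:, :k] @ y
```

The appeal of this approach is that only matrix-vector products with H are needed, and a single Lanczos factorization serves all frequencies omega.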
General purpose nonlinear system solver based on Newton-Krylov method.
2013-12-01
KINSOL is part of a software family called SUNDIALS: SUite of Nonlinear and Differential/Algebraic equation Solvers [1]. KINSOL is a general-purpose nonlinear system solver based on Newton-Krylov and fixed-point solver technologies [2].
Kim, Sang-Woon; Oommen, B John
2005-01-01
In Kernel-based Nonlinear Subspace (KNS) methods, the subspace dimensions have a strong influence on the performance of the subspace classifier. In order to get a high classification accuracy, a large dimension is generally required. However, if the chosen subspace dimension is too large, it leads to a low performance due to the overlapping of the resultant subspaces and, if it is too small, it increases the classification error due to the poor resulting approximation. The most common approach is of an ad hoc nature, which selects the dimensions based on the so-called cumulative proportion computed from the kernel matrix for each class. In this paper, we propose a new method of systematically and efficiently selecting optimal or near-optimal subspace dimensions for KNS classifiers using a search strategy and a heuristic function termed the Overlapping criterion. The rationale for this function has been motivated in the body of the paper. The task of selecting optimal subspace dimensions is reduced to finding the best ones from a given problem-domain solution space using this criterion as a heuristic function. Thus, the search space can be pruned to very efficiently find the best solution. Our experimental results demonstrate that the proposed mechanism selects the dimensions efficiently without sacrificing the classification accuracy.
Constructing the L2-Graph for Robust Subspace Learning and Subspace Clustering.
Peng, Xi; Yu, Zhiding; Yi, Zhang; Tang, Huajin
2017-04-01
Under the framework of graph-based learning, the key to robust subspace clustering and subspace learning is to obtain a good similarity graph that eliminates the effects of errors and retains only connections between data points from the same subspace (i.e., intrasubspace data points). Recent works achieve good performance by modeling errors in their objective functions so as to remove them from the inputs. However, these approaches face the limitations that the structure of the errors must be known a priori and a complex convex problem must be solved. In this paper, we present a novel method to eliminate the effects of the errors from the projection space (representation) rather than from the input space. We first prove that l1-, l2-, l∞-, and nuclear-norm-based linear projection spaces share the property of intrasubspace projection dominance, i.e., the coefficients over intrasubspace data points are larger than those over intersubspace data points. Based on this property, we introduce a method to construct a sparse similarity graph, called the L2-graph. Subspace clustering and subspace learning algorithms are then developed upon the L2-graph. We conduct comprehensive experiments on subspace learning, image clustering, and motion segmentation and consider several quantitative benchmarks: classification/clustering accuracy, normalized mutual information, and running time. Results show that the L2-graph outperforms many state-of-the-art methods in our experiments, including the L1-graph, low-rank representation (LRR) and latent LRR, least squares regression, sparse subspace clustering, and locally linear representation.
40 CFR 80.52 - Vehicle preconditioning.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 17 2014-07-01 2014-07-01 false Vehicle preconditioning. 80.52 Section...) REGULATION OF FUELS AND FUEL ADDITIVES Reformulated Gasoline § 80.52 Vehicle preconditioning. (a) Initial vehicle preconditioning and preconditioning between tests with different fuels shall be performed...
40 CFR 80.52 - Vehicle preconditioning.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 17 2012-07-01 2012-07-01 false Vehicle preconditioning. 80.52 Section...) REGULATION OF FUELS AND FUEL ADDITIVES Reformulated Gasoline § 80.52 Vehicle preconditioning. (a) Initial vehicle preconditioning and preconditioning between tests with different fuels shall be performed...
40 CFR 80.52 - Vehicle preconditioning.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 17 2013-07-01 2013-07-01 false Vehicle preconditioning. 80.52 Section...) REGULATION OF FUELS AND FUEL ADDITIVES Reformulated Gasoline § 80.52 Vehicle preconditioning. (a) Initial vehicle preconditioning and preconditioning between tests with different fuels shall be performed...
40 CFR 80.52 - Vehicle preconditioning.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 16 2011-07-01 2011-07-01 false Vehicle preconditioning. 80.52 Section...) REGULATION OF FUELS AND FUEL ADDITIVES Reformulated Gasoline § 80.52 Vehicle preconditioning. (a) Initial vehicle preconditioning and preconditioning between tests with different fuels shall be performed...
40 CFR 80.52 - Vehicle preconditioning.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Vehicle preconditioning. 80.52 Section...) REGULATION OF FUELS AND FUEL ADDITIVES Reformulated Gasoline § 80.52 Vehicle preconditioning. (a) Initial vehicle preconditioning and preconditioning between tests with different fuels shall be performed...
Iterated preconditioned LSQR method for inverse problems on unstructured grids
NASA Astrophysics Data System (ADS)
Arridge, S. R.; Betcke, M. M.; Harhanen, L.
2014-06-01
This article presents a method for solving large-scale linear inverse imaging problems regularized with a nonlinear, edge-preserving penalty term such as total variation or the Perona-Malik technique. Our method is aimed at problems defined on unstructured meshes, where such regularizers naturally arise in unfactorized form as the stiffness matrix of an anisotropic diffusion operator and factorization is prohibitively expensive. In the proposed scheme, the nonlinearity is handled with lagged diffusivity fixed-point iteration, which involves solving a large-scale linear least squares problem in each iteration. Because the convergence of Krylov methods for problems with discontinuities is notoriously slow, we propose to accelerate it by means of priorconditioning (Bayesian preconditioning). Priorconditioning is a technique that, through transformation to the standard form, embeds the information contained in the prior (the Bayesian interpretation of a regularizer) directly into the forward operator and thence into the solution space. We derive a factorization-free preconditioned LSQR algorithm (MLSQR), allowing implicit application of the preconditioner through efficient schemes such as multigrid. The resulting method is also matrix-free, i.e., the forward map can be defined through its action on a vector. We illustrate the performance of the method on two numerical examples. A simple 1D deblurring problem serves to visualize the discussion throughout the paper. The effectiveness of the proposed numerical scheme is demonstrated on a three-dimensional problem in fluorescence diffuse optical tomography with total variation regularization and an algebraic multigrid preconditioner derived from the regularizer; this is the type of large-scale, unstructured-mesh problem, requiring matrix-free and factorization-free approaches, that motivated the work here.
Biomarkers spectral subspace for cancer detection.
Sun, Yi; Pu, Yang; Yang, Yuanlong; Alfano, Robert R
2012-10-01
A novel approach to cancer detection in a biomarkers spectral subspace (BSS) is proposed. The basis spectra of the subspace spanned by fluorescence spectra of biomarkers are obtained by the Gram-Schmidt method. A support vector machine (SVM) classifier is trained in the subspace. The spectrum of a sample tissue is projected onto and classified in the subspace. In addition to sensitivity and specificity, the metrics of positive predictivity, Score1, maximum Score1, and accuracy (AC) are employed for performance evaluation. The proposed BSS with SVM is applied to breast cancer detection using four biomarkers: collagen, NADH, flavin, and elastin, with 340-nm excitation. It is found that the BSS SVM outperforms the approach based on multivariate curve resolution (MCR) using SVM and achieves the best performance of principal component analysis (PCA) using SVM among all combinations of PCs. In descending order of efficacy in the breast cancer detection of this experiment, the four biomarkers are collagen, NADH, elastin, and flavin. The advantage of BSS is twofold. First, all diagnostically useful information of the biomarkers for cancer detection is retained while the dimensionality of the data is significantly reduced to obviate the curse of dimensionality. Second, the efficacy of biomarkers in cancer detection can be determined.
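The Gram-Schmidt basis construction named above can be sketched as follows. This is a minimal illustration only (function names and the dependence tolerance are assumptions), with the SVM training step omitted:

```python
import numpy as np

def gram_schmidt_basis(spectra):
    """Orthonormal basis (classical Gram-Schmidt) for the span of reference
    biomarker spectra, given as a (k, n_wavelengths) array."""
    basis = []
    for s in spectra.astype(float):
        for q in basis:
            s = s - np.dot(q, s) * q      # remove components along earlier vectors
        norm = np.linalg.norm(s)
        if norm > 1e-10:                  # skip (near-)linearly dependent spectra
            basis.append(s / norm)
    return np.array(basis)

def project_to_subspace(spectrum, basis):
    """Coordinates of a measured spectrum in the biomarker subspace; these
    coordinates would be the features fed to a classifier."""
    return basis @ spectrum
```

In the approach described, each tissue spectrum is reduced to its coordinate vector in this low-dimensional subspace before classification, which is what mitigates the curse of dimensionality.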
A property of subspaces admitting spectral synthesis
Abuzyarova, N F
1999-04-30
Let H be the space of holomorphic functions in a convex domain G ⊂ C. The following result is established: each closed subspace W ⊂ H that is invariant with respect to the operator of differentiation and admits spectral synthesis can be represented as the solution set of two (possibly coinciding) homogeneous convolution equations.
Subspace Identification with Multiple Data Sets
NASA Technical Reports Server (NTRS)
Duchesne, Laurent; Feron, Eric; Paduano, James D.; Brenner, Marty
1995-01-01
Most existing subspace identification algorithms assume that a single input to output data set is available. Motivated by a real life problem on the F18-SRA experimental aircraft, we show how these algorithms are readily adapted to handle multiple data sets. We show by means of an example the relevance of such an improvement.
A nonconforming multigrid method using conforming subspaces
NASA Technical Reports Server (NTRS)
Lee, Chang Ock
1993-01-01
For second-order elliptic boundary value problems, we develop a nonconforming multigrid method using the coarser-grid correction on the conforming finite element subspaces. The convergence proof with an arbitrary number of smoothing steps for nu-cycle is presented.
Applications of Subspace Seismicity Detection in Antarctica
NASA Astrophysics Data System (ADS)
Myers, E. K.; Aster, R. C.; Benz, H.; McMahon, N. D.; McNamara, D. E.; Lough, A. C.; Wiens, D. A.; Wilson, T. J.
2014-12-01
Subspace detection can improve event recognition by enhancing the completeness of earthquake catalogs and by improving the characterization and interpretation of seismic events, particularly in regions of clustered seismicity. Recent deployments of dense networks of seismometers enable subspace detection methods to be more broadly applied to intraplate Antarctica, where historically very limited and sporadic network coverage has inhibited understanding of dynamic glacial, volcanic, and tectonic processes. In particular, recent broad seismographic networks such as POLENET/A-Net and AGAP provide significant new opportunities for characterizing and understanding the low seismicity rates of this continent. Our methodology incorporates three-component correlation to detect events in a statistical and adaptive framework. Detection thresholds are statistically assessed using phase-randomized template correlation levels. As new events are detected and the set of subspace basis vectors is updated, the algorithm can also be directed to scan back in a search for weaker prior events that have significant correlations with the updated basis vectors. This method has the resolving power to identify previously undetected areas of seismic activity under very low signal-to-noise conditions, and thus holds promise for revealing new seismogenic phenomena within and around Antarctica. In this study we investigate two intriguing seismogenic regions and demonstrate the methodology, reporting on a subspace detection-based study of recently identified clusters of deep long-period magmatic earthquakes in Marie Byrd Land, and on shallow icequakes that are dynamically triggered by teleseismic surface waves.
Robust Latent Subspace Learning for Image Classification.
Fang, Xiaozhao; Teng, Shaohua; Lai, Zhihui; He, Zhaoshui; Xie, Shengli; Wong, Wai Keung
2017-05-10
This paper proposes a novel method, called robust latent subspace learning (RLSL), for image classification. We formulate an RLSL problem as a joint optimization problem over both the latent SL and classification model parameter prediction, which simultaneously minimizes: 1) the regression loss between the learned data representation and objective outputs and 2) the reconstruction error between the learned data representation and original inputs. The latent subspace can be used as a bridge that is expected to seamlessly connect the original visual features and their class labels and hence improve the overall prediction performance. RLSL combines feature learning with classification so that the learned data representation in the latent subspace is more discriminative for classification. To learn a robust latent subspace, we use a sparse term to compensate for error, which helps suppress the interference of noise via weakening its response during regression. An efficient optimization algorithm is designed to solve the proposed optimization problem. To validate the effectiveness of the proposed RLSL method, we conduct experiments on diverse databases and encouraging recognition results are achieved compared with many state-of-the-art methods.
HyeongKae Park; R. Nourgaliev; Richard C. Martineau; Dana A. Knoll
2008-09-01
Multidimensional, higher-order (2nd and higher) numerical methods have come to the forefront in recent years due to significant advances of computer technology and numerical algorithms, and have shown great potential as viable design tools for realistic applications. To achieve this goal, implicit high-order accurate coupling of the multiphysics simulations is a critical component. One of the issues that arise from multiphysics simulation is the necessity to resolve multiple time scales. For example, the dynamical time scales of neutron kinetics, fluid dynamics, and heat conduction differ significantly (typically by a factor >10^10), with the dominant (fastest) physical mode also changing during the course of a transient [Pope and Mousseau, 2007]. This leads to a severe time-step restriction for stability in traditional multiphysics (i.e., operator-split, semi-implicit discretization) simulations. Lower-order methods suffer from undesirable numerical dissipation. Thus an implicit, higher-order accurate scheme is necessary to perform seamlessly coupled multiphysics simulations that can be used to analyze the “what-if” regulatory accident scenarios, or to design and optimize engineering systems.
NASA Astrophysics Data System (ADS)
Vecharynski, Eugene; Yang, Chao; Pask, John E.
2015-06-01
We present an iterative algorithm for computing an invariant subspace associated with the algebraically smallest eigenvalues of a large sparse or structured Hermitian matrix A. We are interested in the case in which the dimension of the invariant subspace is large (e.g., over several hundreds or thousands) even though it may still be small relative to the dimension of A. These problems arise from, for example, density functional theory (DFT) based electronic structure calculations for complex materials. The key feature of our algorithm is that it performs fewer Rayleigh-Ritz calculations compared to existing algorithms such as the locally optimal block preconditioned conjugate gradient or the Davidson algorithm. It is a block algorithm, and hence can take advantage of efficient BLAS3 operations and be implemented with multiple levels of concurrency. We discuss a number of practical issues that must be addressed in order to implement the algorithm efficiently on a high performance computer.
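The Rayleigh-Ritz calculation that the algorithm seeks to perform less often can be sketched as follows; this is a minimal dense illustration (the function name and inputs are ours), assuming a Hermitian A and a block of basis vectors V:

```python
import numpy as np

def rayleigh_ritz(A, V, k):
    """One Rayleigh-Ritz extraction: approximate the k algebraically smallest
    eigenpairs of Hermitian A from the subspace spanned by the columns of V."""
    # Orthonormalize the block of basis vectors.
    Q, _ = np.linalg.qr(V)
    # Project A onto the subspace and solve the small eigenproblem.
    H = Q.conj().T @ A @ Q
    theta, S = np.linalg.eigh(H)          # Ritz values ascending
    # Assemble the k smallest Ritz pairs in the original space.
    return theta[:k], Q @ S[:, :k]
```

Because the projected eigenproblem is dense and sequential, reducing how often this step is invoked, as the paper proposes, matters when the wanted invariant subspace has dimension in the hundreds or thousands.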
Preconditioning for traumatic brain injury.
Yokobori, Shoji; Mazzeo, Anna T; Hosein, Khadil; Gajavelli, Shyam; Dietrich, W Dalton; Bullock, M Ross
2013-02-01
Traumatic brain injury (TBI) treatment is now focused on the prevention of primary injury and reduction of secondary injury. However, no single effective treatment is available as yet for the mitigation of traumatic brain damage in humans. Both chemical and environmental stresses applied before injury have been shown to induce consequent protection against post-TBI neuronal death. This concept, termed "preconditioning," is achieved by exposure to different pre-injury stressors to achieve the induction of "tolerance" to the effect of the TBI. However, the precise mechanisms underlying this "tolerance" phenomenon are not fully understood in TBI, and therefore even less information is available about possible indications in clinical TBI patients. In this review, we will summarize TBI pathophysiology, and discuss existing animal studies demonstrating the efficacy of preconditioning in diffuse and focal types of TBI. We will also review other non-TBI preconditioning studies, including ischemic, environmental, and chemical preconditioning, which may be relevant to TBI. To date, no clinical studies exist in this field, and we speculate on possible future clinical situations in which pre-TBI preconditioning could be considered.
Angular-Similarity-Preserving Binary Signatures for Linear Subspaces.
Ji, Jianqiu; Li, Jianmin; Tian, Qi; Yan, Shuicheng; Zhang, Bo
2015-11-01
We propose a similarity-preserving binary signature method for linear subspaces. In computer vision and pattern recognition, linear subspace is a very important representation for many kinds of data, such as face images, action and gesture videos, and so on. When there is a large amount of subspace data and the ambient dimension is high, the cost of computing the pairwise similarity between the subspaces would be high and it requires a large storage space for storing the subspaces. In this paper, we first define the angular similarity and angular distance between the subspaces. Then, based on this similarity definition, we develop a similarity-preserving binary signature method for linear subspaces, which transforms a linear subspace into a compact binary signature, and the Hamming distance between two signatures provides an unbiased estimate of the angular similarity between the two subspaces. We also provide a lower bound of the signature length sufficient to guarantee uniform distance-preservation between every pair of subspaces in a set. Experiments on face recognition, gesture recognition, and action recognition verify the effectiveness of the proposed method.
Pharmacologic Preconditioning: Translating the Promise
Gidday, Jeffrey M.
2010-01-01
A transient, ischemia-resistant phenotype known as “ischemic tolerance” can be established in brain in a rapid or delayed fashion by a preceding noninjurious “preconditioning” stimulus. Initial preclinical studies of this phenomenon relied primarily on brief periods of ischemia or hypoxia as preconditioning stimuli, but it was later realized that many other stressors, including pharmacologic ones, are also effective. This review highlights the surprisingly wide variety of drugs now known to promote ischemic tolerance, documented and to some extent mechanistically characterized in preclinical animal models of stroke. Although considerably more experimentation is needed to thoroughly validate the ability of any currently identified preconditioning agent to protect ischemic brain, the fact that some of these drugs are already clinically approved for other indications implies that the growing enthusiasm for translational success in the field of pharmacologic preconditioning may be well justified. PMID:21197121
Preconditioning and stem cell survival.
Haider, Husnain Kh; Ashraf, Muhammad
2010-04-01
The harsh ischemic and cytokine-rich microenvironment in the infarcted myocardium, infiltrated by inflammatory and immune cells, offers a significant challenge to transplanted donor stem cells. Massive cell death occurs during transplantation as well as following engraftment, which significantly lowers the effectiveness of heart cell therapy. Various approaches have been adopted to overcome this problem, albeit with multiple limitations in each case. Cellular preconditioning and reprogramming by physical, chemical, genetic, and pharmacological manipulation of the cells have shown promise in "priming" the cells to a "state of readiness" to withstand the rigors of lethal ischemia in vitro as well as post-transplantation. This review summarizes past and present novel approaches of ischemic preconditioning, pharmacological and genetic manipulation using preconditioning mimetics, recombinant growth factor protein treatment, and reprogramming of stem cells to overexpress survival signaling molecules, microRNAs, and trophic factors for intracrine, autocrine, and paracrine effects on cytoprotection.
Comparison results for solving preconditioned linear systems
NASA Astrophysics Data System (ADS)
Li, Wen
2005-04-01
In this paper we present some comparison theorems between two different modified Gauss-Seidel (MGS) methods. The second preconditioning based on the first preconditioning is also discussed in this paper.
Parallel Dynamics Simulation Using a Krylov-Schwarz Linear Solution Scheme
Abhyankar, Shrirang; Constantinescu, Emil M.; Smith, Barry F.; Flueck, Alexander J.; Maldonado, Daniel A.
2016-11-07
Fast dynamics simulation of large-scale power systems is a computational challenge because of the need to solve a large set of stiff, nonlinear differential-algebraic equations at every time step. The main bottleneck in dynamic simulations is the solution of a linear system during each nonlinear iteration of Newton’s method. In this paper, we present a parallel Krylov-Schwarz linear solution scheme that uses the Krylov subspace-based iterative linear solver GMRES with an overlapping restricted additive Schwarz preconditioner. Performance tests of the proposed Krylov-Schwarz scheme for several large test cases ranging from 2,000 to 20,000 buses, including a real utility network, show good scalability on different computing architectures.
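A minimal sketch of a preconditioned GMRES solve of the kind described, using SciPy with an ILU preconditioner as a stand-in for the paper's overlapping restricted additive Schwarz preconditioner (the matrix here is a generic stiff tridiagonal system, not a power-network Jacobian):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# A stand-in sparse system for one Newton step of a dynamics simulation.
n = 1000
A = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# ILU preconditioner wrapped as a linear operator and passed to GMRES.
ilu = spla.spilu(A)
M = spla.LinearOperator((n, n), matvec=ilu.solve)
x, info = spla.gmres(A, b, M=M)     # info == 0 signals convergence
```

In the parallel setting described above, the preconditioner application is the per-subdomain solve of the Schwarz method, which is what gives the scheme its scalability.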
Implementing a matrix-free Newton-Krylov method in NorESM
NASA Astrophysics Data System (ADS)
Pilskog, Ingjald; Khatiwala, Samar; Tjiputra, Jerry
2017-04-01
Quasi-equilibrium ocean biogeochemistry states in Earth system models require prohibitively long computation times, especially when a large number of tracers is involved. This so-called spin-up typically requires on the order of thousands of model years of integration. In this study, we implement a matrix-free Newton-Krylov method (Khatiwala, 2008) in the Norwegian Earth system model (NorESM) so that the spin-up time can be reduced. The idea is to pose the quasi-equilibrium state as the root-finding problem F(u) = Φ(u(0), T) - u(0) = 0, to which Newton's method can be applied. Unfortunately, the interconnectivity and complexity of the processes lead to a dense matrix, making it expensive and impractical to calculate the necessary Jacobian, J = ∂F/∂u. The Newton-Krylov method remedies this issue by requiring only the matrix-vector product Jδu, which can be approximated by (F(u_n + σδu_n) - F(u_n))/σ, where the differencing parameter σ is typically chosen dynamically and n is the iteration index. The matrix-free Newton-Krylov method requires a good preconditioner to improve the convergence rate. By exploiting the inherent locality of the advection-diffusion operator, and the fact that in most biogeochemical models the source/sink term at a grid point depends only on tracer concentrations in the same vertical column, we obtain a good, sparse preconditioner. The performance of this preconditioner can be further improved by applying both outer Broyden updates during the Newton steps and inner Broyden updates during the Krylov steps. Khatiwala, S., 2008. Fast spin up of ocean biogeochemical models using matrix-free Newton-Krylov. Ocean Model. 23 (3-4), 121-129.
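The forward-difference Jacobian-vector product at the heart of this approach is easy to sketch. The following minimal Python example (an illustration of the general technique, not the NorESM implementation; the toy problem and all parameters are assumptions) wraps J v ≈ (F(u + σv) - F(u))/σ in a SciPy LinearOperator and solves each Newton step with GMRES:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def matrix_free_newton(F, u0, sigma=1e-7, tol=1e-10, max_newton=50):
    """Solve F(u) = 0 without ever forming the Jacobian J = dF/du."""
    u = u0.astype(float).copy()
    for _ in range(max_newton):
        Fu = F(u)
        if np.linalg.norm(Fu) < tol:
            break
        # J v is approximated by the forward difference (F(u + sigma v) - F(u)) / sigma
        J = LinearOperator((u.size, u.size),
                           matvec=lambda v: (F(u + sigma * np.ravel(v)) - Fu) / sigma)
        du, _ = gmres(J, -Fu, atol=1e-12)
        u = u + du
    return u

# Toy problem: componentwise F(u) = u^2 - a, with root u = sqrt(a)
a = np.array([4.0, 9.0, 16.0])
root = matrix_free_newton(lambda u: u * u - a, np.ones(3))
```

For the real spin-up problem, F would wrap one forward integration of the model over the period T, and a preconditioner of the kind described above would be supplied to GMRES.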
NASA Astrophysics Data System (ADS)
Borazjani, Iman; Asgharzadeh, Hafez
2015-11-01
Flow simulations involving complex geometries and moving boundaries suffer from time-step size restrictions and low convergence rates with explicit and semi-implicit schemes. Implicit schemes can be used to overcome these restrictions. However, implementing an implicit solver for nonlinear equations, including the Navier-Stokes equations, is not straightforward. Newton-Krylov subspace methods (NKMs) are among the most advanced iterative methods for solving nonlinear equations such as implicit discretizations of the Navier-Stokes equations. The efficiency of NKMs depends heavily on the Jacobian formation method: automatic differentiation is very expensive, and matrix-free methods slow down as the mesh is refined. An analytical Jacobian is inexpensive, but its derivation for the Navier-Stokes equations on a staggered grid is challenging. An NKM with a novel analytical Jacobian was developed and validated against the Taylor-Green vortex and pulsatile flow in a 90-degree bend. The developed method successfully handled complex geometries such as an intracranial aneurysm with multiple overset grids and immersed boundaries. It is shown that the NKM with an analytical Jacobian is 3 to 25 times faster than the fixed-point implicit Runge-Kutta method, and more than 100 times faster than automatic differentiation, depending on the grid (size) and the flow problem. The developed methods are fully parallelized, with a parallel efficiency of 80-90% on the problems tested.
Subspace controllability of spin-1/2 chains with symmetries
NASA Astrophysics Data System (ADS)
Wang, Xiaoting; Burgarth, Daniel; Schirmer, S.
2016-11-01
We develop a technique to prove simultaneous subspace controllability on multiple invariant subspaces, which specifically enables us to study the controllability properties of spin systems that are not amenable to standard controllability arguments based on energy-level connectivity graphs or simple induction arguments on the length of the chain. The technique is applied to establish simultaneous subspace controllability for Heisenberg spin chains subject to limited local controls. This model is theoretically important, and the controllability result shows that a single control can be sufficient for complete controllability of an exponentially large subspace and for universal quantum computation within it. The controllability results are extended to prove subspace controllability in the presence of control-field leakage, and we discuss the minimal control resources required to achieve controllability over the entire spin-chain space.
Unsupervised spike sorting based on discriminative subspace learning.
Keshtkaran, Mohammad Reza; Yang, Zhi
2014-01-01
Spike sorting is a fundamental preprocessing step for many neuroscience studies which rely on the analysis of spike trains. In this paper, we present two unsupervised spike sorting algorithms based on discriminative subspace learning. The first algorithm simultaneously learns the discriminative feature subspace and performs clustering. It uses a histogram of the features in the most discriminative projection to detect the number of neurons. The second algorithm performs hierarchical divisive clustering, learning a discriminative one-dimensional subspace for clustering at each level of the hierarchy until an almost unimodal distribution is achieved in the subspace. The algorithms are tested on synthetic and in-vivo data and compared against two widely used spike sorting methods. The comparative results demonstrate that our spike sorting methods can achieve substantially higher accuracy in a lower-dimensional feature space, and they are highly robust to noise. Moreover, they provide significantly better cluster separability in the learned subspace than in the subspace obtained by principal component analysis or the wavelet transform.
Indoor Subspacing to Implement Indoorgml for Indoor Navigation
NASA Astrophysics Data System (ADS)
Jung, H.; Lee, J.
2015-10-01
With the increasing demand for indoor navigation, there have been significant efforts to develop applicable indoor networks. Representing an entire room as a single node is not sufficient for complex and large buildings. Since OGC established IndoorGML, subspacing, which partitions space to construct a logical network, has been introduced. When subspacing for an indoor network, transition spaces such as halls or corridors also have to be considered. This study presents a subspacing process for creating an indoor network in a shopping mall. Furthermore, transition spaces are categorized and their subspacing is considered; halls and squares in the mall are specifically defined for subspacing. Finally, an implementation of the subspacing process for the indoor network is presented.
NASA Astrophysics Data System (ADS)
Simmons, Alex; Yang, Qianqian; Moroney, Timothy
2015-04-01
The numerical solution of fractional partial differential equations poses significant computational challenges in regard to efficiency as a result of the spatial nonlocality of the fractional differential operators. The dense coefficient matrices that arise from spatial discretisation of these operators mean that even one-dimensional problems can be difficult to solve using standard methods on grids comprising thousands of nodes or more. In this work we address this issue of efficiency for one-dimensional, nonlinear space-fractional reaction-diffusion equations with fractional Laplacian operators. We apply variable-order, variable-stepsize backward differentiation formulas in a Jacobian-free Newton-Krylov framework to advance the solution in time. A key advantage of this approach is the elimination of any requirement to form the dense matrix representation of the fractional Laplacian operator. We show how a banded approximation to this matrix, which can be formed and factorised efficiently, can be used as part of an effective preconditioner that accelerates convergence of the Krylov subspace iterative solver. Our approach also captures the full contribution from the nonlinear reaction term in the preconditioner, which is crucial for problems that exhibit stiff reactions. Numerical examples are presented to illustrate the overall effectiveness of the solver.
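The banded-preconditioner idea can be sketched in a few lines (illustrative only; the dense matrix below is a generic stand-in for a nonlocal operator, not the paper's fractional Laplacian discretisation, and the bandwidth and sizes are assumptions): keep only a narrow band of the dense matrix, factorise it cheaply, and supply it to GMRES as a preconditioner.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu, LinearOperator, gmres

# Dense matrix with slowly decaying off-diagonals, standing in for a
# discretised nonlocal operator (illustrative only, not the paper's matrix)
n = 200
idx = np.arange(n)
dist = np.abs(idx[:, None] - idx[None, :])
A = -1.0 / (1.0 + dist) ** 2.0
A[idx, idx] = 4.0  # make the matrix diagonally dominant

# Banded approximation: keep only entries within bandwidth k, then factorise
# the sparse band cheaply and use the factorisation as a preconditioner
k = 5
B = sp.csc_matrix(np.where(dist <= k, A, 0.0))
lu = splu(B)
M = LinearOperator((n, n), matvec=lu.solve)

b = np.ones(n)
iters = {"plain": 0, "prec": 0}
x_plain, _ = gmres(A, b, callback=lambda r: iters.__setitem__("plain", iters["plain"] + 1),
                   callback_type="pr_norm")
x_prec, _ = gmres(A, b, M=M, callback=lambda r: iters.__setitem__("prec", iters["prec"] + 1),
                  callback_type="pr_norm")
```

Because the band captures most of the operator, the preconditioned solve needs no more (typically far fewer) Krylov iterations than the unpreconditioned one, while the band factorisation avoids ever forming or factorising the full dense matrix.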
Learning Robust and Discriminative Subspace With Low-Rank Constraints.
Li, Sheng; Fu, Yun
2016-11-01
In this paper, we aim at learning robust and discriminative subspaces from noisy data. Subspace learning is widely used in extracting discriminative features for classification. However, when data are contaminated with severe noise, the performance of most existing subspace learning methods would be limited. Recent advances in low-rank modeling provide effective solutions for removing noise or outliers contained in sample sets, which motivates us to take advantage of low-rank constraints in order to exploit robust and discriminative subspaces for classification. In particular, we present a discriminative subspace learning method called the supervised regularization-based robust subspace (SRRS) approach, by incorporating the low-rank constraint. SRRS seeks low-rank representations from the noisy data, and learns a discriminative subspace from the recovered clean data jointly. A supervised regularization function is designed to make use of the class label information, and therefore to enhance the discriminability of the subspace. Our approach is formulated as a constrained rank-minimization problem. We design an inexact augmented Lagrange multiplier optimization algorithm to solve it. Unlike the existing sparse representation and low-rank learning methods, our approach learns a low-dimensional subspace from recovered data, and explicitly incorporates the supervised information. Our approach and some baselines are evaluated on the COIL-100, ALOI, Extended YaleB, FERET, AR, and KinFace databases. The experimental results demonstrate the effectiveness of our approach, especially when the data contain considerable noise or variations.
Optimizing Cubature for Efficient Integration of Subspace Deformations
An, Steven S.; Kim, Theodore; James, Doug L.
2009-01-01
We propose an efficient scheme for evaluating nonlinear subspace forces (and Jacobians) associated with subspace deformations. The core problem we address is efficient integration of the subspace force density over the 3D spatial domain. Similar to Gaussian quadrature schemes that efficiently integrate functions that lie in particular polynomial subspaces, we propose cubature schemes (multi-dimensional quadrature) optimized for efficient integration of force densities associated with particular subspace deformations, particular materials, and particular geometric domains. We support generic subspace deformation kinematics, and nonlinear hyperelastic materials. For an r-dimensional deformation subspace with O(r) cubature points, our method is able to evaluate subspace forces at O(r²) cost. We also describe composite cubature rules for runtime error estimation. Results are provided for various subspace deformation models, several hyperelastic materials (St. Venant-Kirchhoff, Mooney-Rivlin, Arruda-Boyce), and multimodal (graphics, haptics, sound) applications. We show dramatically better efficiency than traditional Monte Carlo integration. CR Categories: I.6.8 [Simulation and Modeling]: Types of Simulation—Animation, I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling—Physically based modeling G.1.4 [Mathematics of Computing]: Numerical Analysis—Quadrature and Numerical Differentiation PMID:19956777
Preconditioned iterative methods for fractional diffusion equation
NASA Astrophysics Data System (ADS)
Lin, Fu-Rong; Yang, Shi-Wei; Jin, Xiao-Qing
2014-01-01
In this paper, we are concerned with numerical methods for the solution of initial-boundary value problems of anomalous diffusion equations of order α ∈ (1, 2). The classical Crank-Nicolson method is used to discretize the fractional diffusion equation, and spatial extrapolation is then used to obtain temporally and spatially second-order accurate numerical estimates. Two preconditioned iterative methods, namely, the preconditioned generalized minimal residual (preconditioned GMRES) method and the preconditioned conjugate gradient for normal residual (preconditioned CGNR) method, are proposed to solve the relevant linear systems. Numerical experiments are given to illustrate the efficiency of the methods.
Scalable parallel Newton-Krylov solvers for discontinuous Galerkin discretizations
Persson, P.-O.
2008-12-31
We present techniques for implicit solution of discontinuous Galerkin discretizations of the Navier-Stokes equations on parallel computers. While a block-Jacobi method is simple and straightforward to parallelize, its convergence properties are poor except for simple problems. Therefore, we consider Newton-GMRES methods preconditioned with block-incomplete LU factorizations, with optimized element orderings based on a minimum discarded fill (MDF) approach. We discuss the difficulties with the parallelization of these methods, but also show that with a simple domain decomposition approach, most of the advantages of the block-ILU over the block-Jacobi preconditioner are still retained. The convergence is further improved by incorporating the matrix connectivities into the mesh partitioning process, which aims at minimizing the errors introduced from separating the partitions. We demonstrate the performance of the schemes for realistic two- and three-dimensional flow problems.
The variational subspace valence bond method
Fletcher, Graham D.
2015-04-07
The variational subspace valence bond (VSVB) method based on overlapping orbitals is introduced. VSVB provides variational support against collapse for the optimization of overlapping linear combinations of atomic orbitals (OLCAOs) using modified orbital expansions, without recourse to orthogonalization. OLCAOs have the advantage of being naturally localized, chemically intuitive (to individually model bonds and lone pairs, for example), and transferable between different molecular systems. Such features are exploited to avoid key computational bottlenecks. Since the OLCAOs can be doubly occupied, VSVB can access very large problems, and calculations on systems with several hundred atoms are presented.
Subspace Signal Processing in Structured Noise
1990-12-01
...subspace spanned by a matrix of the form (2.1); such a matrix is called a Vandermonde matrix when m = n [GVL89]. We follow Demeure [Dem89] in... Estimation of squared bias: we now consider how the squared bias may be estimated from the data for a given low-rank projection P. Recall that the squared bias is... The mathematics used here is not new. Since it is mostly of a linear algebraic nature, it can be found in books such as the classic Matrix Computations by Golub
Ischemic preconditioning. Experimental facts and clinical perspective.
Post, H; Heusch, G
2002-12-01
Brief periods of non-lethal ischemia and reperfusion render the myocardium more resistant to subsequent ischemia. This adaptation occurs in a biphasic pattern: the first phase is active immediately and lasts for 2-3 hrs (early preconditioning); the second starts at 24 hrs and lasts until 72 hrs after the initial ischemia (delayed preconditioning) and requires genomic activation with de novo protein synthesis. Early preconditioning is more potent than delayed preconditioning in reducing infarct size; delayed preconditioning also attenuates myocardial stunning. Early preconditioning depends on the ischemia-induced release of adenosine and opioids and, to a lesser degree, also bradykinin and prostaglandins. These molecules activate G-protein-coupled receptors, initiate the activation of KATP channels and the generation of oxygen radicals, and stimulate a series of protein kinases, with essential roles for protein kinase C, tyrosine kinases, and members of the MAP kinase family. Delayed preconditioning is triggered by a similar sequence of events, but in addition depends essentially on eNOS-derived NO. Both early and delayed preconditioning can be pharmacologically mimicked by exogenous adenosine, opioids, NO, and activators of protein kinase C. Newly synthesized proteins associated with delayed preconditioning comprise iNOS, COX-2, manganese superoxide dismutase, and possibly heat shock proteins. The final mechanism of protection by preconditioning is as yet unknown; energy metabolism, KATP channels, the sodium-proton exchanger, stabilisation of the cytoskeleton, and volume regulation will be discussed. For ethical reasons, evidence for ischemic preconditioning in humans is hard to provide. Clinical findings that parallel experimental ischemic preconditioning are reduced ST-segment elevation and pain during repetitive PTCA or exercise tests, a better prognosis of patients in whom myocardial infarction was preceded by angina, and reduced serum markers of myocardial necrosis after
A simple subspace approach for speech denoising.
Manfredi, C; Daniello, M; Bruscaglioni, P
2001-01-01
For pathological voices, hoarseness is mainly due to airflow turbulence in the vocal tract and is often referred to as noise. This paper focuses on the enhancement of speech signals that are supposedly degraded by additive white noise. Speech enhancement is performed in the time domain, by means of a fast and reliable subspace approach. A low-order singular value decomposition (SVD) allows separating the signal and the noise contributions in subsequent data frames of the analysed speech signal. The noise component is thus removed from the signal, and the filtered signal is reconstructed along the directions spanned by the eigenvectors associated with the signal-subspace eigenvalues only, thus giving enhanced voice quality. This approach was tested on synthetic data, showing higher performance in terms of increased SNR when compared with linear prediction (LP) filtering. It was also successfully applied to real data, from hoarse voices of patients who had undergone partial cordectomy. The simple structure of the proposed technique allows a real-time implementation, suitable for portable device realisation, as an aid to dysphonic speakers. It could be useful for reducing the effort in speaking, which is closely related to social problems due to awkwardness of voice.
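A minimal version of the SVD subspace idea can be sketched as follows (an illustration of the general technique on a synthetic signal, not the authors' exact implementation; the window length and rank are assumptions): embed the signal in a Hankel trajectory matrix, project onto the dominant singular directions (the signal subspace), and reconstruct by anti-diagonal averaging.

```python
import numpy as np

def svd_denoise(x, L=40, rank=2):
    """Subspace denoising: low-rank projection of the Hankel trajectory matrix."""
    N = len(x)
    K = N - L + 1
    H = np.column_stack([x[i:i + L] for i in range(K)])   # L x K Hankel matrix
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    Hr = (U[:, :rank] * s[:rank]) @ Vt[:rank]             # keep signal subspace only
    # Hankelisation: average the entries along each anti-diagonal
    y = np.zeros(N)
    cnt = np.zeros(N)
    for j in range(K):
        y[j:j + L] += Hr[:, j]
        cnt[j:j + L] += 1
    return y / cnt

rng = np.random.default_rng(0)
t = np.arange(400)
clean = np.sin(2 * np.pi * 0.05 * t)            # a single sinusoid has Hankel rank 2
noisy = clean + 0.3 * rng.standard_normal(t.size)
den = svd_denoise(noisy, L=40, rank=2)
```

Because a noiseless sinusoid yields a rank-2 trajectory matrix, the rank-2 projection retains the signal while discarding most of the noise energy spread across the remaining singular directions.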
Central subspace dimensionality reduction using covariance operators.
Kim, Minyoung; Pavlovic, Vladimir
2011-04-01
We consider the task of dimensionality reduction informed by real-valued multivariate labels. The problem is often treated as Dimensionality Reduction for Regression (DRR), whose goal is to find a low-dimensional representation, the central subspace, of the input data that preserves the statistical correlation with the targets. A class of DRR methods exploits the notion of inverse regression (IR) to discover central subspaces. Whereas most existing IR techniques rely on explicit output space slicing, we propose a novel method called the Covariance Operator Inverse Regression (COIR) that generalizes IR to nonlinear input/output spaces without explicit target slicing. COIR's unique properties make DRR applicable to problem domains with high-dimensional output data corrupted by potentially significant amounts of noise. Unlike recent kernel dimensionality reduction methods that employ iterative nonconvex optimization, COIR yields a closed-form solution. We also establish the link between COIR, other DRR techniques, and popular supervised dimensionality reduction methods, including canonical correlation analysis and linear discriminant analysis. We then extend COIR to semi-supervised settings where many of the input points lack their labels. We demonstrate the benefits of COIR on several important regression problems in both fully supervised and semi-supervised settings.
A practical sub-space adaptive filter.
Zaknich, A
2003-01-01
A Sub-Space Adaptive Filter (SSAF) model is developed using, as a basis, the Modified Probabilistic Neural Network (MPNN) and its extension, the Tuneable Approximate Piecewise Linear Regression (TAPLR) model. The TAPLR model can be adjusted by a single smoothing parameter continuously from the best piecewise linear model in each sub-space to the best approximately piecewise linear model over the whole data space. A suitable value in between ensures that all neighbouring piecewise linear models merge together smoothly at their boundaries. This model was developed by altering the form of the MPNN, a network used for general nonlinear regression. The MPNN's special structure allows it to be easily used to model a process by appropriately weighting the piecewise linear models associated with each of the network's radial basis functions. The model has now been further extended to allow each piecewise linear model section to be adapted separately as new data flows through it. By doing this, the proposed SSAF model represents a learning/filtering method for nonlinear processes that provides one solution to the stability/plasticity dilemma associated with standard adaptive filters.
Classes of Invariant Subspaces for Some Operator Algebras
NASA Astrophysics Data System (ADS)
Hamhalter, Jan; Turilova, Ekaterina
2014-10-01
New results are proved showing connections between structural properties of von Neumann algebras and order-theoretic properties of the structures of invariant subspaces given by them. We show that for any properly infinite von Neumann algebra M there is an affiliated subspace such that all important subspace classes living on it are different. Moreover, we show that it can be chosen such that the sets of σ-additive measures on its subspace classes are empty. We generalize a measure-theoretic criterion for completeness of inner product spaces to affiliated subspaces corresponding to a Type I factor with finite-dimensional commutant. We summarize hitherto known results in this area, discuss their importance for the mathematical foundations of quantum theory, and outline perspectives for further research.
Vecharynski, Eugene; Yang, Chao; Pask, John E.
2015-02-25
Here, we present an iterative algorithm for computing an invariant subspace associated with the algebraically smallest eigenvalues of a large sparse or structured Hermitian matrix A. We are interested in the case in which the dimension of the invariant subspace is large (e.g., over several hundreds or thousands) even though it may still be small relative to the dimension of A. These problems arise from, for example, density functional theory (DFT) based electronic structure calculations for complex materials. The key feature of our algorithm is that it performs fewer Rayleigh–Ritz calculations compared to existing algorithms such as the locally optimal block preconditioned conjugate gradient or the Davidson algorithm. It is a block algorithm, and hence can take advantage of efficient BLAS3 operations and be implemented with multiple levels of concurrency. We discuss a number of practical issues that must be addressed in order to implement the algorithm efficiently on a high performance computer.
Preconditioning for traumatic brain injury
Yokobori, Shoji; Mazzeo, Anna T; Hosein, Khadil; Gajavelli, Shyam; Dietrich, W. Dalton; Bullock, M. Ross
2016-01-01
Traumatic brain injury (TBI) treatment is now focused on the prevention of primary injury and the reduction of secondary injury. However, no single effective treatment is available as yet for the mitigation of traumatic brain damage in humans. Both chemical and environmental stresses applied before injury have been shown to induce subsequent protection against post-TBI neuronal death. This concept, termed "preconditioning," is achieved by exposure to different pre-injury stressors to induce "tolerance" to the effect of the TBI. However, the precise mechanisms underlying this "tolerance" phenomenon are not fully understood in TBI, and therefore even less information is available about possible indications in clinical TBI patients. In this review we will summarize TBI pathophysiology and discuss existing animal studies demonstrating the efficacy of preconditioning in diffuse and focal types of TBI. We will also review other non-TBI preconditioning studies, including ischemic, environmental, and chemical preconditioning, which may be relevant to TBI. To date, no clinical studies exist in this field, and we speculate on possible future clinical situations in which pre-TBI preconditioning could be considered. PMID:24323189
A precondition prover for analogy.
Bledsoe, W W
1995-01-01
We describe here a prover PC (precondition) that normally acts as an ordinary theorem prover, but which returns a 'precondition' when it is unable to prove the given formula. If F is the formula to be proved and PC returns the precondition Q, then (Q-->F) is a theorem (which PC can prove). This prover, PC, uses a proof-plan. In its simplest mode, when there is no proof-plan, it acts like ordinary abduction. We show here how this method can be used to derive certain proofs by analogy. To do this, it uses a proof-plan from a given guiding proof to help construct the proof of a similar theorem, by 'debugging' (automatically) that proof-plan. We show here the analogy proofs of a few simple example theorems and one hard pair, Ex4 and Ex4L. The given proof-plan for Ex4 is used by the system to prove Ex4 automatically; that same proof-plan is then used to prove Ex4L, during which the proof-plan is 'debugged' (automatically). These two examples are similar to two other, more difficult, theorems from the theory of resolution, namely GCR (the ground completeness of resolution) and GCLR (the ground completeness of lock resolution). GCR and GCLR have also been handled, in essence, by this system but not completed in all their details.
Preconditioning of the HiFi Code by Linear Discretization on the Gauss-Lobatto-Legendre Nodes
NASA Astrophysics Data System (ADS)
Glasser, A. H.; Lukin, V. S.
2013-10-01
The most challenging aspect of extended MHD simulation is the scaling of computational time as the problem size is scaled up. The use of high-order spectral elements, as in the HiFi code, is useful for handling multiple length scales and strong anisotropy, but detailed code profiling studies show that CPU time increases rapidly with increasing np, the polynomial degree of the spectral elements, due to the cost of Jacobian matrix formation and solution. We have implemented a method of matrix preconditioning based on linear discretization of the Jacobian matrix on the Gauss-Lobatto-Legendre interpolatory nodes. The resulting matrix has far fewer nonzero elements than the full Jacobian and shares the same vector format. The full solution is then obtained by matrix-free Newton-Krylov methods, which converge rapidly because the preconditioner provides an accurate approximation to the full problem. Scaling studies will be presented for a variety of applications.
Preconditioned iterations to calculate extreme eigenvalues
Brand, C.W.; Petrova, S.
1994-12-31
Common iterative algorithms to calculate a few extreme eigenvalues of a large, sparse matrix are Lanczos methods or power iterations. They converge at a rate proportional to the separation of the extreme eigenvalues from the rest of the spectrum. Appropriate preconditioning improves the separation of the eigenvalues. Davidson's method and its generalizations exploit this fact. The authors examine a preconditioned iteration that resembles a truncated version of Davidson's method with a different preconditioning strategy.
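The effect of preconditioning on an extreme-eigenvalue iteration can be illustrated with SciPy's LOBPCG, a related preconditioned eigensolver (a sketch only, with a toy matrix and a simple Jacobi preconditioner; this is not the truncated-Davidson scheme the abstract describes):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lobpcg

# Tridiagonal matrix with widely spread diagonal and weak coupling: the
# smallest eigenvalues sit near 1, 2, 3
n = 400
d = np.arange(1, n + 1, dtype=float)
A = sp.diags([0.1 * np.ones(n - 1), d, 0.1 * np.ones(n - 1)],
             [-1, 0, 1], format="csr")

# Jacobi (diagonal) preconditioner: a cheap approximate inverse of A
M = sp.diags(1.0 / d)

rng = np.random.default_rng(1)
X = rng.standard_normal((n, 3))  # block of 3 starting vectors
vals, vecs = lobpcg(A, X, M=M, largest=False, tol=1e-8, maxiter=200)
```

The preconditioner rescales the spectrum so that the wanted eigenvalues are well separated from the rest, which is exactly the mechanism the abstract describes for Davidson-type methods.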
Faces from sketches: a subspace synthesis approach
NASA Astrophysics Data System (ADS)
Li, Yung-hui; Savvides, Marios
2006-04-01
In real-life scenarios, we may need to perform face recognition for identification when only a sketch of the face is available, for example, when police try to identify suspects from sketches drawn by artists according to witnesses' descriptions, while the gallery consists of real face images acquired from video surveillance. The state-of-the-art approach to this problem transforms all real face images into sketches and performs recognition in the sketch domain. We propose the opposite, which is a better approach: generating a realistic face image from the composite sketch using a hybrid subspace method, and then building an illumination-tolerant correlation filter that can recognize the person under different illumination variations. We show experimental results on the CMU PIE (Pose, Illumination, and Expression) database demonstrating the effectiveness of our novel approach.
Infrared face recognition using linear subspace analysis
NASA Astrophysics Data System (ADS)
Ge, Wei; Wang, Dawei; Cheng, Yuqi; Zhu, Ming
2009-10-01
Infrared imaging offers the key advantage over visible imaging of being invariant to illumination changes for face recognition. In this paper, after introducing the main methods of linear subspace analysis, such as Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), and Fast Independent Component Analysis (FastICA), we investigate the application of these methods to the recognition of the infrared face images provided by the OTCBVS workshop, and compare their advantages and disadvantages. Experimental results show that the combination of PCA and LDA leads to better classification performance than PCA or LDA alone, while the FastICA approach achieves the best classification performance, with an improvement of nearly 5% over the combined approach.
Signal subspace registration of 3D images
NASA Astrophysics Data System (ADS)
Soumekh, Mehrdad
1998-06-01
This paper addresses the problem of fusing the information content of two uncalibrated sensors. This problem arises in registering images of a scene when it is viewed via two different sensory systems, or detecting change in a scene when it is viewed at two different time points by a sensory system (or via two different sensory systems or observation channels). We are concerned with sensory systems which have not only a relative shift, scaling and rotational calibration error, but also an unknown point spread function (that is time-varying for a single sensor, or different for two sensors). By modeling one image in terms of an unknown linear combination of the other image, its powers and their spatially-transformed (shift, rotation and scaling) versions, a signal subspace processing is developed for fusing uncalibrated sensors. Numerical results with realistic 3D magnetic resonance images of a patient with multiple sclerosis, which are acquired at two different time points, are provided.
Robust video hashing via multilinear subspace projections.
Li, Mu; Monga, Vishal
2012-10-01
The goal of video hashing is to design hash functions that summarize videos by short fingerprints or hashes. While traditional applications of video hashing lie in database searches and content authentication, the emergence of websites such as YouTube and DailyMotion poses a challenging problem of anti-piracy video search. That is, hashes or fingerprints of an original video (provided to YouTube by the content owner) must be matched against those uploaded to YouTube by users to identify instances of "illegal" or undesirable uploads. Because the uploaded videos invariably differ from the original in their digital representation (owing to incidental or malicious distortions), robust video hashes are desired. We model videos as order-3 tensors and use multilinear subspace projections, such as a reduced rank parallel factor analysis (PARAFAC) to construct video hashes. We observe that, unlike most standard descriptors of video content, tensor-based subspace projections can offer excellent robustness while effectively capturing the spatio-temporal essence of the video for discriminability. We introduce randomization in the hash function by dividing the video into (secret key based) pseudo-randomly selected overlapping sub-cubes to prevent against intentional guessing and forgery. Detection theoretic analysis of the proposed hash-based video identification is presented, where we derive analytical approximations for error probabilities. Remarkably, these theoretic error estimates closely mimic empirically observed error probability for our hash algorithm. Furthermore, experimental receiver operating characteristic (ROC) curves reveal that the proposed tensor-based video hash exhibits enhanced robustness against both spatial and temporal video distortions over state-of-the-art video hashing techniques.
Management of Preconditioned Calves and Impacts of Preconditioning.
Hilton, W Mark
2015-07-01
When studying the practice of preconditioning (PC) calves, many factors need to be examined to determine whether cow-calf producers should make this investment. Factors such as average daily gain, feed efficiency, available labor, length of the PC period, genetics, and marketing options must be analyzed. The health sales price advantage is an additional benefit of producing and selling PC calves but not the sole determinant of PC's financial feasibility. Studies show that a substantial advantage of PC is the selling of additional pounds at a cost of gain well below the marginal return of producing those additional pounds.
An alternative subspace approach to EEG dipole source localization.
Xu, Xiao-Liang; Xu, Bobby; He, Bin
2004-01-21
In the present study, we investigate a new approach to electroencephalography (EEG) three-dimensional (3D) dipole source localization by using a non-recursive subspace algorithm called FINES. In estimating source dipole locations, the present approach employs projections onto a subspace spanned by a small set of particular vectors (FINES vector set) in the estimated noise-only subspace instead of the entire estimated noise-only subspace in the case of classic MUSIC. The subspace spanned by this vector set is, in the sense of principal angle, closest to the subspace spanned by the array manifold associated with a particular brain region. By incorporating knowledge of the array manifold in identifying FINES vector sets in the estimated noise-only subspace for different brain regions, the present approach is able to estimate sources with enhanced accuracy and spatial resolution, thus enhancing the capability of resolving closely spaced sources and reducing estimation errors. The present computer simulations show, in EEG 3D dipole source localization, that compared to classic MUSIC, FINES has (1) better resolvability of two closely spaced dipolar sources and (2) better estimation accuracy of source locations. In comparison with RAP-MUSIC, FINES' performance is also better for the cases studied when the noise level is high and/or correlations among dipole sources exist.
Constrained Low-Rank Representation for Robust Subspace Clustering.
Wang, Jing; Wang, Xiao; Tian, Feng; Liu, Chang Hong; Yu, Hongchuan
2016-10-31
Subspace clustering aims to partition data points drawn from a union of subspaces according to their underlying subspaces. For accurate semisupervised subspace clustering, all data that have a must-link constraint or the same label should be grouped into the same underlying subspace. However, this is not guaranteed in existing approaches. Moreover, these approaches require additional parameters for incorporating supervision information. In this paper, we propose a constrained low-rank representation (CLRR) for robust semisupervised subspace clustering, based on a novel constraint matrix constructed in this paper. While seeking the low-rank representation of the data, CLRR explicitly incorporates the supervision information as hard constraints to enhance the discriminating power of the optimal representation. This strategy can be further extended to other state-of-the-art methods, such as sparse subspace clustering. We theoretically prove that the optimal representation matrix has both a block-diagonal structure with clean data and a semisupervised grouping effect with noisy data. We have also developed an efficient optimization algorithm for CLRR based on the alternating direction method of multipliers. Our experimental results demonstrate that CLRR outperforms existing methods.
An alternative subspace approach to EEG dipole source localization
NASA Astrophysics Data System (ADS)
Xu, Xiao-Liang; Xu, Bobby; He, Bin
2004-01-01
In the present study, we investigate a new approach to electroencephalography (EEG) three-dimensional (3D) dipole source localization by using a non-recursive subspace algorithm called FINES. In estimating source dipole locations, the present approach employs projections onto a subspace spanned by a small set of particular vectors (FINES vector set) in the estimated noise-only subspace instead of the entire estimated noise-only subspace in the case of classic MUSIC. The subspace spanned by this vector set is, in the sense of principal angle, closest to the subspace spanned by the array manifold associated with a particular brain region. By incorporating knowledge of the array manifold in identifying FINES vector sets in the estimated noise-only subspace for different brain regions, the present approach is able to estimate sources with enhanced accuracy and spatial resolution, thus enhancing the capability of resolving closely spaced sources and reducing estimation errors. The present computer simulations show, in EEG 3D dipole source localization, that compared to classic MUSIC, FINES has (1) better resolvability of two closely spaced dipolar sources and (2) better estimation accuracy of source locations. In comparison with RAP-MUSIC, FINES' performance is also better for the cases studied when the noise level is high and/or correlations among dipole sources exist.
Learning Markov Random Walks for robust subspace clustering and estimation.
Liu, Risheng; Lin, Zhouchen; Su, Zhixun
2014-11-01
Markov Random Walks (MRW) has proven to be an effective way to understand spectral clustering and embedding. However, because it lacks a global structural measure, conventional MRW (e.g., with a Gaussian kernel) cannot handle data points drawn from a mixture of subspaces. In this paper, we introduce a regularized MRW learning model, using a low-rank penalty to constrain the global subspace structure, for subspace clustering and estimation. In our framework, both the local pairwise similarity and the global subspace structure can be learnt from the transition probabilities of the MRW. We prove that, under suitable conditions, our local/global criteria can exactly capture the multiple-subspace structure and learn a low-dimensional embedding for the data that yields the true segmentation of the subspaces. To improve robustness in real situations, we also propose an extension of the MRW learning model that integrates transition matrix learning and error correction in a unified framework. Experimental results on both synthetic data and real applications demonstrate that our proposed MRW learning model and its robust extension outperform state-of-the-art subspace clustering methods.
Structure-Based Subspace Method for Multichannel Blind System Identification
NASA Astrophysics Data System (ADS)
Mayyala, Qadri; Abed-Meraim, Karim; Zerguine, Azzedine
2017-08-01
In this work, a novel subspace-based method for blind identification of multichannel finite impulse response (FIR) systems is presented. Here, we directly exploit the embedded Toeplitz channel structure in the signal linear model to build a quadratic form whose minimization leads to the desired channel estimate up to a scalar factor. The method can be extended to estimate any predefined linear structure, e.g., Hankel, that is usually encountered in linear systems. Simulation findings are provided to highlight the advantages of the new structure-based subspace (SSS) method over the standard subspace (SS) method in certain adverse identification scenarios.
Newton-Krylov-Schwarz algorithms for the 2D full potential equation
Cai, Xiao-Chuan; Gropp, W.D.; Keyes, D.E.
1996-12-31
We study parallel two-level overlapping Schwarz algorithms for solving nonlinear finite element problems, in particular, for the full potential equation of aerodynamics discretized in two dimensions with bilinear elements. The main algorithm, Newton-Krylov-Schwarz (NKS), employs an inexact finite-difference Newton method and a Krylov space iterative method, with a two-level overlapping Schwarz method as a preconditioner. We demonstrate that NKS, combined with a density upwinding continuation strategy for problems with weak shocks, can be made robust for this class of mixed elliptic-hyperbolic nonlinear partial differential equations, with proper specification of several parameters. We study upwinding parameters, inner convergence tolerance, coarse grid density, subdomain overlap, and the level of fill-in in the incomplete factorization, and report favorable choices for numerical convergence rate and overall execution time on a distributed-memory parallel computer.
Parallel Newton-Krylov-Schwarz algorithms for the transonic full potential equation
NASA Technical Reports Server (NTRS)
Cai, Xiao-Chuan; Gropp, William D.; Keyes, David E.; Melvin, Robin G.; Young, David P.
1996-01-01
We study parallel two-level overlapping Schwarz algorithms for solving nonlinear finite element problems, in particular, for the full potential equation of aerodynamics discretized in two dimensions with bilinear elements. The overall algorithm, Newton-Krylov-Schwarz (NKS), employs an inexact finite-difference Newton method and a Krylov space iterative method, with a two-level overlapping Schwarz method as a preconditioner. We demonstrate that NKS, combined with a density upwinding continuation strategy for problems with weak shocks, is robust and economical for this class of mixed elliptic-hyperbolic nonlinear partial differential equations, with proper specification of several parameters. We study upwinding parameters, inner convergence tolerance, coarse grid density, subdomain overlap, and the level of fill-in in the incomplete factorization, and report their effect on numerical convergence rate, overall execution time, and parallel efficiency on a distributed-memory parallel computer.
A Newton-Krylov Solver for Implicit Solution of Hydrodynamics in Core Collapse Supernovae
Reynolds, D R; Swesty, F D; Woodward, C S
2008-06-12
This paper describes an implicit approach and nonlinear solver for solution of radiation-hydrodynamic problems in the context of supernovae and proto-neutron star cooling. The robust approach applies Newton-Krylov methods and overcomes the difficulties of discontinuous limiters in the discretized equations and scaling of the equations over wide ranges of physical behavior. We discuss these difficulties, our approach for overcoming them, and numerical results demonstrating accuracy and efficiency of the method.
Solving the time-fractional Schrödinger equation by Krylov projection methods
NASA Astrophysics Data System (ADS)
Garrappa, Roberto; Moret, Igor; Popolizio, Marina
2015-07-01
The time-fractional Schrödinger equation is a fundamental topic in physics and its numerical solution is still an open problem. Here we start from the possibility to express its solution by means of the Mittag-Leffler function; then we analyze some approaches based on the Krylov projection methods to approximate this function; their convergence properties are discussed, together with related issues. Numerical tests are presented to confirm the strength of the approach under investigation.
40 CFR 1065.518 - Engine preconditioning.
Code of Federal Regulations, 2014 CFR
2014-07-01
... engine and begin the cold soak as described in § 1065.530(a)(1). (2) Hot-start transient cycle... same ones that apply for emission testing: (1) Cold-start transient cycle. Precondition the engine by running at least one hot-start transient cycle. We will precondition your engine by running two...
Characterizing L1-norm best-fit subspaces
NASA Astrophysics Data System (ADS)
Brooks, J. Paul; Dulá, José H.
2017-05-01
Fitting affine objects to data is the basis of many tools and methodologies in statistics, machine learning, and signal processing. The L1 norm is often employed to produce subspaces exhibiting a robustness to outliers and faulty observations. The L1-norm best-fit subspace problem is directly formulated as a nonlinear, nonconvex, and nondifferentiable optimization problem. The case when the subspace is a hyperplane can be solved to global optimality efficiently by solving a series of linear programs. The problem of finding the best-fit line has recently been shown to be NP-hard. We present necessary conditions for optimality for the best-fit subspace problem, and use them to characterize properties of optimal solutions.
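The abstract's central claim, that the L1 norm yields fits robust to outliers, is easy to demonstrate. The sketch below is not the paper's linear-programming algorithm for the hyperplane case; it approximates a least-absolute-deviations line fit by iteratively reweighted least squares (a minimal illustration, assuming numpy; the function name and tolerances are ours):

```python
import numpy as np

def l1_line_fit(x, y, iters=50, eps=1e-8):
    # least-absolute-deviations fit y ~ a*x + b via iteratively
    # reweighted least squares (an approximation of the L1 solution;
    # the paper's hyperplane case uses a series of linear programs)
    X = np.column_stack([x, np.ones_like(x)])
    w = np.ones_like(y)
    for _ in range(iters):
        sw = np.sqrt(w)
        coef, *_ = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)
        r = y - X @ coef
        w = 1.0 / np.maximum(np.abs(r), eps)   # downweight large residuals
    return coef  # (slope, intercept)
```

A single gross outlier barely moves this fit, whereas an ordinary least-squares fit would be pulled toward it.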
Manifold learning-based subspace distance for machinery damage assessment
NASA Astrophysics Data System (ADS)
Sun, Chuang; Zhang, Zhousuo; He, Zhengjia; Shen, Zhongjie; Chen, Binqiang
2016-03-01
Damage assessment is essential for maintaining the safety and reliability of machinery components, and vibration analysis is an effective way to carry it out. In this paper, a damage index is designed by performing manifold distance analysis on vibration signals. To calculate the index, vibration signals are first collected, and feature extraction is carried out to obtain statistical features that capture the signal characteristics comprehensively. Then, a manifold learning algorithm is used to decompose the feature matrix into a subspace, the manifold subspace. The manifold learning algorithm seeks to preserve the local relationships in the feature matrix, which is more meaningful for damage assessment. Finally, the Grassmann distance between manifold subspaces is defined as the damage index. The Grassmann distance, which reflects the manifold structure, is a suitable metric for measuring the distance between subspaces on the manifold. The damage index is applied to damage assessment of a rotor and a bearing, and the results validate its effectiveness for damage assessment of machinery components.
Balian-Low phenomenon for subspace Gabor frames
NASA Astrophysics Data System (ADS)
Gabardo, Jean-Pierre; Han, Deguang
2004-08-01
In this work, the Balian-Low theorem is extended to Gabor (also called Weyl-Heisenberg) frames for subspaces and, more particularly, its relationship with the unique Gabor dual property for subspace Gabor frames is pointed out. To achieve this goal, the subspace Gabor frames which have a unique Gabor dual of type I (resp. type II) are defined and characterized in terms of the Zak transform for the rational parameter case. This characterization is then used to prove the Balian-Low theorem for subspace Gabor frames. Along the same line, the same characterization is used to prove a duality theorem for the unique Gabor dual property which is an analogue of the Ron and Shen duality theorem.
Subspace-based prototyping and classification of chromosome images.
Wu, Qiang; Liu, Zhongmin; Chen, Tiehan; Xiong, Zixiang; Castleman, Kenneth R
2005-09-01
Chromosomes are essential genomic information carriers. Chromosome classification constitutes an important part of routine clinical and cancer cytogenetics analysis. Cytogeneticists perform visual interpretation of banded chromosome images according to the diagrammatic models of various chromosome types known as the ideograms, which mimic artists' depiction of the chromosomes. In this paper, we present a subspace-based approach for automated prototyping and classification of chromosome images. We show that 1) prototype chromosome images can be quantitatively synthesized from a subspace to objectively represent the chromosome images of a given type or population, and 2) the transformation coefficients (or projected coordinate values of sample chromosomes) in the subspace can be utilized as the extracted feature measurements for classification purposes. We examine in particular the formation of three well-known subspaces, namely the ones derived from principal component analysis (PCA), Fisher's linear discriminant analysis, and the discrete cosine transform (DCT). These subspaces are implemented and evaluated for prototyping two-dimensional (2-D) images and for classification of both 2-D images and one-dimensional profiles of chromosomes. Experimental results show that previously unseen prototype chromosome images of high visual quality can be synthesized using the proposed subspace-based method, and that PCA and the DCT significantly outperform the well-known benchmark technique of weighted density distribution functions in classifying 2-D chromosome images.
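As a generic illustration of the subspace machinery this abstract describes, a PCA subspace can serve both roles at once: the projected coordinates act as classification features, and prototypes can be re-synthesized from subspace coordinates. A minimal sketch, assuming numpy (the data and function names are ours, not the paper's chromosome pipeline):

```python
import numpy as np

def pca_subspace(X, r):
    # X: samples x features; return the mean and top-r principal directions
    mu = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:r]

def project(x, mu, P):
    # coordinates of x in the subspace (usable as classification features)
    return (x - mu) @ P.T

def prototype(mu, P, coords):
    # synthesize a sample back from its subspace coordinates
    return mu + coords @ P
```

Classification then reduces to comparing projected coordinates (e.g., to per-class means), and prototyping to calling `prototype` on representative coordinates.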
Robust PCA With Partial Subspace Knowledge
NASA Astrophysics Data System (ADS)
Zhan, Jinchun; Vaswani, Namrata
2015-07-01
In recent work, robust Principal Components Analysis (PCA) has been posed as a problem of recovering a low-rank matrix $\mathbf{L}$ and a sparse matrix $\mathbf{S}$ from their sum, $\mathbf{M} := \mathbf{L} + \mathbf{S}$, and a provably exact convex optimization solution called PCP has been proposed. This work studies the following problem. Suppose that we have partial knowledge about the column space of the low-rank matrix $\mathbf{L}$. Can we use this information to improve the PCP solution, i.e., allow recovery under weaker assumptions? We propose here a simple but useful modification of the PCP idea, called modified-PCP, that allows us to use this knowledge. We derive its correctness result, which shows that, when the available subspace knowledge is accurate, modified-PCP indeed requires significantly weaker incoherence assumptions than PCP. Extensive simulations are also used to illustrate this. Comparisons with PCP and other existing work are shown for a stylized real application as well. Finally, we explain how this problem naturally occurs in many applications involving time series data, i.e., in what is called the online or recursive robust PCA problem. A corollary for this case is also given.
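The convex PCP program the abstract starts from, min ||L||_* + lam*||S||_1 subject to L + S = M, can be sketched with a standard ADMM iteration (a minimal baseline sketch, assuming numpy; the lam and mu defaults are common heuristics from the PCP literature, and the paper's modified-PCP subspace-knowledge step is not shown):

```python
import numpy as np

def svt(X, tau):
    # singular value thresholding: prox operator of tau * ||.||_*
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def soft(X, tau):
    # entrywise soft thresholding: prox operator of tau * ||.||_1
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def pcp(M, iters=500):
    # principal component pursuit via ADMM:
    #   min ||L||_* + lam * ||S||_1   s.t.   L + S = M
    m, n = M.shape
    lam = 1.0 / np.sqrt(max(m, n))           # standard PCP weight
    mu = m * n / (4.0 * np.abs(M).sum())     # common penalty heuristic
    S = np.zeros_like(M)
    Y = np.zeros_like(M)                     # scaled dual variable
    for _ in range(iters):
        L = svt(M - S + Y / mu, 1.0 / mu)
        S = soft(M - L + Y / mu, lam / mu)
        Y += mu * (M - L - S)
    return L, S
```

On a well-conditioned low-rank-plus-sparse input, this recovers the two components to within a few percent; modified-PCP's contribution is to weaken the assumptions this recovery needs when part of the column space is known.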
NASA Astrophysics Data System (ADS)
Sanan, Patrick; May, Dave A.; Schenk, Olaf; Bollhöffer, Matthias
2017-04-01
Geodynamics simulations typically involve the repeated solution of saddle-point systems arising from the Stokes equations. These computations often dominate the time to solution. Direct solvers are known for their robustness and "black box" properties, yet exhibit superlinear memory requirements and time to solution. More complex multilevel-preconditioned iterative solvers have been very successful for large problems, yet their use can require more effort from the practitioner in terms of setting up a solver and choosing its parameters. We champion an intermediate approach, based on leveraging the power of modern incomplete factorization techniques for indefinite symmetric matrices. These provide an interesting alternative in situations in between the regimes where direct solvers are an obvious choice and those where complex, scalable, iterative solvers are an obvious choice. That is, much like their relatives for definite systems, ILU/ICC-preconditioned Krylov methods and ILU/ICC-smoothed multigrid methods, the approaches demonstrated here provide a useful addition to the solver toolkit. We present results with a simple, PETSc-based, open-source Q2-Q1 (Taylor-Hood) finite element discretization, in 2 and 3 dimensions, with the Stokes and Lamé (linear elasticity) saddle-point systems. Attention is paid to cases in which full-operator incomplete factorization gives an improvement in time to solution over direct solution methods (which may not even be feasible due to memory limitations), without the complication of more complex (or at least, less automatic) preconditioners or smoothers. As an important factor in the relevance of these tools is their availability in portable software, we also describe open-source PETSc interfaces to the factorization routines.
Learning Discriminative Subspaces on Random Contrasts for Image Saliency Analysis.
Fang, Shu; Li, Jia; Tian, Yonghong; Huang, Tiejun; Chen, Xiaowu
2017-05-01
In visual saliency estimation, one of the most challenging tasks is to distinguish targets and distractors that share certain visual attributes. With the observation that such targets and distractors can sometimes be easily separated when projected to specific subspaces, we propose to estimate image saliency by learning a set of discriminative subspaces that perform the best in popping out targets and suppressing distractors. Toward this end, we first conduct principal component analysis on massive randomly selected image patches. The principal components, which correspond to the largest eigenvalues, are selected to construct candidate subspaces since they often demonstrate impressive abilities to separate targets and distractors. By projecting images onto various subspaces, we further characterize each image patch by its contrasts against randomly selected neighboring and peripheral regions. In this manner, the probable targets often have the highest responses, while the responses at background regions become very low. Based on such random contrasts, an optimization framework with pairwise binary terms is adopted to learn the saliency model that best separates salient targets and distractors by optimally integrating the cues from various subspaces. Experimental results on two public benchmarks show that the proposed approach outperforms 16 state-of-the-art methods in human fixation prediction.
A Subspace Approach to Spectral Quantification for MR Spectroscopic Imaging.
Li, Yudu; Lam, Fan; Clifford, Bryan; Liang, Zhi-Pei
2017-08-18
The aim of this work is to provide a new approach for incorporating both spatial and spectral priors into the solution of the spectral quantification problem for magnetic resonance spectroscopic imaging (MRSI). A novel signal model is proposed, which represents the spectral distribution of each molecule as a subspace and the entire spectrum as a union of subspaces. Based on this model, spectral quantification can be solved in two steps: (a) subspace estimation based on the empirical distributions of the spectral parameters estimated using spectral priors, and (b) parameter estimation for the union-of-subspaces model incorporating spatial priors. The proposed method has been evaluated using both simulated and experimental data, producing impressive results. The proposed union-of-subspaces representation of spatiospectral functions provides an effective computational framework for solving the MRSI spectral quantification problem with spatiospectral constraints. The proposed approach transforms how the MRSI spectral quantification problem is solved and enables efficient and effective use of spatiospectral priors to improve parameter estimation. The resulting algorithm is expected to be useful for a wide range of quantitative metabolic imaging studies using MRSI.
Users manual for KSP data-structure-neutral codes implementing Krylov space methods
Gropp, W.; Smith, B.
1994-08-01
The combination of a Krylov space method and a preconditioner is at the heart of most modern numerical codes for the iterative solution of linear systems. This document contains both a users manual and a description of the implementation for the Krylov space methods package KSP included as part of the Portable, Extensible Tools for Scientific computation package (PETSc). PETSc is a large suite of data-structure-neutral libraries for the solution of large-scale problems in scientific computation, in particular on massively parallel computers. The methods in KSP are conjugate gradient method, GMRES, BiCG-Stab, two versions of transpose-free QMR, and others. All of the methods are coded using a common, data-structure-neutral framework and are compatible with the sequential, parallel, and out-of-core solution of linear systems. The codes make no assumptions about the representation of the linear operator; implicitly defined operators (say, calculated using differencing) are fully supported. In addition, unlike all other iterative packages we are aware of, the vector operations are also data-structure neutral. Once certain vector primitives are provided, the same KSP software runs unchanged using any vector storage format. It is not restricted to a few common vector representations. The codes described are actual working codes that run on a large variety of machines including the IBM SP1, Intel DELTA, workstations, networks of workstations, the TMC CM-5, and the CRAY C90. New Krylov space methods may be easily added to the package and used immediately with any application code that has been written using KSP; no changes to the application code are needed.
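KSP itself is a C library within PETSc; as a language-neutral illustration of the kind of solver it packages, a preconditioned conjugate gradient loop can be sketched as follows (an illustration only, not PETSc code; `M_inv` stands in for any preconditioner application):

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, maxit=200):
    # preconditioned conjugate gradient for a symmetric positive
    # definite A; M_inv(r) applies an approximation of A^{-1} to r
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```

Because the loop touches the operator only through `A @ p` and the preconditioner only through `M_inv(r)`, either can be swapped out without changing the iteration; that interchangeability of operator, preconditioner, and vector representation is exactly what the data-structure-neutral KSP interface is built around.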
Suppression of spectral anomalies in SSFP-NMR signal by the Krylov Basis Diagonalization Method
NASA Astrophysics Data System (ADS)
Moraes, Tiago Bueno; Santos, Poliana Macedo; Magon, Claudio Jose; Colnago, Luiz Alberto
2014-06-01
The Krylov Basis Diagonalization Method (KBDM) is a numerical procedure used to fit time-domain signals as a sum of exponentially damped sinusoids. In this work, KBDM is used as an alternative spectral analysis tool, complementary to the Fourier transform. We report results obtained from 13C Nuclear Magnetic Resonance (NMR) Steady State Free Precession (SSFP) measurements on brucine, C23H26N2O4. The results lead to the conclusion that KBDM can be successfully applied, mainly because it is not influenced by the truncation or phase anomalies observed in the Fourier transform spectra.
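KBDM's Krylov-basis construction is not reproduced here, but the model it fits, a sum of exponentially damped sinusoids s_n = sum_k c_k z_k^n, can also be inverted by classical linear prediction (Prony's method). A minimal noiseless sketch, assuming numpy (the function name is ours):

```python
import numpy as np

def prony_poles(s, K):
    # linear prediction: for s[n] = sum_k c_k z_k**n with K damped
    # sinusoids, s satisfies s[n] = a_1 s[n-1] + ... + a_K s[n-K]
    N = len(s)
    A = np.array([s[n - K:n][::-1] for n in range(K, N)])
    b = s[K:]
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    # the poles z_k are the roots of z^K - a_1 z^(K-1) - ... - a_K
    return np.roots(np.concatenate(([1.0], -a)))
```

Each recovered pole z_k = exp((-alpha_k + i*omega_k) dt) encodes one line's damping and frequency; amplitudes then follow from a linear least-squares fit. Unlike the FFT, no apodization or phasing of the signal is involved, which is the property the abstract highlights for KBDM.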
Krylov vector methods for model reduction and control of flexible structures
NASA Technical Reports Server (NTRS)
Su, Tzu-Jeng; Craig, Roy R., Jr.
1992-01-01
Krylov vectors and the concept of parameter matching are combined here to develop model-reduction algorithms for structural dynamics systems. The method is derived for a structural dynamics system described by a second-order matrix differential equation. The reduced models are shown to have a promising application in the control of flexible structures. It can eliminate control and observation spillovers while requiring only the dynamic spillover terms to be considered. A model-order reduction example and a flexible structure control example are provided to show the efficacy of the method.
Solving Nonlinear Solid Mechanics Problems with the Jacobian-Free Newton Krylov Method
J. D. Hales; S. R. Novascone; R. L. Williamson; D. R. Gaston; M. R. Tonks
2012-06-01
The solution of the equations governing solid mechanics is often obtained via Newton's method. This approach can be problematic if the determination, storage, or solution cost associated with the Jacobian is high. These challenges are magnified for multiphysics applications with many coupled variables. Jacobian-free Newton-Krylov (JFNK) methods avoid many of the difficulties associated with the Jacobian by using a finite difference approximation. BISON is a parallel, object-oriented, nonlinear solid mechanics and multiphysics application that leverages JFNK methods. We overview JFNK, outline the capabilities of BISON, and demonstrate the effectiveness of JFNK for solid mechanics and solid mechanics coupled to other PDEs using a series of demonstration problems.
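The core JFNK idea named in the abstract, approximating the Jacobian-vector product by a finite difference of the residual so the Jacobian is never formed or stored, fits in a few lines. Below is a minimal sketch, assuming numpy; a hand-rolled GMRES stands in for a production Krylov solver, and nothing here is taken from BISON's implementation:

```python
import numpy as np

def gmres(matvec, b, m=20, tol=1e-10):
    # unpreconditioned GMRES(m) via Arnoldi, matrix-free: the operator
    # is only ever touched through matvec
    n = len(b)
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    for j in range(m):
        v = matvec(Q[:, j])
        for i in range(j + 1):            # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ v
            v -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(v)
        if H[j + 1, j] < tol:             # happy breakdown
            m = j + 1
            break
        Q[:, j + 1] = v / H[j + 1, j]
    e1 = np.zeros(m + 1)
    e1[0] = beta
    y, *_ = np.linalg.lstsq(H[:m + 1, :m], e1, rcond=None)
    return Q[:, :m] @ y

def jfnk(F, u0, eps=1e-7, iters=20):
    # Jacobian-free Newton-Krylov: J v is approximated by a finite
    # difference of F, so the Jacobian is never assembled
    u = u0.astype(float)
    for _ in range(iters):
        Fu = F(u)
        if np.linalg.norm(Fu) < 1e-10:
            break
        Jv = lambda v, u=u, Fu=Fu: (F(u + eps * v) - Fu) / eps
        u = u + gmres(Jv, -Fu)            # inexact Newton step
    return u
```

The attraction for multiphysics codes like the one described is visible even at this scale: adding a coupled variable changes only `F`, never any Jacobian assembly.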
Subspace dynamic mode decomposition for stochastic Koopman analysis
NASA Astrophysics Data System (ADS)
Takeishi, Naoya; Kawahara, Yoshinobu; Yairi, Takehisa
2017-09-01
The analysis of nonlinear dynamical systems based on the Koopman operator is attracting attention in various applications. Dynamic mode decomposition (DMD) is a data-driven algorithm for Koopman spectral analysis, and several variants with a wide range of applications have been proposed. However, popular implementations of DMD suffer from observation noise on random dynamical systems and generate inaccurate estimation of the spectra of the stochastic Koopman operator. In this paper, we propose subspace DMD as an algorithm for the Koopman analysis of random dynamical systems with observation noise. Subspace DMD first computes the orthogonal projection of future snapshots to the space of past snapshots and then estimates the spectra of a linear model, and its output converges to the spectra of the stochastic Koopman operator under standard assumptions. We investigate the empirical performance of subspace DMD with several dynamical systems and show its utility for the Koopman analysis of random dynamical systems.
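For reference, standard exact DMD, which subspace DMD modifies by first projecting future snapshots onto the row space of past snapshots, can be sketched as follows (a minimal noise-free illustration, assuming numpy):

```python
import numpy as np

def dmd(X, Y, r):
    # exact DMD: fit Y ~ A X, then return eigenvalues and modes of the
    # operator projected onto the leading r POD modes of X
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    U, s, V = U[:, :r], s[:r], Vt[:r].T
    Atilde = U.conj().T @ Y @ V / s      # r x r projected operator
    evals, W = np.linalg.eig(Atilde)
    modes = Y @ V / s @ W                # exact DMD modes
    return evals, modes
```

On clean snapshots of a linear system x_{k+1} = A x_k, the returned eigenvalues match those of A; the paper's point is that this estimate becomes biased under observation noise, which the subspace projection step corrects.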
Physics-based character skinning using multidomain subspace deformations.
Kim, Theodore; James, Doug L
2012-08-01
In this extended version of our Symposium on Computer Animation paper, we describe a domain-decomposition method to simulate articulated deformable characters entirely within a subspace framework. We have added a parallelization and eigendecomposition performance analysis, and several additional examples to the original symposium version. The method supports quasistatic and dynamic deformations, nonlinear kinematics and materials, and can achieve interactive time-stepping rates. To avoid artificial rigidity, or “locking,” associated with coupling low-rank domain models together with hard constraints, we employ penalty-based coupling forces. The multidomain subspace integrator can simulate deformations efficiently, and exploits efficient subspace-only evaluation of constraint forces between rotated domains using a novel Fast Sandwich Transform (FST). Examples are presented for articulated characters with quasistatic and dynamic deformations, and interactive performance with hundreds of fully coupled modes. Using our method, we have observed speedups of between 3 and 4 orders of magnitude over full-rank, unreduced simulations.
Selective control of the symmetric Dicke subspace in trapped ions
Lopez, C. E.; Retamal, J. C.; Solano, E.
2007-09-15
We propose a method of manipulating selectively the symmetric Dicke subspace in the internal degrees of freedom of N trapped ions. We show that the direct access to ionic-motional subspaces, based on a suitable tuning of motion-dependent ac Stark shifts, induces a two-level dynamics involving previously selected ionic Dicke states. In this manner, it is possible to produce, sequentially and unitarily, ionic Dicke states with increasing excitation number. Moreover, we propose a probabilistic technique to produce directly any ionic Dicke state assuming suitable initial conditions.
Globalized Newton-Krylov-Schwarz algorithms and software for parallel implicit CFD.
Gropp, W. D.; Keyes, D. E.; McInnes, L. C.; Tidriri, M. D.; Mathematics and Computer Science; Old Dominion Univ.; Iowa State Univ.
2000-01-01
Implicit solution methods are important in applications modeled by PDEs with disparate temporal and spatial scales. Because such applications require high resolution with reasonable turnaround, parallelization is essential. The pseudo-transient matrix-free Newton-Krylov-Schwarz (Psi-NKS) algorithmic framework is presented as a widely applicable answer. This article shows that for the classical problem of three-dimensional transonic Euler flow about an M6 wing, Psi-NKS can simultaneously deliver globalized, asymptotically rapid convergence through adaptive pseudo-transient continuation and Newton's method; reasonable parallelizability for an implicit method through deferred synchronization and favorable communication-to-computation scaling in the Krylov linear solver; and high per-processor performance through attention to distributed memory and cache locality, especially through the Schwarz preconditioner. Two discouraging features of Psi-NKS methods are their sensitivity to the coding of the underlying PDE discretization and the large number of parameters that must be selected to govern convergence. The authors therefore distill several recommendations from their experience and reading of the literature on various algorithmic components of Psi-NKS, and they describe a freely available MPI-based portable parallel software implementation of the solver employed here.
Globalized Newton-Krylov-Schwarz Algorithms and Software for Parallel Implicit CFD
NASA Technical Reports Server (NTRS)
Gropp, W. D.; Keyes, D. E.; McInnes, L. C.; Tidriri, M. D.
1998-01-01
Implicit solution methods are important in applications modeled by PDEs with disparate temporal and spatial scales. Because such applications require high resolution with reasonable turnaround, "routine" parallelization is essential. The pseudo-transient matrix-free Newton-Krylov-Schwarz (Psi-NKS) algorithmic framework is presented as an answer. We show that, for the classical problem of three-dimensional transonic Euler flow about an M6 wing, Psi-NKS can simultaneously deliver: globalized, asymptotically rapid convergence through adaptive pseudo-transient continuation and Newton's method; reasonable parallelizability for an implicit method through deferred synchronization and favorable communication-to-computation scaling in the Krylov linear solver; and high per-processor performance through attention to distributed memory and cache locality, especially through the Schwarz preconditioner. Two discouraging features of Psi-NKS methods are their sensitivity to the coding of the underlying PDE discretization and the large number of parameters that must be selected to govern convergence. We therefore distill several recommendations from our experience and from our reading of the literature on various algorithmic components of Psi-NKS, and we describe a freely available, MPI-based portable parallel software implementation of the solver employed here.
Mitochondrial preconditioning: a potential neuroprotective strategy.
Correia, Sónia C; Carvalho, Cristina; Cardoso, Susana; Santos, Renato X; Santos, Maria S; Oliveira, Catarina R; Perry, George; Zhu, Xiongwei; Smith, Mark A; Moreira, Paula I
2010-01-01
Mitochondria have long been known as the powerhouse of the cell. However, these organelles are also pivotal players in neuronal cell death. Mitochondrial dysfunction is a prominent feature of chronic brain disorders, including Alzheimer's disease (AD) and Parkinson's disease (PD), and cerebral ischemic stroke. Data derived from morphologic, biochemical, and molecular genetic studies indicate that mitochondria constitute a convergence point for neurodegeneration. Conversely, mitochondria have also been implicated in the neuroprotective signaling processes of preconditioning. Although the precise molecular mechanisms underlying preconditioning-induced brain tolerance are still unclear, mitochondrial reactive oxygen species generation and the activation of mitochondrial ATP-sensitive potassium channels have been shown to be involved in the preconditioning phenomenon. This review discusses how mitochondrial malfunction contributes to the onset and progression of cerebral ischemic stroke and of AD and PD, two major neurodegenerative disorders. The role of mitochondrial mechanisms involved in preconditioning-mediated neuroprotective events will also be discussed. Mitochondria-targeted preconditioning may represent a promising therapeutic weapon to fight neurodegeneration.
Cyclic mechanical preconditioning improves engineered muscle contraction.
Moon, Du Geon; Christ, George; Stitzel, Joel D; Atala, Anthony; Yoo, James J
2008-04-01
The inability to engineer clinically relevant functional muscle tissue remains a major hurdle to successful skeletal muscle reconstructive procedures. This article describes an in vitro preconditioning protocol that improves the contractility of engineered skeletal muscle after implantation in vivo. Primary human muscle precursor cells (MPCs) were seeded onto collagen-based acellular tissue scaffolds and subjected to cyclic strain in a computer-controlled bioreactor system. Control constructs (static culture conditions) were run in parallel. Bioreactor preconditioning produced viable muscle tissue constructs with unidirectional orientation within 5 days, and in vitro-engineered constructs were capable of generating contractile responses after 3 weeks of bioreactor preconditioning. MPC-seeded constructs preconditioned in the bioreactor for 1 week were also implanted onto the latissimus dorsi muscle of athymic mice. Analysis of tissue constructs retrieved 1 to 4 weeks postimplantation showed that bioreactor-preconditioned constructs, but not statically cultured control tissues, generated tetanic and twitch contractile responses with a specific force of 1% and 10%, respectively, of that observed on native latissimus dorsi. To our knowledge, this is the largest force generated for tissue-engineered skeletal muscle on an acellular scaffold. This finding has important implications for the application of tissue engineering and regenerative medicine to skeletal muscle replacement and reconstruction.
Asgharzadeh, Hafez; Borazjani, Iman
2017-02-15
The explicit and semi-implicit schemes in flow simulations involving complex geometries and moving boundaries suffer from time-step size restriction and low convergence rates. Implicit schemes can be used to overcome these restrictions, but implementing them to solve the Navier-Stokes equations is not straightforward due to their non-linearity. Among the implicit schemes for non-linear equations, Newton-based techniques are preferred over fixed-point techniques because of their high convergence rate, but each Newton iteration is more expensive than a fixed-point iteration. Krylov subspace methods are one of the most advanced iterative methods that can be combined with Newton methods, i.e., Newton-Krylov Methods (NKMs), to solve non-linear systems of equations. The success of NKMs vastly depends on the scheme for forming the Jacobian, e.g., automatic differentiation is very expensive, and matrix-free methods without a preconditioner slow down as the mesh is refined. A novel, computationally inexpensive analytical Jacobian for NKM is developed to solve unsteady incompressible Navier-Stokes momentum equations on staggered overset-curvilinear grids with immersed boundaries. Moreover, the analytical Jacobian is used to form a preconditioner for the matrix-free method in order to improve its performance. The NKM with the analytical Jacobian was validated and verified against Taylor-Green vortex, inline oscillations of a cylinder in a fluid initially at rest, and pulsatile flow in a 90 degree bend. The capability of the method in handling complex geometries with multiple overset grids and immersed boundaries is shown by simulating an intracranial aneurysm. It was shown that the NKM with an analytical Jacobian is 1.17 to 14.77 times faster than the fixed-point Runge-Kutta method, and 1.74 to 152.3 times (excluding an intensively stretched grid) faster than automatic differentiation depending on the grid (size) and the flow problem. In addition, it was shown that using only the
Computational Complexity of Subspace Detectors and Matched Field Processing
Harris, D B
2010-12-01
Subspace detectors implement a correlation type calculation on a continuous (network or array) data stream [Harris, 2006]. The difference between subspace detectors and correlators is that the former projects the data in a sliding observation window onto a basis of template waveforms that may have a dimension (d) greater than one, and the latter projects the data onto a single waveform template. A standard correlation detector can be considered to be a degenerate (d=1) form of a subspace detector. Figure 1 below shows a block diagram for the standard formulation of a subspace detector. The detector consists of multiple multichannel correlators operating on a continuous data stream. The correlation operations are performed with FFTs in an overlap-add approach that allows the stream to be processed in uniform, consecutive, contiguous blocks. Figure 1 is slightly misleading for a calculation of computational complexity, as it is possible, when treating all channels with the same weighting (as shown in the figure), to perform the indicated summations in the multichannel correlators before the inverse FFTs and to get by with a single inverse FFT and overlap add calculation per multichannel correlator. In what follows, we make this simplification.
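The projection that distinguishes a subspace detector (d > 1) from a plain correlator (d = 1) can be sketched in a few lines of NumPy. This is an illustrative direct implementation with hypothetical names and synthetic templates, not the FFT overlap-add formulation the abstract describes:

```python
import numpy as np

def subspace_detection_statistic(window, basis):
    """Fraction of window energy captured by the template subspace.

    window : (n,) data in the current sliding observation window
    basis  : (n, d) orthonormal template waveforms (columns);
             d = 1 reduces to a standard correlation detector.
    """
    proj = basis @ (basis.T @ window)   # orthogonal projection onto span(basis)
    return float(proj @ proj) / float(window @ window)

# Build a d = 2 template subspace from two waveforms via QR.
t = np.linspace(0.0, 1.0, 256)
templates = np.column_stack([np.sin(2 * np.pi * 5 * t),
                             np.sin(2 * np.pi * 9 * t)])
basis, _ = np.linalg.qr(templates)

rng = np.random.default_rng(0)
signal = 0.7 * templates[:, 0] - 0.3 * templates[:, 1]  # lies in the subspace
noise = rng.standard_normal(t.size)                     # does not
```

The statistic is near 1 for any waveform in the template span and near d/n for white noise, which is what makes a d-dimensional detector more flexible than a single-template correlator.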
Decoherence free subspaces of a quantum Markov semigroup
Agredo, Julián; Fagnola, Franco; Rebolledo, Rolando
2014-11-15
We give a full characterisation of decoherence free subspaces of a given quantum Markov semigroup with generator in a generalised Lindblad form which is valid also for infinite-dimensional systems. Our results, extending those available in the literature concerning finite-dimensional systems, are illustrated by some examples.
A subspace approach to learning recurrent features from brain activity.
Gowreesunker, B Vikrham; Tewfik, Ahmed H; Tadipatri, Vijay A; Ashe, James; Pellize, Giuseppe; Gupta, Rahul
2011-06-01
This paper introduces a novel technique to address the instability and time variability challenges associated with brain activity recorded on different days. A critical challenge when working with brain signal activity is the variability in its characteristics when the signals are collected in different sessions separated by a day or more. Such variability is due to the acute and chronic responses of the brain tissue after implantation, variations as the subject learns to optimize performance, physiological changes in a subject due to prior activity or rest periods, and environmental conditions. We propose a novel approach to tackle signal variability by focusing on learning subspaces which are recurrent over time. Furthermore, we illustrate how we can use projections on those subspaces to improve classification for an application such as a brain-machine interface (BMI). In this paper, we illustrate the merits of finding recurrent subspaces in the context of movement direction decoding using local field potential (LFP). We introduce two methods for using the learned subspaces in movement direction decoding and show a decoding power improvement from 76% to 88% for a particularly unstable subject and consistent decoding across subjects.
Ordered Subspace Clustering With Block-Diagonal Priors.
Wu, Fei; Hu, Yongli; Gao, Junbin; Sun, Yanfeng; Yin, Baocai
2016-12-01
Many application scenarios involve sequential data, but most existing clustering methods do not well utilize the order information embedded in sequential data. In this paper, we study the subspace clustering problem for sequential data and propose a new clustering method, namely ordered sparse clustering with block-diagonal prior (BD-OSC). Instead of using the sparse normalizer in existing sparse subspace clustering methods, a quadratic normalizer for the data sparse representation is adopted to model the correlation among the data sparse coefficients. Additionally, a block-diagonal prior for the spectral clustering affinity matrix is integrated with the model to improve clustering accuracy. To solve the proposed BD-OSC model, which is a complex optimization problem with quadratic normalizer and block-diagonal prior constraint, an efficient algorithm is proposed. We test the proposed clustering method on several types of databases, such as synthetic subspace data set, human face database, video scene clips, motion tracks, and dynamic 3-D face expression sequences. The experiments show that the proposed method outperforms state-of-the-art subspace clustering methods.
Online Categorical Subspace Learning for Sketching Big Data with Misses
NASA Astrophysics Data System (ADS)
Shen, Yanning; Mardani, Morteza; Giannakis, Georgios B.
2017-08-01
With the scale of data growing every day, reducing the dimensionality (a.k.a. sketching) of high-dimensional data has emerged as a task of paramount importance. Relevant issues to address in this context include the sheer volume of data that may consist of categorical samples, the typically streaming format of acquisition, and the possibly missing entries. To cope with these challenges, the present paper develops a novel categorical subspace learning approach to unravel the latent structure for three prominent categorical (bilinear) models, namely, Probit, Tobit, and Logit. The deterministic Probit and Tobit models treat data as quantized values of an analog-valued process lying in a low-dimensional subspace, while the probabilistic Logit model relies on low dimensionality of the data log-likelihood ratios. Leveraging the low intrinsic dimensionality of the sought models, a rank regularized maximum-likelihood estimator is devised, which is then solved recursively via alternating majorization-minimization to sketch high-dimensional categorical data "on the fly." The resultant procedure alternates between sketching the new incomplete datum and refining the latent subspace, leading to lightweight first-order algorithms with highly parallelizable tasks per iteration. As an extra degree of freedom, the quantization thresholds are also learned jointly along with the subspace to enhance the predictive power of the sought models. Performance of the subspace iterates is analyzed for both infinite and finite data streams, where for the former asymptotic convergence to the stationary point set of the batch estimator is established, while for the latter sublinear regret bounds are derived for the empirical cost. Simulated tests with both synthetic and real-world datasets corroborate the merits of the novel schemes for real-time movie recommendation and chess-game classification.
Linear unmixing using endmember subspaces and physics based modeling
NASA Astrophysics Data System (ADS)
Gillis, David; Bowles, Jeffrey; Ientilucci, Emmett J.; Messinger, David W.
2007-09-01
One of the biggest issues with the Linear Mixing Model (LMM) is that it is implicitly assumed that each of the individual material components throughout the scene may be described using a single dimension (e.g. an endmember vector). In reality, individual pixels corresponding to the same general material class can exhibit a large degree of variation within a given scene. This is especially true in broad background classes such as forests, where the single dimension assumption clearly fails. In practice, the only way to account for the multidimensionality of the class is to choose multiple (very similar) endmembers, each of which represents some part of the class. To address these issues, we introduce the endmember subgroup model, which generalizes the notion of an 'endmember vector' to an 'endmember subspace'. In this model, spectra in a given hyperspectral scene are decomposed as a sum of constituent materials; however, each material is represented by some multidimensional subspace (instead of a single vector). The dimensionality of the subspace will depend on the within-class variation seen in the image. The endmember subgroups can be determined automatically from the data, or can use physics-based modeling techniques to include 'signature subspaces', which are included in the endmember subgroups. In this paper, we give an overview of the subgroup model; discuss methods for determining the endmember subgroups for a given image, and present results showing how the subgroup model improves upon traditional single endmember linear mixing. We also include results that use the 'signature subspace' approach to identifying mixed-pixel targets in HYDICE imagery.
Minimal residual method stronger than polynomial preconditioning
Faber, V.; Joubert, W.; Knill, E.
1994-12-31
Two popular methods for solving symmetric and nonsymmetric systems of equations are the minimal residual method, implemented by algorithms such as GMRES, and polynomial preconditioning methods. In this study results are given on the convergence rates of these methods for various classes of matrices. It is shown that for some matrices, such as normal matrices, the convergence rates for GMRES and for the optimal polynomial preconditioning are the same, and for other matrices such as the upper triangular Toeplitz matrices, it is at least assured that if one method converges then the other must converge. On the other hand, it is shown that matrices exist for which restarted GMRES always converges but any polynomial preconditioning of corresponding degree makes no progress toward the solution for some initial error. The implications of these results for these and other iterative methods are discussed.
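For readers unfamiliar with polynomial preconditioning, a minimal sketch shows how a polynomial in A can stand in for A⁻¹. It assumes the classic truncated-Neumann-series choice of polynomial, which requires ||I − A|| < 1 (the synthetic A below satisfies this); the matrix and parameters are illustrative only:

```python
import numpy as np

def neumann_polynomial_preconditioner(A, degree):
    """Return M = sum_{k=0}^{degree} (I - A)^k, a polynomial in A.

    Since M A = I - (I - A)^(degree + 1), the preconditioned matrix has
    its eigenvalues clustered near 1 whenever ||I - A|| < 1.
    """
    n = A.shape[0]
    M = np.zeros_like(A)
    term = np.eye(n)
    for _ in range(degree + 1):
        M += term
        term = term @ (np.eye(n) - A)
    return M

rng = np.random.default_rng(1)
n = 50
A = np.eye(n) + 0.2 * rng.standard_normal((n, n)) / np.sqrt(n)  # ||I - A|| < 1
M = neumann_polynomial_preconditioner(A, degree=8)

cond_A = np.linalg.cond(A)
cond_MA = np.linalg.cond(M @ A)   # far better conditioned than A itself
```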
Preconditioning the Helmholtz Equation for Rigid Ducts
NASA Technical Reports Server (NTRS)
Baumeister, Kenneth J.; Kreider, Kevin L.
1998-01-01
An innovative hyperbolic preconditioning technique is developed for the numerical solution of the Helmholtz equation which governs acoustic propagation in ducts. Two pseudo-time parameters are used to produce an explicit iterative finite difference scheme. This scheme eliminates the large matrix storage requirements normally associated with numerical solutions to the Helmholtz equation. The solution procedure is very fast when compared to other transient and steady methods. Optimization and an error analysis of the preconditioning factors are presented. For validation, the method is applied to sound propagation in a 2D semi-infinite hard wall duct.
Subspace-based interference removal methods for a multichannel biomagnetic sensor array
NASA Astrophysics Data System (ADS)
Sekihara, Kensuke; Nagarajan, Srikantan S.
2017-10-01
Objective. In biomagnetic signal processing, the theory of the signal subspace has been applied to removing interfering magnetic fields, and a representative algorithm is the signal space projection algorithm, in which the signal/interference subspace is defined in the spatial domain as the span of signal/interference-source lead field vectors. This paper extends the notion of this conventional (spatial domain) signal subspace by introducing a new definition of signal subspace in the time domain. Approach. It defines the time-domain signal subspace as the span of row vectors that contain the source time course values. This definition leads to symmetric relationships between the time-domain and the conventional (spatial-domain) signal subspaces. As a review, this article shows that the notion of the time-domain signal subspace provides useful insights over existing interference removal methods from a unified perspective. Main results and significance. Using the time-domain signal subspace, it is possible to interpret a number of interference removal methods as the time domain signal space projection. Such methods include adaptive noise canceling, sensor noise suppression, the common temporal subspace projection, the spatio-temporal signal space separation, and the recently-proposed dual signal subspace projection. Our analysis using the notion of the time domain signal space projection reveals implicit assumptions these methods rely on, and shows that the difference between these methods results only from the manner of deriving the interference subspace. Numerical examples that illustrate the results of our arguments are provided.
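The spatial-domain signal space projection that the paper generalizes can be sketched directly: project the multichannel data onto the orthogonal complement of the interference-source lead field span. The lead-field matrices below are random stand-ins, not real sensor geometries:

```python
import numpy as np

def signal_space_projection(data, interference_leadfields):
    """Spatial-domain SSP.

    data : (channels, samples) multichannel recording
    interference_leadfields : (channels, k) interference topographies
    """
    U, _ = np.linalg.qr(interference_leadfields)  # orthonormal interference basis
    return data - U @ (U.T @ data)                # project the span out

rng = np.random.default_rng(5)
channels, samples = 16, 300
lf_interference = rng.standard_normal((channels, 2))
lf_signal = rng.standard_normal((channels, 1))

source = np.sin(np.linspace(0.0, 20.0, samples))[None, :]
interference = lf_interference @ rng.standard_normal((2, samples))
data = lf_signal @ source + interference

cleaned = signal_space_projection(data, lf_interference)
```

Anything in the interference span is removed exactly, at the cost of also removing whatever component of the brain signal falls inside that span; the time-domain formulation reviewed in the paper revisits exactly this trade-off.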
NASA Astrophysics Data System (ADS)
Hayes, Charles E.; McClellan, James H.; Scott, Waymond R.; Kerr, Andrew J.
2016-05-01
This work introduces two advances in wide-band electromagnetic induction (EMI) processing: a novel adaptive matched filter (AMF) and matched subspace detection methods. Both advances make use of recent work with a subspace SVD approach to separating the signal, soil, and noise subspaces of the frequency measurements. The proposed AMF provides a direct approach to removing the EMI self-response while improving the signal to noise ratio of the data. Unlike previous EMI adaptive downtrack filters, this new filter will not erroneously optimize the EMI soil response instead of the EMI target response because these two responses are projected into separate frequency subspaces. The EMI detection methods in this work elaborate on how the signal and noise subspaces in the frequency measurements are ideal for creating the matched subspace detection (MSD) and constant false alarm rate matched subspace detection (CFAR) metrics developed by Scharf. The CFAR detection metric has been shown to be the uniformly most powerful invariant detector.
40 CFR 1066.816 - Vehicle preconditioning for FTP testing.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 33 2014-07-01 2014-07-01 false Vehicle preconditioning for FTP testing. 1066.816 Section 1066.816 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED... § 1066.816 Vehicle preconditioning for FTP testing. Precondition the test vehicle before the FTP...
Subspace differential coexpression analysis: problem definition and a general approach.
Fang, Gang; Kuang, Rui; Pandey, Gaurav; Steinbach, Michael; Myers, Chad L; Kumar, Vipin
2010-01-01
In this paper, we study methods to identify differential coexpression patterns in case-control gene expression data. A differential coexpression pattern consists of a set of genes that have substantially different levels of coherence of their expression profiles across the two sample-classes, i.e., highly coherent in one class, but not in the other. Biologically, a differential coexpression pattern may indicate the disruption of a regulatory mechanism possibly caused by dysregulation of pathways or mutations of transcription factors. A common feature of all the existing approaches for differential coexpression analysis is that the coexpression of a set of genes is measured on all the samples in each of the two classes, i.e., over the full-space of samples. Hence, these approaches may miss patterns that only cover a subset of samples in each class, i.e., subspace patterns, due to the heterogeneity of the subject population and disease causes. In this paper, we extend differential coexpression analysis by defining a subspace differential coexpression pattern, i.e., a set of genes that are coexpressed in a relatively large percent of samples in one class, but in a much smaller percent of samples in the other class. We propose a general approach based upon an association analysis framework that allows exhaustive yet efficient discovery of subspace differential coexpression patterns. This approach can be used to adapt a family of biclustering algorithms to obtain their corresponding differential versions that can directly discover differential coexpression patterns. Using a recently developed biclustering algorithm as illustration, we perform experiments on cancer datasets that demonstrate the existence of subspace differential coexpression patterns. Permutation tests demonstrate the statistical significance of a large number of discovered subspace patterns, many of which cannot be discovered if they are measured over all the samples in each of the classes.
The Krylov accelerated SIMPLE(R) method for flow problems in industrial furnaces
NASA Astrophysics Data System (ADS)
Vuik, C.; Saghir, A.; Boerstoel, G. P.
2000-08-01
Numerical modeling of the melting and combustion process is an important tool in gaining understanding of the physical and chemical phenomena that occur in a gas- or oil-fired glass-melting furnace. The incompressible Navier-Stokes equations are used to model the gas flow in the furnace. The discrete Navier-Stokes equations are solved by the SIMPLE(R) pressure-correction method. In these applications, many SIMPLE(R) iterations are necessary to obtain an accurate solution. In this paper, Krylov accelerated versions are proposed: GCR-SIMPLE(R). The properties of these methods are investigated for a simple two-dimensional flow. Thereafter, the efficiencies of the methods are compared for three-dimensional flows in industrial glass-melting furnaces.
A Newton-Krylov solver for fast spin-up of online ocean tracers
NASA Astrophysics Data System (ADS)
Lindsay, Keith
2017-01-01
We present a Newton-Krylov based solver to efficiently spin up tracers in an online ocean model. We demonstrate that the solver converges, that tracer simulations initialized with the solution from the solver have small drift, and that the solver takes orders of magnitude less computational time than the brute force spin-up approach. To demonstrate the application of the solver, we use it to efficiently spin up the tracer ideal age with respect to the circulation from different time intervals in a long physics run. We then evaluate how the spun-up ideal age tracer depends on the duration of the physics run, i.e., on how equilibrated the circulation is.
SKRYN: A fast semismooth-Krylov-Newton method for controlling Ising spin systems
NASA Astrophysics Data System (ADS)
Ciaramella, G.; Borzì, A.
2015-05-01
The modeling and control of Ising spin systems is of fundamental importance in NMR spectroscopy applications. In this paper, two computer packages, ReHaG and SKRYN, are presented. Their purpose is to set up and solve quantum optimal control problems governed by the Liouville master equation modeling Ising spin-1/2 systems with pointwise control constraints. In particular, the MATLAB package ReHaG makes it possible to compute a real matrix representation of the master equation. The MATLAB package SKRYN implements a new strategy resulting in a globalized semismooth matrix-free Krylov-Newton scheme. To discretize the real representation of the Liouville master equation, a norm-preserving modified Crank-Nicolson scheme is used. Results of numerical experiments demonstrate that the SKRYN code is able to provide fast and accurate solutions to the Ising spin quantum optimization problem.
Preconditioning matrices for Chebyshev derivative operators
NASA Technical Reports Server (NTRS)
Rothman, Ernest E.
1986-01-01
The problem of preconditioning the matrices arising from pseudo-spectral Chebyshev approximations of first order operators is considered in both one and two dimensions. In one dimension a preconditioner represented by a full matrix which leads to preconditioned eigenvalues that are real, positive, and lie between 1 and pi/2, is already available. Since there are cases in which it is not computationally convenient to work with such a preconditioner, a large number of preconditioners were studied which were more sparse (in particular three and four diagonal matrices). The eigenvalues of such preconditioned matrices are compared. The results were applied to the problem of finding the steady state solution to an equation of the type u_t = u_x + f, where the Chebyshev collocation is used for the spatial variable and time discretization is performed by the Richardson method. In two dimensions different preconditioners are proposed for the matrix which arises from the pseudo-spectral discretization of the steady state problem. Results are given for the CPU time and the number of iterations using a Richardson iteration method for the unpreconditioned and preconditioned cases.
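The Richardson iteration mentioned above amounts to the preconditioned update x ← x + M(b − Ax), which converges when the spectral radius of I − MA is below one. A minimal sketch, using a synthetic symmetric matrix whose eigenvalues are spread over [1, π/2] to mimic the preconditioned spectrum quoted in the abstract (the matrix and damping choice are illustrative, not the paper's operators):

```python
import numpy as np

def richardson(A, b, M, tol=1e-10, maxit=500):
    """Preconditioned Richardson iteration x <- x + M (b - A x)."""
    x = np.zeros_like(b)
    for k in range(maxit):
        r = b - A @ x
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, k
        x = x + M @ r
    return x, maxit

rng = np.random.default_rng(4)
n = 30
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(np.linspace(1.0, np.pi / 2, n)) @ Q.T  # spectrum in [1, pi/2]
b = rng.standard_normal(n)

omega = 2.0 / (1.0 + np.pi / 2)   # optimal damping for this eigenvalue range
x, iters = richardson(A, b, M=omega * np.eye(n))
```

With eigenvalues confined to [1, π/2], the error contracts by (π/2 − 1)/(π/2 + 1) ≈ 0.22 per sweep, so convergence takes only a few dozen iterations; this is why a preconditioner that maps the spectrum into such a narrow interval is valuable.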
Revealing Preconditions for Trustful Collaboration in CSCL
ERIC Educational Resources Information Center
Gerdes, Anne
2010-01-01
This paper analyses preconditions for trust in virtual learning environments. The concept of trust is discussed with reference to cases reporting trust in cyberspace and through a philosophical clarification holding that trust in the form of self-surrender is a common characteristic of all human co-existence. In virtual learning environments,…
Preconditioning and tolerance against cerebral ischaemia
Dirnagl, Ulrich; Becker, Kyra; Meisel, Andreas
2009-01-01
Neuroprotection and brain repair in patients after acute brain damage are still major unfulfilled medical needs. Pharmacological treatments are either ineffective or confounded by adverse effects. Consequently, endogenous mechanisms by which the brain protects itself against noxious stimuli and recovers from damage are being studied. Research on preconditioning, also known as induced tolerance, over the past decade has resulted in various promising strategies for the treatment of patients with acute brain injury. Several of these strategies are being tested in randomised clinical trials. Additionally, research into preconditioning has led to the idea of prophylactically inducing protection in patients such as those undergoing brain surgery and those with transient ischaemic attack or subarachnoid haemorrhage who are at high risk of brain injury in the near future. In this Review, we focus on the clinical issues relating to preconditioning and tolerance in the brain; specifically, we discuss the clinical situations that might benefit from such procedures. We also discuss whether preconditioning and tolerance occur naturally in the brain and assess the most promising candidate strategies that are being investigated. PMID:19296922
Health and Nutrition: Preconditions for Educational Achievement.
ERIC Educational Resources Information Center
Negussie, Birgit
This paper discusses the importance of maternal and infant health for children's educational achievement. Education, health, and nutrition are so closely related that changes in one causes changes in the others. Improvement of maternal and preschooler health and nutrition is a precondition for improved educational achievement. Although parental…
Neuroprotective Effects of Peptides during Ischemic Preconditioning.
Zarubina, I V; Shabanov, P D
2016-02-01
Experiments on rats showed that neurospecific protein preparations reduce the severity of neurological deficit, restore the structure of individual behavior of the animals with different hypoxia tolerance, and exert antioxidant action during chronic ischemic damage to the brain unfolding during the early and late phases of ischemic preconditioning.
LogDet Rank Minimization with Application to Subspace Clustering.
Kang, Zhao; Peng, Chong; Cheng, Jie; Cheng, Qiang
2015-01-01
Low-rank matrices are desired in many machine learning and computer vision problems. Most of the recent studies use the nuclear norm as a convex surrogate of the rank operator. However, all singular values are simply added together by the nuclear norm, and thus the rank may not be well approximated in practical problems. In this paper, we propose using a log-determinant (LogDet) function as a smooth and closer, though nonconvex, approximation to rank for obtaining a low-rank representation in subspace clustering. An augmented Lagrange multiplier strategy is applied to iteratively optimize the LogDet-based nonconvex objective function on potentially large-scale data. By making use of the angular information of principal directions of the resultant low-rank representation, an affinity graph matrix is constructed for spectral clustering. Experimental results on motion segmentation and face clustering data demonstrate that the proposed method often outperforms state-of-the-art subspace clustering algorithms.
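The advantage of a log-determinant surrogate over the nuclear norm can be illustrated directly on singular values: the nuclear norm scales linearly with the data, while a LogDet-style term saturates and so tracks rank more faithfully. The δ parameter and the exact functional form below are one common choice for such a surrogate, not necessarily the paper's:

```python
import numpy as np

def nuclear_surrogate(s):
    """Nuclear norm evaluated on singular values: sum of s."""
    return float(np.sum(s))

def logdet_surrogate(s, delta=1e-3):
    """A LogDet-style smooth rank surrogate: sum of log(1 + s^2 / delta)."""
    return float(np.sum(np.log1p(s ** 2 / delta)))

s = np.array([5.0, 4.0, 1e-8])            # numerically rank 2
nuc, nuc_scaled = nuclear_surrogate(s), nuclear_surrogate(10 * s)
ld, ld_scaled = logdet_surrogate(s), logdet_surrogate(10 * s)
```

Scaling the data by 10 multiplies the nuclear norm by exactly 10, but increases the LogDet value only mildly, so the latter stays closer to a (scale-invariant) rank count.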
Mining visual collocation patterns via self-supervised subspace learning.
Yuan, Junsong; Wu, Ying
2012-04-01
Traditional text data mining techniques are not directly applicable to image data which contain spatial information and are characterized by high-dimensional visual features. It is not a trivial task to discover meaningful visual patterns from images because the content variations and spatial dependence in visual data greatly challenge most existing data mining methods. This paper presents a novel approach to coping with these difficulties for mining visual collocation patterns. Specifically, the novelty of this work lies in the following new contributions: 1) a principled solution to the discovery of visual collocation patterns based on frequent itemset mining and 2) a self-supervised subspace learning method to refine the visual codebook by feeding back discovered patterns via subspace learning. The experimental results show that our method can discover semantically meaningful patterns efficiently and effectively.
Smooth local subspace projection for nonlinear noise reduction
Chelidze, David
2014-03-15
Many nonlinear or chaotic time series exhibit an innate broad spectrum, which makes noise reduction difficult. Local projective noise reduction is one of the most effective tools. It is based on proper orthogonal decomposition (POD) and works for both map-like and continuously sampled time series. However, POD only looks at geometrical or topological properties of data and does not take into account the temporal characteristics of time series. Here, we present a new smooth projective noise reduction method. It uses smooth orthogonal decomposition (SOD) of bundles of reconstructed short-time trajectory strands to identify smooth local subspaces. Restricting trajectories to these subspaces imposes temporal smoothness on the filtered time series. It is shown that SOD-based noise reduction significantly outperforms the POD-based method for continuously sampled noisy time series.
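A global, simplified relative of such projective noise reduction can be sketched with a truncated-SVD Hankel projection (essentially basic singular spectrum analysis) — not the paper's SOD of short trajectory strands; the signal, embedding dimension, and rank below are illustrative:

```python
import numpy as np

def ssa_denoise(x, m=40, r=2):
    """Embed the series in delay coordinates, project onto the top-r
    singular subspace, then average anti-diagonals to return a series."""
    N = len(x)
    K = N - m + 1
    H = np.column_stack([x[j:j + m] for j in range(K)])   # m x K Hankel
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    Hr = (U[:, :r] * s[:r]) @ Vt[:r]                      # rank-r projection
    y = np.zeros(N)
    cnt = np.zeros(N)
    for j in range(K):                                    # Hankelize back
        y[j:j + m] += Hr[:, j]
        cnt[j:j + m] += 1
    return y / cnt

t = np.linspace(0, 6 * np.pi, 400)
clean = np.sin(t)                         # a sinusoid is rank 2 when embedded
rng = np.random.default_rng(4)
noisy = clean + 0.3 * rng.standard_normal(400)
den = ssa_denoise(noisy)
```

Restricting the embedded trajectory to a low-dimensional subspace removes most of the broadband noise; the paper's contribution is to choose that subspace by smoothness (SOD) rather than variance (POD/SVD).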
Condition Number Estimation of Preconditioned Matrices
Kushida, Noriyuki
2015-01-01
The present paper introduces a condition number estimation method for preconditioned matrices. The newly developed method provides reasonable results, while the conventional method, which is based on the Lanczos connection, gives meaningless results. The Lanczos connection based method provides the condition numbers of coefficient matrices of systems of linear equations with information obtained through the preconditioned conjugate gradient method. Estimating the condition number of preconditioned matrices is sometimes important when describing the effectiveness of new preconditioners or selecting adequate preconditioners. Operating a preconditioner on a coefficient matrix is the simplest method of estimation. However, this is not possible for large-scale computing, especially if computation is performed on distributed memory parallel computers, because the preconditioned matrices become dense even if the original matrices are sparse. Although the Lanczos connection method can be used to calculate the condition number of preconditioned matrices, it is not considered to be applicable to large-scale problems because of its weakness with respect to numerical errors. Therefore, we have developed a robust and parallelizable method based on Hager’s method. The feasibility studies are carried out for the diagonal scaling preconditioner and the SSOR preconditioner with a diagonal matrix, a tridiagonal matrix and Pei’s matrix. As a result, the Lanczos connection method contains around 10% error in the results even with a simple problem. On the other hand, the new method contains negligible errors. In addition, the newly developed method returns reasonable solutions when the Lanczos connection method fails with Pei’s matrix, and matrices generated with the finite element method.
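Hager's method itself can be sketched compactly: it estimates the 1-norm of an implicit matrix from matrix-vector products alone, so the preconditioned matrix never has to be formed. The code below is a simplified illustration with a diagonal (Jacobi) scaling preconditioner; the function names and test matrix are invented, not the paper's:

```python
import numpy as np

def hager_norm1(matvec, rmatvec, n, maxiter=10):
    """Estimate ||B||_1 of an implicit n x n matrix B (Hager's method).

    Only products with B (matvec) and B^T (rmatvec) are needed, so a dense
    preconditioned matrix such as M^{-1} A never has to be stored."""
    x = np.full(n, 1.0 / n)          # start in the unit 1-norm simplex
    est = 0.0
    for _ in range(maxiter):
        y = matvec(x)
        est = max(est, np.abs(y).sum())   # ||Bx||_1 is a lower bound
        xi = np.sign(y)
        xi[xi == 0] = 1.0
        z = rmatvec(xi)                   # subgradient direction
        j = int(np.argmax(np.abs(z)))
        if np.abs(z[j]) <= z @ x:         # stationary: no vertex improves
            break
        x = np.zeros(n)                   # jump to the best unit vector e_j
        x[j] = 1.0
    return est

# Implicit B = D^{-1} A with a Jacobi preconditioner D = diag(A).
rng = np.random.default_rng(0)
A = np.diag(np.arange(1.0, 6.0)) + 0.1 * rng.standard_normal((5, 5))
d = np.diag(A)
B = A / d[:, None]                        # explicit B, for checking only
est = hager_norm1(lambda v: (A @ v) / d, lambda v: A.T @ (v / d), 5)
true_norm = np.abs(B).sum(axis=0).max()   # exact ||B||_1 (max column sum)
```

The estimate is always a lower bound on the true 1-norm and is typically tight, which is what makes the approach attractive for distributed-memory settings where the preconditioned matrix cannot be assembled.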
Characterizing Earthquake Clusters in Oklahoma Using Subspace Detectors
NASA Astrophysics Data System (ADS)
McMahon, N. D.; Benz, H.; Aster, R. C.; McNamara, D. E.; Myers, E. K.
2014-12-01
Subspace detection is a powerful and adaptive tool for continuously detecting low signal-to-noise seismic signals. Subspace detectors improve upon simple cross-correlation/matched filtering techniques by moving beyond the use of a single waveform template to the use of multiple orthogonal waveform templates that effectively span the signals from all previously identified events within a data set. Subspace detectors are particularly useful in event scenarios where a spatially limited source distribution produces earthquakes with highly similar waveforms. In this context, the methodology has been successfully deployed to identify low-frequency earthquakes within non-volcanic tremor, to characterize earthquake swarms above magma bodies, and for detailed characterization of aftershock sequences. Here we apply a subspace detection methodology to characterize recent earthquake clusters in Oklahoma. Since 2009, the state has experienced an unprecedented increase in seismicity, which has been attributed by others to recent expansion in deep wastewater injection well activity. Within the last few years, 99% of increased Oklahoma earthquake activity has occurred within 15 km of a Class II injection well. We analyze areas of dense seismic activity in central Oklahoma and construct more complete catalogues for analysis. For a typical cluster, we are able to achieve catalog completeness to near or below magnitude 1 and to continuously document seismic activity for periods of 6 months or more. Our catalog can more completely characterize these clusters in time and space with event numbers, magnitudes, b-values, energy, locations, etc. This detailed examination of swarm events should lead to a better understanding of time-varying earthquake processes and hazards in the state of Oklahoma.
Bi Sparsity Pursuit: A Paradigm for Robust Subspace Recovery
2016-09-27
Bian; Krim, Hamid
The success of sparse models in computer vision and machine learning is due to the fact that high-dimensional data is distributed in a union of low-dimensional subspaces in many real-world applications.
Entanglement of random subspaces via the Hastings bound
Fukuda, Motohisa; King, Christopher
2010-04-15
Recently, Hastings ['A counterexample to additivity of minimum output entropy', Nat. Phys. 5, 255 (2009); e-print arXiv:0809.3972v3] proved the existence of random unitary channels, which violate the additivity conjecture. In this paper, we use Hastings' method to derive new bounds for the entanglement of random subspaces of bipartite systems. As an application we use these bounds to prove the existence of nonunital channels, which violate additivity of minimal output entropy.
Low complex subspace minimum variance beamformer for medical ultrasound imaging.
Deylami, Ali Mohades; Asl, Babak Mohammadzadeh
2016-03-01
Minimum variance (MV) beamforming enhances the resolution and contrast in medical ultrasound imaging at the expense of higher computational complexity with respect to the non-adaptive delay-and-sum beamformer. The major complexity arises from the estimation of the L×L array covariance matrix using spatial averaging, required for more accurate estimation of the covariance matrix of correlated signals, and from its inversion, required for calculating the MV weight vector; these operations cost O(L^2) and O(L^3), respectively. Reducing the number of array elements decreases the computational complexity but degrades the imaging resolution. In this paper, we propose a subspace MV beamformer which preserves the advantages of the MV beamformer with lower complexity. The subspace MV neglects some rows of the array covariance matrix instead of reducing the array size. If we keep η rows of the array covariance matrix, which leads to a thin non-square matrix, the weight vector of the subspace beamformer can be obtained in the same way as in the MV beamformer, with a lower complexity of O(η^2 L). More calculations are saved because an η×L covariance matrix must be estimated instead of an L×L one. We simulated a wire-target phantom and a cyst phantom to evaluate the performance of the proposed beamformer. The results indicate that we can keep about 16 of the 43 rows of the array covariance matrix, which reduces the order of complexity to 14% while the image resolution is still comparable to that of the standard MV beamformer. We also applied the proposed method to experimental RF data and showed that the subspace MV beamformer performs like the standard MV with lower computational complexity.
Relations Among Some Low-Rank Subspace Recovery Models.
Zhang, Hongyang; Lin, Zhouchen; Zhang, Chao; Gao, Junbin
2015-09-01
Recovering intrinsic low-dimensional subspaces from data distributed on them is a key preprocessing step to many applications. In recent years, a lot of work has modeled subspace recovery as low-rank minimization problems. We find that some representative models, such as robust principal component analysis (R-PCA), robust low-rank representation (R-LRR), and robust latent low-rank representation (R-LatLRR), are actually deeply connected. More specifically, we discover that once a solution to one of the models is obtained, we can obtain the solutions to other models in closed-form formulations. Since R-PCA is the simplest, our discovery makes it the center of low-rank subspace recovery models. Our work has two important implications. First, R-PCA has a solid theoretical foundation. Under certain conditions, we could find globally optimal solutions to these low-rank models with overwhelming probability, although these models are nonconvex. Second, we can obtain significantly faster algorithms for these models by solving R-PCA first. The computation cost can be further cut by applying low-complexity randomized algorithms, for example, our novel l2,1 filtering algorithm, to R-PCA. Although for the moment the formal proof of our l2,1 filtering algorithm is not yet available, experiments verify the advantages of our algorithm over other state-of-the-art methods based on the alternating direction method.
A basis in an invariant subspace of analytic functions
Krivosheev, A S; Krivosheeva, O A
2013-12-31
The existence problem for a basis in a differentiation-invariant subspace of analytic functions defined in a bounded convex domain in the complex plane is investigated. Conditions are found for the solvability of a certain special interpolation problem in the space of entire functions of exponential type with conjugate diagrams lying in a fixed convex domain. These underlie sufficient conditions for the existence of a basis in the invariant subspace. This basis consists of linear combinations of eigenfunctions and associated functions of the differentiation operator, whose exponents are combined into relatively small clusters. Necessary conditions for the existence of a basis are also found. Under a natural constraint on the number of points in the groups, these coincide with the sufficient conditions. That is, a criterion is found under this constraint that a basis constructed from relatively small clusters exists in an invariant subspace of analytic functions in a bounded convex domain in the complex plane. Bibliography: 25 titles.
Zhu, Xiaofeng; Suk, Heung-Il; Lee, Seong-Whan; Shen, Dinggang
2016-03-01
The high feature-dimension and low sample-size problem is one of the major challenges in the study of computer-aided Alzheimer's disease (AD) diagnosis. To circumvent this problem, feature selection and subspace learning have been playing core roles in the literature. Generally, feature selection methods are preferable in clinical applications due to their ease for interpretation, but subspace learning methods can usually achieve more promising results. In this paper, we combine two different methodological approaches to discriminative feature selection in a unified framework. Specifically, we utilize two subspace learning methods, namely, linear discriminant analysis and locality preserving projection, which have proven their effectiveness in a variety of fields, to select class-discriminative and noise-resistant features. Unlike previous methods in neuroimaging studies that mostly focused on a binary classification, the proposed feature selection method is further applicable for multiclass classification in AD diagnosis. Extensive experiments on the Alzheimer's disease neuroimaging initiative dataset showed the effectiveness of the proposed method over other state-of-the-art methods.
Performance analysis of subspace-based parameter estimation algorithms
NASA Astrophysics Data System (ADS)
Vaccaro, Richard J.
1990-06-01
New perturbation formulas were developed for signal and orthogonal subspaces which are estimated from a noisy data matrix. These formulas are: (1) based on a finite amount of data; (2) derived under the assumption of high signal-to-noise ratio; and (3) applicable to arrays of arbitrary geometry, and they provide a common foundation for all analyses. A number of array processing algorithms were analyzed which are classified as follows: (1) Signal subspace algorithms: ESPRIT, state-space realization (including TAM), and Matrix Pencil; (2) orthogonal subspace algorithms: MUSIC and Min-Norm. Analytical variance formulas were developed for the case in which estimates are obtained by searching for the extrema of a function (used with arbitrary array geometry), as well as the case in which estimates are obtained by rooting a polynomial or finding the eigenvalues of a matrix (used with a uniform line array geometry). In addition, improvements were developed for a state-space algorithm for frequency-wavenumber (2-D) estimation. A procedure to pair individual frequency and wavenumber estimates was given, and it was also shown how a 2-D forward-backward data matrix can be used to improve the performance of the state-space approach.
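The orthogonal-subspace idea behind MUSIC, one of the algorithms analyzed above, can be sketched for a uniform line array; the simulation setup (frequencies, SNR, grid) is invented for illustration:

```python
import numpy as np

def music_spectrum(R, n_sources, grid):
    """MUSIC pseudospectrum: project steering vectors onto the noise
    (orthogonal) subspace of the array covariance matrix R."""
    n = R.shape[0]
    _, V = np.linalg.eigh(R)                  # eigenvalues ascending
    En = V[:, : n - n_sources]                # noise-subspace eigenvectors
    k = np.arange(n)[:, None]
    A = np.exp(1j * k * grid[None, :])        # ULA steering vectors on grid
    return 1.0 / np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)

rng = np.random.default_rng(1)
n, N = 8, 2000
f_true = np.array([0.7, 1.9])                 # true spatial frequencies (rad)
S = np.exp(1j * rng.uniform(0, 2 * np.pi, (2, N)))   # unit-power sources
A_true = np.exp(1j * np.outer(np.arange(n), f_true))
X = A_true @ S + 0.1 * (rng.standard_normal((n, N))
                        + 1j * rng.standard_normal((n, N)))
R = X @ X.conj().T / N                        # sample covariance
grid = np.linspace(0.05, np.pi - 0.05, 512)
P = music_spectrum(R, 2, grid)
peaks = [i for i in range(1, len(P) - 1) if P[i] > P[i - 1] and P[i] > P[i + 1]]
top2 = sorted(sorted(peaks, key=lambda i: P[i])[-2:])
f_est = grid[top2]                            # two largest pseudospectrum peaks
```

The pseudospectrum peaks where steering vectors are nearly orthogonal to the estimated noise subspace — exactly the quantity whose sensitivity to covariance perturbations the paper's formulas characterize.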
Classification of Polarimetric SAR Image Based on the Subspace Method
NASA Astrophysics Data System (ADS)
Xu, J.; Li, Z.; Tian, B.; Chen, Q.; Zhang, P.
2013-07-01
Land cover classification is one of the most significant applications in remote sensing. Compared to optical sensing technologies, synthetic aperture radar (SAR) can penetrate through clouds and has all-weather capabilities. Therefore, land cover classification for SAR images is important in remote sensing. The subspace method is a novel method for SAR data, which reduces data dimensionality by incorporating feature extraction into the classification process. This paper uses the averaged learning subspace method (ALSM), which can be applied to fully polarimetric SAR images for classification. The ALSM algorithm integrates three-component decomposition, eigenvalue/eigenvector decomposition and textural features derived from the gray-level co-occurrence matrix (GLCM). The study site is located in Dingxing County, Hebei Province, China. We compare the subspace method with the traditional supervised Wishart classification. By conducting experiments on the fully polarimetric Radarsat-2 image, we conclude that the proposed method yields higher classification accuracy. Therefore, the ALSM classification method is a feasible and alternative method for SAR images.
Recursive stochastic subspace identification for structural parameter estimation
NASA Astrophysics Data System (ADS)
Chang, C. C.; Li, Z.
2009-03-01
Identification of structural parameters under ambient conditions is an important research topic for structural health monitoring and damage identification. This problem is especially challenging in practice as these structural parameters could vary with time under severe excitation. Among the techniques developed for this problem, the stochastic subspace identification (SSI) is a popular time-domain method. The SSI can perform parametric identification for systems with multiple outputs, which cannot be easily done using other time-domain methods. The SSI uses the orthogonal-triangular (RQ) decomposition and the singular value decomposition (SVD) to process measured data, which makes the algorithm efficient and reliable. The SSI, however, processes data in one batch and hence cannot be used in an on-line fashion. In this paper, a recursive SSI method is proposed for on-line tracking of time-varying modal parameters for a structure under ambient excitation. The Givens rotation technique, which can annihilate the designated matrix elements, is used to update the RQ decomposition. Instead of updating the SVD, the projection approximation subspace tracking technique, which uses an unconstrained optimization technique to track the signal subspace, is employed. The proposed technique is demonstrated on the Phase I ASCE benchmark structure. Results show that the technique can identify and track the time-varying modal properties of the building under ambient conditions.
Random Subspace Aggregation for Cancer Prediction with Gene Expression Profiles
Yuan, Xiguo; Zhang, Junying
2016-01-01
Background. Precisely predicting cancer is crucial for cancer treatment. Gene expression profiles make it possible to analyze patterns between genes and cancers on the genome-wide scale. Gene expression data analysis, however, is confronted with enormous challenges due to its characteristics, such as high dimensionality, small sample size, and low signal-to-noise ratio. Results. This paper proposes a method, termed RS_SVM, to predict cancer from gene expression profiles by aggregating SVMs trained on random subspaces. After choosing gene features through statistical analysis, RS_SVM randomly selects feature subsets to yield random subspaces, trains SVM classifiers accordingly, and then aggregates the SVM classifiers to capture the advantage of ensemble learning. Experiments on eight real gene expression datasets are performed to validate the RS_SVM method. Experimental results show that RS_SVM achieved better classification accuracy and generalization performance in contrast with single SVM, K-nearest neighbor, decision tree, Bagging, AdaBoost, and the state-of-the-art methods. Experiments also explored the effect of subspace size on prediction performance. Conclusions. The proposed RS_SVM method yielded superior performance in analyzing gene expression profiles, which demonstrates that RS_SVM provides a good channel for such biological data.
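The random subspace aggregation scheme can be sketched with a toy nearest-centroid base learner standing in for SVM, to keep the example dependency-light; the data dimensions, shift, and parameters below are invented:

```python
import numpy as np

rng = np.random.default_rng(2)

def fit_centroids(X, y):
    """Nearest-centroid base learner: one mean per class (0 and 1)."""
    return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict_centroids(C, X):
    d0 = np.linalg.norm(X - C[0], axis=1)
    d1 = np.linalg.norm(X - C[1], axis=1)
    return (d1 < d0).astype(int)

def random_subspace_ensemble(Xtr, ytr, Xte, n_models=25, k=40):
    """Train one base learner per random feature subset; majority vote."""
    votes = np.zeros(len(Xte), dtype=int)
    for _ in range(n_models):
        idx = rng.choice(Xtr.shape[1], size=k, replace=False)
        C = fit_centroids(Xtr[:, idx], ytr)
        votes += predict_centroids(C, Xte[:, idx])
    return (votes * 2 > n_models).astype(int)

# Toy "high-dimension, small-sample" data: 200 features, 10 informative.
n_tr, n_te, p = 60, 200, 200
ytr = rng.integers(0, 2, n_tr)
yte = rng.integers(0, 2, n_te)
Xtr = rng.standard_normal((n_tr, p))
Xte = rng.standard_normal((n_te, p))
Xtr[:, :10] += 2.5 * ytr[:, None]     # class signal in the first 10 features
Xte[:, :10] += 2.5 * yte[:, None]
acc = (random_subspace_ensemble(Xtr, ytr, Xte) == yte).mean()
```

Each weak learner sees only a random slice of the features, so individual learners are noisy but decorrelated, and the vote recovers a strong classifier — the same mechanism RS_SVM exploits with SVM base learners.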
Subspace-based Inverse Uncertainty Quantification for Nuclear Data Assessment
Khuwaileh, B.A. Abdel-Khalik, H.S.
2015-01-15
Safety analysis and design optimization depend on the accurate prediction of various reactor attributes. Predictions can be enhanced by reducing the uncertainty associated with the attributes of interest. An inverse problem can be defined and solved to assess the sources of uncertainty, and experimental effort can be subsequently directed to further improve the uncertainty associated with these sources. In this work a subspace-based algorithm for inverse sensitivity/uncertainty quantification (IS/UQ) has been developed to enable analysts to account for all sources of nuclear data uncertainties in support of target accuracy assessment-type analysis. An approximate analytical solution of the optimization problem is used to guide the search for the dominant uncertainty subspace. By limiting the search to a subspace, the degrees of freedom available for the optimization search are significantly reduced. A quarter PWR fuel assembly is modeled and the accuracy of the multiplication factor and the fission reaction rate are used as reactor attributes whose uncertainties are to be reduced. Numerical experiments are used to demonstrate the computational efficiency of the proposed algorithm. Our ongoing work is focusing on extending the proposed algorithm to account for various forms of feedback, e.g., thermal-hydraulics and depletion effects.
Relative perturbation theory: (II) Eigenspace and singular subspace variations
Li, R.-C.
1996-01-20
The classical perturbation theory for Hermitian matrix eigenvalue and singular value problems provides bounds on invariant subspace variations that are proportional to the reciprocals of absolute gaps between subsets of spectra or subsets of singular values. These bounds may be bad news for invariant subspaces corresponding to clustered eigenvalues or clustered singular values of much smaller magnitudes than the norms of the matrices under consideration, when some of these clustered eigenvalues or clustered singular values are perfectly relatively distinguishable from the rest. This paper considers how eigenspaces of a Hermitian matrix A change when it is perturbed to Ã = D*AD and how singular subspaces of a (nonsquare) matrix B change when it is perturbed to B̃ = D₁*BD₂, where D, D₁, and D₂ are assumed to be close to identity matrices of suitable dimensions, or either D₁ or D₂ close to some unitary matrix. It is proved that under these kinds of perturbations, the changes of invariant subspaces are proportional to reciprocals of relative gaps between subsets of spectra or subsets of singular values. We have been able to extend the well-known Davis-Kahan sin θ theorems and Wedin sin θ theorems. As applications, we obtained bounds for perturbations of graded matrices.
Improved Stochastic Subspace System Identification for Structural Health Monitoring
NASA Astrophysics Data System (ADS)
Chang, Chia-Ming; Loh, Chin-Hsiung
2015-07-01
Structural health monitoring acquires structural information through numerous sensor measurements. Vibrational measurement data allow the dynamic characteristics of structures to be extracted, in particular the modal properties such as natural frequencies, damping, and mode shapes. Stochastic subspace system identification has been recognized as a powerful tool which can represent a structure in modal coordinates. To obtain high-quality identification results, this tool incurs considerable computational expense on large sets of measurements. In this study, a stochastic system identification framework is proposed to improve the efficiency and quality of the conventional stochastic subspace system identification. This framework includes 1) measured signal processing, 2) efficient space projection, 3) system order selection, and 4) modal property derivation. The measured signal processing employs the singular spectrum analysis algorithm to lower the noise components as well as to present a data set in a reduced dimension. The subspace is subsequently derived from the data set presented in a delayed coordinate. With the proposed order selection criteria, the number of structural modes is determined, resulting in the modal properties. This system identification framework is applied to a real-world bridge to explore its feasibility in real-time applications. The results show that this improved system identification method significantly decreases computational time, while high-quality modal parameters are still attained.
NASA Astrophysics Data System (ADS)
Weston, Brian; Nourgaliev, Robert; Delplanque, Jean-Pierre; Anderson, Andy
2016-11-01
The numerical simulation of flows associated with metal additive manufacturing processes such as selective laser melting and other laser-induced phase change applications present new challenges. Specifically, these flows require a fully compressible formulation since rapid density variations occur due to laser-induced melting and solidification of metal powder. We investigate the preconditioning for a recently developed all-speed compressible Navier-Stokes solver that addresses such challenges. The equations are discretized with a reconstructed Discontinuous Galerkin method and integrated in time with fully implicit discretization schemes. The resulting set of non-linear and linear equations are solved with a robust Newton-Krylov (NK) framework. To enable convergence of the highly ill-conditioned linearized systems, we employ a physics-based operator split preconditioner (PBP), utilizing a robust Schur complement technique. We investigate different options of splitting the physics (field) blocks as well as different block solvers on the reduced preconditioning matrix. We demonstrate that our NK-PBP framework is scalable and converges for high CFL/Fourier numbers on classic problems in fluid dynamics as well as for laser-induced phase change problems.
Vecharynski, Eugene; Yang, Chao; Pask, John E.
2015-02-25
Here, we present an iterative algorithm for computing an invariant subspace associated with the algebraically smallest eigenvalues of a large sparse or structured Hermitian matrix A. We are interested in the case in which the dimension of the invariant subspace is large (e.g., over several hundreds or thousands) even though it may still be small relative to the dimension of A. These problems arise from, for example, density functional theory (DFT) based electronic structure calculations for complex materials. The key feature of our algorithm is that it performs fewer Rayleigh–Ritz calculations compared to existing algorithms such as the locally optimal block preconditioned conjugate gradient or the Davidson algorithm. It is a block algorithm, and hence can take advantage of efficient BLAS3 operations and be implemented with multiple levels of concurrency. We discuss a number of practical issues that must be addressed in order to implement the algorithm efficiently on a high performance computer.
Semi-supervised subspace learning for Mumford-Shah model based texture segmentation.
Law, Yan Nei; Lee, Hwee Kuan; Yip, Andy M
2010-03-01
We propose a novel image segmentation model which incorporates subspace clustering techniques into a Mumford-Shah model to solve texture segmentation problems. While the natural unsupervised approach to learn a feature subspace can easily be trapped in a local solution, we propose a novel semi-supervised optimization algorithm that makes use of information derived from both the intermediate segmentation results and the regions-of-interest (ROI) selected by the user to determine the optimal subspaces of the target regions. Meanwhile, these subspaces are embedded into a Mumford-Shah objective function so that each segment of the optimal partition is homogeneous in its own subspace. The method outperforms standard Mumford-Shah models since it can separate textures which are less separated in the full feature space. Experimental results are presented to confirm the usefulness of subspace clustering in texture segmentation.
Physiology and pharmacology of myocardial preconditioning.
Raphael, Jacob
2010-03-01
Perioperative myocardial ischemia and infarction are not only major sources of morbidity and mortality in patients undergoing surgery but also important causes of prolonged hospital stay and resource utilization. Ischemic and pharmacological preconditioning and postconditioning have been known for more than two decades to provide protection against myocardial ischemia and reperfusion and limit myocardial infarct size in many experimental animal models, as well as in clinical studies (1-3). This paper will review the physiology and pharmacology of ischemic and drug-induced preconditioning and postconditioning of the myocardium with special emphasis on the mechanisms by which volatile anesthetics provide myocardial protection. Insights gained from animal and clinical studies will be presented and reviewed and recommendations for the use of perioperative anesthetics and medications will be given.
M-step preconditioned conjugate gradient methods
NASA Technical Reports Server (NTRS)
Adams, L.
1983-01-01
Preconditioned conjugate gradient methods for solving sparse symmetric positive definite systems of linear equations are described. Necessary and sufficient conditions are given for when these preconditioners can be used, and an analysis of their effectiveness is given. Efficient computer implementations of these methods are discussed, and results on the CYBER 203 and the Finite Element Machine under construction at NASA Langley Research Center are included.
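A minimal sketch of this family of methods, using m steps of Jacobi iteration as the preconditioner solve inside PCG on a small SPD tridiagonal system — the matrix and the choice m = 3 are invented stand-ins, not the paper's preconditioners or machines:

```python
import numpy as np

def m_step_jacobi(A, r, m=3):
    """Apply m Jacobi steps on Az = r (starting from z = 0) as the
    preconditioner solve z ~= M^{-1} r."""
    d = np.diag(A)
    z = r / d
    for _ in range(m - 1):
        z = z + (r - A @ z) / d
    return z

def pcg(A, b, M_solve, tol=1e-10, maxiter=200):
    """Standard preconditioned conjugate gradient for SPD A."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_solve(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_solve(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# SPD diagonally dominant tridiagonal test system.
n = 50
A = (np.diag(np.full(n, 4.0))
     + np.diag(np.full(n - 1, -1.0), 1)
     + np.diag(np.full(n - 1, -1.0), -1))
b = np.ones(n)
x = pcg(A, b, lambda r: m_step_jacobi(A, r, m=3))
```

For m steps of a symmetric splitting iteration such as Jacobi, the implied preconditioner remains symmetric positive definite here (the Jacobi iteration matrix has spectral radius below one for this system), which is what PCG requires.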
Silymarin and its constituents in cardiac preconditioning.
Zholobenko, A; Modriansky, M
2014-09-01
Silymarin, a standardised extract of Silybum marianum (milk thistle), consists mainly of silybin, with dehydrosilybin (DHSB), quercetin, taxifolin, silychristin and a number of other compounds which are known to possess a range of salutary effects. Indeed, there is evidence for their role in reducing tumour growth, preventing liver toxicity, and protecting a number of organs against ischemic damage. The hepatoprotective effects of silymarin, especially in preventing Amanita- and alcohol-intoxication-induced damage to the liver, are a well established fact. Likewise, there is weighty evidence that silymarin possesses antimicrobial and anticancer activities. Additionally, it has emerged that in animal models, silymarin can protect the heart, brain, liver and kidneys against ischemia-reperfusion injury, probably by preconditioning. The mechanisms of preconditioning are, in general, well studied, especially in the heart. On the other hand, the mechanism by which silymarin protects the heart from ischemia remains largely unexplored. This review, therefore, focuses on evaluating existing studies on silymarin-induced cardioprotection in the context of the established mechanisms of preconditioning. Copyright © 2014. Published by Elsevier B.V.
On polynomial preconditioning for indefinite Hermitian matrices
NASA Technical Reports Server (NTRS)
Freund, Roland W.
1989-01-01
The minimal residual method is studied combined with polynomial preconditioning for solving large linear systems (Ax = b) with indefinite Hermitian coefficient matrices (A). The standard approach for choosing the polynomial preconditioners leads to preconditioned systems which are positive definite. Here, a different strategy is studied which leaves the preconditioned coefficient matrix indefinite. More precisely, the polynomial preconditioner is designed to cluster the positive, resp. negative eigenvalues of A around 1, resp. around some negative constant. In particular, it is shown that such indefinite polynomial preconditioners can be obtained as the optimal solutions of a certain two-parameter family of Chebyshev approximation problems. Some basic results are established for these approximation problems and a Remez-type algorithm is sketched for their numerical solution. The problem of selecting the parameters such that the resulting indefinite polynomial preconditioner speeds up the convergence of the minimal residual method optimally is also addressed. An approach is proposed based on the concept of asymptotic convergence factors. Finally, some numerical examples of indefinite polynomial preconditioners are given.
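The paper treats indefinite matrices with Chebyshev-optimal polynomials; as a simpler definite illustration of how a polynomial in A clusters the preconditioned spectrum, the sketch below uses a truncated Neumann series on an SPD matrix with known eigenvalues (all parameters invented):

```python
import numpy as np

def neumann_precond(A, k=5):
    """p(A) = sum_{j=0}^{k} (I - A)^j, a truncated Neumann series
    approximating A^{-1}; a simple stand-in for the paper's Chebyshev
    constructions, valid when the spectrum of A lies inside (0, 2)."""
    n = A.shape[0]
    P = np.eye(n)
    term = np.eye(n)
    for _ in range(k):
        term = term @ (np.eye(n) - A)
        P += term
    return P

rng = np.random.default_rng(5)
Q, _ = np.linalg.qr(rng.standard_normal((30, 30)))
eigs = np.linspace(0.1, 1.9, 30)          # spectrum inside (0, 2)
A = (Q * eigs) @ Q.T                      # SPD matrix with known eigenvalues
P = neumann_precond(A)
# Eigenvalues of P A equal 1 - (1 - lambda)^6, clustered in about [0.47, 1].
cond_A = np.linalg.cond(A)
cond_PA = np.linalg.cond(P @ A)
```

The preconditioned eigenvalues are pulled toward 1, which is exactly the clustering effect that accelerates Krylov methods; the paper's contribution is achieving an analogous two-sided clustering for indefinite spectra.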
Lck activation mediates neuroprotection during ischemic preconditioning
Bae, Ok-Nam; Rajanikant, Krishnamurthy; Min, Jiangyong; Smith, Jeremy; Baek, Seung-Hoon; Serfozo, Kelsey; Hejabian, Siamak; Lee, Ki Yong; Kassab, Mounzer; Majid, Arshad
2012-01-01
The molecular mechanisms underlying preconditioning (PC), a powerful endogenous neuroprotective phenomenon, remain to be fully elucidated. Once identified, these endogenous mechanisms could be manipulated for therapeutic gain. We investigated whether Lck, a member of the Src kinase family, mediates PC. We employed both in vitro primary cortical neurons and in vivo mouse cerebral focal ischemia models of preconditioning, cellular injury and neuroprotection. Genetically engineered mice deficient in Lck, gene silencing using siRNA and pharmacological approaches were used. Cortical neurons preconditioned with sub-lethal exposure to NMDA or oxygen glucose deprivation (OGD) exhibited enhanced Lck kinase activity, and were resistant to injury on subsequent exposure to lethal levels of NMDA or OGD. Lck gene silencing using siRNA abolished tolerance against both stimuli. Lck−/− mice or neurons isolated from Lck−/− mice did not exhibit PC-induced tolerance. An Lck antagonist administered to wild-type mice significantly attenuated the neuroprotective effect of PC in the mouse focal ischemia model. Using pharmacological and gene silencing strategies, we also showed that PKCε is an upstream regulator of Lck, and Fyn is a downstream target of Lck. We have discovered that Lck plays an essential role in PC in both cellular and animal models of stroke. Our data also show that the PKCε-Lck-Fyn axis is a key mediator of PC. These findings provide new opportunities for stroke therapy development. PMID:22623673
Video background tracking and foreground extraction via L1-subspace updates
NASA Astrophysics Data System (ADS)
Pierantozzi, Michele; Liu, Ying; Pados, Dimitris A.; Colonnese, Stefania
2016-05-01
We consider the problem of online foreground extraction from compressed-sensed (CS) surveillance videos. A technically novel approach is suggested and developed by which the background scene is captured by an L1- norm subspace sequence directly in the CS domain. In contrast to conventional L2-norm subspaces, L1-norm subspaces are seen to offer significant robustness to outliers, disturbances, and rank selection. Subtraction of the L1-subspace tracked background leads then to effective foreground/moving objects extraction. Experimental studies included in this paper illustrate and support the theoretical developments.
Subspace-based analysis of the ERT inverse problem
NASA Astrophysics Data System (ADS)
Ben Hadj Miled, Mohamed Khames; Miller, Eric L.
2004-05-01
In a previous work, we proposed a source-type formulation to the electrical resistance tomography (ERT) problem. Specifically, we showed that inhomogeneities in the medium can be viewed as secondary sources embedded in the homogeneous background medium and located at positions associated with variation in electrical conductivity. Assuming a piecewise constant conductivity distribution, the support of equivalent sources is equal to the boundary of the inhomogeneity. The estimation of the anomaly shape takes the form of an inverse source-type problem. In this paper, we explore the use of subspace methods to localize the secondary equivalent sources associated with discontinuities in the conductivity distribution. Our first alternative is the multiple signal classification (MUSIC) algorithm which is commonly used in the localization of multiple sources. The idea is to project a finite collection of plausible pole (or dipole) sources onto an estimated signal subspace and select those with largest correlations. In ERT, secondary sources are excited simultaneously but in different ways, i.e. with distinct amplitude patterns, depending on the locations and amplitudes of primary sources. If the number of receivers is "large enough", different source configurations can lead to a set of observation vectors that span the data subspace. However, since sources that are spatially close to each other have highly correlated signatures, separation of such signals becomes very difficult in the presence of noise. To overcome this problem we consider iterative MUSIC algorithms like R-MUSIC and RAP-MUSIC. These recursive algorithms pose a computational burden as they require multiple large combinatorial searches. Results obtained with these algorithms using simulated data of different conductivity patterns are presented.
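The core MUSIC step described above can be sketched in a few lines of NumPy. The toy 1D geometry, Gaussian source signatures and noise-free data are illustrative assumptions, not the ERT setup of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Dictionary of candidate source signatures (Gaussian bumps on a 20-sensor line).
sensors = np.arange(20.0)
candidates = np.arange(10.0) * 2.0                              # 10 candidate locations
D = np.exp(-0.5 * (sensors[:, None] - candidates[None, :])**2)  # 20 x 10 dictionary

true_idx = [2, 7]                 # two active secondary sources
S = rng.standard_normal((2, 6))   # distinct amplitude patterns over 6 excitations
X = D[:, true_idx] @ S            # noise-free observations, rank 2

# Signal subspace = dominant left singular vectors of the data matrix.
U, _, _ = np.linalg.svd(X, full_matrices=True)
Un = U[:, 2:]                     # orthogonal complement (noise subspace)

# MUSIC pseudo-spectrum: candidates nearly orthogonal to the noise subspace score high.
score = 1.0 / (1e-12 + np.linalg.norm(Un.T @ D, axis=0)**2)
found = sorted(np.argsort(score)[-2:])
print(found)  # -> [2, 7]
```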
Updating Hawaii Seismicity Catalogs with Systematic Relocations and Subspace Detectors
NASA Astrophysics Data System (ADS)
Okubo, P.; Benz, H.; Matoza, R. S.; Thelen, W. A.
2015-12-01
We continue the systematic relocation of seismicity recorded in Hawai`i by the United States Geological Survey's (USGS) Hawaiian Volcano Observatory (HVO), with the aim of adding to the products derived from the relocated seismicity catalogs published by Matoza et al. (2013, 2014). Another goal of this effort is updating the systematically relocated HVO catalog since 2009, when earthquake cataloging at HVO was migrated to the USGS Advanced National Seismic System Quake Management Software (AQMS) systems. To complement the relocation analyses of the catalogs generated from traditional STA/LTA event-triggered and analyst-reviewed approaches, we are also experimenting with subspace detection of events at Kilauea as a means to augment AQMS procedures for cataloging seismicity to lower magnitudes and during episodes of elevated volcanic activity. Our earlier catalog relocations have demonstrated the ability to define correlated or repeating families of earthquakes and provide more detailed definition of seismogenic structures, as well as the capability for improved automatic identification of diverse volcanic seismic sources. Subspace detectors have been successfully applied to cataloging seismicity in situations of low seismic signal-to-noise and have significantly increased catalog sensitivity to lower magnitude thresholds. We anticipate similar improvements using event subspace detections and cataloging of volcanic seismicity that include improved discrimination among not only evolving earthquake sequences but also diverse volcanic seismic source processes. Matoza et al., 2013, Systematic relocation of seismicity on Hawai`i Island from 1992 to 2009 using waveform cross correlation and cluster analysis, J. Geophys. Res., 118, 2275-2288, doi:10.1002/jgrb.580189 Matoza et al., 2014, High-precision relocation of long-period events beneath the summit region of Kīlauea Volcano, Hawai`i, from 1986 to 2009, Geophys. Res. Lett., 41, 3413-3421, doi:10.1002/2014GL059819
Accurate Excited State Geometries within Reduced Subspace TDDFT/TDA.
Robinson, David
2014-12-09
A method for the calculation of TDDFT/TDA excited state geometries within a reduced subspace of Kohn-Sham orbitals has been implemented and tested. Accurate geometries are found for all of the fluorophore-like molecules tested, with at most all occupied valence orbitals and half of the virtual orbitals included, and for some molecules even fewer orbitals. Efficiency gains of between 15 and 30% are found for essentially the same level of accuracy as a standard TDDFT/TDA excited state geometry optimization calculation.
r-principal subspace for driver cognitive state classification.
Almahasneh, Hossam; Kamel, Nidal; Walter, Nicolas; Malik, Aamir Saeed
2015-01-01
Using EEG signals, a novel technique for driver cognitive state assessment is presented, analyzed and experimentally verified. The proposed technique depends on the singular value decomposition (SVD) in finding the distributed energy of the EEG data matrix A in the direction of the r-principal subspace. This distribution is unique and sensitive to the changes in the cognitive state of the driver due to external stimuli, so it is used as a set of features for classification. The proposed technique is tested with 42 subjects using 128 EEG channels and the results show significant improvements in terms of accuracy, specificity, sensitivity, and false detection in comparison to other recently proposed techniques.
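A hedged sketch of the SVD-based feature described above, assuming only NumPy: the fraction of the data matrix's energy captured by its r leading principal directions. The synthetic "EEG-like" data and the choice r = 2 are illustrative:

```python
import numpy as np

def r_principal_energy(A, r):
    """Fraction of the total energy of data matrix A captured by its
    r leading principal directions (computed from the singular values)."""
    s = np.linalg.svd(A, compute_uv=False)
    return np.sum(s[:r]**2) / np.sum(s**2)

rng = np.random.default_rng(1)
# Synthetic "EEG-like" data: 8 channels, 200 samples, dominated by 2 components.
B = rng.standard_normal((8, 2)) @ rng.standard_normal((2, 200))
A = B + 0.01 * rng.standard_normal((8, 200))
print(r_principal_energy(A, 2))  # close to 1.0 for this nearly rank-2 matrix
```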
Phenotype recognition with combined features and random subspace classifier ensemble.
Zhang, Bailing; Pham, Tuan D
2011-04-30
Automated, image based high-content screening is a fundamental tool for discovery in biological science. Modern robotic fluorescence microscopes are able to capture thousands of images from massively parallel experiments such as RNA interference (RNAi) or small-molecule screens. As such, efficient computational methods are required for automatic cellular phenotype identification capable of dealing with large image data sets. In this paper we investigated an efficient method for the extraction of quantitative features from images by combining second order statistics, or Haralick features, with curvelet transform. A random subspace based classifier ensemble with multiple layer perceptron (MLP) as the base classifier was then exploited for classification. Haralick features estimate image properties related to second-order statistics based on the grey level co-occurrence matrix (GLCM), which has been extensively used for various image processing applications. The curvelet transform has a more sparse representation of the image than wavelet, thus offering a description with higher time frequency resolution and high degree of directionality and anisotropy, which is particularly appropriate for many images rich with edges and curves. A combined feature description from Haralick feature and curvelet transform can further increase the accuracy of classification by taking their complementary information. We then investigate the applicability of the random subspace (RS) ensemble method for phenotype classification based on microscopy images. A base classifier is trained with a RS sampled subset of the original feature set and the ensemble assigns a class label by majority voting. Experimental results on the phenotype recognition from three benchmarking image sets including HeLa, CHO and RNAi show the effectiveness of the proposed approach. The combined feature is better than any individual one in the classification accuracy. The ensemble model produces better classification
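The random subspace ensemble idea can be sketched in NumPy with a nearest-centroid base classifier standing in for the paper's MLP; the toy data, subset sizes and ensemble size are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data: 40 samples, 30 features (few samples relative to features).
n, d = 40, 30
X = rng.standard_normal((n, d))
y = np.repeat([0, 1], n // 2)
X[y == 1, :5] += 3.0          # the classes differ only in the first 5 features

def fit_centroids(Xs, ys):
    return np.stack([Xs[ys == c].mean(axis=0) for c in (0, 1)])

# Random subspace ensemble: each base learner is trained on a random feature subset.
ensemble = []
for _ in range(25):
    feats = rng.choice(d, size=10, replace=False)
    ensemble.append((feats, fit_centroids(X[:, feats], y)))

def predict(Xq):
    votes = np.zeros((len(Xq), 2))
    for feats, cents in ensemble:
        dists = np.linalg.norm(Xq[:, feats, None] - cents.T[None], axis=1)
        votes[np.arange(len(Xq)), np.argmin(dists, axis=1)] += 1
    return votes.argmax(axis=1)       # class label by majority voting

print((predict(X) == y).mean())       # training accuracy of the ensemble
```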
Software for computing eigenvalue bounds for iterative subspace matrix methods
NASA Astrophysics Data System (ADS)
Shepard, Ron; Minkoff, Michael; Zhou, Yunkai
2005-07-01
This paper describes software for computing eigenvalue bounds to the standard and generalized hermitian eigenvalue problem as described in [Y. Zhou, R. Shepard, M. Minkoff, Computing eigenvalue bounds for iterative subspace matrix methods, Comput. Phys. Comm. 167 (2005) 90-102]. The software discussed in this manuscript applies to any subspace method, including Lanczos, Davidson, SPAM, Generalized Davidson Inverse Iteration, Jacobi-Davidson, and the Generalized Jacobi-Davidson methods, and it is applicable to either outer or inner eigenvalues. This software can be applied during the subspace iterations in order to truncate the iterative process and to avoid unnecessary effort when converging specific eigenvalues to a required target accuracy, and it can be applied to the final set of Ritz values to assess the accuracy of the converged results. Program summary: Title of program: SUBROUTINE BOUNDS_OPT Catalogue identifier: ADVE Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVE Computers: any computer that supports a Fortran 90 compiler Operating systems: any computer that supports a Fortran 90 compiler Programming language: Standard Fortran 90 High speed storage required: 5m+5 working-precision and 2m+7 integer for m Ritz values No. of bits in a word: The floating point working precision is parameterized with the symbolic constant WP No. of lines in distributed program, including test data, etc.: 2452 No. of bytes in distributed program, including test data, etc.: 281 543 Distribution format: tar.gz Nature of physical problem: The computational solution of eigenvalue problems using iterative subspace methods has widespread applications in the physical sciences and engineering as well as other areas of mathematical modeling (economics, social sciences, etc.). The accuracy of the solution of such problems and the utility of those errors is a fundamental problem that is of
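A small NumPy illustration of the kind of a-posteriori bound such software certifies for Hermitian problems: each Ritz value lies within its residual norm of some exact eigenvalue. The matrix and subspace below are illustrative, not the BOUNDS_OPT code:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hermitian test matrix and an orthonormal basis of a 6-dimensional Krylov subspace.
n, m = 50, 6
A = rng.standard_normal((n, n)); A = (A + A.T) / 2
v = rng.standard_normal(n)
K = np.stack([np.linalg.matrix_power(A, j) @ v for j in range(m)], axis=1)
Q, _ = np.linalg.qr(K)

# Ritz values and vectors from the projected (Rayleigh-Ritz) matrix.
theta, Y = np.linalg.eigh(Q.T @ A @ Q)
ritz_vecs = Q @ Y

# Residual-norm bound for Hermitian problems: each Ritz value theta lies
# within ||A y - theta y|| of some exact eigenvalue of A.
lam = np.linalg.eigvalsh(A)
worst = max(np.min(np.abs(lam - t)) - np.linalg.norm(A @ yv - t * yv)
            for t, yv in zip(theta, ritz_vecs.T))
print(worst <= 1e-10)  # -> True
```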
Subspace-Based Bayesian Blind Source Separation for Hyperspectral Imagery
2009-12-01
Dobigeon, Nicolas; Moussaoui, Saïd; Coulon, Martial; Tourneret, Jean-Yves
Preconditioned minimal residual methods for Chebyshev spectral calculations
NASA Technical Reports Server (NTRS)
Canuto, C.; Quarteroni, A.
1985-01-01
The problem of preconditioning the pseudospectral Chebyshev approximation of an elliptic operator is considered. The numerical sensitivity to variations of the coefficients of the operator is investigated for two classes of preconditioning matrices: one arising from finite differences, the other from finite elements. The preconditioned system is solved by a conjugate gradient type method, and by a Dufort-Frankel method with dynamical parameters. The methods are compared on some test problems with the Richardson method and with the minimal residual Richardson method.
A Jacobian-Free Newton Krylov Method for Mortar-Discretized Thermomechanical Contact Problems
Glen Hansen
2011-07-01
Multibody contact problems are common within the field of multiphysics simulation. Applications involving thermomechanical contact scenarios are also quite prevalent. Such problems can be challenging to solve due to the likelihood of thermal expansion affecting contact geometry which, in turn, can change the thermal behavior of the components being analyzed. This paper explores a simple model of a light water reactor nuclear reactor fuel rod, which consists of cylindrical pellets of uranium dioxide (UO2) fuel sealed within a Zircalloy cladding tube. The tube is initially filled with helium gas, which fills the gap between the pellets and cladding tube. The accurate modeling of heat transfer across the gap between fuel pellets and the protective cladding is essential to understanding fuel performance, including cladding stress and behavior under irradiated conditions, which are factors that affect the lifetime of the fuel. The thermomechanical contact approach developed here is based on the mortar finite element method, where Lagrange multipliers are used to enforce weak continuity constraints at participating interfaces. In this formulation, the heat equation couples to linear mechanics through a thermal expansion term. Lagrange multipliers are used to formulate the continuity constraints for both heat flux and interface traction at contact interfaces. The resulting system of nonlinear algebraic equations are cast in residual form for solution of the transient problem. A Jacobian-free Newton Krylov method is used to provide for fully-coupled solution of the coupled thermal contact and heat equations.
Recovery Discontinuous Galerkin Jacobian-free Newton-Krylov Method for all-speed flows
HyeongKae Park; Robert Nourgaliev; Vincent Mousseau; Dana Knoll
2008-07-01
There is increasing interest in developing the next generation of simulation tools for advanced nuclear energy systems. These tools will utilize state-of-the-art numerical algorithms and computer science technology in order to maximize predictive capability, support advanced reactor designs, reduce uncertainty and increase safety margins. In analyzing nuclear energy systems, we are interested in compressible low-Mach number, high heat flux flows with a wide range of Re, Ra, and Pr numbers. Under these conditions, the focus is placed on turbulent heat transfer, in contrast to other industries whose main interest is in capturing turbulent mixing. Our objective is to develop single-point turbulence closure models for a large-scale engineering CFD code, using Direct Numerical Simulation (DNS) or Large Eddy Simulation (LES) tools, which requires very accurate and efficient numerical algorithms. The focus of this work is placed on a fully-implicit, high-order spatiotemporal discretization based on the discontinuous Galerkin method solving the conservative form of the compressible Navier-Stokes equations. The method utilizes a local reconstruction procedure derived from a weak formulation of the problem, which is inspired by the recovery diffusion flux algorithm of van Leer and Nomura [?] and by the piecewise parabolic reconstruction [?] in the finite volume method. The developed methodology is integrated into the Jacobian-free Newton-Krylov framework [?] to allow a fully-implicit solution of the problem.
NASA Astrophysics Data System (ADS)
Jiang, Tian; Zhang, Yong-Tao
2013-11-01
Implicit integration factor (IIF) methods are originally a class of efficient “exactly linear part” time discretization methods for solving time-dependent partial differential equations (PDEs) with linear high order terms and stiff lower order nonlinear terms. For complex systems (e.g. advection-diffusion-reaction (ADR) systems), the highest order derivative term can be nonlinear, and nonlinear nonstiff terms and nonlinear stiff terms are often mixed together. High order weighted essentially non-oscillatory (WENO) methods are often used to discretize the hyperbolic part in ADR systems. There are two open problems on IIF methods for solving ADR systems: (1) how to obtain higher than the second order global time discretization accuracy; (2) how to design IIF methods for solving fully nonlinear PDEs, i.e., the highest order terms are nonlinear. In this paper, we solve these two problems by developing new Krylov IIF-WENO methods to deal with both semilinear and fully nonlinear advection-diffusion-reaction equations. The methods can be designed for arbitrary order of accuracy. The stiffness of the system is resolved well and the methods are stable by using time step sizes which are just determined by the nonstiff hyperbolic part of the system. Large time step size computations are obtained. We analyze the stability and truncation errors of the schemes. Numerical examples of both scalar equations and systems in two and three spatial dimensions are shown to demonstrate the accuracy, efficiency and robustness of the methods.
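The Krylov ingredient of these IIF methods, approximating the action of a matrix exponential from an Arnoldi subspace rather than forming exp(ΔtA), can be sketched as follows; the diffusion operator, dimensions and subspace size are illustrative:

```python
import numpy as np
from scipy.linalg import expm

def arnoldi_expm_action(A, v, dt, m=20):
    """Approximate exp(dt*A) @ v from an m-dimensional Krylov subspace,
    the core operation behind Krylov implicit integration factor methods."""
    n = len(v)
    beta = np.linalg.norm(v)
    V = np.zeros((n, m + 1)); H = np.zeros((m + 1, m))
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):            # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:           # happy breakdown: subspace is invariant
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    # exponential of the small Hessenberg matrix replaces that of the large A
    return beta * V[:, :m] @ expm(dt * H[:m, :m])[:, 0]

# Stiff diffusion operator on 100 grid points.
n = 100
L = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
A = 100.0 * L
v = np.sin(np.linspace(0, np.pi, n))
approx = arnoldi_expm_action(A, v, dt=0.01, m=20)
exact = expm(0.01 * A) @ v
print(np.linalg.norm(approx - exact))   # Krylov approximation error
```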
Preconditioning and the limit to the incompressible flow equations
NASA Technical Reports Server (NTRS)
Turkel, E.; Fiterman, A.; Vanleer, B.
1993-01-01
The use of preconditioning methods to accelerate the convergence to a steady state for both the incompressible and compressible fluid dynamic equations is considered. The relation between them for both the continuous problem and the finite difference approximation is also considered. The analysis relies on the inviscid equations. The preconditioning consists of a matrix multiplying the time derivatives. Hence, the steady state of the preconditioned system is the same as the steady state of the original system. For finite difference methods the preconditioning can change and improve the steady state solutions. An application to flow around an airfoil is presented.
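A tiny demonstration of the central point above, that multiplying the time derivative by a preconditioner changes the convergence path and rate but not the steady state; the matrix, preconditioner and step sizes are illustrative:

```python
import numpy as np

# Pseudo-time marching to steady state: dx/dt = -P^{-1} (A x - b).
# The steady state satisfies A x = b for any nonsingular P.
A = np.array([[4.0, 1.0], [1.0, 100.0]])
b = np.array([1.0, 2.0])
P_inv = np.diag(1.0 / np.diag(A))          # Jacobi-style preconditioner

def march(M, dt, steps=4000):
    x = np.zeros(2)
    for _ in range(steps):
        x = x - dt * (M @ (A @ x - b))     # explicit pseudo-time step
    return x

x_plain = march(np.eye(2), dt=0.019)       # dt limited by the stiffest eigenvalue
x_prec  = march(P_inv, dt=0.9)             # preconditioning permits a far larger step
print(np.linalg.norm(A @ x_plain - b), np.linalg.norm(A @ x_prec - b))
```

Both runs reach the same steady state, but the preconditioned system equilibrates the eigenvalue spread and converges in far fewer steps.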
Ischemic preconditioning protects against gap junctional uncoupling in cardiac myofibroblasts.
Sundset, Rune; Cooper, Marie; Mikalsen, Svein-Ole; Ytrehus, Kirsti
2004-01-01
Ischemic preconditioning increases the heart's tolerance to a subsequent longer ischemic period. The purpose of this study was to investigate the role of gap junction communication in simulated preconditioning in cultured neonatal rat cardiac myofibroblasts. Gap junctional intercellular communication was assessed by Lucifer yellow dye transfer. Preconditioning preserved intercellular coupling after prolonged ischemia. An initial reduction in coupling in response to the preconditioning stimulus was also observed. This may protect neighboring cells from damaging substances produced during subsequent regional ischemia in vivo, and may preserve gap junctional communication required for enhanced functional recovery during subsequent reperfusion.
Implicit preconditioned WENO scheme for steady viscous flow computation
NASA Astrophysics Data System (ADS)
Huang, Juan-Chen; Lin, Herng; Yang, Jaw-Yen
2009-02-01
A class of lower-upper symmetric Gauss-Seidel implicit weighted essentially nonoscillatory (WENO) schemes is developed for solving the preconditioned Navier-Stokes equations of primitive variables with the Spalart-Allmaras one-equation turbulence model. The numerical flux of the present preconditioned WENO schemes consists of a first-order part and a high-order part. For the first-order part, we adopt the preconditioned Roe scheme, and for the high-order part, we employ preconditioned WENO methods. For comparison purposes, a preconditioned TVD scheme is also given and tested. A time-derivative preconditioning algorithm is devised, with a discriminant for adjusting the preconditioning parameters at low Mach numbers and turning off the preconditioning at intermediate or high Mach numbers. The computations are performed for the two-dimensional lid driven cavity flow, low subsonic viscous flow over S809 airfoil, three-dimensional low speed viscous flow over 6:1 prolate spheroid, transonic flow over ONERA-M6 wing and hypersonic flow over HB-2 model. The solutions of the present algorithms are in good agreement with the experimental data. The application of the preconditioned WENO schemes to viscous flows at all speeds not only enhances the accuracy and robustness of resolving shock and discontinuities for supersonic flows, but also improves the accuracy of low Mach number flow with complicated smooth solution structures.
Matrix preconditioning: a robust operation for optical linear algebra processors.
Ghosh, A; Paparao, P
1987-07-15
Analog electrooptical processors are best suited for applications demanding high computational throughput with tolerance for inaccuracies. Matrix preconditioning is one such application. Matrix preconditioning is a preprocessing step for reducing the condition number of a matrix and is used extensively with gradient algorithms for increasing the rate of convergence and improving the accuracy of the solution. In this paper, we describe a simple parallel algorithm for matrix preconditioning, which can be implemented efficiently on a pipelined optical linear algebra processor. From the results of our numerical experiments we show that the efficacy of the preconditioning algorithm is affected very little by the errors of the optical system.
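A minimal sketch of diagonal (Jacobi) preconditioning as a condition-number-reducing preprocessing step. NumPy stands in for the optical processor here, and the badly scaled matrix is constructed purely for illustration:

```python
import numpy as np

# Symmetric Jacobi (diagonal) preconditioning: rescaling by D^{-1/2} can
# reduce the condition number dramatically when the matrix is badly scaled.
M = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])            # well-conditioned SPD core
S = np.diag([1.0, 10.0, 100.0])            # bad row/column scaling
A = S @ M @ S                              # badly scaled SPD matrix

d = 1.0 / np.sqrt(np.diag(A))
A_pre = (A * d[:, None]) * d[None, :]      # D^{-1/2} A D^{-1/2}

print(np.linalg.cond(A), np.linalg.cond(A_pre))   # large vs. small
```

A gradient-type solver applied to the preconditioned system converges at a rate governed by the much smaller condition number.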
Random subspace ensemble for target recognition of ladar range image
NASA Astrophysics Data System (ADS)
Liu, Zheng-Jun; Li, Qi; Wang, Qi
2013-02-01
Laser detection and ranging (ladar) range images have attracted considerable attention in the field of automatic target recognition. Generally, it is difficult to collect a mass of range images for ladar in real applications. However, with small samples, the Hughes effect may occur when the number of features is larger than the size of the training samples. A random subspace ensemble of support vector machines (RSE-SVM) is applied to solve the problem. Three experiments were performed: (1) a performance comparison among affine moment invariants (AMIs), Zernike moment invariants (ZMIs) and their combined moment invariants (CMIs) based on different size training sets using a single SVM; (2) an analysis of the impact of the number of features on the RSE-SVM and a semi-random subspace ensemble of support vector machines; (3) a performance comparison between the RSE-SVM and the CMIs with SVM ensembles. The experimental results demonstrate that the RSE-SVM is able to relieve the Hughes effect and perform better than ZMIs with a single SVM and CMIs with SVM ensembles.
Conformal Laplace superintegrable systems in 2D: polynomial invariant subspaces
NASA Astrophysics Data System (ADS)
Escobar-Ruiz, M. A.; Miller, Willard, Jr.
2016-07-01
2nd-order conformal superintegrable systems in n dimensions are Laplace equations on a manifold with an added scalar potential and 2n-1 independent 2nd order conformal symmetry operators. They encode all the information about Helmholtz (eigenvalue) superintegrable systems in an efficient manner: there is a 1-1 correspondence between Laplace superintegrable systems and Stäckel equivalence classes of Helmholtz superintegrable systems. In this paper we focus on superintegrable systems in two dimensions, n = 2, where there are 44 Helmholtz systems, corresponding to 12 Laplace systems. For each Laplace equation we determine the possible two-variate polynomial subspaces that are invariant under the action of the Laplace operator, thus leading to families of polynomial eigenfunctions. We also study the behavior of the polynomial invariant subspaces under a Stäckel transform. The principal new results are the details of the polynomial variables and the conditions on parameters of the potential corresponding to polynomial solutions. The hidden gl(3)-algebraic structure is exhibited for the exact and quasi-exact systems. For physically meaningful solutions, the orthogonality properties and normalizability of the polynomials are presented as well. Finally, for all Helmholtz superintegrable solvable systems we give a unified construction of one-dimensional (1D) and two-dimensional (2D) quasi-exactly solvable potentials possessing polynomial solutions, and a construction of new 2D PT-symmetric potentials is established.
A Subspace Method for Dynamical Estimation of Evoked Potentials
Georgiadis, Stefanos D.; Ranta-aho, Perttu O.; Tarvainen, Mika P.; Karjalainen, Pasi A.
2007-01-01
It is a challenge in evoked potential (EP) analysis to incorporate prior physiological knowledge for estimation. In this paper, we address the problem of single-channel trial-to-trial EP characteristics estimation. Prior information about phase-locked properties of the EPs is assessed by means of the estimated signal subspace and eigenvalue decomposition. Then, for those situations in which dynamic fluctuations from stimulus to stimulus could be expected, prior information can be exploited by means of state-space modeling and recursive Bayesian mean square estimation methods (Kalman filtering and smoothing). We demonstrate that a few dominant eigenvectors of the data correlation matrix are able to model trend-like changes of some component of the EPs, and that the Kalman smoother algorithm is preferred in terms of better tracking capabilities and mean square error reduction. We also demonstrate the effect of strong artifacts, particularly eye blinks, on the quality of the signal subspace and EP estimates by means of independent component analysis applied as a preprocessing step on the multichannel measurements. PMID:18288257
ERIC Educational Resources Information Center
Wawro, Megan; Sweeney, George F.; Rabin, Jeffrey M.
2011-01-01
This paper reports on a study investigating students' ways of conceptualizing key ideas in linear algebra, with the particular results presented here focusing on student interactions with the notion of subspace. In interviews conducted with eight undergraduates, we found students' initial descriptions of subspace often varied substantially from…
Crystallizing highly-likely subspaces that contain an unknown quantum state of light
NASA Astrophysics Data System (ADS)
Teo, Yong Siah; Mogilevtsev, Dmitri; Mikhalychev, Alexander; Řeháček, Jaroslav; Hradil, Zdeněk
2016-12-01
In continuous-variable tomography, with finite data and limited computation resources, reconstruction of a quantum state of light is performed on a finite-dimensional subspace. In principle, the data themselves encode all information about the relevant subspace that physically contains the state. We provide a straightforward and numerically feasible procedure to uniquely determine the appropriate reconstruction subspace by extracting this information directly from the data for any given unknown quantum state of light and measurement scheme. This procedure makes use of the celebrated statistical principle of maximum likelihood, along with other validation tools, to grow an appropriate seed subspace into the optimal reconstruction subspace, much like the nucleation of a seed into a crystal. Apart from using the available measurement data, no other assumptions about the source or preconceived parametric model subspaces are invoked. This ensures that no spurious reconstruction artifacts are present in state reconstruction as a result of inappropriate choices of the reconstruction subspace. The procedure can be understood as the maximum-likelihood reconstruction for quantum subspaces, which is an analog to, and fully compatible with that for quantum states.
NASA Astrophysics Data System (ADS)
Zhang, Xing; Wen, Gongjian
2015-10-01
Anomaly detection (AD) is becoming increasingly important in hyperspectral imagery analysis, with many practical applications. The local orthogonal subspace projection (LOSP) detector is a popular anomaly detector that exploits local endmembers/eigenvectors around the pixel under test (PUT) to construct a background subspace. However, this subspace exploits only the spectral information, while the spatial correlation of the background clutter is neglected, which makes the anomaly detection result sensitive to the accuracy of the estimated subspace. In this paper, a local three-dimensional orthogonal subspace projection (3D-LOSP) algorithm is proposed. First, using spectral and spatial information jointly, three directional background subspaces are created along the image height direction, the image width direction and the spectral direction, respectively. Then, the three corresponding orthogonal subspaces are calculated. After that, each vector along the three directions of the local cube is projected onto the corresponding orthogonal subspace. Finally, a composite score is given through the three directional operators. In 3D-LOSP, anomalies are redefined as targets that are not only spectrally different from the background but also spatially distinct. Thanks to the addition of spatial information, the robustness of the anomaly detection result is greatly improved by the proposed 3D-LOSP algorithm. Notably, the proposed algorithm is an extension of LOSP, and this idea can inspire many other spectral-based anomaly detection methods. Experiments with real hyperspectral images have demonstrated the stability of the detection results.
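The core operation behind LOSP-style detectors is the orthogonal subspace projection itself. A minimal sketch of the classic (spectral-only) version follows; the function name `osp_score` and the toy data are illustrative, not from the paper, and a full 3D-LOSP would repeat this along the height, width and spectral directions of the local cube:

```python
import numpy as np

def osp_score(B, x):
    """Classic orthogonal subspace projection score: project the test pixel x
    onto the orthogonal complement of the background subspace spanned by the
    columns of B; a large residual norm flags an anomaly."""
    P_perp = np.eye(len(x)) - B @ np.linalg.pinv(B)   # I - B (B^T B)^{-1} B^T
    return np.linalg.norm(P_perp @ x)

# Toy check: a pixel inside the background subspace scores ~0, while a pixel
# with a component outside it scores high.
rng = np.random.default_rng(1)
B = rng.standard_normal((10, 3))                      # 3 background endmembers
background_pixel = B @ rng.standard_normal(3)
q = np.linalg.qr(np.hstack([B, rng.standard_normal((10, 1))]))[0][:, 3]
anomaly_pixel = background_pixel + 5.0 * q            # add an out-of-subspace part
print(osp_score(B, background_pixel), osp_score(B, anomaly_pixel))
```

The projector annihilates anything in the span of `B`, so only the out-of-subspace component of a pixel survives and contributes to the score.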
Helium induces preconditioning in human endothelium in vivo.
Smit, Kirsten F; Oei, Gezina T M L; Brevoord, Daniel; Stroes, Erik S; Nieuwland, Rienk; Schlack, Wolfgang S; Hollmann, Markus W; Weber, Nina C; Preckel, Benedikt
2013-01-01
Helium protects myocardium by inducing preconditioning in animals. We investigated whether human endothelium is preconditioned by helium inhalation in vivo. Forearm ischemia-reperfusion (I/R) in healthy volunteers (each group n = 10) was performed by inflating a blood pressure cuff for 20 min. Endothelium-dependent and endothelium-independent responses were measured after cumulative dose-response infusion of acetylcholine and sodium nitroprusside, respectively, at baseline and after 15 min of reperfusion using strain-gauge, venous occlusion plethysmography. Helium preconditioning was applied by inhalation of helium (79% helium, 21% oxygen) either 15 min (helium early preconditioning [He-EPC]) or 24 h before I/R (helium late preconditioning). Additional measurements of He-EPC were done after blockade of endothelial nitric oxide synthase. Plasma levels of cytokines, adhesion molecules, and cell-derived microparticles were determined. Forearm I/R attenuated endothelium-dependent vasodilation (acetylcholine) with unaltered endothelium-independent response (sodium nitroprusside). Both He-EPC and helium late preconditioning attenuated I/R-induced endothelial dysfunction (max increase in forearm blood flow in response to acetylcholine after I/R was 180 ± 24% [mean ± SEM] without preconditioning, 573 ± 140% after He-EPC, and 290 ± 32% after helium late preconditioning). Protection of helium was comparable to ischemic preconditioning (max forearm blood flow 436 ± 38%) and was not abolished after endothelial nitric oxide synthase blockade. He-EPC did not affect plasma levels of cytokines, adhesion molecules, or microparticles. Helium is a nonanesthetic, nontoxic gas without hemodynamic side effects, which induces early and late preconditioning of human endothelium in vivo. Further studies have to investigate whether helium may be an instrument to induce endothelial preconditioning in patients with cardiovascular risk factors.
Preconditioning, postconditioning and their application to clinical cardiology.
Kloner, Robert A; Rezkalla, Shereif H
2006-05-01
Ischemic preconditioning is a well-established phenomenon first described in experimental preparations in which brief episodes of ischemia/reperfusion applied prior to a longer coronary artery occlusion reduce myocardial infarct size. There are ample correlates of ischemic preconditioning in the clinical realm. Preconditioning mimetic agents that stimulate the biochemical pathways of ischemic preconditioning and protect the heart without inducing ischemia have been examined in numerous experimental studies. However, despite the effectiveness of ischemic preconditioning and preconditioning mimetics for protecting ischemic myocardium, there are no preconditioning-based therapies that are routinely used in clinical medicine at the current time. Part of the problem is the need to administer therapy prior to the known ischemic event. Other issues are that percutaneous coronary intervention technology has advanced so far (with the development of stents and drug-eluting stents) that ischemic preconditioning or preconditioning mimetics have not been needed in most interventional cases. Recent clinical trials such as AMISTAD I and II (Acute Myocardial Infarction STudy of ADenosine) suggest that some preconditioning mimetics may reduce myocardial infarct size when given along with reperfusion or, as in the IONA trial, have benefit on clinical events when administered chronically in patients with known coronary artery disease. It is possible that some of the benefit described for adenosine in the AMISTAD 1 and 2 trials represents a manifestation of the recently described postconditioning phenomenon. It is probable that postconditioning--in which reperfusion is interrupted with brief coronary occlusions and reperfusion sequences--is more likely than preconditioning to be feasible as a clinical application to patients undergoing percutaneous coronary intervention for acute myocardial infarction.
Towards bulk based preconditioning for quantum dot computations
Dongarra, Jack; Langou, Julien; Tomov, Stanimire; Channing, Andrew; Marques, Osni; Vomel, Christof; Wang, Lin-Wang
2006-05-25
This article describes how to accelerate the convergence of Preconditioned Conjugate Gradient (PCG) type eigensolvers for the computation of several states around the band gap of colloidal quantum dots. Our new approach uses the Hamiltonian of the bulk material constituent of the quantum dot to design an efficient preconditioner for the folded spectrum PCG method. The technique shows promising results when applied to CdSe quantum dot model problems: we observe a decrease in the number of iteration steps by at least a factor of 4 compared to the previously used diagonal preconditioner.
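The folded spectrum transformation that this preconditioner targets can be illustrated in a few lines. This is a sketch of the idea only: a dense `eigh` stands in for the paper's preconditioned CG eigensolver, and the function name and test matrix are illustrative:

```python
import numpy as np

def folded_spectrum_state(H, e_ref):
    """Folded spectrum idea: the eigenpair of H closest to the reference
    energy e_ref is the *lowest* eigenpair of (H - e_ref I)^2, so an interior
    state (e.g. near the band gap) becomes reachable by a minimization method.
    Dense eigh stands in here for the folded spectrum PCG solver."""
    A = H - e_ref * np.eye(len(H))
    w, V = np.linalg.eigh(A @ A)
    return V[:, 0]          # eigenvector for the smallest folded eigenvalue

# Synthetic Hamiltonian with known spectrum {0, 1, 2, 5, 9}.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))
H = Q @ np.diag([0.0, 1.0, 2.0, 5.0, 9.0]) @ Q.T
v = folded_spectrum_state(H, 2.1)      # should recover the eigenvector for 2
```

Squaring the shifted operator squares its condition number, which is exactly why an effective preconditioner (here built from the bulk Hamiltonian) matters in practice.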
H(curl) Auxiliary Mesh Preconditioning
Kolev, T V; Pasciak, J E; Vassilevski, P S
2006-08-31
This paper analyzes a two-level preconditioning scheme for H(curl) bilinear forms. The scheme utilizes an auxiliary problem on a related mesh that is more amenable for constructing optimal order multigrid methods. More specifically, we analyze the case when the auxiliary mesh only approximately covers the original domain. The latter assumption is important since it allows for easy construction of nested multilevel spaces on regular auxiliary meshes. Numerical experiments in both two and three space dimensions illustrate the optimal performance of the method.
Domain-decomposed preconditionings for transport operators
NASA Technical Reports Server (NTRS)
Chan, Tony F.; Gropp, William D.; Keyes, David E.
1991-01-01
The performance of five different interface preconditionings for domain-decomposed convection-diffusion problems, including a novel one known as the spectral probe, was tested while varying mesh parameters, Reynolds number, ratio of subdomain diffusion coefficients, and domain aspect ratio. The preconditioners are representative of the range of practically computable possibilities that have appeared in the domain decomposition literature for the treatment of nonoverlapping subdomains. It is shown through a large number of numerical examples that no single preconditioner can be considered uniformly superior or uniformly inferior to the rest; rather, knowledge of particulars, including the shape and strength of the convection, is important in selecting among them in a given problem.
Parallel preconditioning techniques for sparse CG solvers
Basermann, A.; Reichel, B.; Schelthoff, C.
1996-12-31
Conjugate gradient (CG) methods for solving sparse systems of linear equations play an important role in numerical methods for discretized partial differential equations. The large size and poor conditioning of many technical or physical applications in this area create the need for efficient parallelization and preconditioning techniques for the CG method. In particular, for very ill-conditioned matrices, sophisticated preconditioners are necessary to obtain both acceptable convergence and accuracy from CG. Here, we investigate variants of polynomial and incomplete Cholesky preconditioners that markedly reduce the iteration counts of simple diagonally scaled CG and are shown to be well suited for massively parallel machines.
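The baseline the abstract refers to, diagonally scaled (Jacobi-preconditioned) CG, can be sketched compactly. This is a generic textbook implementation under my own naming, not code from the paper; the polynomial and incomplete Cholesky variants it studies would replace the diagonal scaling step:

```python
import numpy as np

def diag_scaled_cg(A, b, tol=1e-10, maxiter=500):
    """Conjugate gradients with diagonal (Jacobi) scaling -- the simple
    baseline preconditioner that more sophisticated variants aim to beat."""
    d_inv = 1.0 / np.diag(A)
    x = np.zeros_like(b)
    r = b - A @ x
    z = d_inv * r               # preconditioning step: z = M^{-1} r
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = d_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Ill-scaled SPD test problem: 1D Laplacian plus a varying diagonal.
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1) + np.diag(np.linspace(0.0, 5.0, n))
b = np.ones(n)
x = diag_scaled_cg(A, b)
print(np.linalg.norm(A @ x - b))
```

Only the line computing `z` changes when a stronger preconditioner is substituted, which is why preconditioners parallelize (or fail to) independently of the CG skeleton.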
Extremely Intense Magnetospheric Substorms: External Triggering? Preconditioning?
NASA Astrophysics Data System (ADS)
Tsurutani, Bruce; Echer, Ezequiel; Hajra, Rajkumar
2016-07-01
We study particularly intense substorms using a variety of near-Earth spacecraft data and ground observations. We relate the solar cycle dependences of the events, determine whether these supersubstorms are externally or internally triggered, and examine their relationship to other factors such as magnetospheric preconditioning. If time permits, we will explore the details of the events and whether they are similar to regular (Akasofu, 1964) substorms. These intense substorms are an important feature of space weather, since they may be responsible for power outages.
Fast permutation preconditioning for fractional diffusion equations.
Wang, Sheng-Feng; Huang, Ting-Zhu; Gu, Xian-Ming; Luo, Wei-Hua
2016-01-01
In this paper, an implicit finite difference scheme with the shifted Grünwald formula, which is unconditionally stable, is used to discretize fractional diffusion equations with constant diffusion coefficients. The coefficient matrix possesses Toeplitz structure, so the fast Toeplitz matrix-vector product can be utilized to reduce the computational complexity from O(N^2) to O(N log N), where N is the number of grid points. Two preconditioned iterative methods, the bi-conjugate gradient method for Toeplitz matrices and the bi-conjugate residual method for Toeplitz matrices, are proposed to solve the relevant discretized systems. Finally, numerical experiments are reported to show the effectiveness of our preconditioners.
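The fast Toeplitz matrix-vector product underlying these solvers is a standard circulant-embedding trick, sketched below. The function name is mine and the check uses a random Toeplitz matrix, not the fractional-diffusion matrices of the paper:

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    """Fast Toeplitz matrix-vector product: embed the N x N Toeplitz matrix
    (first column c, first row r, with r[0] == c[0]) into a 2N x 2N circulant
    matrix, which the FFT diagonalizes, giving O(N log N) cost."""
    n = len(c)
    v = np.concatenate([c, [0.0], r[:0:-1]])  # first column of the circulant
    xp = np.concatenate([x, np.zeros(n)])     # zero-padded input
    y = np.fft.ifft(np.fft.fft(v) * np.fft.fft(xp)).real
    return y[:n]                              # top block is the Toeplitz product

# Check against the explicitly formed Toeplitz matrix.
rng = np.random.default_rng(7)
n = 64
c = rng.standard_normal(n); r = rng.standard_normal(n); r[0] = c[0]
x = rng.standard_normal(n)
T = np.array([[c[i - j] if i >= j else r[j - i] for j in range(n)] for i in range(n)])
print(np.allclose(toeplitz_matvec(c, r, x), T @ x))
```

Every matrix-vector product inside the Krylov iteration costs two FFTs and one inverse FFT, which is the source of the O(N log N) claim.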
Review of Preconditioning Methods for Fluid Dynamics
1992-09-01
…applies equally to all preconditioners, e.g., that of Van Leer et al., which will now be presented. The Van Leer, Lee, Roe preconditioning [55] … adds an artificial viscosity. Accuracy is improved for low Mach number flows if the preconditioner is applied only to the physical convective and …
Ren, C; Gao, X; Steinberg, G K; Zhao, H
2008-02-19
Remote ischemic preconditioning is an emerging concept for stroke treatment, but its protection against focal stroke has not been established. We tested whether remote preconditioning, performed in the ipsilateral hind limb, protects against focal stroke and explored its protective parameters. Stroke was generated by a permanent occlusion of the left distal middle cerebral artery (MCA) combined with a 30 min occlusion of the bilateral common carotid arteries (CCA) in male rats. Limb preconditioning was generated by 5 or 15 min occlusion followed with the same period of reperfusion of the left hind femoral artery, and repeated for two or three cycles. Infarct was measured 2 days later. The results showed that rapid preconditioning with three cycles of 15 min performed immediately before stroke reduced infarct size from 47.7+/-7.6% of control ischemia to 9.8+/-8.6%; at two cycles of 15 min, infarct was reduced to 24.7+/-7.3%; at two cycles of 5 min, infarct was not reduced. Delayed preconditioning with three cycles of 15 min conducted 2 days before stroke also reduced infarct to 23.0+/-10.9%, but with two cycles of 15 min it offered no protection. The protective effects at these two therapeutic time windows of remote preconditioning are consistent with those of conventional preconditioning, in which the preconditioning ischemia is induced in the brain itself. Unexpectedly, intermediate preconditioning with three cycles of 15 min performed 12 h before stroke also reduced infarct to 24.7+/-4.7%, which contradicts the current dogma for therapeutic time windows for the conventional preconditioning that has no protection at this time point. In conclusion, remote preconditioning performed in one limb protected against ischemic damage after focal cerebral ischemia.
Parallel Preconditioning for CFD Problems on the CM-5
NASA Technical Reports Server (NTRS)
Simon, Horst D.; Kremenetsky, Mark D.; Richardson, John; Lasinski, T. A. (Technical Monitor)
1994-01-01
To date, preconditioning methods on massively parallel systems have faced a major difficulty. The preconditioning methods that are most successful at accelerating the convergence of the iterative solver, such as incomplete LU factorizations, are notoriously difficult to implement on parallel machines for two reasons: (1) the actual computation of the preconditioner is not very floating-point intensive but requires a large amount of unstructured communication, and (2) the application of the preconditioning matrix in the iteration phase (i.e., triangular solves) is difficult to parallelize because of the recursive nature of the computation. Here we present a new approach to preconditioning for very large, sparse, unsymmetric linear systems which avoids both difficulties. We explicitly compute an approximate inverse of our original matrix. This new preconditioning matrix can be applied most efficiently in iterative methods on massively parallel machines, since the preconditioning phase involves only a matrix-vector multiplication, possibly with a dense matrix. Furthermore, the actual computation of the preconditioning matrix has natural parallelism: for a problem of size n, the preconditioning matrix can be computed by solving n independent small least squares problems. The algorithm and its implementation on the Connection Machine CM-5 are discussed in detail and supported by extensive timings obtained from real problem data.
A preconditioned formulation of the Cauchy-Riemann equations
NASA Technical Reports Server (NTRS)
Phillips, T. N.
1983-01-01
A preconditioning of the Cauchy-Riemann equations which results in a second-order system is described. This system is shown to have a unique solution if the boundary conditions are chosen carefully. This choice of boundary condition enables the solution of the first-order system to be retrieved. A numerical solution of the preconditioned equations is obtained by the multigrid method.
40 CFR 86.532-78 - Vehicle preconditioning.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 18 2011-07-01 2011-07-01 false Vehicle preconditioning. 86.532-78... (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES Emission Regulations for 1978 and Later New Motorcycles; Test Procedures § 86.532-78 Vehicle preconditioning. (a) The...
40 CFR 86.532-78 - Vehicle preconditioning.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 19 2013-07-01 2013-07-01 false Vehicle preconditioning. 86.532-78... (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES Emission Regulations for 1978 and Later New Motorcycles; Test Procedures § 86.532-78 Vehicle preconditioning. (a) The...
40 CFR 86.532-78 - Vehicle preconditioning.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 19 2012-07-01 2012-07-01 false Vehicle preconditioning. 86.532-78... (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES Emission Regulations for 1978 and Later New Motorcycles; Test Procedures § 86.532-78 Vehicle preconditioning. (a) The...
40 CFR 86.532-78 - Vehicle preconditioning.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 19 2014-07-01 2014-07-01 false Vehicle preconditioning. 86.532-78... (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES Emission Regulations for 1978 and Later New Motorcycles; Test Procedures § 86.532-78 Vehicle preconditioning. (a) The...
40 CFR 86.532-78 - Vehicle preconditioning.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 18 2010-07-01 2010-07-01 false Vehicle preconditioning. 86.532-78... (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES Emission Regulations for 1978 and Later New Motorcycles; Test Procedures § 86.532-78 Vehicle preconditioning. (a) The...
First Applications of the New Parallel Krylov Solver for MODFLOW on a National and Global Scale
NASA Astrophysics Data System (ADS)
Verkaik, J.; Hughes, J. D.; Sutanudjaja, E.; van Walsum, P.
2016-12-01
Integrated high-resolution hydrologic models are increasingly being used for evaluating water management measures at field scale. Their drawbacks are large memory requirements and long run times. Examples of such models are The Netherlands Hydrological Instrument (NHI) model and the PCRaster Global Water Balance (PCR-GLOBWB) model. Typical simulation periods are 30-100 years with daily timesteps. The NHI model predicts water demands in periods of drought, supporting operational and long-term water-supply decisions. The NHI is a state-of-the-art coupling of several models: a 7-layer MODFLOW groundwater model (~6.5M 250 m cells), a MetaSWAP model for the unsaturated zone (a Richards emulator of ~0.5M cells), and a surface water model (MOZART-DM). The PCR-GLOBWB model provides a grid-based representation of global terrestrial hydrology, and this work uses the version that includes a 2-layer MODFLOW groundwater model (~4.5M 10 km cells). The Parallel Krylov Solver (PKS) speeds up computation by both distributed memory parallelization (Message Passing Interface) and shared memory parallelization (Open Multi-Processing). PKS includes conjugate gradient, bi-conjugate gradient stabilized, and generalized minimal residual linear accelerators that use an overlapping additive Schwarz domain decomposition preconditioner. PKS can be used for both structured and unstructured grids and has been fully integrated in MODFLOW-USG using METIS partitioning and in iMODFLOW using RCB partitioning. iMODFLOW is an accelerated version of MODFLOW-2005 that is implicitly and online coupled to MetaSWAP. Benchmarks carried out on the Cartesius Dutch supercomputer (https://userinfo.surfsara.nl/systems/cartesius) for the PCR-GLOBWB model and on a 2x16 core Windows machine for the NHI model show speedups of up to 10-20 and 5-10, respectively.
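The overlapping additive Schwarz preconditioner at the heart of PKS can be sketched serially. This is a one-level dense illustration under my own naming, assuming the simplest restriction operators; PKS distributes the independent local solves across MPI ranks and OpenMP threads:

```python
import numpy as np

def additive_schwarz(A, r, blocks):
    """One-level overlapping additive Schwarz: z = sum_k R_k^T A_k^{-1} R_k r,
    where A_k is the restriction of A to the k-th (overlapping) index block.
    The local solves are mutually independent, hence trivially parallel."""
    z = np.zeros_like(r)
    for idx in blocks:
        z[idx] += np.linalg.solve(A[np.ix_(idx, idx)], r[idx])
    return z

def overlapping_blocks(n, nblocks, overlap):
    """Partition 0..n-1 into nblocks contiguous blocks extended by `overlap`."""
    size = n // nblocks
    return [np.arange(max(0, k * size - overlap),
                      n if k == nblocks - 1 else min(n, (k + 1) * size + overlap))
            for k in range(nblocks)]

# Sanity check on a 1D Laplacian: with one block covering everything, the
# preconditioner reduces to the exact solve.
n = 40
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
r = np.ones(n)
z = additive_schwarz(A, r, [np.arange(n)])
print(np.allclose(z, np.linalg.solve(A, r)))
```

Because the sum of symmetric local contributions is itself symmetric positive definite for SPD A, this preconditioner can be used inside the CG, BiCGSTAB, or GMRES accelerators mentioned in the abstract.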
Application of high-order numerical schemes and Newton-Krylov method to two-phase drift-flux model
Zou, Ling; Zhao, Haihua; Zhang, Hongbin
2017-08-07
This study concerns the application and solver robustness of the Newton-Krylov method in solving two-phase flow drift-flux model problems using high-order numerical schemes. In our previous studies, the Newton-Krylov method was proven a promising solver for two-phase flow drift-flux model problems. However, those studies were limited to first-order numerical schemes. Moreover, the previous approach to treating the drift-flux closure correlations was later revealed to cause deteriorated solver convergence when the mesh was highly refined and when higher-order numerical schemes were employed. In this study, a second-order spatial discretization scheme that had been tested with the two-fluid two-phase flow model was extended to solve drift-flux model problems. To improve solver robustness, and therefore efficiency, a new approach was proposed that treats the mean drift velocity of the gas phase as a primary nonlinear variable of the equation system. With this new approach, significant improvement in solver robustness was achieved. With highly refined meshes, the proposed treatment along with the Newton-Krylov solver was extensively tested on two-phase flow problems covering a wide range of thermal-hydraulics conditions. Satisfactory convergence performance was observed for all test cases. Numerical verification was then performed in the form of mesh convergence studies, from which the expected orders of accuracy were obtained for both the first-order and second-order spatial discretization schemes. Finally, the drift-flux model, along with the numerical methods presented, was validated with three sets of flow boiling experiments covering different flow channel geometries (round tube, rectangular tube, and rod bundle) and a wide range of test conditions (pressure, mass flux, wall heat flux, inlet subcooling and outlet void fraction).
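The Newton-Krylov approach itself, Newton's method with each linear step solved by a Jacobian-free Krylov iteration, is readily demonstrated on a toy problem. Here SciPy's `newton_krylov` stands in for the paper's solver, and the residual is a simple nonlinear reaction-diffusion model of my own choosing, not the drift-flux system:

```python
import numpy as np
from scipy.optimize import newton_krylov

def residual(u):
    """Residual of the model problem -u'' + u^3 = 1 on (0,1), u(0)=u(1)=0,
    discretized by central differences -- a stand-in for the drift-flux
    equation system; the solver never forms the Jacobian explicitly."""
    n = u.size
    h = 1.0 / (n + 1)
    upad = np.concatenate([[0.0], u, [0.0]])          # Dirichlet boundaries
    d2 = (upad[2:] - 2.0 * upad[1:-1] + upad[:-2]) / h**2
    return -d2 + u**3 - 1.0

u0 = np.zeros(63)                     # initial guess
u = newton_krylov(residual, u0, f_tol=1e-9)
print(np.max(np.abs(residual(u))))
```

Treating an additional quantity (here it would be the drift velocity) as a primary nonlinear variable simply means enlarging `u` and `residual` so the Krylov solver sees the coupling directly.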
The multigrid preconditioned conjugate gradient method
NASA Technical Reports Server (NTRS)
Tatebe, Osamu
1993-01-01
A multigrid preconditioned conjugate gradient method (MGCG method), which uses the multigrid method as a preconditioner for the PCG method, is proposed. The multigrid method has inherent high parallelism and improves the convergence of long-wavelength components, which is important in iterative methods. By using this method as a preconditioner for the PCG method, an efficient method with high parallelism and fast convergence is obtained. First, a necessary condition on the multigrid method is considered for it to satisfy the requirements of a PCG preconditioner. Numerical experiments then show the behavior of the MGCG method and that it is superior to both the ICCG method and the multigrid method in terms of fast convergence and high parallelism. The fast convergence is explained by an eigenvalue analysis of the preconditioned matrix. From this analysis of the multigrid preconditioner, it is seen that the MGCG method converges in very few iterations and that the multigrid preconditioner is a desirable preconditioner for the conjugate gradient method.
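A minimal version of the multigrid preconditioner is a single symmetric two-grid cycle. The sketch below is my own illustration, assuming 1D linear interpolation and a Galerkin coarse operator; symmetry of the pre/post smoothing is what makes the cycle admissible as a PCG preconditioner, which is the necessary condition the abstract alludes to:

```python
import numpy as np

def two_grid_cycle(A, r, P, omega=2.0 / 3.0, nu=2):
    """One symmetric two-grid cycle (damped-Jacobi smoothing + exact coarse
    solve), usable as the application of M^{-1} r inside PCG.
    P is the prolongation (coarse-to-fine interpolation) matrix."""
    D = np.diag(A)
    z = np.zeros_like(r)
    for _ in range(nu):                               # pre-smoothing
        z += omega * (r - A @ z) / D
    Ac = P.T @ A @ P                                  # Galerkin coarse operator
    z += P @ np.linalg.solve(Ac, P.T @ (r - A @ z))   # coarse-grid correction
    for _ in range(nu):                               # post-smoothing
        z += omega * (r - A @ z) / D
    return z

# 1D Laplacian with linear interpolation; two-grid alone already converges fast.
n, nc = 31, 15
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
P = np.zeros((n, nc))
for i in range(nc):
    P[2 * i, i], P[2 * i + 1, i], P[2 * i + 2, i] = 0.5, 1.0, 0.5
b = np.ones(n)
x = np.zeros(n)
for _ in range(10):                                   # stationary iteration
    x += two_grid_cycle(A, b - A @ x, P)
print(np.linalg.norm(b - A @ x) / np.linalg.norm(b))
```

In MGCG this cycle would replace the preconditioning step of PCG; the smoother damps short-wavelength error while the coarse solve handles the long-wavelength components mentioned in the abstract.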
Inhalational Anesthetics as Preconditioning Agents in Ischemic Brain
Wang, Lan; Traystman, Richard J.; Murphy, Stephanie J.
2008-01-01
While many pharmacological agents have been shown to protect the brain from cerebral ischemia in animal models, none have translated successfully to human patients. One potential clinical neuroprotective strategy in humans may involve increasing the brain's tolerance to ischemia by pre-ischemic conditioning (preconditioning). There are many methods to induce tolerance via preconditioning, such as ischemia itself, pharmacological agents, hypoxia, endotoxin, and others. Inhalational anesthetic agents have also been shown to produce brain preconditioning. The mechanisms responsible for brain preconditioning are many, complex, and unclear, and may involve Akt activation, ATP-sensitive potassium channels, and nitric oxide, amongst many others. Anesthetics, however, may play an important and unique role as preconditioning agents, particularly during the perioperative period. PMID:17962069
Universal quantum computation in waveguide QED using decoherence free subspaces
NASA Astrophysics Data System (ADS)
Paulisch, V.; Kimble, H. J.; González-Tudela, A.
2016-04-01
The interaction of quantum emitters with one-dimensional photon-like reservoirs induces strong and long-range dissipative couplings that give rise to the emergence of the so-called decoherence free subspaces (DFSs) which are decoupled from dissipation. When introducing weak perturbations on the emitters, e.g., driving, the strong collective dissipation enforces an effective coherent evolution within the DFS. In this work, we show explicitly how by introducing single-site resolved drivings, we can use the effective dynamics within the DFS to design a universal set of one and two-qubit gates within the DFS of an ensemble of two-level atom-like systems. Using Liouvillian perturbation theory we calculate the scaling with the relevant figures of merit of the systems, such as the Purcell factor and imperfect control of the drivings. Finally, we compare our results with previous proposals using atomic Λ systems in leaky cavities.
Holonomic Quantum Computation by Time dependent Decoherence Free Subspaces
NASA Astrophysics Data System (ADS)
Lin, J. N.; Liang, Y.; Yang, H. D.; Gui, J.; Wu, S. L.
2017-04-01
We show how to realize nonadiabatic holonomic quantum computation in time-dependent decoherence-free subspaces (TDFSs). In our scheme, the holonomy is generated not by computational bases in DFSs but by time-dependent bases of TDFSs. Therefore, unlike in traditional DFSs, ancillary systems are not necessary for inducing holonomy, which saves qubits in the holonomic quantum computation. We also analyze the symmetry of an N-qubit system coupled to a common squeezed field. The results show that several independent DFSs are present in the Hilbert space, determined by the eigenvalues of the Lindblad operators. Combining the scheme and the model proposed in this paper, we show that the one-qubit controllable phase gate can be realized with only two physical qubits.
Phenotype Recognition with Combined Features and Random Subspace Classifier Ensemble
2011-01-01
Background Automated, image-based high-content screening is a fundamental tool for discovery in biological science. Modern robotic fluorescence microscopes are able to capture thousands of images from massively parallel experiments such as RNA interference (RNAi) or small-molecule screens. As such, efficient computational methods are required for automatic cellular phenotype identification capable of dealing with large image data sets. In this paper we investigated an efficient method for the extraction of quantitative features from images by combining second-order statistics, or Haralick features, with the curvelet transform. A random subspace based classifier ensemble with the multilayer perceptron (MLP) as the base classifier was then exploited for classification. Haralick features estimate image properties related to second-order statistics based on the grey-level co-occurrence matrix (GLCM), which has been extensively used for various image processing applications. The curvelet transform has a sparser representation of the image than the wavelet transform, thus offering a description with higher time-frequency resolution and a high degree of directionality and anisotropy, which is particularly appropriate for images rich in edges and curves. A combined feature description from Haralick features and the curvelet transform can further increase the accuracy of classification by exploiting their complementary information. We then investigate the applicability of the random subspace (RS) ensemble method for phenotype classification based on microscopy images. A base classifier is trained with an RS-sampled subset of the original feature set, and the ensemble assigns a class label by majority voting. Results Experimental results on phenotype recognition from three benchmark image sets, including HeLa, CHO and RNAi, show the effectiveness of the proposed approach. The combined feature is better than any individual one in classification accuracy. The ensemble model produces
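The random subspace ensemble scheme described above, each base classifier trained on a random feature subset, labels decided by majority vote, can be sketched compactly. This is my own toy illustration: a nearest-centroid classifier stands in for the paper's MLP base learner, and the data are synthetic:

```python
import numpy as np

def rs_ensemble_predict(X_tr, y_tr, X_te, n_models=15, frac=0.5, seed=0):
    """Random subspace ensemble: each base model sees a random subset of the
    features (the 'random subspace'); the ensemble assigns each test sample
    the class receiving the majority of base-model votes."""
    rng = np.random.default_rng(seed)
    d = X_tr.shape[1]
    k = max(1, int(frac * d))
    classes = np.unique(y_tr)
    votes = np.zeros((len(X_te), len(classes)), dtype=int)
    for _ in range(n_models):
        feats = rng.choice(d, size=k, replace=False)        # random subspace
        centroids = np.array([X_tr[y_tr == c][:, feats].mean(axis=0)
                              for c in classes])
        d2 = ((X_te[:, feats][:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        votes[np.arange(len(X_te)), d2.argmin(axis=1)] += 1  # one vote per model
    return classes[votes.argmax(axis=1)]

# Two well-separated classes in 10 dimensions.
rng = np.random.default_rng(42)
X = np.vstack([rng.standard_normal((50, 10)),
               rng.standard_normal((50, 10)) + 4.0])
y = np.repeat([0, 1], 50)
pred = rs_ensemble_predict(X, y, X, n_models=11)
print((pred == y).mean())
```

Training each base model on a feature subset decorrelates their errors, which is why the majority vote tends to beat any single classifier on the full feature set.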
Inverse transport calculations in optical imaging with subspace optimization algorithms
Ding, Tian; Ren, Kui
2014-09-15
Inverse boundary value problems for the radiative transport equation play an important role in optics-based medical imaging techniques such as diffuse optical tomography (DOT) and fluorescence optical tomography (FOT). Despite the rapid progress in the mathematical theory and numerical computation of these inverse problems in recent years, developing robust and efficient reconstruction algorithms remains a challenging task and an active research topic. We propose here a robust reconstruction method that is based on subspace minimization techniques. The method splits the unknown transport solution (or a functional of it) into low-frequency and high-frequency components, and uses singular value decomposition to analytically recover part of low-frequency information. Minimization is then applied to recover part of the high-frequency components of the unknowns. We present some numerical simulations with synthetic data to demonstrate the performance of the proposed algorithm.
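The analytic low-frequency recovery step, using the SVD of the forward operator to read part of the unknown directly off the data, can be sketched as follows. This is a generic linearized illustration under my own naming, not the paper's transport-equation implementation:

```python
import numpy as np

def recover_low_frequency(A, y, k):
    """Analytic recovery of the 'low-frequency' part of the unknown: its
    component in the span of the k leading right singular vectors of the
    forward operator A, read directly from the data y = A f."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    coeff = (U[:, :k].T @ y) / s[:k]      # one division per singular direction
    return Vt[:k].T @ coeff

# Noise-free check: the recovered vector reproduces exactly the projection of
# the true unknown onto the leading singular subspace; the remaining
# (high-frequency) components are what the minimization step must supply.
rng = np.random.default_rng(3)
A = rng.standard_normal((20, 30))
f_true = rng.standard_normal(30)
f_low = recover_low_frequency(A, A @ f_true, k=5)
U, s, Vt = np.linalg.svd(A, full_matrices=False)
print(np.allclose(Vt[:5] @ f_low, Vt[:5] @ f_true))
```

Splitting the unknown this way shrinks the search space of the subsequent minimization to the poorly determined directions only, which is the robustness argument made in the abstract.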
A decoherence-free subspace in a charge quadrupole qubit
NASA Astrophysics Data System (ADS)
Friesen, Mark; Ghosh, Joydip; Eriksson, M. A.; Coppersmith, S. N.
2017-06-01
Quantum computing promises significant speed-up for certain types of computational problems. However, robust implementations of semiconducting qubits must overcome the effects of charge noise that currently limit coherence during gate operations. Here we describe a scheme for protecting solid-state qubits from uniform electric field fluctuations by generalizing the concept of a decoherence-free subspace for spins, and we propose a specific physical implementation: a quadrupole charge qubit formed in a triple quantum dot. The unique design of the quadrupole qubit enables a particularly simple pulse sequence for suppressing the effects of noise during gate operations. Simulations yield gate fidelities 10-1,000 times better than traditional charge qubits, depending on the magnitude of the environmental noise. Our results suggest that any qubit scheme employing Coulomb interactions (for example, encoded spin qubits or two-qubit gates) could benefit from such a quadrupolar design.
Spatial Bell-State Generation without Transverse Mode Subspace Postselection
NASA Astrophysics Data System (ADS)
Kovlakov, E. V.; Bobrov, I. B.; Straupe, S. S.; Kulik, S. P.
2017-01-01
Spatial states of single photons and spatially entangled photon pairs are becoming an important resource in quantum communication. This additional degree of freedom provides an almost unlimited information capacity, making the development of high-quality sources of spatial entanglement a well-motivated research direction. We report an experimental method for generation of photon pairs in a maximally entangled spatial state. In contrast to existing techniques, the method does not require postselection of a particular subspace of spatial modes and allows one to use the full photon flux from the nonlinear crystal, providing a tool for creating high-brightness sources of pure spatially entangled photons. Such sources are a prerequisite for emerging applications in free-space quantum communication.
Inverse transport calculations in optical imaging with subspace optimization algorithms
NASA Astrophysics Data System (ADS)
Ding, Tian; Ren, Kui
2014-09-01
Inverse boundary value problems for the radiative transport equation play an important role in optics-based medical imaging techniques such as diffuse optical tomography (DOT) and fluorescence optical tomography (FOT). Despite the rapid progress in the mathematical theory and numerical computation of these inverse problems in recent years, developing robust and efficient reconstruction algorithms remains a challenging task and an active research topic. We propose here a robust reconstruction method that is based on subspace minimization techniques. The method splits the unknown transport solution (or a functional of it) into low-frequency and high-frequency components, and uses singular value decomposition to analytically recover part of low-frequency information. Minimization is then applied to recover part of the high-frequency components of the unknowns. We present some numerical simulations with synthetic data to demonstrate the performance of the proposed algorithm.
Multiresolution subspace-based optimization method for inverse scattering problems.
Oliveri, Giacomo; Zhong, Yu; Chen, Xudong; Massa, Andrea
2011-10-01
This paper investigates an approach to inverse scattering problems based on the integration of the subspace-based optimization method (SOM) within a multifocusing scheme in the framework of the contrast source formulation. The scattering equations are solved by a nested three-step procedure composed of (a) an outer multiresolution loop dealing with the identification of the regions of interest within the investigation domain through an iterative information-acquisition process, (b) a spectrum analysis step devoted to the reconstruction of the deterministic components of the contrast sources, and (c) an inner optimization loop aimed at retrieving the ambiguous components of the contrast sources through a conjugate gradient minimization of a suitable objective function. A set of representative reconstruction results is discussed to provide numerical evidence of the effectiveness of the proposed algorithmic approach as well as to assess the features and potentialities of the multifocusing integration in comparison with the state-of-the-art SOM implementation.
Thermal preconditioning of mountain permafrost towards instability
NASA Astrophysics Data System (ADS)
Hauck, Christian; Etzelmüller, Bernd; Hilbich, Christin; Isaksen, Ketil; Mollaret, Coline; Pellet, Cécile; Westermann, Sebastian
2017-04-01
Warming permafrost has been detected worldwide in recent years and is projected to continue during the next century as shown in many modelling studies from the polar and mountain regions. In mountain regions, this can lead to potentially hazardous impacts on short time-scales by an increased tendency for slope instabilities. However, the time scale of permafrost thaw and the role of the ice content for determining the strength and rate of permafrost warming and degradation (= development of talik) are still unclear, especially in highly heterogeneous terrain. Observations of permafrost temperatures near the freezing point show complex inter-annual responses to climate forcing due to latent heat effects during thawing and the influence of the snow-cover, which is itself formed and modulated by highly non-linear processes. These effects are complicated by 3-dimensional hydrological processes and interactions between snow melt, infiltration and drainage which may also play an important role in the triggering of mass movements in steep permafrost slopes. In this contribution we demonstrate for the first time a preconditioning effect within near-surface layers in mountain permafrost that causes non-linear degradation and accelerates permafrost thaw. We hypothesise that an extreme regional or global temperature anomaly, such as the Central European summers 2003 and 2015 or the Northern European summers 2006 and 2014, will enhance permafrost degradation if the active layer and the top of the permafrost layer are already preconditioned, i.e. have reduced latent heat content. This preconditioning can already be effectuated by a singular warm year, leading to exceptionally strong melting of the ground ice in the near-surface layers. On sloping terrain and in a context of quasi-continuous atmospheric warming, this ice-loss can be considered as irreversible, as a large part of the melted water will drain/evaporate during the process, and the build-up of an equivalent amount of
Totzeck, Matthias; Hendgen-Cotta, Ulrike B.; French, Brent A.; Rassaf, Tienush
2016-01-01
Although urgently needed in clinical practice, a cardioprotective therapeutic approach against myocardial ischemia/reperfusion injury remains to be established. Remote ischemic preconditioning (rIPC) and ischemic preconditioning (IPC) represent promising tools comprising three entities: the generation of a protective signal, the transfer of the signal to the target organ, and the response to the transferred signal resulting in cardioprotection. However, in light of recent scientific advances, many controversies arise regarding the efficacy of the underlying signaling. Here we show methods for the generation of the signaling cascade by rIPC as well as IPC in a mouse model for in vivo myocardial ischemia/reperfusion injury using highly reproducible approaches. This is accomplished by taking advantage of easily applicable preconditioning strategies compatible with the clinical setting. We describe methods for using laser Doppler perfusion imaging to monitor the cessation and recovery of perfusion in real time. The effects of preconditioning on cardiac function can also be assessed using ultrasound or magnetic resonance imaging approaches. On a cellular level, we confirm how tissue injury can be monitored using histological assessment of infarct size in conjunction with immunohistochemistry to assess both aspects in a single specimen. Finally, we outline how the rIPC-associated signaling can be transferred to the target cell via conservation of the signal in the humoral (blood) compartment. This compilation of experimental protocols, including a conditioning regimen comparable to the clinical setting, should prove useful to both beginners and experts in the field of myocardial infarction, supplying information for the detailed procedures as well as troubleshooting guides. PMID:28066791
Maximum Likelihood Estimation for Multiple Camera Target Tracking on Grassmann Tangent Subspace.
Amini-Omam, Mojtaba; Torkamani-Azar, Farah; Ghorashi, Seyed Ali
2016-11-15
In this paper, we introduce a likelihood model for tracking the location of an object in multiple-view systems. Our proposed model transforms the conventional nonlinear Euclidean estimation model into an estimation model based on the manifold tangent subspace. We show that by decomposing the input noise into two parts and describing the model by the exponential map, real observations in Euclidean geometry can be transformed to the manifold tangent subspace. Moreover, using the obtained tangent-subspace likelihood function, we propose both iterative and noniterative maximum likelihood estimation approaches, whose good performance is demonstrated by numerical results.
Riemannian Optimization Method on Generalized Flag Manifolds for Complex and Subspace ICA
NASA Astrophysics Data System (ADS)
Nishimori, Yasunori; Akaho, Shotaro; Plumbley, Mark D.
2006-11-01
In this paper we introduce a new class of manifolds, generalized flag manifolds, for the complex and subspace ICA problems. A generalized flag manifold is a manifold consisting of subspaces which are orthogonal to each other. The class of generalized flag manifolds includes the class of Grassmann manifolds. We extend the Riemannian optimization method to include this new class of manifolds by deriving the formulas for the natural gradient and geodesics on these manifolds. We show how the complex and subspace ICA problems can be solved by optimization of cost functions on a generalized flag manifold. Computer simulations demonstrate that our algorithm gives good performance compared with the ordinary gradient descent method.
Hyperbaric oxygen preconditioning attenuates postoperative cognitive impairment in aged rats.
Sun, Li; Xie, Keliang; Zhang, Changsheng; Song, Rui; Zhang, Hong
2014-06-18
Cognitive decline after surgery in the elderly population is a major clinical problem with high morbidity. Hyperbaric oxygen (HBO) preconditioning can induce significant neuroprotection against acute neurological injury. We hypothesized that HBO preconditioning would prevent the development of postoperative cognitive impairment. Elderly male rats (20 months old) underwent stabilized tibial fracture operation under general anesthesia after HBO preconditioning (once a day for 5 days). Separate cohorts of animals were tested for cognitive function with fear conditioning and Y-maze tests, or euthanized at different times to assess the blood-brain barrier integrity, systemic and hippocampal proinflammatory cytokines, and caspase-3 activity. Animals exhibited significant cognitive impairment evidenced by a decreased percentage of freezing time and an increased number of learning trials on days 1, 3, and 7 after surgery, which were significantly prevented by HBO preconditioning. Furthermore, HBO preconditioning significantly ameliorated the increase in serum and hippocampal proinflammatory cytokines tumor necrosis factor-α, interleukin-1 β (IL-1β), IL-6, and high-mobility group protein 1 in surgery-challenged animals. Moreover, HBO preconditioning markedly improved blood-brain barrier integrity and caspase-3 activity in the hippocampus of surgery-challenged animals. These findings suggest that HBO preconditioning could significantly mitigate surgery-induced cognitive impairment, which is strongly associated with the reduction of systemic and hippocampal proinflammatory cytokines and caspase-3 activity.
Sarcosine preconditioning induces ischemic tolerance against global cerebral ischemia.
Pinto, M C X; Simão, F; da Costa, F L P; Rosa, D V; de Paiva, M J N; Resende, R R; Romano-Silva, M A; Gomez, M V; Gomez, R S
2014-06-20
Brain ischemic tolerance is an endogenous protective mechanism activated by a preconditioning stimulus that is closely related to N-methyl-d-aspartate receptor (NMDAR). Glycine transporter type 1 (GlyT-1) inhibitors potentiate NMDAR and suggest an alternative strategy for brain preconditioning. The aim of this work was to evaluate the effects of brain preconditioning induced by sarcosine, a GlyT-1 inhibitor, against global cerebral ischemia and its relation to NMDAR. Sarcosine was administered over 7 days (300 or 500 mg/kg/day, ip) before the induction of a global cerebral ischemia model in Wistar rats (male, 8-week-old). It was observed that sarcosine preconditioning reduced cell death in rat hippocampi submitted to cerebral ischemia. Hippocampal levels of glycine were decreased in sarcosine-treated animals, which was associated with a reduction of [(3)H] glycine uptake and a decrease in glycine transporter expression (GlyT-1 and GlyT-2). The expression of glycine receptors and the NR1 and NR2A subunits of NMDAR were not affected by sarcosine preconditioning. However, sarcosine preconditioning reduced the expression of the NR2B subunits of NMDAR. In conclusion, these data demonstrate that sarcosine preconditioning induces ischemic tolerance against global cerebral ischemia and this neuroprotective state is associated with changes in glycine transport and reduction of NR2B-containing NMDAR expression. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.
Remote ischemic preconditioning enhances fracture healing
Çatma, Mehmet Faruk; Şeşen, Hakan; Aydın, Aytekin; Ünlü, Serhan; Demirkale, İsmail; Altay, Murat
2015-01-01
Purpose We hypothesized that remote ischemic preconditioning (RIP) accelerates fracture healing. Methods Rats (n = 48) were used; the technique of ischemic preconditioning involved applying an intermittent pneumatic tourniquet to the fractured hind limb for 7 cycles of 5 min each (35 min in total). Results We observed greater callus maturity in the RIP group at the first week after fracture when compared to controls (p < 0.0001). Serum MDA levels were statistically lower in the RIP group at the first week after fracture; however, there were no significant differences at the 3rd and 5th weeks (p = 0.0001, p = 0.725, p = 0.271, respectively). Conclusions Greater callus maturity was obtained in the RIP group. PMID:26566314
Cellular and Molecular Neurobiology of Brain Preconditioning
Cadet, Jean Lud; Krasnova, Irina N.
2009-01-01
The tolerant brain which is a consequence of adaptation to repeated non-lethal insults is accompanied by the up-regulation of protective mechanisms and the down-regulation of pro-degenerative pathways. During the past 20 years, evidence has accumulated to suggest that protective mechanisms include increased production of chaperones, trophic factors, and other anti-apoptotic proteins. In contrast, preconditioning can cause substantial dampening of the organism’s metabolic state and decreased expression of pro-apoptotic proteins. Recent microarray analyses have also helped to document a role of several molecular pathways in the induction of the brain refractory state. The present review highlights some of these findings and suggests that a better understanding of these mechanisms will inform treatment of a number of neuropsychiatric disorders. PMID:19153843
Cellular and molecular neurobiology of brain preconditioning.
Cadet, Jean Lud; Krasnova, Irina N
2009-02-01
The tolerant brain which is a consequence of adaptation to repeated nonlethal insults is accompanied by the upregulation of protective mechanisms and the downregulation of prodegenerative pathways. During the past 20 years, evidence has accumulated to suggest that protective mechanisms include increased production of chaperones, trophic factors, and other antiapoptotic proteins. In contrast, preconditioning can cause substantial dampening of the organism's metabolic state and decreased expression of proapoptotic proteins. Recent microarray analyses have also helped to document a role of several molecular pathways in the induction of the brain refractory state. The present review highlights some of these findings and suggests that a better understanding of these mechanisms will inform treatment of a number of neuropsychiatric disorders.
A fast, preconditioned conjugate gradient Toeplitz solver
NASA Technical Reports Server (NTRS)
Pan, Victor; Schrieber, Robert
1989-01-01
A simple factorization is given of an arbitrary hermitian, positive definite matrix in which the factors are well-conditioned, hermitian, and positive definite. In fact, given knowledge of the extreme eigenvalues of the original matrix A, an optimal improvement can be achieved, making the condition numbers of each of the two factors equal to the square root of the condition number of A. This technique is then applied to the solution of hermitian, positive definite Toeplitz systems. Large linear systems with hermitian, positive definite Toeplitz matrices arise in some signal processing applications. A stable fast algorithm is given for solving these systems that is based on the preconditioned conjugate gradient method. The algorithm exploits Toeplitz structure to reduce the cost of an iteration to O(n log n) by applying the fast Fourier transform to compute matrix-vector products. The matrix factorization is used as a preconditioner.
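The iteration underlying this abstract is the standard preconditioned conjugate gradient (PCG) method. The paper's algorithm uses the factorization-based preconditioner and FFT Toeplitz matvecs for O(n log n) cost; the minimal, dependency-free sketch below instead uses a dense matvec and a simple Jacobi (diagonal) preconditioner on a small symmetric positive definite Toeplitz system, purely to illustrate the PCG loop. All names and the example matrix are illustrative, not from the paper.

```python
# Minimal preconditioned conjugate gradient (PCG) sketch on an SPD
# Toeplitz system. NOTE: the paper applies a matrix-factorization
# preconditioner and FFT-based matvecs; here a dense matvec and a
# Jacobi preconditioner keep the sketch self-contained.

def matvec(A, x):
    return [sum(a * xj for a, xj in zip(row, x)) for row in A]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=100):
    """Solve A x = b by PCG; M_inv_diag holds 1/diag(M) for M = diag(A)."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                                    # residual b - A x0 (x0 = 0)
    z = [mi * ri for mi, ri in zip(M_inv_diag, r)]
    p = z[:]
    rz = dot(r, z)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rz / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if dot(r, r) ** 0.5 < tol:
            break
        z = [mi * ri for mi, ri in zip(M_inv_diag, r)]
        rz_new = dot(r, z)
        beta = rz_new / rz
        rz = rz_new
        p = [zi + beta * pi for zi, pi in zip(z, p)]
    return x

# Symmetric Toeplitz matrix defined entirely by its first column.
first_col = [2.0, -1.0, 0.0, 0.0]
n = len(first_col)
A = [[first_col[abs(i - j)] for j in range(n)] for i in range(n)]
b = [1.0, 0.0, 0.0, 1.0]
x = pcg(A, b, M_inv_diag=[1.0 / A[i][i] for i in range(n)])
residual = max(abs(bi - axi) for bi, axi in zip(b, matvec(A, x)))
```

For this 4x4 tridiagonal Toeplitz system the exact solution is x = (1, 1, 1, 1); exchanging the dense matvec for an FFT-based circulant-embedded product is what brings the per-iteration cost down to O(n log n) in the paper's setting.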
Remote ischaemic preconditioning: closer to the mechanism?
Gleadle, Jonathan M.; Mazzone, Annette
2016-01-01
Brief periods of ischaemia followed by reperfusion of one tissue such as skeletal muscle can confer subsequent protection against ischaemia-induced injury in other organs such as the heart. Substantial evidence of this effect has been accrued in experimental animal models. However, the translation of this phenomenon to its use as a therapy in ischaemic disease has been largely disappointing without clear evidence of benefit in humans. Recently, innovative experimental observations have suggested that remote ischaemic preconditioning (RIPC) may be largely mediated through hypoxic inhibition of the oxygen-sensing enzyme PHD2, leading to enhanced levels of alpha-ketoglutarate and subsequent increases in circulating kynurenic acid (KYNA). These observations provide vital insights into the likely mechanisms of RIPC and a route to manipulating this mechanism towards therapeutic benefit by direct alteration of KYNA, alpha-ketoglutarate levels, PHD inhibition, or pharmacological targeting of the incompletely understood cardioprotective mechanism activated by KYNA. PMID:28163901
Discrete sensitivity derivatives of the Navier-Stokes equations with a parallel Krylov solver
NASA Technical Reports Server (NTRS)
Ajmani, Kumud; Taylor, Arthur C., III
1994-01-01
This paper solves an 'incremental' form of the sensitivity equations derived by differentiating the discretized thin-layer Navier Stokes equations with respect to certain design variables of interest. The equations are solved with a parallel, preconditioned Generalized Minimal RESidual (GMRES) solver on a distributed-memory architecture. The 'serial' sensitivity analysis code is parallelized by using the Single Program Multiple Data (SPMD) programming model, domain decomposition techniques, and message-passing tools. Sensitivity derivatives are computed for low and high Reynolds number flows over a NACA 1406 airfoil on a 32-processor Intel Hypercube, and found to be identical to those computed on a single-processor Cray Y-MP. It is estimated that the parallel sensitivity analysis code has to be run on 40-50 processors of the Intel Hypercube in order to match the single-processor processing time of a Cray Y-MP.
Hyperbaric oxygen preconditioning protects rats against CNS oxygen toxicity.
Arieli, Yehuda; Kotler, Doron; Eynan, Mirit; Hochman, Ayala
2014-06-15
We examined the hypothesis that repeated exposure to non-convulsive hyperbaric oxygen (HBO) as preconditioning provides protection against central nervous system oxygen toxicity (CNS-OT). Four groups of rats were used in the study. Rats in the control and the negative control (Ctl-) groups were kept in normobaric air. Two groups of rats were preconditioned to non-convulsive HBO at 202 kPa for 1h once every other day for a total of three sessions. Twenty-four hours after preconditioning, one of the preconditioned groups and the control rats were exposed to convulsive HBO at 608 kPa, and latency to CNS-OT was measured. Ctl- rats and the second preconditioned group (PrC-) were not subjected to convulsive HBO exposure. Tissues harvested from the hippocampus and frontal cortex were evaluated for enzymatic activity and nitrotyrosine levels. In the group exposed to convulsive oxygen at 608 kPa, latency to CNS-OT increased from 12.8 to 22.4 min following preconditioning. A significant decrease in the activity of glutathione reductase and glucose-6-phosphate dehydrogenase, and a significant increase in glutathione peroxidase activity, was observed in the hippocampus of preconditioned rats. Nitrotyrosine levels were significantly lower in the preconditioned animals, the highest level being observed in the control rats. In the cortex of the preconditioned rats, a significant increase was observed in glutathione S-transferase and glutathione peroxidase activity. Repeated exposure to non-convulsive HBO provides protection against CNS-OT. The protective mechanism involves alterations in the enzymatic activity of the antioxidant system and lower levels of peroxynitrite, mainly in the hippocampus. Copyright © 2014 Elsevier B.V. All rights reserved.
The Influence of Diabetes Mellitus in Myocardial Ischemic Preconditioning.
Rezende, Paulo Cury; Rahmi, Rosa Maria; Hueb, Whady
Ischemic preconditioning (IP) is a powerful mechanism of protection discovered in the heart in which ischemia paradoxically protects the myocardium against other ischemic insults. Many factors such as diseases and medications may influence IP expression. Although diabetes poses higher cardiovascular risk, the physiopathology underlying this condition is uncertain. Moreover, although diabetes is believed to alter intracellular pathways related to myocardial protective mechanisms, it is still controversial whether diabetes may interfere with ischemic preconditioning and whether this might influence clinical outcomes. This review article looks at published reports with animal models and humans that tried to evaluate the possible influence of diabetes in myocardial ischemic preconditioning.
Preconditioned Minimal Residual Methods for Chebyshev Spectral Calculations
NASA Technical Reports Server (NTRS)
Canuto, C.; Quarteroni, A.
1983-01-01
The problem of preconditioning the pseudospectral Chebyshev approximation of an elliptic operator is considered. The numerical sensitivity to variations of the coefficients of the operator is investigated for two classes of preconditioning matrices: one arising from finite differences, the other from finite elements. The preconditioned system is solved by a conjugate gradient type method, and by a DuFort-Frankel method with dynamical parameters. The methods are compared on some test problems with the Richardson method and with the minimal residual Richardson method.
Preconditioning principles for preventing sports injuries in adolescents and children.
Dollard, Mark D; Pontell, David; Hallivis, Robert
2006-01-01
Preseason preconditioning can be accomplished over a 4-week period with a mandatory period of rest, as we have discussed. Athletic participation must be guided by a gradual increase of skills performance in the child, assessed after a responsible preconditioning program applying the physiologic parameters outlined. Clearly, designing a preconditioning program is a dynamic process when accounting for all the variables in training discussed so far. Despite the physiologic demands of sport and training, we still need to acknowledge the psychologic maturity and welfare of the child so as to ensure that the sport environment is a wholesome and emotionally rewarding experience.
CitcomSX: Robust preconditioning in CitcomS via PETSc
NASA Astrophysics Data System (ADS)
May, D.; Knepley, M. G.; Gurnis, M.
2009-12-01
The Citcom family of mantle convection codes is in widespread use throughout the geodynamics community. Since the inception of the original Cartesian version written in the early 1990s, many variants have been developed. Two important contributions were made by Shijie Zhong in the form of the parallel 3D Cartesian version and the parallel, full spherical version. Maintenance and support of CitcomS through CIG has seen a further increase in the development and usage of this particular version of Citcom. Such improvements have primarily been concerned with the introduction of new physics (rheology, compressibility), coupling with other software, including additional geologically relevant inputs/outputs, and improved portability. Today, CitcomS is routinely used to solve mantle convection and subduction models with approximately one hundred million unknowns on large distributed memory clusters. Many advances have been made in both numerical linear algebra and the software encapsulating these concepts since the time of the original Citcom; however, the solver used by all Citcom software has remained largely unchanged from the original version. Incorporating modern techniques into CitcomS has the potential to greatly improve the flexibility and robustness of the method used to solve the underlying saddle point problem. Here we describe how PETSc (www.mcs.anl.gov/petsc), a flexible linear algebra package, has been integrated into CitcomS in a non-invasive fashion which i) preserves all the pre-existing functionality and ii) enables a rich infrastructure of preconditioned Krylov methods to be used to solve the discrete Stokes flow problem. The "extension" of the solver capabilities in CitcomS has prompted this version to be referred to as CitcomSX. We demonstrate the advantages of CitcomSX by comparing the convergence rate and solution time of the new Stokes solver with the original CitcomS approach. The BFBt preconditioner we utilise is robust and yields convergence
Modulated Hebb-Oja learning rule--a method for principal subspace analysis.
Jankovic, Marko V; Ogawa, Hidemitsu
2006-03-01
This paper presents an analysis of the recently proposed modulated Hebb-Oja (MHO) method, which performs a linear mapping to a lower-dimensional subspace; the principal component subspace is the case analyzed. Compared to some other well-known methods for yielding the principal component subspace (e.g., Oja's Subspace Learning Algorithm), the proposed method has one feature that could be seen as desirable from the biological point of view: the synaptic efficacy learning rule does not need explicit information about the values of the other efficacies to make an individual efficacy modification. Also, the simplicity of the "neural circuits" that perform global computations, and the fact that their number does not depend on the number of input and output neurons, could be seen as good features of the proposed method.
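For readers unfamiliar with this family of rules, the classic single-neuron Oja rule (the baseline against which subspace methods such as MHO are compared) can be sketched in a few lines: a Hebbian term drives the weight toward the principal direction of the input stream, while a decay term keeps the weight norm near one. The data, learning rate, and variable names below are illustrative, not from the paper.

```python
# Minimal sketch of Oja's single-neuron learning rule: the weight
# vector converges toward the unit principal eigenvector of the
# input covariance. Toy data stream, not from the MHO paper.
import random

random.seed(0)

# Zero-mean 2-D samples stretched along the direction (1, 1): the
# principal axis of this stream is approximately (1, 1)/sqrt(2).
samples = []
for _ in range(4000):
    s = random.gauss(0.0, 1.0)       # strong component along (1, 1)
    t = random.gauss(0.0, 0.1)       # weak component along (1, -1)
    samples.append((s + t, s - t))

w = [1.0, 0.0]                       # initial weight vector
eta = 0.01                           # learning rate
for x in samples:
    y = w[0] * x[0] + w[1] * x[1]    # neuron output y = w . x
    # Oja's rule: Hebbian term y*x minus the decay y^2*w that
    # implicitly normalizes ||w|| toward 1.
    w[0] += eta * y * (x[0] - y * w[0])
    w[1] += eta * y * (x[1] - y * w[1])

norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
# Cosine of the angle between w and the true principal axis (1, 1)/sqrt(2).
cos_angle = abs(w[0] + w[1]) / (norm * 2 ** 0.5)
```

Note the locality property the abstract highlights for MHO goes further than this: here each efficacy update still uses the shared output y, whereas MHO avoids needing the values of the other efficacies explicitly.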
Subspace learning for Mumford-Shah-model-based texture segmentation through texture patches.
Law, Yan Nei; Lee, Hwee Kuan; Yip, Andy M
2011-07-20
In this paper, we develop a robust and effective algorithm for texture segmentation and feature selection. The approach is to incorporate a patch-based subspace learning technique into the subspace Mumford-Shah (SMS) model to make the minimization of the SMS model robust and accurate. The proposed method is fully unsupervised in that it removes the need to specify training data, which is required by existing methods for the same model. We further propose a novel (to our knowledge) pairwise dissimilarity measure for pixels. Its novelty lies in the use of the relevance scores of the features of each pixel to improve its discriminating power. Some superior results are obtained compared to existing unsupervised algorithms, which do not use a subspace approach. This confirms the usefulness of the subspace approach and the proposed unsupervised algorithm.
Building Ultra-Low False Alarm Rate Support Vector Classifier Ensembles Using Random Subspaces
Chen, B Y; Lemmond, T D; Hanley, W G
2008-10-06
This paper presents the Cost-Sensitive Random Subspace Support Vector Classifier (CS-RS-SVC), a new learning algorithm that combines random subspace sampling and bagging with Cost-Sensitive Support Vector Classifiers to more effectively address detection applications burdened by unequal misclassification requirements. When compared to its conventional, non-cost-sensitive counterpart on a two-class signal detection application, random subspace sampling is shown to very effectively leverage the additional flexibility offered by the Cost-Sensitive Support Vector Classifier, yielding a more than four-fold increase in the detection rate at a false alarm rate (FAR) of zero. Moreover, the CS-RS-SVC is shown to be fairly robust to constraints on the feature subspace dimensionality, enabling reductions in computation time of up to 82% with minimal performance degradation.
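The random-subspace ensemble idea in this abstract can be sketched compactly: each base learner trains on a random subset of the features, and a vote threshold over the ensemble trades detection rate against false alarms. The paper's base learners are cost-sensitive SVMs; to keep this sketch dependency-free, a tiny nearest-centroid classifier stands in for them, and all data and names are illustrative.

```python
# Sketch of random subspace sampling with ensemble voting. A vote
# threshold on the ensemble output is one simple way to push the
# false alarm rate down; the CS-RS-SVC instead builds the cost
# asymmetry into each SVM base learner.
import random

random.seed(1)

def fit_centroids(X, y, feats):
    """Per-class mean over the selected feature subset."""
    cents = {}
    for label in set(y):
        rows = [X[i] for i in range(len(X)) if y[i] == label]
        cents[label] = [sum(r[f] for r in rows) / len(rows) for f in feats]
    return cents

def predict_one(cents, feats, x):
    def d2(c):
        return sum((x[f] - cj) ** 2 for f, cj in zip(feats, c))
    return min(cents, key=lambda label: d2(cents[label]))

def random_subspace_votes(X, y, x, n_learners=25, subspace_dim=2):
    """Train n_learners base classifiers, each on a random feature
    subset, and return the number of votes for class 1."""
    n_feats = len(X[0])
    votes = 0
    for _ in range(n_learners):
        feats = random.sample(range(n_feats), subspace_dim)
        cents = fit_centroids(X, y, feats)
        votes += predict_one(cents, feats, x)   # labels are 0/1
    return votes   # raise the vote threshold to lower false alarms

# Toy 4-feature detection data: class 1 is shifted by +2 in every feature.
X, y = [], []
for label in (0, 1):
    for _ in range(30):
        X.append([random.gauss(2.0 * label, 0.5) for _ in range(4)])
        y.append(label)

votes_pos = random_subspace_votes(X, y, [2.0] * 4)   # clearly class 1
votes_neg = random_subspace_votes(X, y, [0.0] * 4)   # clearly class 0
```

Requiring, say, all 25 votes before declaring a detection is the ensemble-level analogue of the zero-FAR operating point studied in the paper.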
Visual Exploration of High-Dimensional Data through Subspace Analysis and Dynamic Projections
Liu, S.; Wang, B.; Thiagarajan, Jayaraman J.; Bremer, Peer -Timo; Pascucci, Valerio
2015-06-01
We introduce a novel interactive framework for visualizing and exploring high-dimensional datasets based on subspace analysis and dynamic projections. We assume the high-dimensional dataset can be represented by a mixture of low-dimensional linear subspaces with mixed dimensions, and provide a method to reliably estimate the intrinsic dimension and linear basis of each subspace extracted from the subspace clustering. Subsequently, we use these bases to define unique 2D linear projections as viewpoints from which to visualize the data. To understand the relationships among the different projections and to discover hidden patterns, we connect these projections through dynamic projections that create smooth animated transitions between pairs of projections. We introduce the view transition graph, which provides flexible navigation among these projections to facilitate an intuitive exploration. Finally, we provide detailed comparisons with related systems, and use real-world examples to demonstrate the novelty and usability of our proposed framework.
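The "dynamic projection" transition described above can be illustrated very simply: linearly interpolate between two orthonormal 2-D projection bases and re-orthonormalize (Gram-Schmidt) at every step, so each intermediate frame is itself a valid linear projection. The paper's transitions are constructed more carefully (via a view transition graph); the bases and step count below are illustrative assumptions.

```python
# Naive smooth transition between two 2-D viewpoints on 3-D data:
# interpolate the basis vectors, then Gram-Schmidt each frame so
# every intermediate frame is an orthonormal projection basis.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def normalize(v):
    n = dot(v, v) ** 0.5
    return [a / n for a in v]

def orthonormalize(u, v):
    """Gram-Schmidt on a pair of 3-D vectors."""
    u = normalize(u)
    proj = dot(v, u)
    v = normalize([vi - proj * ui for vi, ui in zip(v, u)])
    return u, v

# Two viewpoints: project onto the xy-plane, then onto the zy-plane.
A = ([1.0, 0.0, 0.0], [0.0, 1.0, 0.0])
B = ([0.0, 0.0, 1.0], [0.0, 1.0, 0.0])

frames = []
steps = 10
for k in range(steps + 1):
    t = k / steps
    u = [(1 - t) * a + t * b for a, b in zip(A[0], B[0])]
    v = [(1 - t) * a + t * b for a, b in zip(A[1], B[1])]
    frames.append(orthonormalize(u, v))

# Maximum deviation of any frame from exact orthonormality.
max_dev = max(max(abs(dot(u, u) - 1), abs(dot(v, v) - 1), abs(dot(u, v)))
              for u, v in frames)
```

Animating the data through these eleven frames gives the smooth viewpoint transition the abstract describes; a geodesic on the Grassmannian would give a more uniform rotation speed, at the cost of a more involved construction.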
Quantum Recurrence of a Subspace and Operator-Valued Schur Functions
NASA Astrophysics Data System (ADS)
Bourgain, J.; Grünbaum, F. A.; Velázquez, L.; Wilkening, J.
2014-08-01
A notion of monitored recurrence for discrete-time quantum processes was recently introduced in Grünbaum et al. (Commun Math Phys (2), 320:543-569,
Universal quantum computation in decoherence-free subspaces with hot trapped ions
Aolita, Leandro; Davidovich, Luiz; Kim, Kihwan; Haeffner, Hartmut
2007-05-15
We consider interactions that generate a universal set of quantum gates on logical qubits encoded in a collective-dephasing-free subspace, and discuss their implementations with trapped ions. This allows for the removal of the by-far largest source of decoherence in current trapped-ion experiments, collective dephasing. In addition, an explicit parametrization of all two-body Hamiltonians able to generate such gates without the system's state ever exiting the protected subspace is provided.
40 CFR 1065.516 - Sample system decontamination and preconditioning.
Code of Federal Regulations, 2014 CFR
2014-07-01
... Cycles § 1065.516 Sample system decontamination and preconditioning. This section describes how to manage... purified air or nitrogen. (3) When calculating zero emission levels, apply all applicable...
40 CFR 86.132-00 - Vehicle preconditioning.
Code of Federal Regulations, 2013 CFR
2013-07-01
... (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES Emission Regulations for... manufacturer has concerns about fuel effects on adaptive memory systems, a manufacturer may precondition a test...
40 CFR 86.132-00 - Vehicle preconditioning.
Code of Federal Regulations, 2010 CFR
2010-07-01
... (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES Emission Regulations for... manufacturer has concerns about fuel effects on adaptive memory systems, a manufacturer may precondition a test...
40 CFR 86.132-00 - Vehicle preconditioning.
Code of Federal Regulations, 2012 CFR
2012-07-01
... (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES Emission Regulations for... manufacturer has concerns about fuel effects on adaptive memory systems, a manufacturer may precondition a test...
40 CFR 86.132-00 - Vehicle preconditioning.
Code of Federal Regulations, 2011 CFR
2011-07-01
... (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES Emission Regulations for... manufacturer has concerns about fuel effects on adaptive memory systems, a manufacturer may precondition a test...
40 CFR 86.132-00 - Vehicle preconditioning.
Code of Federal Regulations, 2014 CFR
2014-07-01
... (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES Emission Regulations for... manufacturer has concerns about fuel effects on adaptive memory systems, a manufacturer may precondition a test...
33 CFR 183.320 - Preconditioning for tests.
Code of Federal Regulations, 2014 CFR
2014-07-01
... Engines of 2 Horsepower or Less General § 183.320 Preconditioning for tests. A boat must meet the... be sealed. (f) The boat must be keel down in the water. (g) The boat must be swamped, allowing...
33 CFR 183.320 - Preconditioning for tests.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Engines of 2 Horsepower or Less General § 183.320 Preconditioning for tests. A boat must meet the... be sealed. (f) The boat must be keel down in the water. (g) The boat must be swamped, allowing...
Subspace learning of dynamics on a shape manifold: a generative modeling approach.
Yi, Sheng; Krim, Hamid
2014-11-01
In this paper, we propose a novel subspace learning algorithm of shape dynamics. Compared to previous work, our method is invertible and better characterizes the nonlinear geometry of a shape manifold while retaining good computational efficiency. Using a parallel moving frame on a shape manifold, each path of shape dynamics is uniquely represented in a subspace spanned by the moving frame, given an initial condition (the starting point and starting frame). Mathematically, such a representation may be formulated as solving a manifold-valued differential equation, which provides a generative model of high-dimensional shape dynamics in a lower dimensional subspace. Given the parallelism and a path on a shape manifold, the parallel moving frame along the path is uniquely determined up to the choice of the starting frame. With an initial frame, we minimize the reconstruction error from the subspace to the shape manifold. Such an optimization characterizes the Riemannian geometry of the manifold well by imposing parallelism constraints (equivalent to a Riemannian metric) on the moving frame. The parallelism in this paper is defined by a Levi-Civita connection, which is consistent with the Riemannian metric of the shape manifold. In the experiments, the performance of the subspace learning is extensively evaluated using two scenarios: 1) how the high-dimensional geometry is characterized in the subspace and 2) how the reconstruction compares with the original shape dynamics. The results demonstrate and validate the theoretical advantages of the proposed approach.
NASA Astrophysics Data System (ADS)
Sekihara, Kensuke; Kawabata, Yuya; Ushio, Shuta; Sumiya, Satoshi; Kawabata, Shigenori; Adachi, Yoshiaki; Nagarajan, Srikantan S.
2016-06-01
Objective. In functional electrophysiological imaging, signals are often contaminated by interference that can be of considerable magnitude compared to the signals of interest. This paper proposes a novel algorithm for removing such interference that does not require separate noise measurements. Approach. The algorithm is based on a dual definition of the signal subspace in the spatial and time domains. Since the algorithm makes use of this duality, it is named the dual signal subspace projection (DSSP). The DSSP algorithm first projects the columns of the measured data matrix onto the inside and outside of the spatial-domain signal subspace, creating a set of two preprocessed data matrices. The intersection of the row spans of these two matrices is estimated as the time-domain interference subspace. The original data matrix is projected onto the subspace that is orthogonal to this interference subspace. Main results. The DSSP algorithm is validated using computer simulation and two sets of real biomagnetic data: spinal cord evoked field data measured from a healthy volunteer and magnetoencephalography data from a patient with a vagus nerve stimulator. Significance. The proposed DSSP algorithm is effective for removing overlapping interference in a wide variety of biomagnetic measurements.
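The two-projection procedure described in this abstract can be sketched as follows. This is a hedged numpy sketch assuming the spatial signal-subspace projector `Ps` is already available (e.g. from lead fields); `dssp` and its parameters are our own illustrative names, not the authors' code:

```python
import numpy as np

def _rowspace(A, rtol=1e-10):
    """Orthonormal basis (rows) of the row space of A."""
    _, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[s > rtol * s[0]]

def dssp(B, Ps, r, tol=0.95):
    """B: channels x time data; Ps: projector onto the spatial-domain signal
    subspace; r: assumed maximum interference rank."""
    # Step 1: project data columns onto the inside/outside of the signal subspace.
    B_in, B_out = Ps @ B, (np.eye(Ps.shape[0]) - Ps) @ B
    # Step 2: estimate the intersection of the two row spans (the time-domain
    # interference subspace) via principal angles (singular values near 1).
    V_in, V_out = _rowspace(B_in), _rowspace(B_out)
    U, s, _ = np.linalg.svd(V_in @ V_out.T)
    k = min(r, int(np.sum(s > tol)))
    V_int = U[:, :k].T @ V_in              # k x time interference basis
    # Step 3: project the data onto the orthogonal complement of that subspace.
    return B - (B @ V_int.T) @ V_int
```

On a toy problem where one interference time course leaks into all channels while the signal lives only in the spatial signal subspace, the interference component is removed while signal-space activity survives.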
Efficient variational Bayesian approximation method based on subspace optimization.
Zheng, Yuling; Fraysse, Aurélia; Rodet, Thomas
2015-02-01
Variational Bayesian approximations have been widely used in fully Bayesian inference for approximating an intractable posterior distribution by a separable one. Nevertheless, the classical variational Bayesian approximation (VBA) method suffers from slow convergence to the approximate solution when tackling large dimensional problems. To address this problem, we propose a more efficient VBA method in this paper. Indeed, the variational Bayesian problem can be seen as a functional optimization problem. The proposed method adapts subspace optimization methods in Hilbert spaces to the function space involved, in order to solve this optimization problem iteratively. The aim is to determine an optimal direction at each iteration in order to obtain a more efficient method. We highlight the efficiency of our new VBA method and demonstrate its application to image processing by considering an ill-posed linear inverse problem with a total variation prior. Comparisons with state-of-the-art variational Bayesian methods through a numerical example show a notable improvement in computation time.
Supervised orthogonal discriminant subspace projects learning for face recognition.
Chen, Yu; Xu, Xiao-Hong
2014-02-01
In this paper, a new linear dimension reduction method called supervised orthogonal discriminant subspace projection (SODSP) is proposed, which addresses the high dimensionality of data and the small sample size problem. More specifically, given a set of data points in the ambient space, a novel weight matrix that describes the relationship between the data points is first built, and in order to model the manifold structure, the class information is incorporated into the weight matrix. Based on this weight matrix, the local scatter matrix as well as the non-local scatter matrix are defined such that the neighborhood structure can be preserved. In order to enhance the recognition ability, we impose an orthogonality constraint on a graph-based maximum margin analysis, seeking a projection that maximizes the difference, rather than the ratio, between the non-local scatter and the local scatter. In this way, SODSP naturally avoids the singularity problem. Further, we develop an efficient and stable algorithm for implementing SODSP, especially on high-dimensional data sets. Moreover, theoretical analysis shows that LPP is a special instance of SODSP obtained by imposing certain constraints. Experiments on the ORL, Yale, Extended Yale face database B and FERET face databases are performed to test and evaluate the proposed algorithm. The results demonstrate the effectiveness of SODSP. Copyright © 2013 Elsevier Ltd. All rights reserved.
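A difference-based projection of the kind described (maximizing non-local minus local scatter under an orthogonality constraint) can be sketched with a symmetric eigenproblem. This is our own simplified illustration, not the authors' exact SODSP algorithm; `W` is an assumed precomputed class-aware weight matrix:

```python
import numpy as np

def difference_scatter_projection(X, W, d):
    """X: n x D data, W: n x n symmetric weight matrix, d: target dimension.
    The top-d eigenvectors of a symmetric matrix are orthonormal, so the
    orthogonality constraint holds without solving a ratio (generalized
    eigenvalue) problem, avoiding the singularity issue."""
    n = X.shape[0]
    L = np.diag(W.sum(axis=1)) - W                 # graph Laplacian of W
    S_local = X.T @ L @ X                          # local scatter
    C = np.eye(n) - np.ones((n, n)) / n            # centering matrix
    S_total = X.T @ C @ X                          # total scatter
    M = (S_total - S_local) - S_local              # non-local minus local
    _, vecs = np.linalg.eigh(M)                    # ascending eigenvalues
    return vecs[:, -d:]                            # top-d orthonormal directions

# Toy two-class problem: classes separated along the first coordinate.
rng = np.random.default_rng(2)
n, D = 60, 5
y = np.repeat([0, 1], n // 2)
X = rng.normal(size=(n, D))
X[y == 1, 0] += 4.0
W = (y[:, None] == y[None, :]).astype(float)       # same-class weights
P = difference_scatter_projection(X, W, d=2)
```

The learned directions are orthonormal by construction, and on this toy data the top direction aligns with the between-class axis.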
Bohmian dynamics on subspaces using linearized quantum force.
Rassolov, Vitaly A; Garashchuk, Sophya
2004-04-15
In the de Broglie-Bohm formulation of quantum mechanics the time-dependent Schrodinger equation is solved in terms of quantum trajectories evolving under the influence of quantum and classical potentials. For a practical implementation that scales favorably with system size and is accurate for semiclassical systems, we use approximate quantum potentials. Recently, we have shown that optimization of the nonclassical component of the momentum operator in terms of fitting functions leads to the energy-conserving approximate quantum potential. In particular, linear fitting functions give the exact time evolution of a Gaussian wave packet in a locally quadratic potential and can describe the dominant quantum-mechanical effects in the semiclassical scattering problems of nuclear dynamics. In this paper we formulate the Bohmian dynamics on subspaces and define the energy-conserving approximate quantum potential in terms of optimized nonclassical momentum, extended to include the domain boundary functions. This generalization allows a better description of the non-Gaussian wave packets and general potentials in terms of simple fitting functions. The optimization is performed independently for each domain and each dimension. For linear fitting functions optimal parameters are expressed in terms of the first and second moments of the trajectory distribution. Examples are given for one-dimensional anharmonic systems and for the collinear hydrogen exchange reaction.
Independent vector analysis using subband and subspace nonlinearity
NASA Astrophysics Data System (ADS)
Na, Yueyue; Yu, Jian; Chai, Bianfang
2013-12-01
Independent vector analysis (IVA) is a recently proposed technique, one application of which is to solve the frequency domain blind source separation problem. Compared with the traditional complex-valued independent component analysis plus permutation correction approach, the largest advantage of IVA is that the permutation problem is addressed directly by IVA rather than by resorting to an ad hoc permutation resolving algorithm after separation of the sources in multiple frequency bands. In this article, two updates for IVA are presented. First, a novel subband construction method is introduced: IVA is conducted in subbands from high frequency to low frequency rather than in the full frequency band, and the stronger inter-frequency dependencies in subbands allow a more efficient approach to the permutation problem. Second, to improve robustness against noise, the IVA nonlinearity is calculated only in the signal subspace, which is defined by the eigenvector associated with the largest eigenvalue of the signal correlation matrix. Experiments were carried out on a software suite developed by us, and dramatic performance improvements were observed using the proposed methods. Lastly, as an example of a real-world application, IVA with the proposed updates was used to separate vibration components from high-speed train noise data.
Steganalysis in high dimensions: fusing classifiers built on random subspaces
NASA Astrophysics Data System (ADS)
Kodovský, Jan; Fridrich, Jessica
2011-02-01
By working with high-dimensional representations of covers, modern steganographic methods are capable of preserving a large number of complex dependencies among individual cover elements and thus avoid detection using current best steganalyzers. Inevitably, steganalysis needs to start using high-dimensional feature sets as well. This brings two key problems - construction of good high-dimensional features and machine learning that scales well with respect to dimensionality. Depending on the classifier, high dimensionality may lead to problems with the lack of training data, infeasibly high complexity of training, degradation of generalization abilities, lack of robustness to cover source, and saturation of performance below its potential. To address these problems collectively known as the curse of dimensionality, we propose ensemble classifiers as an alternative to the much more complex support vector machines. Based on the character of the media being analyzed, the steganalyst first puts together a high-dimensional set of diverse "prefeatures" selected to capture dependencies among individual cover elements. Then, a family of weak classifiers is built on random subspaces of the prefeature space. The final classifier is constructed by fusing the decisions of individual classifiers. The advantage of this approach is its universality, low complexity, simplicity, and improved performance when compared to classifiers trained on the entire prefeature set. Experiments with the steganographic algorithms nsF5 and HUGO demonstrate the usefulness of this approach over current state of the art.
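The random-subspace fusion scheme described above can be sketched with simple base learners. This is a hedged illustration: the paper's weak classifiers are more elaborate; here we use least-squares linear classifiers on random feature subsets and fuse their decisions by voting:

```python
import numpy as np

def fit_ensemble(X, y, n_learners=25, subspace_dim=10, seed=0):
    """Train weak linear classifiers, each on a random feature subspace.
    y has labels in {0, 1}; each learner fits a least-squares separator."""
    rng = np.random.default_rng(seed)
    learners = []
    for _ in range(n_learners):
        idx = rng.choice(X.shape[1], size=subspace_dim, replace=False)
        A = np.hstack([X[:, idx], np.ones((X.shape[0], 1))])  # bias column
        w, *_ = np.linalg.lstsq(A, 2.0 * y - 1.0, rcond=None)  # targets in {-1,+1}
        learners.append((idx, w))
    return learners

def predict_ensemble(learners, X):
    """Fuse the individual decisions by majority vote."""
    votes = np.zeros(X.shape[0])
    for idx, w in learners:
        A = np.hstack([X[:, idx], np.ones((X.shape[0], 1))])
        votes += np.sign(A @ w)
    return (votes > 0).astype(int)
```

Each learner sees only `subspace_dim` of the features, so training cost stays low even when the full feature set is very high-dimensional, which is the point of the approach.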
Randomized Subspace Learning for Proline Cis-Trans Isomerization Prediction.
Al-Jarrah, Omar Y; Yoo, Paul D; Taha, Kamal; Muhaidat, Sami; Shami, Abdallah; Zaki, Nazar
2015-01-01
Proline residues are a common source of kinetic complications during folding. The X-Pro peptide bond is the only peptide bond for which the stability of the cis and trans conformations is comparable. The cis-trans isomerization (CTI) of X-Pro peptide bonds is a widely recognized rate-limiting factor, which not only induces additional slow phases in protein folding but also modifies the millisecond and sub-millisecond dynamics of the protein. An accurate computational prediction of proline CTI is of great importance for understanding protein folding, splicing, cell signaling, and transmembrane active transport in both humans and animals. In our earlier work, we successfully developed a biophysically motivated proline CTI predictor utilizing a novel tree-based consensus model with a powerful metalearning technique and achieved 86.58 percent Q2 accuracy and 0.74 Mcc, a better result than the 70-73 percent Q2 accuracies reported in the literature on the well-referenced benchmark dataset. In this paper, we describe experiments with novel randomized subspace learning and bootstrap seeding techniques as an extension to our earlier work, the consensus models as well as entropy-based learning methods, to obtain better accuracy through a precise and robust learning scheme for proline CTI prediction.
Controllable subspace of edge dynamics in complex networks
NASA Astrophysics Data System (ADS)
Pang, Shao-Peng; Hao, Fei
2017-09-01
For the edge dynamics in some real networks, full control may be neither feasible nor necessary. An accompanying issue is: when the external signal is applied to a few nodes or even a single node, how many edges can be controlled? In this paper, for the edge dynamics system, we propose a theoretical framework to determine the controllable subspace and calculate its generic dimension based on integer linear programming. This framework allows us not only to analyze the control centrality, i.e., the ability of a node to control, but also to uncover the controllable centrality, i.e., the propensity of an edge to be controllable. The simulation results and analytic calculations show that dense and homogeneous networks tend to have larger control centrality of nodes and controllable centrality of edges, but negatively correlated in- and out-degrees of nodes or edges can reduce both centralities. The positive correlation between the control centrality of a node and its out-degree means that the distribution of control centrality, rather than that of controllable centrality, is encoded by the out-degree distribution of the network. Meanwhile, the positive correlation indicates that nodes with high out-degree tend to play more important roles in control.
A Multifaceted Independent Performance Analysis of Facial Subspace Recognition Algorithms
Bajwa, Usama Ijaz; Taj, Imtiaz Ahmad; Anwar, Muhammad Waqas; Wang, Xuan
2013-01-01
Face recognition has emerged as the fastest growing biometric technology and has expanded considerably in the last few years. Many new algorithms and commercial systems have been proposed and developed. Most of them use Principal Component Analysis (PCA) as a base for their techniques. Different and even conflicting results have been reported by researchers comparing these algorithms. The purpose of this study is to provide an independent comparative analysis, considering both performance and computational complexity, of six appearance-based face recognition algorithms, namely PCA, 2DPCA, A2DPCA, (2D)2PCA, LPP and 2DLPP, under equal working conditions. This study was motivated by the lack of an unbiased, comprehensive comparative analysis of some recent subspace methods with diverse distance metric combinations. For comparison with other studies, the FERET, ORL and YALE databases have been used with evaluation criteria as in the FERET evaluations, which closely simulate real-life scenarios. A comparison of results with previous studies is performed and anomalies are reported. An important contribution of this study is that it presents the suitable performance conditions for each of the algorithms under consideration. PMID:23451054
Pattern recognition using maximum likelihood estimation and orthogonal subspace projection
NASA Astrophysics Data System (ADS)
Islam, M. M.; Alam, M. S.
2006-08-01
Hyperspectral sensor imagery (HSI) is a relatively new area of research; however, it is used extensively in geology, agriculture, defense, intelligence and law enforcement applications. Much of the current research focuses on object detection with a low false alarm rate. Over the past several years, many object detection algorithms have been developed, including the linear detector, the quadratic detector, and the adaptive matched filter. In those methods the available data cube was used directly to determine the background mean and the covariance matrix, assuming that the number of object pixels is low compared to the number of data pixels. In this paper, we have used the orthogonal subspace projection (OSP) technique to find the background matrix from the given image data. Our algorithm consists of three parts. In the first part, we calculate the background matrix using the OSP technique. In the second part, we determine the maximum likelihood estimates of the parameters. Finally, we consider the likelihood ratio, commonly known as the Neyman-Pearson quadratic detector, to recognize the objects. The proposed technique has been investigated via computer simulation, where excellent performance has been observed.
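The background-suppression step (orthogonal subspace projection) at the heart of such detectors can be sketched as follows. This is a minimal illustration assuming the background endmember matrix `U` is already given, not the paper's full three-part detector; `osp_score` is a hypothetical name:

```python
import numpy as np

def osp_score(U, d, x):
    """U: bands x m matrix of background endmembers (columns), d: target
    signature, x: pixel spectrum. Project out the background subspace with
    P = I - U U^+ and then correlate the residual with the target."""
    P_perp = np.eye(U.shape[0]) - U @ np.linalg.pinv(U)
    return float(d @ P_perp @ x)
```

Because `P_perp U = 0` exactly, a pixel that is any mixture of background endmembers scores zero, while a pixel containing the target signature scores in proportion to the target abundance.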
The Galvanotactic Migration of Keratinocytes is Enhanced by Hypoxic Preconditioning
Guo, Xiaowei; Jiang, Xupin; Ren, Xi; Sun, Huanbo; Zhang, Dongxia; Zhang, Qiong; Zhang, Jiaping; Huang, Yuesheng
2015-01-01
The endogenous electric field (EF)-directed migration of keratinocytes (galvanotaxis) into wounds is an essential step in wound re-epithelialization. Hypoxia, which occurs immediately after injury, acts as an early stimulus to initiate the healing process; however, the mechanisms for this effect remain elusive. We show here that the galvanotactic migration of keratinocytes was enhanced by hypoxic preconditioning as a result of increased directionality rather than increased motility of keratinocytes. This enhancement was both oxygen tension- and preconditioning time-dependent, with the maximum effects achieved using 2% O2 preconditioning for 6 hours. Hypoxic preconditioning (2% O2, 6 hours) decreased the threshold voltage of galvanotaxis to < 25 mV/mm, whereas this value was between 25 and 50 mV/mm in the normal culture control. In a scratch-wound monolayer assay in which the applied EF was in the default healing direction, hypoxic preconditioning accelerated healing by 1.38-fold compared with the control conditions. Scavenging of the induced ROS by N-acetylcysteine (NAC) abolished the enhanced galvanotaxis and the accelerated healing by hypoxic preconditioning. Our data demonstrate a novel and unsuspected role of hypoxia in supporting keratinocyte galvanotaxis. Enhancing the galvanotactic response of cells might therefore be a clinically attractive approach to induce improved wound healing. PMID:25988491
Hypoxic preconditioning facilitates acclimatization to hypobaric hypoxia in rat heart.
Singh, Mrinalini; Shukla, Dhananjay; Thomas, Pauline; Saxena, Saurabh; Bansal, Anju
2010-12-01
Acute systemic hypoxia induces delayed cardioprotection against ischaemia-reperfusion injury in the heart. As cobalt chloride (CoCl₂) is known to elicit hypoxia-like responses, it was hypothesized that this chemical would mimic the preconditioning effect and facilitate acclimatization to hypobaric hypoxia in rat heart. Male Sprague-Dawley rats treated with distilled water or cobalt chloride (12.5 mg Co/kg for 7 days) were exposed to simulated altitude at 7622 m for different time periods (1, 2, 3 and 5 days). Hypoxic preconditioning with cobalt appreciably attenuated hypobaric hypoxia-induced oxidative damage as observed by a decrease in free radical (reactive oxygen species) generation, oxidation of lipids and proteins. Interestingly, the observed effect was due to increased expression of the antioxidant proteins hemeoxygenase and metallothionein, as no significant change was observed in antioxidant enzyme activity. Hypoxic preconditioning with cobalt increased hypoxia-inducible factor 1α (HIF-1α) expression as well as HIF-1 DNA binding activity, which further resulted in increased expression of HIF-1 regulated genes such as erythropoietin, vascular endothelial growth factor and glucose transporter. A significant decrease was observed in lactate dehydrogenase activity and lactate levels in the heart of preconditioned animals compared with non-preconditioned animals exposed to hypoxia. The results showed that hypoxic preconditioning with cobalt induces acclimatization by up-regulation of hemeoxygenase 1 and metallothionein 1 via HIF-1 stabilization. © 2010 The Authors. JPP © 2010 Royal Pharmaceutical Society of Great Britain.
Singh, Amritpal; Randhawa, Puneet Kaur; Bali, Anjana; Singh, Nirmal; Jaggi, Amteshwar Singh
2017-02-14
The cardioprotective effects of remote hind limb preconditioning (RIPC) are well known, but the mechanisms by which protection occurs remain to be explored. Therefore, the present study was designed to investigate the role of TRPV and CGRP in adenosine and remote preconditioning-induced cardioprotection, using sumatriptan, a CGRP release inhibitor, and ruthenium red, a TRPV inhibitor, in rats. For remote preconditioning, a pressure cuff was tied around the hind limb of the rat and inflated with air up to 150 mmHg to produce ischemia in the hind limb; during reperfusion the pressure was released. Four cycles of ischemia and reperfusion, each consisting of 5 min of inflation and 5 min of deflation of the pressure cuff, were used to produce remote limb preconditioning. An ex vivo Langendorff isolated rat heart model was used to induce ischemia reperfusion injury by 30 min of global ischemia followed by 120 min of reperfusion. RIPC produced a significant decrease in ischemia reperfusion-induced myocardial injury in terms of increase in LDH, CK and infarct size and decrease in LVDP, +dp/dtmax and -dp/dtmin. Moreover, pharmacological preconditioning with adenosine produced cardioprotective effects in a similar manner to RIPC. Pretreatment with sumatriptan, a CGRP release blocker, abolished RIPC and adenosine preconditioning-induced cardioprotective effects. Administration of ruthenium red, a TRPV inhibitor, also abolished adenosine preconditioning-induced cardioprotection. It may be proposed that the cardioprotective effects of adenosine and remote preconditioning are possibly mediated through activation of TRPV channels and the consequent release of CGRP.
Responsive corneosurfametry following in vivo skin preconditioning.
Uhoda, E; Goffin, V; Pierard, G E
2003-12-01
Skin is subjected to many environmental threats, some of which alter the structure and function of the stratum corneum. Among them, surfactants are recognized factors that may influence irritant contact dermatitis. The present study was conducted to compare the variations in skin capacitance and corneosurfametry (CSM) reactivity before and after skin exposure to repeated subclinical injuries by 2 hand dishwashing liquids. A forearm immersion test was performed on 30 healthy volunteers. 2 daily soak sessions were performed for 5 days. At inclusion and on the day following the last soak session, skin capacitance was measured and cyanoacrylate skin-surface strippings were harvested. The latter specimens were used for the ex vivo microwave CSM. Both types of assessments clearly differentiated the 2 hand dishwashing liquids. The forearm immersion test increased the discriminant sensitivity of CSM. Intact skin capacitance did not predict CSM data. By contrast, a significant correlation was found between the post-test conductance and the corresponding CSM data. In conclusion, a forearm immersion test under realistic conditions can discriminate the irritation potential between surfactant-based products by measuring skin conductance and performing CSM. In vivo skin preconditioning by surfactants increases CSM sensitivity to the same surfactants.
A Weakest Precondition Approach to Robustness
NASA Astrophysics Data System (ADS)
Balliu, Musard; Mastroeni, Isabella
With the increasing complexity of information management computer systems, security becomes a real concern. E-government, web-based financial transactions, and military and health care information systems are only a few examples where large amounts of information can reside on different hosts distributed worldwide. It is clear that any disclosure or corruption of confidential information in these contexts can prove fatal. Information flow controls constitute an appealing and promising technology to protect both data confidentiality and data integrity. The certification of the security degree of a program that runs in untrusted environments still remains an open problem in the area of language-based security. Robustness asserts that an active attacker, who can modify program code at some fixed points (holes), is unable to disclose more private information than a passive attacker, who merely observes unclassified data. In this paper, we extend a method recently proposed for checking declassified non-interference in the presence of passive attackers only, in order to check robustness by means of weakest precondition semantics. In particular, this semantics simulates the kind of analysis that can be performed by an attacker, i.e., from public output towards private input. The choice of semantics allows us to distinguish between different attack models and to characterize the security of applications in different scenarios.
Preconditioning and postconditioning: new strategies for cardioprotection.
Hausenloy, D J; Yellon, D M
2008-06-01
Despite optimal therapy, the morbidity and mortality of coronary heart disease (CHD) remains significant, particularly in patients with diabetes or the metabolic syndrome. New strategies for cardioprotection are therefore required to improve the clinical outcomes in patients with CHD. Ischaemic preconditioning (IPC) as a cardioprotective strategy has not fulfilled it clinical potential, primarily because of the need to intervene before the index ischaemic event, which is impossible to predict in patients presenting with an acute myocardial infarction (AMI). However, emerging studies suggest that IPC-induced protection is mediated in part by signalling transduction pathways recruited at time of myocardial reperfusion, creating the possibility of harnessing its cardioprotective potential by intervening at time of reperfusion. In this regard, the recently described phenomenon of ischaemic postconditioning (IPost) has attracted great interest, particularly as it represents an intervention, which can be applied at time of myocardial reperfusion for patients presenting with an AMI. Interestingly, the signal transduction pathways, which underlie its protection, are similar to those recruited by IPC, creating a potential common cardioprotective pathway, which can be recruited at time of myocardial reperfusion, through the use of appropriate pharmacological agents given as adjuvant therapy to current myocardial reperfusion strategies such as thrombolysis and primary percutaneous coronary intervention for patients presenting with an AMI. This article provides a brief overview of IPC and IPost and describes the common signal transduction pathway they both appear to recruit at time of myocardial reperfusion, the pharmacological manipulation of which has the potential to generate new strategies for cardioprotection.
Joseph, Ilon
2014-05-27
Jacobian-free Newton-Krylov (JFNK) algorithms are a potentially powerful class of methods for solving the problem of coupling codes that address different physics models. As communication capability between individual submodules varies, different choices of coupling algorithms are required. The more communication that is available, the more possible it becomes to exploit the simple sparsity pattern of the Jacobian, albeit of a large system. The less communication that is available, the denser the Jacobian matrices become, and new types of preconditioners must be sought to efficiently take large time steps. In general, methods that use constrained or reduced subsystems can offer a compromise in complexity. The specific problem of coupling a fluid plasma code to a kinetic neutrals code is discussed as an example.
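The core of a JFNK scheme, the Jacobian-vector product computed without ever forming the Jacobian, can be sketched as follows. This is our own toy illustration: on a real coupled-physics problem the matvec would be handed to a Krylov solver such as GMRES with a problem-specific preconditioner, rather than used to assemble the matrix as we do here for a 2x2 system:

```python
import numpy as np

def jv(F, x, v, eps=1e-7):
    """Jacobian-free matvec: J(x) v ~ (F(x + h v) - F(x)) / h."""
    nv = np.linalg.norm(v)
    if nv == 0.0:
        return np.zeros_like(v)
    h = eps / nv
    return (F(x + h * v) - F(x)) / h

def F(x):
    """Toy coupled nonlinear system with root (1, 2)."""
    return np.array([x[0] ** 2 + x[1] - 3.0, x[0] + x[1] ** 2 - 5.0])

# Newton iteration. Only because the system is 2x2 do we assemble J from
# matvecs on unit vectors; a real JFNK code never materializes J.
x = np.array([1.0, 1.0])
for _ in range(20):
    J = np.column_stack([jv(F, x, e) for e in np.eye(2)])
    x = x + np.linalg.solve(J, -F(x))
```

The finite-difference step is scaled by `1/||v||` so the perturbation stays well-conditioned regardless of the magnitude of `v`.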
Evaluating the utility of mid-infrared spectral subspaces for predicting soil properties
Sila, Andrew M.; Shepherd, Keith D.; Pokhariyal, Ganesh P.
2016-01-01
We propose four methods for finding local subspaces in large spectral libraries: (a) cosine angle spectral matching; (b) hit quality index spectral matching; (c) self-organizing maps; and (d) archetypal analysis. We then evaluate prediction accuracies for global and subspace calibration models. These methods were tested on a mid-infrared spectral library containing 1907 soil samples collected from 19 different countries under the Africa Soil Information Service project. Calibration models for pH, Mehlich-3 Ca, Mehlich-3 Al, total carbon and clay soil properties were developed for the whole library and for the subspaces. Root mean square error of prediction was used to evaluate the predictive performance of subspace and global models, and was computed using a one-third-holdout validation set. The effect of pretreating spectra was tested for 1st and 2nd derivative Savitzky-Golay algorithms, multiplicative scatter correction, standard normal variate, and standard normal variate followed by detrending. In summary, the results show that global models outperformed the subspace models; we therefore conclude that global models are more accurate than local models except in a few cases. For instance, sand and clay root mean square error values from local models built with the archetypal analysis method were 50% poorer than the global models, except for subspace models obtained using multiplicative scatter-corrected spectra, which were 12% better. However, the subspace approach provides novel methods for discovering data patterns that may exist in large spectral libraries. PMID:27110048
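The first of the four methods, cosine-angle spectral matching, can be sketched as follows (a minimal illustration for carving a local subspace out of a library; the function name and return convention are ours):

```python
import numpy as np

def cosine_match(library, query, k=5):
    """Return indices of the k library spectra closest in angle to the query.
    library: n_samples x n_wavenumbers; query: n_wavenumbers vector."""
    L = library / np.linalg.norm(library, axis=1, keepdims=True)
    q = query / np.linalg.norm(query)
    return np.argsort(L @ q)[::-1][:k]   # largest cosine similarity first
```

Because the cosine angle is invariant to overall spectral intensity, a scaled copy of a library spectrum matches itself perfectly; the `k` nearest matches define the local subspace used to fit a local calibration model.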
Conjunctive patches subspace learning with side information for collaborative image retrieval.
Zhang, Lining; Wang, Lipo; Lin, Weisi
2012-08-01
Content-Based Image Retrieval (CBIR) has attracted substantial attention during the past few years for its potential practical applications to image management. A variety of Relevance Feedback (RF) schemes have been designed to bridge the semantic gap between the low-level visual features and the high-level semantic concepts for an image retrieval task. Various Collaborative Image Retrieval (CIR) schemes aim to utilize the user historical feedback log data with similar and dissimilar pairwise constraints to improve the performance of a CBIR system. However, existing subspace learning approaches with explicit label information cannot be applied to a CIR task, although subspace learning techniques play a key role in various computer vision tasks, e.g., face recognition and image classification. In this paper, we propose a novel subspace learning framework, i.e., Conjunctive Patches Subspace Learning (CPSL) with side information, for learning an effective semantic subspace by exploiting the user historical feedback log data for a CIR task. The CPSL can effectively integrate the discriminative information of labeled log images, the geometrical information of labeled log images and the weakly similar information of unlabeled images to learn a reliable subspace. We formulate this problem as a constrained optimization problem and then present a new subspace learning technique to exploit the user historical feedback log data. Extensive experiments on both synthetic data sets and a real-world image database demonstrate the effectiveness of the proposed scheme in improving the performance of a CBIR system by exploiting the user historical feedback log data.
Aliabadi, Saeid; Wang, Yuanyuan; Yu, Jinhua; Zhao, Jinxin; Guo, Wei; Zhang, Shun
2016-11-24
Eigenspace-based beamformers, by orthogonal projection onto the signal subspace, can remove a large part of the noise and provide better imaging contrast than the minimum variance beamformer. However, an incorrect estimate of the signal and noise components may introduce dark-spot artifacts and distort the signal intensity. In conventional eigenspace-based beamforming methods, the signal component and the noise and interference components are assumed to be uncorrelated. In ultrasound imaging, however, signal and noise are highly correlated; therefore, oblique projection rather than orthogonal projection should be used in the denoising step of the eigenspace-based beamforming algorithm. In this paper, we propose a novel eigenspace-based beamformer based on oblique subspace projection that accounts for the correlation between signal and noise. The signal-to-interference-plus-noise ratio and an eigendecomposition scheme are investigated to propose a new identification of the signal and noise subspaces. To calculate the beamformer weights, the minimum variance weight vector is projected onto the signal subspace along the noise subspace via an oblique projection matrix. We assessed the performance of the proposed beamformer using both simulated data and real data from a Verasonics system. The results exhibit improved imaging quality in terms of resolution, speckle preservation, contrast, and dynamic range, and show that, in ultrasound imaging, oblique projection is more sensible and effective than orthogonal subspace projection. Better signal and speckle preservation is obtained with oblique projection compared to orthogonal projection, and shadowing artifacts around the hyperechoic targets are eliminated. Implementing the new subspace identification enhances the imaging resolution of the minimum variance beamformer by increasing the signal power in the direction of arrival.
Ischemic preconditioning of one forearm enhances static and dynamic apnea.
Kjeld, Thomas; Rasmussen, Mads Reinholdt; Jattu, Timo; Nielsen, Henning Bay; Secher, Niels Henry
2014-01-01
Ischemic preconditioning enhances ergometer cycling and swimming performance. We evaluated whether ischemic preconditioning of one forearm (four times for 5 min) also affects static breath hold and underwater swimming, whereas the effect of similar preconditioning on ergometer rowing served as a control because the warm-up for rowing regularly encompasses intense exercise and therefore reduced muscle oxygenation. Six divers performed a dry static breath hold, 11 divers swam underwater in an indoor pool, and 14 oarsmen rowed "1000 m" on an ergometer. Ischemic preconditioning reduced the forearm oxygen saturation from 65% ± 7% to 19% ± 7% (mean ± SD; P < 0.001), determined using spatially resolved near-infrared spectroscopy. During the breath hold (315 s, range = 280-375 s), forearm oxygenation decreased to 29% ± 10%; and in preparation for rowing, right thigh oxygenation decreased from 66% ± 7% to 33% ± 14% (P < 0.05). Ischemic preconditioning prolonged the breath hold from 279 ± 72 to 327 ± 39 s and the underwater swimming distance from 110 ± 16 to 119 ± 14 m (P < 0.05), and the rowing time was also reduced (from 186.5 ± 3.6 to 185.7 ± 3.6 s; P < 0.05). We conclude that while the effect of ischemic preconditioning (of one forearm) on ergometer rowing was minimal, probably because of reduced muscle oxygenation during the warm-up, ischemic preconditioning does enhance both static and dynamic apnea, supporting the view that muscle ischemia is an important preparation for physical activity.
Preconditioning Strategies in Elastic Full Waveform Inversion.
NASA Astrophysics Data System (ADS)
Matharu, G.; Sacchi, M. D.
2016-12-01
Elastic full waveform inversion (FWI) is inherently more non-linear than its acoustic counterpart, a property that stems from the increased model space of the problem. Whereas acoustic media can be parametrized by density and P-wave velocity, visco-elastic media are parametrized by density, attenuation and 21 independent coefficients of the elastic tensor. Imposing assumptions of isotropy and perfect elasticity to simplify the physics reduces the number of independent parameters required to characterize a medium. Isotropic, elastic media can be parametrized in terms of density and the Lamé parameters. The different parameters can exhibit trade-offs that manifest in the data; in the context of FWI, this means that certain parameters cannot be uniquely resolved. An ideal model update in full waveform inversion is equivalent to a Newton step, but explicit computation of the Hessian and its inverse is not computationally feasible in elastic FWI. The inverse Hessian scales the gradients to account for trade-offs between parameters as well as compensating for inadequate illumination related to source-receiver coverage. Gradient preconditioners can be applied to mimic the action of the inverse Hessian and partially correct for inaccuracies in the gradient. In this study, we investigate the effects of model reparametrization by recasting a regularized form of the least-squares waveform misfit into a preconditioned formulation. New model parameters are obtained by applying invertible weighting matrices to the model vector. The weighting matrices are related to estimates of the prior model covariance matrix and incorporate information about spatially variant correlations of model parameters as well as correlations between independent parameters. We compare the convergence of conventional FWI to FWI after model reparametrization.
Exercise preconditioning improves traumatic brain injury outcomes.
Taylor, Jordan M; Montgomery, Mitchell H; Gregory, Eugene J; Berman, Nancy E J
2015-10-05
To determine whether 6 weeks of exercise performed prior to traumatic brain injury (TBI) could improve post-TBI behavioral outcomes in mice, and if exercise increases neuroprotective molecules (vascular endothelial growth factor-A [VEGF-A], erythropoietin [EPO], and heme oxygenase-1 [HO-1]) in brain regions responsible for movement (sensorimotor cortex) and memory (hippocampus). 120 mice were randomly assigned to one of four groups: (1) no exercise+no TBI (NOEX-NOTBI [n=30]), (2) no exercise+TBI (NOEX-TBI [n=30]), (3) exercise+no TBI (EX-NOTBI [n=30]), and (4) exercise+TBI (EX-TBI [n=30]). The gridwalk task and radial arm water maze were used to evaluate sensorimotor and cognitive function, respectively. Quantitative real time polymerase chain reaction and immunostaining were performed to investigate VEGF-A, EPO, and HO-1 mRNA and protein expression in the right cerebral cortex and ipsilateral hippocampus. EX-TBI mice displayed reduced post-TBI sensorimotor and cognitive deficits when compared to NOEX-TBI mice. EX-NOTBI and EX-TBI mice showed elevated VEGF-A and EPO mRNA in the cortex and hippocampus, and increased VEGF-A and EPO staining of sensorimotor cortex neurons 1 day post-TBI and/or post-exercise. EX-TBI mice also exhibited increased VEGF-A staining of hippocampal neurons 1 day post-TBI/post-exercise. NOEX-TBI mice demonstrated increased HO-1 mRNA in the cortex (3 days post-TBI) and hippocampus (3 and 7 days post-TBI), but HO-1 was not increased in mice that exercised. Improved TBI outcomes following exercise preconditioning are associated with increased expression of specific neuroprotective genes and proteins (VEGF-A and EPO, but not HO-1) in the brain. Copyright © 2015 Elsevier B.V. All rights reserved.
Heat shock proteins, end effectors of myocardium ischemic preconditioning?
Guisasola, María Concepcion; Desco, Maria del Mar; Gonzalez, Fernanda Silvana; Asensio, Fernando; Dulin, Elena; Suarez, Antonio; Garcia Barreno, Pedro
2006-01-01
The purpose of this study was to investigate (1) whether ischemia-reperfusion increased the content of heat shock protein 72 (Hsp72) transcripts and (2) whether myocardial content of Hsp72 is increased by ischemic preconditioning so that they can be considered as end effectors of preconditioning. Twelve male minipigs (8 protocol, 4 sham) were used, with the following ischemic preconditioning protocol: 3 ischemia and reperfusion 5-minute alternative cycles and last reperfusion cycle of 3 hours. Initial and final transmural biopsies (both in healthy and ischemic areas) were taken in all animals. Heat shock protein 72 messenger ribonucleic acid (mRNA) expression was measured by a semiquantitative reverse transcriptase-polymerase chain reaction (RT-PCR) method using complementary DNA normalized against the housekeeping gene cyclophilin. The identification of heat shock protein 72 was performed by immunoblot. In our “classic” preconditioning model, we found no changes in mRNA hsp72 levels or heat shock protein 72 content in the myocardium after 3 hours of reperfusion. Our experimental model is valid and the experimental techniques are appropriate, but the induction of heat shock proteins 72 as end effectors of cardioprotection in ischemic preconditioning does not occur in the first hours after ischemia, but probably at least 24 hours after it, in the so-called “second protection window.” PMID:17009598
Implementation of Preconditioned Dual-Time Procedures in OVERFLOW
NASA Technical Reports Server (NTRS)
Pandya, Shishir A.; Venkateswaran, Sankaran; Pulliam, Thomas H.; Kwak, Dochan (Technical Monitor)
2003-01-01
Preconditioning methods have become the method of choice for the solution of flowfields involving the simultaneous presence of low Mach and transonic regions. It is well known that these methods are important for ensuring accurate numerical discretization as well as convergence efficiency over various operating conditions such as low Mach number, low Reynolds number and high Strouhal numbers. For unsteady problems, the preconditioning is introduced within a dual-time framework wherein the physical time-derivatives are used to march the unsteady equations and the preconditioned time-derivatives are used for purposes of numerical discretization and iterative solution. In this paper, we describe the implementation of the preconditioned dual-time methodology in the OVERFLOW code. To demonstrate the performance of the method, we employ both simple and practical unsteady flowfields, including vortex propagation in a low Mach number flow, the flowfield of an impulsively started plate (Stokes' first problem) and a cylindrical jet in a low Mach number crossflow with ground effect. All the results demonstrate that the preconditioning algorithm is responsible for improvements to both numerical accuracy and convergence efficiency and thereby enables low Mach number unsteady computations to be performed at a fraction of the cost of traditional time-marching methods.
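The dual-time idea can be illustrated on a scalar model equation: each implicit physical time step is converged by marching an inner pseudo-time iteration until the unsteady residual vanishes. This is a bare-bones sketch (explicit pseudo-time marching, no preconditioning matrix), not OVERFLOW's implementation:

```python
# Dual-time stepping for du/dt = f(u): each physical (implicit Euler) step
# is converged by driving the unsteady residual R(U) to zero in pseudo-time.
def f(u):
    return -5.0 * u                # model physics: exact solution exp(-5 t)

def dual_time_step(u_n, dt, dtau=0.02, max_inner=500, tol=1e-12):
    U = u_n
    for _ in range(max_inner):
        # unsteady residual of the implicit physical step
        R = f(U) - (U - u_n) / dt
        if abs(R) < tol:
            break
        U = U + dtau * R           # pseudo-time march toward R(U) = 0
    return U

u, dt = 1.0, 0.1
for _ in range(10):                # ten physical steps of implicit Euler
    u = dual_time_step(u, dt)
```

Each converged pseudo-time loop reproduces one implicit Euler step, u_{n+1} = u_n / (1 + 5 dt); in a preconditioned flow code the pseudo-time derivative is multiplied by the preconditioning matrix to keep the inner iteration well-conditioned at low Mach number.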
Resveratrol exerts pharmacological preconditioning by activating PGC-1alpha.
Tan, Lan; Yu, Jin-Tai; Guan, Hua-Shi
2008-11-01
Resveratrol (RSV), a polyphenol phytoalexin abundant in grape skins and wines, is currently the focus of intense research as a pharmacological preconditioning agent protecting kidney, heart, and brain from ischemic injury. However, the exact molecular mechanism of RSV preconditioning remains obscure. Data from current studies indicate that pharmacological preconditioning with RSV is attributable to its role as an intracellular antioxidant and anti-inflammatory agent, and to its ability to induce nitric oxide synthase (NOS) expression, to induce angiogenesis, and to increase sirtuin 1 (SIRT1) activity. Peroxisome proliferator-activated receptor (PPAR) gamma coactivator-1alpha (PGC-1alpha) is a member of a family of transcription coactivators involved in mitochondrial biogenesis, antioxidation, growth factor signaling regulation, and angiogenesis. Almost all the signaling pathways activated by RSV involve PGC-1alpha activity, and it has been shown that RSV can increase PGC-1alpha activity. These considerations support the hypothesis that RSV exerts pharmacological preconditioning by activating PGC-1alpha. Attempts to confirm this hypothesis will provide new directions in the study of pharmacological preconditioning and the development of new treatment approaches for reducing the extent of ischemia/reperfusion injury.
The divergent roles of autophagy in ischemia and preconditioning
Sheng, Rui; Qin, Zheng-hong
2015-01-01
Autophagy is an evolutionarily conserved and lysosome-dependent process for degrading and recycling cellular constituents. Autophagy is activated following an ischemic insult or preconditioning, but it may exert dual roles in cell death or survival during these two processes. Preconditioning or lethal ischemia may trigger autophagy via multiple signaling pathways involving endoplasmic reticulum (ER) stress, AMPK/TSC/mTOR, Beclin 1/BNIP3/SPK2, and FoxO/NF-κB transcription factors, etc. Autophagy then interacts with apoptotic and necrotic signaling pathways to regulate cell death. Autophagy may also maintain cell function by removing protein aggregates or damaged mitochondria. To date, the dual roles of autophagy in ischemia and preconditioning have not been fully clarified. The purpose of the present review is to summarize the recent progress in the mechanisms underlying autophagy activation during ischemia and preconditioning. A better understanding of the dual effects of autophagy in ischemia and preconditioning could help to develop new strategies for the preventive treatment of ischemia. PMID:25832421
Xenon preconditioning reduces brain damage from neonatal asphyxia in rats.
Ma, Daqing; Hossain, Mahmuda; Pettet, Garry K J; Luo, Yan; Lim, Ta; Akimov, Stanislav; Sanders, Robert D; Franks, Nicholas P; Maze, Mervyn
2006-02-01
Xenon attenuates on-going neuronal injury in both in vitro and in vivo models of hypoxic-ischaemic injury when administered during and after the insult. In the present study, we sought to investigate whether the neuroprotective efficacy of xenon can be observed when administered before an insult, referred to as 'preconditioning'. In a neuronal-glial cell coculture, preexposure to xenon for 2 h caused a concentration-dependent reduction of lactate dehydrogenase release from cells deprived of oxygen and glucose 24 h later; xenon's preconditioning effect was abolished by cycloheximide, a protein synthesis inhibitor. Preconditioning with xenon decreased propidium iodide staining in a hippocampal slice culture model subjected to oxygen and glucose deprivation. In an in vivo model of neonatal asphyxia involving hypoxic-ischaemic injury to 7-day-old rats, preconditioning with xenon reduced infarction size when assessed 7 days after injury. Furthermore, a sustained improvement in neurologic function was also evident 30 days after injury. Phosphorylated cAMP (cyclic adenosine 3',5'-monophosphate)-response element binding protein (pCREB) was increased by xenon exposure. Also, the prosurvival proteins Bcl-2 and brain-derived neurotrophic factor were upregulated by xenon treatment. These studies provide evidence for xenon's preconditioning effect, which might be caused by a pCREB-regulated synthesis of proteins that promote survival against neuronal injury.
NASA Astrophysics Data System (ADS)
Hu, Xingzhi; Parks, Geoffrey T.; Chen, Xiaoqian; Seshadri, Pranay
2016-03-01
Uncertainty quantification has recently been receiving much attention from the aerospace engineering community. With ever-increasing requirements for robustness and reliability, it is crucial to quantify the multidisciplinary uncertainty in satellite system design, which dominates the overall design direction and cost. However, coupled disciplines and cross propagation hamper the efficiency and accuracy of high-dimensional uncertainty analysis. In this study, an uncertainty quantification methodology based on active subspaces is established for satellite conceptual design. The active subspace effectively reduces the dimension and measures the contributions of input uncertainties. A comprehensive characterization of the associated uncertain factors is made and all subsystem models are built for uncertainty propagation. By integrating a system decoupling strategy, the multidisciplinary uncertainty effect is efficiently represented by a one-dimensional active subspace for each design. The identified active subspace is checked by bootstrap resampling for confidence intervals and verified by Monte Carlo propagation for accuracy. To show the performance of active subspaces, 18 uncertainty parameters of an Earth observation small satellite are exemplified and then another 5 design uncertainties are incorporated. The uncertainties that contribute the most to satellite mass and total cost are ranked, and the quantification of high-dimensional uncertainty is achieved with a relatively small number of support samples. The methodology, at considerably less cost, exhibits high accuracy and strong adaptability, which provides a potential template for tackling multidisciplinary uncertainty in practical satellite systems.
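The active-subspace construction itself is compact: eigendecompose the average outer product of gradients of the quantity of interest; the dominant eigenvectors span the directions along which the output varies most. A minimal sketch on a ridge function with a known one-dimensional active subspace (the model `f` is a stand-in, not a satellite subsystem):

```python
import numpy as np

rng = np.random.default_rng(1)
w = np.array([3.0, 1.0, 0.0, 0.5])
w = w / np.linalg.norm(w)                    # true active direction

def grad_f(x):
    # f(x) = sin(w.x) is a ridge function, so grad f = cos(w.x) * w
    return np.cos(w @ x) * w

# C = E[grad f grad f^T], estimated over sampled input uncertainties
X = rng.standard_normal((2000, 4))
C = np.mean([np.outer(grad_f(x), grad_f(x)) for x in X], axis=0)
eigvals, eigvecs = np.linalg.eigh(C)         # ascending eigenvalues
active_dir = eigvecs[:, -1]                  # one-dimensional active subspace
```

For a true ridge function C is rank one, so the leading eigenvector recovers `w` (up to sign); in the satellite study the same eigendecomposition would be applied to sampled gradients of mass or cost, with bootstrap resampling to attach confidence intervals.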
Fast clustering in linear 1D subspaces: segmentation of microscopic image of unstained specimens
NASA Astrophysics Data System (ADS)
Kopriva, Ivica; Brbić, Maria; Tolić, Dijana; Antulov-Fantulin, Nino; Chen, Xinjian
2017-03-01
Algorithms for subspace clustering (SC) are effective in terms of accuracy but exhibit high computational complexity. We propose an algorithm for SC of (highly) similar data points drawn from a union of linear one-dimensional subspaces that are possibly dependent in the input data space. The algorithm finds a dictionary that represents the data in a reproducing kernel Hilbert space (RKHS). Afterwards, the data are projected into the RKHS using the empirical kernel map (EKM). Due to the dimensionality-expansion effect of the EKM, one-dimensional subspaces become independent in the RKHS. Segmentation into subspaces is realized by applying the max operator to the projected data, which yields a computational complexity that is linear in the number of data points. We prove that for noise-free data the proposed approach yields exact clustering into subspaces. We also prove that the EKM-based projection yields less correlated data points. Due to the nonlinear projection, the proposed method can adapt to linearly nonseparable data points. We demonstrate the accuracy and computational efficiency of the proposed algorithm on a synthetic dataset as well as on segmentation of an image of an unstained specimen in histopathology.
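The max-operator segmentation step can be sketched directly. Assuming the dictionary of one-dimensional subspace directions is already known (the EKM projection step is omitted here), each normalized point is assigned to the direction with the largest absolute projection, which is linear in the number of points:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two one-dimensional subspaces (lines through the origin) in R^2
d1 = np.array([1.0, 0.2]); d1 /= np.linalg.norm(d1)
d2 = np.array([0.2, 1.0]); d2 /= np.linalg.norm(d2)

# 50 noise-free points on each line, at random positive scales
X = np.vstack([np.outer(rng.uniform(0.5, 2.0, 50), d1),
               np.outer(rng.uniform(0.5, 2.0, 50), d2)])

# Max operator: assign each normalized point to the dictionary direction
# with the largest absolute projection -- one pass over the data.
D = np.vstack([d1, d2])
Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
labels = np.argmax(np.abs(Xn @ D.T), axis=1)
```

For noise-free points the assignment is exact, matching the paper's claim; the EKM projection is what makes the same one-pass rule applicable when the one-dimensional subspaces are dependent in the input space.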
Single-shot realization of nonadiabatic holonomic quantum gates in decoherence-free subspaces
NASA Astrophysics Data System (ADS)
Zhao, P. Z.; Xu, G. F.; Ding, Q. M.; Sjöqvist, Erik; Tong, D. M.
2017-06-01
Nonadiabatic holonomic quantum computation in decoherence-free subspaces has attracted increasing attention recently, as it allows for high-speed implementation and combines both the robustness of holonomic gates and the coherence stabilization of decoherence-free subspaces. Since the first protocol of nonadiabatic holonomic quantum computation in decoherence-free subspaces, a number of schemes for its physical implementation have been put forward. However, all previous schemes require two noncommuting gates to realize an arbitrary one-qubit gate, which doubles the exposure time of gates to error sources as well as the resource expenditure. In this paper, we propose an alternative protocol for nonadiabatic holonomic quantum computation in decoherence-free subspaces, in which an arbitrary one-qubit gate in decoherence-free subspaces is realized by a single-shot implementation. The present protocol not only maintains the merits of the original protocol but also avoids the extra work of combining two gates to implement an arbitrary one-qubit gate and thereby reduces the exposure time to various error sources.
Unification of algorithms for minimum mode optimization
NASA Astrophysics Data System (ADS)
Zeng, Yi; Xiao, Penghao; Henkelman, Graeme
2014-01-01
Minimum mode following algorithms are widely used for saddle point searching in chemical and material systems. Common to these algorithms is a component to find the minimum curvature mode of the second derivative, or Hessian matrix. Several methods, including Lanczos, dimer, Rayleigh-Ritz minimization, shifted power iteration, and locally optimal block preconditioned conjugate gradient, have been proposed for this purpose. Each of these methods finds the lowest curvature mode iteratively without calculating the Hessian matrix, since the full matrix calculation is prohibitively expensive in the high dimensional spaces of interest. Here we unify these iterative methods in the same theoretical framework using the concept of the Krylov subspace. The Lanczos method finds the lowest eigenvalue in a Krylov subspace of increasing size, while the other methods search in a smaller subspace spanned by the set of previous search directions. We show that these smaller subspaces are contained within the Krylov space for which the Lanczos method explicitly finds the lowest curvature mode, and hence the theoretical efficiency of the minimum mode finding methods are bounded by the Lanczos method. Numerical tests demonstrate that the dimer method combined with second-order optimizers approaches but does not exceed the efficiency of the Lanczos method for minimum mode optimization.
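A minimal matrix-free version of the common core of these methods: the lowest-curvature mode is extracted with Lanczos (ARPACK's `eigsh`) from finite-difference Hessian-vector products, so the Hessian is never built. The quadratic toy energy is an assumption chosen so the answer is known:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

# Toy energy E(x) = 0.5 x^T H x with one negative curvature direction;
# only gradient (force) calls are used, as in atomistic codes.
H_true = np.diag([-1.0, 2.0, 3.0, 4.0, 5.0, 6.0])

def grad(x):
    return H_true @ x

def lowest_mode(grad, x, eps=1e-6):
    """Lowest-curvature mode via Lanczos using only finite-difference
    Hessian-vector products: H v ~ (grad(x + eps*v) - grad(x)) / eps."""
    g0 = grad(x)
    n = x.size
    Hv = LinearOperator((n, n),
                        matvec=lambda v: (grad(x + eps * v) - g0) / eps)
    vals, vecs = eigsh(Hv, k=1, which='SA')  # smallest algebraic eigenvalue
    return vals[0], vecs[:, 0]

lam, mode = lowest_mode(grad, np.zeros(6))
```

The dimer and related methods restrict the same search to the subspace of previous directions, which, as the abstract notes, is contained in the Krylov subspace this Lanczos call explores.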
Gpu Implementation of Preconditioning Method for Low-Speed Flows
NASA Astrophysics Data System (ADS)
Zhang, Jiale; Chen, Hongquan
2016-06-01
An improved preconditioning method for low-Mach-number flows is implemented on a GPU platform. The improved preconditioning method employs the fluctuation of the fluid variables to weaken the influence on accuracy of the truncation error. The GPU parallel computing platform is used to accelerate the calculations. Details of both the improved preconditioning method and the GPU implementation are described in this paper. A set of typical low-speed flow cases is then simulated for both validation and performance analysis of the resulting GPU solver. Numerical results show that speedups of dozens of times relative to a serial CPU implementation can be achieved using a single GPU desktop platform, which demonstrates that the GPU desktop can serve as a cost-effective parallel computing platform to substantially accelerate CFD simulations of low-speed flows.
Peptide Nanofibers Preconditioned with Stem Cell Secretome Are Renoprotective
Wang, Yin; Bakota, Erica; Chang, Benny H.J.; Entman, Mark; Hartgerink, Jeffrey D.
2011-01-01
Stem cells may contribute to renal recovery following acute kidney injury, and this may occur through their secretion of cytokines, chemokines, and growth factors. Here, we developed an acellular, nanofiber-based preparation of self-assembled peptides to deliver the secretome of embryonic stem cells (ESCs). Using an integrated in vitro and in vivo approach, we found that nanofibers preconditioned with ESCs could reverse cell hyperpermeability and apoptosis in vitro and protect against lipopolysaccharide-induced acute kidney injury in vivo. The renoprotective effect of preconditioned nanofibers associated with an attenuation of Rho kinase activation. We also observed that the combined presence of follistatin, adiponectin, and secretory leukoprotease during preconditioning was essential to the renoprotective properties of the nanofibers. In summary, we developed a designer-peptide nanofiber that can serve as a delivery platform for the beneficial effects of stem cells without the problems of teratoma formation or limited cell engraftment and viability. PMID:21415151
Operator-Based Preconditioning of Stiff Hyperbolic Systems
Reynolds, Daniel R.; Samtaney, Ravi; Woodward, Carol S.
2009-02-09
We introduce an operator-based scheme for preconditioning stiff components encountered in implicit methods for hyperbolic systems of partial differential equations posed on regular grids. The method is based on a directional splitting of the implicit operator, followed by a characteristic decomposition of the resulting directional parts. This approach allows for solution to any number of characteristic components, from the entire system to only the fastest, stiffness-inducing waves. We apply the preconditioning method to stiff hyperbolic systems arising in magnetohydrodynamics and gas dynamics. We then present numerical results showing that this preconditioning scheme works well on problems where the underlying stiffness results from the interaction of fast transient waves with slowly-evolving dynamics, scales well to large problem sizes and numbers of processors, and allows for additional customization based on the specific problems under study.
Cortical spreading depression-induced preconditioning in the brain
Shen, Ping-ping; Hou, Shuai; Ma, Di; Zhao, Ming-ming; Zhu, Ming-qin; Zhang, Jing-dian; Feng, Liang-shu; Cui, Li; Feng, Jia-chun
2016-01-01
Cortical spreading depression is a technique used to depolarize neurons. During focal or global ischemia, cortical spreading depression-induced preconditioning can enhance tolerance of further injury. However, the underlying mechanism for this phenomenon remains relatively unclear. To date, numerous issues exist regarding the experimental model used to precondition the brain with cortical spreading depression, such as the administration route, concentration of potassium chloride, induction time, duration of the protection provided by the treatment, the regional distribution of the protective effect, and the types of neurons responsible for the greater tolerance. In this review, we focus on the mechanisms underlying cortical spreading depression-induced tolerance in the brain, considering excitatory neurotransmission and metabolism, nitric oxide, genomic reprogramming, inflammation, neurotropic factors, and cellular stress response. Specifically, we clarify the procedures and detailed information regarding cortical spreading depression-induced preconditioning and build a foundation for more comprehensive investigations in the field of neural regeneration and clinical application in the future. PMID:28123433
Liquid hydrogen turbopump rapid start program. [thermal preconditioning using coatings
NASA Technical Reports Server (NTRS)
Wong, G. S.
1973-01-01
The objective of this program was to analyze, test, and evaluate methods of achieving rapid start of a liquid hydrogen feed system (inlet duct and turbopump) using a minimum of thermal preconditioning time and propellant. The program was divided into four tasks. Task 1 comprises analytical studies of the testing conducted in the other three tasks. Task 2 describes the results from laboratory testing of coating samples and the successful adherence of a KX-635 coating to the internal surfaces of the feed system tested in Task 4. Task 3 presents results of testing an uncoated feed system. Tank pressure was varied to determine the effect of flowrate on preconditioning. The discharge volume and the discharge pressure which initiates opening of the discharge valve were varied to determine the effect on deadhead (no through-flow) start transients. Task 4 describes results of testing a similar, internally coated feed system and illustrates the savings in preconditioning time and propellant resulting from the coatings.
On adaptive weighted polynomial preconditioning for Hermitian positive definite matrices
NASA Technical Reports Server (NTRS)
Fischer, Bernd; Freund, Roland W.
1992-01-01
The conjugate gradient algorithm for solving Hermitian positive definite linear systems is usually combined with preconditioning in order to speed up convergence. In recent years, there has been a revival of polynomial preconditioning, motivated by the attractive features of the method on modern architectures. Standard techniques for choosing the preconditioning polynomial are based only on bounds for the extreme eigenvalues. Here a different approach is proposed, which aims at adapting the preconditioner to the eigenvalue distribution of the coefficient matrix. The technique is based on the observation that good estimates for the eigenvalue distribution can be derived after only a few steps of the Lanczos process. This information is then used to construct a weight function for a suitable Chebyshev approximation problem. The solution of this problem yields the polynomial preconditioner. In particular, we investigate the use of Bernstein-Szego weights.
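The construction described in this abstract, a few Lanczos steps to estimate the spectrum followed by a Chebyshev polynomial preconditioner inside CG, can be sketched as follows. This is an illustrative NumPy sketch of the unweighted case, not the authors' adaptive weighted method; the interval padding factors, polynomial degree, and iteration counts are assumptions.

```python
import numpy as np

def lanczos_bounds(A, m=15, seed=0):
    """Estimate extreme eigenvalues of a symmetric positive definite A
    from m Lanczos steps; the returned interval is padded for safety."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    q = rng.standard_normal(n); q /= np.linalg.norm(q)
    q_prev, beta = np.zeros(n), 0.0
    alphas, betas = [], []
    for _ in range(m):
        w = A @ q - beta * q_prev
        alpha = q @ w
        w -= alpha * q
        beta = np.linalg.norm(w)
        alphas.append(alpha); betas.append(beta)
        q_prev, q = q, w / beta
    T = np.diag(alphas) + np.diag(betas[:-1], 1) + np.diag(betas[:-1], -1)
    ritz = np.linalg.eigvalsh(T)
    return 0.5 * ritz[0], 1.05 * ritz[-1]   # padded [a, b]

def cheb_precond(A, v, a, b, k=4):
    """Return z = p(A) v, with p a Chebyshev polynomial approximating 1/x
    on [a, b] (k steps of the classical Chebyshev iteration for A z = v)."""
    theta, delta = (b + a) / 2.0, (b - a) / 2.0
    sigma1 = theta / delta
    rho = 1.0 / sigma1
    z, r = np.zeros_like(v), v.copy()
    d = r / theta
    for _ in range(k):
        z = z + d
        r = r - A @ d
        rho_new = 1.0 / (2.0 * sigma1 - rho)
        d = rho_new * rho * d + (2.0 * rho_new / delta) * r
        rho = rho_new
    return z

def pcg(A, b, Minv, tol=1e-8, maxit=200):
    """Preconditioned conjugate gradients; Minv applies the preconditioner."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = Minv(r); p = z.copy(); rz = r @ z
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p; r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = Minv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```

Because the Chebyshev step is a fixed polynomial in A, the preconditioner is symmetric positive definite whenever the padded interval covers the spectrum, so ordinary (non-flexible) CG applies.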
Estimation of direction of arrival of a moving target using subspace based approaches
NASA Astrophysics Data System (ADS)
Ghosh, Ripul; Das, Utpal; Akula, Aparna; Kumar, Satish; Sardana, H. K.
2016-05-01
In this work, array processing techniques based on subspace decomposition of the signal have been evaluated for estimation of the direction of arrival of moving targets using acoustic signatures. Three subspace based approaches are considered - Incoherent Wideband Multiple Signal Classification (IWM), Least Squares ESPRIT (LS-ESPRIT; Estimation of Signal Parameters via Rotational Invariance Techniques) and Total Least Squares ESPRIT (TLS-ESPRIT). Their performance is compared with conventional time delay estimation (TDE) approaches such as Generalized Cross Correlation (GCC) and Average Square Difference Function (ASDF). Performance evaluation has been conducted on experimentally generated data consisting of acoustic signatures of four different types of civilian vehicles moving in defined geometrical trajectories. Mean absolute error and standard deviation of the DOA estimates w.r.t. ground truth are used as performance evaluation metrics. Lower values of mean error confirm the superiority of the subspace based approaches over the TDE based techniques. Amongst the compared methods, LS-ESPRIT indicated better performance.
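As a point of reference for the subspace family evaluated here, a minimal narrowband MUSIC pseudospectrum for a uniform linear array can be sketched as follows. This is an illustrative sketch, not the authors' wideband IWM implementation; the half-wavelength spacing, scan grid, and crude peak picking are assumptions.

```python
import numpy as np

def music_doa(X, n_sources, d=0.5, grid=np.linspace(-90.0, 90.0, 361)):
    """Narrowband MUSIC for a uniform linear array.
    X: (n_sensors, n_snapshots) complex baseband snapshots.
    d: element spacing in wavelengths (assumed half-wavelength).
    Returns estimated DOAs in degrees and the pseudospectrum over grid."""
    n_sensors = X.shape[0]
    R = X @ X.conj().T / X.shape[1]            # sample covariance
    w, V = np.linalg.eigh(R)                   # eigenvalues ascending
    En = V[:, : n_sensors - n_sources]         # noise-subspace basis
    k = np.arange(n_sensors)
    spectrum = []
    for theta in grid:
        a = np.exp(2j * np.pi * d * k * np.sin(np.deg2rad(theta)))
        # pseudospectrum peaks where the steering vector is (near-)orthogonal
        # to the noise subspace
        spectrum.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    spectrum = np.asarray(spectrum)
    # crude peak picking: take the n_sources largest grid values
    peaks = grid[np.argsort(spectrum)[-n_sources:]]
    return np.sort(peaks), spectrum
```

In practice the top-values peak picking should be replaced by a local-maximum search when sources are closely spaced.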
Crevecoeur, Guillaume; Yitembe, Bertrand; Dupre, Luc; Van Keer, Roger
2013-01-01
This paper proposes a modification of the subspace correlation cost function and the Recursively Applied and Projected Multiple Signal Classification (RAP-MUSIC) method for electroencephalography (EEG) source analysis in epilepsy. This makes it possible to reconstruct neural source locations and orientations that are less degraded by uncertain knowledge of the head conductivity values. An extended linear forward model is used in the subspace correlation cost function that incorporates the sensitivity of the EEG potentials to the uncertain conductivity value parameter. More specifically, the principal vector of the subspace correlation function is used to provide relevant information for solving the EEG inverse problem. A simulation study is carried out on a simplified spherical head model with an uncertain skull to soft tissue conductivity ratio. Results show an improvement in the reconstruction accuracy of source parameters compared to the traditional methodology, when using conductivity ratio values that differ from the actual conductivity ratio.
A real-time cardiac surface tracking system using Subspace Clustering.
Singh, Vimal; Tewfik, Ahmed H; Gowreesunker, B
2010-01-01
Catheter based radio frequency ablation of atrial fibrillation requires real-time 3D tracking of cardiac surfaces with sub-millimeter accuracy. To the best of our knowledge, there are no commercial or non-commercial systems capable of doing so. In this paper, a system for high-accuracy 3D tracking of cardiac surfaces in real-time is proposed, and results on a real patient dataset are presented. The proposed system uses a subspace clustering algorithm to identify the potential deformation subspaces for cardiac surfaces during a training phase, using a training set derived from pre-operative MRI scans. In the tracking phase, using low-density outer cardiac surface samples, the active deformation subspace is identified and the complete inner and outer cardiac surfaces are reconstructed in real-time under a least squares formulation.
NASA Astrophysics Data System (ADS)
Zhao, P. Z.; Xu, G. F.; Tong, D. M.
2016-12-01
Nonadiabatic geometric quantum computation in decoherence-free subspaces has received increasing attention due to the merits of its high-speed implementation and robustness against both control errors and decoherence. However, all the previous schemes in this direction have been based on the conventional geometric phases, of which the dynamical phases need to be removed. In this paper, we put forward a scheme of nonadiabatic geometric quantum computation in decoherence-free subspaces based on unconventional geometric phases, of which the dynamical phases do not need to be removed. Specifically, by using three physical qubits undergoing collective dephasing to encode one logical qubit, we realize a universal set of geometric gates nonadiabatically and unconventionally. Our scheme not only maintains all the merits of nonadiabatic geometric quantum computation in decoherence-free subspaces, but also avoids the additional operations required in the conventional schemes to cancel the dynamical phases.
Preconditioning boosts regenerative programmes in the adult zebrafish heart
de Preux Charles, Anne-Sophie; Bise, Thomas; Baier, Felix; Sallin, Pauline; Jaźwińska, Anna
2016-01-01
During preconditioning, exposure to a non-lethal harmful stimulus triggers a body-wide increase of survival and pro-regenerative programmes that enable the organism to better withstand the deleterious effects of subsequent injuries. This phenomenon was first described in the mammalian heart, where it leads to a reduction of infarct size and limits the dysfunction of the injured organ. Despite its important clinical outcome, the actual mechanisms underlying preconditioning-induced cardioprotection remain unclear. Here, we describe two independent models of cardiac preconditioning in the adult zebrafish. As noxious stimuli, we used either a thoracotomy procedure or an induction of sterile inflammation by intraperitoneal injection of immunogenic particles. Similar to mammalian preconditioning, the zebrafish heart displayed increased expression of cardioprotective genes in response to these stimuli. As zebrafish cardiomyocytes have an endogenous proliferative capacity, preconditioning further elevated the re-entry into the cell cycle in the intact heart. This enhanced cycling activity led to a long-term modification of the myocardium architecture. Importantly, the protected phenotype brought beneficial effects for heart regeneration within one week after cryoinjury, such as a more effective cell-cycle re-entry, enhanced reactivation of embryonic gene expression at the injury border, and improved cell survival shortly after injury. This study reveals that exposure to antecedent stimuli induces adaptive responses that render the fish more efficient in the activation of the regenerative programmes following heart damage. Our results open a new field of research by providing the adult zebrafish as a model system to study remote cardiac preconditioning. PMID:27440423
Incomplete block SSOR preconditionings for high order discretizations
Kolotilina, L.
1994-12-31
This paper considers the solution of linear algebraic systems Ax = b resulting from the p-version of the Finite Element Method (FEM) using PCG iterations. Contrary to the h-version, the p-version ensures the desired accuracy of a discretization not by refining an original finite element mesh but by introducing higher degree polynomials as additional basis functions, which permits a reduction in the size of the resulting linear system as compared with the h-version. The suggested preconditionings are the so-called Incomplete Block SSOR (IBSSOR) preconditionings.
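The point (non-block, complete) version of an SSOR preconditioner inside PCG can be sketched as follows. This is a dense illustration of the splitting only: the paper's IBSSOR applies it blockwise and incompletely to p-version FEM matrices, and the relaxation parameter below is an assumption.

```python
import numpy as np

def ssor_preconditioner(A, omega=1.0):
    """Point-SSOR preconditioner for symmetric positive definite A:
    M = (1 / (omega * (2 - omega))) * (D + omega*L) D^{-1} (D + omega*U),
    applied via one forward and one backward triangular solve."""
    D = np.diag(np.diag(A))
    L = np.tril(A, -1)
    U = np.triu(A, 1)
    def apply(r):
        y = np.linalg.solve(D + omega * L, omega * (2.0 - omega) * r)
        return np.linalg.solve(D + omega * U, D @ y)
    return apply

def pcg_solve(A, b, Minv, tol=1e-10, maxit=500):
    """Preconditioned conjugate gradients; returns (x, iteration count)."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = Minv(r)
    p = z.copy()
    rz = r @ z
    for it in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return x, it + 1
        z = Minv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit
```

For 0 < omega < 2 and SPD A, the SSOR matrix M is SPD, so it is a valid CG preconditioner; a production code would of course use sparse triangular sweeps rather than dense solves.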
Fourier analysis of finite element preconditioned collocation schemes
NASA Technical Reports Server (NTRS)
Deville, Michel O.; Mund, Ernest H.
1990-01-01
The spectrum of the iteration operator of some finite element preconditioned Fourier collocation schemes is investigated. The first part of the paper analyses one-dimensional elliptic and hyperbolic model problems and the advection-diffusion equation. Analytical expressions of the eigenvalues are obtained with use of symbolic computation. The second part of the paper considers the set of one-dimensional differential equations resulting from Fourier analysis (in the transverse direction) of the 2-D Stokes problem. All results agree with previous conclusions on the numerical efficiency of finite element preconditioning schemes.
Preconditioning for multiplexed imaging with spatially coded PSFs.
Horisaki, Ryoichi; Tanida, Jun
2011-06-20
We propose a preconditioning method to improve the convergence of iterative reconstruction algorithms in multiplexed imaging based on convolution-based compressive sensing with spatially coded point spread functions (PSFs). The system matrix is converted with a preconditioner matrix to improve its condition number. The preconditioner matrix is calculated by Tikhonov regularization in the frequency domain. The method was demonstrated with simulations and with an experiment on a grating-based range detection system built on the multiplexed imaging framework. The results of the demonstrations showed improved reconstruction fidelity using the proposed preconditioning method.
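For a circular-convolution system the frequency-domain Tikhonov construction reduces to a diagonal operator, which can be sketched as follows. This is an illustrative 1-D sketch, not the paper's method as implemented: the regularization weight and kernel are assumptions, and the paper's systems are 2-D with spatially coded PSFs.

```python
import numpy as np

def tikhonov_preconditioner(h, lam, n):
    """Frequency-domain Tikhonov preconditioner for the circular-convolution
    system y = h * x. Returns a function applying
        P: v -> IFFT[ conj(H) / (|H|^2 + lam) * FFT[v] ],
    i.e. a regularized approximate inverse of the convolution operator
    (lam is the Tikhonov weight, an assumed tuning parameter)."""
    H = np.fft.fft(h, n)
    G = np.conj(H) / (np.abs(H) ** 2 + lam)
    def apply(v):
        return np.real(np.fft.ifft(G * np.fft.fft(v, n)))
    return apply
```

Applied to the forward operator, the preconditioned spectrum becomes |H|^2 / (|H|^2 + lam), which clusters the well-conditioned modes near 1 while damping the near-null modes that Tikhonov regularization suppresses anyway.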
Optimal bounds for solving tridiagonal systems with preconditioning
Zellini, P.
1988-10-01
Let (1) Tx = f be a linear tridiagonal system of n equations in the unknowns x_1, ..., x_n. It is proved that 3n-2 (nonscalar) multiplications/divisions are necessary to solve (1) in a straight-line program excluding divisions by elements of f. This bound is optimal if the cost of preconditioning T is not counted. Analogous results are obtained in the cases where (i) T is bidiagonal and (ii) T and f are both centrosymmetric. The existence of parallel algorithms to solve (1) with preconditioning and with minimal multiplicative redundancy is also discussed.
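For context, ordinary sequential elimination on a tridiagonal system (the Thomas algorithm) can be sketched as follows; the paper's 3n-2 bound concerns the minimal number of nonscalar multiplications/divisions in such straight-line computations. This sketch is not from the paper and assumes no pivoting is needed (e.g. diagonal dominance).

```python
import numpy as np

def thomas_solve(a, b, c, f):
    """Solve the tridiagonal system T x = f, where a (length n-1) is the
    subdiagonal, b (length n) the diagonal, and c (length n-1) the
    superdiagonal. Forward elimination followed by back substitution."""
    n = len(b)
    cp = np.empty(n - 1)
    fp = np.empty(n)
    cp[0] = c[0] / b[0]
    fp[0] = f[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i - 1] * cp[i - 1]
        if i < n - 1:
            cp[i] = c[i] / denom
        fp[i] = (f[i] - a[i - 1] * fp[i - 1]) / denom
    x = np.empty(n)
    x[-1] = fp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = fp[i] - cp[i] * x[i + 1]
    return x
```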
Hyperbaric oxygen preconditioning: a reliable option for neuroprotection
Hu, Qin; Manaenko, Anatol; Matei, Nathanael; Guo, Zhenni; Xu, Ting; Tang, Jiping; Zhang, John H.
2016-01-01
Brain injury is the leading cause of death and disability worldwide and clinically there is no effective therapy for neuroprotection. Hyperbaric oxygen preconditioning (HBO-PC) has been experimentally demonstrated to be neuroprotective in several models and has shown efficiency in patients undergoing on-pump coronary artery bypass graft (CABG) surgery. Compared with other preconditioning stimuli, HBO is benign and has clinically translational potential. In this review, we will summarize the results in experimental brain injury and clinical studies, elaborate the mechanisms of HBO-PC, and discuss regimes and opinions for future interventions in acute brain injury. PMID:27826420
Choice of Variables and Preconditioning for Time Dependent Problems
NASA Technical Reports Server (NTRS)
Turkel, Eli; Vatsa, Veer N.
2003-01-01
We consider the use of low speed preconditioning for time dependent problems. These are solved using a dual time step approach. We consider the effect of this dual time step on the parameter of the low speed preconditioning. In addition, we compare the use of two sets of variables, conservation and primitive variables, to solve the system. We show the effect of these choices on both the convergence to a steady state and the accuracy of the numerical solutions for low Mach number steady state and time dependent flows.
The Subspace Projected Approximate Matrix (SPAM) modification of the Davidson method
Shepard, R.; Tilson, J.L.; Wagner, A.F.; Minkoff, M.
1997-12-31
A modification of the Davidson subspace expansion method, a Ritz approach, is proposed in which the expansion vectors are computed from a "cheap" approximating eigenvalue equation. This approximate eigenvalue equation is assembled using projection operators constructed from the subspace expansion vectors. The method may be implemented using an inner/outer iteration scheme, or it may be implemented by modifying the usual Davidson algorithm in such a way that exact and approximate matrix-vector product computations are interspersed. A multi-level algorithm is proposed in which several levels of approximate matrices are used.
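The baseline that SPAM modifies is the Davidson iteration, which can be sketched for the lowest eigenpair of a symmetric matrix as follows. This is a minimal sketch with the classical diagonal correction; SPAM would instead derive each expansion vector from an approximate projected eigenproblem, and all details below are illustrative assumptions.

```python
import numpy as np

def davidson_lowest(A, tol=1e-8, max_iter=100):
    """Minimal Davidson iteration for the lowest eigenpair of a symmetric
    matrix, using the classical (diag(A) - theta I)^{-1} residual correction
    to generate each new expansion vector."""
    n = A.shape[0]
    d = np.diag(A)
    V = np.zeros((n, 0))
    v = np.zeros(n)
    v[np.argmin(d)] = 1.0                  # start at the smallest diagonal entry
    for _ in range(max_iter):
        # orthonormalize the new direction against the current subspace
        v -= V @ (V.T @ v)
        nv = np.linalg.norm(v)
        if nv < 1e-12:
            break
        V = np.column_stack([V, v / nv])
        # Rayleigh-Ritz in the expanding subspace
        H = V.T @ (A @ V)
        w, S = np.linalg.eigh(H)
        theta, s = w[0], S[:, 0]
        x = V @ s
        r = A @ x - theta * x
        if np.linalg.norm(r) < tol:
            break
        # Davidson correction: diagonally preconditioned residual
        denom = d - theta
        denom[np.abs(denom) < 1e-10] = 1e-10
        v = r / denom
    return theta, x
```

The diagonal correction works well when A is strongly diagonally dominant, which is the regime where Davidson-type methods (and the SPAM refinement of them) are typically applied.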
Preconditioning for Three-Dimensional Two-Fluid NIMROD Applications
NASA Astrophysics Data System (ADS)
Sovinec, C. R.; Held, E. D.
2008-11-01
A new parallel preconditioner implementation for Krylov-space matrix-solves in nonlinear 3D two-fluid simulations with the NIMROD code (nimrodteam.org) is presented. The implementation takes advantage of the typically small perturbation size. The large axisymmetric component provides diagonal dominance for matrices partitioned into Fourier-component blocks. An inner preconditioner iteration using limited Fourier-component coupling in Gauss-Seidel-like relaxation with diagonal blocks solved by SuperLU-DIST [Li and Demmel, ACM Trans. Math. Software 29, 110 (2003)] is then effective for the two-fluid magnetic advance, provided that a realistic level of electron inertia is used to limit R-mode frequencies. Generating and multiplying matrix elements for a limited number of off-diagonal blocks is shown to scale with processor number as the number of Fourier components is increased in research-relevant sawtooth and ELM computations.
33 CFR 183.220 - Preconditioning for tests.
Code of Federal Regulations, 2013 CFR
2013-07-01
... and outside of the boat, either over the sides, through a hull opening, or both. Entrapped air in the... (CONTINUED) BOATING SAFETY BOATS AND ASSOCIATED EQUIPMENT Flotation Requirements for Outboard Boats Rated for Engines of More Than 2 Horsepower General § 183.220 Preconditioning for tests. A boat must meet the...
33 CFR 183.220 - Preconditioning for tests.
Code of Federal Regulations, 2010 CFR
2010-07-01
... and outside of the boat, either over the sides, through a hull opening, or both. Entrapped air in the... (CONTINUED) BOATING SAFETY BOATS AND ASSOCIATED EQUIPMENT Flotation Requirements for Outboard Boats Rated for Engines of More Than 2 Horsepower General § 183.220 Preconditioning for tests. A boat must meet the...
33 CFR 183.320 - Preconditioning for tests.
Code of Federal Regulations, 2010 CFR
2010-07-01
... to flow between the inside and the outside of the boat, either over the sides, through a hull opening... (CONTINUED) BOATING SAFETY BOATS AND ASSOCIATED EQUIPMENT Flotation Requirements for Outboard Boats Rated for Engines of 2 Horsepower or Less General § 183.320 Preconditioning for tests. A boat must meet the...
33 CFR 183.320 - Preconditioning for tests.
Code of Federal Regulations, 2013 CFR
2013-07-01
... to flow between the inside and the outside of the boat, either over the sides, through a hull opening... (CONTINUED) BOATING SAFETY BOATS AND ASSOCIATED EQUIPMENT Flotation Requirements for Outboard Boats Rated for Engines of 2 Horsepower or Less General § 183.320 Preconditioning for tests. A boat must meet the...
33 CFR 183.320 - Preconditioning for tests.
Code of Federal Regulations, 2011 CFR
2011-07-01
... to flow between the inside and the outside of the boat, either over the sides, through a hull opening... (CONTINUED) BOATING SAFETY BOATS AND ASSOCIATED EQUIPMENT Flotation Requirements for Outboard Boats Rated for Engines of 2 Horsepower or Less General § 183.320 Preconditioning for tests. A boat must meet the...
Improvement in computational fluid dynamics through boundary verification and preconditioning
NASA Astrophysics Data System (ADS)
Folkner, David E.
This thesis provides improvements to computational fluid dynamics accuracy and efficiency through two main methods: a new boundary condition verification procedure and preconditioning techniques. First, a new verification approach that addresses boundary conditions was developed. In order to apply the verification approach to a large range of arbitrary boundary conditions, it was necessary to develop a unifying mathematical formulation. A framework was developed that allows for the application of Dirichlet, Neumann, and extrapolation boundary conditions, or in some cases the equations of motion directly. Verification of boundary condition techniques was performed using exact solutions from canonical fluid dynamic test cases. Second, to reduce computation time and improve accuracy, preconditioning algorithms were applied via artificial dissipation schemes. A new convective upwind and split pressure (CUSP) scheme was devised and was shown to be more effective than traditional preconditioning schemes in certain scenarios. The new scheme was compared with traditional schemes for unsteady flows in which both convective and acoustic effects dominated. Both boundary conditions and preconditioning algorithms were implemented in the context of a "strand grid" solver. While not the focus of this thesis, strand grids provide automatic viscous quality meshing and are suitable for moving mesh overset problems.
Applying polynomial filtering to mass preconditioned Hybrid Monte Carlo
NASA Astrophysics Data System (ADS)
Haar, Taylor; Kamleh, Waseem; Zanotti, James; Nakamura, Yoshifumi
2017-06-01
The use of mass preconditioning or Hasenbusch filtering in modern Hybrid Monte Carlo simulations is common. At light quark masses, multiple filters (three or more) are typically used to reduce the cost of generating dynamical gauge fields; however, the task of tuning a large number of Hasenbusch mass terms is non-trivial. The use of short polynomial approximations to the inverse has been shown to provide an effective UV filter for HMC simulations. In this work we investigate the application of polynomial filtering to the mass preconditioned Hybrid Monte Carlo algorithm as a means of introducing many time scales into the molecular dynamics integration with a simplified parameter tuning process. A generalized multi-scale integration scheme that permits arbitrary step-sizes and can be applied to Omelyan-style integrators is also introduced. We find that polynomial-filtered mass-preconditioning (PF-MP) performs as well as or better than standard mass preconditioning, with significantly less fine tuning required.
40 CFR 1066.407 - Vehicle preparation and preconditioning.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 34 2012-07-01 2012-07-01 false Vehicle preparation and...) AIR POLLUTION CONTROLS VEHICLE-TESTING PROCEDURES Vehicle Preparation and Running a Test § 1066.407 Vehicle preparation and preconditioning. This section describes steps to take before measuring exhaust...
40 CFR 1066.407 - Vehicle preparation and preconditioning.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 34 2013-07-01 2013-07-01 false Vehicle preparation and...) AIR POLLUTION CONTROLS VEHICLE-TESTING PROCEDURES Vehicle Preparation and Running a Test § 1066.407 Vehicle preparation and preconditioning. This section describes steps to take before measuring exhaust...
33 CFR 183.220 - Preconditioning for tests.
Code of Federal Regulations, 2011 CFR
2011-07-01
... (CONTINUED) BOATING SAFETY BOATS AND ASSOCIATED EQUIPMENT Flotation Requirements for Outboard Boats Rated for Engines of More Than 2 Horsepower General § 183.220 Preconditioning for tests. A boat must meet the... boat. (b) The boat must be loaded with a quantity of weight that, when submerged, is equal to the sum...
Preconditioning with cobalt chloride modifies pain perception in mice.
Alexa, Teodora; Luca, Andrei; Dondas, Andrei; Bohotin, Catalina Roxana
2015-04-01
Cobalt chloride (CoCl2) modifies mitochondrial permeability and has a hypoxic-mimetic effect; thus, the compound induces tolerance to ischemia and increases resistance to a number of injury types. The aim of the present study was to investigate the effects of CoCl2 hypoxic preconditioning for three weeks on thermonociception, somatic and visceral inflammatory pain, locomotor activity and coordination in mice. A significant pronociceptive effect was observed in the hot plate and tail flick tests after one and two weeks of CoCl2 administration, respectively (P<0.001). Thermal hyperalgesia (Plantar test) was present in the first week, but recovered by the end of the experiment. Contrary to the hyperalgesic effect on thermonociception, CoCl2 hypoxic preconditioning decreased the time spent grooming the affected area in the second phase of the formalin test on the orofacial and paw models. The first phase of formalin-induced pain and the writhing test were not affected by CoCl2 preconditioning. Thus, the present study demonstrated that CoCl2 preconditioning has a dual effect on pain, and these effects should be taken into account along with the better-known neuro-, cardio- and renoprotective effects of CoCl2.
40 CFR 86.232-94 - Vehicle preconditioning.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 18 2010-07-01 2010-07-01 false Vehicle preconditioning. 86.232-94... (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES Emission Regulations for 1994 and Later Model Year Gasoline-Fueled New Light-Duty Vehicles, New Light-Duty Trucks and New...
40 CFR 86.1774-99 - Vehicle preconditioning.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 20 2012-07-01 2012-07-01 false Vehicle preconditioning. 86.1774-99... (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES (CONTINUED) General Provisions for the Voluntary National Low Emission Vehicle Program for Light-Duty Vehicles and...
40 CFR 86.1774-99 - Vehicle preconditioning.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 19 2011-07-01 2011-07-01 false Vehicle preconditioning. 86.1774-99... (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES (CONTINUED) General Provisions for the Voluntary National Low Emission Vehicle Program for Light-Duty Vehicles and...
40 CFR 86.1774-99 - Vehicle preconditioning.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 20 2013-07-01 2013-07-01 false Vehicle preconditioning. 86.1774-99... (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES (CONTINUED) General Provisions for the Voluntary National Low Emission Vehicle Program for Light-Duty Vehicles and...
40 CFR 86.1774-99 - Vehicle preconditioning.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 19 2010-07-01 2010-07-01 false Vehicle preconditioning. 86.1774-99... (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES (CONTINUED) General Provisions for the Voluntary National Low Emission Vehicle Program for Light-Duty Vehicles and...
33 CFR 183.220 - Preconditioning for tests.
Code of Federal Regulations, 2014 CFR
2014-07-01
... Engines of More Than 2 Horsepower General § 183.220 Preconditioning for tests. A boat must meet the... be keel down in the water. (g) The boat must be swamped, allowing water to flow between the inside... flooded portion of the boat must be eliminated. (h) Water must flood the two largest air chambers and...
33 CFR 183.220 - Preconditioning for tests.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Engines of More Than 2 Horsepower General § 183.220 Preconditioning for tests. A boat must meet the... be keel down in the water. (g) The boat must be swamped, allowing water to flow between the inside... flooded portion of the boat must be eliminated. (h) Water must flood the two largest air chambers and...
40 CFR 1066.405 - Vehicle preparation and preconditioning.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 33 2014-07-01 2014-07-01 false Vehicle preparation and preconditioning. 1066.405 Section 1066.405 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS VEHICLE-TESTING PROCEDURES Preparing Vehicles and Running an Exhaust Emission...
No influence of ischemic preconditioning on running economy.
Kaur, Gungeet; Binger, Megan; Evans, Claire; Trachte, Tiffany; Van Guilder, Gary P
2017-02-01
Many of the potential performance-enhancing properties of ischemic preconditioning suggest that the oxygen cost for a given endurance exercise workload will be reduced, thereby improving the economy of locomotion. The aim of this study was to identify whether ischemic preconditioning improves exercise economy in recreational runners. A randomized sham-controlled crossover study was employed in which 18 adults (age 27 ± 7 years; BMI 24.6 ± 3 kg/m²) completed two incremental submaximal (65-85% VO2max) treadmill running protocols (3 × 5 min stages from 7.2-14.5 km/h) coupled with indirect calorimetry to assess running economy following ischemic preconditioning (3 × 5 min bilateral upper thigh ischemia) and sham control. Running economy was expressed as mlO2/kg/km and as the energy in kilocalories required to cover 1 km of horizontal distance (kcal/kg/km). Ischemic preconditioning did not influence steady-state heart rate, oxygen consumption, minute ventilation, respiratory exchange ratio, energy expenditure, or blood lactate. Likewise, running economy was similar (P = 0.647) between the sham (from 201.6 ± 17.7 to 204.0 ± 16.1 mlO2/kg/km) and ischemic preconditioning trials (from 202.8 ± 16.2 to 203.1 ± 15.6 mlO2/kg/km). There was no influence (P = 0.21) of ischemic preconditioning on running economy expressed as the caloric unit cost (from 0.96 ± 0.12 to 1.01 ± 0.11 kcal/kg/km) compared with sham (from 1.00 ± 0.10 to 1.00 ± 0.08 kcal/kg/km). The properties of ischemic preconditioning thought to affect exercise performance at vigorous to severe exercise intensities, which generate more extensive physiological challenge, are ineffective at submaximal workloads and, therefore, do not change running economy.
Time-derivative preconditioning method for multicomponent flow
NASA Astrophysics Data System (ADS)
Housman, Jeffrey Allen
A time-derivative preconditioned system of equations suitable for the numerical simulation of single component and multicomponent inviscid flows at all speeds is formulated. The system is shown to be hyperbolic in time and remain well-posed at low Mach numbers, allowing an efficient time marching solution strategy to be utilized from transonic to incompressible flow speeds. For multicomponent flow at low speed, a preconditioned nonconservative discretization scheme is described which preserves pressure and velocity equilibrium across fluid interfaces, handles sharp liquid/gas interfaces with large density ratios, while remaining well-conditioned for time marching methods. The method is then extended to transonic and supersonic flows using a hybrid conservative/nonconservative formulation which retains the pressure/velocity equilibrium property and converges to the correct weak solution when shocks are present. In order to apply the proposed model to complex flow applications, the overset grid methodology is used where the equations are transformed to a nonorthogonal curvilinear coordinate system and discretized on structured body-fitted curvilinear grids. The multicomponent model and its extension to homogeneous multiphase mixtures is discussed and the hyperbolicity of the governing equations is demonstrated. Low Mach number perturbation analysis is then performed on the system of equations and a local time-derivative preconditioning matrix is derived allowing time marching numerical methods to remain efficient at low speeds. Next, a particular time marching numerical method is presented along with three discretization schemes for the convective terms. These include a conservative preconditioned Roe type method, a nonconservative preconditioned Split Coefficient Matrix (SCM) method, and hybrid formulation which combines the conservative and nonconservative schemes using a simple switching function. A characteristic boundary treatment which includes time
Zhang Xinding; Zhang Qinghua; Wang, Z. D.
2006-09-15
We propose a feasible scheme to achieve holonomic quantum computation in a decoherence-free subspace (DFS) with trapped ions. By the application of appropriate bichromatic laser fields on the designated ions, we are able to construct two noncommutable single-qubit gates and one controlled-phase gate using the holonomic scenario in the encoded DFS.
Jankovic, Marko; Ogawa, Hidemitsu
2004-10-01
Principal Component Analysis (PCA) and Principal Subspace Analysis (PSA) are classic techniques in statistical data analysis, feature extraction and data compression. Given a set of multivariate measurements, PCA and PSA provide a smaller set of "basis vectors" with less redundancy, and a subspace spanned by them, respectively. Artificial neurons and neural networks have been shown to perform PSA and PCA when gradient ascent (descent) learning rules are used, which is related to the constrained maximization (minimization) of statistical objective functions. Due to their low complexity, such algorithms and their implementation in neural networks are potentially useful in cases of tracking slow changes of correlations in the input data or in updating eigenvectors with new samples. In this paper we propose a PCA learning algorithm that is fully homogeneous with respect to neurons. The algorithm is obtained by modification of one of the most famous PSA learning algorithms - the Subspace Learning Algorithm (SLA). The modification is based on the Time-Oriented Hierarchical Method (TOHM), which uses two distinct time scales. On the faster time scale, the PSA algorithm is responsible for the "behavior" of all output neurons. On the slower scale, output neurons compete for the fulfillment of their "own interests". On this scale, basis vectors in the principal subspace are rotated toward the principal eigenvectors. At the end of the paper it is briefly analyzed how (and why) the time-oriented hierarchical method can be used to transform any of the existing neural network PSA methods into PCA methods.
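The unmodified SLA update that the paper starts from is Oja's subspace rule, which can be sketched as follows. This is an illustrative sketch only; the step size, data model, and iteration count are assumptions. Note that the rule converges to an orthonormal basis of the principal subspace, not to the individual eigenvectors, which is exactly the homogeneity that the TOHM modification is designed to break.

```python
import numpy as np

def sla_update(W, x, eta):
    """One step of Oja's Subspace Learning Algorithm (SLA):
        y = W^T x
        W <- W + eta * (x y^T - W y y^T)
    The columns of W converge to an orthonormal basis of the principal
    subspace of the input covariance (a rotated basis, in general)."""
    y = W.T @ x
    return W + eta * (np.outer(x, y) - W @ np.outer(y, y))
```

A typical use is online: feed one sample per step with a small constant (or slowly decaying) learning rate eta.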
Robust normal estimation of point cloud with sharp features via subspace clustering
NASA Astrophysics Data System (ADS)
Luo, Pei; Wu, Zhuangzhi; Xia, Chunhe; Feng, Lu; Jia, Bo
2014-01-01
Normal estimation is an essential step in point cloud based geometric processing, such as high quality point based rendering and surface reconstruction. In this paper, we present a clustering based method for normal estimation which preserves sharp features. For a piecewise smooth point cloud, the k-nearest neighbors of one point lie on a union of multiple subspaces. Given the PCA normals as input, we perform a subspace clustering algorithm to segment these subspaces. Normals are estimated from the points lying in the same subspace as the center point. In contrast to previous methods, we exploit the low-rankness of the input data by seeking the lowest rank representation among all the candidates that can represent one normal as a linear combination of the others. Integration of Low-Rank Representation (LRR) makes our method robust to noise. Moreover, our method can simultaneously produce the estimated normals and the local structures, which are especially useful for denoising and segmentation applications. The experimental results show that our approach successfully recovers sharp features and generates more reliable results compared with the state-of-the-art.
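The PCA normals taken as input by this method can be sketched as follows. This is a brute-force baseline sketch, not the paper's subspace-clustering refinement; the neighborhood size is an assumption, and it is precisely at sharp features, where a k-neighborhood straddles two surfaces, that this baseline fails and the clustering step is needed.

```python
import numpy as np

def pca_normals(points, k=10):
    """Baseline PCA normal estimation: the normal at each point is the
    eigenvector of the local k-NN covariance with the smallest eigenvalue.
    points: (n, 3) array. Returns (n, 3) unit normals (sign is arbitrary)."""
    n = len(points)
    normals = np.empty_like(points)
    # brute-force pairwise distances (fine for a sketch; use a k-d tree in practice)
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    for i in range(n):
        nbrs = points[np.argsort(d2[i])[:k]]    # k nearest, including the point itself
        cov = np.cov(nbrs.T)
        w, V = np.linalg.eigh(cov)              # eigenvalues ascending
        normals[i] = V[:, 0]                    # smallest-variance direction
    return normals
```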
Large-margin predictive latent subspace learning for multiview data analysis.
Chen, Ning; Zhu, Jun; Sun, Fuchun; Xing, Eric P.
2012-12-01
Learning salient representations of multiview data is an essential step in many applications such as image classification, retrieval, and annotation. Standard predictive methods, such as support vector machines, often use all the available features directly, without taking into consideration the presence of distinct views and the resultant view dependencies, coherence, and complementarity that offer key insights into the semantics of the data; they therefore deliver weak performance and cannot support view-level analysis. This paper presents a statistical method to learn a predictive subspace representation underlying multiple views, leveraging both multiview dependencies and the availability of supervising side information. Our approach is based on a multiview latent subspace Markov network (MN) which fulfills a weak conditional independence assumption: multiview observations and response variables are conditionally independent given a set of latent variables. To learn the latent subspace MN, we develop a large-margin approach which jointly maximizes the data likelihood and minimizes a prediction loss on training data. Learning and inference are done efficiently with a contrastive divergence method. Finally, we extensively evaluate the large-margin latent MN on real image and hotel review datasets for classification, regression, image annotation, and retrieval. Our results demonstrate that the large-margin approach can achieve significant improvements in terms of prediction performance and discovering predictive latent subspace representations.
Adaptive Projection Subspace Dimension for the Thick-Restart Lanczos Method
Yamazaki, Ichitaro; Bai, Zhaojun; Simon, Horst; Wang, Lin-Wang; Wu, K.
2008-10-01
The Thick-Restart Lanczos (TRLan) method is an effective method for solving large-scale Hermitian eigenvalue problems. However, its performance strongly depends on the dimension of the projection subspace. In this paper, we propose an objective function to quantify the effectiveness of a chosen subspace dimension, and then introduce an adaptive scheme to dynamically adjust the dimension at each restart. An open-source software package, nu-TRLan, which implements the TRLan method with this adaptive projection subspace dimension, is available in the public domain. Numerical results on synthetic eigenvalue problems are presented to demonstrate that nu-TRLan achieves speedups of between 0.9 and 5.1 over the static method using a default subspace dimension. To demonstrate the effectiveness of nu-TRLan in a real application, we apply it to the electronic structure calculations of quantum dots. We show that nu-TRLan can achieve speedups greater than 1.69 over the state-of-the-art eigensolver for this application, which is based on the Conjugate Gradient method with a powerful preconditioner.
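For orientation, the plain Lanczos process that TRLan restarts can be sketched as follows; thick restarting and the adaptive choice of the subspace dimension are the layers nu-TRLan adds and are not shown. A minimal sketch with full reorthogonalization, on an illustrative diagonal test matrix:

```python
import numpy as np

def lanczos_ritz(A, v0, m):
    # m-step Lanczos with full reorthogonalization; returns the Ritz
    # values (eigenvalues of the tridiagonal projection T).
    n = len(v0)
    V = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(m - 1)
    v = v0 / np.linalg.norm(v0)
    for j in range(m):
        V[:, j] = v
        w = A @ v
        alpha[j] = v @ w
        w -= V[:, :j + 1] @ (V[:, :j + 1].T @ w)   # full reorthogonalization
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            v = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return np.linalg.eigvalsh(T)

# Diagonal test matrix with one well-separated eigenvalue: 20 steps are
# ample for the extreme Ritz value to converge to it.
rng = np.random.default_rng(2)
A = np.diag(np.concatenate([[10.0], np.linspace(0.0, 1.0, 199)]))
ritz = lanczos_ritz(A, rng.normal(size=200), 20)
```

The dependence of convergence on the step count m is exactly the sensitivity that motivates choosing the subspace dimension adaptively at each restart.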
Cai, Xianfa; Wei, Jia; Wen, Guihua; Yu, Zhiwen
2014-03-01
Precise cancer classification is essential to the successful diagnosis and treatment of cancers. Although semisupervised dimensionality reduction approaches perform very well on clean datasets, the topology of the neighborhood constructed with most existing approaches is unstable in the presence of high-dimensional data with noise. To solve this problem, we propose a novel local- and global-preserving semisupervised dimensionality reduction algorithm based on random subspaces, denoted RSLGSSDR. The algorithm first designs multiple diverse graphs on different random subspaces of the dataset and then fuses these graphs into a mixture graph on which dimensionality reduction is performed. Because the mixture graph is constructed in lower-dimensional spaces, it eases the difficulties of graph construction for high-dimensional samples and, through the diversity of the random subspaces, can capture the complicated geometric distribution of the data. Experimental results on public gene expression datasets demonstrate that the proposed RSLGSSDR not only has superior recognition performance to competitive methods, but is also robust against a wide range of values of the input parameters.
Detecting and characterizing coal mine related seismicity in the Western U.S. using subspace methods
NASA Astrophysics Data System (ADS)
Chambers, Derrick J. A.; Koper, Keith D.; Pankow, Kristine L.; McCarter, Michael K.
2015-11-01
We present an approach for subspace detection of small seismic events that includes methods for estimating magnitudes and associating detections from multiple stations into unique events. The process is used to identify mining related seismicity from a surface coal mine and an underground coal mining district, both located in the Western U.S. Using a blasting log and a locally derived seismic catalogue as ground truth, we assess detector performance in terms of verified detections, false positives and failed detections. We are able to correctly identify over 95 per cent of the surface coal mine blasts and about 33 per cent of the events from the underground mining district, while keeping the number of potential false positives relatively low by requiring all detections to occur on two stations. We find that most of the potential false detections for the underground coal district are genuine events missed by the local seismic network, demonstrating the usefulness of regional subspace detectors in augmenting local catalogues. We note a trade-off in detection performance between stations at smaller source-receiver distances, which have increased signal-to-noise ratio, and stations at larger distances, which have greater waveform similarity. We also explore the increased detection capabilities of a single higher dimension subspace detector, compared to multiple lower dimension detectors, in identifying events that can be described as linear combinations of training events. We find, in our data set, that such an advantage can be significant, justifying the use of a subspace detection scheme over conventional correlation methods.
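The core of a subspace detector can be illustrated compactly: an orthonormal basis is extracted from the SVD of training waveforms, and the detection statistic is the fraction of a candidate window's energy captured by that subspace. The sketch below is a generic illustration on synthetic waveforms, not the authors' processing chain (no magnitude estimation or multi-station association):

```python
import numpy as np

rng = np.random.default_rng(3)

# Design set: noisy linear combinations of two template waveforms.
t = np.linspace(0.0, 1.0, 400)
w1 = np.sin(9 * np.pi * t) * np.exp(-4 * t)
w2 = np.sin(5 * np.pi * t) * np.exp(-2 * t)
train = np.stack([a * w1 + b * w2 + 0.05 * rng.normal(size=t.size)
                  for a, b in [(1, 0), (0, 1), (0.7, 0.7), (1, -0.5)]])

# Subspace detector: top-d left singular vectors of the design set; the
# statistic is the fraction of window energy captured by the subspace.
U = np.linalg.svd(train.T, full_matrices=False)[0][:, :2]

def detection_statistic(x):
    return np.linalg.norm(U.T @ x) ** 2 / np.linalg.norm(x) ** 2

# A new event that is a linear combination of the training events scores
# near 1; background noise scores near d/n (here 2/400).
event = 0.4 * w1 - 0.8 * w2 + 0.05 * rng.normal(size=t.size)
noise = rng.normal(size=t.size)
```

A single correlation (matched-filter) detector is the d = 1 special case; the advantage the paper reports for higher-dimensional subspaces comes from events like `event` above that match no single template but lie in the span of several.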
An analogue of the Littlewood-Paley theorem for orthoprojectors onto wavelet subspaces
NASA Astrophysics Data System (ADS)
Kudryavtsev, S. N.
2016-06-01
We prove an analogue of the Littlewood-Paley theorem for orthoprojectors onto wavelet subspaces corresponding to a non-isotropic multiresolution analysis generated by the tensor product of smooth scaling functions of one variable with sufficiently rapid decay at infinity.
Hesse, Christian W
2007-01-01
Accurate estimates of the dimension and an (orthogonal) basis of the signal subspace of noise-corrupted multi-channel measurements are essential for accurate identification and extraction of any signals of interest within that subspace. For most biomedical signals comprising very large numbers of channels, including the magnetoencephalogram (MEG), the "true" number of underlying signals, although ultimately unknown, is unlikely to be of the same order as the number of measurements, and has to be estimated from the available data. This work examines several second-order statistical approaches to signal subspace (dimension) estimation with respect to their underlying assumptions and their performance in high-dimensional measurement spaces using 151-channel MEG data. The purpose is to identify which of these methods might be most appropriate for modeling the signal subspace structure of high-density MEG data recorded under controlled conditions, and what the practical consequences are with regard to the subsequent application of biophysical modeling and statistical source separation techniques.
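One widely used second-order criterion of the kind compared here is the Wax-Kailath MDL rule, which picks the dimension minimizing a description-length trade-off over the sample covariance eigenvalues. A minimal sketch on synthetic multichannel data (the function name and test setup are illustrative; the paper's MEG evaluation is far more involved):

```python
import numpy as np

def mdl_dimension(X):
    # Wax-Kailath MDL estimate of the signal subspace dimension from the
    # eigenvalues of the sample covariance (rows of X are observations).
    N, n = X.shape
    lam = np.sort(np.linalg.eigvalsh(np.cov(X.T)))[::-1]
    scores = []
    for k in range(n):
        tail = lam[k:]                                  # noise eigenvalues
        log_ratio = np.log(tail.mean()) - np.log(tail).mean()  # log(arith/geom)
        scores.append(N * (n - k) * log_ratio
                      + 0.5 * k * (2 * n - k) * np.log(N))
    return int(np.argmin(scores))

# 10-channel synthetic recording driven by 3 latent sources plus noise.
rng = np.random.default_rng(4)
N, n, d = 2000, 10, 3
mixing = rng.normal(size=(n, d))
X = 2.0 * rng.normal(size=(N, d)) @ mixing.T + rng.normal(size=(N, n))
```

The criterion assumes white noise of equal variance on all channels; violations of that assumption in real MEG data are among the issues the paper examines.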
Metabolomic Analysis of Two Different Models of Delayed Preconditioning
Bravo, Claudio; Kudej, Raymond K.; Yuan, Chujun; Yoon, Seonghun; Ge, Hui; Park, Ji Yeon; Tian, Bin; Stanley, William C.; Vatner, Stephen F.; Vatner, Dorothy E.; Yan, Lin
2013-01-01
Recently we described an ischemic preconditioning induced by repetitive coronary stenosis, elicited by 6 episodes of non-lethal ischemia over 3 days, which also resembles the hibernating myocardium phenotype. When this model was compared with the traditional second window of ischemic preconditioning using DNA microarrays, many of the genes that differed in the repetitive coronary stenosis appeared targeted to metabolism. Accordingly, the goal of this study was to provide a more in-depth analysis of changes in metabolism in the two models of delayed preconditioning, i.e., second window and repetitive coronary stenosis. This was accomplished using a metabolomic approach based on liquid chromatography-mass spectrometry (LC-MS) and gas chromatography-mass spectrometry (GC-MS) techniques. Myocardial samples from the ischemic section of porcine hearts subjected to both models of late preconditioning were compared against sham controls. Interestingly, although both models involve delayed preconditioning, their metabolic signatures were radically different: of the 135 metabolites that changed in either model, only 7 changed in both, and significantly more, p<0.01, were altered in the repetitive coronary stenosis (40%) than in the second window (8.1%). The most significant changes observed were in energy metabolism, e.g., phosphocreatine was increased 4-fold and creatine kinase activity increased by 27.2%, a pattern opposite that of heart failure, suggesting that the repetitive coronary stenosis and potentially hibernating myocardium have enhanced stress resistance capabilities. The improved energy metabolism could also be a key mechanism contributing to the cardioprotection observed in the repetitive coronary stenosis and in hibernating myocardium. PMID:23127662
Priming of the Cells: Hypoxic Preconditioning for Stem Cell Therapy.
Wei, Zheng Z; Zhu, Yan-Bing; Zhang, James Y; McCrary, Myles R; Wang, Song; Zhang, Yong-Bo; Yu, Shan-Ping; Wei, Ling
2017-10-05
Stem cell-based therapies are promising in regenerative medicine for protecting and repairing damaged brain tissues after injury or in the context of chronic diseases. Hypoxia can induce physiological and pathological responses. A hypoxic insult might act as a double-edged sword: it induces cell death and brain damage, but on the other hand, sublethal hypoxia can trigger an adaptation response called hypoxic preconditioning or hypoxic tolerance that is of immense importance for the survival of cells and tissues. This review was based on articles published in PubMed databases up to August 16, 2017, with the following keywords: "stem cells," "hypoxic preconditioning," "ischemic preconditioning," and "cell transplantation." Original articles and critical reviews on these topics were selected. Hypoxic preconditioning has been investigated as a primary endogenous protective mechanism and possible treatment against ischemic injuries. Many cellular and molecular mechanisms underlying the protective effects of hypoxic preconditioning have been identified. In cell transplantation therapy, hypoxic pretreatment of stem cells and neural progenitors markedly increases the survival and regenerative capabilities of these cells in the host environment, leading to enhanced therapeutic effects in various disease models. Regenerative treatments can mobilize endogenous stem cells for neurogenesis and angiogenesis in the adult brain. Furthermore, transplantation of stem cells/neural progenitors achieves therapeutic benefits via cell replacement and/or increased trophic support. Combinatorial approaches of cell-based therapy with additional strategies such as neuroprotective protocols, anti-inflammatory treatment, and rehabilitation therapy can significantly improve therapeutic benefits. In this review, we will discuss the recent progress regarding cell types and applications in regenerative medicine as well as future applications.
Review of Hydraulic Fracturing for Preconditioning in Cave Mining
NASA Astrophysics Data System (ADS)
He, Q.; Suorineni, F. T.; Oh, J.
2016-12-01
Hydraulic fracturing has been used in cave mining for preconditioning the orebody, following its successful application in the oil and gas industries. In this paper, the state of the art of hydraulic fracturing as a preconditioning method in cave mining is presented. Procedures are provided on how to implement prescribed hydraulic fracturing by which effective preconditioning can be realized in any in situ stress condition. Preconditioning is effective in cave mining when an additional fracture set is introduced into the rock mass. Previous studies on cave mining hydraulic fracturing focused on field applications, hydraulic fracture growth measurement and the interaction between hydraulic fractures and natural fractures. The review in this paper reveals that the orientation of current cave mining hydraulic fractures is dictated by, and is perpendicular to, the minimum in situ stress orientation. In some geotechnical conditions, these orientation-uncontrollable hydraulic fractures have limited preconditioning efficiency because they do not necessarily result in reduced fragmentation sizes and a blocky orebody through the introduction of an additional fracture set. This implies that if the minimum in situ stress orientation is vertical and favors the creation of horizontal hydraulic fractures, in a rock mass that is already dominated by horizontal joints, no additional fracture set is added to that rock mass to increase its blockiness and enable it to cave. Therefore, two approaches are proposed to fill this gap, each with the potential to create orientation-controllable hydraulic fractures in cave mining and to introduce an additional fracture set as desired. These approaches take advantage of directional hydraulic fracturing and the stress shadow effect, which can re-orientate the hydraulic fracture propagation trajectory against its theoretically predicted direction. Proppants are suggested to be introduced into the cave mining industry to enhance the
Using spectral subspaces to improve infrared spectroscopy prediction of soil properties
NASA Astrophysics Data System (ADS)
Sila, Andrew; Shepherd, Keith D.; Pokhariyal, Ganesh P.; Towett, Erick; Weullow, Elvis; Nyambura, Mercy K.
2015-04-01
We propose a method for improving soil property predictions using local calibration models trained on datasets in spectral subspaces rather than in a global space. Previous studies have shown that local calibrations based on subsets of spectrally similar samples can improve model prediction performance where there is large population variance. Searching for relevant subspaces within a spectral collection to construct local models could yield models with high power and small prediction errors, but optimal methods for selecting local samples are not clear. Using a self-organizing map (SOM) method we obtained four mid-infrared subspaces for 1,907 soil sample spectra collected from 19 different countries by the Africa Soil Information Service. Subspace means for the four subspaces and five selected soil properties were: pH, 6.0, 6.1, 6.0, 5.6; Mehlich-3 Al, 358, 974, 614, 1032 (mg/kg); Mehlich-3 Ca, 363, 1161, 526, 4276 (mg/kg); Total Carbon, 0.4, 1.1, 0.6, 2.3 (% by weight); and Clay (%), 16.8, 46.4, 27.7, 63.3. Spectral subspaces were also obtained using a cosine similarity method to calculate the angle between the entire sample spectra space and the spectra of 10 pure soil minerals. We found the sample soil spectra to be similar to four pure minerals, distributed as: Halloysite (n1=214), Illite (n2=743), Montmorillonite (n3=914) and Quartz (n4=32). Cross-validated partial least squares regression models were developed using two-thirds of the sample spectra from each subspace for the five soil properties. We evaluated prediction performance of the models using the root mean square error of prediction (RMSEP) for a one-third holdout set. Local models significantly improved prediction performance compared with the global model. The SOM method reduced RMSEP for total carbon by 10% (global RMSEP = 0.41), Mehlich-3 Ca by 17% (global RMSEP = 1880), Mehlich-3 Al by 21% (global RMSEP = 206), and clay content by 6% (global RMSEP = 13.6), but not for pH. Individual SOM
Brunton, Steven L; Brunton, Bingni W; Proctor, Joshua L; Kutz, J Nathan
2016-01-01
In this work, we explore finite-dimensional linear representations of nonlinear dynamical systems by restricting the Koopman operator to an invariant subspace spanned by specially chosen observable functions. The Koopman operator is an infinite-dimensional linear operator that evolves functions of the state of a dynamical system. Dominant terms in the Koopman expansion are typically computed using dynamic mode decomposition (DMD). DMD uses linear measurements of the state variables, and it has recently been shown that this may be too restrictive for nonlinear systems. Choosing the right nonlinear observable functions to form an invariant subspace where it is possible to obtain linear reduced-order models, especially those that are useful for control, is an open challenge. Here, we investigate the choice of observable functions for Koopman analysis that enable the use of optimal linear control techniques on nonlinear problems. First, to include a cost on the state of the system, as in linear quadratic regulator (LQR) control, it is helpful to include these states in the observable subspace, as in DMD. However, we find that this is only possible when there is a single isolated fixed point, as systems with multiple fixed points or more complicated attractors are not globally topologically conjugate to a finite-dimensional linear system, and cannot be represented by a finite-dimensional linear Koopman subspace that includes the state. We then present a data-driven strategy to identify relevant observable functions for Koopman analysis by leveraging a new algorithm to determine relevant terms in a dynamical system by ℓ1-regularized regression of the data in a nonlinear function space; we also show how this algorithm is related to DMD. Finally, we demonstrate the usefulness of nonlinear observable subspaces in the design of Koopman operator optimal control laws for fully nonlinear systems using techniques from linear optimal control.
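The DMD regression mentioned above can be stated in a few lines: stack snapshots, fit the best linear operator by least squares, and read off its eigenvalues. The sketch below runs exact DMD on a linear system, where the recovered spectrum matches the true one; for nonlinear systems the same regression would be applied to a dictionary of observables g(x), which is where the choice of a Koopman-invariant subspace enters. The system matrix is an illustrative example.

```python
import numpy as np

# Decaying rotation: eigenvalues 0.95 * exp(+/- i * 0.3).
theta = 0.3
A = 0.95 * np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

# Collect snapshots of x_{k+1} = A x_k.
rng = np.random.default_rng(5)
X = np.zeros((2, 51))
X[:, 0] = rng.normal(size=2)
for k in range(50):
    X[:, k + 1] = A @ X[:, k]

# Exact DMD: best-fit linear operator mapping each snapshot to the next.
X0, X1 = X[:, :-1], X[:, 1:]
A_dmd = X1 @ np.linalg.pinv(X0)
dmd_eigs = np.linalg.eigvals(A_dmd)
```

Because the state itself spans an invariant subspace here, the regression is exact; the paper's point is that for systems with multiple fixed points no finite dictionary containing the state can achieve this globally.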
Parikh, V; Singh, M
1998-05-01
This study was designed to investigate the effect of disodium cromoglycate (DSCG), a mast cell stabilizer, on cardioprotective effect of ischemic preconditioning. Isolated rat heart was subjected to 30 min of global ischemia followed by 30 min of reperfusion. Ischemic preconditioning was provided by four episodes of 5-min global ischemia followed by 5 min of reperfusion before sustained ischemia. Ischemic preconditioning and DSCG (10 and 100 microM) treatment markedly decreased the release of lactate dehydrogenase (LDH) and creatine kinase (CK) in coronary effluent and percentage incidence of ventricular premature beats (VPBs) and ventricular tachycardia/fibrillation (VT/VF) during reperfusion. Ischemic preconditioning and DSCG treatment also significantly reduced ischemia/reperfusion-induced mast cell peroxidase (MPO) release, a marker of mast cell degranulation. A significant increase in MPO release was observed immediately after ischemic preconditioning, and the release was found to be inhibited in hearts perfused with DSCG (10 and 100 microM) during ischemic preconditioning. DSCG administered during ischemic preconditioning (DSCG in ischemic preconditioning) attenuated the cardioprotective and antiarrhythmic effects of ischemic preconditioning. DSCG in ischemic preconditioning produced no marked effect on ischemia/reperfusion-induced MPO release. These findings tentatively suggest that DSCG administration during ischemic preconditioning abolishes its cardioprotective effect, perhaps by stabilizing resident cardiac mast cells.
Keshtkaran, Mohammad Reza; Yang, Zhi
2017-06-01
Spike sorting is a fundamental preprocessing step for many neuroscience studies which rely on the analysis of spike trains. Most of the feature extraction and dimensionality reduction techniques that have been used for spike sorting give a projection subspace which is not necessarily the most discriminative one. Therefore, clusters which appear inherently separable in some discriminative subspace may overlap if projected using conventional feature extraction approaches, leading to poor sorting accuracy, especially when the noise level is high. In this paper, we propose a noise-robust and unsupervised spike sorting algorithm based on learning discriminative spike features for clustering. The proposed algorithm uses discriminative subspace learning to extract low-dimensional, maximally discriminative features from the spike waveforms and performs clustering with automatic detection of the number of clusters. The core part of the algorithm involves iterative subspace selection using linear discriminant analysis and clustering using a Gaussian mixture model with outlier detection. A statistical test in the discriminative subspace is proposed to automatically detect the number of clusters. Comparative results on publicly available simulated and real in vivo datasets demonstrate that our algorithm achieves substantially improved cluster distinction, leading to higher sorting accuracy and more reliable detection of clusters which are highly overlapping and not detectable using conventional feature extraction techniques such as principal component analysis or wavelets. By providing more accurate information about the activity of a larger number of individual neurons, with high robustness to neural noise and outliers, the proposed unsupervised spike sorting algorithm facilitates more detailed and accurate analysis of single- and multi-unit activities in neuroscience and brain-machine interface studies.
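The discriminative projection at the core of such methods is Fisher's linear discriminant; the sketch below shows only this LDA step, on two synthetic clusters that overlap along the dominant-variance (PCA) axis but separate along another direction. The full algorithm iterates such projections with a Gaussian mixture model and outlier detection, which is not shown; the data model is illustrative.

```python
import numpy as np

def fisher_direction(X1, X2):
    # Fisher/LDA direction: within-class scatter inverse times the
    # difference of class means, normalized to unit length.
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    Sw = np.cov(X1.T) * (len(X1) - 1) + np.cov(X2.T) * (len(X2) - 1)
    w = np.linalg.solve(Sw, m1 - m2)
    return w / np.linalg.norm(w)

# Two synthetic "spike feature" clusters: large shared variance on axis 0
# (so PCA picks axis 0), but the clusters separate along axis 1.
rng = np.random.default_rng(6)
C = np.diag([9.0, 0.2])
X1 = rng.normal(size=(300, 2)) @ np.sqrt(C) + np.array([0.0,  1.0])
X2 = rng.normal(size=(300, 2)) @ np.sqrt(C) + np.array([0.0, -1.0])
w_lda = fisher_direction(X1, X2)
```

Projecting these clusters onto the leading PCA axis would merge them; the LDA direction keeps them apart, which is the effect the paper exploits iteratively.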
NASA Astrophysics Data System (ADS)
Keshtkaran, Mohammad Reza; Yang, Zhi
2017-06-01
Objective. Spike sorting is a fundamental preprocessing step for many neuroscience studies which rely on the analysis of spike trains. Most of the feature extraction and dimensionality reduction techniques that have been used for spike sorting give a projection subspace which is not necessarily the most discriminative one. Therefore, clusters which appear inherently separable in some discriminative subspace may overlap if projected using conventional feature extraction approaches, leading to poor sorting accuracy, especially when the noise level is high. In this paper, we propose a noise-robust and unsupervised spike sorting algorithm based on learning discriminative spike features for clustering. Approach. The proposed algorithm uses discriminative subspace learning to extract low-dimensional, maximally discriminative features from the spike waveforms and performs clustering with automatic detection of the number of clusters. The core part of the algorithm involves iterative subspace selection using linear discriminant analysis and clustering using a Gaussian mixture model with outlier detection. A statistical test in the discriminative subspace is proposed to automatically detect the number of clusters. Main results. Comparative results on publicly available simulated and real in vivo datasets demonstrate that our algorithm achieves substantially improved cluster distinction, leading to higher sorting accuracy and more reliable detection of clusters which are highly overlapping and not detectable using conventional feature extraction techniques such as principal component analysis or wavelets. Significance. By providing more accurate information about the activity of a larger number of individual neurons, with high robustness to neural noise and outliers, the proposed unsupervised spike sorting algorithm facilitates more detailed and accurate analysis of single- and multi-unit activities in neuroscience and brain-machine interface studies.
On the convergence of (ensemble) Kalman filters and smoothers onto the unstable subspace
NASA Astrophysics Data System (ADS)
Bocquet, Marc
2016-04-01
The characteristics of the model dynamics are critical to the performance of (ensemble) Kalman filters and smoothers. In particular, as emphasised in the seminal work of Anna Trevisan and co-authors, the error covariance matrix is asymptotically supported by the unstable and neutral subspace only, i.e. it is spanned by the backward Lyapunov vectors with non-negative exponents. This behaviour is at the heart of algorithms known as Assimilation in the Unstable Subspace, although a formal proof was still missing. This convergence property, its analytic proof, meaning, and implications for the design of efficient reduced-order data assimilation algorithms are the topics of this talk. The structure of the talk is as follows. Firstly, we provide the analytic proof of the convergence onto the unstable and neutral subspace in the case of linear dynamics and a linear observation operator, along with rigorous results giving the rate of such convergence. The derivation is based on an expression that explicitly relates the covariance matrix at an arbitrary time to the initial error covariance. Numerical results are also shown to illustrate and support the mathematical claims. Secondly, we discuss how this neat picture is modified when the dynamics become nonlinear and chaotic and it is no longer possible to derive analytic formulas. In this case an ensemble Kalman filter (EnKF) is used, and the connection between the convergence properties on the unstable-neutral subspace and EnKF covariance inflation is discussed. We also explain why, in the perfect-model setting, the iterative ensemble Kalman smoother (IEnKS), as an efficient filtering and smoothing technique, has an error covariance matrix whose projection is more focused on the unstable-neutral subspace than that of the EnKF. This contribution results from collaborations with A. Carrassi, K. S. Gurumoorthy, A. Apte, C. Grudzien, and C. K. R. T. Jones.
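The collapse of the error covariance onto the unstable subspace can be seen already in a two-dimensional linear example: with a perfect model (no model error), the analysis covariance of a Kalman filter vanishes along the stable direction and equilibrates along the unstable one. A minimal sketch (the matrices are illustrative, not taken from the talk):

```python
import numpy as np

# One unstable mode (growth 1.5) and one stable mode (decay 0.5),
# perfect model (Q = 0), full observations with noise covariance R.
M = np.diag([1.5, 0.5])
H = np.eye(2)
R = 0.1 * np.eye(2)

P = np.eye(2)   # initial analysis covariance
for _ in range(100):
    Pf = M @ P @ M.T                                 # forecast step (no Q)
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)   # Kalman gain
    P = (np.eye(2) - K @ H) @ Pf                     # analysis update
```

After a few dozen cycles P is effectively rank one and aligned with the unstable direction: the stable component contracts by a factor 0.25 per forecast and is further shrunk by each analysis, while the unstable component settles at a finite fixed point, which is the behaviour the talk's theorem makes rigorous and quantitative.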
Brunton, Steven L.; Brunton, Bingni W.; Proctor, Joshua L.; Kutz, J. Nathan
2016-01-01
In this work, we explore finite-dimensional linear representations of nonlinear dynamical systems by restricting the Koopman operator to an invariant subspace spanned by specially chosen observable functions. The Koopman operator is an infinite-dimensional linear operator that evolves functions of the state of a dynamical system. Dominant terms in the Koopman expansion are typically computed using dynamic mode decomposition (DMD). DMD uses linear measurements of the state variables, and it has recently been shown that this may be too restrictive for nonlinear systems. Choosing the right nonlinear observable functions to form an invariant subspace where it is possible to obtain linear reduced-order models, especially those that are useful for control, is an open challenge. Here, we investigate the choice of observable functions for Koopman analysis that enable the use of optimal linear control techniques on nonlinear problems. First, to include a cost on the state of the system, as in linear quadratic regulator (LQR) control, it is helpful to include these states in the observable subspace, as in DMD. However, we find that this is only possible when there is a single isolated fixed point, as systems with multiple fixed points or more complicated attractors are not globally topologically conjugate to a finite-dimensional linear system, and cannot be represented by a finite-dimensional linear Koopman subspace that includes the state. We then present a data-driven strategy to identify relevant observable functions for Koopman analysis by leveraging a new algorithm to determine relevant terms in a dynamical system by ℓ1-regularized regression of the data in a nonlinear function space; we also show how this algorithm is related to DMD. Finally, we demonstrate the usefulness of nonlinear observable subspaces in the design of Koopman operator optimal control laws for fully nonlinear systems using techniques from linear optimal control. PMID:26919740
Strategies for Study of Neuroprotection from Cold-preconditioning
Mitchell, Heidi M.; White, David M.; Kraig, Richard P.
2010-01-01
Neurological injury is a frequent cause of morbidity and mortality from general anesthesia and related surgical procedures that could be alleviated by development of effective, easy to administer and safe preconditioning treatments. We seek to define the neural immune signaling responsible for cold-preconditioning as means to identify novel targets for therapeutics development to protect brain before injury onset. Low-level pro-inflammatory mediator signaling changes over time are essential for cold-preconditioning neuroprotection. This signaling is consistent with the basic tenets of physiological conditioning hormesis, which require that irritative stimuli reach a threshold magnitude with sufficient time for adaptation to the stimuli for protection to become evident. Accordingly, delineation of the immune signaling involved in cold-preconditioning neuroprotection requires that biological systems and experimental manipulations plus technical capacities are highly reproducible and sensitive. Our approach is to use hippocampal slice cultures as an in vitro model that closely reflects their in vivo counterparts with multi-synaptic neural networks influenced by mature and quiescent macroglia / microglia. This glial state is particularly important for microglia since they are the principal source of cytokines, which are operative in the femtomolar range. Also, slice cultures can be maintained in vitro for several weeks, which is sufficient time to evoke activating stimuli and assess adaptive responses. Finally, environmental conditions can be accurately controlled using slice cultures so that cytokine signaling of cold-preconditioning can be measured, mimicked, and modulated to dissect the critical node aspects. Cytokine signaling system analyses require the use of sensitive and reproducible multiplexed techniques. We use quantitative PCR for TNF-α to screen for microglial activation followed by quantitative real-time qPCR array screening to assess tissue
Strategies for study of neuroprotection from cold-preconditioning.
Mitchell, Heidi M; White, David M; Kraig, Richard P
2010-09-02
Neurological injury is a frequent cause of morbidity and mortality from general anesthesia and related surgical procedures that could be alleviated by development of effective, easy to administer and safe preconditioning treatments. We seek to define the neural immune signaling responsible for cold-preconditioning as means to identify novel targets for therapeutics development to protect brain before injury onset. Low-level pro-inflammatory mediator signaling changes over time are essential for cold-preconditioning neuroprotection. This signaling is consistent with the basic tenets of physiological conditioning hormesis, which require that irritative stimuli reach a threshold magnitude with sufficient time for adaptation to the stimuli for protection to become evident. Accordingly, delineation of the immune signaling involved in cold-preconditioning neuroprotection requires that biological systems and experimental manipulations plus technical capacities are highly reproducible and sensitive. Our approach is to use hippocampal slice cultures as an in vitro model that closely reflects their in vivo counterparts with multi-synaptic neural networks influenced by mature and quiescent macroglia/microglia. This glial state is particularly important for microglia since they are the principal source of cytokines, which are operative in the femtomolar range. Also, slice cultures can be maintained in vitro for several weeks, which is sufficient time to evoke activating stimuli and assess adaptive responses. Finally, environmental conditions can be accurately controlled using slice cultures so that cytokine signaling of cold-preconditioning can be measured, mimicked, and modulated to dissect the critical node aspects. Cytokine signaling system analyses require the use of sensitive and reproducible multiplexed techniques. We use quantitative PCR for TNF-α to screen for microglial activation followed by quantitative real-time qPCR array screening to assess tissue-wide cytokine
Remote Ischemic Preconditioning and Outcomes of Cardiac Surgery.
Hausenloy, Derek J; Candilio, Luciano; Evans, Richard; Ariti, Cono; Jenkins, David P; Kolvekar, Shyam; Knight, Rosemary; Kunst, Gudrun; Laing, Christopher; Nicholas, Jennifer; Pepper, John; Robertson, Steven; Xenou, Maria; Clayton, Tim; Yellon, Derek M
2015-10-08
Whether remote ischemic preconditioning (transient ischemia and reperfusion of the arm) can improve clinical outcomes in patients undergoing coronary-artery bypass graft (CABG) surgery is not known. We investigated this question in a randomized trial. We conducted a multicenter, sham-controlled trial involving adults at increased surgical risk who were undergoing on-pump CABG (with or without valve surgery) with blood cardioplegia. After anesthesia induction and before surgical incision, patients were randomly assigned to remote ischemic preconditioning (four 5-minute inflations and deflations of a standard blood-pressure cuff on the upper arm) or sham conditioning (control group). Anesthetic management and perioperative care were not standardized. The combined primary end point was death from cardiovascular causes, nonfatal myocardial infarction, coronary revascularization, or stroke, assessed 12 months after randomization. We enrolled a total of 1612 patients (811 in the control group and 801 in the ischemic-preconditioning group) at 30 cardiac surgery centers in the United Kingdom. There was no significant difference in the cumulative incidence of the primary end point at 12 months between the patients in the remote ischemic preconditioning group and those in the control group (212 patients [26.5%] and 225 patients [27.7%], respectively; hazard ratio with ischemic preconditioning, 0.95; 95% confidence interval, 0.79 to 1.15; P=0.58). Furthermore, there were no significant between-group differences in either adverse events or the secondary end points of perioperative myocardial injury (assessed on the basis of the area under the curve for the high-sensitivity assay of serum troponin T at 72 hours), inotrope score (calculated from the maximum dose of the individual inotropic agents administered in the first 3 days after surgery), acute kidney injury, duration of stay in the intensive care unit and hospital, distance on the 6-minute walk test, and quality of life
Carroll, C M; Carroll, S M; Overgoor, M L; Tobin, G; Barker, J H
1997-07-01
Ischemic preconditioning of the myocardium with repeated brief periods of ischemia and reperfusion prior to prolonged ischemia significantly reduces subsequent myocardial infarction. Following ischemic preconditioning, two "windows of opportunity" (early and late) exist, during which time prolonged ischemia can occur with reduced infarction size. The early window occurs at approximately 4 hours and the late window at 24 hours following ischemic preconditioning of the myocardium. We investigated if ischemic preconditioning of skeletal muscle prior to flap creation improved subsequent flap survival and perfusion immediately or 24 hours following ischemic preconditioning. Currently, no data exist on the utilization of ischemic preconditioning in this fashion. The animal model used was the latissimus dorsi muscle of adult male Sprague-Dawley rats. Animals were assigned to three groups, and the right or left latissimus dorsi muscle was chosen randomly in each animal. Group 1 (n = 12) was the control group, in which the entire latissimus dorsi muscle was elevated acutely without ischemic preconditioning. Group 2 (n = 8) investigated the effects of ischemic preconditioning in the early window. In this group, the latissimus dorsi muscle was elevated immediately following preconditioning. Group 3 (n = 8) investigated the effects of ischemic preconditioning in the late window, with elevation of the latissimus dorsi muscle 24 hours following ischemic preconditioning. The preconditioning regimen used in groups 2 and 3 was two 30-minute episodes of normothermic global ischemia with intervening 10-minute episodes of reperfusion. Latissimus dorsi muscle ischemia was created by occlusion of the thoracodorsal artery and vein and the intercostal perforators, after isolation of the muscle on these vessels. Muscle perfusion was assessed by a laser-Doppler perfusion imager. One week after flap elevation, muscle necrosis was quantified in all groups by means of computer-assisted digital
Four-dimensional ensemble variational data assimilation and the unstable subspace
NASA Astrophysics Data System (ADS)
Bocquet, Marc; Carrassi, Alberto
2017-04-01
The performance of (ensemble) Kalman filters used for data assimilation in the geosciences critically depends on the dynamical properties of the evolution model. A key aspect, emphasized in the seminal work of Anna Trevisan and co-authors, is that the error covariance matrix is asymptotically supported by the unstable-neutral subspace only, i.e. it is spanned by the backward Lyapunov vectors with non-negative exponents. The analytic proof of such a property for the Kalman filter error covariance has been recently given, and in particular that of its confinement to the unstable-neutral subspace. In this paper, we first generalize those results to the case of the Kalman smoother in a linear, Gaussian and perfect model scenario. We also provide square-root formulae for the filter and smoother that make the connection with ensemble formulations of the Kalman filter and smoother, where the span of the error covariance is described in terms of the ensemble deviations from the mean. We then discuss how this neat picture is modified when the dynamics are nonlinear and chaotic, and for which analytic results are precluded or difficult to obtain. A numerical investigation is carried out to study the approximate confinement of the anomalies for both a deterministic ensemble Kalman filter (EnKF) and a four-dimensional ensemble variational method, the iterative ensemble Kalman smoother (IEnKS), in a perfect model scenario. The confinement is characterized using geometrical angles that determine the relative position of the anomalies with respect to the unstable-neutral subspace. The alignment of the anomalies and of the unstable-neutral subspace is more pronounced when observation precision or frequency, as well as the data assimilation window length for the IEnKS, are increased. This leads to the increase of the data assimilation accuracy and shows that, under perfect model assumptions, spanning the full unstable-neutral subspace is sufficient to achieve satisfactorily
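The "geometrical angles" used above to characterize confinement are, concretely, principal angles between subspaces. A minimal numpy sketch of that computation (the matrix sizes and the toy "unstable-neutral" basis are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def principal_angles(U, V):
    """Principal angles (radians) between the column spans of U and V."""
    Qu, _ = np.linalg.qr(U)
    Qv, _ = np.linalg.qr(V)
    s = np.linalg.svd(Qu.T @ Qv, compute_uv=False)
    return np.arccos(np.clip(s, -1.0, 1.0))

# Toy check: anomalies lying exactly in the subspace give zero angles.
rng = np.random.default_rng(0)
subspace = rng.standard_normal((10, 3))             # basis of a toy unstable-neutral subspace
anomalies = subspace @ rng.standard_normal((3, 3))  # ensemble deviations confined to it
angles = principal_angles(anomalies, subspace)
```

When the anomalies lie exactly in the subspace, all principal angles vanish; increasing misalignment shows up as angles approaching π/2.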
Molecular mechanisms of ischemic preconditioning in the kidney
Haase, Volker H.
2015-01-01
More effective therapeutic strategies for the prevention and treatment of acute kidney injury (AKI) are needed to improve the high morbidity and mortality associated with this frequently encountered clinical condition. Ischemic and/or hypoxic preconditioning attenuates susceptibility to ischemic injury, which results from both oxygen and nutrient deprivation and accounts for most cases of AKI. While multiple signaling pathways have been implicated in renoprotection, this review will focus on oxygen-regulated cellular and molecular responses that enhance the kidney's tolerance to ischemia and promote renal repair. Central mediators of cellular adaptation to hypoxia are hypoxia-inducible factors (HIFs). HIFs play a crucial role in ischemic/hypoxic preconditioning through the reprogramming of cellular energy metabolism, and by coordinating adenosine and nitric oxide signaling with antiapoptotic, oxidative stress, and immune responses. The therapeutic potential of HIF activation for the treatment and prevention of ischemic injuries will be critically examined in this review. PMID:26311114
Finding Chemical Reaction Paths with a Multilevel Preconditioning Protocol
Kale, Seyit; Sode, Olaseni; Weare, Jonathan; ...
2014-11-07
Finding transition paths for chemical reactions can be computationally costly owing to the level of quantum-chemical theory needed for accuracy. Here, we show that a multilevel preconditioning scheme that was recently introduced (Tempkin et al. J. Chem. Phys. 2014, 140, 184114) can be used to accelerate quantum-chemical string calculations. We demonstrate the method by finding minimum-energy paths for two well-characterized reactions: tautomerization of malonaldehyde and Claisen rearrangement of chorismate to prephenate. For these reactions, we show that preconditioning density functional theory (DFT) with a semiempirical method reduces the computational cost for reaching a converged path that is an optimum under DFT by several fold. The approach also shows promise for free energy calculations when thermal noise can be controlled.
Finding Chemical Reaction Paths with a Multilevel Preconditioning Protocol
Kale, Seyit; Sode, Olaseni; Weare, Jonathan; Dinner, Aaron R.
2014-11-07
Finding transition paths for chemical reactions can be computationally costly owing to the level of quantum-chemical theory needed for accuracy. Here, we show that a multilevel preconditioning scheme that was recently introduced (Tempkin et al. J. Chem. Phys. 2014, 140, 184114) can be used to accelerate quantum-chemical string calculations. We demonstrate the method by finding minimum-energy paths for two well-characterized reactions: tautomerization of malonaldehyde and Claisen rearrangement of chorismate to prephenate. For these reactions, we show that preconditioning density functional theory (DFT) with a semiempirical method reduces the computational cost for reaching a converged path that is an optimum under DFT by several fold. The approach also shows promise for free energy calculations when thermal noise can be controlled.
Finding Chemical Reaction Paths with a Multilevel Preconditioning Protocol
2015-01-01
Finding transition paths for chemical reactions can be computationally costly owing to the level of quantum-chemical theory needed for accuracy. Here, we show that a multilevel preconditioning scheme that was recently introduced (Tempkin et al. J. Chem. Phys. 2014, 140, 184114) can be used to accelerate quantum-chemical string calculations. We demonstrate the method by finding minimum-energy paths for two well-characterized reactions: tautomerization of malonaldehyde and Claisen rearrangement of chorismate to prephenate. For these reactions, we show that preconditioning density functional theory (DFT) with a semiempirical method reduces the computational cost for reaching a converged path that is an optimum under DFT by several fold. The approach also shows promise for free energy calculations when thermal noise can be controlled. PMID:25516726
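Reduced to its simplest form, the multilevel idea in the records above is to let a cheap model set the step metric for minimizing an expensive objective. A toy sketch, with a quartic double well standing in for the expensive (DFT-level) energy and a fixed quadratic curvature standing in for the semiempirical surrogate (both are illustrative assumptions, not the actual string-method preconditioning):

```python
# Toy minimization of an "expensive" double-well energy, preconditioned by the
# curvature of a cheap surrogate model (all numbers are illustrative assumptions).
def grad_expensive(x):
    """Gradient of the expensive energy E(x) = (x^2 - 1)^2."""
    return 4.0 * x**3 - 4.0 * x

H_cheap = 8.0        # curvature of the cheap surrogate (matches E''(1) = 8)

x = 1.6              # start in the basin of the minimum at x = 1
for _ in range(50):
    x = x - grad_expensive(x) / H_cheap   # surrogate-preconditioned step
```

When the surrogate curvature matches the expensive Hessian near the minimum, the iteration behaves like a quasi-Newton step while requiring only gradient evaluations of the expensive model.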
Shape reanalysis and sensitivities utilizing preconditioned iterative boundary solvers
NASA Technical Reports Server (NTRS)
Guru Prasad, K.; Kane, J. H.
1992-01-01
The computational advantages associated with the utilization of preconditioned iterative equation solvers are quantified for the reanalysis of perturbed shapes using continuum structural boundary element analysis (BEA). Both single- and multi-zone three-dimensional problems are examined. Significant reductions in computer time are obtained by making use of previously computed solution vectors and preconditioners in subsequent analyses. The effectiveness of this technique is demonstrated for the computation of shape response sensitivities required in shape optimization. Computer times and accuracies achieved using the preconditioned iterative solvers are compared with those obtained via direct solvers and implicit differentiation of the boundary integral equations. It is concluded that this approach employing preconditioned iterative equation solvers in reanalysis and sensitivity analysis can be competitive with, if not superior to, those involving direct solvers.
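The reuse strategy quantified above can be illustrated generically: keep the preconditioner and use the previous solution as the initial guess when the problem is only slightly perturbed. A numpy sketch with a preconditioned conjugate gradient solver (the Jacobi preconditioner and random test matrix are assumptions, not the BEA matrices):

```python
import numpy as np

def pcg(A, b, x0, M_inv, tol=1e-10, maxit=500):
    """Preconditioned conjugate gradients; returns solution and iteration count."""
    x = x0.copy()
    r = b - A @ x
    z = M_inv @ r
    p = z.copy()
    rz = r @ z
    for k in range(maxit):
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, k
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        z = M_inv @ r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit

rng = np.random.default_rng(1)
n = 50
Q = np.linalg.qr(rng.standard_normal((n, n)))[0]
A = Q @ np.diag(np.linspace(1, 100, n)) @ Q.T   # SPD stand-in for a system matrix
M_inv = np.diag(1.0 / np.diag(A))               # simple Jacobi preconditioner
b = rng.standard_normal(n)
x_base, it_cold = pcg(A, b, np.zeros(n), M_inv)

# Perturbed right-hand side (a "reanalysis"): reuse the previous solution
# as the initial guess and keep the same preconditioner.
b2 = b + 1e-3 * rng.standard_normal(n)
_, it_warm = pcg(A, b2, x_base, M_inv)
```

The warm-started solve begins with a residual several orders of magnitude smaller, so it typically converges in fewer iterations than the cold start.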
HMC algorithm with multiple time scale integration and mass preconditioning
NASA Astrophysics Data System (ADS)
Urbach, C.; Jansen, K.; Shindler, A.; Wenger, U.
2006-01-01
We present a variant of the HMC algorithm with mass preconditioning (Hasenbusch acceleration) and multiple time scale integration. We have tested this variant for standard Wilson fermions at β=5.6 and at pion masses ranging from 380 to 680 MeV. We show that in this situation its performance is comparable to the recently proposed HMC variant with domain decomposition as preconditioner. We give an update of the "Berlin Wall" figure, comparing the performance of our variant of the HMC algorithm to other published performance data. Advantages of the HMC algorithm with mass preconditioning and multiple time scale integration are that it is straightforward to implement and can be used in combination with a wide variety of lattice Dirac operators.
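The multiple time scale integration referred to above nests a fine-step leapfrog for the cheap force inside a coarse-step leapfrog for the expensive one. A toy sketch on a split harmonic oscillator (the potential split and step sizes are illustrative assumptions, not lattice QCD forces):

```python
def nested_leapfrog(q, p, f_cheap, f_costly, dt, n_outer, n_inner):
    """Two-level leapfrog: the costly force is evaluated on the outer (coarse)
    time scale, the cheap force on the inner (fine) one."""
    for _ in range(n_outer):
        p = p + 0.5 * dt * f_costly(q)      # half kick with the expensive force
        h = dt / n_inner
        for _ in range(n_inner):            # inner leapfrog with the cheap force
            p = p + 0.5 * h * f_cheap(q)
            q = q + h * p
            p = p + 0.5 * h * f_cheap(q)
        p = p + 0.5 * dt * f_costly(q)
    return q, p

# Toy split harmonic oscillator: V(q) = 0.5*(k1 + k2)*q^2 split into a stiff
# "cheap" part (k1) and a soft "costly" part (k2).
k1, k2 = 100.0, 1.0
f1 = lambda q: -k1 * q
f2 = lambda q: -k2 * q
q0, p0 = 1.0, 0.0
q, p = nested_leapfrog(q0, p0, f1, f2, dt=0.05, n_outer=200, n_inner=10)
E0 = 0.5 * p0**2 + 0.5 * (k1 + k2) * q0**2
E = 0.5 * p**2 + 0.5 * (k1 + k2) * q**2
```

Energy is approximately conserved even though the costly force is evaluated only once per coarse step, which is the point of the scheme.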
Investigation of reperfusion injury and ischemic preconditioning in microsurgery.
Wang, Wei Zhong
2009-01-01
Ischemia/reperfusion (I/R) is inevitable in many vascular and musculoskeletal traumas, diseases, free tissue transfers, and during time-consuming reconstructive surgeries in the extremities. Salvage of a prolonged ischemic extremity or flap still remains a challenge for the microvascular surgeon. One of the common complications after microsurgery is I/R-induced tissue death or I/R injury. Twenty years after the discovery, ischemic preconditioning has emerged as a powerful method for attenuating I/R injury in a variety of organs or tissues. However, its therapeutic expectations still need to be fulfilled. In this article, the author reviews some important experimental evidence of I/R injury and preconditioning-induced protection in the fields relevant to microsurgery.
Preconditioned lattice-Boltzmann method for steady flows
NASA Astrophysics Data System (ADS)
Guo, Zhaoli; Zhao, T. S.; Shi, Yong
2004-12-01
In this paper we propose a preconditioned lattice Boltzmann (LB) method for steady incompressible flows. For steady flows, the macroscopic equations derived from this LB model are equivalent to those from the standard LB model, but with an improved eigenvalue system. The proposed model can be viewed as an explicit solver for preconditioned compressible Navier-Stokes equations. Linear stability analysis is performed and the results show that the stability of the model is the same as that of the standard LB model for low Mach numbers. The proposed model retains the structure of the standard LB model and, hence, possesses all the advantages. Numerical tests show that the convergence rate can be enhanced as much as an order of magnitude compared to the standard lattice Boltzmann method. The accuracy of the solutions is improved as well.
Fan, Ran; Yu, Tao; Lin, Jia-Li; Ren, Guang-Dong; Li, Yi; Liao, Xiao-Xing; Huang, Zi-Tong; Jiang, Chong-Hui
2016-10-01
In this study, we investigated the effects of remote ischemic preconditioning on post resuscitation cerebral function in a rat model of cardiac arrest and resuscitation. The animals were randomized into six groups: 1) sham operation, 2) lateral ventricle injection and sham operation, 3) cardiac arrest induced by ventricular fibrillation, 4) lateral ventricle injection and cardiac arrest, 5) remote ischemic preconditioning initiated 90min before induction of ventricular fibrillation, and 6) lateral ventricle injection and remote ischemic preconditioning before cardiac arrest. The lateral ventricle injections consisted of neuroglobin antisense oligodeoxynucleotides, administered 24h before sham operation, cardiac arrest, or remote ischemic preconditioning. Remote ischemic preconditioning was induced by four cycles of 5min of limb ischemia, followed by 5min of reperfusion. Ventricular fibrillation was induced by current and lasted for 6min. Defibrillation was attempted after 6min of cardiopulmonary resuscitation. The animals were then monitored for 2h and observed for an additional maximum of 70h. Post resuscitation cerebral function was evaluated by neurologic deficit score at 72h after return of spontaneous circulation. Results showed that remote ischemic preconditioning increased neurologic deficit scores. To investigate the neuroprotective effects of remote ischemic preconditioning, we observed neuronal injury at 48 and 72h after return of spontaneous circulation and found that remote ischemic preconditioning significantly decreased the occurrence of neuronal apoptosis and necrosis. To further explore the mechanism of neuroprotection induced by remote ischemic preconditioning, we found that expression of neuroglobin at 24h after return of spontaneous circulation was enhanced. Furthermore, administration of neuroglobin antisense oligodeoxynucleotides before induction of remote ischemic preconditioning showed that the level of neuroglobin was decreased then partly abrogated
Chemogenetic silencing of neurons in retrosplenial cortex disrupts sensory preconditioning.
Robinson, Siobhan; Todd, Travis P; Pasternak, Anna R; Luikart, Bryan W; Skelton, Patrick D; Urban, Daniel J; Bucci, David J
2014-08-13
An essential aspect of episodic memory is the formation of associations between neutral sensory cues in the environment. In light of recent evidence that this critical aspect of learning does not require the hippocampus, we tested the involvement of the retrosplenial cortex (RSC) in this process using a chemogenetic approach that allowed us to temporarily silence neurons along the entire rostrocaudal extent of the RSC. A viral vector containing the gene for a synthetic inhibitory G-protein-coupled receptor (hM4Di) was infused into RSC. When the receptor was later activated by systemic injection of clozapine-N-oxide, neural activity in RSC was transiently silenced (confirmed using a patch-clamp procedure). Rats expressing hM4Di and control rats were trained in a sensory preconditioning procedure in which a tone and light were paired on some trials and a white noise stimulus was presented alone on the other trials during the Preconditioning phase. Thus, rats were given the opportunity to form an association between a tone and a light in the absence of reinforcement. Later, the light was paired with food. During the test phase when the auditory cues were presented alone, controls exhibited more conditioned responding during presentation of the tone compared with the white noise reflecting the prior formation of a tone-light association. Silencing RSC neurons during the Preconditioning phase prevented the formation of an association between the tone and light and eliminated the sensory preconditioning effect. These findings indicate that RSC may contribute to episodic memory formation by linking essential sensory stimuli during learning.
CaMeL: Learning Method Preconditions for HTN Planning
2006-01-01
CaMeL: Learning Method Preconditions for HTN Planning. Okhtay Ilghami and Dana S. Nau, Department of Computer Science, University of Maryland College...algorithm, named CaMeL, based on this formalism. We present theoretical results about CaMeL's soundness, completeness, and convergence properties...We also report experimental results about its speed of convergence under different conditions. The experimental results suggest that CaMeL has the
Parallel Domain Decomposition Preconditioning for Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Chan, Tony F.; Tang, Wei-Pai; Kutler, Paul (Technical Monitor)
1998-01-01
This viewgraph presentation gives an overview of the parallel domain decomposition preconditioning for computational fluid dynamics. Details are given on some difficult fluid flow problems, stabilized spatial discretizations, and Newton's method for solving the discretized flow equations. Schur complement domain decomposition is described through basic formulation, simplifying strategies (including iterative subdomain and Schur complement solves, matrix element dropping, localized Schur complement computation, and supersparse computations), and performance evaluation.
Preconditioned Alternating Projection Algorithms for Maximum a Posteriori ECT Reconstruction
Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng
2012-01-01
We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with the total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators raised from the convex functions that define the TV-norm and the constraint involved in the problem. The characterization (of the solution) via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We prove convergence of the preconditioned alternating projection algorithm. In numerical experiments, performance of our algorithms, with an appropriately selected preconditioning matrix, is compared with performance of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner outperforms significantly the EM-TV in all aspects including the convergence speed, the noise in the reconstructed images and the image quality. It also outperforms the nested EM-TV in the convergence speed while providing comparable image quality. PMID:23271835
Exercise and Cardiac Preconditioning Against Ischemia Reperfusion Injury
Quindry, John C; Hamilton, Karyn L
2013-01-01
Cardiovascular disease (CVD), including ischemia reperfusion (IR) injury, remains a major cause of morbidity and mortality in industrialized nations. Ongoing research is aimed at uncovering therapeutic interventions against IR injury. Regular exercise participation is recognized as an important lifestyle intervention in the prevention and treatment of CVD and IR injury. More recent understanding reveals that moderate intensity aerobic exercise is also an important experimental model for understanding the cellular mechanisms of cardioprotection against IR injury. An important discovery in this regard was the observation that one-to-several days of exercise will attenuate IR injury. This phenomenon has been observed in young and old hearts of both sexes. Due to the short time course of exercise induced protection, IR injury prevention must be mediated by acute biochemical alterations within the myocardium. Research over the last decade reveals that redundant mechanisms account for exercise induced cardioprotection against IR. While much is now known about exercise preconditioning against IR injury, many questions remain. Perhaps most pressing is the question of what mechanisms mediate cardioprotection in aged hearts and what sex-dependent differences exist. Given that exercise preconditioning is a polygenic effect, it is likely that multiple mediators of exercise induced cardioprotection have yet to be uncovered. Also unknown, is whether post translational modifications due to exercise are responsible for IR injury prevention. This review will provide an overview of the major mechanisms of IR injury and exercise preconditioning. The discussion highlights many promising avenues for further research and describes how exercise preconditioning may continue to be an important scientific paradigm in the translation of cardioprotection research to the clinic. PMID:23909636
Object-oriented design of preconditioned iterative methods
Bruaset, A.M.
1994-12-31
In this talk the author discusses how object-oriented programming techniques can be used to develop a flexible software package for preconditioned iterative methods. The ideas described have been used to implement the linear algebra part of Diffpack, which is a collection of C++ class libraries that provides high-level tools for the solution of partial differential equations. In particular, this software package is aimed at rapid development of PDE-based numerical simulators, primarily using finite element methods.
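The object-oriented decoupling described above can be sketched in a few lines: solvers see only an abstract preconditioner interface, so implementations can be swapped without touching the solver. A Python sketch in the spirit of (but not copied from) the Diffpack design; class and method names are invented for illustration:

```python
import numpy as np
from abc import ABC, abstractmethod

class Preconditioner(ABC):
    """Abstract interface: solvers only ever call apply(r) ~ M^{-1} r."""
    @abstractmethod
    def apply(self, r): ...

class IdentityPrec(Preconditioner):
    def apply(self, r):
        return r

class JacobiPrec(Preconditioner):
    def __init__(self, A):
        self.d = np.diag(A)
    def apply(self, r):
        return r / self.d

def richardson(A, b, prec: Preconditioner, tol=1e-8, maxit=2000):
    """Preconditioned Richardson iteration x <- x + M^{-1}(b - Ax)."""
    x = np.zeros_like(b)
    for k in range(maxit):
        r = b - A @ x
        if np.linalg.norm(r) < tol:
            return x, k
        x = x + prec.apply(r)
    return x, maxit

# Demo on a small diagonally dominant system.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x, its = richardson(A, b, JacobiPrec(A))
```

Swapping JacobiPrec for IdentityPrec changes the convergence behavior without any change to the solver code, which is the point of the abstraction.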
Caffeine prevents protection in two human models of ischemic preconditioning.
Riksen, Niels P; Zhou, Zhigang; Oyen, Wim J G; Jaspers, Rogier; Ramakers, Bart P; Brouwer, Rene M H J; Boerman, Otto C; Steinmetz, Neil; Smits, Paul; Rongen, Gerard A
2006-08-15
We studied whether caffeine impairs protection by ischemic preconditioning (IP) in humans. Ischemic preconditioning is critically dependent on adenosine receptor stimulation. We hypothesize that the adenosine receptor antagonist caffeine blocks the protective effect of IP. In vivo ischemia-reperfusion injury was assessed in the thenar muscle by 99mTc-annexin A5 scintigraphy. Forty-two healthy volunteers performed forearm ischemic exercise. In 24 subjects, this was preceded by a stimulus for IP. In a randomized double-blinded design, the subjects received caffeine (4 mg/kg) or saline intravenously before the experiment. At reperfusion, 99mTc-annexin A5 was administered intravenously. Targeting of annexin was quantified by region-of-interest analysis, and expressed as percentage difference between experimental and contralateral hand. In vitro, we assessed recovery of contractile function of human atrial trabeculae, harvested during heart surgery, as functional end point of ischemia-reperfusion injury. Field-stimulated contraction was quantified at baseline and after simulated ischemia-reperfusion, in a paired approach with and without 5 min of IP, in the presence (n=13) or absence (n = 17) of caffeine (10 mg/l). Ischemic preconditioning reduced annexin targeting in the absence of caffeine (from 13 +/- 3% to 7 +/- 1% at 1 h, and from 19 +/- 2% to 9 +/- 3% at 4 h after reperfusion, p = 0.006), but not after caffeine administration (targeting 11 +/- 2% and 16 +/- 3% at 1 and 4 h). In vitro, IP improved post-ischemic functional recovery in the control group, but not in the caffeine group (8 +/- 3% vs. -8 +/- 5%, p=0.003). Caffeine abolishes IP in 2 human models at a dose equivalent to the drinking of 2 to 4 cups of coffee. (The Effect of Caffeine on Ischemic Preconditioning; http://clinicaltrials.gov/ct/show/NCT00184912?order=1; NCT00184912).
Kerfriden, P.; Gosselet, P.; Adhikari, S.; Bordas, S.
2013-01-01
This article describes a bridge between POD-based model order reduction techniques and the classical Newton/Krylov solvers. This bridge is used to derive an efficient algorithm to correct, “on-the-fly”, the reduced order modelling of highly nonlinear problems undergoing strong topological changes. Damage initiation problems are addressed and tackled via a corrected hyperreduction method. It is shown that the relevance of the reduced order model can be significantly improved with reasonable additional costs when using this algorithm, even when strong topological changes are involved. PMID:27076688
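The baseline that the corrected hyperreduction builds on is POD-Galerkin model order reduction: extract a reduced basis from solution snapshots via the SVD, then project the full-order system onto it. A numpy sketch (the sizes and the synthetic snapshot subspace are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
n, k = 200, 5
modes = np.linalg.qr(rng.standard_normal((n, k)))[0]
snapshots = modes @ rng.standard_normal((k, 30))    # snapshots confined to a k-dim subspace

# POD: the dominant left singular vectors of the snapshot matrix give the reduced basis.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
V = U[:, :k]

# Galerkin projection of a full-order SPD system onto the POD basis.
X = rng.standard_normal((n, n))
A = X @ X.T + n * np.eye(n)
x_true = V @ rng.standard_normal(k)                 # a solution the basis can represent
b = A @ x_true
c = np.linalg.solve(V.T @ A @ V, V.T @ b)           # small k x k reduced system
x_rom = V @ c
```

When the true solution lies in the span of the basis, the Galerkin reduced solution recovers it exactly; the corrections described in the record above address the case where topological changes push the solution out of that span.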
Islet preconditioning via multimodal microfluidic modulation of intermittent hypoxia.
Lo, Joe F; Wang, Yong; Blake, Alexander; Yu, Gene; Harvat, Tricia A; Jeon, Hyojin; Oberholzer, Jose; Eddington, David T
2012-02-21
Simultaneous stimulation of ex vivo pancreatic islets with dynamic oxygen and glucose is a critical technique for studying how hypoxia alters glucose-stimulated response, especially in transplant environments. Standard techniques using a hypoxic chamber cannot provide both oxygen and glucose modulations, while monitoring stimulus-secretion coupling factors in real-time. Using a novel microfluidic device with integrated glucose and oxygen modulations, we quantified hypoxic impairment of islet response by calcium influx, mitochondrial potentials, and insulin secretion. Glucose-induced calcium response magnitude and phase were suppressed by hypoxia, while mitochondrial hyperpolarization and insulin secretion decreased in coordination. More importantly, hypoxic response was improved by preconditioning islets to intermittent hypoxia (IH, 1 min/1 min 5-21% cycling for 1 h), translating to improved insulin secretion. Moreover, blocking mitochondrial K(ATP) channels removed preconditioning benefits of IH, similar to mechanisms in preconditioned cardiomyocytes. Additionally, the multimodal device can be applied to a variety of dynamic oxygen-metabolic studies in other ex vivo tissues.
Preconditioning reduces myocardial complement gene expression in vivo.
Tanhehco, E J; Yasojima, K; McGeer, P L; McGeer, E G; Lucchesi, B R
2000-09-01
This investigation examined the effect of preconditioning in an in vivo model of ischemia-reperfusion injury. Anesthetized New Zealand White rabbits underwent 30 min of regional myocardial ischemia followed by 2 h of reperfusion. Hearts preconditioned with two cycles of 5 min ischemia-10 min reperfusion (IPC) or with the ATP-sensitive K (K(ATP)) channel opener, diazoxide (10 mg/kg), exhibited significantly (P < 0.05) smaller infarcts compared with control. These treatments also significantly (P < 0.001 to P < 0.05) reduced C1q, C1r, C3, C8, and C9 mRNA in the areas at risk (AAR). The K(ATP) channel blocker 5-hydroxydecanoate (5-HD; 10 mg/kg) attenuated infarct size reduction elicited by IPC and diazoxide treatment. 5-HD partially reversed the decrease in complement expression caused by IPC but not diazoxide. There were no significant differences in complement gene expression in the nonrisk regions and livers of all groups. Western blot analysis revealed that IPC also reduced membrane attack complex expression in the AAR. The data demonstrate that preconditioning significantly decreases reperfusion-induced myocardial complement expression in vivo.
Thermal Preconditioning of MIMS Type K Thermocouples to Reduce Drift
NASA Astrophysics Data System (ADS)
Webster, E. S.
2017-01-01
Type K thermocouples are the most widely used temperature sensors in industry and are often used in the convenient mineral-insulated metal-sheathed (MIMS) format. The MIMS format provides almost total immunity to oxide-related drift in the 800°C-1000°C range. However, crystalline ordering of the atomic structure causes drift in the range 200°C-600°C. Troublesomely, the effects of this ordering are reversible, leading to hysteresis in some applications. Typically, MIMS cable is subjected to a post-manufacturing high-temperature recrystallization anneal to remove cold-work and place the thermocouple in a 'known state'. However, variations in the temperatures and times of these exposures can lead to variations in the 'as-received state'. This study gives guidelines on the best thermal preconditioning of 3 mm MIMS Type K thermocouples in order to minimize drift and achieve the most reproducible temperature measurements. Experimental results demonstrate the consequences of using Type K MIMS thermocouples in different states, including the as-received state, after a high-temperature recrystallization anneal, and after preconditioning anneals at 200°C, 300°C, 400°C, and 500°C. It is also shown that meaningful calibration is possible with the use of regular preconditioning anneals.
The evolving concept of physiological ischemia training vs. ischemia preconditioning.
Ni, Jun; Lu, Hongjian; Lu, Xiao; Jiang, Minghui; Peng, Qingyun; Ren, Caili; Xiang, Jie; Mei, Chengyao; Li, Jianan
2015-11-01
Ischemic heart diseases are the leading cause of death with increasing numbers of patients worldwide. Despite advances in revascularization techniques, angiogenic therapies remain highly attractive. Physiological ischemia training, which was first proposed in our laboratory, refers to reversible ischemia training of normal skeletal muscles by using a tourniquet or isometric contraction to cause physiologic ischemia for about 4 weeks, in order to trigger molecular and cellular mechanisms that promote angiogenesis and formation of collateral vessels and protect remote ischemia areas. Physiological ischemia training therapy augments angiogenesis in the ischemic myocardium by inducing differential expression of proteins involved in energy metabolism, cell migration, protein folding, and generation. It upregulates the expression of vascular endothelial growth factor, and induces angiogenesis, protects the myocardium when infarction occurs by increasing circulating endothelial progenitor cells and enhancing their migration, which is in accordance with physical training in heart disease rehabilitation. These findings may lead to a new approach of therapeutic angiogenesis for patients with ischemic heart diseases. On the basis of the promising results in animal studies, studies were also conducted in patients with coronary artery disease without any adverse effect in vivo, indicating that physiological ischemia training therapy is a safe, effective and non-invasive angiogenic approach for cardiovascular rehabilitation. Preconditioning is considered to be the most protective intervention against myocardial ischemia-reperfusion injury to date. Physiological ischemia training is different from preconditioning. This review summarizes the preclinical and clinical data of physiological ischemia training and its difference from preconditioning.
Preconditioning the bidomain model with almost linear complexity
NASA Astrophysics Data System (ADS)
Pierre, Charles
2012-01-01
The bidomain model is widely used in electrocardiology to simulate the spreading of excitation in the myocardium and electrocardiograms. It consists of a system of two parabolic reaction-diffusion equations coupled with an ODE system. Its discretisation yields an ill-conditioned system matrix that must be inverted at each time step; simulations based on the bidomain model are therefore associated with high computational costs. In this paper we propose a preconditioning for the bidomain model, either for an isolated heart or in an extended framework including a coupling with the surrounding tissues (the torso). The preconditioning is based on a formulation of the discrete problem that is shown to be symmetric positive semi-definite. A block LU decomposition of the system together with a heuristic approximation (referred to as the monodomain approximation) are the key ingredients of the preconditioning definition. Numerical results are provided for two test cases: a 2D test case on a realistic slice of the thorax based on a segmented heart medical image geometry, and a 3D test case involving a small cubic slab of tissue with orthotropic anisotropy. The analysis of the resulting computational cost (both in terms of CPU time and of iteration number) shows an almost linear complexity with the problem size, i.e. of type n log^α(n) (for some constant α), which is optimal complexity for such problems.
Preconditioned alternating projection algorithms for maximum a posteriori ECT reconstruction
NASA Astrophysics Data System (ADS)
Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng
2012-11-01
We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators arising from the convex functions that define the TV-norm and the constraint involved in the problem. The characterization of the solution via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We theoretically prove convergence of the PAPA. In numerical experiments, the performance of our algorithms, with an appropriately selected preconditioning matrix, is compared with that of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner significantly outperforms the EM-TV in all aspects, including convergence speed, noise in the reconstructed images and image quality. It also outperforms the nested EM-TV in convergence speed while providing comparable image quality.
Positive feedback induced memory effect in ischemic preconditioning.
Shi, Jichen; Xu, Jian; Zhang, Xiaorong; Yang, Ling
2012-05-07
The memory of ischemic preconditioning remains a great mystery. Brief preconditioning (several sequential regional ischemia/reperfusion episodes lasting minutes) can induce a two-phase protection that lasts up to 3 days; hence the so-called memory of preconditioning. This memory effect has been attributed to a feed-forward signaling cascade, but recent experimental observations suggest that intra-mitochondrial positive feedback may be responsible for sustaining the protective effect. The link between positive feedback and memory is yet to be determined. In this study, we used a mathematical model to describe the way in which positive feedback induces memory in the first window of cardioprotection, and we derived an explicit relationship between the memory duration and the strength of the positive feedback. Our major findings are: (1) that positive feedback relying on a hysteresis response provides an effective way of prolonging protection up to any length; and (2) that the stronger the positive feedback, the longer the memory duration. Furthermore, compared with the feed-forward signaling cascade, positive feedback may be more favored by natural systems because of its robustness and high efficiency. The mechanisms described in this study have important implications for the development of experimental approaches as well as therapeutic strategies. Copyright © 2012 Elsevier Ltd. All rights reserved.
Financial preconditions for successful community initiatives for the uninsured.
Song, Paula H; Smith, Dean G
2007-01-01
Community-based initiatives are increasingly being implemented as a strategy to address the health needs of the community, with a growing body of evidence on the successes of various initiatives. This study addresses financial status indicators (preconditions) that might predict where community-based initiatives have a better chance of success. We evaluated five community-based initiatives funded by the Communities in Charge (CIC) program sponsored by the Robert Wood Johnson Foundation. These initiatives focus on increasing access by easing financial barriers to care for the uninsured. At each site, we collected information on financial status indicators and interviewed key personnel from health services delivery and financing organizations. With full acknowledgment of the caveats associated with generalizations based on a small number of observations, we suggest four financial preconditions associated with successful initiation of CIC programs: (1) uncompensated care levels that negatively affect profitability, (2) reasonable financial stability of providers, (3) a stable health insurance market, and (4) the potential to create new sources of funding. In general, sites that demonstrate successful program initiation are financially stressed enough by uncompensated care to gain the attention of local healthcare providers. However, they are not so strained and so concerned about revenue sources that they cannot afford to participate in the initiative. In addition to political and managerial indicators, we suggest that planning for community-based initiatives should include financial indicators of current health services delivery and financing organizations and consideration of whether they meet the preconditions for success.
NASA Astrophysics Data System (ADS)
Mevel, L.; Basseville, M.; Goursat, M.
2003-01-01
Numerical results from the application of new stochastic subspace-based structural identification and damage detection methods to the steel-quake structure are discussed. Particular emphasis is put on structural model identification, for which we display some mode shapes.
NASA Astrophysics Data System (ADS)
Chen, Dan; Guo, Lin-yuan; Wang, Chen-hao; Ke, Xi-zheng
2017-07-01
Equalization can compensate for channel distortion caused by multipath effects and effectively improve the convergence of the modulation constellation diagram in an optical wireless system. In this paper, a subspace blind equalization algorithm is used to preprocess M-ary phase shift keying (MPSK) subcarrier modulation signals at the receiver. Mountain clustering is adopted to obtain the clustering centers of the MPSK modulation constellation diagram, and the modulation order is automatically identified through a k-nearest neighbor (KNN) classifier. The experiment was conducted under four different weather conditions. Experimental results show that the convergence of the constellation diagram is improved effectively after using the subspace blind equalization algorithm, which means that the accuracy of modulation recognition is increased. The correct recognition rate of 16PSK can be up to 85% under any of the weather conditions considered in this paper. Meanwhile, the correct recognition rate is highest in cloudy conditions and lowest in heavy rain.
NASA Astrophysics Data System (ADS)
Barnum, Howard; Ortiz, Gerardo; Somma, Rolando; Viola, Lorenza
2005-12-01
We define what it means for a state in a convex cone of states on a space of observables to be generalized-entangled relative to a subspace of the observables, in a general ordered linear spaces framework for operational theories. This extends the notion of ordinary entanglement in quantum information theory to a much more general framework. Some important special cases are described, in which the distinguished observables are subspaces of the observables of a quantum system, leading to results like the identification of generalized unentangled states with Lie-group-theoretic coherent states when the special observables form an irreducibly represented Lie algebra. Some open problems, including that of generalizing the semigroup of local operations with classical communication to the convex cones setting, are discussed.
Active subspace approach to reliability and safety assessments of small satellite separation
NASA Astrophysics Data System (ADS)
Hu, Xingzhi; Chen, Xiaoqian; Zhao, Yong; Tuo, Zhouhui; Yao, Wen
2017-02-01
The ever-increasing launch of small satellites demands an effective and efficient computer-aided analysis approach to shorten the ground test cycle and save economic cost. However, multiple influencing factors hamper the efficiency and accuracy of separation reliability assessment. In this study, a novel evaluation approach based on active subspace identification and response surface construction is established and verified. The formulation of small satellite separation is first derived, including the equations of motion, separation and gravity forces, and the quantity of interest. The active subspace reduces the dimension of the uncertain inputs with minimum precision loss, and a 4th degree multivariate polynomial regression (MPR) using cross validation is hand-coded for the propagation and error analysis. A common spring separation of small satellites is employed to demonstrate the accuracy and efficiency of the approach, which shows its potential for wide use in satellite separation analysis.
NASA Astrophysics Data System (ADS)
Singh, Harpreet; Arvind; Dorai, Kavita
2017-05-01
We experimentally demonstrate the efficacy of a three-layer nested Uhrig dynamical decoupling (NUDD) sequence to preserve arbitrary quantum states in a two-dimensional subspace of the four-dimensional two-qubit Hilbert space on a nuclear magnetic resonance quantum information processor. The effect of the state preservation is first studied on four known states, including two product states and two maximally entangled Bell states. Next, to evaluate the preservation capacity of the NUDD scheme, we apply it to eight randomly generated states in the subspace. Although the preservation of different states varies, the scheme on average performs very well. The complete tomographs of the states at different time points are used to compute fidelity. State fidelities using NUDD protection are compared with those obtained without using any protection. The nested pulse schemes are complex in nature and require careful experimental implementation.
NASA Astrophysics Data System (ADS)
Zhang, Yungang; Zhang, Bailing; Lu, Wenjin
2011-06-01
Histological images are important for the diagnosis of breast cancer. In this paper, we present a novel automatic breast cancer classification scheme based on histological images. The image features are extracted using the Curvelet Transform, statistics of the Gray Level Co-occurrence Matrix (GLCM) and Completed Local Binary Patterns (CLBP), respectively. The three different features are combined and used for classification. A classifier ensemble approach, called Random Subspace Ensemble (RSE), is used to select and aggregate a set of base neural network classifiers. The proposed multiple features and random subspace ensemble achieve a classification rate of 95.22% on a publicly available breast cancer image dataset, which compares favorably with the previously published result of 93.4%.
Pyshkin, P V; Luo, Da-Wei; Jing, Jun; You, J Q; Wu, Lian-Ao
2016-11-25
Holonomic quantum computation (HQC) may not show its full potential in quantum speedup due to the prerequisite of a long coherent runtime imposed by the adiabatic condition. Here we show that the conventional HQC can be dramatically accelerated by using external control fields, of which the effectiveness is exclusively determined by the integral of the control fields in the time domain. This control scheme can be realized with net zero energy cost and it is fault-tolerant against fluctuation and noise, significantly relaxing the experimental constraints. We demonstrate how to realize the scheme via decoherence-free subspaces. In this way we unify quantum robustness merits of this fault-tolerant control scheme, the conventional HQC and decoherence-free subspace, and propose an expedited holonomic quantum computation protocol.
Experimental creation of quantum Zeno subspaces by repeated multi-spin projections in diamond
Kalb, N.; Cramer, J.; Twitchen, D. J.; Markham, M.; Hanson, R.; Taminiau, T. H.
2016-01-01
Repeated observations inhibit the coherent evolution of quantum states through the quantum Zeno effect. In multi-qubit systems this effect provides opportunities to control complex quantum states. Here, we experimentally demonstrate that repeatedly projecting joint observables of multiple spins creates quantum Zeno subspaces and simultaneously suppresses the dephasing caused by a quasi-static environment. We encode up to two logical qubits in these subspaces and show that the enhancement of the dephasing time with increasing number of projections follows a scaling law that is independent of the number of spins involved. These results provide experimental insight into the interplay between frequent multi-spin measurements and slowly varying noise and pave the way for tailoring the dynamics of multi-qubit systems through repeated projections. PMID:27713397
Entanglement properties of positive operators with ranges in completely entangled subspaces
NASA Astrophysics Data System (ADS)
Sengupta, R.; Arvind; Singh, Ajit Iqbal
2014-12-01
We prove that the projection on a completely entangled subspace S of maximum dimension obtained by Parthasarathy [K. R. Parthasarathy, Proc. Indian Acad. Sci. Math. Sci. 114, 365 (2004), 10.1007/BF02829441] in a multipartite quantum system is not positive under partial transpose. We next show that a large number of positive operators with a range in S also have the same property. In this process we construct an orthonormal basis for S and provide a theorem to link the constructions of completely entangled subspaces due to Parthasarathy (as cited above), Bhat [B. V. R. Bhat, Int. J. Quantum Inf. 4, 325 (2006), 10.1142/S0219749906001797], and Johnston [N. Johnston, Phys. Rev. A 87, 064302 (2013), 10.1103/PhysRevA.87.064302].
Application of Earthquake Subspace Detectors at Kilauea and Mauna Loa Volcanoes, Hawai`i
NASA Astrophysics Data System (ADS)
Okubo, P.; Benz, H.; Yeck, W.
2016-12-01
Recent studies have demonstrated the capabilities of earthquake subspace detectors for detailed cataloging and tracking of seismicity in a number of regions and settings. We are exploring the application of subspace detectors at the United States Geological Survey's Hawaiian Volcano Observatory (HVO) to analyze seismicity at Kilauea and Mauna Loa volcanoes. Elevated levels of microseismicity and occasional swarms of earthquakes associated with active volcanism here present cataloging challenges due to the sheer number of earthquakes and an intrinsically low signal-to-noise environment featuring oceanic microseism and volcanic tremor in the ambient seismic background. With high-quality continuous recording of seismic data at HVO, we apply subspace detectors (Harris and Dodge, 2011, Bull. Seismol. Soc. Am., doi: 10.1785/0120100103) during intervals of noteworthy seismicity. Waveform templates are drawn from magnitude 2 and larger earthquakes within clusters of earthquakes cataloged in the HVO seismic database. At Kilauea, we focus on seismic swarms in the summit caldera region where, despite continuing eruptions from vents in the summit region and in the east rift zone, geodetic measurements reflect a relatively inflated volcanic state. We also focus on seismicity beneath and adjacent to Mauna Loa's summit caldera that appears to be associated with geodetic expressions of gradual volcanic inflation, and where precursory seismicity clustered prior to both of Mauna Loa's most recent eruptions, in 1975 and 1984. We recover several times more earthquakes with the subspace detectors - down to roughly 2 magnitude units below the templates, based on relative amplitudes - than the number of cataloged earthquakes. The increased numbers of detected earthquakes in these clusters, and the ability to associate and locate them, allow us to infer details of the spatial and temporal distributions and possible variations in stresses within these key regions of the volcanoes.
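The projection statistic at the heart of a subspace detector can be sketched in a few lines. The following Python sketch is not from the paper; the template construction, subspace rank, threshold, and test signal are all illustrative assumptions. It builds an orthonormal basis for the template waveforms via an SVD and slides an energy-capture statistic over a continuous record:

```python
import numpy as np

def subspace_detector(templates, data, rank, threshold):
    """Slide a subspace projection statistic over continuous data.

    templates : (n_templates, n_samples) array of aligned template waveforms
    data      : 1-D continuous record
    rank      : number of singular vectors kept (subspace dimension)
    threshold : detection threshold on the fraction of captured energy
    """
    # Orthonormal basis for the template subspace via SVD.
    U, _, _ = np.linalg.svd(templates.T, full_matrices=False)
    basis = U[:, :rank]                          # (n_samples, rank)

    n = templates.shape[1]
    detections = []
    for i in range(len(data) - n + 1):
        window = data[i:i + n]
        energy = window @ window
        if energy == 0.0:
            continue
        # Fraction of window energy captured by the template subspace.
        stat = np.sum((basis.T @ window) ** 2) / energy
        if stat > threshold:
            detections.append((i, stat))
    return detections
```

Because the statistic is normalized by window energy, it is insensitive to amplitude, which is what lets such detectors recover events well below the template magnitudes.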
Immunity of information encoded in decoherence-free subspaces to particle loss
NASA Astrophysics Data System (ADS)
Migdał, Piotr; Banaszek, Konrad
2011-11-01
We demonstrate that for an ensemble of qudits, subjected to collective decoherence in the form of perfectly correlated random SU(d) unitaries, quantum superpositions stored in the decoherence-free subspace are fully immune against the removal of one particle. This provides a feasible scheme to protect quantum information encoded in the polarization state of a sequence of photons against both collective depolarization and one-photon loss and can be demonstrated with photon quadruplets using currently available technology.
NASA Astrophysics Data System (ADS)
Sahadevan, R.; Prakash, P.
2017-01-01
We show how the invariant subspace method can be extended to time-fractional coupled nonlinear partial differential equations and construct their exact solutions. The effectiveness of the method is illustrated through the time-fractional Hunter-Saxton equation, a time-fractional coupled nonlinear diffusion system, a time-fractional coupled Boussinesq equation and the time-fractional Whitham-Broer-Kaup system. We also explain how the maximal dimension of the invariant subspace for time-fractional coupled nonlinear partial differential equations can be estimated.
NASA Astrophysics Data System (ADS)
Hsu, Wei-Ting; Loh, Chin-Hsiung; Chao, Shu-Hsien
2015-03-01
The stochastic subspace identification method (SSI) has been proven to be an efficient algorithm for the identification of linear time-invariant systems using multivariate measurements. Generally, the modal parameters estimated through SSI may be afflicted with statistical uncertainty, e.g. undefined measurement noise, non-stationary excitation, a finite number of data samples, etc. The identified results are therefore subject to variance errors. Accordingly, the stabilization diagram can help users identify the correct model, i.e. remove the spurious modes. Modal parameters are estimated at successive model orders, where the physical modes of the system are extracted and separated from the spurious modes. In addition, an uncertainty computation scheme is derived for the calculation of uncertainty bounds on the modal parameters at a given model order. The uncertainty bounds on damping ratios are particularly interesting, as damping ratios are difficult to estimate. In this paper, an automated stochastic subspace identification algorithm is addressed. First, the identification of modal parameters through covariance-driven stochastic subspace identification from output-only measurements is discussed. A systematic investigation of the criteria for the stabilization diagram is presented. Secondly, an automated algorithm for post-processing the stabilization diagram is demonstrated. Finally, the computation of uncertainty bounds for each mode at all model orders in the stabilization diagram is used to determine the system's natural frequencies and damping ratios. This study is demonstrated on the system identification of a three-span steel bridge under operational conditions. It is shown that the proposed operation procedure for automated covariance-driven stochastic subspace identification can enhance robustness and reliability in structural health monitoring.
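A minimal covariance-driven SSI pipeline can be sketched as follows. This Python sketch is illustrative only and is not the paper's implementation: it assumes a single-output record, and the lag counts and variable names are placeholders. It goes from output covariances to a Hankel matrix, an SVD, and the eigenvalues of the estimated state matrix, from which frequencies and damping ratios are read off:

```python
import numpy as np

def cov_ssi(y, dt, order, nlags):
    """Covariance-driven SSI sketch for a single-output record y.

    Returns natural frequencies (Hz) and damping ratios estimated
    from the eigenvalues of the discrete-time state matrix A.
    """
    y = y - y.mean()
    N = len(y)
    # Output covariance sequence R_i = E[y_{k+i} y_k], lags 1 .. 2*nlags.
    R = np.array([y[i:N] @ y[:N - i] / (N - i)
                  for i in range(1, 2 * nlags + 1)])
    # Hankel matrix of covariances (scalar-output case).
    H = np.array([[R[i + j] for j in range(nlags)] for i in range(nlags)])
    U, s, Vt = np.linalg.svd(H)
    # Truncated SVD gives an estimate of the observability matrix.
    O = U[:, :order] * np.sqrt(s[:order])
    # Shift invariance: O[1:] = O[:-1] @ A, solved in least squares.
    A, *_ = np.linalg.lstsq(O[:-1], O[1:], rcond=None)
    mu = np.linalg.eigvals(A)
    lam = np.log(mu.astype(complex)) / dt        # continuous-time poles
    freqs = np.abs(lam) / (2 * np.pi)
    damping = -lam.real / np.abs(lam)
    return freqs, damping
```

In practice one would repeat this over successive model orders and retain only the poles that stabilize across orders, which is exactly the role of the stabilization diagram discussed above.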
Transfer and teleportation of quantum states encoded in decoherence-free subspace
Wei Hua; Deng Zhijao; Zhang Xiaolong; Feng Mang
2007-11-15
Quantum state transfer and teleportation, with qubits encoded in internal states of atoms in cavities, among spatially separated nodes of a quantum network in a decoherence-free subspace are proposed, based on a cavity-assisted interaction with single-photon pulses. We show in detail the implementation of a logic-qubit Hadamard gate and a two-logic-qubit conditional gate, and discuss the experimental feasibility of our scheme.
Asymptotic and Bootstrap Tests for the Dimension of the Non-Gaussian Subspace
NASA Astrophysics Data System (ADS)
Nordhausen, Klaus; Oja, Hannu; Tyler, David E.; Virta, Joni
2017-06-01
Dimension reduction is often a preliminary step in the analysis of large data sets. The so-called non-Gaussian component analysis searches for a projection onto the non-Gaussian part of the data, and it is then important to know the correct dimension of the non-Gaussian signal subspace. In this paper we develop asymptotic as well as bootstrap tests for the dimension based on the popular fourth order blind identification (FOBI) method.
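The FOBI statistic behind these tests can be sketched compactly. The Python sketch below is a generic illustration of FOBI, not the authors' test procedure; the data and variable names are assumptions. After whitening, the eigenvalues of the fourth-moment scatter matrix equal p + 2 for Gaussian components, so the number of eigenvalues deviating from p + 2 estimates the non-Gaussian subspace dimension:

```python
import numpy as np

def fobi(X):
    """Fourth-order blind identification (FOBI) sketch.

    X : (n, p) data matrix.
    Returns the eigenvalues of the fourth-moment scatter of the
    whitened data and the corresponding unmixing directions.
    """
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    # Whitening via the symmetric inverse square root of the covariance.
    cov = Xc.T @ Xc / n
    evals, evecs = np.linalg.eigh(cov)
    W = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = Xc @ W
    # Fourth-moment scatter: E[||z||^2 z z^T].
    r2 = np.sum(Z ** 2, axis=1)
    B = (Z * r2[:, None]).T @ Z / n
    d, V = np.linalg.eigh(B)
    return d, W @ V
```

The asymptotic and bootstrap tests of the paper then formalize the decision of how many eigenvalues genuinely deviate from p + 2, rather than using an ad hoc cutoff as in this sketch.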
2007-11-02
DESE: another decimative subspace-based parameter estimation algorithm, recently proposed as Decimative Spectral Estimation [3]. Like HTLS, DESE makes use of the SVD of a Hankel matrix built from the data.