Preconditioned Krylov subspace methods for eigenvalue problems
Wu, Kesheng; Saad, Y.; Stathopoulos, A.
1996-12-31
The Lanczos algorithm is a commonly used method for finding a few extreme eigenvalues of symmetric matrices. It is effective if the wanted eigenvalues have large relative separations. If the separations are small, several alternatives are often used, including the shift-invert Lanczos method, the preconditioned Lanczos method, and the Davidson method. The shift-invert Lanczos method requires a direct factorization of the matrix, which is often impractical if the matrix is large. In these cases preconditioned schemes are preferred. Many applications require the solution of hundreds or thousands of eigenvalues of large sparse matrices, which poses serious challenges for both the iterative eigenvalue solver and the preconditioner. In this paper we explore several preconditioned eigenvalue solvers and identify the ones suited for finding a large number of eigenvalues. The methods discussed in this paper make up the core of a preconditioned eigenvalue toolkit under construction.
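A minimal NumPy sketch of the plain Lanczos process that these preconditioned methods aim to improve on; the test matrix, sizes, and seed are invented for illustration:

```python
import numpy as np

def lanczos_ritz(A, v0, m):
    """Plain Lanczos: build an orthonormal basis of span{v0, A v0, ..., A^(m-1) v0}
    and the tridiagonal projection T; the eigenvalues of T (Ritz values)
    approximate the extreme eigenvalues of the symmetric matrix A."""
    n = len(v0)
    V = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(m - 1)
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(m):
        w = A @ V[:, j]
        alpha[j] = V[:, j] @ w
        w -= V[:, :j + 1] @ (V[:, :j + 1].T @ w)  # full reorthogonalization
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return np.linalg.eigvalsh(T)

# A symmetric test matrix whose largest eigenvalue (100) is well separated:
# Lanczos locates it in a few dozen steps.  When separations are small, many
# more steps are needed, which motivates the preconditioned alternatives.
rng = np.random.default_rng(0)
n = 200
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
eigs = np.concatenate([np.linspace(1.0, 10.0, n - 1), [100.0]])
A = Q @ np.diag(eigs) @ Q.T
ritz = lanczos_ritz(A, rng.standard_normal(n), 30)
```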
Preserving Symmetry in Preconditioned Krylov Subspace Methods
NASA Technical Reports Server (NTRS)
Chan, Tony F.; Chow, E.; Saad, Y.; Yeung, M. C.
1996-01-01
We consider the problem of solving a linear system Ax = b when A is nearly symmetric and when the system is preconditioned by a symmetric positive definite matrix M. In the symmetric case, one can recover symmetry by using M-inner products in the conjugate gradient (CG) algorithm. This idea can also be used in the nonsymmetric case, and near symmetry can be preserved similarly. Like CG, the new algorithms are mathematically equivalent to split preconditioning, but do not require M to be factored. Better robustness in a specific sense can also be observed. When combined with truncated versions of iterative methods, tests show that preserving near symmetry in this way is more effective than the common practice of forfeiting it altogether.
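The M-inner-product idea is visible in the standard preconditioned CG loop, which is CG applied to M^(-1)A made self-adjoint again in the M-inner product; a sketch with an invented Jacobi (diagonal) preconditioner, and no factorization of M anywhere:

```python
import numpy as np

def pcg(A, b, solve_M, tol=1e-10, maxiter=500):
    """Preconditioned CG: mathematically equivalent to split preconditioning
    with M = L L^T, but only the action of M^{-1} (solve_M) is needed.
    The scalar r @ z is the M^{-1}-inner product that restores symmetry."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = solve_M(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = solve_M(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Invented SPD test system, preconditioned by its own diagonal.
rng = np.random.default_rng(1)
n = 100
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)
b = rng.standard_normal(n)
d = np.diag(A)
x = pcg(A, b, lambda r: r / d)
```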
Krylov subspace methods on supercomputers
NASA Technical Reports Server (NTRS)
Saad, Youcef
1988-01-01
A short survey of recent research on Krylov subspace methods with emphasis on implementation on vector and parallel computers is presented. Conjugate gradient methods have proven very useful on traditional scalar computers, and their popularity is likely to increase as three-dimensional models gain importance. A conservative approach to derive effective iterative techniques for supercomputers has been to find efficient parallel/vector implementations of the standard algorithms. The main source of difficulty in the incomplete factorization preconditionings is in the solution of the triangular systems at each step. A few approaches consisting of implementing efficient forward and backward triangular solutions are described in detail. Polynomial preconditioning as an alternative to standard incomplete factorization techniques is also discussed. Another efficient approach is to reorder the equations so as to improve the structure of the matrix to achieve better parallelism or vectorization. An overview of these and other ideas and their effectiveness or potential for different types of architectures is given.
Krylov subspace methods - Theory, algorithms, and applications
NASA Technical Reports Server (NTRS)
Saad, Youcef
1990-01-01
Projection methods based on Krylov subspaces for solving various types of scientific problems are reviewed. The main idea of this class of methods, when applied to a linear system Ax = b, is to generate in some manner an approximate solution to the original problem from the Krylov subspace span{b, Ab, ..., A^(m-1)b}. Thus, the original problem of size N is approximated by one of dimension m, typically much smaller than N. Krylov subspace methods have been very successful in solving linear systems and eigenvalue problems and are now becoming popular for solving nonlinear equations. The main ideas in Krylov subspace methods are shown, and their use in solving linear systems, eigenvalue problems, parabolic partial differential equations, Lyapunov matrix equations, and nonlinear systems of equations is discussed.
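The projection idea for a linear system can be sketched densely: the Arnoldi process builds an orthonormal basis of the Krylov subspace, and a Galerkin condition (the full orthogonalization method, FOM) reduces the size-N problem to an m-by-m one. Sizes and the test matrix are invented:

```python
import numpy as np

def arnoldi(A, b, m):
    """Arnoldi: orthonormal basis V of K_m(A, b) and the (m+1) x m
    Hessenberg matrix H satisfying A V_m = V_{m+1} H."""
    n = len(b)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

def fom_solve(A, b, m):
    """Galerkin projection: solve the small system H_m y = ||b|| e_1,
    then expand the approximate solution as x_m = V_m y."""
    V, H = arnoldi(A, b, m)
    e1 = np.zeros(m)
    e1[0] = np.linalg.norm(b)
    y = np.linalg.solve(H[:m, :m], e1)
    return V[:, :m] @ y

# A well-conditioned nonsymmetric test matrix (spectrum clustered near 1).
rng = np.random.default_rng(2)
n = 120
A = np.eye(n) + (0.5 / np.sqrt(n)) * rng.standard_normal((n, n))
b = rng.standard_normal(n)
x = fom_solve(A, b, 60)
```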
Krylov-subspace acceleration of time periodic waveform relaxation
Lumsdaine, A.
1994-12-31
In this paper the author uses Krylov-subspace techniques to accelerate the convergence of waveform relaxation applied to solving systems of first order time periodic ordinary differential equations. He considers the problem in the frequency domain and presents frequency dependent waveform GMRES (FDWGMRES), a member of a new class of frequency dependent Krylov-subspace techniques. FDWGMRES exhibits many desirable properties, including finite termination independent of the number of timesteps and, for certain problems, a convergence rate which is bounded from above by the convergence rate of GMRES applied to the static matrix problem corresponding to the linear time-invariant ODE.
An adaptation of Krylov subspace methods to path following
Walker, H.F.
1996-12-31
Krylov subspace methods at present constitute a very well known and highly developed class of iterative linear algebra methods. These have been effectively applied to nonlinear system solving through Newton-Krylov methods, in which Krylov subspace methods are used to solve the linear systems that characterize steps of Newton's method (the Newton equations). Here, we will discuss the application of Krylov subspace methods to path following problems, in which the object is to track a solution curve as a parameter varies. Path following methods are typically of predictor-corrector form, in which a point near the solution curve is "predicted" by some easy but relatively inaccurate means, and then a series of Newton-like corrector iterations is used to return approximately to the curve. The analogue of the Newton equation is underdetermined, and an additional linear condition must be specified to determine corrector steps uniquely. This is typically done by requiring that the steps be orthogonal to an approximate tangent direction. Augmenting the underdetermined system with this orthogonality condition in a straightforward way typically works well if direct linear algebra methods are used, but Krylov subspace methods are often ineffective with this approach. We will discuss recent work in which this orthogonality condition is imposed directly as a constraint on the corrector steps in a certain way. The means of doing this preserves problem conditioning, allows the use of preconditioners constructed for the fixed-parameter case, and has certain other advantages. Experiments on standard PDE continuation test problems indicate that this approach is effective.
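A toy sketch of the predictor-corrector scheme described above, with corrector steps constrained to be orthogonal to an approximate tangent. Dense linear algebra stands in for the Krylov solvers, and the test curve (the unit circle) is invented:

```python
import numpy as np

def trace_curve(f, jac, z0, steps=80, h=0.1):
    """Predictor-corrector path following for f(z) = 0, f: R^{n+1} -> R^n.
    Predictor: step of length h along the unit tangent t (null vector of the
    Jacobian).  Corrector: Newton-like steps s forced to satisfy t^T s = 0 by
    augmenting the underdetermined Newton equation J s = -f with that row."""
    z = z0.copy()
    path = [z.copy()]
    t_prev = None
    for _ in range(steps):
        _, _, Vt = np.linalg.svd(jac(z))
        t = Vt[-1]                            # tangent: null vector of J
        if t_prev is not None and t @ t_prev < 0:
            t = -t                            # keep a consistent orientation
        z = z + h * t                         # predictor
        for _ in range(20):                   # corrector
            r = f(z)
            if np.linalg.norm(r) < 1e-12:
                break
            M = np.vstack([jac(z), t])        # augmented square system
            z = z + np.linalg.solve(M, np.concatenate([-r, [0.0]]))
        t_prev = t
        path.append(z.copy())
    return np.array(path)

# Invented example: trace the solution curve of x^2 + y^2 - 1 = 0.
f = lambda z: np.array([z[0] ** 2 + z[1] ** 2 - 1.0])
jac = lambda z: np.array([[2.0 * z[0], 2.0 * z[1]]])
path = trace_curve(f, jac, np.array([1.0, 0.0]))
```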
NASA Astrophysics Data System (ADS)
Gatsis, John
An investigation of preconditioning techniques is presented for a Newton-Krylov algorithm that is used for the computation of steady, compressible, high Reynolds number flows about airfoils. A second-order centred-difference method is used to discretize the compressible Navier-Stokes (NS) equations that govern the fluid flow. The one-equation Spalart-Allmaras turbulence model is used. The discretized equations are solved using Newton's method and the generalized minimal residual (GMRES) Krylov subspace method is used to approximately solve the linear system. These preconditioning techniques are first applied to the solution of the discretized steady convection-diffusion equation. Various orderings, iterative block incomplete LU (BILU) preconditioning and multigrid preconditioning are explored. The baseline preconditioner is a BILU factorization of a lower-order discretization of the system matrix in the Newton linearization. An ordering based on the minimum discarded fill (MDF) ordering is developed and compared to the widely popular reverse Cuthill-McKee (RCM) ordering. An evolutionary algorithm is used to investigate and enhance this ordering. For the convection-diffusion equation, the MDF-based ordering performs well, while RCM is superior for the NS equations. Experiments for inviscid, laminar and turbulent cases are presented to show the effectiveness of iterative BILU preconditioning in terms of reducing the number of GMRES iterations, and hence the memory requirements of the Newton-Krylov algorithm. Multigrid preconditioning also reduces the number of GMRES iterations. The framework for the iterative BILU and BILU-smoothed multigrid preconditioning algorithms is presented in detail.
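Reverse Cuthill-McKee, the baseline ordering discussed here, is a breadth-first traversal that clusters nonzeros near the diagonal, reducing fill-in for ILU/BILU-type factorizations. A small self-contained sketch; the scrambled-path test matrix is invented:

```python
import numpy as np
from collections import deque

def rcm_ordering(adj):
    """Reverse Cuthill-McKee: breadth-first search started at a minimum-degree
    node, visiting unvisited neighbours in order of increasing degree, then
    reversing the resulting order.  adj: list of neighbour lists."""
    n = len(adj)
    deg = [len(a) for a in adj]
    visited = [False] * n
    order = []
    while len(order) < n:
        # start each connected component at a minimum-degree node
        start = min((i for i in range(n) if not visited[i]), key=lambda i: deg[i])
        visited[start] = True
        queue = deque([start])
        while queue:
            v = queue.popleft()
            order.append(v)
            nbrs = sorted((u for u in adj[v] if not visited[u]),
                          key=lambda u: deg[u])
            for u in nbrs:
                visited[u] = True
            queue.extend(nbrs)
    return order[::-1]

def bandwidth(A):
    rows, cols = np.nonzero(A)
    return int(np.max(np.abs(rows - cols)))

# A path graph with scrambled vertex labels: large bandwidth before
# reordering, bandwidth 1 after RCM.
rng = np.random.default_rng(6)
n = 40
perm = rng.permutation(n)
A = 2.0 * np.eye(n)
for i in range(n - 1):
    A[perm[i], perm[i + 1]] = A[perm[i + 1], perm[i]] = -1.0
adj = [[j for j in np.flatnonzero(A[i]) if j != i] for i in range(n)]
p = rcm_ordering(adj)
B = A[np.ix_(p, p)]
```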
Application of Block Krylov Subspace Spectral Methods to Maxwell's Equations
Lambers, James V.
2009-10-08
Ever since its introduction by Kane Yee over forty years ago, the finite-difference time-domain (FDTD) method has been a widely-used technique for solving the time-dependent Maxwell's equations. This paper presents an alternative approach to these equations in the case of spatially-varying electric permittivity and/or magnetic permeability, based on Krylov subspace spectral (KSS) methods. These methods have previously been applied to the variable-coefficient heat equation and wave equation, and have demonstrated high-order accuracy, as well as stability characteristic of implicit time-stepping schemes, even though KSS methods are explicit. KSS methods for scalar equations compute each Fourier coefficient of the solution using techniques developed by Gene Golub and Gerard Meurant for approximating elements of functions of matrices by Gaussian quadrature in the spectral, rather than physical, domain. We show how they can be generalized to coupled systems of equations, such as Maxwell's equations, by choosing appropriate basis functions that, while induced by this coupling, still allow efficient and robust computation of the Fourier coefficients of each spatial component of the electric and magnetic fields. We also discuss the implementation of appropriate boundary conditions for simulation on infinite computational domains, and how discontinuous coefficients can be handled.
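The Golub-Meurant technique the abstract refers to approximates quadratic forms u^T f(A) u by a Gaussian quadrature rule whose nodes and weights come from the Lanczos tridiagonal matrix. A NumPy sketch with an invented symmetric test matrix:

```python
import numpy as np

def lanczos_tridiag(A, u, m):
    n = len(u)
    V = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(m - 1)
    V[:, 0] = u / np.linalg.norm(u)
    for j in range(m):
        w = A @ V[:, j]
        alpha[j] = V[:, j] @ w
        w -= V[:, :j + 1] @ (V[:, :j + 1].T @ w)  # full reorthogonalization
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    return alpha, beta

def quad_form(A, u, f, m):
    """Golub-Meurant estimate of u^T f(A) u: the eigenvalues of the Lanczos
    tridiagonal T are the quadrature nodes and the squared first components
    of its eigenvectors are the weights, so
    u^T f(A) u ~ ||u||^2 * e_1^T f(T) e_1."""
    alpha, beta = lanczos_tridiag(A, u, m)
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    theta, S = np.linalg.eigh(T)
    return (u @ u) * np.sum(S[0, :] ** 2 * f(theta))

rng = np.random.default_rng(3)
n = 100
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
lam = np.linspace(-1.0, 1.0, n)
A = Q @ np.diag(lam) @ Q.T
u = rng.standard_normal(n)
approx = quad_form(A, u, np.exp, 20)
exact = np.sum((Q.T @ u) ** 2 * np.exp(lam))   # u^T exp(A) u, computed exactly
```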
Domain decomposed preconditioners with Krylov subspace methods as subdomain solvers
Pernice, M.
1994-12-31
Domain decomposed preconditioners for nonsymmetric partial differential equations typically require the solution of problems on the subdomains. Most implementations employ exact solvers to obtain these solutions. Consequently, work and storage requirements for the subdomain problems grow rapidly with the size of the subdomain problems. Subdomain solves constitute the single largest computational cost of a domain decomposed preconditioner, and improving the efficiency of this phase of the computation will have a significant impact on the performance of the overall method. The small local memory available on the nodes of most message-passing multicomputers motivates consideration of the use of an iterative method for solving subdomain problems. For large-scale systems of equations that are derived from three-dimensional problems, memory considerations alone may dictate the need for using iterative methods for the subdomain problems. In addition to reduced storage requirements, use of an iterative solver on the subdomains allows flexibility in specifying the accuracy of the subdomain solutions. Substantial savings in solution time are possible if the quality of the domain decomposed preconditioner is not degraded too much by relaxing the accuracy of the subdomain solutions. While some work in this direction has been conducted for symmetric problems, similar studies for nonsymmetric problems appear not to have been pursued. This work represents a first step in this direction, and explores the effectiveness of performing subdomain solves using several transpose-free Krylov subspace methods: GMRES, transpose-free QMR, CGS, and a smoothed version of CGS. Depending on the difficulty of the subdomain problem and the convergence tolerance used, a reduction in solution time is possible in addition to the reduced memory requirements. The domain decomposed preconditioner is a Schur complement method in which the interface operators are approximated using interface probing.
Druskin, V.; Lee, Ping; Knizhnerman, L.
1996-12-31
There is now a growing interest in using Krylov subspace approximations to compute the actions of matrix functions. The main application of this approach is the solution of ODE systems obtained after discretization of partial differential equations by the method of lines. In the event that applying the matrix inverse is relatively inexpensive, it is sometimes attractive to solve the ODE using the extended Krylov subspaces, generated by actions of both positive and negative matrix powers. Examples of such problems can be found frequently in computational electromagnetics.
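A sketch of the basic (positive-power) Krylov approximation of a matrix-function action, here exp(-tA)v as arises from a semi-discrete parabolic problem; the extended subspaces of the abstract would additionally include A^(-1)-powers, which this illustration omits. Matrix and sizes are invented:

```python
import numpy as np

def krylov_expm_action(A, v, m, t=1.0):
    """Approximate exp(-tA) v from K_m(A, v) for symmetric A:
    exp(-tA) v ~ ||v|| V_m exp(-t T_m) e_1, with V_m the Lanczos basis
    and T_m the tridiagonal projection of A onto the subspace."""
    n = len(v)
    V = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(m - 1)
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(m):
        w = A @ V[:, j]
        alpha[j] = V[:, j] @ w
        w -= V[:, :j + 1] @ (V[:, :j + 1].T @ w)
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    theta, S = np.linalg.eigh(T)
    e1 = np.zeros(m)
    e1[0] = 1.0
    return np.linalg.norm(v) * (V @ (S @ (np.exp(-t * theta) * (S.T @ e1))))

rng = np.random.default_rng(4)
n = 150
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
lam = np.linspace(0.0, 5.0, n)                 # nonnegative definite spectrum
A = Q @ np.diag(lam) @ Q.T
v = rng.standard_normal(n)
u = krylov_expm_action(A, v, 30)
exact = Q @ (np.exp(-lam) * (Q.T @ v))
```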
Krylov-Subspace Recycling via the POD-Augmented Conjugate-Gradient Method
Carlberg, Kevin; Forstall, Virginia; Tuminaro, Ray
2016-01-01
This paper presents a new Krylov-subspace-recycling method for efficiently solving sequences of linear systems of equations characterized by varying right-hand sides and symmetric-positive-definite matrices. As opposed to typical truncation strategies used in recycling such as deflation, we propose a truncation method inspired by goal-oriented proper orthogonal decomposition (POD) from model reduction. This idea is based on the observation that model reduction aims to compute a low-dimensional subspace that contains an accurate solution; as such, we expect the proposed method to generate a low-dimensional subspace that is well suited for computing solutions that can satisfy inexact tolerances. In particular, we propose specific goal-oriented POD 'ingredients' that align the optimality properties of POD with the objective of Krylov-subspace recycling. To compute solutions in the resulting 'augmented' POD subspace, we propose a hybrid direct/iterative three-stage method that leverages 1) the optimal ordering of POD basis vectors, and 2) well-conditioned reduced matrices. Numerical experiments performed on solid-mechanics problems highlight the benefits of the proposed method over existing approaches for Krylov-subspace recycling.
A subspace preconditioning algorithm for eigenvector/eigenvalue computation
Bramble, J.H.; Knyazev, A.V.; Pasciak, J.E.
1996-12-31
We consider the problem of computing a modest number of the smallest eigenvalues, along with orthogonal bases for the corresponding eigenspaces, of a symmetric positive definite matrix. In our applications, the dimension of the matrix is large and the cost of inverting it is prohibitive. In this paper, we develop an effective parallelizable technique for computing these eigenvalues and eigenvectors utilizing subspace iteration and preconditioning. Estimates are provided which show that the preconditioned method converges linearly and uniformly in the matrix dimension when used with a uniform preconditioner, under the assumption that the approximating subspace is close enough to the span of the desired eigenvectors.
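A compact sketch of preconditioned subspace iteration with Rayleigh-Ritz extraction. For illustration the "preconditioner" is an exact solve with A, whereas the paper's setting assumes only an approximate (uniform) preconditioner, since inverting A is prohibitive there; the 1-D Laplacian test problem is invented:

```python
import numpy as np

def precond_subspace_iteration(A, apply_T, k, iters=200, seed=0):
    """Subspace iteration for the k smallest eigenpairs of SPD A: correct the
    block X by the preconditioned residual apply_T(A X - X Theta),
    re-orthonormalize, and extract Ritz pairs (Rayleigh-Ritz)."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    X, _ = np.linalg.qr(rng.standard_normal((n, k)))
    for _ in range(iters):
        theta = np.diag(X.T @ A @ X)            # Rayleigh quotients
        R = A @ X - X * theta                   # block residual
        X, _ = np.linalg.qr(X - apply_T(R))     # preconditioned correction
        w, S = np.linalg.eigh(X.T @ A @ X)      # Rayleigh-Ritz on the subspace
        X = X @ S
    return np.diag(X.T @ A @ X), X

# Example: 1-D Laplacian, whose smallest eigenvalues are known analytically
# to be 4 sin^2(j*pi / (2(n+1))).  Here apply_T is an exact solve with A,
# standing in for an approximate inverse (e.g. a multigrid cycle).
n, k = 100, 3
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
vals, X = precond_subspace_iteration(A, lambda R: np.linalg.solve(A, R), k)
```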
A General Algorithm for Reusing Krylov Subspace Information. I. Unsteady Navier-Stokes
NASA Technical Reports Server (NTRS)
Carpenter, Mark H.; Vuik, C.; Lucas, Peter; vanGijzen, Martin; Bijl, Hester
2010-01-01
A general algorithm is developed that reuses available information to accelerate the iterative convergence of linear systems with multiple right-hand sides Ax = b^(i), which are commonly encountered in steady or unsteady simulations of nonlinear equations. The algorithm is based on the classical GMRES algorithm with eigenvector enrichment but also includes a Galerkin projection preprocessing step and several novel Krylov subspace reuse strategies. The new approach is applied to a set of test problems, including an unsteady turbulent airfoil, and is shown in some cases to provide significant improvement in computational efficiency relative to baseline approaches.
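One reuse strategy, the Galerkin projection preprocessing step, can be sketched as projecting each new right-hand side onto a retained subspace before iterating. The recycled subspace below is idealized as exact extremal eigenvectors, which is an invention for illustration:

```python
import numpy as np

def cg(A, b, x0, tol=1e-10, maxiter=1000):
    """Plain conjugate gradients from a given initial guess; returns the
    approximate solution and the iteration count."""
    x = x0.copy()
    r = b - A @ x
    p = r.copy()
    rr = r @ r
    it = 0
    while np.sqrt(rr) > tol * np.linalg.norm(b) and it < maxiter:
        Ap = A @ p
        alpha = rr / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rr_new = r @ r
        p = r + (rr_new / rr) * p
        rr = rr_new
        it += 1
    return x, it

def galerkin_guess(A, b, W):
    """Galerkin projection preprocessing: solve the small system
    (W^T A W) y = W^T b and start the Krylov iteration from x0 = W y."""
    return W @ np.linalg.solve(W.T @ A @ W, W.T @ b)

# SPD system with three tiny outlying eigenvalues; W holds the associated
# eigenvectors (a stand-in for directions recycled from earlier solves).
rng = np.random.default_rng(7)
n = 100
Qm, _ = np.linalg.qr(rng.standard_normal((n, n)))
lam = np.concatenate([[1e-3, 2e-3, 3e-3], np.linspace(1.0, 2.0, n - 3)])
A = Qm @ np.diag(lam) @ Qm.T
b = rng.standard_normal(n)
W = Qm[:, :3]
x_cold, it_cold = cg(A, b, np.zeros(n))
x_warm, it_warm = cg(A, b, galerkin_guess(A, b, W))
```

The warm start deflates the troublesome directions, so the remaining iteration sees an effectively well-conditioned operator and converges in fewer steps.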
Druskin, V.; Knizhnerman, L.
1994-12-31
The authors solve the Cauchy problem for an ODE system Au + ∂u/∂t = 0, u|_(t=0) = φ, where A is a square real nonnegative definite symmetric matrix of order N and φ is a vector from R^N. The stiffness matrix A is obtained from semi-discretization of a parabolic equation or system with time-independent coefficients. The authors are particularly interested in large stiff 3-D problems for the scalar diffusion and vectorial Maxwell's equations. First they consider an explicit method in which the solution on a whole time interval is projected on a Krylov subspace generated by A. Then they suggest another Krylov subspace with better approximating properties using powers of an implicit transition operator. These Krylov subspace methods generate polynomial approximations for the solution of the ODE that are optimal in a spectral sense, similar to CG for systems of linear equations.
A new Krylov-subspace method for symmetric indefinite linear systems
Freund, R.W.; Nachtigal, N.M.
1994-10-01
Many important applications involve the solution of large linear systems with symmetric, but indefinite coefficient matrices. For example, such systems arise in incompressible flow computations and as subproblems in optimization algorithms for linear and nonlinear programs. Existing Krylov-subspace iterations for symmetric indefinite systems, such as SYMMLQ and MINRES, require the use of symmetric positive definite preconditioners, which is a rather unnatural restriction when the matrix itself is highly indefinite with both many positive and many negative eigenvalues. In this note, the authors describe a new Krylov-subspace iteration for solving symmetric indefinite linear systems that can be combined with arbitrary symmetric preconditioners. The algorithm can be interpreted as a special case of the quasi-minimal residual method for general non-Hermitian linear systems, and like the latter, it produces iterates defined by a quasi-minimal residual property. The proposed method has the same work and storage requirements per iteration as SYMMLQ or MINRES; however, it usually converges in considerably fewer iterations. Results of numerical experiments are reported.
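The minimal-residual property for a symmetric indefinite system can be illustrated densely: build a Krylov basis and minimize the residual over the subspace, which for symmetric A reproduces the MINRES iterates (a production code would use a three-term recurrence with O(1) storage, and the abstract's QMR variant additionally admits arbitrary symmetric preconditioners). The matrix and sizes are invented:

```python
import numpy as np

def minimal_residual_solve(A, b, m):
    """Dense sketch of a minimal-residual Krylov solve: orthonormal basis of
    K_m(A, b) via Arnoldi, then a least-squares solve with the small
    Hessenberg (tridiagonal, for symmetric A) matrix."""
    n = len(b)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    V[:, 0] = b / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m + 1)
    e1[0] = beta
    y, *_ = np.linalg.lstsq(H, e1, rcond=None)
    return V[:, :m] @ y

# A highly indefinite symmetric matrix: half the spectrum in [1, 2],
# half in [-2, -1].  CG would break down here; the minimal-residual
# iteration converges steadily.
rng = np.random.default_rng(8)
n = 120
Qm, _ = np.linalg.qr(rng.standard_normal((n, n)))
lam = np.concatenate([np.linspace(1.0, 2.0, n // 2),
                      np.linspace(-2.0, -1.0, n // 2)])
A = Qm @ np.diag(lam) @ Qm.T
b = rng.standard_normal(n)
x = minimal_residual_solve(A, b, 80)
```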
Krylov subspace iterative methods for boundary element method based near-field acoustic holography.
Valdivia, Nicolas; Williams, Earl G
2005-02-01
The reconstruction of the acoustic field for general surfaces is obtained from the solution of a matrix system that results from a boundary integral equation discretized using boundary element methods. The solution to the resultant matrix system is obtained using iterative regularization methods that counteract the effect of noise on the measurements. These methods do not require the calculation of the singular value decomposition, which can be expensive when the matrix system is considerably large. Krylov subspace methods are iterative methods that exhibit the phenomenon known as "semi-convergence," i.e., the optimal regularization solution is obtained after a few iterations. If the iteration is not stopped, the method converges to a solution that generally is totally corrupted by errors in the measurements. For these methods the number of iterations plays the role of the regularization parameter. We focus our attention on the study of the regularizing properties of Krylov subspace methods such as conjugate gradients, least-squares QR (LSQR), and the recently proposed hybrid method. A discussion and comparison of the available stopping rules is included. A vibrating plate is considered as an example to validate our results.
NASA Technical Reports Server (NTRS)
Sidi, Avram
1992-01-01
Let F(z) be a vector-valued function F: C → C^N, which is analytic at z = 0 and meromorphic in a neighborhood of z = 0, and let its Maclaurin series be given. We use vector-valued rational approximation procedures for F(z) that are based on its Maclaurin series, in conjunction with power iterations, to develop bona fide generalizations of the power method for an arbitrary N x N matrix that may or may not be diagonalizable. These generalizations can be used to obtain simultaneously several of the largest distinct eigenvalues and the corresponding invariant subspaces, and a detailed convergence theory is presented for them. In addition, it is shown that the generalized power methods of this work are equivalent to some Krylov subspace methods, among them the methods of Arnoldi and Lanczos. Thus, the theory provides a set of completely new results and constructions for these Krylov subspace methods. At the same time, this theory suggests a new mode of usage for these Krylov subspace methods that was observed to possess computational advantages over their common mode of usage.
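For reference, the classical power method that the paper generalizes; its iterates v, Av, A²v, ... span exactly the Krylov subspaces that Arnoldi and Lanczos exploit, which is the equivalence developed above. The test matrix and seed are invented:

```python
import numpy as np

def power_method(A, v0, iters=100):
    """Classical power method: repeated application of A amplifies the
    dominant eigendirection; the Rayleigh quotient estimates the eigenvalue."""
    v = v0 / np.linalg.norm(v0)
    for _ in range(iters):
        w = A @ v
        v = w / np.linalg.norm(w)
    return v @ (A @ v), v

# Symmetric test matrix with a well-separated dominant eigenvalue (10),
# so convergence is fast; a small gap would make Krylov methods, which use
# the whole iterate history, far more efficient than the power method.
rng = np.random.default_rng(5)
n = 80
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
lam_true = np.concatenate([[10.0], np.linspace(1.0, 5.0, n - 1)])
A = Q @ np.diag(lam_true) @ Q.T
lam, vec = power_method(A, rng.standard_normal(n))
```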
Chen, G.; Chacón, L.; Leibs, C.A.; Knoll, D.A.; Taitano, W.
2014-02-01
A recent proof-of-principle study proposes an energy- and charge-conserving, nonlinearly implicit electrostatic particle-in-cell (PIC) algorithm in one dimension [9]. The algorithm in the reference employs an unpreconditioned Jacobian-free Newton–Krylov method, which ensures nonlinear convergence at every timestep (resolving the dynamical timescale of interest). Kinetic enslavement, which is one key component of the algorithm, not only enables fully implicit PIC as a practical approach, but also allows preconditioning the kinetic solver with a fluid approximation. This study proposes such a preconditioner, in which the linearized moment equations are closed with moments computed from particles. Effective acceleration of the linear GMRES solve is demonstrated, on both uniform and non-uniform meshes. The algorithm performance is largely insensitive to the electron–ion mass ratio. Numerical experiments are performed on a 1D multi-scale ion acoustic wave test problem.
NASA Astrophysics Data System (ADS)
Singer, B. Sh.
2008-12-01
The paper presents a new code for modelling electromagnetic fields in complicated 3-D environments and provides examples of the code application. The code is based on an integral equation (IE) for the scattered electromagnetic field, presented in the form used by the Modified Iterative Dissipative Method (MIDM). This IE possesses contraction properties that allow it to be solved iteratively. As a result, for an arbitrary earth model and any source of the electromagnetic field, the sequence of approximations converges to the solution at any frequency. The system of linear equations that represents a finite-dimensional counterpart of the continuous IE is derived using a projection definition of the system matrix. According to this definition, the matrix is calculated by integrating the Green's function over the `source' and `receiver' cells of the numerical grid. Such a system preserves contraction properties of the continuous equation and can be solved using the same iterative technique. The condition number of the system matrix and, therefore, the convergence rate depends only on the physical properties of the model under consideration. In particular, these parameters remain independent of the numerical grid used for numerical simulation. Applied to the system of linear equations, the iterative perturbation approach generates a sequence of approximations, converging to the solution. The number of iterations is significantly reduced by finding the best possible approximant inside the Krylov subspace, which spans either all accumulated iterates or, if it is necessary to save the memory, only a limited number of the latest iterates. Optimization significantly reduces the number of iterates and weakens its dependence on the lateral contrast of the model. Unlike more traditional conjugate gradient approaches, the iterations are terminated when the approximate solution reaches the requested relative accuracy. The number of the required iterates, which for simple
NASA Astrophysics Data System (ADS)
Saadat, Amir; Khomami, Bamin
2014-05-01
Excluded volume and hydrodynamic interactions play a central role in macromolecular dynamics under equilibrium and non-equilibrium settings. The high computational cost of incorporating the influence of hydrodynamic interaction in meso-scale simulation of polymer dynamics has motivated much research on development of high fidelity and cost efficient techniques. Among them, the Chebyshev polynomial based techniques and the Krylov subspace methods are most promising. To this end, in this study we have developed a series of semi-implicit predictor-corrector Brownian dynamics algorithms for bead-spring chain micromechanical model of polymers that utilizes either the Chebyshev or the Krylov framework. The efficiency and fidelity of these new algorithms in equilibrium (radius of gyration and diffusivity) and non-equilibrium conditions (transient planar extensional flow) are demonstrated with particular emphasis on the new enhancements of the Chebyshev polynomial and the Krylov subspace methods. In turn, the algorithm with the highest efficiency and fidelity, namely, the Krylov subspace method, is used to simulate dilute solutions of high molecular weight polystyrene in uniaxial extensional flow. Finally, it is demonstrated that the bead-spring Brownian dynamics simulation with appropriate inclusion of excluded volume and hydrodynamic interactions can quantitatively predict the observed extensional hardening of polystyrene dilute solutions over a broad molecular weight range.
2012-03-22
... n as defined in problem (1); our method is based on accelerating the simple subspace (or simultaneous) iteration (SSI) method by solving problem (2) in a chosen subspace at each iteration. Starting from an initial point X(0) ∈ R^(m×k), SSI computes the next iterate ...
Numerical simulations of microwave heating of liquids: enhancements using Krylov subspace methods
NASA Astrophysics Data System (ADS)
Lollchund, M. R.; Dookhitram, K.; Sunhaloo, M. S.; Boojhawon, R.
2013-04-01
In this paper, we compare the performance of three iterative solvers for the large sparse linear systems arising in the numerical computation of the incompressible Navier-Stokes (NS) equations. These equations are employed mainly in the simulation of microwave heating of liquids. The emphasis of this work is on the application of Krylov projection techniques such as the Generalized Minimal Residual (GMRES) method to solve the pressure Poisson equations that result from discretisation of the NS equations. The performance of the GMRES method is compared with the traditional Gauss-Seidel (GS) and point successive over-relaxation (PSOR) techniques through their application to simulate the dynamics of water housed inside a vertical cylindrical vessel which is subjected to microwave radiation. It is found that as the mesh size increases, GMRES gives the fastest convergence in terms of computational time and number of iterations.
Luanjing Guo; Chuan Lu; Hai Huang; Derek R. Gaston
2012-06-01
Systems of multicomponent reactive transport in porous media that are large, highly nonlinear, and tightly coupled due to complex nonlinear reactions and strong solution-media interactions are often described by a system of coupled nonlinear partial differential algebraic equations (PDAEs). A preconditioned Jacobian-Free Newton-Krylov (JFNK) solution approach is applied to solve the PDAEs in a fully coupled, fully implicit manner. The advantage of the JFNK method is that it avoids explicitly computing and storing the Jacobian matrix during Newton nonlinear iterations for computational efficiency considerations. This solution approach is also enhanced by physics-based blocking preconditioning and multigrid algorithm for efficient inversion of preconditioners. Based on the solution approach, we have developed a reactive transport simulator named RAT. Numerical results are presented to demonstrate the efficiency and massive scalability of the simulator for reactive transport problems involving strong solution-mineral interactions and fast kinetics. It has been applied to study the highly nonlinearly coupled reactive transport system of a promising in situ environmental remediation that involves urea hydrolysis and calcium carbonate precipitation.
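The JFNK idea, Newton's method in which the Krylov solver only ever needs Jacobian-vector products approximated by a finite difference so the Jacobian is never formed or stored, can be sketched on a small 1-D reaction-diffusion problem (unpreconditioned, dense Arnoldi; the test problem and sizes are invented):

```python
import numpy as np

def gmres_matfree(apply_A, b, maxit):
    """Unpreconditioned GMRES needing only the action v -> A v."""
    n = len(b)
    V = np.zeros((n, maxit + 1))
    H = np.zeros((maxit + 1, maxit))
    beta = np.linalg.norm(b)
    V[:, 0] = b / beta
    k = maxit
    for j in range(maxit):
        w = apply_A(V[:, j])
        for i in range(j + 1):
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:            # happy breakdown: subspace is invariant
            k = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(k + 1)
    e1[0] = beta
    y, *_ = np.linalg.lstsq(H[:k + 1, :k], e1, rcond=None)
    return V[:, :k] @ y

def jfnk(F, u0, tol=1e-10, m=60, eps=1e-7):
    """Jacobian-free Newton-Krylov: each Newton system J(u) s = -F(u) is
    solved by GMRES with the matrix-vector product replaced by the finite
    difference J(u) v ~ (F(u + eps*v) - F(u)) / eps."""
    u = u0.copy()
    for _ in range(50):
        Fu = F(u)
        if np.linalg.norm(Fu) < tol:
            break
        u = u + gmres_matfree(lambda v: (F(u + eps * v) - Fu) / eps, -Fu, m)
    return u

# Invented test problem: -u'' + u^3 = g on (0, 1), with g manufactured so
# that u(x) = sin(pi x) is the exact discrete solution.
n = 60
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
L = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
u_star = np.sin(np.pi * x)
g = L @ u_star + u_star**3
F = lambda u: L @ u + u**3 - g
u = jfnk(F, np.zeros(n))
```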
HyeongKae Park; Robert R. Nourgaliev; Richard C. Martineau; Dana A. Knoll
2008-09-01
We present a high-order accurate spatiotemporal discretization of all-speed flow solvers using a Jacobian-free Newton-Krylov framework. One of the key developments in this work is the physics-based preconditioner for all-speed flow, which makes use of traditional semi-implicit schemes. The physics-based preconditioner is developed in the primitive-variable form, which allows a straightforward separation of physical phenomena. Numerical examples demonstrate that the developed preconditioner effectively reduces the number of Krylov iterations, and that the efficiency is independent of the Mach number and mesh size under a fixed CFL condition.
Luanjing Guo; Hai Huang; Derek Gaston; Cody Permann; David Andrs; George Redden; Chuan Lu; Don Fox; Yoshiko Fujita
2013-03-01
Modeling large multicomponent reactive transport systems in porous media is particularly challenging when the governing partial differential algebraic equations (PDAEs) are highly nonlinear and tightly coupled due to complex nonlinear reactions and strong solution-media interactions. Here we present a preconditioned Jacobian-Free Newton-Krylov (JFNK) solution approach to solve the governing PDAEs in a fully coupled and fully implicit manner. A well-known advantage of the JFNK method is that it does not require explicitly computing and storing the Jacobian matrix during Newton nonlinear iterations. Our approach further enhances the JFNK method by utilizing physics-based, block preconditioning and a multigrid algorithm for efficient inversion of the preconditioner. This preconditioning strategy accounts for self- and optionally, cross-coupling between primary variables using diagonal and off-diagonal blocks of an approximate Jacobian, respectively. Numerical results are presented demonstrating the efficiency and massive scalability of the solution strategy for reactive transport problems involving strong solution-mineral interactions and fast kinetics. We found that the physics-based, block preconditioner significantly decreases the number of linear iterations, directly reducing computational cost; and the strongly scalable algebraic multigrid algorithm for approximate inversion of the preconditioner leads to excellent parallel scaling performance.
NASA Astrophysics Data System (ADS)
Viallet, M.; Goffrey, T.; Baraffe, I.; Folini, D.; Geroux, C.; Popov, M. V.; Pratt, J.; Walder, R.
2016-02-01
This work is a continuation of our efforts to develop an efficient implicit solver for multidimensional hydrodynamics for the purpose of studying important physical processes in stellar interiors, such as turbulent convection and overshooting. We present an implicit solver that results from the combination of a Jacobian-free Newton-Krylov method and a preconditioning technique tailored to the inviscid, compressible equations of stellar hydrodynamics. We assess the accuracy and performance of the solver for both 2D and 3D problems for Mach numbers down to 10⁻⁶. Although our applications concern flows in stellar interiors, the method can be applied to general advection and/or diffusion-dominated flows. The method presented in this paper opens up new avenues in 3D modeling of realistic stellar interiors allowing the study of important problems in stellar structure and evolution.
Starke, G.
1994-12-31
For nonselfadjoint elliptic boundary value problems which are preconditioned by a substructuring method, i.e., nonoverlapping domain decomposition, the author introduces and studies the concept of subspace orthogonalization. In subspace orthogonalization variants of Krylov methods, the computation of inner products and vector updates, and the storage of basis elements, are restricted to a (presumably small) subspace, in this case the edge and vertex unknowns with respect to the partitioning into subdomains. The author investigates subspace orthogonalization for two specific iterative algorithms, GMRES and the full orthogonalization method (FOM). This is intended to eliminate certain drawbacks of the Arnoldi-based Krylov subspace methods mentioned above. Above all, the length of the Arnoldi recurrences grows linearly with the iteration index, which is therefore restricted to the number of basis elements that can be held in memory. Restarts become necessary, and this often results in much slower convergence. The subspace orthogonalization methods, in contrast, require the storage of only the edge and vertex unknowns of each basis element, which means that one can iterate much longer before restarts become necessary. Moreover, the computation of inner products is also restricted to the edge and vertex points, which avoids the disturbance of the computational flow associated with the solution of subdomain problems. The author views subspace orthogonalization as an alternative to restarting or truncating Krylov subspace methods for nonsymmetric linear systems of equations. Instead of shortening the recurrences, one restricts them to a subset of the unknowns which has to be carefully chosen in order to be able to extend this partial solution to the entire space. The author discusses the convergence properties of these iteration schemes and their advantages compared to restarted or truncated versions of Krylov methods applied to the full preconditioned system.
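The key modification is small: in the Arnoldi loop, every inner product and norm is taken over the restricted index set only. The sketch below keeps the full basis vectors for clarity (the actual method stores only their restricted components); the matrix and index set `E` are illustrative:

```python
import numpy as np

def arnoldi_subspace(A, b, E, m):
    """Arnoldi-type recurrence in which all inner products and norms use
    only the components indexed by E (e.g. the edge/vertex unknowns)."""
    Q = [b / np.linalg.norm(b[E])]
    H = np.zeros((m + 1, m))
    for j in range(m):
        w = A @ Q[j]
        for i in range(j + 1):
            H[i, j] = Q[i][E] @ w[E]        # inner product restricted to E
            w = w - H[i, j] * Q[i]
        H[j + 1, j] = np.linalg.norm(w[E])  # norm restricted to E
        Q.append(w / H[j + 1, j])
    return np.column_stack(Q), H

rng = np.random.default_rng(0)
n, m = 12, 4
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)
E = np.arange(6)        # hypothetical interface (edge/vertex) unknowns
Q, H = arnoldi_subspace(A, b, E, m)
```

By construction, the restricted basis Q[E] is orthonormal even though the full vectors are not.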
Globally convergent techniques in nonlinear Newton-Krylov
NASA Technical Reports Server (NTRS)
Brown, Peter N.; Saad, Youcef
1989-01-01
Some convergence theory is presented for nonlinear Krylov subspace methods. The basic idea of these methods is to use variants of Newton's iteration in conjunction with a Krylov subspace method for solving the Jacobian linear systems. These methods are variants of inexact Newton methods where the approximate Newton direction is taken from a subspace of small dimensions. The main focus is to analyze these methods when they are combined with global strategies such as linesearch techniques and model trust region algorithms. Most of the convergence results are formulated for projection onto general subspaces rather than just Krylov subspaces.
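A minimal sketch of such a globalized Newton-Krylov iteration, combining a GMRES direction (from a finite-difference Jacobian-vector product) with a backtracking linesearch on the merit function ½||F(u)||². The residual `F`, starting point, and tolerances are illustrative:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def newton_krylov_linesearch(F, u, eta=1e-4, max_newton=50):
    """Inexact Newton: GMRES supplies the direction, and a backtracking
    linesearch on 0.5*||F(u)||^2 enforces global progress."""
    for _ in range(max_newton):
        r = F(u)
        phi = 0.5 * (r @ r)
        if np.sqrt(2.0 * phi) < 1e-10:
            break
        eps = 1e-7
        Jv = LinearOperator((u.size, u.size),
                            matvec=lambda v: (F(u + eps * v) - r) / eps)
        d, _ = gmres(Jv, -r, atol=1e-12)
        t = 1.0
        # Backtrack until a sufficient-decrease condition holds.
        while 0.5 * np.linalg.norm(F(u + t * d))**2 > (1.0 - 2.0 * eta * t) * phi:
            t *= 0.5
            if t < 1e-12:
                break
        u = u + t * d
    return u

# Hypothetical test system with root (1, 1).
F = lambda u: np.array([u[0]**2 - u[1], u[1]**2 - u[0]])
u = newton_krylov_linesearch(F, np.array([1.5, 1.5]))
```

A trust-region globalization would replace the linesearch by constraining the step to a region in which the linear model is trusted, as analyzed in the paper.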
A Newton-Krylov Approach to Aerodynamic Shape Optimization in Three Dimensions
NASA Astrophysics Data System (ADS)
Leung, Timothy Man-Ming
A Newton-Krylov algorithm is presented for aerodynamic shape optimization in three dimensions using the Euler equations. An inexact-Newton method is used in the flow solver, a discrete-adjoint method computes the gradient, and a quasi-Newton optimizer finds the optimum. A Krylov subspace method with approximate-Schur preconditioning is used to solve both the flow equation and the adjoint equation. Basis spline surfaces are used to parameterize the geometry, and a fast algebraic algorithm is used for grid movement. Accurate discrete-adjoint gradients can be obtained in approximately one-fourth the time required for a converged flow solution. Single- and multi-point lift-constrained drag-minimization cases are presented for wing design at transonic speeds. In all cases, the optimizer is able to efficiently decrease the objective function and gradient for problems with hundreds of design variables.
Combined incomplete LU and strongly implicit procedure preconditioning
Meese, E.A.
1996-12-31
For the solution of large sparse linear systems of equations, Krylov-subspace methods have gained great merit. Their efficiency is, however, largely dependent upon preconditioning of the equation system. A family of matrix factorisations often used for preconditioning is obtained from a truncated Gaussian elimination, ILU(p). Less common, supposedly due to its restriction to certain sparsity patterns, are the factorisations generated by the strongly implicit procedure (SIP). The ideas from ILU(p) and SIP are used in this paper to construct a generalized strongly implicit procedure, applicable to matrices with any sparsity pattern. The new algorithm has been run on some test equations, and efficiency improvements over ILU(p) were found.
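A drop-tolerance incomplete LU from SciPy can stand in for ILU(p) to show the effect such preconditioning has on Krylov iteration counts. The Poisson test matrix and tolerances below are illustrative:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import gmres, spilu, LinearOperator

# 2-D Poisson matrix as a stand-in for a large sparse system.
n = 16
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
A = (sp.kron(sp.eye(n), T) + sp.kron(T, sp.eye(n))).tocsc()
b = np.ones(n * n)

# Incomplete LU with fill control; drop_tol/fill_factor play the role that
# the level-of-fill parameter p plays in ILU(p).
ilu = spilu(A, drop_tol=1e-4, fill_factor=10)
M = LinearOperator(A.shape, matvec=ilu.solve)

iters = {"none": 0, "ilu": 0}
x0, _ = gmres(A, b, atol=1e-10, restart=30, maxiter=2000,
              callback=lambda rk: iters.__setitem__("none", iters["none"] + 1),
              callback_type="pr_norm")
x1, _ = gmres(A, b, M=M, atol=1e-10, restart=30, maxiter=2000,
              callback=lambda rk: iters.__setitem__("ilu", iters["ilu"] + 1),
              callback_type="pr_norm")
```

The preconditioned run typically needs an order of magnitude fewer iterations on this problem; SIP-type factorizations would be applied in exactly the same way, through `M`.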
Minimal Krylov Subspaces for Dimension Reduction
2013-01-01
Krylov subspace methods for the Dirac equation
NASA Astrophysics Data System (ADS)
Beerwerth, Randolf; Bauke, Heiko
2015-03-01
The Lanczos algorithm is evaluated for solving the time-independent as well as the time-dependent Dirac equation with arbitrary electromagnetic fields. We demonstrate that the Lanczos algorithm can yield very precise eigenenergies and allows very precise time propagation of relativistic wave packets. The unboundedness of the Dirac Hamiltonian does not hinder the applicability of the Lanczos algorithm. As the Lanczos algorithm requires only matrix-vector products and inner products, which both can be efficiently parallelized, it is an ideal method for large-scale calculations. The excellent parallelization capabilities are demonstrated by a parallel implementation of the Dirac Lanczos propagator utilizing the Message Passing Interface standard.
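Since the propagator needs only matrix-vector and inner products, a minimal Lanczos time step fits in a few lines: project the Hamiltonian onto the Krylov space, exponentiate the small tridiagonal matrix, and lift back. The small dense Hamiltonian below is an illustrative stand-in for the discretized Dirac operator:

```python
import numpy as np
from scipy.linalg import expm

def lanczos_propagate(H, psi, dt, m=20):
    """Approximate exp(-1j*dt*H) @ psi in an m-dimensional Krylov space.
    Only matrix-vector products with the (Hermitian) H are required."""
    n = psi.size
    V = np.zeros((n, m), dtype=complex)
    alpha = np.zeros(m)
    beta = np.zeros(m - 1)
    V[:, 0] = psi / np.linalg.norm(psi)
    for j in range(m):
        w = H @ V[:, j]
        alpha[j] = np.real(np.vdot(V[:, j], w))
        w = w - alpha[j] * V[:, j]
        if j > 0:
            w = w - beta[j - 1] * V[:, j - 1]
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]     # assumes no breakdown
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    e1 = np.zeros(m); e1[0] = 1.0
    return np.linalg.norm(psi) * (V @ (expm(-1j * dt * T) @ e1))

# Demonstration on a small Hermitian test Hamiltonian.
rng = np.random.default_rng(0)
H = rng.standard_normal((8, 8)); H = 0.5 * (H + H.T)
psi = rng.standard_normal(8).astype(complex)
phi = lanczos_propagate(H, psi, dt=0.5, m=8)
```

Both the matvec `H @ v` and the inner products parallelize naturally, which is the property the paper exploits in its MPI implementation.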
Krylov subspace acceleration of waveform relaxation
Lumsdaine, A.; Wu, Deyun
1996-12-31
Standard solution methods for numerically solving time-dependent problems typically begin by discretizing the problem on a uniform time grid and then sequentially solving for successive time points. The initial time discretization imposes a serialization to the solution process and limits parallel speedup to the speedup available from parallelizing the problem at any given time point. This bottleneck can be circumvented by the use of waveform methods in which multiple time-points of the different components of the solution are computed independently. With the waveform approach, a problem is first spatially decomposed and distributed among the processors of a parallel machine. Each processor then solves its own time-dependent subsystem over the entire interval of interest using previous iterates from other processors as inputs. Synchronization and communication between processors take place infrequently, and communication consists of large packets of information - discretized functions of time (i.e., waveforms).
NASA Astrophysics Data System (ADS)
Jia, Jinhong; Wang, Hong
2015-10-01
Numerical methods for fractional differential equations generate full stiffness matrices, which were traditionally solved via Gaussian-type direct solvers that require O(N³) of computational work and O(N²) of memory to store, where N is the number of spatial grid points in the discretization. We develop a preconditioned fast Krylov subspace iterative method for the efficient and faithful solution of finite volume schemes defined on a locally refined composite mesh for fractional differential equations to resolve boundary layers of the solutions. Numerical results are presented to show the utility of the method.
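Fast Krylov methods for such problems rest on the fact that the dense stiffness matrices carry Toeplitz-like structure, so a matrix-vector product costs O(N log N) via circulant embedding and the FFT instead of O(N²). A generic sketch of that kernel (not the authors' composite-mesh scheme):

```python
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_matvec(c, r, x):
    """O(N log N) Toeplitz matrix-vector product (first column c, first
    row r) by embedding in a circulant of size 2N and using the FFT."""
    n = len(x)
    col = np.concatenate([c, [0.0], r[:0:-1]])   # circulant first column
    xp = np.concatenate([x, np.zeros(n)])
    y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(xp))
    return y[:n].real

rng = np.random.default_rng(0)
n = 128
c = rng.standard_normal(n)                                 # first column
r = np.concatenate([[c[0]], rng.standard_normal(n - 1)])   # first row
x = rng.standard_normal(n)
y = toeplitz_matvec(c, r, x)    # equals toeplitz(c, r) @ x, without O(N^2) work
```

Wrapping this matvec in a preconditioned Krylov iteration avoids ever forming the dense matrix.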
Portable, parallel, reusable Krylov space codes
Smith, B.; Gropp, W.
1994-12-31
Krylov space accelerators are an important component of many algorithms for the iterative solution of linear systems. Each Krylov space method has its own particular advantages and disadvantages, so it is desirable to have a variety of them available, all with an identical, easy-to-use interface. A common complaint application programmers have with available software libraries for the iterative solution of linear systems is that they require the programmer to use the data structures provided by the library; the library is not able to work with the data structures of the application code. Hence, application programmers find themselves constantly recoding the Krylov space algorithms. The Krylov space package (KSP) is a data-structure-neutral implementation of a variety of Krylov space methods, including preconditioned conjugate gradient, GMRES, BiCG-Stab, transpose-free QMR, and CGS. Unlike all other software libraries for linear systems that the authors are aware of, KSP will work with any application code's data structures, in Fortran or C. Due to its data-structure-neutral design, KSP runs unchanged on both sequential and parallel machines. KSP has been tested on workstations, the Intel i860 and Paragon, the Thinking Machines CM-5, and the IBM SP1.
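The data-structure-neutral idea survives today in interfaces such as SciPy's `LinearOperator`: the solver sees only a user-supplied matvec acting on the application's own data layout and never touches a stored matrix. A sketch, with a 2-D stencil standing in for an application data structure:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

# The solver only calls matvec; the "application data structure" here is a
# five-point Laplacian stencil applied directly on a 2-D grid.
n = 16
def apply_laplacian(v):
    u = v.reshape(n, n)
    out = 4.0 * u
    out[1:, :] -= u[:-1, :]
    out[:-1, :] -= u[1:, :]
    out[:, 1:] -= u[:, :-1]
    out[:, :-1] -= u[:, 1:]
    return out.ravel()

A = LinearOperator((n * n, n * n), matvec=apply_laplacian)
b = np.ones(n * n)
x, info = cg(A, b, atol=1e-12)
```

Swapping `cg` for `gmres` or another Krylov method requires no change to the application side, which is exactly the uniform-interface goal described above.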
Approximate inverse preconditioning of iterative methods for nonsymmetric linear systems
Benzi, M.; Tuma, M.
1996-12-31
A method for computing an incomplete factorization of the inverse of a nonsymmetric matrix A is presented. The resulting factorized sparse approximate inverse is used as a preconditioner in the iterative solution of Ax = b by Krylov subspace methods.
NASA Astrophysics Data System (ADS)
Hwang, Feng-Nan; Cai, Shang-Rong; Shao, Yun-Long; Wu, Jong-Shinn
2010-09-01
We investigate fully parallel Newton-Krylov-Schwarz (NKS) algorithms for solving the large sparse nonlinear systems of equations arising from the finite element discretization of the three-dimensional Poisson-Boltzmann equation (PBE), which is often used to describe the colloidal phenomena of an electric double layer around charged objects in colloidal and interfacial science. The NKS algorithm employs an inexact Newton method with backtracking (INB) as the nonlinear solver in conjunction with a Krylov subspace method as the linear solver for the corresponding Jacobian system. An overlapping Schwarz method is used as a preconditioner to accelerate the convergence of the linear solver. Two test cases, including two isolated charged particles and two colloidal particles in a cylindrical pore, are used as benchmark problems to validate the correctness of our parallel NKS-based PBE solver. In addition, a truly three-dimensional case, which models the interaction between two charged spherical particles within a rough charged micro-capillary, is simulated to demonstrate the applicability of our PBE solver to a problem with complex geometry. Finally, based on the results obtained from a PC cluster of parallel machines, we show numerically that NKS is quite suitable for the numerical simulation of interaction between colloidal particles, since NKS is robust in the sense that INB is able to converge within a small number of iterations regardless of the geometry, the mesh size, or the number of processors. With the help of an additive preconditioned Krylov subspace method, NKS achieves a parallel efficiency of 71% or better on up to a hundred processors for a 3D problem with 5 million unknowns.
Schwarz Preconditioners for Krylov Methods: Theory and Practice
Szyld, Daniel B.
2013-05-10
Several numerical methods were produced and analyzed. The main thrust of the work relates to inexact Krylov subspace methods for the solution of linear systems of equations arising from the discretization of partial differential equations. These are iterative methods, i.e., an approximation is obtained and improved at each step. Usually, a matrix-vector product is needed at each iteration. In the inexact methods, this product (or the application of a preconditioner) can be done inexactly. Schwarz methods, based on domain decompositions, are excellent preconditioners for these systems. We contributed towards their understanding from an algebraic point of view, developed new ones, and studied their performance in the inexact setting. We also worked on combinatorial problems to help define the algebraic partition of the domains, with the needed overlap, as well as PDE-constrained optimization using the above-mentioned inexact Krylov subspace methods.
Newton-Krylov methods applied to nonequilibrium radiation diffusion
Knoll, D.A.; Rider, W.J.; Olsen, G.L.
1998-03-10
The authors present results of applying a matrix-free Newton-Krylov method to a nonequilibrium radiation diffusion problem. Here, there is no use of operator splitting, and Newton's method is used to converge the nonlinearities within a time step. Since the nonlinear residual is formed, it is used to monitor convergence. It is demonstrated that a simple Picard-based linearization produces a sufficient preconditioning matrix for the Krylov method, thus alleviating the need to form or store a Jacobian matrix for Newton's method. They discuss the possibility that the Newton-Krylov approach may allow larger time steps, without loss of accuracy, as compared to an operator-split approach where nonlinearities are not converged within a time step.
Accelerating molecular property calculations with nonorthonormal Krylov space methods
NASA Astrophysics Data System (ADS)
Furche, Filipp; Krull, Brandon T.; Nguyen, Brian D.; Kwon, Jake
2016-05-01
We formulate Krylov space methods for large eigenvalue problems and linear equation systems that take advantage of decreasing residual norms to reduce the cost of matrix-vector multiplication. The residuals are used as subspace basis without prior orthonormalization, which leads to generalized eigenvalue problems or linear equation systems on the Krylov space. These nonorthonormal Krylov space (nKs) algorithms are favorable for large matrices with irregular sparsity patterns whose elements are computed on the fly, because fewer operations are necessary as the residual norm decreases as compared to the conventional method, while errors in the desired eigenpairs and solution vectors remain small. We consider real symmetric and symplectic eigenvalue problems as well as linear equation systems and Sylvester equations as they appear in configuration interaction and response theory. The nKs method can be implemented in existing electronic structure codes with minor modifications and yields speed-ups of 1.2-1.8 in typical time-dependent Hartree-Fock and density functional applications without accuracy loss. The algorithm can compute entire linear subspaces simultaneously which benefits electronic spectra and force constant calculations requiring many eigenpairs or solution vectors. The nKs approach is related to difference density methods in electronic ground state calculations and particularly efficient for integral direct computations of exchange-type contractions. By combination with resolution-of-the-identity methods for Coulomb contractions, three- to fivefold speed-ups of hybrid time-dependent density functional excited state and response calculations are achieved.
A multigrid Newton-Krylov method for flux-limited radiation diffusion
Rider, W.J.; Knoll, D.A.; Olson, G.L.
1998-09-01
The authors focus on the integration of radiation diffusion including flux-limited diffusion coefficients. The nonlinear integration is accomplished with a Newton-Krylov method preconditioned with a multigrid Picard linearization of the governing equations. They investigate the efficiency of the linear and nonlinear iterative techniques.
Avoiding Communication in Two-Sided Krylov Subspace Methods
2011-08-16
In addition to the TSQR kernel, the communication-avoiding implementations require an additional kernel to compute the Gram-like matrix ṼᵀV, where V and Ṽ are the two Krylov basis matrices. The basis is generated by scaled and shifted polynomials satisfying the three-term recurrence P0(z) = 1, P1(z) = (z − c)/(2g), Pk+1(z) = (1/g)[(z − c)Pk(z) − (d^2/(4g))Pk−1(z)], where the coefficients g, c, and d serve to scale and shift.
Lattice QCD computations: Recent progress with modern Krylov subspace methods
Frommer, A.
1996-12-31
Quantum chromodynamics (QCD) is the fundamental theory of the strong interaction of matter. In order to compare the theory with results from experimental physics, the theory has to be reformulated as a discrete problem of lattice gauge theory using stochastic simulations. The computational challenge consists in solving several hundreds of very large linear systems with several right hand sides. A considerable part of the world's supercomputer time is spent in such QCD calculations. This paper presents results on solving systems for the Wilson fermions. Recent progress is reviewed on algorithms obtained in cooperation with partners from theoretical physics.
Preconditioning Newton-Krylov Methods for Variably Saturated Flow
Woodward, C.; Jones, J.
2000-01-07
In this paper, we compare the effectiveness of three preconditioning strategies in simulations of variably saturated flow. Using Richards' equation as our model, we solve the nonlinear system using a Newton-Krylov method. Since Krylov solvers can stagnate, resulting in slow convergence, we investigate different strategies of preconditioning the Jacobian system. Our work uses a multigrid method to solve the preconditioning systems, with three different approximations to the Jacobian matrix. One approximation lags the nonlinearities, the second results from discarding selected off-diagonal contributions, and the third matrix considered is the full Jacobian. Results indicate that although the Jacobian is more accurate, its usage as a preconditioning matrix should be limited, as it requires much more storage than the simpler approximations. Also, simply lagging the nonlinearities gives a preconditioning matrix that is almost as effective as the full Jacobian but much easier to compute.
Short Communication: A Parallel Newton-Krylov Method for Navier-Stokes Rotorcraft Codes
NASA Astrophysics Data System (ADS)
Ekici, Kivanc; Lyrintzis, Anastasios S.
2003-05-01
The application of Krylov subspace iterative methods to unsteady three-dimensional Navier-Stokes codes on massively parallel and distributed computing environments is investigated. Previously, the Euler mode of the Navier-Stokes flow solver Transonic Unsteady Rotor Navier-Stokes (TURNS) has been coupled with a Newton-Krylov scheme which uses two Conjugate-Gradient-like (CG) iterative methods. For the efficient implementation of Newton-Krylov methods to the Navier-Stokes mode of TURNS, efficient preconditioners must be used. Parallel implicit operators are used and compared as preconditioners. Results are presented for two-dimensional and three-dimensional viscous cases. The Message Passing Interface (MPI) protocol is used, because of its portability to various parallel architectures.
Krylov methods for compressible flows
NASA Technical Reports Server (NTRS)
Tidriri, M. D.
1995-01-01
We investigate the application of Krylov methods to compressible flows, and the effect of implicit boundary conditions on the implicit solution of nonlinear problems. Two defect-correction procedures, namely, approximate factorization (AF) for structured grids and ILU/GMRES for general grids, are considered. Also considered here are Newton-Krylov matrix-free methods that we combined with the use of mixed discretization schemes in the implicitly defined Jacobian and its preconditioner. Numerical experiments that show the performance of our approaches are then presented.
Conformal mapping and convergence of Krylov iterations
Driscoll, T.A.; Trefethen, L.N.
1994-12-31
Connections between conformal mapping and matrix iterations have been known for many years. The idea underlying these connections is as follows. Suppose the spectrum of a matrix or operator A is contained in a Jordan region E in the complex plane with 0 not an element of E. Let φ(z) denote a conformal map of the exterior of E onto the exterior of the unit disk, with φ(∞) = ∞. Then 1/|φ(0)| is an upper bound for the optimal asymptotic convergence factor of any Krylov subspace iteration. This idea can be made precise in various ways, depending on the matrix iterations, on whether A is finite or infinite dimensional, and on what bounds are assumed on the non-normality of A. This paper explores these connections for a variety of matrix examples, making use of a new MATLAB Schwarz-Christoffel Mapping Toolbox developed by the first author. Unlike the earlier Fortran Schwarz-Christoffel package SCPACK, the new toolbox computes exterior as well as interior Schwarz-Christoffel maps, making it easy to experiment with spectra that are not necessarily symmetric about an axis.
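For the simplest region, a disk E = {|z − c| ≤ ρ} with 0 outside, the exterior map and the resulting bound are explicit, and the prediction is easy to check numerically. A sketch with illustrative parameters:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import gmres

# If the spectrum lies in the disk E = {|z - c| <= rho} with 0 outside E,
# the exterior map is phi(z) = (z - c)/rho, so the predicted asymptotic
# convergence factor is 1/|phi(0)| = rho/|c|.
c, rho, n = 4.0, 1.0, 200
rng = np.random.default_rng(1)
eigs = c + rho * np.sqrt(rng.uniform(0, 1, n)) \
         * np.exp(2j * np.pi * rng.uniform(0, 1, n))   # spectrum filling E
factor = rho / abs(c)        # 0.25: roughly 0.6 digits gained per iteration

A = sp.diags(eigs)           # normal test matrix with this spectrum
b = np.ones(n, dtype=complex)
x, info = gmres(A, b, atol=1e-12)
```

For non-disk regions E, the map φ must be computed numerically, which is where the Schwarz-Christoffel toolbox enters.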
Application of nonlinear Krylov acceleration to radiative transfer problems
Till, A. T.; Adams, M. L.; Morel, J. E.
2013-07-01
The iterative solution technique used for radiative transfer is normally nested, with outer thermal iterations and inner transport iterations. We implement a nonlinear Krylov acceleration (NKA) method in the PDT code for radiative transfer problems that breaks nesting, resulting in more thermal iterations but significantly fewer total inner transport iterations. Using the metric of total inner transport iterations, we investigate a crooked-pipe-like problem and a pseudo-shock-tube problem. Using only sweep preconditioning, we compare NKA against a typical inner / outer method employing GMRES / Newton and find NKA to be comparable or superior. Finally, we demonstrate the efficacy of applying diffusion-based preconditioning to grey problems in conjunction with NKA. (authors)
Improvements in Block-Krylov Ritz Vectors and the Boundary Flexibility Method of Component Synthesis
NASA Technical Reports Server (NTRS)
Carney, Kelly Scott
1997-01-01
A method of dynamic substructuring is presented which utilizes a set of static Ritz vectors as a replacement for normal eigenvectors in component mode synthesis. This set of Ritz vectors is generated in a recurrence relationship, proposed by Wilson, which has the form of a block-Krylov subspace. The initial seed to the recurrence algorithm is based upon the boundary flexibility vectors of the component. Improvements have been made in the formulation of the initial seed to the Krylov sequence through the use of block-filtering. A method to shift the Krylov sequence to create Ritz vectors that represent the dynamic behavior of the component at target frequencies, the target frequencies being determined by the applied forcing functions, has been developed. A method to terminate the Krylov sequence has also been developed. Various orthonormalization schemes have been developed and evaluated, including the Cholesky/QR method. Several auxiliary theorems and proofs which illustrate issues in component mode synthesis and loss of orthogonality in the Krylov sequence have also been presented. The resulting methodology is applicable to both fixed- and free-interface boundary components, and results in a general component model appropriate for any type of dynamic analysis. The accuracy is found to be comparable to that of component synthesis based upon normal modes, using fewer generalized coordinates. In addition, the block-Krylov recurrence algorithm is a series of static solutions and so requires significantly less computation than solving the normal eigenspace problem. The requirement for fewer vectors to form the component, coupled with the lower computational expense of calculating these Ritz vectors, combine to create a method more efficient than traditional component mode synthesis.
NASA Astrophysics Data System (ADS)
Bisetti, Fabrizio
2012-06-01
Recent trends in hydrocarbon fuel research indicate that the number of species and reactions in chemical kinetic mechanisms is rapidly increasing in an effort to provide predictive capabilities for fuels of practical interest. In order to cope with the computational cost associated with the time integration of stiff, large chemical systems, a novel approach is proposed. The approach combines an exponential integrator and Krylov subspace approximations to the exponential function of the Jacobian matrix. The components of the approach are described in detail and applied to the ignition of stoichiometric methane-air and iso-octane-air mixtures, here described by two widely adopted chemical kinetic mechanisms. The approach is found to be robust even at relatively large time steps and the global error displays a nominal third-order convergence. The performance of the approach is improved by utilising an adaptive algorithm for the selection of the Krylov subspace size, which guarantees an approximation to the matrix exponential within user-defined error tolerance. The Krylov projection of the Jacobian matrix onto a low-dimensional space is interpreted as a local model reduction with a well-defined error control strategy. Finally, the performance of the approach is discussed with regard to the optimal selection of the parameters governing the accuracy of its individual components.
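The Krylov approximation to the action of the matrix exponential can be sketched with a plain Arnoldi projection: build an orthonormal basis of the Krylov space, exponentiate the small projected Hessenberg matrix, and lift back. The random matrix below is merely a stand-in for a stiff chemical Jacobian, and the subspace size is illustrative:

```python
import numpy as np
from scipy.linalg import expm

def arnoldi_expm(A, v, dt, m=30):
    """Approximate expm(dt*A) @ v from an m-step Arnoldi projection; only
    matrix-vector products with the (possibly nonsymmetric) A are needed."""
    beta = np.linalg.norm(v)
    V = np.zeros((v.size, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):          # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]   # assumes no happy breakdown
    e1 = np.zeros(m); e1[0] = 1.0
    # Dense exponential of the small projected matrix replaces the large one.
    return beta * (V[:, :m] @ (expm(dt * H[:m, :m]) @ e1))

rng = np.random.default_rng(0)
n = 40
A = rng.standard_normal((n, n)) / np.sqrt(n)   # stand-in for a Jacobian
v = rng.standard_normal(n)
w = arnoldi_expm(A, v, dt=1.0, m=30)
```

Adapting m to meet an error tolerance, as the paper proposes, amounts to monitoring the projected residual and growing the basis until the estimate falls below the tolerance.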
Implementation of the block-Krylov boundary flexibility method of component synthesis
NASA Technical Reports Server (NTRS)
Carney, Kelly S.; Abdallah, Ayman A.; Hucklebridge, Arthur A.
1993-01-01
A method of dynamic substructuring is presented which utilizes a set of static Ritz vectors as a replacement for normal eigenvectors in component mode synthesis. This set of Ritz vectors is generated in a recurrence relationship, which has the form of a block-Krylov subspace. The initial seed to the recurrence algorithm is based on the boundary flexibility vectors of the component. This algorithm is not load-dependent, is applicable to both fixed and free-interface boundary components, and results in a general component model appropriate for any type of dynamic analysis. This methodology was implemented in the MSC/NASTRAN normal modes solution sequence using DMAP. The accuracy is found to be comparable to that of component synthesis based upon normal modes. The block-Krylov recurrence algorithm is a series of static solutions and so requires significantly less computation than solving the normal eigenspace problem.
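The recurrence can be sketched directly: each new block of Ritz vectors comes from a static solve with the (once-factored) stiffness matrix, followed by mass-orthonormalization, so no eigenproblem is ever solved. All matrices below are random SPD stand-ins for the component's stiffness and mass matrices:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def block_krylov_ritz(K, M, B, nblocks):
    """Wilson-type block-Krylov recurrence X_{j+1} = K^{-1} M X_j with
    M-orthonormalization of each block; only static solves with K occur."""
    KF = cho_factor(K)                  # factor the stiffness matrix once
    basis = []
    Q = cho_solve(KF, B)                # seed block: static response to B
    for _ in range(nblocks):
        for v in basis:                 # M-orthogonalize against earlier vectors
            Q = Q - np.outer(v, v @ M @ Q)
        for k in range(Q.shape[1]):     # M-orthonormalize within the block
            q = Q[:, k]
            for i in range(k):
                q = q - (Q[:, i] @ M @ q) * Q[:, i]
            Q[:, k] = q / np.sqrt(q @ M @ q)
        basis.extend(Q.T.copy())
        Q = cho_solve(KF, M @ Q)        # next block: another static solve
    return np.column_stack(basis)

rng = np.random.default_rng(0)
n = 20
X = rng.standard_normal((n, n))
K = X @ X.T + n * np.eye(n)             # SPD stand-in for the stiffness matrix
Y = rng.standard_normal((n, n))
M = Y @ Y.T / n + np.eye(n)             # SPD stand-in for the mass matrix
B = rng.standard_normal((n, 2))         # hypothetical boundary-flexibility seed
V = block_krylov_ritz(K, M, B, nblocks=3)
```

In a finite-element setting the Cholesky factorization would be replaced by the sparse solver already available in the analysis code, which is what makes the recurrence a "series of static solutions."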
Projection preconditioning for Lanczos-type methods
Bielawski, S.S.; Mulyarchik, S.G.; Popov, A.V.
1996-12-31
We show how auxiliary subspaces and related projectors may be used for preconditioning nonsymmetric systems of linear equations. It is shown that the system preconditioned (or projected) in this way is better conditioned than the original system, at least if the coefficient matrix of the system to be solved is symmetrizable. Two approaches for solving the projected system are outlined. The first implies straightforward computation of the projected matrix and the subsequent use of some direct or iterative method. The second approach is projection preconditioning of a conjugate gradient-type solver. The latter approach is developed here in the context of the biconjugate gradient iteration and some related Lanczos-type algorithms. Some possible particular choices of auxiliary subspaces are discussed. It is shown that one of them is equivalent to using colorings. Some results of numerical experiments are reported.
NASA Astrophysics Data System (ADS)
Koldan, Jelena; Puzyrev, Vladimir; de la Puente, Josep; Houzeaux, Guillaume; Cela, José María
2014-06-01
We present an elaborate preconditioning scheme for Krylov subspace methods which has been developed to improve the performance and reduce the execution time of parallel node-based finite-element (FE) solvers for 3-D electromagnetic (EM) numerical modelling in exploration geophysics. This new preconditioner is based on algebraic multigrid (AMG) that uses different basic relaxation methods, such as Jacobi, symmetric successive over-relaxation (SSOR) and Gauss-Seidel, as smoothers, and the wave front algorithm to create groups, which are used for coarse-level generation. We have implemented and tested this new preconditioner within our parallel nodal FE solver for 3-D forward problems in EM induction geophysics. We have performed a series of experiments for several models with different conductivity structures and characteristics to test the performance of our AMG preconditioning technique when combined with the biconjugate gradient stabilized method. The results have shown that the more challenging the problem is in terms of conductivity contrasts, ratio between the sizes of grid elements and/or frequency, the more benefit is obtained by using this preconditioner. Compared to other preconditioning schemes, such as diagonal, SSOR and truncated approximate inverse, the AMG preconditioner greatly improves the convergence of the iterative solver for all tested models. Also, in cases in which other preconditioners succeed in converging to a desired precision, AMG is able to considerably reduce the total execution time of the forward-problem code, up to an order of magnitude. Furthermore, the tests have confirmed that our AMG scheme ensures a grid-independent rate of convergence, as well as improved convergence regardless of the size of local mesh refinements. In addition, AMG is designed to be a black-box preconditioner, which makes it easy to use and combine with different iterative methods. Finally, it has proved to be very practical and efficient in the
Acceleration of k-Eigenvalue / Criticality Calculations using the Jacobian-Free Newton-Krylov Method
Dana Knoll; HyeongKae Park; Chris Newman
2011-02-01
We present a new approach for the k-eigenvalue problem using a combination of classical power iteration and the Jacobian-free Newton-Krylov method (JFNK). The method poses the k-eigenvalue problem as a fully coupled nonlinear system, which is solved by JFNK with an effective block preconditioning consisting of the power iteration and algebraic multigrid. We demonstrate the effectiveness and algorithmic scalability of the method on a 1-D, one-group problem and two 2-D, two-group problems, and provide comparisons to other efforts using similar algorithmic approaches.
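As background, the classical power-iteration half of the hybrid can be sketched as follows. This is a textbook illustration, not the coupled JFNK formulation of the paper; the matrix A stands in for the (assumed positive) fission/transport operator whose dominant eigenvalue plays the role of k_eff:

```python
import numpy as np

def power_iteration(A, tol=1e-10, maxit=500):
    """Classical power iteration for the dominant eigenpair.

    Repeatedly applies A and normalizes; the norm of the iterate
    converges to the dominant eigenvalue magnitude.
    """
    x = np.ones(A.shape[0])
    k = 0.0
    for _ in range(maxit):
        y = A @ x
        k_new = np.linalg.norm(y)
        x = y / k_new
        if abs(k_new - k) < tol:
            break
        k = k_new
    return k_new, x
```

The paper's point is that this slowly converging outer iteration can instead serve as one block of a preconditioner inside a JFNK solve of the fully coupled system.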
Subspace ensembles for classification
NASA Astrophysics Data System (ADS)
Sun, Shiliang; Zhang, Changshui
2007-11-01
Ensemble learning constitutes one of the principal current directions in machine learning and data mining. In this paper, we explore subspace ensembles for classification by manipulating different feature subspaces. Starting from the nature of ensemble efficacy, we probe the microcosmic meaning of ensemble diversity, and propose to use region partitioning and region weighting to implement effective subspace ensembles. Individual classifiers with eminent performance on a partitioned region, reflected by high neighborhood accuracies, are deemed to contribute largely to that region and are assigned large weights in determining the labels of instances in the area. A robust algorithm, “Sena”, that embodies this mechanism is presented; it is insensitive to the number of nearest neighbors chosen to calculate neighborhood accuracies. The algorithm exhibits improved performance over the well-known ensembles of bagging, AdaBoost and random subspace. Its effectiveness with varying base classifiers is also investigated.
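For context, the plain random subspace ensemble that the paper compares against can be sketched with a nearest-centroid base classifier and majority voting. The function name, base classifier and parameters are illustrative choices; Sena's region partitioning and region weighting are deliberately omitted:

```python
import numpy as np

def random_subspace_ensemble(X, y, Xtest, n_members=11, d=2, seed=0):
    """Random-subspace ensemble: each member is trained on a random
    d-dimensional feature subspace; predictions are combined by
    unweighted majority vote over nearest-centroid classifiers.
    """
    rng = np.random.default_rng(seed)
    classes = np.unique(y)
    votes = np.zeros((Xtest.shape[0], classes.size), dtype=int)
    for _ in range(n_members):
        feats = rng.choice(X.shape[1], size=d, replace=False)
        # per-class centroids restricted to the sampled feature subspace
        centroids = np.array([X[y == c][:, feats].mean(axis=0)
                              for c in classes])
        dist = np.linalg.norm(Xtest[:, None, feats] - centroids[None, :, :],
                              axis=2)
        votes[np.arange(Xtest.shape[0]), dist.argmin(axis=1)] += 1
    return classes[votes.argmax(axis=1)]
```

Sena replaces the uniform vote here with region-dependent weights derived from neighborhood accuracies.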
Aliaga, José I.; Alonso, Pedro; Badía, José M.; Chacón, Pablo; Davidović, Davor; López-Blanco, José R.; Quintana-Ortí, Enrique S.
2016-03-15
We introduce a new iterative Krylov subspace-based eigensolver for the simulation of macromolecular motions on desktop multithreaded platforms equipped with multicore processors and, possibly, a graphics accelerator (GPU). The method consists of two stages, with the original problem first reduced into a simpler band-structured form by means of a high-performance compute-intensive procedure. This is followed by a memory-intensive but low-cost Krylov iteration, which is off-loaded to the GPU by means of an efficient data-parallel kernel. The experimental results reveal the performance of the new eigensolver. Concretely, when applied to the simulation of macromolecules with a few thousand degrees of freedom, and when the number of eigenpairs to be computed is small to moderate, the new solver outperforms other methods implemented as part of high-performance numerical linear algebra packages for multithreaded architectures.
Notes on Newton-Krylov based Incompressible Flow Projection Solver
Robert Nourgaliev; Mark Christon; J. Bakosi
2012-09-01
The purpose of the present document is to formulate a Jacobian-free Newton-Krylov algorithm for the approximate projection method used in the Hydra-TH code. Hydra-TH is developed by Los Alamos National Laboratory (LANL) under the auspices of the Consortium for Advanced Simulation of Light-Water Reactors (CASL) for thermal-hydraulics applications ranging from grid-to-rod fretting (GTRF) to multiphase subcooled boiling flow. Currently, Hydra-TH is based on the semi-implicit projection method, which provides an excellent platform for simulation of transient single-phase thermal-hydraulics problems. This algorithm, however, is not efficient when applied to very slow or steady-state problems, or to highly nonlinear multiphase problems relevant to nuclear reactor thermal-hydraulics with boiling and condensation. These applications require fully implicit, tightly coupled algorithms. The major technical contribution of the present report is the formulation of a fully implicit projection algorithm which fulfills this purpose. This includes the definition of the nonlinear residuals used for GMRES-based linear iterations, as well as physics-based preconditioning techniques.
Newton-Krylov-Schwarz: An implicit solver for CFD
NASA Technical Reports Server (NTRS)
Cai, Xiao-Chuan; Keyes, David E.; Venkatakrishnan, V.
1995-01-01
Newton-Krylov methods and Krylov-Schwarz (domain decomposition) methods have begun to become established in computational fluid dynamics (CFD) over the past decade. The former employ a Krylov method inside of Newton's method in a Jacobian-free manner, through directional differencing. The latter employ an overlapping Schwarz domain decomposition to derive a preconditioner for the Krylov accelerator that relies primarily on local information, for data-parallel concurrency. They may be composed as Newton-Krylov-Schwarz (NKS) methods, which seem particularly well suited for solving nonlinear elliptic systems in high-latency, distributed-memory environments. We give a brief description of this family of algorithms, with an emphasis on domain decomposition iterative aspects. We then describe numerical simulations with Newton-Krylov-Schwarz methods on aerodynamics applications emphasizing comparisons with a standard defect-correction approach, subdomain preconditioner consistency, subdomain preconditioner quality, and the effect of a coarse grid.
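The Jacobian-free directional differencing mentioned above can be sketched with SciPy's GMRES. This is a minimal generic illustration (a fixed assumed perturbation eps, and no Schwarz preconditioner):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def jfnk_step(F, u, eps=1e-7):
    """One Jacobian-free Newton step: solve J du = -F(u) with GMRES,
    approximating the Jacobian-vector product by directional
    differencing, J v ~ (F(u + eps*v) - F(u)) / eps, so the Jacobian
    is never formed or stored.
    """
    Fu = F(u)
    J = LinearOperator((u.size, u.size),
                       matvec=lambda v: (F(u + eps * v) - Fu) / eps)
    du, _ = gmres(J, -Fu)
    return u + du
```

In a full NKS solver, the `M=` argument of `gmres` would carry the overlapping Schwarz preconditioner built from local subdomain solves.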
Newton-Krylov-Schwarz methods in unstructured grid Euler flow
Keyes, D.E.
1996-12-31
Newton-Krylov methods and Krylov-Schwarz (domain decomposition) methods have begun to become established in computational fluid dynamics (CFD) over the past decade. The former employ a Krylov method inside of Newton's method in a Jacobian-free manner, through directional differencing. The latter employ an overlapping Schwarz domain decomposition to derive a preconditioner for the Krylov accelerator that relies primarily on local information, for data-parallel concurrency. They may be composed as Newton-Krylov-Schwarz (NKS) methods, which seem particularly well suited for solving nonlinear elliptic systems in high-latency, distributed-memory environments. We give a brief description of this family of algorithms, with an emphasis on domain decomposition iterative aspects. We then describe numerical simulations with Newton-Krylov-Schwarz methods on an aerodynamic application emphasizing comparisons with a standard defect-correction approach and subdomain preconditioner consistency.
Nonlinear Krylov acceleration of reacting flow codes
Kumar, S.; Rawat, R.; Smith, P.; Pernice, M.
1996-12-31
We are working on computational simulations of three-dimensional reactive flows in applications encompassing a broad range of chemical engineering problems. Examples of such processes are coal (pulverized and fluidized bed) and gas combustion, petroleum processing (cracking), and metallurgical operations such as smelting. These simulations involve an interplay of various physical and chemical factors such as fluid dynamics with turbulence, convective and radiative heat transfer, multiphase effects such as fluid-particle and particle-particle interactions, and chemical reaction. The governing equations resulting from modeling these processes are highly nonlinear and strongly coupled, thereby rendering their solution by traditional iterative methods (such as nonlinear line Gauss-Seidel methods) very difficult and sometimes impossible. Hence we are exploring the use of nonlinear Krylov techniques (such as GMRES and Bi-CGSTAB) to accelerate and stabilize the existing solver. This strategy allows us to take advantage of the problem-definition capabilities of the existing solver. The overall approach amounts to using the SIMPLE (Semi-Implicit Method for Pressure-Linked Equations) method and its variants as nonlinear preconditioners for the nonlinear Krylov method. We have also adapted a backtracking approach for inexact Newton methods to damp the Newton step in the nonlinear Krylov method. This will be a report on work in progress. Preliminary results with nonlinear GMRES have been very encouraging: in many cases the number of line Gauss-Seidel sweeps has been reduced by about a factor of 5, and increased robustness of the underlying solver has also been observed.
An Implicit Energy-Conservative 2D Fokker-Planck Algorithm. II. Jacobian-Free Newton-Krylov Solver
NASA Astrophysics Data System (ADS)
Chacón, L.; Barnes, D. C.; Knoll, D. A.; Miley, G. H.
2000-01-01
Energy-conservative implicit integration schemes for the Fokker-Planck transport equation in multidimensional geometries require inverting a dense, non-symmetric matrix (Jacobian), which is very expensive to store and solve using standard solvers. However, these limitations can be overcome with Newton-Krylov iterative techniques, since they can be implemented Jacobian-free (the Jacobian matrix from Newton's algorithm is never formed nor stored to proceed with the iteration), and their convergence can be accelerated by preconditioning the original problem. In this document, the efficient numerical implementation of an implicit energy-conservative scheme for multidimensional Fokker-Planck problems using multigrid-preconditioned Krylov methods is discussed. Results show that multigrid preconditioning is very effective in speeding convergence and decreasing CPU requirements, particularly in fine meshes. The solver is demonstrated on grids up to 128×128 points in a 2D cylindrical velocity space (vr, vp) with implicit time steps of the order of the collisional time scale of the problem, τ. The method preserves particles exactly, and energy conservation is improved over alternative approaches, particularly in coarse meshes. Typical errors in the total energy over a time period of 10τ remain below a percent.
Exponential-Krylov methods for ordinary differential equations
NASA Astrophysics Data System (ADS)
Tranquilli, Paul; Sandu, Adrian
2014-12-01
This paper develops a new family of exponential time discretization methods called exponential-Krylov (EXPK). The new schemes treat the time discretization and the Krylov-based approximation of exponential matrix-vector products as a single computational process. The classical order conditions theory developed herein accounts for both the temporal and the Krylov approximation errors. Unlike traditional exponential schemes, EXPK methods require the construction of only a single Krylov space at each timestep. The number of basis vectors that guarantee the temporal order of accuracy does not depend on the application at hand. Numerical results show favorable properties of EXPK methods when compared to current exponential schemes.
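The basic building block that EXPK-type methods reuse across a timestep, a Krylov (Arnoldi) approximation of a matrix exponential acting on a vector, can be sketched as follows. This is a generic illustration of the standard technique, not the EXPK schemes themselves:

```python
import numpy as np
from scipy.linalg import expm

def arnoldi_expm(A, v, m=20):
    """Approximate exp(A) @ v from an m-step Arnoldi process:
    exp(A) v ~ beta * V_m @ expm(H_m) @ e_1, where V_m spans the
    Krylov space K_m(A, v) and H_m is the small projected matrix.
    """
    n = v.size
    beta = np.linalg.norm(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):              # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:             # happy breakdown: exact result
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    Hm = H[:m, :m]
    return beta * (V[:, :m] @ expm(Hm)[:, 0])
```

Only the small m-by-m exponential is computed explicitly; the EXPK order-conditions theory in the paper accounts for the error of truncating m.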
NASA Astrophysics Data System (ADS)
Jiang, Tian; Zhang, Yong-Tao
2016-04-01
Implicit integration factor (IIF) methods were developed in the literature for solving time-dependent stiff partial differential equations (PDEs). Recently, IIF methods were combined with weighted essentially non-oscillatory (WENO) schemes in Jiang and Zhang (2013) [19] to efficiently solve stiff nonlinear advection-diffusion-reaction equations. The methods can be designed for arbitrary order of accuracy. The stiffness of the system is resolved well, and the methods are stable with time step sizes determined solely by the non-stiff hyperbolic part of the system. To efficiently calculate large matrix exponentials, Krylov subspace approximation is directly applied to the IIF methods. So far, the IIF methods developed in the literature are multistep methods. In this paper, we develop Krylov single-step IIF-WENO methods for solving stiff advection-diffusion-reaction equations. The methods are designed carefully to avoid generating positive exponentials in the matrix exponentials, which is necessary for the stability of the schemes. We analyze the stability and truncation errors of the single-step IIF schemes. Numerical examples of both scalar equations and systems are shown to demonstrate the accuracy, efficiency and robustness of the new methods.
Recovery Discontinuous Galerkin Jacobian-Free Newton-Krylov Method for All-Speed Flows
HyeongKae Park; Robert Nourgaliev; Vincent Mousseau; Dana Knoll
2008-07-01
A novel numerical algorithm (rDG-JFNK) for all-speed fluid flows with heat conduction and viscosity is introduced. The rDG-JFNK combines the Discontinuous Galerkin spatial discretization with implicit Runge-Kutta time integration under the Jacobian-free Newton-Krylov framework. We solve the fully compressible Navier-Stokes equations without operator-splitting of hyperbolic, diffusion and reaction terms, which enables fully-coupled high-order temporal discretization. The stability constraint is removed by the L-stable Explicit, Singly Diagonal Implicit Runge-Kutta (ESDIRK) scheme. The governing equations are solved in conservative form, which allows one to accurately compute shock dynamics, as well as low-speed flows. For spatial discretization, we develop a “recovery” family of DG, exhibiting nearly-spectral accuracy. To precondition the Krylov-based linear solver (GMRES), we develop an “Operator-Split” (OS) Physics-Based Preconditioner (PBP), in which we transform/simplify the fully-coupled system into a sequence of segregated scalar problems, each of which can be solved efficiently with a multigrid method. Each scalar problem is designed to target/cluster eigenvalues of the Jacobian matrix associated with a specific physics.
A Newton-Krylov solution to the porous medium equations in the agree code
Ward, A. M.; Seker, V.; Xu, Y.; Downar, T. J.
2012-07-01
In order to improve the convergence of the AGREE code for the porous medium equations, a Newton-Krylov solver was developed for steady-state problems. The current three-equation system was expanded and then coupled using Newton's method. Theory predicts second-order convergence, while the actual behavior was highly nonlinear. The discontinuous derivatives found in both closure and empirical relationships prevented true second-order convergence. The difference between the current solution and the new exact Newton solution was well below the convergence criterion. While convergence time did not dramatically decrease, the required number of outer iterations was reduced by approximately an order of magnitude. GMRES was also used to solve the problem, with ILU without fill-in used to precondition the iterative solver; its performance was slightly slower than that of the direct solution. (authors)
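The ILU-preconditioned GMRES setup mentioned above can be sketched with SciPy. This is a generic illustration; `spilu` with `fill_factor=1` is an assumed stand-in for the paper's ILU without fill-in, not the AGREE code's actual solver:

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import LinearOperator, gmres, spilu

def ilu_gmres(A, b):
    """Solve A x = b with GMRES, preconditioned by an incomplete LU
    factorization applied through a LinearOperator wrapper."""
    # fill_factor=1 keeps the factorization close to no-fill ILU(0)
    ilu = spilu(csc_matrix(A), drop_tol=1e-4, fill_factor=1)
    M = LinearOperator(A.shape, matvec=ilu.solve)
    x, info = gmres(A, b, M=M)
    return x, info
```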
An Inexact Newton–Krylov Algorithm for Constrained Diffeomorphic Image Registration*
Mang, Andreas; Biros, George
2016-01-01
We propose numerical algorithms for solving large deformation diffeomorphic image registration problems. We formulate the nonrigid image registration problem as a problem of optimal control. This leads to an infinite-dimensional partial differential equation (PDE) constrained optimization problem. The PDE constraint consists, in its simplest form, of a hyperbolic transport equation for the evolution of the image intensity. The control variable is the velocity field. Tikhonov regularization on the control ensures well-posedness. We consider standard smoothness regularization based on H1- or H2-seminorms. We augment this regularization scheme with a constraint on the divergence of the velocity field (control variable) rendering the deformation incompressible (Stokes regularization scheme) and thus ensuring that the determinant of the deformation gradient is equal to one, up to the numerical error. We use a Fourier pseudospectral discretization in space and a Chebyshev pseudospectral discretization in time. The latter allows us to reduce the number of unknowns and enables the time-adaptive inversion for nonstationary velocity fields. We use a preconditioned, globalized, matrix-free, inexact Newton–Krylov method for numerical optimization. A parameter continuation is designed to estimate an optimal regularization parameter. Regularity is ensured by controlling the geometric properties of the deformation field. Overall, we arrive at a black-box solver that exploits computational tools that are precisely tailored for solving the optimality system. We study spectral properties of the Hessian, grid convergence, numerical accuracy, computational efficiency, and deformation regularity of our scheme. We compare the designed Newton–Krylov methods with a globalized Picard method (preconditioned gradient descent). We study the influence of a varying number of unknowns in time. The reported results demonstrate excellent numerical accuracy, guaranteed local deformation
Block Krylov-Schur method for large symmetric eigenvalue problems
NASA Astrophysics Data System (ADS)
Zhou, Yunkai; Saad, Yousef
2008-04-01
Stewart's Krylov-Schur algorithm offers two advantages over Sorensen's implicitly restarted Arnoldi (IRA) algorithm. The first is ease of deflation of converged Ritz vectors; the second is the avoidance of the potential forward instability of the QR algorithm. In this paper we develop a block version of the Krylov-Schur algorithm for symmetric eigenproblems. Details of this block algorithm are discussed, including how to handle rank-deficient cases and how to use varying block sizes. Numerical results on the efficiency of the block Krylov-Schur method are reported.
Some experiences with Krylov vectors and Lanczos vectors
NASA Technical Reports Server (NTRS)
Craig, Roy R., Jr.; Su, Tzu-Jeng; Kim, Hyoung M.
1993-01-01
This paper illustrates the use of Krylov vectors and Lanczos vectors for reduced-order modeling in structural dynamics and for control of flexible structures. Krylov vectors and Lanczos vectors are defined and illustrated, and several applications that have been under study at The University of Texas at Austin are reviewed: model reduction for undamped structural dynamics systems, component mode synthesis using Krylov vectors, model reduction of damped structural dynamics systems, and one-sided and two-sided unsymmetric block-Lanczos model-reduction algorithms.
Subspace Detectors: Efficient Implementation
Harris, D B; Paik, T
2006-07-26
The optimum detector for a known signal in white Gaussian background noise is the matched filter, also known as a correlation detector [Van Trees, 1968]. Correlation detectors offer exquisite sensitivity (high probability of detection at a fixed false alarm rate), but require perfect knowledge of the signal. The sensitivity of correlation detectors is increased by the availability of multichannel data, something common in seismic applications due to the prevalence of three-component stations and arrays. When the signal is imperfectly known, an extension of the correlation detector, the subspace detector, may be able to capture much of the performance of a matched filter [Harris, 2006]. In order to apply a subspace detector, the signal to be detected must be known to lie in a signal subspace of dimension d ≥ 1, which is defined by a set of d linearly-independent basis waveforms. The basis is constructed to span the range of signals anticipated to be emitted by a source of interest. Correlation detectors operate by computing a running correlation coefficient between a template waveform (the signal to be detected) and the data from a window sliding continuously along a data stream. The template waveform and the continuous data stream may be multichannel, as would be true for a three-component seismic station or an array. In such cases, the appropriate correlation operation computes the individual correlations channel-for-channel and sums the result (Figure 1). Both the waveform matching that occurs when a target signal is present and the cross-channel stacking provide processing gain. For a three-component station, processing gain occurs from matching the time-history of the signals and their polarization structure. The projection operation that is at the heart of the subspace detector can be expensive to compute if implemented in a straightforward manner, i.e. with direct-form convolutions. The purpose of this report is to indicate how the projection can be
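The sliding projection statistic at the heart of the subspace detector can be sketched for the single-channel case. This is a generic illustration of the idea in Harris [2006] with an assumed energy-fraction threshold, deliberately using the straightforward (expensive) direct form rather than the efficient implementation this report develops:

```python
import numpy as np

def subspace_detector(stream, basis, threshold=0.8):
    """Slide a length-L window along `stream` and report windows whose
    energy fraction captured by the d-dimensional signal subspace
    exceeds `threshold`. `basis` is (L, d) with orthonormal columns.
    """
    L = basis.shape[0]
    detections = []
    for t in range(len(stream) - L + 1):
        x = stream[t:t + L]
        energy = x @ x
        if energy == 0.0:
            continue
        c = basis.T @ x                # coefficients in the subspace
        stat = (c @ c) / energy        # projected energy fraction
        if stat >= threshold:
            detections.append((t, stat))
    return detections
```

With d = 1 this reduces to the squared running correlation coefficient of a correlation detector.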
Protected subspace Ramsey spectroscopy
NASA Astrophysics Data System (ADS)
Ostermann, L.; Plankensteiner, D.; Ritsch, H.; Genes, C.
2014-11-01
We study a modified Ramsey spectroscopy technique employing slowly decaying states for quantum metrology applications using dense ensembles. While closely positioned atoms exhibit super-radiant collective decay and dipole-dipole induced frequency shifts, recent results [L. Ostermann, H. Ritsch, and C. Genes, Phys. Rev. Lett. 111, 123601 (2013), 10.1103/PhysRevLett.111.123601] suggest the possibility to suppress such detrimental effects and achieve an even better scaling of the frequency sensitivity with interrogation time than for noninteracting particles. Here we present an in-depth analysis of this "protected subspace Ramsey technique" using improved analytical modeling and numerical simulations including larger three-dimensional (3D) samples. Surprisingly we find that using subradiant states of N particles to encode the atomic coherence yields a scaling of the optimal sensitivity better than 1/√N. Applied to ultracold atoms in 3D optical lattices we predict a precision beyond the single atom linewidth.
Timmerman, Marieke E; Ceulemans, Eva; De Roover, Kim; Van Leeuwen, Karla
2013-12-01
To achieve an insightful clustering of multivariate data, we propose subspace K-means. Its central idea is to model the centroids and cluster residuals in reduced spaces, which allows for dealing with a wide range of cluster types and yields rich interpretations of the clusters. We review the existing related clustering methods, including deterministic, stochastic, and unsupervised learning approaches. To evaluate subspace K-means, we performed a comparative simulation study, in which we manipulated the overlap of subspaces, the between-cluster variance, and the error variance. The study shows that the subspace K-means algorithm is sensitive to local minima but that the problem can be reasonably dealt with by using partitions of various cluster procedures as a starting point for the algorithm. Subspace K-means performs very well in recovering the true clustering across all conditions considered and appears to be superior to its competitor methods: K-means, reduced K-means, factorial K-means, mixtures of factor analyzers (MFA), and MCLUST. The best competitor method, MFA, showed a performance similar to that of subspace K-means in easy conditions but deteriorated in more difficult ones. Using data from a study on parental behavior, we show that subspace K-means analysis provides a rich insight into the cluster characteristics, in terms of both the relative positions of the clusters (via the centroids) and the shape of the clusters (via the within-cluster residuals).
Quasi-splitting subspaces and Foulis-Randall subspaces
NASA Astrophysics Data System (ADS)
Buhagiar, D.; Chetcuti, E.; Dvurečenskij, A.
2011-12-01
For a pre-Hilbert space S, let F(S) denote the orthogonally closed subspaces, Eq(S) the quasi-splitting subspaces, E(S) the splitting subspaces, D(S) the Foulis-Randall subspaces, and R(S) the maximal Foulis-Randall subspaces, of S. It was an open problem whether the equalities D(S) = F(S) and E(S) = R(S) hold in general [Cattaneo, G. and Marino, G., "Spectral decomposition of pre-Hilbert spaces as regard to suitable classes of normal closed operators," Boll. Unione Mat. Ital. 6 1-B, 451-466 (1982); Cattaneo, G., Franco, G., and Marino, G., "Ordering of families of subspaces of pre-Hilbert spaces and Dacey pre-Hilbert spaces," Boll. Unione Mat. Ital. 71-B, 167-183 (1987); Dvurečenskij, A., Gleason's Theorem and Its Applications (Kluwer, Dordrecht, 1992), p. 243.]. We prove that the first equality is true and exhibit a pre-Hilbert space S for which the second equality fails. In addition, we characterize complete pre-Hilbert spaces as follows: S is a Hilbert space if, and only if, S has an orthonormal basis and Eq(S) admits a non-free charge.
A Parallel Newton-Krylov-Schur Algorithm for the Reynolds-Averaged Navier-Stokes Equations
NASA Astrophysics Data System (ADS)
Osusky, Michal
Aerodynamic shape optimization and multidisciplinary optimization algorithms have the potential not only to improve conventional aircraft, but also to enable the design of novel configurations. By their very nature, these algorithms generate and analyze a large number of unique shapes, resulting in high computational costs. In order to improve their efficiency and enable their use in the early stages of the design process, a fast and robust flow solution algorithm is necessary. This thesis presents an efficient parallel Newton-Krylov-Schur flow solution algorithm for the three-dimensional Navier-Stokes equations coupled with the Spalart-Allmaras one-equation turbulence model. The algorithm employs second-order summation-by-parts (SBP) operators on multi-block structured grids with simultaneous approximation terms (SATs) to enforce block interface coupling and boundary conditions. The discrete equations are solved iteratively with an inexact-Newton method, while the linear system at each Newton iteration is solved using the flexible Krylov subspace iterative method GMRES with an approximate-Schur parallel preconditioner. The algorithm is thoroughly verified and validated, highlighting the correspondence of the current algorithm with several established flow solvers. The solution for a transonic flow over a wing on a mesh of medium density (15 million nodes) shows good agreement with experimental results. Using 128 processors, deep convergence is obtained in under 90 minutes. The solution of transonic flow over the Common Research Model wing-body geometry with grids with up to 150 million nodes exhibits the expected grid convergence behavior. This case was completed as part of the Fifth AIAA Drag Prediction Workshop, with the algorithm producing solutions that compare favourably with several widely used flow solvers. The algorithm is shown to scale well on over 6000 processors. The results demonstrate the effectiveness of the SBP-SAT spatial discretization, which can
NASA Astrophysics Data System (ADS)
Borgelt, Christian
In clustering we often face the situation that only a subset of the available attributes is relevant for forming clusters, even though this may not be known beforehand. In such cases it is desirable to have a clustering algorithm that automatically weights attributes or even selects a proper subset. In this paper I study such an approach for fuzzy clustering, which is based on the idea to transfer an alternative to the fuzzifier (Klawonn and Höppner, What is fuzzy about fuzzy clustering? Understanding and improving the concept of the fuzzifier, In: Proc. 5th Int. Symp. on Intelligent Data Analysis, 254-264, Springer, Berlin, 2003) to attribute weighting fuzzy clustering (Keller and Klawonn, Int J Uncertain Fuzziness Knowl Based Syst 8:735-746, 2000). In addition, by reformulating Gustafson-Kessel fuzzy clustering, a scheme for weighting and selecting principal axes can be obtained. While in Borgelt (Feature weighting and feature selection in fuzzy clustering, In: Proc. 17th IEEE Int. Conf. on Fuzzy Systems, IEEE Press, Piscataway, NJ, 2008) I already presented such an approach for a global selection of attributes and principal axes, this paper extends it to a cluster-specific selection, thus arriving at a fuzzy subspace clustering algorithm (Parsons, Haque, and Liu, 2004).
NASA Astrophysics Data System (ADS)
Asgharzadeh, Hafez; Borazjani, Iman
2014-11-01
Time step-size restrictions and low convergence rates are major bottlenecks for the implicit solution of the Navier-Stokes equations in simulations involving complex geometries with moving boundaries. The Newton-Krylov method (NKM) is a combination of a Newton-type method for super-linearly convergent solution of nonlinear equations and Krylov subspace methods for solving the Newton correction equations, which can theoretically address both bottlenecks. The efficiency of this method depends greatly on the Jacobian-forming scheme: automatic differentiation is very expensive, and Jacobian-free methods slow down as the mesh is refined. A novel, computationally efficient analytical Jacobian for NKM was developed to solve the unsteady incompressible Navier-Stokes momentum equations on staggered curvilinear grids with immersed boundaries. The NKM was validated and verified against the Taylor-Green vortex and pulsatile flow in a 90-degree bend, and efficiently handles complex geometries such as an intracranial aneurysm with multiple overset grids, pulsatile inlet flow and immersed boundaries. The NKM is shown to be more efficient than semi-implicit Runge-Kutta methods and Jacobian-free Newton-Krylov methods. We believe NKM can be applied to many CFD techniques to decrease the computational cost. This work was supported partly by the NIH Grant R03EB014860, and the computational resources were partly provided by the Center for Computational Research (CCR) at University at Buffalo.
Covariance Modifications to Subspace Bases
Harris, D B
2008-11-19
Adaptive signal processing algorithms that rely upon representations of signal and noise subspaces often require updates to those representations when new data become available. Subspace representations frequently are estimated from available data with singular value decompositions (SVDs). Subspace updates require modifications to these decompositions. Updates can be performed inexpensively provided they are low-rank. A substantial literature on SVD updates exists, frequently focusing on rank-1 updates (see e.g. [Karasalo, 1986; Comon and Golub, 1990; Badeau, 2004]). In these methods, data matrices are modified by addition or deletion of a row or column, or data covariance matrices are modified by addition of the outer product of a new vector. A recent paper by Brand [2006] provides a general and efficient method for arbitrary rank updates to an SVD. The purpose of this note is to describe a closely-related method for applications where right singular vectors are not required. This note also applies the SVD updates to a particular scenario of interest in seismic array signal processing: updating the wideband subspace representation used in seismic subspace detectors [Harris, 2006]. These subspace detectors generalize waveform correlation algorithms to detect signals that lie in a subspace of waveforms of dimension d ≥ 1. They potentially are of interest because they extend the range of waveform variation over which these sensitive detectors apply. Subspace detectors operate by projecting waveform data from a detection window into a subspace specified by a collection of orthonormal waveform basis vectors (referred to as the template). Subspace templates are constructed from a suite of normalized, aligned master event waveforms that may be acquired by a single sensor, a three-component sensor, an array of such sensors, or a sensor network. The template design process entails constructing a data matrix whose columns contain the
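The projection statistic of a subspace detector, as described in the abstract above, can be sketched in a few lines. This is a toy illustration, not Harris's implementation; the synthetic "master waveforms" and window length are assumptions.

```python
import numpy as np

def build_template(master_waveforms, d):
    """d orthonormal basis waveforms from normalized master waveforms (SVD)."""
    X = np.column_stack([w / np.linalg.norm(w) for w in master_waveforms])
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :d]

def detection_statistic(U, x):
    """Fraction of window energy captured by the subspace; lies in [0, 1]."""
    return np.linalg.norm(U.T @ x) ** 2 / np.linalg.norm(x) ** 2

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
masters = [np.sin(2 * np.pi * 5 * t), np.sin(2 * np.pi * 5 * t + 0.3)]
U = build_template(masters, d=2)
on_template = 0.7 * masters[0] + 0.3 * masters[1]  # a 'new event' in the subspace
noise = rng.standard_normal(200)
```

A window containing a signal in the template span scores near 1, while pure noise scores near d/n; thresholding this statistic gives the detector.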
Fattebert, J
2008-07-29
We describe an iterative algorithm to solve electronic structure problems in Density Functional Theory. The approach is presented as a Subspace Accelerated Inexact Newton (SAIN) solver for the non-linear Kohn-Sham equations. It is related to a class of iterative algorithms known as RMM-DIIS in the electronic structure community. The method is illustrated with examples of real applications using a finite difference discretization and multigrid preconditioning.
Shadid, J. N.; Pawlowski, R. P.; Cyr, E. C.; Tuminaro, R. S.; Chacon, L.; Weber, P. D.
2016-02-10
The computational solution of the governing balance equations for mass, momentum, heat transfer, and magnetic induction in resistive magnetohydrodynamics (MHD) systems can be extremely challenging. These difficulties arise from both the strong nonlinear, nonsymmetric coupling of fluid and electromagnetic phenomena, and the significant range of time- and length-scales that the interactions of these physical mechanisms produce. This paper explores the development of a scalable, fully-implicit stabilized unstructured finite element (FE) capability for 3D incompressible resistive MHD. The discussion considers the development of a stabilized FE formulation in the context of the variational multiscale (VMS) method, and describes the scalable implicit time integration and direct-to-steady-state solution capability. The nonlinear solver strategy employs Newton-Krylov methods, which are preconditioned using fully-coupled algebraic multilevel preconditioners. These preconditioners are shown to enable a robust, scalable and efficient solution approach for the large-scale sparse linear systems generated by the Newton linearization. Verification results demonstrate the expected order of accuracy for the stabilized FE discretization. The approach is tested on a variety of prototype problems, including MHD duct flows, an unstable hydromagnetic Kelvin-Helmholtz shear layer, and a 3D island coalescence problem used to model magnetic reconnection. Initial results that explore the scaling of the solution methods are also presented on up to 128K processors for problems with up to 1.8B unknowns on a Cray XK7.
Subspace Arrangement Codes and Cryptosystems
2011-05-09
Three-dimensional transient electromagnetic modelling using Rational Krylov methods
NASA Astrophysics Data System (ADS)
Börner, Ralph-Uwe; Ernst, Oliver G.; Güttel, Stefan
2015-09-01
A computational method is given for solving the forward modelling problem for transient electromagnetic exploration. Its key features are the discretization of the quasi-static Maxwell's equations in space using the first-kind family of curl-conforming Nédélec elements combined with time integration using rational Krylov methods. We show how rational Krylov methods can also be used to solve the same problem in the frequency domain followed by a synthesis of the transient solution using the fast Hankel transform, and we argue that the pure time-domain solution is more efficient. We also propose a new surrogate optimization approach for selecting the pole parameters of the rational Krylov method which leads to convergence within an a priori determined number of iterations independent of mesh size and conductivity structure. These poles are repeated in a cyclic fashion, which, in combination with direct solvers for the discrete problem, results in significantly faster solution times than previously proposed schemes.
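The rational Krylov idea in the abstract above, including the reuse of a single repeated pole so that one factorization serves every iteration, can be sketched on a small dense problem. This is a toy shift-invert (single-pole rational) Krylov approximation of exp(-tA)b, not the authors' Nédélec finite-element code; the matrix, pole, time, and subspace dimension are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm, lu_factor, lu_solve

def rational_krylov_expm(A, b, t, pole, m):
    """Approximate exp(-t*A) @ b from the rational Krylov space
    span{b, (A-pole*I)^-1 b, (A-pole*I)^-2 b, ...} of dimension m."""
    n = len(b)
    lu = lu_factor(A - pole * np.eye(n))   # one factorization, reused every step
    V = np.zeros((n, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(1, m):
        w = lu_solve(lu, V[:, j - 1])      # next direction: (A - pole*I)^{-1} v
        for _ in range(2):                 # Gram-Schmidt, repeated for stability
            w -= V[:, :j] @ (V[:, :j].T @ w)
        V[:, j] = w / np.linalg.norm(w)
    Am = V.T @ A @ V                       # Rayleigh-Ritz projection of A
    return V @ (expm(-t * Am) @ (V.T @ b))

rng = np.random.default_rng(1)
Q = np.linalg.qr(rng.standard_normal((50, 50)))[0]
A = Q @ np.diag(np.linspace(0.1, 100.0, 50)) @ Q.T   # SPD toy operator
b = rng.standard_normal(50)
approx = rational_krylov_expm(A, b, t=0.1, pole=-10.0, m=25)
exact = expm(-0.1 * A) @ b
```

The point of the pole selection the paper optimizes is visible here: the accuracy for a fixed m depends strongly on how well the pole suits the spectrum and the time of interest.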
An accelerated subspace iteration for eigenvector derivatives
NASA Technical Reports Server (NTRS)
Ting, Tienko
1991-01-01
An accelerated subspace iteration method for calculating eigenvector derivatives has been developed. Factors affecting the effectiveness and the reliability of the subspace iteration are identified, and effective strategies concerning these factors are presented. The method has been implemented, and the results of a demonstration problem are presented.
A Hybrid, Parallel Krylov Solver for MODFLOW using Schwarz Domain Decomposition
NASA Astrophysics Data System (ADS)
Sutanudjaja, E.; Verkaik, J.; Hughes, J. D.
2015-12-01
In order to support decision makers in solving hydrological problems, detailed high-resolution models are often needed. These models typically consist of a large number of computational cells and have large memory requirements and long run times. An efficient technique for obtaining realistic run times and memory requirements is parallel computing, where the problem is divided over multiple processor cores. The new Parallel Krylov Solver (PKS) for MODFLOW-USG is presented. It combines distributed memory parallelization by the Message Passing Interface (MPI) with shared memory parallelization by Open Multi-Processing (OpenMP). PKS includes conjugate gradient and biconjugate gradient stabilized linear accelerators that are both preconditioned by an overlapping additive Schwarz preconditioner, such that: a) subdomains are partitioned using the METIS library; b) each subdomain uses local memory only and communicates with other subdomains by MPI within the linear accelerator; and c) the solver is fully integrated in the MODFLOW-USG code. PKS is based on the unstructured PCGU-solver, and supports OpenMP. Depending on the available hardware, PKS can run exclusively with MPI, exclusively with OpenMP, or with a hybrid MPI/OpenMP approach. Benchmarks were performed on the Cartesius Dutch supercomputer (https://userinfo.surfsara.nl/systems/cartesius) using up to 144 cores, for a synthetic test (~112 million cells) and the Indonesia groundwater model (~4 million 1km cells). The latter, which includes all islands in the Indonesian archipelago, was built using publicly available global datasets, and is an ideal test bed for evaluating the applicability of PKS parallelization techniques to a global groundwater model consisting of multiple continents and islands. Results show that run time reductions can be greatest with the hybrid parallelization approach for the problems tested.
Zou, Ling; Zhao, Haihua; Zhang, Hongbin
2016-08-24
This study presents a numerical investigation of using the Jacobian-free Newton-Krylov (JFNK) method to solve the two-phase flow four-equation drift flux model with realistic constitutive correlations ('closure models'). The drift flux model is based on Ishii and his collaborators' work. Additional constitutive correlations for vertical channel flow, such as two-phase flow pressure drop, flow regime map, wall boiling, and interfacial heat transfer models, were taken from the RELAP5-3D Code Manual and included to complete the model. The staggered-grid finite volume method and the fully implicit backward Euler method were used as the spatial discretization and time integration schemes, respectively. The Jacobian-free Newton-Krylov method shows no difficulty in solving the two-phase flow drift flux model with a discrete flow regime map. In addition to the Jacobian-free approach, the preconditioning matrix is obtained by using the default finite differencing method provided in the PETSc package, and consequently the labor-intensive implementation of a complex analytical Jacobian matrix is avoided. Extensive and successful numerical verification and validation have been performed to prove the correct implementation of the models and methods. Code-to-code comparison with RELAP5-3D has further demonstrated the successful implementation of the drift flux model.
Bakhos, Tania; Saibaba, Arvind K.; Kitanidis, Peter K.
2015-10-15
We consider the problem of estimating parameters in large-scale weakly nonlinear inverse problems for which the underlying governing equation is a linear, time-dependent, parabolic partial differential equation. A major challenge in solving these inverse problems using Newton-type methods is the computational cost associated with solving the forward problem and with repeated construction of the Jacobian, which represents the sensitivity of the measurements to the unknown parameters. Forming the Jacobian can be prohibitively expensive because it requires repeated solutions of the forward and adjoint time-dependent parabolic partial differential equations corresponding to multiple sources and receivers. We propose an efficient method based on a Laplace transform-based exponential time integrator combined with a flexible Krylov subspace approach to solve the resulting shifted systems of equations efficiently. Our proposed solver speeds up the computation of the forward and adjoint problems, thus yielding significant speedup in total inversion time. We consider an application from Transient Hydraulic Tomography (THT), which is an imaging technique to estimate hydraulic parameters related to the subsurface from pressure measurements obtained by a series of pumping tests. The algorithms discussed are applied to a synthetic example taken from THT to demonstrate the resulting computational gains of this proposed method.
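The key structural fact behind the shifted-system solver in the abstract above is that a Krylov subspace is invariant under shifts, K_m(A, b) = K_m(A + sI, b), so one Arnoldi basis can serve every shift. A minimal sketch (not the authors' flexible-Krylov code; matrix, shifts, and sizes are illustrative assumptions):

```python
import numpy as np

def arnoldi(A, b, m):
    """m steps of Arnoldi: A @ V[:, :m] == V @ H (V has m+1 columns)."""
    n = len(b)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

def solve_shifted(A, b, shifts, m):
    """GMRES-style solves of (A + s*I) x = b for every shift s, all from
    ONE Arnoldi basis, exploiting K_m(A, b) = K_m(A + s*I, b)."""
    V, H = arnoldi(A, b, m)
    e1 = np.zeros(m + 1)
    e1[0] = np.linalg.norm(b)
    Ibar = np.vstack([np.eye(m), np.zeros((1, m))])
    xs = []
    for s in shifts:
        # Shifted least-squares problem: min || beta*e1 - (H + s*Ibar) y ||
        y, *_ = np.linalg.lstsq(H + s * Ibar, e1, rcond=None)
        xs.append(V[:, :m] @ y)
    return xs

rng = np.random.default_rng(2)
n = 100
Q = np.linalg.qr(rng.standard_normal((n, n)))[0]
A = Q @ np.diag(np.linspace(1.0, 10.0, n)) @ Q.T    # SPD toy operator
b = rng.standard_normal(n)
shifts = [0.0, 1.0, 5.0]
xs = solve_shifted(A, b, shifts, m=40)
```

Only the small projected systems differ per shift, so the marginal cost of each additional shift is negligible compared with building the basis.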
The Subspace Voyager: Exploring High-Dimensional Data along a Continuum of Salient 3D Subspaces.
Wang, Bing; Mueller, Klaus
2017-02-23
Analyzing high-dimensional data and finding hidden patterns is a difficult problem and has attracted numerous research efforts. Automated methods can be useful to some extent but bringing the data analyst into the loop via interactive visual tools can help the discovery process tremendously. An inherent problem in this effort is that humans lack the mental capacity to truly understand spaces exceeding three spatial dimensions. To keep within this limitation, we describe a framework that decomposes a high-dimensional data space into a continuum of generalized 3D subspaces. Analysts can then explore these 3D subspaces individually via the familiar trackball interface while using additional facilities to smoothly transition to adjacent subspaces for expanded space comprehension. Since the number of such subspaces suffers from combinatorial explosion, we provide a set of data-driven subspace selection and navigation tools which can guide users to interesting subspaces and views. A subspace trail map allows users to manage the explored subspaces, keep their bearings, and return to interesting subspaces and views. Both trackball and trail map are each embedded into a word cloud of attribute labels which aid in navigation. We demonstrate our system via several use cases in a diverse set of application areas - cluster analysis and refinement, information discovery, and supervised training of classifiers. We also report on a user study that evaluates the usability of the various interactions our system provides.
Numerical considerations in computing invariant subspaces
Dongarra, J. J. (Dept. of Computer Science; Oak Ridge National Lab., TN); Hammarling, S.; Wilkinson, J. H.
1990-11-01
This paper describes two methods for computing the invariant subspace of a matrix. The first involves using transformations to interchange the eigenvalues; the second involves direct computation of the vectors. 10 refs.
Face recognition with L1-norm subspaces
NASA Astrophysics Data System (ADS)
Maritato, Federica; Liu, Ying; Colonnese, Stefania; Pados, Dimitris A.
2016-05-01
We consider the problem of representing individual faces by maximum L1-norm projection subspaces calculated from available face-image ensembles. In contrast to conventional L2-norm subspaces, L1-norm subspaces are seen to offer significant robustness to image variations, disturbances, and rank selection. Face recognition becomes then the problem of associating a new unknown face image to the "closest," in some sense, L1 subspace in the database. In this work, we also introduce the concept of adaptively allocating the available number of principal components to different face image classes, subject to a given total number/budget of principal components. Experimental studies included in this paper illustrate and support the theoretical developments.
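The nearest-subspace face-recognition rule in the abstract above can be sketched as follows. This toy uses plain L2 (SVD/PCA) class subspaces, where the paper's point is that L1-norm subspaces would fill the same role with more robustness to outliers; the synthetic "face" model and dimensions are assumptions.

```python
import numpy as np

def class_subspace(images, k):
    """Orthonormal basis for one class: top-k left singular vectors of
    the stacked image vectors (an ordinary L2 subspace)."""
    U, _, _ = np.linalg.svd(np.column_stack(images), full_matrices=False)
    return U[:, :k]

def classify(x, subspaces):
    """Nearest-subspace rule: the smallest projection residual wins."""
    resid = [np.linalg.norm(x - U @ (U.T @ x)) for U in subspaces]
    return int(np.argmin(resid))

rng = np.random.default_rng(3)
def sample(support):          # toy 'face': energy on a few coordinates + noise
    x = 0.01 * rng.standard_normal(20)
    x[support] += rng.standard_normal(len(support))
    return x

classes = [np.arange(0, 5), np.arange(10, 15)]
subspaces = [class_subspace([sample(s) for _ in range(30)], k=5)
             for s in classes]
```

The adaptive rank allocation the paper introduces corresponds to choosing a different k per class under a total budget, rather than the fixed k=5 used here.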
Numerical solution of large nonsymmetric eigenvalue problems
NASA Technical Reports Server (NTRS)
Saad, Youcef
1988-01-01
Several methods are described that combine Krylov subspace techniques, deflation procedures, and preconditioning to compute a small number of eigenvalues and eigenvectors or Schur vectors of large sparse matrices. The most effective techniques for solving realistic problems from applications are those based on some form of preconditioning and one of several Krylov subspace techniques, such as Arnoldi's method or the Lanczos procedure. Two forms of preconditioning are considered: shift-and-invert and polynomial acceleration. The latter presents some advantages for parallel/vector processing but may be ineffective if eigenvalues inside the spectrum are sought. Some algorithmic details are provided that improve the reliability and effectiveness of these techniques.
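The shift-and-invert preconditioning mentioned in the abstract above can be demonstrated with SciPy's ARPACK driver: passing `sigma` makes the Lanczos iteration run on (A - sigma*I)^{-1}, whose extreme eigenvalues correspond to the eigenvalues of A nearest sigma, i.e. exactly the interior eigenvalues a plain Krylov iteration struggles with. The matrix and shift below are illustrative.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

n = 500
# 1-D Laplacian; eigenvalues are 2 - 2*cos(k*pi/(n+1)), filling (0, 4).
A = diags([np.full(n - 1, -1.0), np.full(n, 2.0), np.full(n - 1, -1.0)],
          offsets=[-1, 0, 1], format='csc')
sigma = 2.0
# Shift-and-invert: eigenvalues of A closest to sigma (deep in the spectrum).
vals = eigsh(A, k=3, sigma=sigma, which='LM', return_eigenvectors=False)
```

Without `sigma`, Lanczos would converge quickly only to the extreme ends of (0, 4); the shift-invert transformation buys fast convergence to the interior at the cost of a sparse factorization.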
Nonlinear Krylov and moving nodes in the method of lines
NASA Astrophysics Data System (ADS)
Miller, Keith
2005-11-01
We report on some successes and problem areas in the Method of Lines from our work with moving node finite element methods. First, we report on our "nonlinear Krylov accelerator" for the modified Newton's method on the nonlinear equations of our stiff ODE solver. Since 1990 it has been robust, simple, cheap, and automatic on all our moving node computations. We publicize further trials with it here because it should be of great general usefulness to all those solving evolutionary equations. Second, we discuss the need for reliable automatic choice of spatially variable time steps. Third, we discuss the need for robust and efficient iterative solvers for the difficult linearized equations (Jx=b) of our stiff ODE solver. Here, the 1997 thesis of Zulu Xaba has made significant progress.
General purpose nonlinear system solver based on Newton-Krylov method.
2013-12-01
KINSOL is part of a software family called SUNDIALS: SUite of Nonlinear and Differential/Algebraic equation Solvers [1]. KINSOL is a general-purpose nonlinear system solver based on Newton-Krylov and fixed-point solver technologies [2].
NASA Astrophysics Data System (ADS)
Nocera, A.; Alvarez, G.
2016-11-01
Frequency-dependent correlations, such as the spectral function and the dynamical structure factor, help interpret condensed matter experiments. Within the density matrix renormalization group (DMRG) framework, an accurate method for calculating spectral functions directly in frequency is the correction-vector method. The correction vector can be computed by solving a linear equation or by minimizing a functional. This paper proposes an alternative way to calculate the correction vector: the Krylov-space approach. The paper then studies the accuracy and performance of the Krylov-space approach when applied to the Heisenberg, t-J, and Hubbard models. The cases studied indicate that the Krylov-space approach can be more accurate and efficient than the conjugate gradient, and that the error of the former integrates best when a Krylov-space decomposition is also used for the ground state DMRG.
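The Krylov-space route to the correction vector described above, c = ((omega + E0 + i*eta) I - H)^{-1} A|gs>, can be sketched on a dense toy Hamiltonian (not a DMRG/matrix-product implementation): build a Krylov basis from the right-hand side, project the shifted operator onto it, and solve the small system there. Here m = n so the projected solve is exact; in practice m would be far smaller than the Hilbert-space dimension. All parameters (H, omega, eta) are illustrative assumptions.

```python
import numpy as np

def krylov_correction_vector(H, E0, rhs, omega, eta, m):
    """Project ((omega + E0 + i*eta) I - H) onto K_m(H, rhs) and solve there."""
    n = len(rhs)
    V = np.zeros((n, m))
    V[:, 0] = rhs / np.linalg.norm(rhs)
    for j in range(1, m):
        w = H @ V[:, j - 1]
        for _ in range(2):                  # full reorthogonalization
            w -= V[:, :j] @ (V[:, :j].T @ w)
        V[:, j] = w / np.linalg.norm(w)
    Z = (omega + E0 + 1j * eta) * np.eye(m) - V.T @ H @ V
    y = np.linalg.solve(Z, V.T @ rhs)
    return V @ y                            # correction vector in full space

rng = np.random.default_rng(4)
n = 40
B = rng.standard_normal((n, n))
H = (B + B.T) / 2                           # toy dense 'Hamiltonian'
E0 = np.linalg.eigvalsh(H)[0]               # ground-state energy
rhs = rng.standard_normal(n)                # stands in for A|gs>
c = krylov_correction_vector(H, E0, rhs, omega=0.5, eta=0.1, m=n)
```

The broadening eta keeps the shifted operator safely invertible, which is what makes the projected solve well behaved even when omega sits inside the spectrum.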
40 CFR 80.52 - Vehicle preconditioning.
Code of Federal Regulations, 2014 CFR
2014-07-01
Title 40, Protection of Environment, § 80.52 (Regulation of Fuels and Fuel Additives, Reformulated Gasoline): Vehicle preconditioning. (a) Initial vehicle preconditioning and preconditioning between tests with different fuels shall be performed...
Constructing the L2-Graph for Robust Subspace Learning and Subspace Clustering.
Peng, Xi; Yu, Zhiding; Yi, Zhang; Tang, Huajin
2017-04-01
Under the framework of graph-based learning, the key to robust subspace clustering and subspace learning is to obtain a good similarity graph that eliminates the effects of errors and retains only connections between data points from the same subspace (i.e., intrasubspace data points). Recent works achieve good performance by modeling errors in their objective functions to remove the errors from the inputs. However, these approaches face the limitations that the structure of the errors must be known a priori and a complex convex problem must be solved. In this paper, we present a novel method to eliminate the effects of the errors from the projection space (representation) rather than from the input space. We first prove that l1-, l2-, l∞-, and nuclear-norm-based linear projection spaces share the property of intrasubspace projection dominance, i.e., the coefficients over intrasubspace data points are larger than those over intersubspace data points. Based on this property, we introduce a method to construct a sparse similarity graph, called the L2-graph. Subspace clustering and subspace learning algorithms are developed upon the L2-graph. We conduct comprehensive experiments on subspace learning, image clustering, and motion segmentation, and consider several quantitative benchmarks: classification/clustering accuracy, normalized mutual information, and running time. Results show that the L2-graph outperforms many state-of-the-art methods in our experiments, including the L1-graph, low-rank representation (LRR), latent LRR, least-squares regression, sparse subspace clustering, and locally linear representation.
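The intrasubspace projection dominance property above can be illustrated with a small l2-based graph construction: code each point over the remaining points with ridge-regularized least squares and use the coefficient magnitudes as similarities. This is a hedged sketch of the idea, not the paper's exact L2-graph pipeline; the two-subspace synthetic data and the regularization weight are assumptions.

```python
import numpy as np

def l2_graph(X, lam=0.1):
    """Ridge-code each column of X over the others; big coefficients tend
    to connect points from the same subspace (projection dominance)."""
    n = X.shape[1]
    C = np.zeros((n, n))
    for i in range(n):
        D = np.delete(X, i, axis=1)             # all points except x_i
        c = np.linalg.solve(D.T @ D + lam * np.eye(n - 1), D.T @ X[:, i])
        C[np.arange(n) != i, i] = c
    return np.abs(C) + np.abs(C).T              # symmetrized similarity graph

rng = np.random.default_rng(5)
def pts(support, n_pts):                        # points on a 2-D coordinate subspace
    P = 0.01 * rng.standard_normal((10, n_pts))
    P[support, :] += rng.standard_normal((len(support), n_pts))
    return P

X = np.hstack([pts([0, 1], 8), pts([5, 6], 8)])
W = l2_graph(X, lam=0.1)
labels = np.array([0] * 8 + [1] * 8)
```

Feeding W into spectral clustering would complete the subspace-clustering pipeline; here the dominance shows up directly in each point's strongest edge.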
HyeongKae Park; R. Nourgaliev; Richard C. Martineau; Dana A. Knoll
2008-09-01
Multidimensional, higher-order (2nd and higher) numerical methods have come to the forefront in recent years due to significant advances in computer technology and numerical algorithms, and have shown great potential as viable design tools for realistic applications. To achieve this goal, implicit, high-order accurate coupling of the multiphysics simulations is a critical component. One of the issues that arises from multiphysics simulation is the necessity to resolve multiple time scales. For example, the dynamical time scales of neutron kinetics, fluid dynamics, and heat conduction differ significantly (typically by a factor of more than 10^10), with the dominant (fastest) physical mode also changing during the course of the transient [Pope and Mousseau, 2007]. This leads to severe time step restrictions for stability in traditional multiphysics (i.e., operator-split, semi-implicit discretization) simulations. Lower-order methods suffer from undesirable numerical dissipation. Thus an implicit, higher-order accurate scheme is necessary to perform seamlessly-coupled multiphysics simulations that can be used to analyze "what-if" regulatory accident scenarios, or to design and optimize engineering systems.
NASA Astrophysics Data System (ADS)
Vecharynski, Eugene; Yang, Chao; Pask, John E.
2015-06-01
We present an iterative algorithm for computing an invariant subspace associated with the algebraically smallest eigenvalues of a large sparse or structured Hermitian matrix A. We are interested in the case in which the dimension of the invariant subspace is large (e.g., over several hundreds or thousands) even though it may still be small relative to the dimension of A. These problems arise from, for example, density functional theory (DFT) based electronic structure calculations for complex materials. The key feature of our algorithm is that it performs fewer Rayleigh-Ritz calculations compared to existing algorithms such as the locally optimal block preconditioned conjugate gradient or the Davidson algorithm. It is a block algorithm, and hence can take advantage of efficient BLAS3 operations and be implemented with multiple levels of concurrency. We discuss a number of practical issues that must be addressed in order to implement the algorithm efficiently on a high performance computer.
A nonconforming multigrid method using conforming subspaces
NASA Technical Reports Server (NTRS)
Lee, Chang Ock
1993-01-01
For second-order elliptic boundary value problems, we develop a nonconforming multigrid method using the coarser-grid correction on the conforming finite element subspaces. The convergence proof with an arbitrary number of smoothing steps for the ν-cycle is presented.
Subspace Identification with Multiple Data Sets
NASA Technical Reports Server (NTRS)
Duchesne, Laurent; Feron, Eric; Paduano, James D.; Brenner, Marty
1995-01-01
Most existing subspace identification algorithms assume that a single input to output data set is available. Motivated by a real life problem on the F18-SRA experimental aircraft, we show how these algorithms are readily adapted to handle multiple data sets. We show by means of an example the relevance of such an improvement.
A property of subspaces admitting spectral synthesis
Abuzyarova, N F
1999-04-30
Let H be the space of holomorphic functions in a convex domain G ⊂ ℂ. The following result is established: each closed subspace W ⊂ H that is invariant with respect to the operator of differentiation and admits spectral synthesis can be represented as the solution set of two (possibly coinciding) homogeneous convolution equations.
Preconditioning for traumatic brain injury.
Yokobori, Shoji; Mazzeo, Anna T; Hosein, Khadil; Gajavelli, Shyam; Dietrich, W Dalton; Bullock, M Ross
2013-02-01
Traumatic brain injury (TBI) treatment is now focused on the prevention of primary injury and reduction of secondary injury. However, no single effective treatment is available as yet for the mitigation of traumatic brain damage in humans. Both chemical and environmental stresses applied before injury have been shown to induce consequent protection against post-TBI neuronal death. This concept, termed "preconditioning," is achieved by exposure to different pre-injury stressors to induce "tolerance" to the effects of the TBI. However, the precise mechanisms underlying this tolerance phenomenon are not fully understood in TBI, and therefore even less information is available about possible indications in clinical TBI patients. In this review, we summarize TBI pathophysiology and discuss existing animal studies demonstrating the efficacy of preconditioning in diffuse and focal types of TBI. We also review other non-TBI preconditioning studies, including ischemic, environmental, and chemical preconditioning, which may be relevant to TBI. To date, no clinical studies exist in this field, and we speculate on possible future clinical situations in which pre-TBI preconditioning could be considered.
Pharmacologic Preconditioning: Translating the Promise
Gidday, Jeffrey M.
2010-01-01
A transient, ischemia-resistant phenotype known as “ischemic tolerance” can be established in brain in a rapid or delayed fashion by a preceding noninjurious “preconditioning” stimulus. Initial preclinical studies of this phenomenon relied primarily on brief periods of ischemia or hypoxia as preconditioning stimuli, but it was later realized that many other stressors, including pharmacologic ones, are also effective. This review highlights the surprisingly wide variety of drugs now known to promote ischemic tolerance, documented and to some extent mechanistically characterized in preclinical animal models of stroke. Although considerably more experimentation is needed to thoroughly validate the ability of any currently identified preconditioning agent to protect ischemic brain, the fact that some of these drugs are already clinically approved for other indications implies that the growing enthusiasm for translational success in the field of pharmacologic preconditioning may be well justified. PMID:21197121
Angular-Similarity-Preserving Binary Signatures for Linear Subspaces.
Ji, Jianqiu; Li, Jianmin; Tian, Qi; Yan, Shuicheng; Zhang, Bo
2015-11-01
We propose a similarity-preserving binary signature method for linear subspaces. In computer vision and pattern recognition, linear subspace is a very important representation for many kinds of data, such as face images, action and gesture videos, and so on. When there is a large amount of subspace data and the ambient dimension is high, the cost of computing the pairwise similarity between the subspaces would be high and it requires a large storage space for storing the subspaces. In this paper, we first define the angular similarity and angular distance between the subspaces. Then, based on this similarity definition, we develop a similarity-preserving binary signature method for linear subspaces, which transforms a linear subspace into a compact binary signature, and the Hamming distance between two signatures provides an unbiased estimate of the angular similarity between the two subspaces. We also provide a lower bound of the signature length sufficient to guarantee uniform distance-preservation between every pair of subspaces in a set. Experiments on face recognition, gesture recognition, and action recognition verify the effectiveness of the proposed method.
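A simple way to see how a binary signature can preserve angular similarity between subspaces, in the spirit of the abstract above though not the authors' construction, is SimHash applied to vectorized orthogonal projectors: since <P1, P2>_F = sum of cos² of the principal angles, sign random projections of vec(U U^T) give bits whose Hamming fraction is an unbiased estimate of the angle between the projectors. Dimensions, subspace ranks, and signature length below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def signature(U, G):
    """One bit per random direction: sign of <g, vec(U @ U.T)> (SimHash)."""
    p = (U @ U.T).ravel()
    return G @ p > 0

def hamming_angle(s1, s2):
    """Hamming fraction times pi estimates the angle between the projectors."""
    return np.mean(s1 != s2) * np.pi

n, d, m = 20, 3, 10000
G = rng.standard_normal((m, n * n))             # shared random projections
U1 = np.linalg.qr(rng.standard_normal((n, d)))[0]
U2 = np.linalg.qr(rng.standard_normal((n, d)))[0]
P1, P2 = U1 @ U1.T, U2 @ U2.T
true_angle = np.arccos((P1 * P2).sum()
                       / (np.linalg.norm(P1) * np.linalg.norm(P2)))
est_angle = hamming_angle(signature(U1, G), signature(U2, G))
```

Longer signatures shrink the estimator's variance roughly like 1/sqrt(m), which mirrors the signature-length bound the paper provides for uniform distance preservation.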
NASA Astrophysics Data System (ADS)
Borazjani, Iman; Asgharzadeh, Hafez
2015-11-01
Flow simulations involving complex geometries and moving boundaries suffer from time-step size restrictions and low convergence rates with explicit and semi-implicit schemes. Implicit schemes can be used to overcome these restrictions, but implementing an implicit solver for nonlinear equations such as the Navier-Stokes equations is not straightforward. Newton-Krylov subspace methods (NKMs) are among the most advanced iterative methods for solving nonlinear equations such as the implicit discretization of the Navier-Stokes equations. The efficiency of NKMs depends heavily on the Jacobian formation method: automatic differentiation is very expensive, and matrix-free methods slow down as the mesh is refined. An analytical Jacobian is inexpensive, but deriving it for the Navier-Stokes equations on a staggered grid is challenging. An NKM with a novel analytical Jacobian was developed and validated against the Taylor-Green vortex and pulsatile flow in a 90-degree bend. The developed method successfully handled complex geometries such as an intracranial aneurysm with multiple overset grids and immersed boundaries. It is shown that the NKM with an analytical Jacobian is 3 to 25 times faster than the fixed-point implicit Runge-Kutta method, and more than 100 times faster than automatic differentiation, depending on the grid size and the flow problem. The developed methods are fully parallelized, with parallel efficiency of 80-90% on the problems tested.
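To illustrate the matrix-free flavor of Newton-Krylov methods mentioned above (not the authors' analytical-Jacobian solver), SciPy's `newton_krylov` approximates Jacobian-vector products by finite differences inside the inner Krylov iteration. The 1D Bratu-type problem below is an assumed stand-in for an implicit flow discretization:

```python
import numpy as np
from scipy.optimize import newton_krylov

# 1D nonlinear Poisson (Bratu-type) problem u'' + exp(u) = 0 with
# u(0) = u(1) = 0, discretized by central differences. The Jacobian is
# never formed: Jacobian-vector products are approximated by finite
# differences inside the inner Krylov (GMRES) solver.
n = 50
h = 1.0 / (n + 1)

def residual(u):
    up = np.concatenate(([0.0], u, [0.0]))   # Dirichlet boundary values
    return (up[2:] - 2.0 * up[1:-1] + up[:-2]) / h**2 + np.exp(u)

u = newton_krylov(residual, np.zeros(n), f_tol=1e-8)
```

The solution is strictly positive in the interior (the continuous solution is concave with zero boundary values), which gives a quick sanity check on the solve.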
Scalable parallel Newton-Krylov solvers for discontinuous Galerkin discretizations
Persson, P.-O.
2008-12-31
We present techniques for implicit solution of discontinuous Galerkin discretizations of the Navier-Stokes equations on parallel computers. While a block-Jacobi method is simple and straight-forward to parallelize, its convergence properties are poor except for simple problems. Therefore, we consider Newton-GMRES methods preconditioned with block-incomplete LU factorizations, with optimized element orderings based on a minimum discarded fill (MDF) approach. We discuss the difficulties with the parallelization of these methods, but also show that with a simple domain decomposition approach, most of the advantages of the block-ILU over the block-Jacobi preconditioner are still retained. The convergence is further improved by incorporating the matrix connectivities into the mesh partitioning process, which aims at minimizing the errors introduced from separating the partitions. We demonstrate the performance of the schemes for realistic two- and three-dimensional flow problems.
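The paper's block-ILU with MDF ordering for DG Jacobians is not reproduced here; the sketch below shows the same idea in miniature with SciPy's `spilu`, on a small nonsymmetric convection-diffusion stencil (matrix and tolerances are illustrative assumptions):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Compare GMRES iteration counts with and without an incomplete-LU
# preconditioner on a nonsymmetric tridiagonal convection-diffusion matrix.
n = 400
A = sp.diags([-1.0, 2.5, -1.3], offsets=[-1, 0, 1], shape=(n, n), format='csc')
b = np.ones(n)

iters = {'plain': 0, 'ilu': 0}
def counter(key):
    def cb(pr_norm):          # called once per inner GMRES iteration
        iters[key] += 1
    return cb

x0, info0 = spla.gmres(A, b, restart=30, atol=1e-12,
                       callback=counter('plain'), callback_type='pr_norm')

ilu = spla.spilu(A, drop_tol=1e-5, fill_factor=10)   # incomplete LU
M = spla.LinearOperator((n, n), ilu.solve)           # preconditioner action
x1, info1 = spla.gmres(A, b, M=M, restart=30, atol=1e-12,
                       callback=counter('ilu'), callback_type='pr_norm')
```

On a banded matrix like this the ILU factorization is nearly exact, so the preconditioned solve converges in far fewer iterations, mirroring the block-ILU versus block-Jacobi comparison in the abstract.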
Ischemic preconditioning. Experimental facts and clinical perspective.
Post, H; Heusch, G
2002-12-01
Brief periods of non-lethal ischemia and reperfusion render the myocardium more resistant to subsequent ischemia. This adaptation occurs in a biphasic pattern: the first phase is active immediately and lasts for 2-3 hrs (early preconditioning); the second starts at 24 hrs and lasts until 72 hrs after the initial ischemia (delayed preconditioning) and requires genomic activation with de novo protein synthesis. Early preconditioning is more potent than delayed preconditioning in reducing infarct size; delayed preconditioning also attenuates myocardial stunning. Early preconditioning depends on the ischemia-induced release of adenosine and opioids and, to a lesser degree, also bradykinin and prostaglandins. These molecules activate G-protein coupled receptors, initiate the activation of KATP channels and the generation of oxygen radicals, and stimulate a series of protein kinases, with essential roles for protein kinase C, tyrosine kinases and members of the MAP kinase family. Delayed preconditioning is triggered by a similar sequence of events, but in addition essentially depends on eNOS-derived NO. Both early and delayed preconditioning can be pharmacologically mimicked by exogenous adenosine, opioids, NO and activators of protein kinase C. Newly synthesized proteins associated with delayed preconditioning comprise iNOS, COX-2, manganese superoxide dismutase and possibly heat shock proteins. The final mechanism of protection by preconditioning is as yet unknown; energy metabolism, KATP channels, the sodium-proton exchanger, stabilisation of the cytoskeleton and volume regulation are discussed. For ethical reasons, evidence for ischemic preconditioning in humans is hard to provide. Clinical findings that parallel experimental ischemic preconditioning are reduced ST-segment elevation and pain during repetitive PTCA or exercise tests, a better prognosis of patients in whom myocardial infarction was preceded by angina, and reduced serum markers of myocardial necrosis after
Subspace controllability of spin-1/2 chains with symmetries
NASA Astrophysics Data System (ADS)
Wang, Xiaoting; Burgarth, Daniel; Schirmer, S.
2016-11-01
We develop a technique to prove simultaneous subspace controllability on multiple invariant subspaces, which specifically enables us to study the controllability properties of spin systems that are not amenable to standard controllability arguments based on energy-level connectivity graphs or simple induction arguments on the length of the chain. The technique is applied to establish simultaneous subspace controllability for Heisenberg spin chains subject to limited local controls. This model is theoretically important, and the controllability result shows that a single control can be sufficient for complete controllability of an exponentially large subspace and for universal quantum computation within it. The controllability results are extended to prove subspace controllability in the presence of control-field leakage, and we discuss the minimal control resources required to achieve controllability over the entire spin-chain space.
Optimizing Cubature for Efficient Integration of Subspace Deformations
An, Steven S.; Kim, Theodore; James, Doug L.
2009-01-01
We propose an efficient scheme for evaluating nonlinear subspace forces (and Jacobians) associated with subspace deformations. The core problem we address is efficient integration of the subspace force density over the 3D spatial domain. Similar to Gaussian quadrature schemes that efficiently integrate functions that lie in particular polynomial subspaces, we propose cubature schemes (multi-dimensional quadrature) optimized for efficient integration of force densities associated with particular subspace deformations, particular materials, and particular geometric domains. We support generic subspace deformation kinematics, and nonlinear hyperelastic materials. For an r-dimensional deformation subspace with O(r) cubature points, our method is able to evaluate subspace forces at O(r2) cost. We also describe composite cubature rules for runtime error estimation. Results are provided for various subspace deformation models, several hyperelastic materials (St.Venant-Kirchhoff, Mooney-Rivlin, Arruda-Boyce), and multimodal (graphics, haptics, sound) applications. We show dramatically better efficiency than traditional Monte Carlo integration. CR Categories: I.6.8 [Simulation and Modeling]: Types of Simulation—Animation, I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling—Physically based modeling G.1.4 [Mathematics of Computing]: Numerical Analysis—Quadrature and Numerical Differentiation PMID:19956777
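The cubature-fitting step can be sketched with a nonnegative least-squares fit: weights are chosen so a weighted point sum reproduces full-domain integrals of training functions. The candidate points and polynomial "force densities" below are stand-ins; the paper pairs such a fit with a greedy selection of the points themselves:

```python
import numpy as np
from scipy.optimize import nnls

# Fit nonnegative cubature weights so that a weighted sum over fixed
# sample points reproduces exact integrals of training functions on [0,1].
# Random cubics stand in for subspace force densities; here we keep all
# candidate points rather than greedily selecting a subset.
rng = np.random.default_rng(2)
npts, ntrain = 13, 40
x = np.linspace(0.0, 1.0, npts)                      # candidate points

coeffs = rng.standard_normal((ntrain, 4))            # random cubic polynomials
F = np.stack([np.polyval(c, x) for c in coeffs])     # ntrain x npts samples
exact = np.array([np.polyval(np.polyint(c), 1.0) for c in coeffs])

w, resid = nnls(F, exact)                            # nonnegative weights
```

Because a nonnegative rule exact for cubics exists on these 13 equispaced points (composite Simpson), the NNLS fit reaches essentially zero residual on all training integrals.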
Preconditioning Operators on Unstructured Grids
NASA Technical Reports Server (NTRS)
Nepomnyaschikh, S. V.
1996-01-01
We consider systems of mesh equations that approximate elliptic boundary value problems on arbitrary (unstructured) quasi-uniform triangulations and propose a method for constructing optimal preconditioning operators. The method is based upon two approaches: (1) the fictitious space method, i.e., the reduction of the original problem to a problem in an auxiliary (fictitious) space, and (2) the multilevel decomposition method, i.e., the construction of preconditioners by decomposing functions on hierarchical meshes. The convergence rate of the corresponding iterative process with the preconditioner obtained is independent of the mesh step. The preconditioner has an optimal computational cost: the number of arithmetic operations required for its implementation is proportional to the number of unknowns in the problem. The construction of the preconditioning operators for three dimensional problems can be done in the same way.
A precondition prover for analogy.
Bledsoe, W W
1995-01-01
We describe here a prover PC (precondition) that normally acts as an ordinary theorem prover, but which returns a 'precondition' when it is unable to prove the given formula. If F is the formula to be proved and PC returns the precondition Q, then (Q-->F) is a theorem (that PC can prove). This prover, PC, uses a proof-plan. In its simplest mode, when there is no proof-plan, it acts like ordinary abduction. We show here how this method can be used to derive certain proofs by analogy. To do this, it uses a proof-plan from a given guiding proof to help construct the proof of a similar theorem, by 'debugging' (automatically) that proof-plan. We show the analogy proofs of a few simple example theorems and one hard pair, Ex4 and Ex4L. The given proof-plan for Ex4 is used by the system to prove Ex4 automatically; that same proof-plan is then used to prove Ex4L, during which the proof-plan is 'debugged' (automatically). These two examples are similar to two other, more difficult, theorems from the theory of resolution, namely GCR (the ground completeness of resolution) and GCLR (the ground completeness of lock resolution). GCR and GCLR have also been handled, in essence, by this system, but not completed in all their details.
Preconditioning for traumatic brain injury
Yokobori, Shoji; Mazzeo, Anna T; Hosein, Khadil; Gajavelli, Shyam; Dietrich, W. Dalton; Bullock, M. Ross
2016-01-01
Traumatic brain injury (TBI) treatment is now focused on the prevention of primary injury and reduction of secondary injury. However, no single effective treatment is available as yet for the mitigation of traumatic brain damage in humans. Both chemical and environmental stresses applied before injury have been shown to induce subsequent protection against post-TBI neuronal death. This concept, termed “preconditioning,” is achieved by exposure to different pre-injury stressors to induce “tolerance” to the effects of TBI. However, the precise mechanisms underlying this tolerance phenomenon are not fully understood in TBI, and therefore even less information is available about possible indications in clinical TBI patients. In this review we summarize TBI pathophysiology and discuss existing animal studies demonstrating the efficacy of preconditioning in diffuse and focal types of TBI. We also review other non-TBI preconditioning studies, including ischemic, environmental, and chemical preconditioning, which may be relevant to TBI. To date, no clinical studies exist in this field, and we speculate on possible future clinical situations in which pre-TBI preconditioning could be considered. PMID:24323189
Subspace Signal Processing in Structured Noise
1990-12-01
shown how a model matrix for a linear model is formed for several cases. Chapter II covers some of the specialized linear algebra necessary for the... symbols in a bold font, and are usually upper case, such as H. The subspace spanned by the columns of a matrix is represented with angle brackets... around the symbol for the matrix, such as (H). A superscript T is used to indicate the transpose of a matrix or vector, such as HT. For complex matrices
The variational subspace valence bond method.
Fletcher, Graham D
2015-04-07
The variational subspace valence bond (VSVB) method based on overlapping orbitals is introduced. VSVB provides variational support against collapse for the optimization of overlapping linear combinations of atomic orbitals (OLCAOs) using modified orbital expansions, without recourse to orthogonalization. OLCAOs have the advantage of being naturally localized, chemically intuitive (to individually model bonds and lone pairs, for example), and transferable between different molecular systems. Such features are exploited to avoid key computational bottlenecks. Since the OLCAOs can be doubly occupied, VSVB can access very large problems, and calculations on systems with several hundred atoms are presented.
Vecharynski, Eugene; Yang, Chao; Pask, John E.
2015-02-25
Here, we present an iterative algorithm for computing an invariant subspace associated with the algebraically smallest eigenvalues of a large sparse or structured Hermitian matrix A. We are interested in the case in which the dimension of the invariant subspace is large (e.g., several hundred or several thousand), even though it may still be small relative to the dimension of A. These problems arise, for example, from density functional theory (DFT) based electronic structure calculations for complex materials. The key feature of our algorithm is that it performs fewer Rayleigh–Ritz calculations than existing algorithms such as the locally optimal block preconditioned conjugate gradient or the Davidson algorithm. It is a block algorithm, and hence can take advantage of efficient BLAS3 operations and be implemented with multiple levels of concurrency. We discuss a number of practical issues that must be addressed in order to implement the algorithm efficiently on a high performance computer.
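For context, the locally optimal block preconditioned conjugate gradient (LOBPCG) baseline mentioned above is available in SciPy; a small sketch follows, where the sparse matrix is a synthetic stand-in for a DFT-type Hamiltonian:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lobpcg

# Compute the 10 algebraically smallest eigenvalues of a sparse symmetric
# tridiagonal matrix with LOBPCG and a Jacobi (diagonal) preconditioner.
n, k = 500, 10
rng = np.random.default_rng(3)
offdiag = 0.1 * np.ones(n - 1)
A = sp.diags([offdiag, np.arange(1.0, n + 1), offdiag], [-1, 0, 1], format='csr')

X = rng.standard_normal((n, k))          # random initial block of k vectors
M = sp.diags(1.0 / A.diagonal())         # Jacobi preconditioner
vals, vecs = lobpcg(A, X, M=M, largest=False, tol=1e-9, maxiter=400)
```

Being a block method, each iteration applies A to k vectors at once (BLAS3-friendly), which is exactly the concurrency structure the abstract's algorithm also exploits.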
Central subspace dimensionality reduction using covariance operators.
Kim, Minyoung; Pavlovic, Vladimir
2011-04-01
We consider the task of dimensionality reduction informed by real-valued multivariate labels. The problem is often treated as Dimensionality Reduction for Regression (DRR), whose goal is to find a low-dimensional representation, the central subspace, of the input data that preserves the statistical correlation with the targets. A class of DRR methods exploits the notion of inverse regression (IR) to discover central subspaces. Whereas most existing IR techniques rely on explicit output space slicing, we propose a novel method called the Covariance Operator Inverse Regression (COIR) that generalizes IR to nonlinear input/output spaces without explicit target slicing. COIR's unique properties make DRR applicable to problem domains with high-dimensional output data corrupted by potentially significant amounts of noise. Unlike recent kernel dimensionality reduction methods that employ iterative nonconvex optimization, COIR yields a closed-form solution. We also establish the link between COIR, other DRR techniques, and popular supervised dimensionality reduction methods, including canonical correlation analysis and linear discriminant analysis. We then extend COIR to semi-supervised settings where many of the input points lack their labels. We demonstrate the benefits of COIR on several important regression problems in both fully supervised and semi-supervised settings.
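COIR's closed-form covariance-operator solution is not reproduced here; as a reference point, the classical sliced inverse regression (SIR) it generalizes can be sketched on synthetic single-index data (all parameters below are illustrative assumptions):

```python
import numpy as np

def sir_first_direction(X, y, n_slices=10):
    """Classical sliced inverse regression, the explicit-slicing scheme
    that COIR generalizes: slice y, average the whitened inputs within
    each slice, and take the top eigenvector of the slice-mean covariance."""
    n, p = X.shape
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    W = np.linalg.cholesky(np.linalg.inv(cov))   # whitening: Cov((X-mu)W) = I
    Z = (X - mu) @ W
    M = np.zeros((p, p))
    for s in np.array_split(np.argsort(y), n_slices):
        m = Z[s].mean(axis=0)
        M += (len(s) / n) * np.outer(m, m)
    _, V = np.linalg.eigh(M)
    beta = W @ V[:, -1]                          # back to original coordinates
    return beta / np.linalg.norm(beta)

rng = np.random.default_rng(4)
n, p = 2000, 6
X = rng.standard_normal((n, p))
b = np.eye(p)[0]                                  # true central-subspace direction
y = X @ b + 0.5 * (X @ b) ** 3 + 0.1 * rng.standard_normal(n)
beta = sir_first_direction(X, y)
```

With a monotone link and enough samples, the recovered direction aligns closely with the true one; COIR's contribution is to achieve this without explicit output slicing and in nonlinear input/output spaces.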
A practical sub-space adaptive filter.
Zaknich, A
2003-01-01
A Sub-Space Adaptive Filter (SSAF) model is developed using, as a basis, the Modified Probabilistic Neural Network (MPNN) and its extension, the Tuneable Approximate Piecewise Linear Regression (TAPLR) model. The TAPLR model can be adjusted by a single smoothing parameter, continuously, from the best piecewise linear model in each sub-space to the best approximately piecewise linear model over the whole data space. A suitable value in between ensures that all neighbouring piecewise linear models merge together smoothly at their boundaries. This model was developed by altering the form of the MPNN, a network used for general nonlinear regression. The MPNN's special structure allows it to be easily used to model a process by appropriately weighting the piecewise linear models associated with each of the network's radial basis functions. The model has now been further extended to allow each piecewise linear model section to be adapted separately as new data flows through it. By doing this, the proposed SSAF model represents a learning/filtering method for nonlinear processes that provides one solution to the stability/plasticity dilemma associated with standard adaptive filters.
A simple subspace approach for speech denoising.
Manfredi, C; Daniello, M; Bruscaglioni, P
2001-01-01
For pathological voices, hoarseness is mainly due to airflow turbulence in the vocal tract and is often referred to as noise. This paper focuses on the enhancement of speech signals that are supposedly degraded by additive white noise. Speech enhancement is performed in the time domain, by means of a fast and reliable subspace approach. A low-order singular value decomposition (SVD) allows separating the signal and noise contributions in subsequent data frames of the analysed speech signal. The noise component is thus removed from the signal, and the filtered signal is reconstructed along the directions spanned by the eigenvectors associated with the signal-subspace eigenvalues only, thus giving enhanced voice quality. This approach was tested on synthetic data, showing higher performance in terms of increased SNR when compared with linear prediction (LP) filtering. It was also successfully applied to real data, from hoarse voices of patients who had undergone partial cordectomy. The simple structure of the proposed technique allows a real-time implementation, suitable for realisation in a portable device as an aid to dysphonic speakers. It could be useful for reducing the effort in speaking, which is closely related to the social problems caused by awkwardness of voice.
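A minimal time-domain sketch of this kind of subspace denoising follows; the frame length, truncation rank, and test signal are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Stack overlapping frames of the noisy signal into a matrix, keep only
# the leading singular directions (the "signal subspace"), and rebuild
# the signal by averaging over the overlapping frames.
rng = np.random.default_rng(5)
t = np.arange(1024) / 8000.0
clean = np.sin(2 * np.pi * 200 * t) + 0.5 * np.sin(2 * np.pi * 350 * t)
noisy = clean + 0.3 * rng.standard_normal(t.size)

L = 64                                               # frame length
H = np.lib.stride_tricks.sliding_window_view(noisy, L)
U, s, Vt = np.linalg.svd(H, full_matrices=False)
r = 4                                                # 2 real sinusoids -> rank 4
Hd = (U[:, :r] * s[:r]) @ Vt[:r]                     # truncated-SVD frames

denoised = np.zeros(noisy.size)
counts = np.zeros(noisy.size)
for i in range(Hd.shape[0]):                         # average overlapping frames
    denoised[i:i + L] += Hd[i]
    counts[i:i + L] += 1.0
denoised /= counts
```

Discarding the trailing singular directions removes most of the broadband noise while the low-rank sinusoidal structure survives, which is the SNR gain the abstract reports over LP filtering.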
Preconditioning of the HiFi Code by Linear Discretization on the Gauss-Lobatto-Legendre Nodes
NASA Astrophysics Data System (ADS)
Glasser, A. H.; Lukin, V. S.
2013-10-01
The most challenging aspect of extended MHD simulation is the scaling of computational time as the problem size is scaled up. The use of high-order spectral elements, as in the HiFi code, is useful for handling multiple length scales and strong anisotropy, but detailed code profiling studies show that cpu time increases rapidly with increasing np, the polynomial degree of the spectral elements, due to the cost of Jacobian matrix formation and solution. We have implemented a method of matrix preconditioning based on linear discretization of the Jacobian matrix on the Gauss-Lobatto-Legendre interpolatory nodes. The resulting matrix has far fewer nonzero elements than the full Jacobian and shares the same vector format. The full solution is then obtained by matrix-free Newton-Krylov methods, which converge rapidly because the preconditioner provides an accurate approximation to the full problem. Scaling studies will be presented for a variety of applications.
Preconditioned iterations to calculate extreme eigenvalues
Brand, C.W.; Petrova, S.
1994-12-31
Common iterative algorithms to calculate a few extreme eigenvalues of a large, sparse matrix are Lanczos methods or power iterations. They converge at a rate proportional to the separation of the extreme eigenvalues from the rest of the spectrum. Appropriate preconditioning improves the separation of the eigenvalues. Davidson's method and its generalizations exploit this fact. The authors examine a preconditioned iteration that resembles a truncated version of Davidson's method with a different preconditioning strategy.
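A minimal dense-matrix sketch of the Davidson iteration referenced above, with the classic diagonal preconditioner (illustrative only; production codes restart and block the basis, and the abstract's method uses a different preconditioning strategy):

```python
import numpy as np

def davidson_smallest(A, tol=1e-8, max_iter=200):
    """Minimal Davidson iteration for the smallest eigenvalue of a
    symmetric matrix, expanding the search space with diagonally
    preconditioned residuals."""
    n = A.shape[0]
    d = np.diag(A)
    V = np.zeros((n, 1))
    V[np.argmin(d), 0] = 1.0                          # start near smallest diagonal
    for _ in range(max_iter):
        V, _ = np.linalg.qr(V)                        # keep the basis orthonormal
        Hm = V.T @ A @ V                              # Rayleigh-Ritz projection
        vals, vecs = np.linalg.eigh(Hm)
        theta = vals[0]
        u = V @ vecs[:, 0]
        r = A @ u - theta * u                         # residual
        if np.linalg.norm(r) < tol:
            return theta, u
        denom = d - theta                             # diagonal preconditioner
        denom[np.abs(denom) < 1e-10] = 1e-10
        V = np.hstack([V, (r / denom)[:, None]])      # expand search space
    return theta, u

rng = np.random.default_rng(6)
n = 200
A = np.diag(np.arange(1.0, n + 1)) + 0.01 * rng.standard_normal((n, n))
A = (A + A.T) / 2                                     # symmetrize test matrix
theta, u = davidson_smallest(A)
```

On diagonally dominant matrices like this one, the diagonal preconditioner captures the spectrum well and the iteration converges in a handful of basis expansions, which is the separation-improving effect the abstract describes.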
Faces from sketches: a subspace synthesis approach
NASA Astrophysics Data System (ADS)
Li, Yung-hui; Savvides, Marios
2006-04-01
In real-life scenarios, we may need to perform face recognition for identification when only a sketch of the face is available: for example, when police try to identify a suspect from a sketch drawn by an artist according to witness descriptions, while the gallery consists of real face images acquired from video surveillance. So far, the state-of-the-art approach to this problem transforms all real face images into sketches and performs recognition in the sketch domain. We propose the opposite, which we argue is a better approach: we generate a realistic face image from the composite sketch using a hybrid subspace method, and then build an illumination-tolerant correlation filter that can recognize the person under different illumination variations. We show experimental results on the CMU PIE (Pose, Illumination, and Expression) database demonstrating the effectiveness of our novel approach.
Robust video hashing via multilinear subspace projections.
Li, Mu; Monga, Vishal
2012-10-01
The goal of video hashing is to design hash functions that summarize videos by short fingerprints or hashes. While traditional applications of video hashing lie in database searches and content authentication, the emergence of websites such as YouTube and DailyMotion poses a challenging problem of anti-piracy video search. That is, hashes or fingerprints of an original video (provided to YouTube by the content owner) must be matched against those uploaded to YouTube by users to identify instances of "illegal" or undesirable uploads. Because the uploaded videos invariably differ from the original in their digital representation (owing to incidental or malicious distortions), robust video hashes are desired. We model videos as order-3 tensors and use multilinear subspace projections, such as a reduced rank parallel factor analysis (PARAFAC) to construct video hashes. We observe that, unlike most standard descriptors of video content, tensor-based subspace projections can offer excellent robustness while effectively capturing the spatio-temporal essence of the video for discriminability. We introduce randomization in the hash function by dividing the video into (secret key based) pseudo-randomly selected overlapping sub-cubes to prevent against intentional guessing and forgery. Detection theoretic analysis of the proposed hash-based video identification is presented, where we derive analytical approximations for error probabilities. Remarkably, these theoretic error estimates closely mimic empirically observed error probability for our hash algorithm. Furthermore, experimental receiver operating characteristic (ROC) curves reveal that the proposed tensor-based video hash exhibits enhanced robustness against both spatial and temporal video distortions over state-of-the-art video hashing techniques.
Parallel Newton-Krylov-Schwarz algorithms for the transonic full potential equation
NASA Technical Reports Server (NTRS)
Cai, Xiao-Chuan; Gropp, William D.; Keyes, David E.; Melvin, Robin G.; Young, David P.
1996-01-01
We study parallel two-level overlapping Schwarz algorithms for solving nonlinear finite element problems, in particular, for the full potential equation of aerodynamics discretized in two dimensions with bilinear elements. The overall algorithm, Newton-Krylov-Schwarz (NKS), employs an inexact finite-difference Newton method and a Krylov space iterative method, with a two-level overlapping Schwarz method as a preconditioner. We demonstrate that NKS, combined with a density upwinding continuation strategy for problems with weak shocks, is robust and economical for this class of mixed elliptic-hyperbolic nonlinear partial differential equations, with proper specification of several parameters. We study upwinding parameters, inner convergence tolerance, coarse grid density, subdomain overlap, and the level of fill-in in the incomplete factorization, and report their effect on numerical convergence rate, overall execution time, and parallel efficiency on a distributed-memory parallel computer.
Newton-Krylov-Schwarz algorithms for the 2D full potential equation
Cai, Xiao-Chuan; Gropp, W.D.; Keyes, D.E.
1996-12-31
We study parallel two-level overlapping Schwarz algorithms for solving nonlinear finite element problems, in particular, for the full potential equation of aerodynamics discretized in two dimensions with bilinear elements. The main algorithm, Newton-Krylov-Schwarz (NKS), employs an inexact finite-difference Newton method and a Krylov space iterative method, with a two-level overlapping Schwarz method as a preconditioner. We demonstrate that NKS, combined with a density upwinding continuation strategy for problems with weak shocks, can be made robust for this class of mixed elliptic-hyperbolic nonlinear partial differential equations, with proper specification of several parameters. We study upwinding parameters, inner convergence tolerance, coarse grid density, subdomain overlap, and the level of fill-in in the incomplete factorization, and report favorable choices for numerical convergence rate and overall execution time on a distributed-memory parallel computer.
40 CFR 1065.518 - Engine preconditioning.
Code of Federal Regulations, 2014 CFR
2014-07-01
... engine and begin the cold soak as described in § 1065.530(a)(1). (2) Hot-start transient cycle... same ones that apply for emission testing: (1) Cold-start transient cycle. Precondition the engine by running at least one hot-start transient cycle. We will precondition your engine by running two...
A Newton-Krylov Solver for Implicit Solution of Hydrodynamics in Core Collapse Supernovae
Reynolds, D R; Swesty, F D; Woodward, C S
2008-06-12
This paper describes an implicit approach and nonlinear solver for solution of radiation-hydrodynamic problems in the context of supernovae and proto-neutron star cooling. The robust approach applies Newton-Krylov methods and overcomes the difficulties of discontinuous limiters in the discretized equations and scaling of the equations over wide ranges of physical behavior. We discuss these difficulties, our approach for overcoming them, and numerical results demonstrating accuracy and efficiency of the method.
An alternative subspace approach to EEG dipole source localization
NASA Astrophysics Data System (ADS)
Xu, Xiao-Liang; Xu, Bobby; He, Bin
2004-01-01
In the present study, we investigate a new approach to electroencephalography (EEG) three-dimensional (3D) dipole source localization by using a non-recursive subspace algorithm called FINES. In estimating source dipole locations, the present approach employs projections onto a subspace spanned by a small set of particular vectors (FINES vector set) in the estimated noise-only subspace instead of the entire estimated noise-only subspace in the case of classic MUSIC. The subspace spanned by this vector set is, in the sense of principal angle, closest to the subspace spanned by the array manifold associated with a particular brain region. By incorporating knowledge of the array manifold in identifying FINES vector sets in the estimated noise-only subspace for different brain regions, the present approach is able to estimate sources with enhanced accuracy and spatial resolution, thus enhancing the capability of resolving closely spaced sources and reducing estimation errors. The present computer simulations show, in EEG 3D dipole source localization, that compared to classic MUSIC, FINES has (1) better resolvability of two closely spaced dipolar sources and (2) better estimation accuracy of source locations. In comparison with RAP-MUSIC, FINES' performance is also better for the cases studied when the noise level is high and/or correlations among dipole sources exist.
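The classic MUSIC baseline that FINES is compared against can be sketched generically; the example below uses a uniform linear sensor array rather than an EEG lead field (an assumed stand-in), but the subspace mechanics are the same: sources sit where steering vectors are nearly orthogonal to the estimated noise-only subspace.

```python
import numpy as np

# Classic narrowband MUSIC on a uniform linear array with two sources.
rng = np.random.default_rng(8)
m, snapshots = 8, 400
true_deg = np.array([-20.0, 25.0])

def steer(deg):
    # half-wavelength-spaced array steering vector
    return np.exp(-1j * np.pi * np.sin(np.deg2rad(deg)) * np.arange(m))

A = np.stack([steer(d) for d in true_deg], axis=1)
sig = rng.standard_normal((2, snapshots)) + 1j * rng.standard_normal((2, snapshots))
noise = 0.1 * (rng.standard_normal((m, snapshots))
               + 1j * rng.standard_normal((m, snapshots)))
X = A @ sig + noise

R = X @ X.conj().T / snapshots            # sample covariance
w, V = np.linalg.eigh(R)                  # eigenvalues in ascending order
En = V[:, :m - 2]                         # estimated noise-only subspace
grid = np.linspace(-90.0, 90.0, 3601)
spec = np.array([1.0 / np.linalg.norm(En.conj().T @ steer(g)) ** 2
                 for g in grid])          # MUSIC pseudospectrum
```

The pseudospectrum peaks near the true directions; FINES replaces the full noise-only subspace `En` with a small region-tuned vector set to sharpen exactly this localization step.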
Learning Markov Random Walks for robust subspace clustering and estimation.
Liu, Risheng; Lin, Zhouchen; Su, Zhixun
2014-11-01
Markov Random Walks (MRW) have proven to be an effective way to understand spectral clustering and embedding. However, lacking a global structural measure, conventional MRW (e.g., the Gaussian-kernel MRW) cannot be applied to handle data points drawn from a mixture of subspaces. In this paper, we introduce a regularized MRW learning model, using a low-rank penalty to constrain the global subspace structure, for subspace clustering and estimation. In our framework, both the local pairwise similarity and the global subspace structure can be learnt from the transition probabilities of the MRW. We prove that under suitable conditions, our proposed local/global criteria can exactly capture the multiple-subspace structure and learn a low-dimensional embedding for the data that gives the true segmentation of the subspaces. To improve robustness in real situations, we also propose an extension of the MRW learning model that integrates transition matrix learning and error correction in a unified framework. Experimental results on both synthetic data and real applications demonstrate that our proposed MRW learning model and its robust extension outperform state-of-the-art subspace clustering methods.
Manifold learning-based subspace distance for machinery damage assessment
NASA Astrophysics Data System (ADS)
Sun, Chuang; Zhang, Zhousuo; He, Zhengjia; Shen, Zhongjie; Chen, Binqiang
2016-03-01
Damage assessment is essential for maintaining the safety and reliability of machinery components, and vibration analysis is an effective way to carry it out. In this paper, a damage index is designed by performing manifold distance analysis on the vibration signal. To calculate the index, a vibration signal is collected first, and feature extraction is carried out to obtain statistical features that capture the signal characteristics comprehensively. Then, a manifold learning algorithm is utilized to decompose the feature matrix into a subspace, the manifold subspace. The manifold learning algorithm seeks to preserve the local relationships of the feature matrix, which is more meaningful for damage assessment. Finally, the Grassmann distance between manifold subspaces is defined as a damage index. The Grassmann distance, reflecting the manifold structure, is a suitable metric for measuring the distance between subspaces on the manifold. The defined damage index is applied to damage assessment of a rotor and a bearing, and the results validate its effectiveness for damage assessment of machinery components.
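The full manifold-learning pipeline is not reproduced here, but the Grassmann metric used as the damage index is easy to sketch via principal angles (the toy subspaces below are illustrative assumptions):

```python
import numpy as np
from scipy.linalg import subspace_angles

def grassmann_distance(A, B):
    """L2 norm of the principal angles between span(A) and span(B)."""
    return np.linalg.norm(subspace_angles(A, B))

# Toy feature subspaces in R^3: a healthy baseline, an equivalent basis,
# and a "damaged" subspace tilted out of the baseline plane.
baseline = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])    # span{e1, e2}
same_plane = np.array([[2.0, 1.0], [1.0, 1.0], [0.0, 0.0]])  # same span
tilted = np.array([[1.0, 0.0], [0.0, 0.0], [0.0, 1.0]])      # span{e1, e3}

d_same = grassmann_distance(baseline, same_plane)             # ~0: no drift
d_tilt = grassmann_distance(baseline, tilted)                 # pi/2: large drift
```

A larger distance from the healthy baseline subspace indicates that the feature subspace has drifted further, which is how such an index tracks damage severity.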
Balian-Low phenomenon for subspace Gabor frames
NASA Astrophysics Data System (ADS)
Gabardo, Jean-Pierre; Han, Deguang
2004-08-01
In this work, the Balian-Low theorem is extended to Gabor (also called Weyl-Heisenberg) frames for subspaces and, more particularly, its relationship with the unique Gabor dual property for subspace Gabor frames is pointed out. To achieve this goal, the subspace Gabor frames which have a unique Gabor dual of type I (resp. type II) are defined and characterized in terms of the Zak transform for the rational parameter case. This characterization is then used to prove the Balian-Low theorem for subspace Gabor frames. Along the same line, the same characterization is used to prove a duality theorem for the unique Gabor dual property which is an analogue of the Ron and Shen duality theorem.
Robust PCA With Partial Subspace Knowledge
NASA Astrophysics Data System (ADS)
Zhan, Jinchun; Vaswani, Namrata
2015-07-01
In recent work, robust Principal Components Analysis (PCA) has been posed as the problem of recovering a low-rank matrix $\mathbf{L}$ and a sparse matrix $\mathbf{S}$ from their sum, $\mathbf{M} := \mathbf{L} + \mathbf{S}$, and a provably exact convex optimization solution called PCP has been proposed. This work studies the following problem. Suppose that we have partial knowledge about the column space of the low-rank matrix $\mathbf{L}$. Can we use this information to improve the PCP solution, i.e., allow recovery under weaker assumptions? We propose here a simple but useful modification of the PCP idea, called modified-PCP, that allows us to use this knowledge. We derive its correctness result, which shows that, when the available subspace knowledge is accurate, modified-PCP indeed requires significantly weaker incoherence assumptions than PCP. Extensive simulations are also used to illustrate this. Comparisons with PCP and other existing work are shown for a stylized real application as well. Finally, we explain how this problem naturally occurs in many applications involving time series data, i.e., in what is called the online or recursive robust PCA problem. A corollary for this case is also given.
NASA Astrophysics Data System (ADS)
Hejranfar, Kazem; Kamali-Moghadam, Ramin
2012-06-01
Preconditioned characteristic boundary conditions (BCs) are implemented at artificial boundaries for the solution of the two- and three-dimensional preconditioned Euler equations at low Mach number flows. The preconditioned compatibility equations and the corresponding characteristic variables (or the Riemann invariants) based on the characteristic forms of preconditioned Euler equations are mathematically derived for three preconditioners proposed by Eriksson, Choi and Merkle, and Turkel. A cell-centered finite volume Roe's method is used for the discretization of the preconditioned system of equations on unstructured meshes. The accuracy and performance of the preconditioned characteristic BCs applied at artificial boundaries are evaluated in comparison with the non-preconditioned characteristic BCs and the simplified BCs in computing steady low Mach number flows. The two-dimensional flow over the NACA0012 airfoil and three-dimensional flow over the hemispherical headform are computed and the results are obtained for different conditions and compared with the available numerical and experimental data. The sensitivity of the solution to the size of computational domain and the variation of the angle of attack for each type of BCs is also examined. Indications are that the preconditioned characteristic BCs implemented in the preconditioned system of Euler equations greatly enhance the convergence rate of the solution of low Mach number flows compared to the other two types of BCs.
Users manual for KSP data-structure-neutral codes implementing Krylov space methods
Gropp, W.; Smith, B.
1994-08-01
The combination of a Krylov space method and a preconditioner is at the heart of most modern numerical codes for the iterative solution of linear systems. This document contains both a users manual and a description of the implementation for the Krylov space methods package KSP included as part of the Portable, Extensible Tools for Scientific computation package (PETSc). PETSc is a large suite of data-structure-neutral libraries for the solution of large-scale problems in scientific computation, in particular on massively parallel computers. The methods in KSP are conjugate gradient method, GMRES, BiCG-Stab, two versions of transpose-free QMR, and others. All of the methods are coded using a common, data-structure-neutral framework and are compatible with the sequential, parallel, and out-of-core solution of linear systems. The codes make no assumptions about the representation of the linear operator; implicitly defined operators (say, calculated using differencing) are fully supported. In addition, unlike all other iterative packages we are aware of, the vector operations are also data-structure neutral. Once certain vector primitives are provided, the same KSP software runs unchanged using any vector storage format. It is not restricted to a few common vector representations. The codes described are actual working codes that run on a large variety of machines including the IBM SP1, Intel DELTA, workstations, networks of workstations, the TMC CM-5, and the CRAY C90. New Krylov space methods may be easily added to the package and used immediately with any application code that has been written using KSP; no changes to the application code are needed.
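KSP's data-structure-neutral, matrix-free design has a close analogue in SciPy's `LinearOperator` interface, where the Krylov method sees only the action of the operator; a small sketch (SciPy, not PETSc, and the operator and sizes are illustrative):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

# Matrix-free solve: GMRES never sees a stored matrix, only the action of a
# 1-D finite-difference Laplacian with Dirichlet ends.
n = 50

def apply_A(v):
    w = 2.0 * v
    w[:-1] -= v[1:]
    w[1:] -= v[:-1]
    return w

A = LinearOperator((n, n), matvec=apply_A)
b = np.ones(n)
x, info = gmres(A, b, restart=50)  # info == 0 signals convergence
```

Any vector and operator representation works as long as the matvec is supplied, which is the same principle the KSP package carries further by making the vector operations themselves data-structure neutral.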
Solving Nonlinear Solid Mechanics Problems with the Jacobian-Free Newton Krylov Method
J. D. Hales; S. R. Novascone; R. L. Williamson; D. R. Gaston; M. R. Tonks
2012-06-01
The solution of the equations governing solid mechanics is often obtained via Newton's method. This approach can be problematic if the determination, storage, or solution cost associated with the Jacobian is high. These challenges are magnified for multiphysics applications with many coupled variables. Jacobian-free Newton-Krylov (JFNK) methods avoid many of the difficulties associated with the Jacobian by using a finite difference approximation. BISON is a parallel, object-oriented, nonlinear solid mechanics and multiphysics application that leverages JFNK methods. We overview JFNK, outline the capabilities of BISON, and demonstrate the effectiveness of JFNK for solid mechanics and solid mechanics coupled to other PDEs using a series of demonstration problems.
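The JFNK idea, Newton outer iterations with a Krylov inner solve whose Jacobian-vector products are approximated by finite differences, is available directly in SciPy; the toy 1-D nonlinear reaction-diffusion residual below merely stands in for a solid mechanics problem (illustrative, not BISON code):

```python
import numpy as np
from scipy.optimize import newton_krylov

# Discretized residual of u'' = exp(u) on (0,1) with u(0) = u(1) = 0.
n = 50
h = 1.0 / (n + 1)

def residual(u):
    r = np.empty_like(u)
    r[0] = (u[1] - 2 * u[0]) / h**2 - np.exp(u[0])
    r[-1] = (u[-2] - 2 * u[-1]) / h**2 - np.exp(u[-1])
    r[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / h**2 - np.exp(u[1:-1])
    return r

# newton_krylov forms Jacobian-vector products by finite differencing the
# residual -- exactly the matrix-free approximation behind JFNK.
u = newton_krylov(residual, np.zeros(n), f_tol=1e-10)
```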
Krylov vector methods for model reduction and control of flexible structures
NASA Technical Reports Server (NTRS)
Su, Tzu-Jeng; Craig, Roy R., Jr.
1992-01-01
Krylov vectors and the concept of parameter matching are combined here to develop model-reduction algorithms for structural dynamics systems. The method is derived for a structural dynamics system described by a second-order matrix differential equation. The reduced models are shown to have a promising application in the control of flexible structures. It can eliminate control and observation spillovers while requiring only the dynamic spillover terms to be considered. A model-order reduction example and a flexible structure control example are provided to show the efficacy of the method.
Suppression of spectral anomalies in SSFP-NMR signal by the Krylov Basis Diagonalization Method
NASA Astrophysics Data System (ADS)
Moraes, Tiago Bueno; Santos, Poliana Macedo; Magon, Claudio Jose; Colnago, Luiz Alberto
2014-06-01
The Krylov Basis Diagonalization Method (KBDM) is a numerical procedure used to fit time-domain signals as a sum of exponentially damped sinusoids. In this work, KBDM is used as an alternative spectral analysis tool, complementary to the Fourier transform. We report results obtained from 13C Nuclear Magnetic Resonance (NMR) by Steady State Free Precession (SSFP) measurements in brucine, C23H26N2O4. The results lead to the conclusion that KBDM can be successfully applied, mainly because it is not influenced by the truncation or phase anomalies observed in the Fourier transform spectra.
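The underlying signal model, a sum of exponentially damped sinusoids c_n = sum_k a_k z_k^n, can be illustrated for a single noiseless component, whose pole z follows from the one-step ratio of consecutive samples; this Prony-style sketch only shows the model KBDM fits, not the Krylov-basis diagonalization itself, and all parameter values are made up:

```python
import numpy as np

# Synthetic single-component FID: c_n = z^n with pole
# z = exp((2*pi*i*f - 1/T2) * dt).
dt = 1e-3
f_true, t2_true = 50.0, 0.04
n = np.arange(256)
c = np.exp((2j * np.pi * f_true - 1.0 / t2_true) * dt * n)

# Least-squares estimate of the one-step pole z (exact for noiseless data),
# then read frequency and damping time off its phase and magnitude.
z = np.vdot(c[:-1], c[1:]) / np.vdot(c[:-1], c[:-1])
f_est = np.angle(z) / (2 * np.pi * dt)
t2_est = -dt / np.log(np.abs(z))
```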
Globalized Newton-Krylov-Schwarz Algorithms and Software for Parallel Implicit CFD
NASA Technical Reports Server (NTRS)
Gropp, W. D.; Keyes, D. E.; McInnes, L. C.; Tidriri, M. D.
1998-01-01
Implicit solution methods are important in applications modeled by PDEs with disparate temporal and spatial scales. Because such applications require high resolution with reasonable turnaround, "routine" parallelization is essential. The pseudo-transient matrix-free Newton-Krylov-Schwarz (Psi-NKS) algorithmic framework is presented as an answer. We show that, for the classical problem of three-dimensional transonic Euler flow about an M6 wing, Psi-NKS can simultaneously deliver: globalized, asymptotically rapid convergence through adaptive pseudo-transient continuation and Newton's method; reasonable parallelizability for an implicit method through deferred synchronization and favorable communication-to-computation scaling in the Krylov linear solver; and high per-processor performance through attention to distributed memory and cache locality, especially through the Schwarz preconditioner. Two discouraging features of Psi-NKS methods are their sensitivity to the coding of the underlying PDE discretization and the large number of parameters that must be selected to govern convergence. We therefore distill several recommendations from our experience and from our reading of the literature on various algorithmic components of Psi-NKS, and we describe a freely available, MPI-based portable parallel software implementation of the solver employed here.
Globalized Newton-Krylov-Schwarz algorithms and software for parallel implicit CFD.
Gropp, W. D.; Keyes, D. E.; McInnes, L. C.; Tidriri, M. D.; Mathematics and Computer Science; Old Dominion Univ.; Iowa State Univ.
2000-01-01
Implicit solution methods are important in applications modeled by PDEs with disparate temporal and spatial scales. Because such applications require high resolution with reasonable turnaround, parallelization is essential. The pseudo-transient matrix-free Newton-Krylov-Schwarz (Psi-NKS) algorithmic framework is presented as a widely applicable answer. This article shows that for the classical problem of three-dimensional transonic Euler flow about an M6 wing, Psi-NKS can simultaneously deliver globalized, asymptotically rapid convergence through adaptive pseudo-transient continuation and Newton's method; reasonable parallelizability for an implicit method through deferred synchronization and favorable communication-to-computation scaling in the Krylov linear solver; and high per-processor performance through attention to distributed memory and cache locality, especially through the Schwarz preconditioner. Two discouraging features of Psi-NKS methods are their sensitivity to the coding of the underlying PDE discretization and the large number of parameters that must be selected to govern convergence. The authors therefore distill several recommendations from their experience and reading of the literature on various algorithmic components of Psi-NKS, and they describe a freely available MPI-based portable parallel software implementation of the solver employed here.
Mitochondrial preconditioning: a potential neuroprotective strategy.
Correia, Sónia C; Carvalho, Cristina; Cardoso, Susana; Santos, Renato X; Santos, Maria S; Oliveira, Catarina R; Perry, George; Zhu, Xiongwei; Smith, Mark A; Moreira, Paula I
2010-01-01
Mitochondria have long been known as the powerhouse of the cell. However, these organelles are also pivotal players in neuronal cell death. Mitochondrial dysfunction is a prominent feature of chronic brain disorders, including Alzheimer's disease (AD), Parkinson's disease (PD), and cerebral ischemic stroke. Data derived from morphologic, biochemical, and molecular genetic studies indicate that mitochondria constitute a convergence point for neurodegeneration. Conversely, mitochondria have also been implicated in the neuroprotective signaling processes of preconditioning. Although the precise molecular mechanisms underlying preconditioning-induced brain tolerance are still unclear, mitochondrial reactive oxygen species generation and activation of mitochondrial ATP-sensitive potassium channels have been shown to be involved in the preconditioning phenomenon. This review discusses how mitochondrial malfunction contributes to the onset and progression of cerebral ischemic stroke and of AD and PD, two major neurodegenerative disorders. The role of mitochondrial mechanisms in preconditioning-mediated neuroprotective events will also be discussed. Mitochondrially targeted preconditioning may represent a promising therapeutic weapon against neurodegeneration.
Selective control of the symmetric Dicke subspace in trapped ions
Lopez, C. E.; Retamal, J. C.; Solano, E.
2007-09-15
We propose a method for selectively manipulating the symmetric Dicke subspace in the internal degrees of freedom of N trapped ions. We show that direct access to ionic-motional subspaces, based on a suitable tuning of motion-dependent ac Stark shifts, induces a two-level dynamics involving previously selected ionic Dicke states. In this manner, it is possible to produce, sequentially and unitarily, ionic Dicke states of increasing excitation number. Moreover, we propose a probabilistic technique for directly producing any ionic Dicke state, assuming suitable initial conditions.
Minimal residual method stronger than polynomial preconditioning
Faber, V.; Joubert, W.; Knill, E.
1994-12-31
Two popular methods for solving symmetric and nonsymmetric systems of equations are the minimal residual method, implemented by algorithms such as GMRES, and polynomial preconditioning methods. In this study results are given on the convergence rates of these methods for various classes of matrices. It is shown that for some matrices, such as normal matrices, the convergence rates for GMRES and for the optimal polynomial preconditioning are the same, and for other matrices such as the upper triangular Toeplitz matrices, it is at least assured that if one method converges then the other must converge. On the other hand, it is shown that matrices exist for which restarted GMRES always converges but any polynomial preconditioning of corresponding degree makes no progress toward the solution for some initial error. The implications of these results for these and other iterative methods are discussed.
Asgharzadeh, Hafez; Borazjani, Iman
2017-02-15
The explicit and semi-implicit schemes in flow simulations involving complex geometries and moving boundaries suffer from time-step size restriction and low convergence rates. Implicit schemes can be used to overcome these restrictions, but implementing them to solve the Navier-Stokes equations is not straightforward due to their non-linearity. Among the implicit schemes for non-linear equations, Newton-based techniques are preferred over fixed-point techniques because of their high convergence rate, but each Newton iteration is more expensive than a fixed-point iteration. Krylov subspace methods are among the most advanced iterative methods that can be combined with Newton methods, i.e., Newton-Krylov Methods (NKMs), to solve non-linear systems of equations. The success of NKMs vastly depends on the scheme for forming the Jacobian, e.g., automatic differentiation is very expensive, and matrix-free methods without a preconditioner slow down as the mesh is refined. A novel, computationally inexpensive analytical Jacobian for the NKM is developed to solve the unsteady incompressible Navier-Stokes momentum equations on staggered overset-curvilinear grids with immersed boundaries. Moreover, the analytical Jacobian is used to form a preconditioner for the matrix-free method in order to improve its performance. The NKM with the analytical Jacobian was validated and verified against the Taylor-Green vortex, inline oscillations of a cylinder in a fluid initially at rest, and pulsatile flow in a 90-degree bend. The capability of the method in handling complex geometries with multiple overset grids and immersed boundaries is shown by simulating an intracranial aneurysm. It was shown that the NKM with an analytical Jacobian is 1.17 to 14.77 times faster than the fixed-point Runge-Kutta method, and 1.74 to 152.3 times (excluding an intensively stretched grid) faster than automatic differentiation, depending on the grid (size) and the flow problem. In addition, it was shown that using only the
NASA Astrophysics Data System (ADS)
Asgharzadeh, Hafez; Borazjani, Iman
2017-02-01
The explicit and semi-implicit schemes in flow simulations involving complex geometries and moving boundaries suffer from time-step size restriction and low convergence rates. Implicit schemes can be used to overcome these restrictions, but implementing them to solve the Navier-Stokes equations is not straightforward due to their non-linearity. Among the implicit schemes for non-linear equations, Newton-based techniques are preferred over fixed-point techniques because of their high convergence rate, but each Newton iteration is more expensive than a fixed-point iteration. Krylov subspace methods are among the most advanced iterative methods that can be combined with Newton methods, i.e., Newton-Krylov Methods (NKMs), to solve non-linear systems of equations. The success of NKMs vastly depends on the scheme for forming the Jacobian, e.g., automatic differentiation is very expensive, and matrix-free methods without a preconditioner slow down as the mesh is refined. A novel, computationally inexpensive analytical Jacobian for the NKM is developed to solve the unsteady incompressible Navier-Stokes momentum equations on staggered overset-curvilinear grids with immersed boundaries. Moreover, the analytical Jacobian is used to form a preconditioner for the matrix-free method in order to improve its performance. The NKM with the analytical Jacobian was validated and verified against the Taylor-Green vortex, inline oscillations of a cylinder in a fluid initially at rest, and pulsatile flow in a 90-degree bend. The capability of the method in handling complex geometries with multiple overset grids and immersed boundaries is shown by simulating an intracranial aneurysm. It was shown that the NKM with an analytical Jacobian is 1.17 to 14.77 times faster than the fixed-point Runge-Kutta method, and 1.74 to 152.3 times (excluding an intensively stretched grid) faster than automatic differentiation, depending on the grid (size) and the flow problem. In addition, it was shown that using only the
Preconditioning the Helmholtz Equation for Rigid Ducts
NASA Technical Reports Server (NTRS)
Baumeister, Kenneth J.; Kreider, Kevin L.
1998-01-01
An innovative hyperbolic preconditioning technique is developed for the numerical solution of the Helmholtz equation, which governs acoustic propagation in ducts. Two pseudo-time parameters are used to produce an explicit iterative finite difference scheme. This scheme eliminates the large matrix storage requirements normally associated with numerical solutions of the Helmholtz equation. The solution procedure is very fast when compared to other transient and steady methods. An optimization and error analysis of the preconditioning factors is presented. For validation, the method is applied to sound propagation in a 2D semi-infinite hard-wall duct.
Computational Complexity of Subspace Detectors and Matched Field Processing
Harris, D B
2010-12-01
Subspace detectors implement a correlation type calculation on a continuous (network or array) data stream [Harris, 2006]. The difference between subspace detectors and correlators is that the former projects the data in a sliding observation window onto a basis of template waveforms that may have a dimension (d) greater than one, and the latter projects the data onto a single waveform template. A standard correlation detector can be considered to be a degenerate (d=1) form of a subspace detector. Figure 1 below shows a block diagram for the standard formulation of a subspace detector. The detector consists of multiple multichannel correlators operating on a continuous data stream. The correlation operations are performed with FFTs in an overlap-add approach that allows the stream to be processed in uniform, consecutive, contiguous blocks. Figure 1 is slightly misleading for a calculation of computational complexity, as it is possible, when treating all channels with the same weighting (as shown in the figure), to perform the indicated summations in the multichannel correlators before the inverse FFTs and to get by with a single inverse FFT and overlap add calculation per multichannel correlator. In what follows, we make this simplification.
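The projection described above, a sliding window correlated against a d-dimensional basis of template waveforms rather than a single template, can be sketched directly in the time domain (the FFT overlap-add machinery of the block diagram is omitted, and all data here are synthetic):

```python
import numpy as np

def subspace_detector(stream, templates):
    """Sliding-window subspace detection statistic.

    templates: (L, d) matrix of d template waveforms of length L. The
    statistic in each window is the fraction of window energy captured by
    the template subspace (near 1 for a match, near d/L for noise).
    """
    U, _ = np.linalg.qr(templates)          # orthonormal basis of the subspace
    L = U.shape[0]
    stats = np.zeros(len(stream) - L + 1)
    for i in range(len(stats)):
        w = stream[i:i + L]
        e = np.dot(w, w)
        if e > 0:
            p = U.T @ w
            stats[i] = np.dot(p, p) / e     # projected energy / total energy
    return stats

rng = np.random.default_rng(1)
t1, t2 = rng.normal(size=100), rng.normal(size=100)
stream = 0.01 * rng.normal(size=1000)
stream[400:500] += 3.0 * t1 - 2.0 * t2      # event lying in the 2-D subspace
stats = subspace_detector(stream, np.column_stack([t1, t2]))
```

Setting d = 1 reduces this to a standard correlation detector, the degenerate case noted in the abstract.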
Ordered Subspace Clustering With Block-Diagonal Priors.
Wu, Fei; Hu, Yongli; Gao, Junbin; Sun, Yanfeng; Yin, Baocai
2016-12-01
Many application scenarios involve sequential data, but most existing clustering methods do not well utilize the order information embedded in sequential data. In this paper, we study the subspace clustering problem for sequential data and propose a new clustering method, namely ordered sparse clustering with block-diagonal prior (BD-OSC). Instead of using the sparse normalizer in existing sparse subspace clustering methods, a quadratic normalizer for the data sparse representation is adopted to model the correlation among the data sparse coefficients. Additionally, a block-diagonal prior for the spectral clustering affinity matrix is integrated with the model to improve clustering accuracy. To solve the proposed BD-OSC model, which is a complex optimization problem with quadratic normalizer and block-diagonal prior constraint, an efficient algorithm is proposed. We test the proposed clustering method on several types of databases, such as synthetic subspace data set, human face database, video scene clips, motion tracks, and dynamic 3-D face expression sequences. The experiments show that the proposed method outperforms state-of-the-art subspace clustering methods.
40 CFR 1066.816 - Vehicle preconditioning for FTP testing.
Code of Federal Regulations, 2014 CFR
2014-07-01
Title 40, Protection of Environment; ENVIRONMENTAL PROTECTION AGENCY (CONTINUED); § 1066.816 Vehicle preconditioning for FTP testing. Precondition the test vehicle before the FTP...
Linear unmixing using endmember subspaces and physics based modeling
NASA Astrophysics Data System (ADS)
Gillis, David; Bowles, Jeffrey; Ientilucci, Emmett J.; Messinger, David W.
2007-09-01
One of the biggest issues with the Linear Mixing Model (LMM) is that it is implicitly assumed that each of the individual material components throughout the scene may be described using a single dimension (e.g. an endmember vector). In reality, individual pixels corresponding to the same general material class can exhibit a large degree of variation within a given scene. This is especially true in broad background classes such as forests, where the single dimension assumption clearly fails. In practice, the only way to account for the multidimensionality of the class is to choose multiple (very similar) endmembers, each of which represents some part of the class. To address these issues, we introduce the endmember subgroup model, which generalizes the notion of an 'endmember vector' to an 'endmember subspace'. In this model, spectra in a given hyperspectral scene are decomposed as a sum of constituent materials; however, each material is represented by some multidimensional subspace (instead of a single vector). The dimensionality of the subspace will depend on the within-class variation seen in the image. The endmember subgroups can be determined automatically from the data, or can use physics-based modeling techniques to include 'signature subspaces', which are included in the endmember subgroups. In this paper, we give an overview of the subgroup model; discuss methods for determining the endmember subgroups for a given image, and present results showing how the subgroup model improves upon traditional single endmember linear mixing. We also include results that use the 'signature subspace' approach to identifying mixed-pixel targets in HYDICE imagery.
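The generalization from endmember vectors to endmember subspaces can be sketched as a least-squares decomposition over concatenated per-class bases; the bases, dimensions, and class names below are illustrative inventions, not the paper's HYDICE data or its subgroup-selection procedure:

```python
import numpy as np

def subspace_unmix(pixel, subspaces):
    """Decompose a pixel over a list of endmember subspaces (each an L x d_k
    basis matrix) by least squares on the concatenated bases; the abundance
    proxy for class k is the norm of that class's reconstructed component."""
    B = np.hstack(subspaces)
    coef, *_ = np.linalg.lstsq(B, pixel, rcond=None)
    out, j = [], 0
    for S in subspaces:
        d = S.shape[1]
        out.append(np.linalg.norm(S @ coef[j:j + d]))  # class-k contribution
        j += d
    return np.array(out)

rng = np.random.default_rng(2)
forest = rng.normal(size=(30, 3))   # 3-D "forest" subspace (illustrative)
road = rng.normal(size=(30, 1))     # 1-D classical endmember
pixel = forest @ np.array([1.0, -0.5, 0.2])  # pure forest pixel
ab = subspace_unmix(pixel, [forest, road])
```

Because the pixel lies exactly in the forest subspace and the concatenated basis has full column rank, the unique least-squares solution assigns essentially zero weight to the road endmember.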
Preconditioning matrices for Chebyshev derivative operators
NASA Technical Reports Server (NTRS)
Rothman, Ernest E.
1986-01-01
The problem of preconditioning the matrices arising from pseudo-spectral Chebyshev approximations of first-order operators is considered in both one and two dimensions. In one dimension, a preconditioner represented by a full matrix, which leads to preconditioned eigenvalues that are real, positive, and lie between 1 and pi/2, is already available. Since there are cases in which it is not computationally convenient to work with such a preconditioner, a large number of sparser preconditioners (in particular, three- and four-diagonal matrices) were studied, and the eigenvalues of the resulting preconditioned matrices are compared. The results were applied to the problem of finding the steady-state solution of an equation of the type u_t = u_x + f, where Chebyshev collocation is used for the spatial variable and time discretization is performed by the Richardson method. In two dimensions, different preconditioners are proposed for the matrix which arises from the pseudo-spectral discretization of the steady-state problem. Results are given for the CPU time and the number of iterations of a Richardson iteration method in the unpreconditioned and preconditioned cases.
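The operator being preconditioned is the Chebyshev collocation differentiation matrix; its standard Gauss-Lobatto construction (the well-known recipe from Trefethen's Spectral Methods in MATLAB, not one of the report's preconditioners) can be sketched as:

```python
import numpy as np

def cheb(N):
    """Chebyshev collocation differentiation matrix on the Gauss-Lobatto
    points x_j = cos(pi*j/N), following the standard construction."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.ones(N + 1)
    c[0] = c[-1] = 2.0
    c *= (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))      # "negative sum trick" for the diagonal
    return D, x

D, x = cheb(8)
# Collocation differentiation is exact on polynomials of degree <= N,
# so differentiating x**2 must return 2x up to roundoff.
err = np.max(np.abs(D @ x**2 - 2 * x))
```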
Health and Nutrition: Preconditions for Educational Achievement.
ERIC Educational Resources Information Center
Negussie, Birgit
This paper discusses the importance of maternal and infant health for children's educational achievement. Education, health, and nutrition are so closely related that changes in one cause changes in the others. Improvement of maternal and preschooler health and nutrition is a precondition for improved educational achievement. Although parental…
Preconditioning and tolerance against cerebral ischaemia
Dirnagl, Ulrich; Becker, Kyra; Meisel, Andreas
2009-01-01
Neuroprotection and brain repair in patients after acute brain damage are still major unfulfilled medical needs. Pharmacological treatments are either ineffective or confounded by adverse effects. Consequently, endogenous mechanisms by which the brain protects itself against noxious stimuli and recovers from damage are being studied. Research on preconditioning, also known as induced tolerance, over the past decade has resulted in various promising strategies for the treatment of patients with acute brain injury. Several of these strategies are being tested in randomised clinical trials. Additionally, research into preconditioning has led to the idea of prophylactically inducing protection in patients such as those undergoing brain surgery and those with transient ischaemic attack or subarachnoid haemorrhage who are at high risk of brain injury in the near future. In this Review, we focus on the clinical issues relating to preconditioning and tolerance in the brain; specifically, we discuss the clinical situations that might benefit from such procedures. We also discuss whether preconditioning and tolerance occur naturally in the brain and assess the most promising candidate strategies that are being investigated. PMID:19296922
SKRYN: A fast semismooth-Krylov-Newton method for controlling Ising spin systems
NASA Astrophysics Data System (ADS)
Ciaramella, G.; Borzì, A.
2015-05-01
The modeling and control of Ising spin systems is of fundamental importance in NMR spectroscopy applications. In this paper, two computer packages, ReHaG and SKRYN, are presented. Their purpose is to set up and solve quantum optimal control problems governed by the Liouville master equation modeling Ising spin-1/2 systems with pointwise control constraints. In particular, the MATLAB package ReHaG computes a real matrix representation of the master equation. The MATLAB package SKRYN implements a new strategy resulting in a globalized semismooth matrix-free Krylov-Newton scheme. To discretize the real representation of the Liouville master equation, a norm-preserving modified Crank-Nicolson scheme is used. Results of numerical experiments demonstrate that the SKRYN code provides fast and accurate solutions to the Ising spin quantum optimization problem.
The Krylov accelerated SIMPLE(R) method for flow problems in industrial furnaces
NASA Astrophysics Data System (ADS)
Vuik, C.; Saghir, A.; Boerstoel, G. P.
2000-08-01
Numerical modeling of the melting and combustion process is an important tool for understanding the physical and chemical phenomena that occur in a gas- or oil-fired glass-melting furnace. The incompressible Navier-Stokes equations are used to model the gas flow in the furnace, and the discrete Navier-Stokes equations are solved by the SIMPLE(R) pressure-correction method. In these applications, many SIMPLE(R) iterations are necessary to obtain an accurate solution. In this paper, Krylov accelerated versions are proposed: GCR-SIMPLE(R). The properties of these methods are first investigated for a simple two-dimensional flow. Thereafter, the efficiencies of the methods are compared for three-dimensional flows in industrial glass-melting furnaces.
A Newton-Krylov solver for fast spin-up of online ocean tracers
NASA Astrophysics Data System (ADS)
Lindsay, Keith
2017-01-01
We present a Newton-Krylov based solver to efficiently spin up tracers in an online ocean model. We demonstrate that the solver converges, that tracer simulations initialized with the solution from the solver have small drift, and that the solver takes orders of magnitude less computational time than the brute force spin-up approach. To demonstrate the application of the solver, we use it to efficiently spin up the tracer ideal age with respect to the circulation from different time intervals in a long physics run. We then evaluate how the spun-up ideal age tracer depends on the duration of the physics run, i.e., on how equilibrated the circulation is.
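Spin-up can be cast as root finding: if M(x) is the model's year-to-year tracer map, the equilibrated state solves f(x) = M(x) - x = 0, which a Newton-Krylov solver reaches far faster than stepping M thousands of times. The linear toy map below merely stands in for the ocean model (all names and sizes are illustrative):

```python
import numpy as np
from scipy.optimize import newton_krylov

# Toy "one model year" map x -> A x + b with a slowly contracting A, so
# brute-force iteration to equilibrium would need many applications of M.
n = 40
rng = np.random.default_rng(3)
A = 0.95 * np.eye(n) + 0.001 * rng.normal(size=(n, n))
b = rng.normal(size=n)
M = lambda x: A @ x + b

# Newton-Krylov drives the spin-up residual M(x) - x to zero directly.
x_eq = newton_krylov(lambda x: M(x) - x, np.zeros(n), f_tol=1e-9)
```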
Condition number estimation of preconditioned matrices.
Kushida, Noriyuki
2015-01-01
The present paper introduces a condition number estimation method for preconditioned matrices. The newly developed method provides reasonable results, whereas the conventional method, which is based on the Lanczos connection, gives meaningless results. The Lanczos connection based method provides the condition numbers of coefficient matrices of systems of linear equations using information obtained through the preconditioned conjugate gradient method. Estimating the condition number of preconditioned matrices is sometimes important when describing the effectiveness of new preconditioners or selecting adequate preconditioners. Operating a preconditioner on a coefficient matrix is the simplest method of estimation. However, this is not possible for large-scale computing, especially if computation is performed on distributed-memory parallel computers, because the preconditioned matrices become dense even if the original matrices are sparse. Although the Lanczos connection method can be used to calculate the condition number of preconditioned matrices, it is not considered applicable to large-scale problems because of its weakness with respect to numerical errors. Therefore, we have developed a robust and parallelizable method based on Hager's method. Feasibility studies are carried out for the diagonal scaling preconditioner and the SSOR preconditioner with a diagonal matrix, a tridiagonal matrix, and Pei's matrix. As a result, the Lanczos connection method contains around 10% error in its results even for a simple problem, whereas the new method contains negligible errors. In addition, the newly developed method returns reasonable solutions when the Lanczos connection method fails, both for Pei's matrix and for matrices generated with the finite element method.
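Hager-style norm estimation is available in SciPy as `onenormest`; combined with a sparse factorization it estimates the 1-norm condition number from matrix-vector products and solves alone, never forming an explicit inverse, which is the property that makes this style of estimation feasible at scale. This sketch estimates the condition number of a plain tridiagonal matrix, not the paper's preconditioned systems:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, onenormest, splu

# 1-D Laplacian: ||A||_1 = 4 and cond_1(A) grows like O(n^2).
n = 200
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csc')

# Represent A^{-1} only through solves with the LU factors; onenormest
# (Hager/Higham estimator) needs just products with A^{-1} and its transpose.
lu = splu(A)
Ainv = LinearOperator((n, n), matvec=lu.solve,
                      rmatvec=lambda v: lu.solve(v, trans='T'))
cond1_est = onenormest(A) * onenormest(Ainv)
```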
NASA Astrophysics Data System (ADS)
Hayes, Charles E.; McClellan, James H.; Scott, Waymond R.; Kerr, Andrew J.
2016-05-01
This work introduces two advances in wide-band electromagnetic induction (EMI) processing: a novel adaptive matched filter (AMF) and matched subspace detection methods. Both advances make use of recent work with a subspace SVD approach to separating the signal, soil, and noise subspaces of the frequency measurements. The proposed AMF provides a direct approach to removing the EMI self-response while improving the signal-to-noise ratio of the data. Unlike previous EMI adaptive downtrack filters, this new filter will not erroneously optimize the EMI soil response instead of the EMI target response, because these two responses are projected into separate frequency subspaces. The EMI detection methods in this work elaborate on how the signal and noise subspaces in the frequency measurements are ideal for creating the matched subspace detection (MSD) and constant false alarm rate matched subspace detection (CFAR) metrics developed by Scharf. The CFAR detection metric has been shown to be the uniformly most powerful invariant detector.
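The matched subspace detection statistic referred to above can be sketched as an energy ratio: project a measurement onto an orthonormal basis of the signal subspace and compare the captured energy with the total. This toy example (dimensions, templates, and noise levels are invented for illustration) is not the paper's EMI processing chain:

```python
import numpy as np

rng = np.random.default_rng(7)
n, d = 64, 3                             # measurement length, subspace dimension
templates = rng.standard_normal((n, d))  # stand-in "signal" waveforms
U, _ = np.linalg.qr(templates)           # orthonormal signal-subspace basis

def msd_stat(x, U):
    """Fraction of the energy of x that lies in the subspace spanned by U."""
    return np.linalg.norm(U.T @ x) ** 2 / np.linalg.norm(x) ** 2

signal = U @ np.array([3.0, 2.0, 1.0]) + 0.1 * rng.standard_normal(n)
noise = rng.standard_normal(n)
# in-subspace data scores near 1; white noise scores near d / n
```

A CFAR variant of this statistic normalizes by the energy in the noise subspace instead of the total, removing the dependence on the unknown noise level.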
Subspace differential coexpression analysis: problem definition and a general approach.
Fang, Gang; Kuang, Rui; Pandey, Gaurav; Steinbach, Michael; Myers, Chad L; Kumar, Vipin
2010-01-01
In this paper, we study methods to identify differential coexpression patterns in case-control gene expression data. A differential coexpression pattern consists of a set of genes that have substantially different levels of coherence of their expression profiles across the two sample-classes, i.e., highly coherent in one class, but not in the other. Biologically, a differential coexpression pattern may indicate the disruption of a regulatory mechanism, possibly caused by dysregulation of pathways or mutations of transcription factors. A common feature of all the existing approaches for differential coexpression analysis is that the coexpression of a set of genes is measured on all the samples in each of the two classes, i.e., over the full space of samples. Hence, these approaches may miss patterns that only cover a subset of samples in each class, i.e., subspace patterns, due to the heterogeneity of the subject population and disease causes. In this paper, we extend differential coexpression analysis by defining a subspace differential coexpression pattern, i.e., a set of genes that are coexpressed in a relatively large percentage of samples in one class, but in a much smaller percentage of samples in the other class. We propose a general approach based upon an association analysis framework that allows exhaustive yet efficient discovery of subspace differential coexpression patterns. This approach can be used to adapt a family of biclustering algorithms to obtain their corresponding differential versions that can directly discover differential coexpression patterns. Using a recently developed biclustering algorithm as illustration, we perform experiments on cancer datasets which demonstrate the existence of subspace differential coexpression patterns. Permutation tests demonstrate the statistical significance of a large number of discovered subspace patterns, many of which cannot be discovered if they are measured over all the samples in each of the classes.
NASA Astrophysics Data System (ADS)
Weston, Brian; Nourgaliev, Robert; Delplanque, Jean-Pierre; Anderson, Andy
2016-11-01
The numerical simulation of flows associated with metal additive manufacturing processes such as selective laser melting and other laser-induced phase change applications present new challenges. Specifically, these flows require a fully compressible formulation since rapid density variations occur due to laser-induced melting and solidification of metal powder. We investigate the preconditioning for a recently developed all-speed compressible Navier-Stokes solver that addresses such challenges. The equations are discretized with a reconstructed Discontinuous Galerkin method and integrated in time with fully implicit discretization schemes. The resulting set of non-linear and linear equations are solved with a robust Newton-Krylov (NK) framework. To enable convergence of the highly ill-conditioned linearized systems, we employ a physics-based operator split preconditioner (PBP), utilizing a robust Schur complement technique. We investigate different options of splitting the physics (field) blocks as well as different block solvers on the reduced preconditioning matrix. We demonstrate that our NK-PBP framework is scalable and converges for high CFL/Fourier numbers on classic problems in fluid dynamics as well as for laser-induced phase change problems.
Smooth local subspace projection for nonlinear noise reduction
Chelidze, David
2014-03-15
Many nonlinear or chaotic time series exhibit an innate broad spectrum, which makes noise reduction difficult. Local projective noise reduction is one of the most effective tools. It is based on proper orthogonal decomposition (POD) and works for both map-like and continuously sampled time series. However, POD only looks at geometrical or topological properties of data and does not take into account the temporal characteristics of time series. Here, we present a new smooth projective noise reduction method. It uses smooth orthogonal decomposition (SOD) of bundles of reconstructed short-time trajectory strands to identify smooth local subspaces. Restricting trajectories to these subspaces imposes temporal smoothness on the filtered time series. It is shown that SOD-based noise reduction significantly outperforms the POD-based method for continuously sampled noisy time series.
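For intuition about projective noise reduction, the sketch below applies a plain global POD (SVD) projection to a delay-embedded noisy sinusoid. It is a simplified stand-in, not the paper's local SOD procedure, and the embedding dimension and subspace rank are assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 20 * np.pi, 4000)
clean = np.sin(t)
noisy = clean + 0.3 * rng.standard_normal(t.size)

m = 20                                                   # embedding dimension
X = np.lib.stride_tricks.sliding_window_view(noisy, m)   # trajectory matrix
mu = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)    # POD of the strands
k = 2                                                    # a sinusoid spans 2 dims
Xf = mu + (U[:, :k] * s[:k]) @ Vt[:k]                    # rank-k projection

# Un-embed by averaging: each sample appears in up to m overlapping windows.
filtered = np.zeros_like(noisy)
counts = np.zeros_like(noisy)
for j in range(m):
    filtered[j:j + Xf.shape[0]] += Xf[:, j]
    counts[j:j + Xf.shape[0]] += 1
filtered /= counts
```

Restricting the trajectory matrix to the leading subspace removes most of the noise energy; the local/smooth variants in the paper do this strand by strand rather than globally.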
Mining visual collocation patterns via self-supervised subspace learning.
Yuan, Junsong; Wu, Ying
2012-04-01
Traditional text data mining techniques are not directly applicable to image data which contain spatial information and are characterized by high-dimensional visual features. It is not a trivial task to discover meaningful visual patterns from images because the content variations and spatial dependence in visual data greatly challenge most existing data mining methods. This paper presents a novel approach to coping with these difficulties for mining visual collocation patterns. Specifically, the novelty of this work lies in the following new contributions: 1) a principled solution to the discovery of visual collocation patterns based on frequent itemset mining and 2) a self-supervised subspace learning method to refine the visual codebook by feeding back discovered patterns via subspace learning. The experimental results show that our method can discover semantically meaningful patterns efficiently and effectively.
LogDet Rank Minimization with Application to Subspace Clustering.
Kang, Zhao; Peng, Chong; Cheng, Jie; Cheng, Qiang
2015-01-01
A low-rank matrix is desired in many machine learning and computer vision problems. Most of the recent studies use the nuclear norm as a convex surrogate of the rank operator. However, all singular values are simply added together by the nuclear norm, and thus the rank may not be well approximated in practical problems. In this paper, we propose using a log-determinant (LogDet) function as a smooth and closer, though nonconvex, approximation to rank for obtaining a low-rank representation in subspace clustering. An augmented Lagrange multiplier strategy is applied to iteratively optimize the LogDet-based nonconvex objective function on potentially large-scale data. By making use of the angular information of principal directions of the resultant low-rank representation, an affinity graph matrix is constructed for spectral clustering. Experimental results on motion segmentation and face clustering data demonstrate that the proposed method often outperforms state-of-the-art subspace clustering algorithms.
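The gap between the two surrogates can be seen numerically. In the sketch below (the particular LogDet form log(1 + sigma^2/delta) and the value of delta are assumptions for illustration), the nuclear norm weights singular values by magnitude, so one large value dominates, whereas the LogDet contributions of a large and a small nonzero singular value are of the same order, behaving more like a count of nonzeros, i.e., the rank:

```python
import numpy as np

def surrogates(sigma, delta=1e-3):
    """Nuclear norm and a LogDet-style rank surrogate of a singular value vector."""
    nuclear = sigma.sum()
    logdet = np.sum(np.log(1.0 + sigma ** 2 / delta))
    return nuclear, logdet

# A rank-2 matrix with one large and one small nonzero singular value.
rng = np.random.default_rng(2)
U = np.linalg.qr(rng.standard_normal((8, 8)))[0]
V = np.linalg.qr(rng.standard_normal((8, 8)))[0]
s = np.array([10.0, 0.5, 0, 0, 0, 0, 0, 0])
A = (U * s) @ V.T

sigma = np.linalg.svd(A, compute_uv=False)
nuc, ld = surrogates(sigma)
# nuclear-norm contributions of the two nonzero values differ by 20x
# (10 vs 0.5); the LogDet contributions differ only by about 2x, so the
# LogDet sum tracks the rank far more closely.
```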
Characterizing Earthquake Clusters in Oklahoma Using Subspace Detectors
NASA Astrophysics Data System (ADS)
McMahon, N. D.; Benz, H.; Aster, R. C.; McNamara, D. E.; Myers, E. K.
2014-12-01
Subspace detection is a powerful and adaptive tool for continuously detecting low signal-to-noise seismic signals. Subspace detectors improve upon simple cross-correlation/matched filtering techniques by moving beyond the use of a single waveform template to the use of multiple orthogonal waveform templates that effectively span the signals from all previously identified events within a data set. Subspace detectors are particularly useful in event scenarios where a spatially limited source distribution produces earthquakes with highly similar waveforms. In this context, the methodology has been successfully deployed to identify low-frequency earthquakes within non-volcanic tremor, to characterize earthquake swarms above magma bodies, and for detailed characterization of aftershock sequences. Here we apply a subspace detection methodology to characterize recent earthquake clusters in Oklahoma. Since 2009, the state has experienced an unprecedented increase in seismicity, which has been attributed by others to recent expansion in deep wastewater injection well activity. Within the last few years, 99% of increased Oklahoma earthquake activity has occurred within 15 km of a Class II injection well. We analyze areas of dense seismic activity in central Oklahoma and construct more complete catalogs for analysis. For a typical cluster, we are able to achieve catalog completeness to near or below magnitude 1 and to continuously document seismic activity for periods of 6 months or more. Our catalog can more completely characterize these clusters in time and space with event numbers, magnitudes, b-values, energy, locations, etc. This detailed examination of swarm events should lead to a better understanding of time-varying earthquake processes and hazards in the state of Oklahoma.
Subspace-Based Bayesian Blind Source Separation for Hyperspectral Imagery
2009-12-01
Subspace-based Bayesian blind source separation for hyperspectral imagery. Nicolas Dobigeon, Saïd Moussaoui, Martial Coulon, Jean-Yves Tourneret. In this paper, a fully Bayesian algorithm for endmember extraction and abundance estimation for hyperspectral imagery is introduced. Following the linear mixing model, each pixel spectrum of the hyperspectral image is decomposed as a linear combination of pure endmember spectra. The estimation of
Low complex subspace minimum variance beamformer for medical ultrasound imaging.
Deylami, Ali Mohades; Asl, Babak Mohammadzadeh
2016-03-01
The minimum variance (MV) beamformer enhances resolution and contrast in medical ultrasound imaging at the expense of higher computational complexity with respect to the non-adaptive delay-and-sum beamformer. The major complexity arises from the estimation of the L×L array covariance matrix using spatial averaging, which is required for more accurate estimation of the covariance matrix of correlated signals, and from its inversion, which is required for calculating the MV weight vector; these costs are as high as O(L²) and O(L³), respectively. Reducing the number of array elements decreases the computational complexity but degrades the imaging resolution. In this paper, we propose a subspace MV beamformer which preserves the advantages of the MV beamformer with lower complexity. The subspace MV neglects some rows of the array covariance matrix instead of reducing the array size. If we keep η rows of the array covariance matrix, which leads to a thin non-square matrix, the weight vector of the subspace beamformer can be obtained in the same way as the MV obtains its weight vector, with complexity as low as O(η²L). Further calculations are saved because an η×L covariance matrix must be estimated instead of an L×L one. We simulated a wire-targets phantom and a cyst phantom to evaluate the performance of the proposed beamformer. The results indicate that we can keep about 16 of the 43 rows of the array covariance matrix, which reduces the complexity to 14% while the image resolution is still comparable to that of the standard MV beamformer. We also applied the proposed method to experimental RF data and showed that the subspace MV beamformer performs like the standard MV with lower computational complexity.
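The row-discarding idea can be sketched as follows. The least-squares step used for the thin system is our own stand-in for the paper's derivation, and the array size, covariance model, and steering vector are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
L, eta = 43, 16                                   # array size, rows kept
a = np.ones(L)                                    # steering vector (post-delay)
G = np.eye(L) + 0.5 * rng.standard_normal((L, L))
R = G @ G.T / L                                   # SPD stand-in covariance

# Standard MV: w = R^{-1} a / (a^H R^{-1} a), costing O(L^3) for the solve.
w_full = np.linalg.solve(R, a)
w_full /= a @ w_full

# Subspace MV sketch: keep only eta rows of R and solve the thin system in
# a least-squares sense, for O(eta^2 L) work instead of O(L^3).
Rt = R[:eta, :]
w_sub = np.linalg.lstsq(Rt, a[:eta], rcond=None)[0]
w_sub /= a @ w_sub                                # distortionless constraint
```

Both weight vectors satisfy the unit-gain (distortionless) constraint toward the steering direction; only the cost of obtaining them differs.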
Physiology and pharmacology of myocardial preconditioning.
Raphael, Jacob
2010-03-01
Perioperative myocardial ischemia and infarction are not only major sources of morbidity and mortality in patients undergoing surgery but also important causes of prolonged hospital stay and resource utilization. Ischemic and pharmacological preconditioning and postconditioning have been known for more than two decades to provide protection against myocardial ischemia and reperfusion and limit myocardial infarct size in many experimental animal models, as well as in clinical studies (1-3). This paper will review the physiology and pharmacology of ischemic and drug-induced preconditioning and postconditioning of the myocardium with special emphasis on the mechanisms by which volatile anesthetics provide myocardial protection. Insights gained from animal and clinical studies will be presented and reviewed and recommendations for the use of perioperative anesthetics and medications will be given.
Classification of Polarimetric SAR Image Based on the Subspace Method
NASA Astrophysics Data System (ADS)
Xu, J.; Li, Z.; Tian, B.; Chen, Q.; Zhang, P.
2013-07-01
Land cover classification is one of the most significant applications in remote sensing. Compared to optical sensing technologies, synthetic aperture radar (SAR) can penetrate through clouds and has all-weather capability. Therefore, land cover classification from SAR images is important in remote sensing. The subspace method is a novel method for SAR data, which reduces data dimensionality by incorporating feature extraction into the classification process. This paper uses the averaged learning subspace method (ALSM), which can be applied to fully polarimetric SAR images for classification. The ALSM algorithm integrates three-component decomposition, eigenvalue/eigenvector decomposition and textural features derived from the gray-level co-occurrence matrix (GLCM). The study site is located in Dingxing County, Hebei Province, China. We compare the subspace method with the traditional supervised Wishart classification. By conducting experiments on a fully polarimetric Radarsat-2 image, we conclude that the proposed method yields higher classification accuracy. Therefore, the ALSM classification method is a feasible alternative for SAR image classification.
Random Subspace Aggregation for Cancer Prediction with Gene Expression Profiles
Yuan, Xiguo; Zhang, Junying
2016-01-01
Background. Precisely predicting cancer is crucial for cancer treatment. Gene expression profiles make it possible to analyze patterns between genes and cancers on the genome-wide scale. Gene expression data analysis, however, is confronted with enormous challenges due to its characteristics, such as high dimensionality, small sample size, and low signal-to-noise ratio. Results. This paper proposes a method, termed RS_SVM, to predict cancer from gene expression profiles via aggregating SVMs trained on random subspaces. After choosing gene features through statistical analysis, RS_SVM randomly selects feature subsets to yield random subspaces, trains SVM classifiers accordingly, and then aggregates the SVM classifiers to capture the advantage of ensemble learning. Experiments on eight real gene expression datasets are performed to validate the RS_SVM method. Experimental results show that RS_SVM achieved better classification accuracy and generalization performance in contrast with single SVM, K-nearest neighbor, decision tree, Bagging, AdaBoost, and the state-of-the-art methods. Experiments also explored the effect of subspace size on prediction performance. Conclusions. The proposed RS_SVM method yielded superior performance in analyzing gene expression profiles, which demonstrates that RS_SVM provides a good channel for such biological data. PMID:27999797
Recursive stochastic subspace identification for structural parameter estimation
NASA Astrophysics Data System (ADS)
Chang, C. C.; Li, Z.
2009-03-01
Identification of structural parameters under ambient conditions is an important research topic for structural health monitoring and damage identification. This problem is especially challenging in practice as these structural parameters can vary with time under severe excitation. Among the techniques developed for this problem, stochastic subspace identification (SSI) is a popular time-domain method. The SSI can perform parametric identification for systems with multiple outputs, which cannot be easily done using other time-domain methods. The SSI uses the orthogonal-triangular (RQ) decomposition and the singular value decomposition (SVD) to process measured data, which makes the algorithm efficient and reliable. The SSI, however, processes data in one batch and hence cannot be used in an on-line fashion. In this paper, a recursive SSI method is proposed for on-line tracking of time-varying modal parameters for a structure under ambient excitation. The Givens rotation technique, which can annihilate designated matrix elements, is used to update the RQ decomposition. Instead of updating the SVD, the projection approximation subspace tracking technique, which uses an unconstrained optimization technique to track the signal subspace, is employed. The proposed technique is demonstrated on the Phase I ASCE benchmark structure. Results show that the technique can identify and track the time-varying modal properties of the building under ambient conditions.
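The rotation-based updating step can be illustrated with a plain QR factor: when a new data row arrives, a short sequence of Givens rotations re-triangularizes the factor instead of refactoring from scratch. This shows only the rotation mechanics; the SSI bookkeeping around it (Hankel matrices, subspace tracking) is omitted:

```python
import numpy as np

def givens(a, b):
    """Return (c, s) such that [[c, s], [-s, c]] applied to (a, b) zeroes b."""
    r = np.hypot(a, b)
    return (1.0, 0.0) if r == 0 else (a / r, b / r)

rng = np.random.default_rng(4)
X = rng.standard_normal((6, 3))
R = np.linalg.qr(X, mode="r")            # current 3x3 triangular factor
new_row = rng.standard_normal(3)

M = np.vstack([R, new_row])              # append the new row...
for j in range(3):                       # ...and rotate it to zero, column by column
    c, s = givens(M[j, j], M[3, j])
    rowj, row3 = M[j].copy(), M[3].copy()
    M[j] = c * rowj + s * row3
    M[3] = -s * rowj + c * row3
R_new = M[:3]                            # updated triangular factor
```

Because the rotations are orthogonal, the updated factor satisfies R_new' R_new = X'X + rr', exactly the normal-equations update for the augmented data.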
A basis in an invariant subspace of analytic functions
Krivosheev, A S; Krivosheeva, O A
2013-12-31
The existence problem for a basis in a differentiation-invariant subspace of analytic functions defined in a bounded convex domain in the complex plane is investigated. Conditions are found for the solvability of a certain special interpolation problem in the space of entire functions of exponential type with conjugate diagrams lying in a fixed convex domain. These underlie sufficient conditions for the existence of a basis in the invariant subspace. This basis consists of linear combinations of eigenfunctions and associated functions of the differentiation operator, whose exponents are combined into relatively small clusters. Necessary conditions for the existence of a basis are also found. Under a natural constraint on the number of points in the groups, these coincide with the sufficient conditions. That is, a criterion is found under this constraint that a basis constructed from relatively small clusters exists in an invariant subspace of analytic functions in a bounded convex domain in the complex plane. Bibliography: 25 titles.
Relations Among Some Low-Rank Subspace Recovery Models.
Zhang, Hongyang; Lin, Zhouchen; Zhang, Chao; Gao, Junbin
2015-09-01
Recovering intrinsic low-dimensional subspaces from data distributed on them is a key preprocessing step for many applications. In recent years, much work has modeled subspace recovery as low-rank minimization problems. We find that some representative models, such as robust principal component analysis (R-PCA), robust low-rank representation (R-LRR), and robust latent low-rank representation (R-LatLRR), are actually deeply connected. More specifically, we discover that once a solution to one of the models is obtained, we can obtain the solutions to other models in closed-form formulations. Since R-PCA is the simplest, our discovery makes it the center of low-rank subspace recovery models. Our work has two important implications. First, R-PCA has a solid theoretical foundation. Under certain conditions, we could find globally optimal solutions to these low-rank models with overwhelming probability, although these models are nonconvex. Second, we can obtain significantly faster algorithms for these models by solving R-PCA first. The computation cost can be further cut by applying low-complexity randomized algorithms, for example, our novel l2,1 filtering algorithm, to R-PCA. Although for the moment the formal proof of our l2,1 filtering algorithm is not yet available, experiments verify the advantages of our algorithm over other state-of-the-art methods based on the alternating direction method.
Relative perturbation theory: (II) Eigenspace and singular subspace variations
Li, R.-C.
1996-01-20
The classical perturbation theory for Hermitian matrix eigenvalue and singular value problems provides bounds on invariant subspace variations that are proportional to the reciprocals of absolute gaps between subsets of spectra or subsets of singular values. These bounds may be bad news for invariant subspaces corresponding to clustered eigenvalues or clustered singular values of much smaller magnitudes than the norms of the matrices under consideration, when some of these clustered eigenvalues or clustered singular values are perfectly relatively distinguishable from the rest. This paper considers how eigenvalues of a Hermitian matrix A change when it is perturbed to Ã = D*AD, and how singular values of a (nonsquare) matrix B change when it is perturbed to B̃ = D₁*BD₂, where D, D₁, and D₂ are assumed to be close to identity matrices of suitable dimensions, or either D₁ or D₂ close to some unitary matrix. It is proved that under these kinds of perturbations, the changes of invariant subspaces are proportional to reciprocals of relative gaps between subsets of spectra or subsets of singular values. We have been able to extend the well-known Davis-Kahan sin θ theorems and Wedin sin θ theorems. As applications, we obtain bounds for perturbations of graded matrices.
Vecharynski, Eugene; Yang, Chao; Pask, John E.
2015-02-25
Here, we present an iterative algorithm for computing an invariant subspace associated with the algebraically smallest eigenvalues of a large sparse or structured Hermitian matrix A. We are interested in the case in which the dimension of the invariant subspace is large (e.g., over several hundreds or thousands) even though it may still be small relative to the dimension of A. These problems arise from, for example, density functional theory (DFT) based electronic structure calculations for complex materials. The key feature of our algorithm is that it performs fewer Rayleigh–Ritz calculations compared to existing algorithms such as the locally optimal block preconditioned conjugate gradient or the Davidson algorithm. It is a block algorithm, and hence can take advantage of efficient BLAS3 operations and be implemented with multiple levels of concurrency. We discuss a number of practical issues that must be addressed in order to implement the algorithm efficiently on a high performance computer.
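For readers unfamiliar with the Rayleigh–Ritz step being counted here, the sketch below shows a single projection: compress A onto a k-dimensional subspace basis and take the eigenpairs of the small projected matrix as approximate eigenpairs of A. The diagonal test matrix and the perturbed basis are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)
n, k = 200, 10
A = np.diag(np.arange(1.0, n + 1.0))     # known spectrum 1, 2, ..., n

# An orthonormal basis roughly spanning the invariant subspace of the k
# smallest eigenvalues (exact eigenvectors plus a small perturbation).
V = np.linalg.qr(np.eye(n)[:, :k] + 1e-3 * rng.standard_normal((n, k)))[0]

H = V.T @ A @ V                          # small k x k projected matrix
theta, S = np.linalg.eigh(H)             # Ritz values and coordinates
ritz_vecs = V @ S                        # Ritz vectors back in R^n
```

The Ritz values interlace the true eigenvalues, so the approximations approach 1, ..., k from above as the basis improves; the cost of the step is dominated by the projection, which is why reducing the number of such steps matters for large subspace dimensions.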
M-step preconditioned conjugate gradient methods
NASA Technical Reports Server (NTRS)
Adams, L.
1983-01-01
Preconditioned conjugate gradient methods for solving sparse symmetric positive definite systems of linear equations are described. Necessary and sufficient conditions are given for when these preconditioners can be used, and an analysis of their effectiveness is given. Efficient computer implementations of these methods are discussed, and results on the CYBER 203 and the Finite Element Machine under construction at NASA Langley Research Center are included.
Preconditioned minimal residual methods for Chebyshev spectral calculations
NASA Technical Reports Server (NTRS)
Canuto, C.; Quarteroni, A.
1985-01-01
The problem of preconditioning the pseudospectral Chebyshev approximation of an elliptic operator is considered. The numerical sensitivity to variations of the coefficients of the operator is investigated for two classes of preconditioning matrices: one arising from finite differences, the other from finite elements. The preconditioned system is solved by a conjugate gradient type method, and by a DuFort-Frankel method with dynamical parameters. The methods are compared on some test problems with the Richardson method and with the minimal residual Richardson method.
Semi-supervised subspace learning for Mumford-Shah model based texture segmentation.
Law, Yan Nei; Lee, Hwee Kuan; Yip, Andy M
2010-03-01
We propose a novel image segmentation model which incorporates subspace clustering techniques into a Mumford-Shah model to solve texture segmentation problems. While the natural unsupervised approach to learn a feature subspace can easily be trapped in a local solution, we propose a novel semi-supervised optimization algorithm that makes use of information derived from both the intermediate segmentation results and the regions-of-interest (ROI) selected by the user to determine the optimal subspaces of the target regions. Meanwhile, these subspaces are embedded into a Mumford-Shah objective function so that each segment of the optimal partition is homogeneous in its own subspace. The method outperforms standard Mumford-Shah models since it can separate textures which are less separated in the full feature space. Experimental results are presented to confirm the usefulness of subspace clustering in texture segmentation.
Preconditioning and the limit to the incompressible flow equations
NASA Technical Reports Server (NTRS)
Turkel, E.; Fiterman, A.; Vanleer, B.
1993-01-01
The use of preconditioning methods to accelerate convergence to a steady state for both the incompressible and compressible fluid dynamic equations is considered. The relation between them, for both the continuous problem and the finite difference approximation, is also considered. The analysis relies on the inviscid equations. The preconditioning consists of a matrix multiplying the time derivatives. Hence, the steady state of the preconditioned system is the same as the steady state of the original system. For finite difference methods the preconditioning can change and improve the steady state solutions. An application to flow around an airfoil is presented.
Matrix preconditioning: a robust operation for optical linear algebra processors.
Ghosh, A; Paparao, P
1987-07-15
Analog electrooptical processors are best suited for applications demanding high computational throughput with tolerance for inaccuracies. Matrix preconditioning is one such application. Matrix preconditioning is a preprocessing step for reducing the condition number of a matrix and is used extensively with gradient algorithms for increasing the rate of convergence and improving the accuracy of the solution. In this paper, we describe a simple parallel algorithm for matrix preconditioning, which can be implemented efficiently on a pipelined optical linear algebra processor. From the results of our numerical experiments we show that the efficacy of the preconditioning algorithm is affected very little by the errors of the optical system.
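As a concrete instance of the preprocessing step described above, even symmetric diagonal (Jacobi) scaling can reduce the condition number dramatically for badly scaled matrices; the 3x3 matrix below is an invented toy example, not one from the paper:

```python
import numpy as np

# A symmetric positive definite matrix with badly scaled diagonal entries.
A = np.array([[100.0, 1.0,  0.0 ],
              [1.0,   1.0,  0.05],
              [0.0,   0.05, 0.01]])

d = np.sqrt(np.diag(A))
Ap = A / np.outer(d, d)        # D^{-1/2} A D^{-1/2}: unit diagonal

kappa = np.linalg.cond(A)      # ill-conditioned: roughly 1e4
kappa_p = np.linalg.cond(Ap)   # after scaling: a small single-digit number
```

A gradient-type solver applied to the scaled system converges in far fewer iterations, since its rate degrades with the condition number.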
Implicit preconditioned WENO scheme for steady viscous flow computation
NASA Astrophysics Data System (ADS)
Huang, Juan-Chen; Lin, Herng; Yang, Jaw-Yen
2009-02-01
A class of lower-upper symmetric Gauss-Seidel implicit weighted essentially nonoscillatory (WENO) schemes is developed for solving the preconditioned Navier-Stokes equations of primitive variables with the Spalart-Allmaras one-equation turbulence model. The numerical flux of the present preconditioned WENO schemes consists of a first-order part and a high-order part. For the first-order part, we adopt the preconditioned Roe scheme, and for the high-order part, we employ preconditioned WENO methods. For comparison purposes, a preconditioned TVD scheme is also given and tested. A time-derivative preconditioning algorithm is devised, together with a discriminant for adjusting the preconditioning parameters at low Mach numbers and turning off the preconditioning at intermediate or high Mach numbers. The computations are performed for the two-dimensional lid driven cavity flow, low subsonic viscous flow over the S809 airfoil, three-dimensional low speed viscous flow over a 6:1 prolate spheroid, transonic flow over the ONERA-M6 wing and hypersonic flow over the HB-2 model. The solutions of the present algorithms are in good agreement with the experimental data. The application of the preconditioned WENO schemes to viscous flows at all speeds not only enhances the accuracy and robustness of resolving shocks and discontinuities for supersonic flows, but also improves the accuracy of low Mach number flow with complicated smooth solution structures.
Ischemic preconditioning protects against gap junctional uncoupling in cardiac myofibroblasts.
Sundset, Rune; Cooper, Marie; Mikalsen, Svein-Ole; Ytrehus, Kirsti
2004-01-01
Ischemic preconditioning increases the heart's tolerance to a subsequent longer ischemic period. The purpose of this study was to investigate the role of gap junction communication in simulated preconditioning in cultured neonatal rat cardiac myofibroblasts. Gap junctional intercellular communication was assessed by Lucifer yellow dye transfer. Preconditioning preserved intercellular coupling after prolonged ischemia. An initial reduction in coupling in response to the preconditioning stimulus was also observed. This may protect neighboring cells from damaging substances produced during subsequent regional ischemia in vivo, and may preserve gap junctional communication required for enhanced functional recovery during subsequent reperfusion.
NASA Astrophysics Data System (ADS)
Jiang, Tian; Zhang, Yong-Tao
2013-11-01
Implicit integration factor (IIF) methods are originally a class of efficient “exactly linear part” time discretization methods for solving time-dependent partial differential equations (PDEs) with linear high order terms and stiff lower order nonlinear terms. For complex systems (e.g. advection-diffusion-reaction (ADR) systems), the highest order derivative term can be nonlinear, and nonlinear nonstiff terms and nonlinear stiff terms are often mixed together. High order weighted essentially non-oscillatory (WENO) methods are often used to discretize the hyperbolic part in ADR systems. There are two open problems on IIF methods for solving ADR systems: (1) how to obtain higher than the second order global time discretization accuracy; (2) how to design IIF methods for solving fully nonlinear PDEs, i.e., the highest order terms are nonlinear. In this paper, we solve these two problems by developing new Krylov IIF-WENO methods to deal with both semilinear and fully nonlinear advection-diffusion-reaction equations. The methods can be designed for arbitrary order of accuracy. The stiffness of the system is resolved well and the methods are stable by using time step sizes which are just determined by the nonstiff hyperbolic part of the system. Large time step size computations are obtained. We analyze the stability and truncation errors of the schemes. Numerical examples of both scalar equations and systems in two and three spatial dimensions are shown to demonstrate the accuracy, efficiency and robustness of the methods.
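The core Krylov ingredient of the IIF schemes above, approximating the action exp(tau*A)v without forming a matrix exponential, can be sketched with a plain Arnoldi loop. The 1-D Laplacian test operator and the subspace size are assumptions for illustration; real IIF-WENO solvers wrap this step inside the time integrator:

```python
import numpy as np

def krylov_expv(A, v, tau, m=25):
    """Approximate exp(tau*A) @ v from an m-dimensional Krylov subspace.
    Assumes A is symmetric, so the projected matrix is (numerically) tridiagonal."""
    n = v.size
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(v)
    V[:, 0] = v / beta
    for j in range(m):                       # Arnoldi with modified Gram-Schmidt
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:              # happy breakdown: subspace is invariant
            break
        V[:, j + 1] = w / H[j + 1, j]
    Hm = (H[:m, :m] + H[:m, :m].T) / 2       # symmetrize away rounding noise
    evals, S = np.linalg.eigh(Hm)            # exp of the small projected matrix
    return beta * V[:, :m] @ (S @ (np.exp(tau * evals) * S[0, :]))

# Demo on a stiff 1-D diffusion operator (discrete Laplacian).
n = 100
A = (np.diag(-2.0 * np.ones(n))
     + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1))
rng = np.random.default_rng(8)
v = rng.standard_normal(n)
approx = krylov_expv(A, v, tau=1.0)
```

The exponential of the stiff linear part is applied through a small m x m problem, which is what allows IIF schemes to take time steps constrained only by the nonstiff terms.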
Recovery Discontinuous Galerkin Jacobian-free Newton-Krylov Method for all-speed flows
HyeongKae Park; Robert Nourgaliev; Vincent Mousseau; Dana Knoll
2008-07-01
There is an increasing interest in developing the next generation of simulation tools for advanced nuclear energy systems. These tools will utilize state-of-the-art numerical algorithms and computer science technology in order to maximize predictive capability, support advanced reactor designs, reduce uncertainty and increase safety margins. In analyzing nuclear energy systems, we are interested in compressible low-Mach-number, high-heat-flux flows with a wide range of Re, Ra, and Pr numbers. Under these conditions, the focus is placed on turbulent heat transfer, in contrast to other industries whose main interest is in capturing turbulent mixing. Our objective is to develop single-point turbulence closure models for a large-scale engineering CFD code, using Direct Numerical Simulation (DNS) or Large Eddy Simulation (LES) tools, which requires very accurate and efficient numerical algorithms. The focus of this work is placed on a fully-implicit, high-order spatiotemporal discretization based on the discontinuous Galerkin method solving the conservative form of the compressible Navier-Stokes equations. The method utilizes a local reconstruction procedure derived from a weak formulation of the problem, inspired by the recovery diffusion flux algorithm of van Leer and Nomura [?] and by the piecewise parabolic reconstruction [?] in the finite volume method. The developed methodology is integrated into the Jacobian-free Newton-Krylov framework [?] to allow a fully-implicit solution of the problem.
Video background tracking and foreground extraction via L1-subspace updates
NASA Astrophysics Data System (ADS)
Pierantozzi, Michele; Liu, Ying; Pados, Dimitris A.; Colonnese, Stefania
2016-05-01
We consider the problem of online foreground extraction from compressed-sensed (CS) surveillance videos. A technically novel approach is suggested and developed by which the background scene is captured by an L1-norm subspace sequence directly in the CS domain. In contrast to conventional L2-norm subspaces, L1-norm subspaces are seen to offer significant robustness to outliers, disturbances, and rank selection. Subtraction of the L1-subspace tracked background then leads to effective foreground/moving object extraction. Experimental studies included in this paper illustrate and support the theoretical developments.
Updating Hawaii Seismicity Catalogs with Systematic Relocations and Subspace Detectors
NASA Astrophysics Data System (ADS)
Okubo, P.; Benz, H.; Matoza, R. S.; Thelen, W. A.
2015-12-01
We continue the systematic relocation of seismicity recorded in Hawai`i by the United States Geological Survey's (USGS) Hawaiian Volcano Observatory (HVO), with interests in adding to the products derived from the relocated seismicity catalogs published by Matoza et al. (2013, 2014). Another goal of this effort is updating the systematically relocated HVO catalog since 2009, when earthquake cataloging at HVO was migrated to the USGS Advanced National Seismic System Quake Management Software (AQMS) systems. To complement the relocation analyses of the catalogs generated from traditional STA/LTA event-triggered and analyst-reviewed approaches, we are also experimenting with subspace detection of events at Kilauea as a means to augment AQMS procedures for cataloging seismicity to lower magnitudes and during episodes of elevated volcanic activity. Our earlier catalog relocations have demonstrated the ability to define correlated or repeating families of earthquakes and provide more detailed definition of seismogenic structures, as well as the capability for improved automatic identification of diverse volcanic seismic sources. Subspace detectors have been successfully applied to cataloging seismicity in situations of low seismic signal-to-noise and have significantly increased catalog sensitivity to lower magnitude thresholds. We anticipate similar improvements using event subspace detections and cataloging of volcanic seismicity, including improved discrimination among not only evolving earthquake sequences but also diverse volcanic seismic source processes. Matoza et al., 2013, Systematic relocation of seismicity on Hawai`i Island from 1992 to 2009 using waveform cross correlation and cluster analysis, J. Geophys. Res., 118, 2275-2288, doi:10.1002/jgrb.580189. Matoza et al., 2014, High-precision relocation of long-period events beneath the summit region of Kīlauea Volcano, Hawai`i, from 1986 to 2009, Geophys. Res. Lett., 41, 3413-3421, doi:10.1002/2014GL059819.
Subspace-based analysis of the ERT inverse problem
NASA Astrophysics Data System (ADS)
Ben Hadj Miled, Mohamed Khames; Miller, Eric L.
2004-05-01
In a previous work, we proposed a source-type formulation of the electrical resistance tomography (ERT) problem. Specifically, we showed that inhomogeneities in the medium can be viewed as secondary sources embedded in the homogeneous background medium and located at positions associated with variation in electrical conductivity. Assuming a piecewise constant conductivity distribution, the support of the equivalent sources is equal to the boundary of the inhomogeneity. The estimation of the anomaly shape takes the form of an inverse source-type problem. In this paper, we explore the use of subspace methods to localize the secondary equivalent sources associated with discontinuities in the conductivity distribution. Our first alternative is the multiple signal classification (MUSIC) algorithm, which is commonly used in the localization of multiple sources. The idea is to project a finite collection of plausible pole (or dipole) sources onto an estimated signal subspace and select those with largest correlations. In ERT, secondary sources are excited simultaneously but in different ways, i.e. with distinct amplitude patterns, depending on the locations and amplitudes of primary sources. If the number of receivers is "large enough", different source configurations can lead to a set of observation vectors that span the data subspace. However, since sources that are spatially close to each other have highly correlated signatures, separation of such signals becomes very difficult in the presence of noise. To overcome this problem we consider iterative MUSIC algorithms like R-MUSIC and RAP-MUSIC. These recursive algorithms pose a computational burden as they require multiple large combinatorial searches. Results obtained with these algorithms using simulated data of different conductivity patterns are presented.
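The core MUSIC computation the abstract builds on, projecting candidate sources onto the estimated noise subspace, is compact. The sketch below is a generic narrowband array-processing version (not the ERT-specific variant), assuming NumPy:

```python
import numpy as np

def music_spectrum(R, steering, n_sources):
    """MUSIC pseudospectrum: candidate steering vectors (columns of
    `steering`) are projected onto the noise subspace of the sample
    covariance R; true source locations appear as sharp peaks."""
    _, V = np.linalg.eigh(R)                    # eigenvalues ascending
    En = V[:, : R.shape[0] - n_sources]         # noise-subspace eigenvectors
    proj = np.linalg.norm(En.conj().T @ steering, axis=0)
    return 1.0 / proj**2
```

A vector that truly lies in the signal subspace is nearly orthogonal to the noise subspace, so its projection norm is tiny and the pseudospectrum value is huge; this is the correlation-based selection described above.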
Domain-decomposed preconditionings for transport operators
NASA Technical Reports Server (NTRS)
Chan, Tony F.; Gropp, William D.; Keyes, David E.
1991-01-01
The performance of five different interface preconditionings for domain-decomposed convection-diffusion problems was tested, including a novel one known as the spectral probe, while varying mesh parameters, Reynolds number, ratio of subdomain diffusion coefficients, and domain aspect ratio. The preconditioners are representative of the range of practically computable possibilities that have appeared in the domain decomposition literature for the treatment of nonoverlapping subdomains. It is shown through a large number of numerical examples that no single preconditioner can be considered uniformly superior or uniformly inferior to the rest, but that knowledge of particulars, including the shape and strength of the convection, is important in selecting among them in a given problem.
Fast permutation preconditioning for fractional diffusion equations.
Wang, Sheng-Feng; Huang, Ting-Zhu; Gu, Xian-Ming; Luo, Wei-Hua
2016-01-01
In this paper, an implicit finite difference scheme with the shifted Grünwald formula, which is unconditionally stable, is used to discretize the fractional diffusion equations with constant diffusion coefficients. The coefficient matrix possesses the Toeplitz structure and the fast Toeplitz matrix-vector product can be utilized to reduce the computational complexity from O(N^2) to O(N log N), where N is the number of grid points. Two preconditioned iterative methods, named the bi-conjugate gradient method for Toeplitz matrix and the bi-conjugate residual method for Toeplitz matrix, are proposed to solve the relevant discretized systems. Finally, numerical experiments are reported to show the effectiveness of our preconditioners.
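The fast Toeplitz matrix-vector product referred to above works by embedding the N x N Toeplitz matrix in a circulant of size 2N, whose action is diagonalized by the FFT. A NumPy sketch (our own helper, not the paper's code):

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    """O(N log N) product of the Toeplitz matrix with first column c and
    first row r (with r[0] == c[0]) and vector x, via a 2N-point circulant
    embedding and the FFT."""
    n = len(x)
    # first column of the circulant: [c_0..c_{n-1}, 0, r_{n-1}..r_1]
    col = np.concatenate([c, [0.0], r[:0:-1]])
    y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(np.concatenate([x, np.zeros(n)])))
    return y[:n].real
```

The first N entries of the circulant product reproduce the Toeplitz product exactly, at the cost of three FFTs of length 2N instead of an O(N^2) dense multiply.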
Extremely Intense Magnetospheric Substorms : External Triggering? Preconditioning?
NASA Astrophysics Data System (ADS)
Tsurutani, Bruce; Echer, Ezequiel; Hajra, Rajkumar
2016-07-01
We study particularly intense substorms using a variety of near-Earth spacecraft data and ground observations. We will examine the solar cycle dependences of these events, determine whether the supersubstorms are externally or internally triggered, and explore their relationship to other factors such as magnetospheric preconditioning. If time permits, we will explore the details of the events and whether or not they are similar to regular (Akasofu, 1964) substorms. These intense substorms are an important feature of space weather since they may be responsible for power outages.
Parallel preconditioning techniques for sparse CG solvers
Basermann, A.; Reichel, B.; Schelthoff, C.
1996-12-31
Conjugate gradient (CG) methods for solving sparse systems of linear equations play an important role in numerical methods for solving discretized partial differential equations. The large size and poor conditioning of many matrices arising from technical or physical applications in this area result in the need for efficient parallelization and preconditioning techniques for the CG method. In particular for very ill-conditioned matrices, sophisticated preconditioners are necessary to obtain both acceptable convergence and accuracy of CG. Here, we investigate variants of polynomial and incomplete Cholesky preconditioners that markedly reduce the iteration counts of simple diagonally scaled CG and are shown to be well suited for massively parallel machines.
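The diagonally scaled CG that serves as the baseline above can be sketched in a few lines; only the definition of the preconditioner changes for the polynomial or incomplete Cholesky variants. A NumPy sketch with our own interface:

```python
import numpy as np

def pcg_jacobi(A, b, tol=1e-10, maxit=500):
    """Conjugate gradients with a diagonal (Jacobi) preconditioner for
    symmetric positive definite A.  Returns (solution, iterations)."""
    d_inv = 1.0 / np.diag(A)          # the preconditioner: M^{-1} = D^{-1}
    x = np.zeros_like(b)
    r = b - A @ x
    z = d_inv * r
    p = z.copy()
    rz = r @ z
    for k in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, k + 1
        z = d_inv * r                 # apply preconditioner
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit
```

Swapping `d_inv * r` for a polynomial in A applied to r, or a sparse triangular solve, gives the preconditioner families the abstract compares.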
H(curl) Auxiliary Mesh Preconditioning
Kolev, T V; Pasciak, J E; Vassilevski, P S
2006-08-31
This paper analyzes a two-level preconditioning scheme for H(curl) bilinear forms. The scheme utilizes an auxiliary problem on a related mesh that is more amenable for constructing optimal order multigrid methods. More specifically, we analyze the case when the auxiliary mesh only approximately covers the original domain. The latter assumption is important since it allows for easy construction of nested multilevel spaces on regular auxiliary meshes. Numerical experiments in both two and three space dimensions illustrate the optimal performance of the method.
Towards bulk based preconditioning for quantum dot computations
Dongarra, Jack; Langou, Julien; Tomov, Stanimire; Channing, Andrew; Marques, Osni; Vomel, Christof; Wang, Lin-Wang
2006-05-25
This article describes how to accelerate the convergence of Preconditioned Conjugate Gradient (PCG) type eigensolvers for the computation of several states around the band gap of colloidal quantum dots. Our new approach uses the Hamiltonian of the bulk material constituent of the quantum dot to design an efficient preconditioner for the folded spectrum PCG method. The technique described shows promising results when applied to CdSe quantum dot model problems. We show a decrease in the number of iteration steps by at least a factor of 4 compared to the previously used diagonal preconditioner.
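The folded spectrum idea is to turn an interior eigenvalue near a reference energy e_ref into the smallest eigenvalue of (H - e_ref I)^2, which gradient-type solvers can then minimize. A dense NumPy illustration of just that transformation (in the actual PCG solvers the folded operator is only applied to vectors, never formed, and the bulk-based preconditioner of the paper is not reproduced here):

```python
import numpy as np

def folded_spectrum_state(H, e_ref):
    """Eigenvector of symmetric H whose eigenvalue lies closest to e_ref,
    obtained as the lowest eigenvector of the folded operator
    F = (H - e_ref I)^2.  Dense stand-in for the iterative solve."""
    n = len(H)
    shifted = H - e_ref * np.eye(n)
    _, V = np.linalg.eigh(shifted @ shifted)
    return V[:, 0]                    # smallest folded eigenvalue
```

Folding maps the interior of the spectrum to the bottom, at the price of squaring the condition number, which is exactly why a good preconditioner matters for the folded PCG iteration.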
Review of Preconditioning Methods for Fluid Dynamics
1992-09-01
applies equally to all preconditioners, e.g. that of Van Leer et al., which will now be presented. The Van Leer, Lee, Roe preconditioning [55] for ... add an artificial viscosity. Accuracy is improved for low Mach number flows if the preconditioner is applied only to the physical convective and ...
r-principal subspace for driver cognitive state classification.
Almahasneh, Hossam; Kamel, Nidal; Walter, Nicolas; Malik, Aamir Saeed
2015-01-01
Using EEG signals, a novel technique for driver cognitive state assessment is presented, analyzed and experimentally verified. The proposed technique depends on the singular value decomposition (SVD) in finding the distributed energy of the EEG data matrix A in the direction of the r-principal subspace. This distribution is unique and sensitive to the changes in the cognitive state of the driver due to external stimuli, so it is used as a set of features for classification. The proposed technique is tested with 42 subjects using 128 EEG channels and the results show significant improvements in terms of accuracy, specificity, sensitivity, and false detection in comparison to other recently proposed techniques.
Accurate Excited State Geometries within Reduced Subspace TDDFT/TDA.
Robinson, David
2014-12-09
A method for the calculation of TDDFT/TDA excited state geometries within a reduced subspace of Kohn-Sham orbitals has been implemented and tested. Accurate geometries are found for all of the fluorophore-like molecules tested, with at most all occupied valence orbitals and half of the virtual orbitals included, and for some molecules even fewer orbitals. Efficiency gains of between 15 and 30% are found for essentially the same level of accuracy as a standard TDDFT/TDA excited state geometry optimization calculation.
Software for computing eigenvalue bounds for iterative subspace matrix methods
NASA Astrophysics Data System (ADS)
Shepard, Ron; Minkoff, Michael; Zhou, Yunkai
2005-07-01
This paper describes software for computing eigenvalue bounds to the standard and generalized hermitian eigenvalue problem as described in [Y. Zhou, R. Shepard, M. Minkoff, Computing eigenvalue bounds for iterative subspace matrix methods, Comput. Phys. Comm. 167 (2005) 90-102]. The software discussed in this manuscript applies to any subspace method, including Lanczos, Davidson, SPAM, Generalized Davidson Inverse Iteration, Jacobi-Davidson, and the Generalized Jacobi-Davidson methods, and it is applicable to either outer or inner eigenvalues. This software can be applied during the subspace iterations in order to truncate the iterative process and to avoid unnecessary effort when converging specific eigenvalues to a required target accuracy, and it can be applied to the final set of Ritz values to assess the accuracy of the converged results. Program summary. Title of program: SUBROUTINE BOUNDS_OPT. Catalogue identifier: ADVE. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVE. Computers: any computer that supports a Fortran 90 compiler. Operating systems: any operating system that supports a Fortran 90 compiler. Programming language: Standard Fortran 90. High speed storage required: 5m+5 working-precision and 2m+7 integer for m Ritz values. No. of bits in a word: The floating point working precision is parameterized with the symbolic constant WP. No. of lines in distributed program, including test data, etc.: 2452. No. of bytes in distributed program, including test data, etc.: 281 543. Distribution format: tar.gz. Nature of physical problem: The computational solution of eigenvalue problems using iterative subspace methods has widespread applications in the physical sciences and engineering as well as other areas of mathematical modeling (economics, social sciences, etc.). The accuracy of the solution of such problems and the utility of those errors is a fundamental problem that is of
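The simplest bound of the kind such software refines is the residual bound: for symmetric A, every Ritz value theta with unit-norm Ritz vector y has an eigenvalue of A within ||A y - theta y|| of it. A NumPy sketch (our own helper names, not the BOUNDS_OPT interface):

```python
import numpy as np

def ritz_residual_bounds(A, Y):
    """Ritz values and residual-norm bounds from an orthonormal basis Y of a
    subspace: for symmetric A, some eigenvalue of A lies within bounds[i]
    of thetas[i].  The Ritz vectors Y @ S have unit norm since Y is
    orthonormal and S is orthogonal."""
    T = Y.T @ A @ Y                    # projected (Rayleigh quotient) matrix
    thetas, S = np.linalg.eigh(T)
    X = Y @ S                          # Ritz vectors
    bounds = np.linalg.norm(A @ X - X * thetas, axis=0)
    return thetas, bounds
```

Monitoring these bounds during the subspace iteration is what lets a solver stop as soon as the targeted eigenvalues are provably within tolerance.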
Ren, C; Gao, X; Steinberg, G K; Zhao, H
2008-02-19
Remote ischemic preconditioning is an emerging concept for stroke treatment, but its protection against focal stroke has not been established. We tested whether remote preconditioning, performed in the ipsilateral hind limb, protects against focal stroke and explored its protective parameters. Stroke was generated by a permanent occlusion of the left distal middle cerebral artery (MCA) combined with a 30 min occlusion of the bilateral common carotid arteries (CCA) in male rats. Limb preconditioning was generated by 5 or 15 min occlusion followed by the same period of reperfusion of the left hind femoral artery, repeated for two or three cycles. Infarct was measured 2 days later. The results showed that rapid preconditioning with three cycles of 15 min performed immediately before stroke reduced infarct size from 47.7+/-7.6% of control ischemia to 9.8+/-8.6%; at two cycles of 15 min, infarct was reduced to 24.7+/-7.3%; at two cycles of 5 min, infarct was not reduced. Delayed preconditioning with three cycles of 15 min conducted 2 days before stroke also reduced infarct to 23.0+/-10.9%, but with two cycles of 15 min it offered no protection. The protective effects at these two therapeutic time windows of remote preconditioning are consistent with those of conventional preconditioning, in which the preconditioning ischemia is induced in the brain itself. Unexpectedly, intermediate preconditioning with three cycles of 15 min performed 12 h before stroke also reduced infarct to 24.7+/-4.7%, which contradicts the current dogma on therapeutic time windows for conventional preconditioning, which offers no protection at this time point. In conclusion, remote preconditioning performed in one limb protected against ischemic damage after focal cerebral ischemia.
40 CFR 86.132-00 - Vehicle preconditioning.
Code of Federal Regulations, 2012 CFR
2012-07-01
... conditioning (SC03) can be run immediately or up to 72 hours after the official FTP and/or evaporative test... may be pushed or driven onto the test dynamometer. Acceptable cycles for preconditioning are as follows: (i) Preconditioning may consist of a 505, 866, highway, US06 or SC03 test cycles. (ii) (iii) If...
40 CFR 86.132-00 - Vehicle preconditioning.
Code of Federal Regulations, 2011 CFR
2011-07-01
... conditioning (SC03) can be run immediately or up to 72 hours after the official FTP and/or evaporative test... may be pushed or driven onto the test dynamometer. Acceptable cycles for preconditioning are as follows: (i) Preconditioning may consist of a 505, 866, highway, US06 or SC03 test cycles. (ii) (iii) If...
40 CFR 86.132-00 - Vehicle preconditioning.
Code of Federal Regulations, 2013 CFR
2013-07-01
... conditioning (SC03) can be run immediately or up to 72 hours after the official FTP and/or evaporative test... may be pushed or driven onto the test dynamometer. Acceptable cycles for preconditioning are as follows: (i) Preconditioning may consist of a 505, 866, highway, US06 or SC03 test cycles. (ii) (iii) If...
40 CFR 86.132-00 - Vehicle preconditioning.
Code of Federal Regulations, 2010 CFR
2010-07-01
... conditioning (SC03) can be run immediately or up to 72 hours after the official FTP and/or evaporative test... may be pushed or driven onto the test dynamometer. Acceptable cycles for preconditioning are as follows: (i) Preconditioning may consist of a 505, 866, highway, US06 or SC03 test cycles. (ii) (iii) If...
40 CFR 86.132-00 - Vehicle preconditioning.
Code of Federal Regulations, 2014 CFR
2014-07-01
... conditioning (SC03) can be run immediately or up to 72 hours after the official FTP and/or evaporative test... may be pushed or driven onto the test dynamometer. Acceptable cycles for preconditioning are as follows: (i) Preconditioning may consist of a 505, 866, highway, US06 or SC03 test cycles. (ii) (iii) If...
A preconditioned formulation of the Cauchy-Riemann equations
NASA Technical Reports Server (NTRS)
Phillips, T. N.
1983-01-01
A preconditioning of the Cauchy-Riemann equations which results in a second-order system is described. This system is shown to have a unique solution if the boundary conditions are chosen carefully. This choice of boundary condition enables the solution of the first-order system to be retrieved. A numerical solution of the preconditioned equations is obtained by the multigrid method.
40 CFR 1066.405 - Vehicle preparation and preconditioning.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 33 2014-07-01 2014-07-01 false Vehicle preparation and preconditioning. 1066.405 Section 1066.405 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED... Test § 1066.405 Vehicle preparation and preconditioning. Prepare the vehicle for testing...
Parallel Preconditioning for CFD Problems on the CM-5
NASA Technical Reports Server (NTRS)
Simon, Horst D.; Kremenetsky, Mark D.; Richardson, John; Lasinski, T. A. (Technical Monitor)
1994-01-01
To date, preconditioning methods on massively parallel systems have faced a major difficulty. The preconditioning methods most successful in accelerating the convergence of the iterative solver, such as incomplete LU factorizations, are notoriously difficult to implement on parallel machines for two reasons: (1) the actual computation of the preconditioner is not very floating-point intensive, but requires a large amount of unstructured communication, and (2) the application of the preconditioning matrix in the iteration phase (i.e., triangular solves) is difficult to parallelize because of the recursive nature of the computation. Here we present a new approach to preconditioning for very large, sparse, unsymmetric, linear systems, which avoids both difficulties. We explicitly compute an approximate inverse of our original matrix. This new preconditioning matrix can be applied most efficiently in iterative methods on massively parallel machines, since the preconditioning phase involves only a matrix-vector multiplication, possibly with a dense matrix. Furthermore, the actual computation of the preconditioning matrix has natural parallelism. For a problem of size n, the preconditioning matrix can be computed by solving n independent small least squares problems. The algorithm and its implementation on the Connection Machine CM-5 are discussed in detail and supported by extensive timings obtained from real problem data.
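The approximate-inverse construction described above reduces, column by column, to small independent least-squares problems; each column could be computed on a separate processor. A serial NumPy sketch of that structure (the sparsity-pattern choice and helper names are ours):

```python
import numpy as np

def approx_inverse(A, pattern):
    """Right approximate inverse M ~ A^{-1}: column j minimizes
    ||A[:, idx] m - e_j||_2 over the allowed index set idx = pattern(j, n),
    so that ||A M - I||_F is minimized column by column.  Every column is an
    independent small least-squares problem, hence the natural parallelism."""
    n = A.shape[0]
    M = np.zeros((n, n))
    for j in range(n):
        idx = pattern(j, n)            # indices allowed to be nonzero
        e = np.zeros(n)
        e[j] = 1.0
        m, *_ = np.linalg.lstsq(A[:, idx], e, rcond=None)
        M[idx, j] = m
    return M
```

Because the diagonal-only pattern is a subset of any band pattern, the banded approximate inverse can never be worse, column by column, than plain Jacobi scaling.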
40 CFR 86.532-78 - Vehicle preconditioning.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 18 2011-07-01 2011-07-01 false Vehicle preconditioning. 86.532-78... (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES Emission Regulations for 1978 and Later New Motorcycles; Test Procedures § 86.532-78 Vehicle preconditioning. (a) The...
40 CFR 86.532-78 - Vehicle preconditioning.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 19 2013-07-01 2013-07-01 false Vehicle preconditioning. 86.532-78... (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES Emission Regulations for 1978 and Later New Motorcycles; Test Procedures § 86.532-78 Vehicle preconditioning. (a) The...
40 CFR 86.532-78 - Vehicle preconditioning.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 19 2012-07-01 2012-07-01 false Vehicle preconditioning. 86.532-78... (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES Emission Regulations for 1978 and Later New Motorcycles; Test Procedures § 86.532-78 Vehicle preconditioning. (a) The...
40 CFR 86.532-78 - Vehicle preconditioning.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 19 2014-07-01 2014-07-01 false Vehicle preconditioning. 86.532-78... (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES Emission Regulations for 1978 and Later New Motorcycles; Test Procedures § 86.532-78 Vehicle preconditioning. (a) The...
40 CFR 86.532-78 - Vehicle preconditioning.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 18 2010-07-01 2010-07-01 false Vehicle preconditioning. 86.532-78... (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES Emission Regulations for 1978 and Later New Motorcycles; Test Procedures § 86.532-78 Vehicle preconditioning. (a) The...
The multigrid preconditioned conjugate gradient method
NASA Technical Reports Server (NTRS)
Tatebe, Osamu
1993-01-01
A multigrid preconditioned conjugate gradient method (MGCG method), which uses the multigrid method as a preconditioner of the PCG method, is proposed. The multigrid method has inherent high parallelism and improves the convergence of long-wavelength components, which is important in iterative methods. By using this method as a preconditioner of the PCG method, an efficient method with high parallelism and fast convergence is obtained. First, a necessary condition on the multigrid preconditioner to satisfy the requirements of a PCG preconditioner is considered. Next, numerical experiments show the behavior of the MGCG method and that the MGCG method is superior to both the ICCG method and the multigrid method in terms of fast convergence and high parallelism. This fast convergence is understood in terms of the eigenvalue analysis of the preconditioned matrix. From this observation of the multigrid preconditioner, it is realized that the MGCG method converges in very few iterations and that the multigrid preconditioner is a desirable preconditioner for the conjugate gradient method.
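A minimal stand-in for the multigrid preconditioner is a single two-grid cycle: weighted-Jacobi smoothing plus a Galerkin coarse-grid correction, shown here for the 1D Poisson model problem (a NumPy simplification of the idea, not the author's code):

```python
import numpy as np

def poisson(n):
    """1D Poisson model matrix tridiag(-1, 2, -1) with n interior points."""
    return 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def interpolation(nc):
    """Linear interpolation from nc coarse points to 2*nc + 1 fine points."""
    P = np.zeros((2 * nc + 1, nc))
    for j in range(nc):
        i = 2 * j + 1
        P[i - 1, j] = 0.5
        P[i, j] = 1.0
        P[i + 1, j] = 0.5
    return P

def two_grid_precond(A, r, nu=2, omega=2.0 / 3.0):
    """One two-grid cycle approximating A^{-1} r: weighted-Jacobi
    pre-smoothing, Galerkin coarse correction, post-smoothing.
    Assumes len(r) is odd so the grid coarsens evenly."""
    n = len(r)
    P = interpolation((n - 1) // 2)
    R = 0.5 * P.T                      # full-weighting restriction
    Ac = R @ A @ P                     # Galerkin coarse operator
    Dinv = omega / np.diag(A)
    x = np.zeros(n)
    for _ in range(nu):                # pre-smooth
        x += Dinv * (r - A @ x)
    x += P @ np.linalg.solve(Ac, R @ (r - A @ x))   # coarse correction
    for _ in range(nu):                # post-smooth
        x += Dinv * (r - A @ x)
    return x
```

Called with the CG residual as r, this cycle plays the role of applying M^{-1} in MGCG: the smoother damps short-wavelength error while the coarse solve removes the long-wavelength components the abstract highlights.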
Conformal Laplace superintegrable systems in 2D: polynomial invariant subspaces
NASA Astrophysics Data System (ADS)
Escobar-Ruiz, M. A.; Miller, Willard, Jr.
2016-07-01
2nd-order conformal superintegrable systems in n dimensions are Laplace equations on a manifold with an added scalar potential and 2n-1 independent 2nd order conformal symmetry operators. They encode all the information about Helmholtz (eigenvalue) superintegrable systems in an efficient manner: there is a 1-1 correspondence between Laplace superintegrable systems and Stäckel equivalence classes of Helmholtz superintegrable systems. In this paper we focus on superintegrable systems in two dimensions, n = 2, where there are 44 Helmholtz systems, corresponding to 12 Laplace systems. For each Laplace equation we determine the possible two-variate polynomial subspaces that are invariant under the action of the Laplace operator, thus leading to families of polynomial eigenfunctions. We also study the behavior of the polynomial invariant subspaces under a Stäckel transform. The principal new results are the details of the polynomial variables and the conditions on parameters of the potential corresponding to polynomial solutions. The hidden gl(3)-algebraic structure is exhibited for the exact and quasi-exact systems. For physically meaningful solutions, the orthogonality properties and normalizability of the polynomials are presented as well. Finally, for all Helmholtz superintegrable solvable systems we give a unified construction of one-dimensional (1D) and two-dimensional (2D) quasi-exactly solvable potentials possessing polynomial solutions, and a construction of new 2D PT-symmetric potentials is established.
A Subspace Method for Dynamical Estimation of Evoked Potentials
Georgiadis, Stefanos D.; Ranta-aho, Perttu O.; Tarvainen, Mika P.; Karjalainen, Pasi A.
2007-01-01
It is a challenge in evoked potential (EP) analysis to incorporate prior physiological knowledge for estimation. In this paper, we address the problem of single-channel trial-to-trial EP characteristics estimation. Prior information about phase-locked properties of the EPs is assessed by means of the estimated signal subspace and eigenvalue decomposition. Then, for those situations in which dynamic fluctuations from stimulus to stimulus can be expected, prior information can be exploited by means of state-space modeling and recursive Bayesian mean square estimation methods (Kalman filtering and smoothing). We demonstrate that a few dominant eigenvectors of the data correlation matrix are able to model trend-like changes of some components of the EPs, and that the Kalman smoother algorithm is to be preferred in terms of better tracking capabilities and mean square error reduction. We also demonstrate the effect of strong artifacts, particularly eye blinks, on the quality of the signal subspace and EP estimates by means of independent component analysis applied as a preprocessing step on the multichannel measurements. PMID:18288257
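The signal-subspace prior described above, keeping only the few dominant eigenvectors of the data correlation matrix, amounts to a projection of each trial. A NumPy sketch of just that step (our own interface; the Kalman filtering and smoothing recursions are omitted):

```python
import numpy as np

def subspace_filter(X, k):
    """Project each trial (row of X, shape trials x samples) onto the k
    dominant eigenvectors of the data correlation matrix, suppressing
    activity outside the estimated signal subspace."""
    C = X.T @ X / X.shape[0]          # sample correlation matrix
    _, V = np.linalg.eigh(C)          # eigenvalues ascending
    Vk = V[:, -k:]                    # dominant eigenvectors
    return X @ Vk @ Vk.T
```

When the phase-locked EP component dominates the correlation matrix, this projection retains it while discarding most of the trial-to-trial noise, which is what makes the subsequent state-space tracking well conditioned.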
Random subspace ensemble for target recognition of ladar range image
NASA Astrophysics Data System (ADS)
Liu, Zheng-Jun; Li, Qi; Wang, Qi
2013-02-01
Laser detection and ranging (ladar) range images have attracted considerable attention in the field of automatic target recognition. Generally, it is difficult to collect a mass of range images for ladar in real applications. However, with small samples, the Hughes effect may occur when the number of features is larger than the size of the training set. A random subspace ensemble of support vector machines (RSE-SVM) is applied to solve the problem. Three experiments were performed: (1) the performance comparison among affine moment invariants (AMIs), Zernike moment invariants (ZMIs) and their combined moment invariants (CMIs) based on different-sized training sets using a single SVM; (2) the impact of the number of features on the RSE-SVM and the semi-random subspace ensemble of support vector machines; (3) the performance comparison between the RSE-SVM and the CMIs with SVM ensembles. The experimental results demonstrate that the RSE-SVM is able to relieve the Hughes effect and performs better than ZMIs with a single SVM and CMIs with SVM ensembles.
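The random-subspace ensemble idea, training each base classifier on a random subset of the features and taking a majority vote, can be sketched with a nearest-centroid base learner standing in for the SVM (a NumPy-only illustration of the principle, not the authors' RSE-SVM):

```python
import numpy as np

class RandomSubspaceEnsemble:
    """Majority vote over base classifiers, each trained on a random subset
    of the features.  A nearest-centroid classifier stands in for the SVM
    base learner of the abstract."""

    def __init__(self, n_members=15, subspace_frac=0.5, seed=0):
        self.n_members = n_members
        self.frac = subspace_frac
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        d = X.shape[1]
        k = max(1, int(self.frac * d))
        self.classes = np.unique(y)
        self.members = []
        for _ in range(self.n_members):
            idx = self.rng.choice(d, size=k, replace=False)
            cents = np.stack([X[y == c][:, idx].mean(axis=0) for c in self.classes])
            self.members.append((idx, cents))
        return self

    def predict(self, X):
        votes = np.zeros((len(X), len(self.classes)))
        for idx, cents in self.members:
            # squared distances of every sample to every class centroid
            D = ((X[:, idx, None] - cents.T[None]) ** 2).sum(axis=1)
            votes[np.arange(len(X)), np.argmin(D, axis=1)] += 1
        return self.classes[np.argmax(votes, axis=1)]
```

Each member sees only a fraction of the features, so the effective dimensionality per learner stays below the training-set size, which is the mechanism by which the ensemble relieves the Hughes effect.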
Inhalational Anesthetics as Preconditioning Agents in Ischemic Brain
Wang, Lan; Traystman, Richard J.; Murphy, Stephanie J.
2008-01-01
SUMMARY While many pharmacological agents have been shown to protect the brain from cerebral ischemia in animal models, none have translated successfully to human patients. One potential clinical neuroprotective strategy in humans may involve increasing the brain’s tolerance to ischemia by pre-ischemic conditioning (preconditioning). There are many methods to induce tolerance via preconditioning such as: ischemia itself, pharmacological, hypoxia, endotoxin, and others. Inhalational anesthetic agents have also been shown to result in brain preconditioning. Mechanisms responsible for brain preconditioning are many, complex, and unclear and may involve Akt activation, ATP-sensitive potassium channels, and nitric oxide, amongst many others. Anesthetics, however, may play an important and unique role as preconditioning agents, particularly during the perioperative period. PMID:17962069
ERIC Educational Resources Information Center
Wawro, Megan; Sweeney, George F.; Rabin, Jeffrey M.
2011-01-01
This paper reports on a study investigating students' ways of conceptualizing key ideas in linear algebra, with the particular results presented here focusing on student interactions with the notion of subspace. In interviews conducted with eight undergraduates, we found students' initial descriptions of subspace often varied substantially from…
NASA Astrophysics Data System (ADS)
Zhang, Xing; Wen, Gongjian
2015-10-01
Anomaly detection (AD) becomes increasingly important in hyperspectral imagery analysis, with many practical applications. The local orthogonal subspace projection (LOSP) detector is a popular anomaly detector which exploits local endmembers/eigenvectors around the pixel under test (PUT) to construct the background subspace. However, this subspace only takes advantage of the spectral information, while the spatial correlation of the background clutter is neglected, which makes the anomaly detection result sensitive to the accuracy of the estimated subspace. In this paper, a local three-dimensional orthogonal subspace projection (3D-LOSP) algorithm is proposed. Firstly, through the joint use of both spectral and spatial information, three directional background subspaces are created along the image height direction, the image width direction and the spectral direction, respectively. Then, the three corresponding orthogonal subspaces are calculated. After that, each vector along the three directions of the local cube is projected onto the corresponding orthogonal subspace. Finally, a composite score is given through the three directional operators. In 3D-LOSP, anomalies are redefined as targets that are not only spectrally different from the background, but also spatially distinct. Thanks to the addition of the spatial information, the robustness of the anomaly detection result is greatly improved by the proposed 3D-LOSP algorithm. It is noteworthy that the proposed algorithm is an expansion of LOSP and this idea can inspire many other spectral-based anomaly detection methods. Experiments with real hyperspectral images have proved the stability of the detection result.
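The orthogonal subspace projection at the heart of both LOSP and the proposed 3D-LOSP scores a vector by the energy left after the background subspace is projected out. A NumPy sketch of that single-direction operation (the three-directional combination of 3D-LOSP is omitted):

```python
import numpy as np

def osp_score(B, x):
    """Residual energy of vector x after projecting out the background
    subspace spanned by the columns of B; anomalies, which lie partly
    outside the background subspace, retain large residual energy."""
    # projector onto the orthogonal complement of span(B)
    P_perp = np.eye(len(x)) - B @ np.linalg.pinv(B)
    r = P_perp @ x
    return float(r @ r)
```

In 3D-LOSP this score would be computed along the height, width and spectral directions of the local cube and the three values combined into the composite detection statistic.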
Crystallizing highly-likely subspaces that contain an unknown quantum state of light
Teo, Yong Siah; Mogilevtsev, Dmitri; Mikhalychev, Alexander; Řeháček, Jaroslav; Hradil, Zdeněk
2016-01-01
In continuous-variable tomography, with finite data and limited computation resources, reconstruction of a quantum state of light is performed on a finite-dimensional subspace. In principle, the data themselves encode all information about the relevant subspace that physically contains the state. We provide a straightforward and numerically feasible procedure to uniquely determine the appropriate reconstruction subspace by extracting this information directly from the data for any given unknown quantum state of light and measurement scheme. This procedure makes use of the celebrated statistical principle of maximum likelihood, along with other validation tools, to grow an appropriate seed subspace into the optimal reconstruction subspace, much like the nucleation of a seed into a crystal. Apart from using the available measurement data, no other assumptions about the source or preconceived parametric model subspaces are invoked. This ensures that no spurious reconstruction artifacts are present in state reconstruction as a result of inappropriate choices of the reconstruction subspace. The procedure can be understood as the maximum-likelihood reconstruction for quantum subspaces, which is an analog to, and fully compatible with, that for quantum states. PMID:27905511
Totzeck, Matthias; Hendgen-Cotta, Ulrike B.; French, Brent A.; Rassaf, Tienush
2016-01-01
Although urgently needed in clinical practice, a cardioprotective therapeutic approach against myocardial ischemia/reperfusion injury remains to be established. Remote ischemic preconditioning (rIPC) and ischemic preconditioning (IPC) represent promising tools comprising three entities: the generation of a protective signal, the transfer of the signal to the target organ, and the response to the transferred signal resulting in cardioprotection. However, in light of recent scientific advances, many controversies arise regarding the efficacy of the underlying signaling. We here show methods for the generation of the signaling cascade by rIPC as well as IPC in a mouse model of in vivo myocardial ischemia/reperfusion injury using highly reproducible approaches. This is accomplished by taking advantage of easily applicable preconditioning strategies compatible with the clinical setting. We describe methods for using laser Doppler perfusion imaging to monitor the cessation and recovery of perfusion in real time. The effects of preconditioning on cardiac function can also be assessed using ultrasound or magnetic resonance imaging approaches. On a cellular level, we confirm how tissue injury can be monitored using histological assessment of infarct size in conjunction with immunohistochemistry to assess both aspects in a single specimen. Finally, we outline how the rIPC-associated signaling can be transferred to the target cell via conservation of the signal in the humoral (blood) compartment. This compilation of experimental protocols, including a conditioning regimen comparable to the clinical setting, should prove useful to both beginners and experts in the field of myocardial infarction, supplying detailed procedures as well as troubleshooting guides. PMID:28066791
Cellular and molecular neurobiology of brain preconditioning.
Cadet, Jean Lud; Krasnova, Irina N
2009-02-01
The tolerant brain which is a consequence of adaptation to repeated nonlethal insults is accompanied by the upregulation of protective mechanisms and the downregulation of prodegenerative pathways. During the past 20 years, evidence has accumulated to suggest that protective mechanisms include increased production of chaperones, trophic factors, and other antiapoptotic proteins. In contrast, preconditioning can cause substantial dampening of the organism's metabolic state and decreased expression of proapoptotic proteins. Recent microarray analyses have also helped to document a role of several molecular pathways in the induction of the brain refractory state. The present review highlights some of these findings and suggests that a better understanding of these mechanisms will inform treatment of a number of neuropsychiatric disorders.
A fast, preconditioned conjugate gradient Toeplitz solver
NASA Technical Reports Server (NTRS)
Pan, Victor; Schreiber, Robert
1989-01-01
A simple factorization is given of an arbitrary hermitian, positive definite matrix in which the factors are well-conditioned, hermitian, and positive definite. In fact, given knowledge of the extreme eigenvalues of the original matrix A, an optimal improvement can be achieved, making the condition numbers of each of the two factors equal to the square root of the condition number of A. This technique is then applied to the solution of hermitian, positive definite Toeplitz systems. Large linear systems with hermitian, positive definite Toeplitz matrices arise in some signal processing applications. A stable fast algorithm is given for solving these systems, based on the preconditioned conjugate gradient method. The algorithm exploits the Toeplitz structure to reduce the cost of an iteration to O(n log n) by applying the fast Fourier transform to compute matrix-vector products. The matrix factorization is used as a preconditioner.
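The O(n log n) matrix-vector product underlying this algorithm (embedding the Toeplitz matrix in a 2n circulant and diagonalizing with the FFT) can be sketched as follows. This is a minimal illustration under our own naming; the plain, unpreconditioned CG loop stands in for the paper's preconditioned iteration.

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    """Multiply the Toeplitz matrix with first column c and first row r by x
    in O(n log n): embed it in a 2n-by-2n circulant and apply the FFT."""
    n = len(x)
    col = np.concatenate([c, [0.0], r[:0:-1]])   # first column of the circulant
    y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(np.concatenate([x, np.zeros(n)])))
    return y[:n].real                            # first n entries give T @ x

def cg(matvec, b, tol=1e-10, maxit=200):
    # plain conjugate gradients; a preconditioner solve would be inserted here
    x = np.zeros_like(b)
    r = b.copy(); p = r.copy(); rs = r @ r
    for _ in range(maxit):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```

Because each CG iteration needs only one such matvec, the overall cost per iteration is dominated by three FFTs of length 2n.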
Remote ischaemic preconditioning: closer to the mechanism?
Gleadle, Jonathan M.; Mazzone, Annette
2016-01-01
Brief periods of ischaemia followed by reperfusion of one tissue such as skeletal muscle can confer subsequent protection against ischaemia-induced injury in other organs such as the heart. Substantial evidence of this effect has been accrued in experimental animal models. However, the translation of this phenomenon to its use as a therapy in ischaemic disease has been largely disappointing without clear evidence of benefit in humans. Recently, innovative experimental observations have suggested that remote ischaemic preconditioning (RIPC) may be largely mediated through hypoxic inhibition of the oxygen-sensing enzyme PHD2, leading to enhanced levels of alpha-ketoglutarate and subsequent increases in circulating kynurenic acid (KYNA). These observations provide vital insights into the likely mechanisms of RIPC and a route to manipulating this mechanism towards therapeutic benefit by direct alteration of KYNA, alpha-ketoglutarate levels, PHD inhibition, or pharmacological targeting of the incompletely understood cardioprotective mechanism activated by KYNA. PMID:28163901
Remote ischemic preconditioning enhances fracture healing
Çatma, Mehmet Faruk; Şeşen, Hakan; Aydın, Aytekin; Ünlü, Serhan; Demirkale, İsmail; Altay, Murat
2015-01-01
Purpose: We hypothesized that RIP accelerates fracture healing. Methods: In rats (n = 48), ischemic preconditioning was performed by applying an intermittent pneumatic tourniquet to the fractured hind limb for 7 cycles of 5 min each (35 min in total). Results: We observed greater callus maturity in the RIP group than in controls at the first week after fracture (p < 0.0001). Serum MDA levels were significantly lower in the RIP group at the first week after fracture; however, there were no significant differences at the 3rd and 5th weeks (p = 0.0001, p = 0.725, p = 0.271, respectively). Conclusions: Greater callus maturity was obtained in the RIP group. PMID:26566314
Hyperbaric oxygen preconditioning attenuates postoperative cognitive impairment in aged rats.
Sun, Li; Xie, Keliang; Zhang, Changsheng; Song, Rui; Zhang, Hong
2014-06-18
Cognitive decline after surgery in the elderly population is a major clinical problem with high morbidity. Hyperbaric oxygen (HBO) preconditioning can induce significant neuroprotection against acute neurological injury. We hypothesized that HBO preconditioning would prevent the development of postoperative cognitive impairment. Elderly male rats (20 months old) underwent stabilized tibial fracture operation under general anesthesia after HBO preconditioning (once a day for 5 days). Separate cohorts of animals were tested for cognitive function with fear conditioning and Y-maze tests, or euthanized at different times to assess the blood-brain barrier integrity, systemic and hippocampal proinflammatory cytokines, and caspase-3 activity. Animals exhibited significant cognitive impairment evidenced by a decreased percentage of freezing time and an increased number of learning trials on days 1, 3, and 7 after surgery, which were significantly prevented by HBO preconditioning. Furthermore, HBO preconditioning significantly ameliorated the increase in serum and hippocampal proinflammatory cytokines tumor necrosis factor-α, interleukin-1β (IL-1β), IL-6, and high-mobility group protein 1 in surgery-challenged animals. Moreover, HBO preconditioning markedly improved blood-brain barrier integrity and caspase-3 activity in the hippocampus of surgery-challenged animals. These findings suggest that HBO preconditioning could significantly mitigate surgery-induced cognitive impairment, which is strongly associated with the reduction of systemic and hippocampal proinflammatory cytokines and caspase-3 activity.
Discrete sensitivity derivatives of the Navier-Stokes equations with a parallel Krylov solver
NASA Technical Reports Server (NTRS)
Ajmani, Kumud; Taylor, Arthur C., III
1994-01-01
This paper solves an 'incremental' form of the sensitivity equations derived by differentiating the discretized thin-layer Navier-Stokes equations with respect to certain design variables of interest. The equations are solved with a parallel, preconditioned Generalized Minimal RESidual (GMRES) solver on a distributed-memory architecture. The 'serial' sensitivity analysis code is parallelized by using the Single Program Multiple Data (SPMD) programming model, domain decomposition techniques, and message-passing tools. Sensitivity derivatives are computed for low and high Reynolds number flows over a NACA 1406 airfoil on a 32-processor Intel Hypercube, and found to be identical to those computed on a single-processor Cray Y-MP. It is estimated that the parallel sensitivity analysis code has to be run on 40-50 processors of the Intel Hypercube in order to match the single-processor processing time of a Cray Y-MP.
Multiresolution subspace-based optimization method for inverse scattering problems.
Oliveri, Giacomo; Zhong, Yu; Chen, Xudong; Massa, Andrea
2011-10-01
This paper investigates an approach to inverse scattering problems based on the integration of the subspace-based optimization method (SOM) within a multifocusing scheme in the framework of the contrast source formulation. The scattering equations are solved by a nested three-step procedure composed of (a) an outer multiresolution loop dealing with the identification of the regions of interest within the investigation domain through an iterative information-acquisition process, (b) a spectrum analysis step devoted to the reconstruction of the deterministic components of the contrast sources, and (c) an inner optimization loop aimed at retrieving the ambiguous components of the contrast sources through a conjugate gradient minimization of a suitable objective function. A set of representative reconstruction results is discussed to provide numerical evidence of the effectiveness of the proposed algorithmic approach as well as to assess the features and potentialities of the multifocusing integration in comparison with the state-of-the-art SOM implementation.
Inverse transport calculations in optical imaging with subspace optimization algorithms
NASA Astrophysics Data System (ADS)
Ding, Tian; Ren, Kui
2014-09-01
Inverse boundary value problems for the radiative transport equation play an important role in optics-based medical imaging techniques such as diffuse optical tomography (DOT) and fluorescence optical tomography (FOT). Despite the rapid progress in the mathematical theory and numerical computation of these inverse problems in recent years, developing robust and efficient reconstruction algorithms remains a challenging task and an active research topic. We propose here a robust reconstruction method based on subspace minimization techniques. The method splits the unknown transport solution (or a functional of it) into low-frequency and high-frequency components, and uses singular value decomposition to analytically recover part of the low-frequency information. Minimization is then applied to recover part of the high-frequency components of the unknowns. We present some numerical simulations with synthetic data to demonstrate the performance of the proposed algorithm.
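The low/high-frequency split described above can be illustrated on a toy linear model: the component of the unknown lying in the span of the top right singular vectors of a matrix-valued forward operator is recovered analytically from the data, leaving only the complementary components to iterative minimization. The function and variable names here are our own assumptions, and the iterative stage is omitted.

```python
import numpy as np

def recover_low_frequency(A, y, k):
    """Analytically recover the component of the unknown x in the span of the
    top-k right singular vectors of the forward operator A from noise-free
    data y = A @ x. The complementary ('high-frequency') part would be left
    to an iterative minimization, which this sketch omits."""
    u, s, vt = np.linalg.svd(A, full_matrices=False)
    coeff = (u[:, :k].T @ y) / s[:k]   # least-squares coefficients, mode by mode
    return vt[:k].T @ coeff            # equals V_k V_k^T x for noise-free data
```

The point of the split is that this part of the solution costs one SVD and a few inner products, so the subsequent minimization only has to search the (better-conditioned) complement.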
Holonomic Quantum Computation by Time dependent Decoherence Free Subspaces
NASA Astrophysics Data System (ADS)
Lin, J. N.; Liang, Y.; Yang, H. D.; Gui, J.; Wu, S. L.
2017-01-01
We show how to realize nonadiabatic holonomic quantum computation in time-dependent decoherence-free subspaces (TDFSs). In our scheme, the holonomy is generated not by computational bases in DFSs but by time-dependent bases of TDFSs. Therefore, unlike in traditional DFSs, ancillary systems are not necessary to induce the holonomy, which saves qubits used in holonomic quantum computation. We also analyze the symmetry of the N-qubit system coupled to a common squeezed field. The results show that several independent DFSs are present in the Hilbert space, determined by the eigenvalues of the Lindblad operators. Combining the scheme and the model proposed in this paper, we show that a one-qubit controllable phase gate can be realized with only two physical qubits.
Spatial Bell-State Generation without Transverse Mode Subspace Postselection
NASA Astrophysics Data System (ADS)
Kovlakov, E. V.; Bobrov, I. B.; Straupe, S. S.; Kulik, S. P.
2017-01-01
Spatial states of single photons and spatially entangled photon pairs are becoming an important resource in quantum communication. This additional degree of freedom provides an almost unlimited information capacity, making the development of high-quality sources of spatial entanglement a well-motivated research direction. We report an experimental method for generation of photon pairs in a maximally entangled spatial state. In contrast to existing techniques, the method does not require postselection of a particular subspace of spatial modes and allows one to use the full photon flux from the nonlinear crystal, providing a tool for creating high-brightness sources of pure spatially entangled photons. Such sources are a prerequisite for emerging applications in free-space quantum communication.
Universal quantum computation in waveguide QED using decoherence free subspaces
NASA Astrophysics Data System (ADS)
Paulisch, V.; Kimble, H. J.; González-Tudela, A.
2016-04-01
The interaction of quantum emitters with one-dimensional photon-like reservoirs induces strong and long-range dissipative couplings that give rise to the emergence of the so-called decoherence free subspaces (DFSs) which are decoupled from dissipation. When introducing weak perturbations on the emitters, e.g., driving, the strong collective dissipation enforces an effective coherent evolution within the DFS. In this work, we show explicitly how by introducing single-site resolved drivings, we can use the effective dynamics within the DFS to design a universal set of one and two-qubit gates within the DFS of an ensemble of two-level atom-like systems. Using Liouvillian perturbation theory we calculate the scaling with the relevant figures of merit of the systems, such as the Purcell factor and imperfect control of the drivings. Finally, we compare our results with previous proposals using atomic Λ systems in leaky cavities.
Preconditioned Minimal Residual Methods for Chebyshev Spectral Calculations
NASA Technical Reports Server (NTRS)
Canuto, C.; Quarteroni, A.
1983-01-01
The problem of preconditioning the pseudospectral Chebyshev approximation of an elliptic operator is considered. The numerical sensitivity to variations of the coefficients of the operator is investigated for two classes of preconditioning matrices: one arising from finite differences, the other from finite elements. The preconditioned system is solved by a conjugate gradient type method, and by a DuFort-Frankel method with dynamic parameters. The methods are compared on some test problems with the Richardson method and with the minimal residual Richardson method.
Preconditioning principles for preventing sports injuries in adolescents and children.
Dollard, Mark D; Pontell, David; Hallivis, Robert
2006-01-01
Preseason preconditioning can be accomplished well over a 4-week period with a mandatory period of rest as we have discussed. Athletic participation must be guided by a gradual increase of skills performance in the child assessed after a responsible preconditioning program applying physiologic parameters as outlined. Clearly, designing a preconditioning program is a dynamic process when accounting for all the variables in training discussed so far. Despite the physiologic demands of sport and training, we still need to acknowledge the psychologic maturity and welfare of the child so as to ensure that the sport environment is a wholesome and emotionally rewarding experience.
Maximum Likelihood Estimation for Multiple Camera Target Tracking on Grassmann Tangent Subspace.
Amini-Omam, Mojtaba; Torkamani-Azar, Farah; Ghorashi, Seyed Ali
2016-11-15
In this paper, we introduce a likelihood model for tracking the location of an object in multiple-view systems. Our proposed model transforms the conventional nonlinear Euclidean estimation model into an estimation model on the manifold tangent subspace. We show that by decomposing the input noise into two parts and describing the model by the exponential map, real observations in Euclidean geometry can be transformed to the manifold tangent subspace. Moreover, using the obtained tangent-subspace likelihood function, we propose two maximum likelihood estimation approaches, one iterative and one non-iterative, whose good performance is shown by numerical results.
Riemannian Optimization Method on Generalized Flag Manifolds for Complex and Subspace ICA
NASA Astrophysics Data System (ADS)
Nishimori, Yasunori; Akaho, Shotaro; Plumbley, Mark D.
2006-11-01
In this paper we introduce a new class of manifolds, generalized flag manifolds, for the complex and subspace ICA problems. A generalized flag manifold is a manifold consisting of subspaces which are orthogonal to each other. The class of generalized flag manifolds includes the class of Grassmann manifolds. We extend the Riemannian optimization method to this new class of manifolds by deriving formulas for the natural gradient and geodesics on these manifolds. We show how the complex and subspace ICA problems can be solved by optimizing cost functions on a generalized flag manifold. Computer simulations demonstrate that our algorithm gives good performance compared with the ordinary gradient descent method.
33 CFR 183.320 - Preconditioning for tests.
Code of Federal Regulations, 2014 CFR
2014-07-01
... Engines of 2 Horsepower or Less General § 183.320 Preconditioning for tests. A boat must meet the... be sealed. (f) The boat must be keel down in the water. (g) The boat must be swamped, allowing...
33 CFR 183.320 - Preconditioning for tests.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Engines of 2 Horsepower or Less General § 183.320 Preconditioning for tests. A boat must meet the... be sealed. (f) The boat must be keel down in the water. (g) The boat must be swamped, allowing...
40 CFR 1065.516 - Sample system decontamination and preconditioning.
Code of Federal Regulations, 2014 CFR
2014-07-01
... Cycles § 1065.516 Sample system decontamination and preconditioning. This section describes how to manage... purified air or nitrogen. (3) When calculating zero emission levels, apply all applicable...
The Galvanotactic Migration of Keratinocytes is Enhanced by Hypoxic Preconditioning
Guo, Xiaowei; Jiang, Xupin; Ren, Xi; Sun, Huanbo; Zhang, Dongxia; Zhang, Qiong; Zhang, Jiaping; Huang, Yuesheng
2015-01-01
The endogenous electric field (EF)-directed migration of keratinocytes (galvanotaxis) into wounds is an essential step in wound re-epithelialization. Hypoxia, which occurs immediately after injury, acts as an early stimulus to initiate the healing process; however, the mechanisms for this effect, remain elusive. We show here that the galvanotactic migration of keratinocytes was enhanced by hypoxia preconditioning as a result of the increased directionality rather than the increased motility of keratinocytes. This enhancement was both oxygen tension- and preconditioning time-dependent, with the maximum effects achieved using 2% O2 preconditioning for 6 hours. Hypoxic preconditioning (2% O2, 6 hours) decreased the threshold voltage of galvanotaxis to < 25 mV/mm, whereas this value was between 25 and 50 mV/mm in the normal culture control. In a scratch-wound monolayer assay in which the applied EF was in the default healing direction, hypoxic preconditioning accelerated healing by 1.38-fold compared with the control conditions. Scavenging of the induced ROS by N-acetylcysteine (NAC) abolished the enhanced galvanotaxis and the accelerated healing by hypoxic preconditioning. Our data demonstrate a novel and unsuspected role of hypoxia in supporting keratinocyte galvanotaxis. Enhancing the galvanotactic response of cells might therefore be a clinically attractive approach to induce improved wound healing. PMID:25988491
A Weakest Precondition Approach to Robustness
NASA Astrophysics Data System (ADS)
Balliu, Musard; Mastroeni, Isabella
With the increasing complexity of information management computer systems, security becomes a real concern. E-government, web-based financial transactions, and military and health care information systems are only a few examples where large amounts of information can reside on different hosts distributed worldwide. It is clear that any disclosure or corruption of confidential information in these contexts can be fatal. Information flow controls constitute an appealing and promising technology to protect both data confidentiality and data integrity. The certification of the security degree of a program that runs in untrusted environments still remains an open problem in the area of language-based security. Robustness asserts that an active attacker, who can modify program code at some fixed points (holes), is unable to disclose more private information than a passive attacker, who merely observes unclassified data. In this paper, we extend a method recently proposed for checking declassified non-interference in the presence of passive attackers only, in order to check robustness by means of weakest precondition semantics. In particular, this semantics simulates the kind of analysis that can be performed by an attacker, i.e., from public output towards private input. The choice of semantics allows us to distinguish between different attack models and to characterize the security of applications in different scenarios.
Responsive corneosurfametry following in vivo skin preconditioning.
Uhoda, E; Goffin, V; Pierard, G E
2003-12-01
Skin is subjected to many environmental threats, some of which alter the structure and function of the stratum corneum. Among them, surfactants are recognized factors that may influence irritant contact dermatitis. The present study was conducted to compare the variations in skin capacitance and corneosurfametry (CSM) reactivity before and after skin exposure to repeated subclinical injuries by 2 hand dishwashing liquids. A forearm immersion test was performed on 30 healthy volunteers; 2 daily soak sessions were performed for 5 days. At inclusion and on the day following the last soak session, skin capacitance was measured and cyanoacrylate skin-surface strippings were harvested. The latter specimens were used for ex vivo microwave CSM. Both types of assessments clearly differentiated the 2 hand dishwashing liquids. The forearm immersion test increased the discriminant sensitivity of CSM. Intact skin capacitance did not predict CSM data. By contrast, a significant correlation was found between the post-test conductance and the corresponding CSM data. In conclusion, a forearm immersion test under realistic conditions can discriminate the irritation potential between surfactant-based products by measuring skin conductance and performing CSM. In vivo skin preconditioning by surfactants increases CSM sensitivity to the same surfactants.
Preconditioning and postconditioning: new strategies for cardioprotection.
Hausenloy, D J; Yellon, D M
2008-06-01
Despite optimal therapy, the morbidity and mortality of coronary heart disease (CHD) remains significant, particularly in patients with diabetes or the metabolic syndrome. New strategies for cardioprotection are therefore required to improve the clinical outcomes in patients with CHD. Ischaemic preconditioning (IPC) as a cardioprotective strategy has not fulfilled its clinical potential, primarily because of the need to intervene before the index ischaemic event, which is impossible to predict in patients presenting with an acute myocardial infarction (AMI). However, emerging studies suggest that IPC-induced protection is mediated in part by signalling transduction pathways recruited at the time of myocardial reperfusion, creating the possibility of harnessing its cardioprotective potential by intervening at the time of reperfusion. In this regard, the recently described phenomenon of ischaemic postconditioning (IPost) has attracted great interest, particularly as it represents an intervention which can be applied at the time of myocardial reperfusion for patients presenting with an AMI. Interestingly, the signal transduction pathways which underlie its protection are similar to those recruited by IPC, creating a potential common cardioprotective pathway which can be recruited at the time of myocardial reperfusion through the use of appropriate pharmacological agents given as adjuvant therapy to current myocardial reperfusion strategies such as thrombolysis and primary percutaneous coronary intervention for patients presenting with an AMI. This article provides a brief overview of IPC and IPost and describes the common signal transduction pathway they both appear to recruit at the time of myocardial reperfusion, the pharmacological manipulation of which has the potential to generate new strategies for cardioprotection.
Singh, Amritpal; Randhawa, Puneet Kaur; Bali, Anjana; Singh, Nirmal; Jaggi, Amteshwar Singh
2017-02-14
The cardioprotective effects of remote hind limb preconditioning (RIPC) are well known, but the mechanisms by which protection occurs remain to be explored. Therefore, the present study was designed to investigate the role of TRPV and CGRP in adenosine- and remote-preconditioning-induced cardioprotection in rats, using sumatriptan, a CGRP release inhibitor, and ruthenium red, a TRPV inhibitor. For remote preconditioning, a pressure cuff was tied around the hind limb of the rat and inflated with air to 150 mmHg to produce ischemia in the hind limb; the pressure was released for reperfusion. Four cycles of ischemia and reperfusion, each consisting of 5 min of inflation and 5 min of deflation of the pressure cuff, were used to produce remote limb preconditioning. An ex vivo Langendorff isolated rat heart model was used to induce ischemia-reperfusion injury by 30 min of global ischemia followed by 120 min of reperfusion. RIPC produced a significant decrease in ischemia-reperfusion-induced myocardial injury, in terms of increases in LDH, CK and infarct size and decreases in LVDP, +dp/dtmax and -dp/dtmin. Moreover, pharmacological preconditioning with adenosine produced cardioprotective effects in a manner similar to RIPC. Pretreatment with sumatriptan, a CGRP release blocker, abolished the cardioprotective effects induced by RIPC and adenosine preconditioning. Administration of ruthenium red, a TRPV inhibitor, also abolished adenosine preconditioning-induced cardioprotection. It may be proposed that the cardioprotective effects of adenosine and remote preconditioning are possibly mediated through activation of TRPV channels and the consequent release of CGRP.
Building Ultra-Low False Alarm Rate Support Vector Classifier Ensembles Using Random Subspaces
Chen, B Y; Lemmond, T D; Hanley, W G
2008-10-06
This paper presents the Cost-Sensitive Random Subspace Support Vector Classifier (CS-RS-SVC), a new learning algorithm that combines random subspace sampling and bagging with Cost-Sensitive Support Vector Classifiers to more effectively address detection applications burdened by unequal misclassification requirements. When compared to its conventional, non-cost-sensitive counterpart on a two-class signal detection application, random subspace sampling is shown to very effectively leverage the additional flexibility offered by the Cost-Sensitive Support Vector Classifier, yielding a more than four-fold increase in the detection rate at a false alarm rate (FAR) of zero. Moreover, the CS-RS-SVC is shown to be fairly robust to constraints on the feature subspace dimensionality, enabling reductions in computation time of up to 82% with minimal performance degradation.
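The combination described above can be prototyped in a few lines. The sketch below pairs random subspace sampling and bagging with a class-weighted SVC; the member count, subspace size, class-weight penalty and synthetic data are illustrative choices, not the authors' settings.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=20,
                           n_informative=10, random_state=0)

n_members, subspace_dim = 11, 8
members = []
for _ in range(n_members):
    feats = rng.choice(X.shape[1], size=subspace_dim, replace=False)  # random subspace
    boot = rng.integers(0, len(X), size=len(X))                       # bagging resample
    clf = SVC(class_weight={0: 3.0, 1: 1.0})   # heavier penalty on class-0 errors
    clf.fit(X[boot][:, feats], y[boot])
    members.append((feats, clf))

def ensemble_predict(Xnew):
    votes = np.mean([clf.predict(Xnew[:, f]) for f, clf in members], axis=0)
    return (votes >= 0.5).astype(int)          # majority vote over members

train_acc = float(np.mean(ensemble_predict(X) == y))
```

Skewing the false-alarm/detection trade-off is then just a matter of changing the `class_weight` ratio per member, with the vote threshold offering a second knob.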
Subspace learning for Mumford-Shah-model-based texture segmentation through texture patches.
Law, Yan Nei; Lee, Hwee Kuan; Yip, Andy M
2011-07-20
In this paper, we develop a robust and effective algorithm for texture segmentation and feature selection. The approach is to incorporate a patch-based subspace learning technique into the subspace Mumford-Shah (SMS) model to make the minimization of the SMS model robust and accurate. The proposed method is fully unsupervised in that it removes the need to specify training data, which is required by existing methods for the same model. We further propose a novel (to our knowledge) pairwise dissimilarity measure for pixels. Its novelty lies in the use of the relevance scores of the features of each pixel to improve its discriminating power. Some superior results are obtained compared to existing unsupervised algorithms, which do not use a subspace approach. This confirms the usefulness of the subspace approach and the proposed unsupervised algorithm.
Visual exploration of high-dimensional data through subspace analysis and dynamic projections
Liu, S.; Wang, B.; Thiagarajan, J. J.; Bremer, P. -T.; Pascucci, V.
2015-06-01
Here, we introduce a novel interactive framework for visualizing and exploring high-dimensional datasets based on subspace analysis and dynamic projections. We assume the high-dimensional dataset can be represented by a mixture of low-dimensional linear subspaces with mixed dimensions, and provide a method to reliably estimate the intrinsic dimension and linear basis of each subspace extracted from the subspace clustering. Subsequently, we use these bases to define unique 2D linear projections as viewpoints from which to visualize the data. To understand the relationships among the different projections and to discover hidden patterns, we connect these projections through dynamic projections that create smooth animated transitions between pairs of projections. We introduce the view transition graph, which provides flexible navigation among these projections to facilitate an intuitive exploration. Finally, we provide detailed comparisons with related systems, and use real-world examples to demonstrate the novelty and usability of our proposed framework.
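One simple way to realize a smooth transition between two 2D projection bases is to blend and re-orthonormalize, as in the sketch below; this is an illustrative stand-in for the paper's dynamic-projection construction (a Grassmannian geodesic would give a more principled path, and the blend degenerates if it loses rank).

```python
import numpy as np

def frame_path(A, B, n_steps=10):
    """Interpolate between two d x 2 orthonormal bases A and B by linear
    blending followed by QR re-orthonormalization. Each returned frame is
    itself an orthonormal 2D projection basis."""
    frames = []
    for t in np.linspace(0.0, 1.0, n_steps):
        Q, _ = np.linalg.qr((1.0 - t) * A + t * B)
        frames.append(Q[:, :2])
    return frames

# endpoints: two disjoint coordinate planes in 5D
A = np.eye(5)[:, :2]
B = np.eye(5)[:, 2:4]
frames = frame_path(A, B, 10)
```

Because every intermediate frame is orthonormal, the data can be projected and redrawn at each step, producing the animated transition between the two viewpoints.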
Modulated Hebb-Oja learning rule--a method for principal subspace analysis.
Jankovic, Marko V; Ogawa, Hidemitsu
2006-03-01
This paper presents an analysis of the recently proposed modulated Hebb-Oja (MHO) method, which performs a linear mapping to a lower-dimensional subspace; the principal component subspace is the case analyzed here. Compared to some other well-known methods for obtaining the principal component subspace (e.g., Oja's Subspace Learning Algorithm), the proposed method has one feature that could be seen as desirable from the biological point of view: the synaptic efficacy learning rule does not need explicit information about the values of the other efficacies to make an individual efficacy modification. Also, the simplicity of the "neural circuits" that perform global computations, and the fact that their number does not depend on the number of input and output neurons, could be seen as good features of the proposed method.
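For context, the baseline the paper compares against, Oja's Subspace Learning Algorithm, can be sketched in a few lines. Note that the update for each synaptic column uses the full output vector y, which is exactly the cross-efficacy information the MHO rule avoids. Data dimensions and step size below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
d, k = 5, 2
# synthetic data whose top-2 principal subspace is span(e1, e2)
cov = np.diag([5.0, 4.0, 0.1, 0.1, 0.1])
X = rng.multivariate_normal(np.zeros(d), cov, size=5000)

W = 0.1 * rng.standard_normal((d, k))
eta = 0.01
for x in X:
    y = W.T @ x                                          # output of the linear map
    W += eta * (np.outer(x, y) - W @ np.outer(y, y))     # Oja's subspace rule

Q, _ = np.linalg.qr(W)     # orthonormal basis of the learned span
P = Q @ Q.T                # projector onto the learned subspace
```

After one pass the span of W approximates the principal component subspace: directions of high variance are (nearly) preserved by P, low-variance directions are suppressed.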
Joseph, Ilon
2014-05-27
Jacobian-free Newton-Krylov (JFNK) algorithms are a potentially powerful class of methods for solving the problem of coupling codes that address different physics models. As communication capability between individual submodules varies, different choices of coupling algorithms are required. The more communication that is available, the more possible it becomes to exploit the simple sparsity pattern of the Jacobian, albeit of a large system. The less communication that is available, the more dense the Jacobian matrices become, and new types of preconditioners must be sought to efficiently take large time steps. In general, methods that use constrained or reduced subsystems can offer a compromise in complexity. The specific problem of coupling a fluid plasma code to a kinetic neutrals code is discussed as an example.
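The kernel of any JFNK method is a matrix-free Jacobian-vector product, which is what makes it attractive for coupling codes that can only exchange residual evaluations. A minimal sketch using SciPy's GMRES follows, with a small algebraic test problem standing in for the coupled-codes setting.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def jfnk(F, u0, tol=1e-10, max_newton=20, eps=1e-7):
    """Jacobian-free Newton-Krylov sketch: each Newton step solves
    J du = -F(u) with GMRES, approximating J v by a finite difference of F,
    so the Jacobian is never formed explicitly."""
    u = np.asarray(u0, dtype=float).copy()
    for _ in range(max_newton):
        Fu = F(u)
        if np.linalg.norm(Fu) < tol:
            break
        Jv = lambda v: (F(u + eps * v) - Fu) / eps    # matrix-free J @ v
        J = LinearOperator((u.size, u.size), matvec=Jv)
        du, info = gmres(J, -Fu)
        u = u + du
    return u

# example: solve u0^2 + u1^2 = 4, u0 = u1  ->  u = (sqrt(2), sqrt(2))
root = jfnk(lambda u: np.array([u[0]**2 + u[1]**2 - 4.0, u[0] - u[1]]),
            np.array([1.0, 1.0]))
```

In the coupling context, `F` would wrap residual evaluations of the individual submodules; the communication budget then determines whether a preconditioner for the GMRES solve can exploit the Jacobian's sparsity.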
Subspace learning of dynamics on a shape manifold: a generative modeling approach.
Yi, Sheng; Krim, Hamid
2014-11-01
In this paper, we propose a novel subspace learning algorithm for shape dynamics. Compared to previous works, our method is invertible and better characterizes the nonlinear geometry of a shape manifold while retaining good computational efficiency. Using a parallel moving frame on a shape manifold, each path of shape dynamics is uniquely represented in a subspace spanned by the moving frame, given an initial condition (the starting point and starting frame). Mathematically, such a representation may be formulated as solving a manifold-valued differential equation, which provides a generative model of high-dimensional shape dynamics in a lower-dimensional subspace. Given the parallelism and a path on a shape manifold, the parallel moving frame along the path is uniquely determined up to the choice of the starting frame. With an initial frame, we minimize the reconstruction error from the subspace to the shape manifold. Such an optimization characterizes the Riemannian geometry of the manifold well by imposing parallelism constraints (equivalent to a Riemannian metric) on the moving frame. The parallelism in this paper is defined by a Levi-Civita connection, which is consistent with the Riemannian metric of the shape manifold. In the experiments, the performance of the subspace learning is extensively evaluated in two scenarios: 1) how the high-dimensional geometry is characterized in the subspace and 2) how the reconstruction compares with the original shape dynamics. The results demonstrate and validate the theoretical advantages of the proposed approach.
NASA Astrophysics Data System (ADS)
Sekihara, Kensuke; Kawabata, Yuya; Ushio, Shuta; Sumiya, Satoshi; Kawabata, Shigenori; Adachi, Yoshiaki; Nagarajan, Srikantan S.
2016-06-01
Objective. In functional electrophysiological imaging, signals are often contaminated by interference that can be of considerable magnitude compared to the signals of interest. This paper proposes a novel algorithm for removing such interference that does not require separate noise measurements. Approach. The algorithm is based on a dual definition of the signal subspace in the spatial and time domains. Since the algorithm makes use of this duality, it is named the dual signal subspace projection (DSSP). The DSSP algorithm first projects the columns of the measured data matrix onto the inside and outside of the spatial-domain signal subspace, creating a set of two preprocessed data matrices. The intersection of the row spans of these two matrices is estimated as the time-domain interference subspace. The original data matrix is then projected onto the subspace orthogonal to this interference subspace. Main results. The DSSP algorithm is validated using computer simulation and two sets of real biomagnetic data: spinal cord evoked field data measured from a healthy volunteer and magnetoencephalography data from a patient with a vagus nerve stimulator. Significance. The proposed DSSP algorithm is effective for removing overlapping interference in a wide variety of biomagnetic measurements.
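The two-projection idea can be compressed into a short sketch. The version below assumes the spatial signal basis `Us` is known and estimates the row-span intersection from principal vectors of the two row-space bases; the paper's actual estimation details differ, and the synthetic demo is purely illustrative.

```python
import numpy as np

def row_basis(M, tol=1e-10):
    """Orthonormal basis (as rows) of the row space of M."""
    _, s, Vt = np.linalg.svd(M, full_matrices=False)
    return Vt[s > tol * s[0]]

def dssp_clean(B, Us, n_int):
    """B: sensors x time data; Us: sensors x r spatial signal basis
    (assumed known here); n_int: interference components to remove."""
    Ps = Us @ Us.T                            # spatial signal-subspace projector
    V1 = row_basis(Ps @ B)                    # time courses seen inside the subspace
    V2 = row_basis(B - Ps @ B)                # time courses seen outside it
    U, s, _ = np.linalg.svd(V1 @ V2.T)        # principal angles between row spaces
    Vint = (U.T @ V1)[:n_int]                 # shared rows = interference subspace
    return B - (B @ Vint.T) @ Vint            # project the interference out

# synthetic demo: 6 sensors, spatial signal subspace = first 3 channels,
# spatially broad interference 5 * a t^T leaking into both projections
rng = np.random.default_rng(0)
Us = np.eye(6)[:, :3]
S = np.zeros((3, 200)); S[:2] = rng.standard_normal((2, 200))
t = np.sin(np.linspace(0.0, 20.0, 200))
a = np.ones(6) / np.sqrt(6.0)
B = Us @ S + 5.0 * np.outer(a, t)
cleaned = dssp_clean(B, Us, n_int=1)
```

In this noiseless construction the interference time course lies in both row spans, so one principal vector recovers it exactly and the projection removes it while leaving the signal rows largely intact.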
Pattern recognition using maximum likelihood estimation and orthogonal subspace projection
NASA Astrophysics Data System (ADS)
Islam, M. M.; Alam, M. S.
2006-08-01
Hyperspectral sensor imagery (HSI) is a relatively new area of research; however, it is being extensively used in geology, agriculture, defense, intelligence and law enforcement applications. Much of the current research focuses on object detection with a low false alarm rate. Over the past several years, many object detection algorithms have been developed, including the linear detector, the quadratic detector and the adaptive matched filter. In those methods the available data cube was directly used to determine the background mean and the covariance matrix, assuming that the number of object pixels is low compared to that of the data pixels. In this paper, we have used the orthogonal subspace projection (OSP) technique to find the background matrix from the given image data. Our algorithm consists of three parts. In the first part, we calculate the background matrix using the OSP technique. In the second part, we determine the maximum likelihood estimates of the parameters. Finally, we consider the likelihood ratio, commonly known as the Neyman-Pearson quadratic detector, to recognize the objects. The proposed technique has been investigated via computer simulation, where excellent performance has been observed.
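The OSP step itself is compact: annihilate the background subspace, then match against the target signature. The sketch below shows only that stage (the maximum likelihood estimation and the Neyman-Pearson likelihood-ratio stage are omitted), with toy signatures rather than real spectra.

```python
import numpy as np

def osp_scores(X, U, d):
    """X: pixels x bands; U: bands x k background signatures; d: target
    signature. Each pixel is projected onto the orthogonal complement of
    the background subspace, then correlated with the target."""
    P = np.eye(U.shape[0]) - U @ np.linalg.pinv(U)   # background annihilator
    return X @ (P @ d)

# toy example: background along band 1, target along band 2
U = np.array([[1.0], [0.0], [0.0], [0.0]])
d = np.array([0.0, 1.0, 0.0, 0.0])
X = np.array([[3.0, 0.0, 0.0, 0.0],    # pure background pixel
              [3.0, 2.0, 0.0, 0.0]])   # background plus target pixel
scores = osp_scores(X, U, d)
```

The background-only pixel scores (near) zero while the target-bearing pixel scores high, which is the separation the subsequent likelihood-ratio detector exploits.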
Randomized Subspace Learning for Proline Cis-Trans Isomerization Prediction.
Al-Jarrah, Omar Y; Yoo, Paul D; Taha, Kamal; Muhaidat, Sami; Shami, Abdallah; Zaki, Nazar
2015-01-01
Proline residues are a common source of kinetic complications during folding. The X-Pro peptide bond is the only peptide bond for which the stability of the cis and trans conformations is comparable. The cis-trans isomerization (CTI) of X-Pro peptide bonds is a widely recognized rate-limiting factor that can not only induce additional slow phases in protein folding but also modify the millisecond and sub-millisecond dynamics of the protein. An accurate computational prediction of proline CTI is of great importance for the understanding of protein folding, splicing, cell signaling, and transmembrane active transport in both humans and animals. In our earlier work, we successfully developed a biophysically motivated proline CTI predictor utilizing a novel tree-based consensus model with a powerful metalearning technique, achieving 86.58 percent Q2 accuracy and 0.74 Mcc, a better result than the 70-73 percent Q2 accuracies reported in the literature on the well-referenced benchmark dataset. In this paper, we describe experiments with novel randomized subspace learning and bootstrap seeding techniques as an extension to our earlier work, the consensus models as well as entropy-based learning methods, to obtain better accuracy through a precise and robust learning scheme for proline CTI prediction.
Bohmian dynamics on subspaces using linearized quantum force.
Rassolov, Vitaly A; Garashchuk, Sophya
2004-04-15
In the de Broglie-Bohm formulation of quantum mechanics the time-dependent Schrodinger equation is solved in terms of quantum trajectories evolving under the influence of quantum and classical potentials. For a practical implementation that scales favorably with system size and is accurate for semiclassical systems, we use approximate quantum potentials. Recently, we have shown that optimization of the nonclassical component of the momentum operator in terms of fitting functions leads to the energy-conserving approximate quantum potential. In particular, linear fitting functions give the exact time evolution of a Gaussian wave packet in a locally quadratic potential and can describe the dominant quantum-mechanical effects in the semiclassical scattering problems of nuclear dynamics. In this paper we formulate the Bohmian dynamics on subspaces and define the energy-conserving approximate quantum potential in terms of optimized nonclassical momentum, extended to include the domain boundary functions. This generalization allows a better description of the non-Gaussian wave packets and general potentials in terms of simple fitting functions. The optimization is performed independently for each domain and each dimension. For linear fitting functions optimal parameters are expressed in terms of the first and second moments of the trajectory distribution. Examples are given for one-dimensional anharmonic systems and for the collinear hydrogen exchange reaction.
Independent vector analysis using subband and subspace nonlinearity
NASA Astrophysics Data System (ADS)
Na, Yueyue; Yu, Jian; Chai, Bianfang
2013-12-01
Independent vector analysis (IVA) is a recently proposed technique, one application of which is to solve the frequency-domain blind source separation problem. Compared with the traditional complex-valued independent component analysis plus permutation correction approach, the largest advantage of IVA is that the permutation problem is addressed directly by IVA rather than by resorting to an ad hoc permutation-resolving algorithm after separation of the sources in multiple frequency bands. In this article, two updates for IVA are presented. First, a novel subband construction method is introduced: IVA is conducted in subbands from high frequency to low frequency rather than in the full frequency band; because the inter-frequency dependencies in subbands are stronger, this allows a more efficient approach to the permutation problem. Second, to improve robustness against noise, the IVA nonlinearity is calculated only in the signal subspace, which is defined by the eigenvector associated with the largest eigenvalue of the signal correlation matrix. Different experiments were carried out on a software suite developed by us, and dramatic performance improvements were observed using the proposed methods. Lastly, as an example of a real-world application, IVA with the proposed updates was used to separate vibration components from high-speed train noise data.
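One reading of "nonlinearity calculated only in the signal subspace" is sketched below: project the multivariate source estimate onto the dominant eigenvector of its sample correlation matrix before evaluating the usual norm-based IVA score function. This is an illustrative simplification, not the authors' exact implementation.

```python
import numpy as np

def subspace_score(Y):
    """Y: frequency_bins x frames estimate of one source vector.
    Returns the norm-based IVA nonlinearity evaluated on the rank-1
    signal-subspace projection of Y (dominant eigenvector of Y Y^T / T)."""
    R = (Y @ Y.T) / Y.shape[1]                 # sample correlation matrix
    _, V = np.linalg.eigh(R)                   # ascending eigenvalues
    v = V[:, -1:]                              # dominant eigenvector
    Ys = v @ (v.T @ Y)                         # projection onto signal subspace
    return Ys / (np.linalg.norm(Ys, axis=0, keepdims=True) + 1e-12)

rng = np.random.default_rng(0)
phi = subspace_score(rng.standard_normal((4, 50)))
```

Components of Y orthogonal to the dominant eigenvector, which are more likely to be noise, no longer influence the score, which is the intended robustness gain.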
Steganalysis in high dimensions: fusing classifiers built on random subspaces
NASA Astrophysics Data System (ADS)
Kodovský, Jan; Fridrich, Jessica
2011-02-01
By working with high-dimensional representations of covers, modern steganographic methods are capable of preserving a large number of complex dependencies among individual cover elements and thus avoid detection using current best steganalyzers. Inevitably, steganalysis needs to start using high-dimensional feature sets as well. This brings two key problems - construction of good high-dimensional features and machine learning that scales well with respect to dimensionality. Depending on the classifier, high dimensionality may lead to problems with the lack of training data, infeasibly high complexity of training, degradation of generalization abilities, lack of robustness to cover source, and saturation of performance below its potential. To address these problems collectively known as the curse of dimensionality, we propose ensemble classifiers as an alternative to the much more complex support vector machines. Based on the character of the media being analyzed, the steganalyst first puts together a high-dimensional set of diverse "prefeatures" selected to capture dependencies among individual cover elements. Then, a family of weak classifiers is built on random subspaces of the prefeature space. The final classifier is constructed by fusing the decisions of individual classifiers. The advantage of this approach is its universality, low complexity, simplicity, and improved performance when compared to classifiers trained on the entire prefeature set. Experiments with the steganographic algorithms nsF5 and HUGO demonstrate the usefulness of this approach over current state of the art.
Supervised orthogonal discriminant subspace projects learning for face recognition.
Chen, Yu; Xu, Xiao-Hong
2014-02-01
In this paper, a new linear dimension reduction method called supervised orthogonal discriminant subspace projection (SODSP) is proposed, which addresses the high dimensionality of data and the small sample size problem. More specifically, given a set of data points in the ambient space, a novel weight matrix that describes the relationship between the data points is first built. In order to model the manifold structure, the class information is incorporated into the weight matrix. Based on this weight matrix, the local scatter matrix as well as the non-local scatter matrix is defined such that the neighborhood structure can be preserved. In order to enhance the recognition ability, we impose an orthogonality constraint on a graph-based maximum margin analysis, seeking a projection that maximizes the difference, rather than the ratio, between the non-local scatter and the local scatter. In this way, SODSP naturally avoids the singularity problem. Further, we develop an efficient and stable algorithm for implementing SODSP, especially on high-dimensional data sets. Moreover, theoretical analysis shows that LPP is a special instance of SODSP obtained by imposing certain constraints. Experiments on the ORL, Yale, Extended Yale face database B and FERET face databases are performed to test and evaluate the proposed algorithm. The results demonstrate the effectiveness of SODSP.
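Maximizing a difference of scatters under an orthogonality constraint reduces to a symmetric eigenproblem, which is why the singularity problem of ratio-based criteria disappears. The sketch below uses an arbitrary symmetric affinity graph in place of the paper's class-aware weight matrix, so it shows only the algebraic skeleton.

```python
import numpy as np

def difference_projection(X, W, k):
    """X: n x d data; W: n x n symmetric 0/1 neighbour affinity; k: target dim.
    Maximizes tr(V^T (S_nonlocal - S_local) V) subject to V^T V = I; the
    optimum is the top-k eigenvectors of the symmetric difference matrix,
    so no matrix inversion (and hence no singularity issue) is needed."""
    L_local = np.diag(W.sum(1)) - W              # local graph Laplacian
    Wn = 1.0 - W                                 # complement (non-local) graph
    np.fill_diagonal(Wn, 0.0)
    L_non = np.diag(Wn.sum(1)) - Wn              # non-local graph Laplacian
    M = X.T @ (L_non - L_local) @ X              # symmetric d x d
    _, evecs = np.linalg.eigh(M)                 # ascending eigenvalues
    return evecs[:, ::-1][:, :k]                 # top-k, mutually orthonormal

rng = np.random.default_rng(0)
X = rng.standard_normal((12, 5))
A = rng.random((12, 12)) < 0.3
W = np.triu(A, 1); W = (W | W.T).astype(float)   # symmetric 0/1 affinity
V = difference_projection(X, W, 2)
```

The orthonormality of the returned projection comes for free from the symmetric eigendecomposition, matching the orthogonal constraint in the SODSP formulation.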
A Multifaceted Independent Performance Analysis of Facial Subspace Recognition Algorithms
Bajwa, Usama Ijaz; Taj, Imtiaz Ahmad; Anwar, Muhammad Waqas; Wang, Xuan
2013-01-01
Face recognition has emerged as the fastest growing biometric technology and has expanded a lot in the last few years. Many new algorithms and commercial systems have been proposed and developed. Most of them use Principal Component Analysis (PCA) as a base for their techniques. Different and even conflicting results have been reported by researchers comparing these algorithms. The purpose of this study is to have an independent comparative analysis considering both performance and computational complexity of six appearance based face recognition algorithms namely PCA, 2DPCA, A2DPCA, (2D)2PCA, LPP and 2DLPP under equal working conditions. This study was motivated due to the lack of unbiased comprehensive comparative analysis of some recent subspace methods with diverse distance metric combinations. For comparison with other studies, FERET, ORL and YALE databases have been used with evaluation criteria as of FERET evaluations which closely simulate real life scenarios. A comparison of results with previous studies is performed and anomalies are reported. An important contribution of this study is that it presents the suitable performance conditions for each of the algorithms under consideration. PMID:23451054
The divergent roles of autophagy in ischemia and preconditioning
Sheng, Rui; Qin, Zheng-hong
2015-01-01
Autophagy is an evolutionarily conserved and lysosome-dependent process for degrading and recycling cellular constituents. Autophagy is activated following an ischemic insult or preconditioning, but it may exert dual roles in cell death or survival during these two processes. Preconditioning or lethal ischemia may trigger autophagy via multiple signaling pathways involving endoplasmic reticulum (ER) stress, AMPK/TSC/mTOR, Beclin 1/BNIP3/SPK2, and FoxO/NF-κB transcription factors, etc. Autophagy then interacts with apoptotic and necrotic signaling pathways to regulate cell death. Autophagy may also maintain cell function by removing protein aggregates or damaged mitochondria. To date, the dual roles of autophagy in ischemia and preconditioning have not been fully clarified. The purpose of the present review is to summarize the recent progress in the mechanisms underlying autophagy activation during ischemia and preconditioning. A better understanding of the dual effects of autophagy in ischemia and preconditioning could help to develop new strategies for the preventive treatment of ischemia. PMID:25832421
Xenon preconditioning reduces brain damage from neonatal asphyxia in rats.
Ma, Daqing; Hossain, Mahmuda; Pettet, Garry K J; Luo, Yan; Lim, Ta; Akimov, Stanislav; Sanders, Robert D; Franks, Nicholas P; Maze, Mervyn
2006-02-01
Xenon attenuates on-going neuronal injury in both in vitro and in vivo models of hypoxic-ischaemic injury when administered during and after the insult. In the present study, we sought to investigate whether the neuroprotective efficacy of xenon can be observed when administered before an insult, referred to as 'preconditioning'. In a neuronal-glial cell coculture, preexposure to xenon for 2 h caused a concentration-dependent reduction of lactate dehydrogenase release from cells deprived of oxygen and glucose 24 h later; xenon's preconditioning effect was abolished by cycloheximide, a protein synthesis inhibitor. Preconditioning with xenon decreased propidium iodide staining in a hippocampal slice culture model subjected to oxygen and glucose deprivation. In an in vivo model of neonatal asphyxia involving hypoxic-ischaemic injury to 7-day-old rats, preconditioning with xenon reduced infarction size when assessed 7 days after injury. Furthermore, a sustained improvement in neurologic function was also evident 30 days after injury. Phosphorylated cAMP (cyclic adenosine 3',5'-monophosphate)-response element binding protein (pCREB) was increased by xenon exposure. Also, the prosurvival proteins Bcl-2 and brain-derived neurotrophic factor were upregulated by xenon treatment. These studies provide evidence for xenon's preconditioning effect, which might be caused by a pCREB-regulated synthesis of proteins that promote survival against neuronal injury.
Implementation of Preconditioned Dual-Time Procedures in OVERFLOW
NASA Technical Reports Server (NTRS)
Pandya, Shishir A.; Venkateswaran, Sankaran; Pulliam, Thomas H.; Kwak, Dochan (Technical Monitor)
2003-01-01
Preconditioning methods have become the method of choice for the solution of flowfields involving the simultaneous presence of low Mach and transonic regions. It is well known that these methods are important for ensuring accurate numerical discretization as well as convergence efficiency over various operating conditions such as low Mach numbers, low Reynolds numbers and high Strouhal numbers. For unsteady problems, the preconditioning is introduced within a dual-time framework wherein the physical time-derivatives are used to march the unsteady equations and the preconditioned time-derivatives are used for purposes of numerical discretization and iterative solution. In this paper, we describe the implementation of the preconditioned dual-time methodology in the OVERFLOW code. To demonstrate the performance of the method, we employ both simple and practical unsteady flowfields, including vortex propagation in a low Mach number flow, the flowfield of an impulsively started plate (Stokes' first problem) and a cylindrical jet in a low Mach number crossflow with ground effect. All the results demonstrate that the preconditioning algorithm is responsible for improvements to both numerical accuracy and convergence efficiency and thereby enables low Mach number unsteady computations to be performed at a fraction of the cost of traditional time-marching methods.
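Stripped of the preconditioning matrix and the spatial discretization, the dual-time idea for du/dt = f(u) looks like the sketch below: each physical step is converged by marching an inner pseudo-time iteration to a steady state of the unsteady residual. The step sizes and iteration count are illustrative.

```python
import numpy as np

def dual_time_step(f, u_n, dt, dtau=0.05, n_inner=200):
    """Advance one physical step of du/dt = f(u) by driving the unsteady
    residual R(u) = (u - u_n)/dt - f(u) to zero in pseudo-time tau.
    (A preconditioner would rescale du/dtau to accelerate this inner
    convergence at low Mach numbers; it is omitted in this sketch.)"""
    u = u_n.copy()
    for _ in range(n_inner):
        R = (u - u_n) / dt - f(u)    # unsteady residual
        u = u - dtau * R             # explicit pseudo-time march
    return u

# converges to the backward-Euler solution of du/dt = -u over one step dt = 0.1
u1 = dual_time_step(lambda u: -u, np.array([1.0]), 0.1)
```

When the inner iteration converges, the result satisfies the implicit physical-time discretization exactly, which is why the pseudo-time operator (and hence its preconditioning) affects only efficiency, never physical-time accuracy.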
Resveratrol exerts pharmacological preconditioning by activating PGC-1alpha.
Tan, Lan; Yu, Jin-Tai; Guan, Hua-Shi
2008-11-01
Resveratrol (RSV), a polyphenol phytoalexin found abundantly in grape skins and wines, is currently the focus of intense research as a pharmacological preconditioning agent protecting kidney, heart, and brain from ischemic injury. However, the exact molecular mechanism of RSV preconditioning remains obscure. Data from current studies indicate that the effects of pharmacological preconditioning with RSV are attributable to its role as an intracellular antioxidant and anti-inflammatory agent, its ability to induce nitric oxide synthase (NOS) expression, its ability to induce angiogenesis, and its ability to increase sirtuin 1 (SIRT1) activity. Peroxisome proliferator-activated receptor (PPAR) gamma co-activator-1alpha (PGC-1alpha) is a member of a family of transcription coactivators involved in mitochondrial biogenesis, antioxidation, growth factor signaling regulation, and angiogenesis. Moreover, almost all the signaling pathways activated by RSV involve PGC-1alpha activity, and it has been shown that RSV can mediate an increase in PGC-1alpha activity. These conditions support the hypothesis that RSV exerts pharmacological preconditioning by activating PGC-1alpha. Attempts to confirm this hypothesis will provide new directions in the study of pharmacological preconditioning and the development of new treatment approaches for reducing the extent of ischemia/reperfusion injury.
Heat shock proteins, end effectors of myocardium ischemic preconditioning?
Guisasola, María Concepcion; Desco, Maria del Mar; Gonzalez, Fernanda Silvana; Asensio, Fernando; Dulin, Elena; Suarez, Antonio; Garcia Barreno, Pedro
2006-01-01
The purpose of this study was to investigate (1) whether ischemia-reperfusion increased the content of heat shock protein 72 (Hsp72) transcripts and (2) whether myocardial content of Hsp72 is increased by ischemic preconditioning so that they can be considered as end effectors of preconditioning. Twelve male minipigs (8 protocol, 4 sham) were used, with the following ischemic preconditioning protocol: 3 ischemia and reperfusion 5-minute alternative cycles and last reperfusion cycle of 3 hours. Initial and final transmural biopsies (both in healthy and ischemic areas) were taken in all animals. Heat shock protein 72 messenger ribonucleic acid (mRNA) expression was measured by a semiquantitative reverse transcriptase-polymerase chain reaction (RT-PCR) method using complementary DNA normalized against the housekeeping gene cyclophilin. The identification of heat shock protein 72 was performed by immunoblot. In our “classic” preconditioning model, we found no changes in mRNA hsp72 levels or heat shock protein 72 content in the myocardium after 3 hours of reperfusion. Our experimental model is valid and the experimental techniques are appropriate, but the induction of heat shock proteins 72 as end effectors of cardioprotection in ischemic preconditioning does not occur in the first hours after ischemia, but probably at least 24 hours after it, in the so-called “second protection window.” PMID:17009598
Evaluating the utility of mid-infrared spectral subspaces for predicting soil properties
Sila, Andrew M.; Shepherd, Keith D.; Pokhariyal, Ganesh P.
2016-01-01
We propose four methods for finding local subspaces in large spectral libraries: (a) cosine angle spectral matching; (b) hit quality index spectral matching; (c) self-organizing maps and (d) archetypal analysis. We then evaluate prediction accuracies for global and subspace calibration models. These methods were tested on a mid-infrared spectral library containing 1907 soil samples collected from 19 different countries under the Africa Soil Information Service project. Calibration models for pH, Mehlich-3 Ca, Mehlich-3 Al, total carbon and clay soil properties were developed for the whole library and for the subspaces. Root mean square error of prediction (RMSEP), computed on a one-third-holdout validation set, was used to evaluate the predictive performance of the subspace and global models. The effect of pretreating spectra was tested for 1st and 2nd derivative Savitzky-Golay algorithms, multiplicative scatter correction, standard normal variate, and standard normal variate followed by detrending. In summary, the results show that the global models outperformed the subspace models; we therefore conclude that global models are more accurate than local models except in a few cases. For instance, sand and clay RMSEP values from local models built with the archetypal analysis method were 50% poorer than those of the global models, except for subspace models obtained using multiplicative scatter corrected spectra, which were 12% better. Nevertheless, the subspace approach provides novel methods for discovering data patterns that may exist in large spectral libraries. PMID:27110048
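The global-versus-subspace comparison can be prototyped as below, with synthetic "spectra", KMeans as a stand-in subspace finder, and ridge regression as a stand-in calibration model; none of these are the paper's actual methods, and the one-third holdout mirrors the validation scheme described above.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 20))                   # stand-in "spectra"
y = 2.0 * X[:, 0] + 0.1 * rng.standard_normal(300)   # stand-in soil property

# one-third holdout validation set
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=1/3, random_state=0)

def rmsep(pred, truth):
    return float(np.sqrt(np.mean((pred - truth) ** 2)))

# global calibration model
global_err = rmsep(Ridge().fit(Xtr, ytr).predict(Xte), yte)

# subspace (per-cluster) calibration models
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(Xtr)
lab_te = km.predict(Xte)
sq = []
for c in range(3):
    model = Ridge().fit(Xtr[km.labels_ == c], ytr[km.labels_ == c])
    mask = lab_te == c
    if mask.any():
        sq.append((model.predict(Xte[mask]) - yte[mask]) ** 2)
local_err = float(np.sqrt(np.mean(np.concatenate(sq))))
```

Comparing `global_err` against `local_err` on held-out samples is exactly the kind of head-to-head reported in the abstract; with this homogeneous synthetic data the global model tends to win, which mirrors the paper's overall finding.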
Operator-Based Preconditioning of Stiff Hyperbolic Systems
Reynolds, Daniel R.; Samtaney, Ravi; Woodward, Carol S.
2009-02-09
We introduce an operator-based scheme for preconditioning stiff components encountered in implicit methods for hyperbolic systems of partial differential equations posed on regular grids. The method is based on a directional splitting of the implicit operator, followed by a characteristic decomposition of the resulting directional parts. This approach allows for the solution of any number of characteristic components, from the entire system to only the fastest, stiffness-inducing waves. We apply the preconditioning method to stiff hyperbolic systems arising in magnetohydrodynamics and gas dynamics. We then present numerical results showing that this preconditioning scheme works well on problems where the underlying stiffness results from the interaction of fast transient waves with slowly-evolving dynamics, scales well to large problem sizes and numbers of processors, and allows for additional customization based on the specific problems under study.
Cortical spreading depression-induced preconditioning in the brain
Shen, Ping-ping; Hou, Shuai; Ma, Di; Zhao, Ming-ming; Zhu, Ming-qin; Zhang, Jing-dian; Feng, Liang-shu; Cui, Li; Feng, Jia-chun
2016-01-01
Cortical spreading depression is a technique used to depolarize neurons. During focal or global ischemia, cortical spreading depression-induced preconditioning can enhance tolerance of further injury. However, the underlying mechanism for this phenomenon remains relatively unclear. To date, numerous issues exist regarding the experimental model used to precondition the brain with cortical spreading depression, such as the administration route, concentration of potassium chloride, induction time, duration of the protection provided by the treatment, the regional distribution of the protective effect, and the types of neurons responsible for the greater tolerance. In this review, we focus on the mechanisms underlying cortical spreading depression-induced tolerance in the brain, considering excitatory neurotransmission and metabolism, nitric oxide, genomic reprogramming, inflammation, neurotrophic factors, and the cellular stress response. Specifically, we clarify the procedures and detailed information regarding cortical spreading depression-induced preconditioning and build a foundation for more comprehensive investigations in the field of neural regeneration and clinical application in the future. PMID:28123433
Gpu Implementation of Preconditioning Method for Low-Speed Flows
NASA Astrophysics Data System (ADS)
Zhang, Jiale; Chen, Hongquan
2016-06-01
An improved preconditioning method for low-Mach-number flows is implemented on a GPU platform. The improved preconditioning method employs the fluctuation of the fluid variables to weaken the influence on accuracy caused by the truncation error. The GPU parallel computing platform is implemented to accelerate the calculations. Both details concerning the improved preconditioning method and the GPU implementation technology are described in this paper. Then a set of typical low-speed flow cases are simulated for both validation and performance analysis of the resulting GPU solver. Numerical results show that dozens of times speedup relative to a serial CPU implementation can be achieved using a single GPU desktop platform, which demonstrates that the GPU desktop can serve as a cost-effective parallel computing platform to accelerate CFD simulations for low-speed flows substantially.
Liquid hydrogen turbopump rapid start program. [thermal preconditioning using coatings
NASA Technical Reports Server (NTRS)
Wong, G. S.
1973-01-01
The objective of this program was to analyze, test, and evaluate methods of achieving rapid start of a liquid hydrogen feed system (inlet duct and turbopump) using a minimum of thermal preconditioning time and propellant. The program was divided into four tasks. Task 1 includes analytical studies of the testing conducted in the other three tasks. Task 2 describes the results from laboratory testing of coating samples and the successful adherence of a KX-635 coating to the internal surfaces of the feed system tested in Task 4. Task 3 presents results of testing an uncoated feed system. Tank pressure was varied to determine the effect of flowrate on preconditioning. The discharge volume and the discharge pressure which initiates opening of the discharge valve were varied to determine the effect on deadhead (no through-flow) start transients. Task 4 describes results of testing a similar, internally coated feed system and illustrates the savings in preconditioning time and propellant resulting from the coatings.
Preconditioning boosts regenerative programmes in the adult zebrafish heart
de Preux Charles, Anne-Sophie; Bise, Thomas; Baier, Felix; Sallin, Pauline; Jaźwińska, Anna
2016-01-01
During preconditioning, exposure to a non-lethal harmful stimulus triggers a body-wide increase of survival and pro-regenerative programmes that enable the organism to better withstand the deleterious effects of subsequent injuries. This phenomenon was first described in the mammalian heart, where it leads to a reduction of infarct size and limits the dysfunction of the injured organ. Despite its important clinical outcome, the actual mechanisms underlying preconditioning-induced cardioprotection remain unclear. Here, we describe two independent models of cardiac preconditioning in the adult zebrafish. As noxious stimuli, we used either a thoracotomy procedure or an induction of sterile inflammation by intraperitoneal injection of immunogenic particles. Similar to mammalian preconditioning, the zebrafish heart displayed increased expression of cardioprotective genes in response to these stimuli. As zebrafish cardiomyocytes have an endogenous proliferative capacity, preconditioning further elevated the re-entry into the cell cycle in the intact heart. This enhanced cycling activity led to a long-term modification of the myocardium architecture. Importantly, the protected phenotype brought beneficial effects for heart regeneration within one week after cryoinjury, such as a more effective cell-cycle reentry, enhanced reactivation of embryonic gene expression at the injury border, and improved cell survival shortly after injury. This study reveals that exposure to antecedent stimuli induces adaptive responses that render the fish more efficient in the activation of the regenerative programmes following heart damage. Our results open a new field of research by providing the adult zebrafish as a model system to study remote cardiac preconditioning. PMID:27440423
Fourier analysis of finite element preconditioned collocation schemes
NASA Technical Reports Server (NTRS)
Deville, Michel O.; Mund, Ernest H.
1990-01-01
The spectrum of the iteration operator of some finite element preconditioned Fourier collocation schemes is investigated. The first part of the paper analyses one-dimensional elliptic and hyperbolic model problems and the advection-diffusion equation. Analytical expressions of the eigenvalues are obtained with use of symbolic computation. The second part of the paper considers the set of one-dimensional differential equations resulting from Fourier analysis (in the transverse direction) of the 2-D Stokes problem. All results agree with previous conclusions on the numerical efficiency of finite element preconditioning schemes.
Hyperbaric oxygen preconditioning: a reliable option for neuroprotection
Hu, Qin; Manaenko, Anatol; Matei, Nathanael; Guo, Zhenni; Xu, Ting; Tang, Jiping; Zhang, John H.
2016-01-01
Brain injury is the leading cause of death and disability worldwide, and clinically there is no effective therapy for neuroprotection. Hyperbaric oxygen preconditioning (HBO-PC) has been experimentally demonstrated to be neuroprotective in several models and has shown efficacy in patients undergoing on-pump coronary artery bypass graft (CABG) surgery. Compared with other preconditioning stimuli, HBO is benign and has clinically translational potential. In this review, we will summarize the results of experimental brain injury and clinical studies, elaborate the mechanisms of HBO-PC, and discuss regimens and opinions for future interventions in acute brain injury. PMID:27826420
Incomplete block SSOR preconditionings for high order discretizations
Kolotilina, L.
1994-12-31
This paper considers the solution of linear algebraic systems Ax = b resulting from the p-version of the Finite Element Method (FEM) using PCG iterations. In contrast to the h-version, the p-version ensures the desired accuracy of a discretization not by refining an original finite element mesh but by introducing higher degree polynomials as additional basis functions, which permits the size of the resulting linear system to be reduced as compared with the h-version. The suggested preconditionings are the so-called Incomplete Block SSOR (IBSSOR) preconditionings.
Choice of Variables and Preconditioning for Time Dependent Problems
NASA Technical Reports Server (NTRS)
Turkel, Eli; Vatsa, Veer N.
2003-01-01
We consider the use of low speed preconditioning for time dependent problems. These are solved using a dual time step approach. We consider the effect of this dual time step on the parameter of the low speed preconditioning. In addition, we compare the use of two sets of variables, conservation and primitive variables, to solve the system. We show the effect of these choices on both the convergence to a steady state and the accuracy of the numerical solutions for low Mach number steady state and time dependent flows.
Unification of algorithms for minimum mode optimization
NASA Astrophysics Data System (ADS)
Zeng, Yi; Xiao, Penghao; Henkelman, Graeme
2014-01-01
Minimum mode following algorithms are widely used for saddle point searching in chemical and material systems. Common to these algorithms is a component to find the minimum curvature mode of the second derivative, or Hessian matrix. Several methods, including Lanczos, dimer, Rayleigh-Ritz minimization, shifted power iteration, and locally optimal block preconditioned conjugate gradient, have been proposed for this purpose. Each of these methods finds the lowest curvature mode iteratively without calculating the Hessian matrix, since the full matrix calculation is prohibitively expensive in the high dimensional spaces of interest. Here we unify these iterative methods in the same theoretical framework using the concept of the Krylov subspace. The Lanczos method finds the lowest eigenvalue in a Krylov subspace of increasing size, while the other methods search in a smaller subspace spanned by the set of previous search directions. We show that these smaller subspaces are contained within the Krylov space for which the Lanczos method explicitly finds the lowest curvature mode, and hence the theoretical efficiency of the minimum mode finding methods is bounded by the Lanczos method. Numerical tests demonstrate that the dimer method combined with second-order optimizers approaches but does not exceed the efficiency of the Lanczos method for minimum mode optimization.
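A minimal sketch of the Lanczos procedure described above, using only matrix-vector products (this is illustrative code, not from the paper; `lanczos_lowest` and its arguments are hypothetical names):

```python
import numpy as np

def lanczos_lowest(matvec, n, m=25, seed=0):
    """Estimate the lowest eigenpair with m Lanczos steps.

    Only matrix-vector products are needed, mirroring how minimum mode
    methods avoid forming the Hessian explicitly.
    """
    rng = np.random.default_rng(seed)
    V = np.zeros((n, m))
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)
    V[:, 0] = v
    w = matvec(v)
    alpha[0] = v @ w
    w = w - alpha[0] * v
    for j in range(1, m):
        beta[j - 1] = np.linalg.norm(w)
        v = w / beta[j - 1]
        V[:, j] = v
        w = matvec(v) - beta[j - 1] * V[:, j - 1]
        alpha[j] = v @ w
        w = w - alpha[j] * v
        w -= V[:, : j + 1] @ (V[:, : j + 1].T @ w)   # full reorthogonalization
    # Rayleigh-Ritz on the tridiagonal projection of the operator
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    theta, S = np.linalg.eigh(T)
    return theta[0], V @ S[:, 0]                     # lowest Ritz value and vector

# stand-in "Hessian": a diagonal operator with one well-separated low mode
d = np.concatenate(([-10.0], np.arange(1.0, 50.0)))
lam, mode = lanczos_lowest(lambda x: d * x, d.size)
```

The small tridiagonal eigenproblem in the growing Krylov subspace delivers the lowest curvature mode without ever forming the full matrix.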
NASA Astrophysics Data System (ADS)
Hu, Xingzhi; Parks, Geoffrey T.; Chen, Xiaoqian; Seshadri, Pranay
2016-03-01
Uncertainty quantification has recently been receiving much attention from the aerospace engineering community. With ever-increasing requirements for robustness and reliability, it is crucial to quantify multidisciplinary uncertainty in satellite system design, which dominates the overall design direction and cost. However, coupled multi-disciplines and cross propagation hamper the efficiency and accuracy of high-dimensional uncertainty analysis. In this study, an uncertainty quantification methodology based on active subspaces is established for satellite conceptual design. The active subspace effectively reduces the dimension and measures the contributions of input uncertainties. A comprehensive characterization of the associated uncertain factors is made and all subsystem models are built for uncertainty propagation. By integrating a system decoupling strategy, the multidisciplinary uncertainty effect is efficiently represented by a one-dimensional active subspace for each design. The identified active subspace is checked by bootstrap resampling for confidence intervals and verified by Monte Carlo propagation for accuracy. To show the performance of active subspaces, 18 uncertainty parameters of an Earth observation small satellite are exemplified and then another 5 design uncertainties are incorporated. The uncertainties that contribute the most to satellite mass and total cost are ranked, and the quantification of high-dimensional uncertainty is achieved by a relatively small number of support samples. The methodology, with considerably less cost, exhibits high accuracy and strong adaptability, which provides a potential template to tackle multidisciplinary uncertainty in practical satellite systems.
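The active-subspace construction referred to above, an eigendecomposition of the average outer product of sampled gradients, can be illustrated generically (hypothetical names; not the study's code or satellite models):

```python
import numpy as np

def active_subspace(grads, k=1):
    """Estimate a k-dimensional active subspace from sampled gradients.

    grads: (N, m) array, each row the gradient of the output at one input sample.
    The eigenvectors of C = E[grad grad^T] with the largest eigenvalues span
    the directions along which the output varies most.
    """
    C = grads.T @ grads / grads.shape[0]
    w, V = np.linalg.eigh(C)               # eigenvalues in ascending order
    return w[::-1], V[:, ::-1][:, :k]      # descending eigenvalues, leading k vectors

# f(x) = (a^T x)^2 varies only along a, so its active subspace is span{a}
rng = np.random.default_rng(1)
a = np.array([3.0, 4.0]) / 5.0
X = rng.standard_normal((200, 2))
G = (2.0 * (X @ a))[:, None] * a           # exact gradients of f at each sample
evals, W = active_subspace(G)
```

A sharp drop after the leading eigenvalue, as in this toy case, is what justifies propagating uncertainty through a one-dimensional subspace.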
A real-time cardiac surface tracking system using Subspace Clustering.
Singh, Vimal; Tewfik, Ahmed H; Gowreesunker, B
2010-01-01
Catheter-based radio frequency ablation of atrial fibrillation requires real-time 3D tracking of cardiac surfaces with sub-millimeter accuracy. To the best of our knowledge, there are no commercial or non-commercial systems capable of doing so. In this paper, a system for high-accuracy 3D tracking of cardiac surfaces in real-time is proposed, and results from its application to a real patient dataset are presented. The proposed system uses a subspace clustering algorithm to identify the potential deformation subspaces for cardiac surfaces during a training phase, based on a training set derived from pre-operative MRI scans. In the tracking phase, using low-density outer cardiac surface samples, the active deformation subspace is identified and the complete inner and outer cardiac surfaces are reconstructed in real-time under a least squares formulation.
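The least-squares reconstruction step can be sketched generically (hypothetical names and synthetic data; not the authors' implementation): given a basis for the active deformation subspace, a few observed surface coordinates determine the subspace coefficients, which then reproduce the full surface.

```python
import numpy as np

def reconstruct_surface(U, sample_idx, samples):
    """Least-squares reconstruction of a full surface from sparse samples.

    U: (n, k) basis of the active deformation subspace.
    sample_idx, samples: indices and values of the observed coordinates.
    """
    coeffs, *_ = np.linalg.lstsq(U[sample_idx], samples, rcond=None)
    return U @ coeffs

rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((300, 4)))   # toy 4-D deformation subspace
true_surface = U @ np.array([2.0, -1.0, 0.5, 3.0])   # surface living in that subspace
idx = rng.choice(300, size=12, replace=False)        # 12 sparse surface samples
recovered = reconstruct_surface(U, idx, true_surface[idx])
```

As long as the sampled rows of the basis have full column rank, far fewer samples than surface points suffice, which is what makes real-time tracking from low-density samples plausible.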
Estimation of direction of arrival of a moving target using subspace based approaches
NASA Astrophysics Data System (ADS)
Ghosh, Ripul; Das, Utpal; Akula, Aparna; Kumar, Satish; Sardana, H. K.
2016-05-01
In this work, array processing techniques based on subspace decomposition of the signal have been evaluated for estimation of the direction of arrival of moving targets using acoustic signatures. Three subspace-based approaches are considered: Incoherent Wideband Multiple Signal Classification (IWM), Least Squares Estimation of Signal Parameters via Rotational Invariance Techniques (LS-ESPRIT), and Total Least Squares ESPRIT (TLS-ESPRIT). Their performance is compared with conventional time delay estimation (TDE) approaches such as Generalized Cross Correlation (GCC) and Average Square Difference Function (ASDF). Performance evaluation has been conducted on experimentally generated data consisting of acoustic signatures of four different types of civilian vehicles moving in defined geometrical trajectories. Mean absolute error and standard deviation of the DOA estimates w.r.t. ground truth are used as performance evaluation metrics. Lower statistical values of mean error confirm the superiority of subspace-based approaches over TDE-based techniques. Amongst the compared methods, LS-ESPRIT indicated better performance.
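A toy narrowband MUSIC example illustrates the subspace decomposition these methods share (a generic single-source simulation with hypothetical names; not the paper's wideband IWM implementation or its acoustic data):

```python
import numpy as np

def music_doa(X, n_sources, d=0.5):
    """Narrowband MUSIC pseudospectrum for a uniform linear array.

    X: (sensors, snapshots) complex baseband data; d: spacing in wavelengths.
    The noise subspace is spanned by the trailing eigenvectors of the sample
    covariance; steering vectors nearly orthogonal to it give spectrum peaks.
    """
    m = X.shape[0]
    R = X @ X.conj().T / X.shape[1]
    w, V = np.linalg.eigh(R)                   # ascending eigenvalues
    En = V[:, : m - n_sources]                 # noise subspace
    angles = np.linspace(-90.0, 90.0, 721)
    k = np.arange(m)[:, None]
    A = np.exp(2j * np.pi * d * k * np.sin(np.radians(angles)))
    P = 1.0 / np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)
    return angles, P

# one source at +20 degrees on an 8-element half-wavelength array, light noise
rng = np.random.default_rng(0)
m, N, theta = 8, 200, 20.0
a = np.exp(2j * np.pi * 0.5 * np.arange(m) * np.sin(np.radians(theta)))
s = rng.standard_normal(N) + 1j * rng.standard_normal(N)
noise = 0.01 * (rng.standard_normal((m, N)) + 1j * rng.standard_normal((m, N)))
X = np.outer(a, s) + noise
angles, P = music_doa(X, 1)
est = angles[np.argmax(P)]
```

ESPRIT variants avoid the grid search by exploiting the rotational invariance between array subarrays, but rest on the same signal/noise subspace split.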
Crevecoeur, Guillaume; Yitembe, Bertrand; Dupre, Luc; Van Keer, Roger
2013-01-01
This paper proposes a modification of the subspace correlation cost function and the Recursively Applied and Projected Multiple Signal Classification (RAP-MUSIC) method for electroencephalography (EEG) source analysis in epilepsy. This makes it possible to reconstruct neural source locations and orientations that are less degraded by uncertain knowledge of the head conductivity values. An extended linear forward model is used in the subspace correlation cost function that incorporates the sensitivity of the EEG potentials to the uncertain conductivity parameter. More specifically, the principal vector of the subspace correlation function is used to provide relevant information for solving the EEG inverse problem. A simulation study is carried out on a simplified spherical head model with an uncertain skull-to-soft-tissue conductivity ratio. Results show an improvement in the reconstruction accuracy of source parameters compared to the traditional methodology when using conductivity ratio values that differ from the actual conductivity ratio.
40 CFR 86.232-94 - Vehicle preconditioning.
Code of Federal Regulations, 2012 CFR
2012-07-01
... the start of the driving cycle except when vehicle preconditioning is performed in accordance with... dynamometer and operated through one UDDS cycle. (5) For those unusual circumstances where additional... additional preconditioning shall consist of one or more driving cycles of the UDDS, as described in...
40 CFR 86.232-94 - Vehicle preconditioning.
Code of Federal Regulations, 2011 CFR
2011-07-01
... the start of the driving cycle except when vehicle preconditioning is performed in accordance with... dynamometer and operated through one UDDS cycle. (5) For those unusual circumstances where additional... additional preconditioning shall consist of one or more driving cycles of the UDDS, as described in...
40 CFR 86.232-94 - Vehicle preconditioning.
Code of Federal Regulations, 2013 CFR
2013-07-01
... the start of the driving cycle except when vehicle preconditioning is performed in accordance with... dynamometer and operated through one UDDS cycle. (5) For those unusual circumstances where additional... additional preconditioning shall consist of one or more driving cycles of the UDDS, as described in...
Preconditioning with cobalt chloride modifies pain perception in mice.
Alexa, Teodora; Luca, Andrei; Dondas, Andrei; Bohotin, Catalina Roxana
2015-04-01
Cobalt chloride (CoCl2) modifies mitochondrial permeability and has a hypoxic-mimetic effect; thus, the compound induces tolerance to ischemia and increases resistance to a number of injury types. The aim of the present study was to investigate the effects of CoCl2 hypoxic preconditioning for three weeks on thermonociception, somatic and visceral inflammatory pain, locomotor activity and coordination in mice. A significant pronociceptive effect was observed in the hot plate and tail flick tests after one and two weeks of CoCl2 administration, respectively (P<0.001). Thermal hyperalgesia (Plantar test) was present in the first week, but recovered by the end of the experiment. Contrary to the hyperalgesic effect on thermonociception, CoCl2 hypoxic preconditioning decreased the time spent grooming the affected area in the second phase of the formalin test on the orofacial and paw models. The first phase of formalin-induced pain and the writhing test were not affected by CoCl2 preconditioning. Thus, the present study demonstrated that CoCl2 preconditioning has a dual effect on pain, and these effects should be taken into account along with the better-known neuro-, cardio- and renoprotective effects of CoCl2.
40 CFR 1066.407 - Vehicle preparation and preconditioning.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 34 2012-07-01 2012-07-01 false Vehicle preparation and...) AIR POLLUTION CONTROLS VEHICLE-TESTING PROCEDURES Vehicle Preparation and Running a Test § 1066.407 Vehicle preparation and preconditioning. This section describes steps to take before measuring...
40 CFR 1066.407 - Vehicle preparation and preconditioning.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 34 2013-07-01 2013-07-01 false Vehicle preparation and...) AIR POLLUTION CONTROLS VEHICLE-TESTING PROCEDURES Vehicle Preparation and Running a Test § 1066.407 Vehicle preparation and preconditioning. This section describes steps to take before measuring...
Improvement in computational fluid dynamics through boundary verification and preconditioning
NASA Astrophysics Data System (ADS)
Folkner, David E.
This thesis provides improvements to computational fluid dynamics accuracy and efficiency through two main methods: a new boundary condition verification procedure and preconditioning techniques. First, a new verification approach that addresses boundary conditions was developed. In order to apply the verification approach to a large range of arbitrary boundary conditions, it was necessary to develop a unifying mathematical formulation. A framework was developed that allows for the application of Dirichlet, Neumann, and extrapolation boundary conditions, or in some cases the equations of motion directly. Verification of boundary condition techniques was performed using exact solutions from canonical fluid dynamic test cases. Second, to reduce computation time and improve accuracy, preconditioning algorithms were applied via artificial dissipation schemes. A new convective upwind and split pressure (CUSP) scheme was devised and was shown to be more effective than traditional preconditioning schemes in certain scenarios. The new scheme was compared with traditional schemes for unsteady flows in which both convective and acoustic effects dominated. Both the boundary conditions and the preconditioning algorithms were implemented in the context of a "strand grid" solver. While not the focus of this thesis, strand grids provide automatic viscous-quality meshing and are suitable for moving-mesh overset problems.
33 CFR 183.220 - Preconditioning for tests.
Code of Federal Regulations, 2014 CFR
2014-07-01
... Engines of More Than 2 Horsepower General § 183.220 Preconditioning for tests. A boat must meet the... be keel down in the water. (g) The boat must be swamped, allowing water to flow between the inside... flooded portion of the boat must be eliminated. (h) Water must flood the two largest air chambers and...
33 CFR 183.220 - Preconditioning for tests.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Engines of More Than 2 Horsepower General § 183.220 Preconditioning for tests. A boat must meet the... be keel down in the water. (g) The boat must be swamped, allowing water to flow between the inside... flooded portion of the boat must be eliminated. (h) Water must flood the two largest air chambers and...
40 CFR 86.232-94 - Vehicle preconditioning.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 18 2010-07-01 2010-07-01 false Vehicle preconditioning. 86.232-94... (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES Emission Regulations for 1994 and Later Model Year Gasoline-Fueled New Light-Duty Vehicles, New Light-Duty Trucks and New...
40 CFR 86.1774-99 - Vehicle preconditioning.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 20 2012-07-01 2012-07-01 false Vehicle preconditioning. 86.1774-99... (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES (CONTINUED) General Provisions for the Voluntary National Low Emission Vehicle Program for Light-Duty Vehicles and...
40 CFR 86.1774-99 - Vehicle preconditioning.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 19 2011-07-01 2011-07-01 false Vehicle preconditioning. 86.1774-99... (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES (CONTINUED) General Provisions for the Voluntary National Low Emission Vehicle Program for Light-Duty Vehicles and...
40 CFR 86.132-96 - Vehicle preconditioning.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 18 2011-07-01 2011-07-01 false Vehicle preconditioning. 86.132-96... (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES Emission Regulations for 1977 and Later Model Year New Light-Duty Vehicles and New Light-Duty Trucks and New Otto-Cycle...
40 CFR 86.1774-99 - Vehicle preconditioning.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 20 2013-07-01 2013-07-01 false Vehicle preconditioning. 86.1774-99... (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES (CONTINUED) General Provisions for the Voluntary National Low Emission Vehicle Program for Light-Duty Vehicles and...
40 CFR 86.1774-99 - Vehicle preconditioning.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 19 2010-07-01 2010-07-01 false Vehicle preconditioning. 86.1774-99... (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES (CONTINUED) General Provisions for the Voluntary National Low Emission Vehicle Program for Light-Duty Vehicles and...
Time-derivative preconditioning method for multicomponent flow
NASA Astrophysics Data System (ADS)
Housman, Jeffrey Allen
A time-derivative preconditioned system of equations suitable for the numerical simulation of single component and multicomponent inviscid flows at all speeds is formulated. The system is shown to be hyperbolic in time and remain well-posed at low Mach numbers, allowing an efficient time marching solution strategy to be utilized from transonic to incompressible flow speeds. For multicomponent flow at low speed, a preconditioned nonconservative discretization scheme is described which preserves pressure and velocity equilibrium across fluid interfaces, handles sharp liquid/gas interfaces with large density ratios, while remaining well-conditioned for time marching methods. The method is then extended to transonic and supersonic flows using a hybrid conservative/nonconservative formulation which retains the pressure/velocity equilibrium property and converges to the correct weak solution when shocks are present. In order to apply the proposed model to complex flow applications, the overset grid methodology is used where the equations are transformed to a nonorthogonal curvilinear coordinate system and discretized on structured body-fitted curvilinear grids. The multicomponent model and its extension to homogeneous multiphase mixtures is discussed and the hyperbolicity of the governing equations is demonstrated. Low Mach number perturbation analysis is then performed on the system of equations and a local time-derivative preconditioning matrix is derived allowing time marching numerical methods to remain efficient at low speeds. Next, a particular time marching numerical method is presented along with three discretization schemes for the convective terms. These include a conservative preconditioned Roe type method, a nonconservative preconditioned Split Coefficient Matrix (SCM) method, and hybrid formulation which combines the conservative and nonconservative schemes using a simple switching function. A characteristic boundary treatment which includes time
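The generic shape of such a time-derivative preconditioned system, standard in the low-Mach-number literature (the notation below is illustrative rather than the author's), is:

```latex
% Pseudo-time derivative scaled by a preconditioning matrix \Gamma:
\Gamma\,\frac{\partial Q}{\partial \tau}
  + \frac{\partial E(Q)}{\partial x}
  + \frac{\partial F(Q)}{\partial y} = 0,
% where \Gamma is chosen so that the eigenvalues of
% \Gamma^{-1}\,\partial E/\partial Q remain well conditioned (all O(u))
% as the Mach number tends to zero, instead of spreading as u and u \pm c.
```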
The Subspace Projected Approximate Matrix (SPAM) modification of the Davidson method
Shepard, R.; Tilson, J.L.; Wagner, A.F.; Minkoff, M.
1997-12-31
A modification of the Davidson subspace expansion method, a Ritz approach, is proposed in which the expansion vectors are computed from a "cheap" approximating eigenvalue equation. This approximate eigenvalue equation is assembled using projection operators constructed from the subspace expansion vectors. The method may be implemented using an inner/outer iteration scheme, or it may be implemented by modifying the usual Davidson algorithm in such a way that exact and approximate matrix-vector product computations are interspersed. A multi-level algorithm is proposed in which several levels of approximate matrices are used.
Review of Hydraulic Fracturing for Preconditioning in Cave Mining
NASA Astrophysics Data System (ADS)
He, Q.; Suorineni, F. T.; Oh, J.
2016-12-01
Hydraulic fracturing has been used in cave mining for preconditioning the orebody following its successful application in the oil and gas industries. In this paper, the state of the art of hydraulic fracturing as a preconditioning method in cave mining is presented. Procedures are provided on how to implement prescribed hydraulic fracturing by which effective preconditioning can be realized in any in situ stress condition. Preconditioning is effective in cave mining when an additional fracture set is introduced into the rock mass. Previous studies on cave mining hydraulic fracturing focused on field applications, hydraulic fracture growth measurement and the interaction between hydraulic fractures and natural fractures. The review in this paper reveals that the orientation of current cave mining hydraulic fractures is dictated by, and is perpendicular to, the minimum in situ stress orientation. In some geotechnical conditions, these orientation-uncontrollable hydraulic fractures have limited preconditioning efficiency because they do not necessarily result in reduced fragmentation sizes and a blocky orebody through the introduction of an additional fracture set. This implies that if the minimum in situ stress orientation is vertical and favors the creation of horizontal hydraulic fractures, in a rock mass that is already dominated by horizontal joints, no additional fracture set is added to that rock mass to increase its blockiness to enable it to cave. Therefore, two approaches that have the potential to create orientation-controllable hydraulic fractures in cave mining, with the potential to introduce an additional fracture set as desired, are proposed to fill this gap. These approaches take advantage of directional hydraulic fracturing and the stress shadow effect, which can re-orientate the hydraulic fracture propagation trajectory against its theoretically predicted direction. Proppants are suggested to be introduced into the cave mining industry to enhance the
Strategies for study of neuroprotection from cold-preconditioning.
Mitchell, Heidi M; White, David M; Kraig, Richard P
2010-09-02
Neurological injury is a frequent cause of morbidity and mortality from general anesthesia and related surgical procedures that could be alleviated by development of effective, easy to administer and safe preconditioning treatments. We seek to define the neural immune signaling responsible for cold-preconditioning as means to identify novel targets for therapeutics development to protect brain before injury onset. Low-level pro-inflammatory mediator signaling changes over time are essential for cold-preconditioning neuroprotection. This signaling is consistent with the basic tenets of physiological conditioning hormesis, which require that irritative stimuli reach a threshold magnitude with sufficient time for adaptation to the stimuli for protection to become evident. Accordingly, delineation of the immune signaling involved in cold-preconditioning neuroprotection requires that biological systems and experimental manipulations plus technical capacities are highly reproducible and sensitive. Our approach is to use hippocampal slice cultures as an in vitro model that closely reflects their in vivo counterparts with multi-synaptic neural networks influenced by mature and quiescent macroglia/microglia. This glial state is particularly important for microglia since they are the principal source of cytokines, which are operative in the femtomolar range. Also, slice cultures can be maintained in vitro for several weeks, which is sufficient time to evoke activating stimuli and assess adaptive responses. Finally, environmental conditions can be accurately controlled using slice cultures so that cytokine signaling of cold-preconditioning can be measured, mimicked, and modulated to dissect the critical node aspects. Cytokine signaling system analyses require the use of sensitive and reproducible multiplexed techniques. We use quantitative PCR for TNF-α to screen for microglial activation followed by quantitative real-time qPCR array screening to assess tissue-wide cytokine
Strategies for Study of Neuroprotection from Cold-preconditioning
Mitchell, Heidi M.; White, David M.; Kraig, Richard P.
2010-01-01
Neurological injury is a frequent cause of morbidity and mortality from general anesthesia and related surgical procedures that could be alleviated by development of effective, easy to administer and safe preconditioning treatments. We seek to define the neural immune signaling responsible for cold-preconditioning as means to identify novel targets for therapeutics development to protect brain before injury onset. Low-level pro-inflammatory mediator signaling changes over time are essential for cold-preconditioning neuroprotection. This signaling is consistent with the basic tenets of physiological conditioning hormesis, which require that irritative stimuli reach a threshold magnitude with sufficient time for adaptation to the stimuli for protection to become evident. Accordingly, delineation of the immune signaling involved in cold-preconditioning neuroprotection requires that biological systems and experimental manipulations plus technical capacities are highly reproducible and sensitive. Our approach is to use hippocampal slice cultures as an in vitro model that closely reflects their in vivo counterparts with multi-synaptic neural networks influenced by mature and quiescent macroglia / microglia. This glial state is particularly important for microglia since they are the principal source of cytokines, which are operative in the femtomolar range. Also, slice cultures can be maintained in vitro for several weeks, which is sufficient time to evoke activating stimuli and assess adaptive responses. Finally, environmental conditions can be accurately controlled using slice cultures so that cytokine signaling of cold-preconditioning can be measured, mimicked, and modulated to dissect the critical node aspects. Cytokine signaling system analyses require the use of sensitive and reproducible multiplexed techniques. We use quantitative PCR for TNF-α to screen for microglial activation followed by quantitative real-time qPCR array screening to assess tissue
Adaptive Projection Subspace Dimension for the Thick-Restart Lanczos Method
Yamazaki, Ichitaro; Bai, Zhaojun; Simon, Horst; Wang, Lin-Wang; Wu, K.
2008-10-01
The Thick-Restart Lanczos (TRLan) method is an effective method for solving large-scale Hermitian eigenvalue problems. However, its performance strongly depends on the dimension of the projection subspace. In this paper, we propose an objective function to quantify the effectiveness of a chosen subspace dimension, and then introduce an adaptive scheme to dynamically adjust the dimension at each restart. An open-source software package, nu-TRLan, which implements the TRLan method with this adaptive projection subspace dimension is available in the public domain. The numerical results of synthetic eigenvalue problems are presented to demonstrate that nu-TRLan achieves speedups of between 0.9 and 5.1 over the static method using a default subspace dimension. To demonstrate the effectiveness of nu-TRLan in a real application, we apply it to the electronic structure calculations of quantum dots. We show that nu-TRLan can achieve speedups of greater than 1.69 over the state-of-the-art eigensolver for this application, which is based on the Conjugate Gradient method with a powerful preconditioner.
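To make the role of the projection subspace dimension concrete, here is a minimal sketch (assuming NumPy) of a plain m-step Lanczos iteration with full reorthogonalization. This is not nu-TRLan itself: TRLan adds thick restarts, and the adaptive scheme in the abstract tunes the dimension `m` at each restart; here `m` is simply fixed.

```python
import numpy as np

def lanczos_ritz(A, m, seed=0):
    """m-step Lanczos with full reorthogonalization for symmetric A.
    Returns the Ritz values (eigenvalue estimates) from the projection of A
    onto the m-dimensional Krylov subspace; m is the knob that an adaptive
    thick-restart scheme would tune."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    Q = np.zeros((n, m + 1))
    alpha, beta = np.zeros(m), np.zeros(m)
    q0 = rng.standard_normal(n)
    Q[:, 0] = q0 / np.linalg.norm(q0)
    for j in range(m):
        w = A @ Q[:, j]
        alpha[j] = Q[:, j] @ w
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)  # reorthogonalize against basis
        beta[j] = np.linalg.norm(w)
        Q[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta[:m - 1], 1) + np.diag(beta[:m - 1], -1)
    return np.linalg.eigvalsh(T)

# Diagonal test matrix with a well-separated extreme eigenvalue at 150.
A = np.diag(np.concatenate([np.arange(1.0, 100.0), [150.0]]))
ritz = lanczos_ritz(A, 20)
# the largest Ritz value converges to 150 long before m reaches n
```

The trade-off the paper quantifies is visible here: a larger `m` converges in fewer matrix-vector products but costs more memory and reorthogonalization work per step.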
Cai, Xianfa; Wei, Jia; Wen, Guihua; Yu, Zhiwen
2014-03-01
Precise cancer classification is essential to the successful diagnosis and treatment of cancers. Although semisupervised dimensionality reduction approaches perform very well on clean datasets, the topology of the neighborhood constructed with most existing approaches is unstable in the presence of high-dimensional data with noise. In order to solve this problem, a novel local and global preserving semisupervised dimensionality reduction algorithm based on random subspaces, termed RSLGSSDR, is proposed. The algorithm first designs multiple diverse graphs on different random subspaces of the dataset and then fuses these graphs into a mixture graph on which dimensionality reduction is performed. Because the mixture graph is constructed in lower-dimensional subspaces, it eases the difficulties of graph construction on high-dimensional samples, and the diversity of the random subspaces allows it to capture the complicated geometric distribution of the data. Experimental results on public gene expression datasets demonstrate that the proposed RSLGSSDR not only has superior recognition performance to competitive methods, but is also robust against a wide range of values of the input parameters.
Large-margin predictive latent subspace learning for multiview data analysis.
Chen, Ning; Zhu, Jun; Sun, Fuchun; Xing, Eric Poe
2012-12-01
Learning salient representations of multiview data is an essential step in many applications such as image classification, retrieval, and annotation. Standard predictive methods, such as support vector machines, often directly use all the features available without taking into consideration the presence of distinct views and the resultant view dependencies, coherence, and complementarity that offer key insights into the semantics of the data; they therefore perform weakly and cannot support view-level analysis. This paper presents a statistical method to learn a predictive subspace representation underlying multiple views, leveraging both multiview dependencies and the availability of supervising side information. Our approach is based on a multiview latent subspace Markov network (MN) which fulfills a weak conditional independence assumption: multiview observations and response variables are conditionally independent given a set of latent variables. To learn the latent subspace MN, we develop a large-margin approach which jointly maximizes the data likelihood and minimizes a prediction loss on training data. Learning and inference are done efficiently with a contrastive divergence method. Finally, we extensively evaluate the large-margin latent MN on real image and hotel review datasets for classification, regression, image annotation, and retrieval. Our results demonstrate that the large-margin approach can achieve significant improvements both in prediction performance and in discovering predictive latent subspace representations.
Detecting and characterizing coal mine related seismicity in the Western U.S. using subspace methods
NASA Astrophysics Data System (ADS)
Chambers, Derrick J. A.; Koper, Keith D.; Pankow, Kristine L.; McCarter, Michael K.
2015-11-01
We present an approach for subspace detection of small seismic events that includes methods for estimating magnitudes and associating detections from multiple stations into unique events. The process is used to identify mining related seismicity from a surface coal mine and an underground coal mining district, both located in the Western U.S. Using a blasting log and a locally derived seismic catalogue as ground truth, we assess detector performance in terms of verified detections, false positives and failed detections. We are able to correctly identify over 95 per cent of the surface coal mine blasts and about 33 per cent of the events from the underground mining district, while keeping the number of potential false positives relatively low by requiring all detections to occur on two stations. We find that most of the potential false detections for the underground coal district are genuine events missed by the local seismic network, demonstrating the usefulness of regional subspace detectors in augmenting local catalogues. We note a trade-off in detection performance between stations at smaller source-receiver distances, which have increased signal-to-noise ratio, and stations at larger distances, which have greater waveform similarity. We also explore the increased detection capabilities of a single higher dimension subspace detector, compared to multiple lower dimension detectors, in identifying events that can be described as linear combinations of training events. We find, in our data set, that such an advantage can be significant, justifying the use of a subspace detection scheme over conventional correlation methods.
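The detection statistic described here can be sketched in a few lines (assuming NumPy; the waveforms and thresholds below are synthetic and illustrative, not the authors' data or code): an SVD of aligned training waveforms yields an orthonormal basis for the signal subspace, and each sliding window is scored by the fraction of its energy captured by that subspace.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_detector(templates, dim):
    """SVD of aligned training events gives an orthonormal subspace basis U."""
    X = np.stack(templates, axis=1)            # columns = training waveforms
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :dim]                          # rank-`dim` signal subspace

def detect(U, trace, threshold):
    """Score each window by ||U.T x||^2 / ||x||^2, the captured energy fraction."""
    n = U.shape[0]
    stats = [np.sum((U.T @ trace[i:i + n]) ** 2) / np.sum(trace[i:i + n] ** 2)
             for i in range(len(trace) - n + 1)]
    return [i for i, s in enumerate(stats) if s >= threshold]

event = rng.standard_normal(200)               # synthetic "master event"
templates = [event + 0.1 * rng.standard_normal(200) for _ in range(4)]
U = make_detector(templates, dim=2)

trace = 0.1 * rng.standard_normal(2000)        # background noise
trace[700:900] += event                        # one buried repeat of the event
hits = detect(U, trace, threshold=0.5)
# detects the buried event at its true offset, sample 700
```

The higher-dimension-versus-multiple-detectors trade-off in the abstract corresponds to the choice of `dim`: a larger subspace captures linear combinations of training events at the cost of admitting more noise energy.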
Robust normal estimation of point cloud with sharp features via subspace clustering
NASA Astrophysics Data System (ADS)
Luo, Pei; Wu, Zhuangzhi; Xia, Chunhe; Feng, Lu; Jia, Bo
2014-01-01
Normal estimation is an essential step in point cloud based geometric processing, such as high quality point based rendering and surface reconstruction. In this paper, we present a clustering based method for normal estimation which preserves sharp features. For a piecewise smooth point cloud, the k-nearest neighbors of one point lie on a union of multiple subspaces. Given the PCA normals as input, we perform a subspace clustering algorithm to segment these subspaces. Normals are estimated from the points lying in the same subspace as the center point. In contrast to previous methods, we exploit the low-rankness of the input data by seeking the lowest-rank representation among all the candidates that can represent one normal as a linear combination of the others. Integration of Low-Rank Representation (LRR) makes our method robust to noise. Moreover, our method can simultaneously produce the estimated normals and the local structures, which are especially useful for denoising and segmentation applications. The experimental results show that our approach successfully recovers sharp features and generates more reliable results than the state of the art.
Jankovic, Marko; Ogawa, Hidemitsu
2004-10-01
Principal Component Analysis (PCA) and Principal Subspace Analysis (PSA) are classic techniques in statistical data analysis, feature extraction and data compression. Given a set of multivariate measurements, PCA and PSA provide a smaller set of "basis vectors" with less redundancy, and a subspace spanned by them, respectively. Artificial neurons and neural networks have been shown to perform PSA and PCA when gradient ascent (descent) learning rules are used, which is related to the constrained maximization (minimization) of statistical objective functions. Due to their low complexity, such algorithms and their implementation in neural networks are potentially useful for tracking slow changes of correlations in the input data or for updating eigenvectors with new samples. In this paper we propose a PCA learning algorithm that is fully homogeneous with respect to neurons. The algorithm is obtained by modifying one of the best-known PSA learning algorithms, the Subspace Learning Algorithm (SLA). The modification is based on the Time-Oriented Hierarchical Method (TOHM), which uses two distinct time scales. On the faster time scale, the PSA algorithm governs the behavior of all output neurons. On the slower scale, output neurons compete to fulfill their "own interests": basis vectors in the principal subspace are rotated toward the principal eigenvectors. At the end of the paper we briefly analyze how (and why) the time-oriented hierarchical method can be used to transform any existing neural-network PSA method into a PCA method.
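The SLA that this paper modifies has a very compact form. Below is a minimal sketch (assuming NumPy; the data and step sizes are illustrative) of the classic Oja/Karhunen subspace rule, which converges to an orthonormal basis of the principal subspace but, as the abstract notes, not to the individual eigenvectors; recovering those is precisely what the TOHM modification adds.

```python
import numpy as np

rng = np.random.default_rng(0)

def sla(X, k, eta=0.002, epochs=150):
    """Subspace Learning Algorithm: W <- W + eta * (x y^T - W y y^T),
    with y = W^T x. Converges to an orthonormal basis of the principal
    k-dimensional subspace of the input correlation matrix."""
    n = X.shape[1]
    W = np.linalg.qr(rng.standard_normal((n, k)))[0]  # random orthonormal start
    for _ in range(epochs):
        for x in X:
            y = W.T @ x
            W += eta * (np.outer(x, y) - W @ np.outer(y, y))
    return W

# Data whose principal 2-D subspace is spanned by axes 0 and 1.
X = rng.standard_normal((500, 5)) * np.array([2.0, 1.5, 0.1, 0.1, 0.1])
W = sla(X, k=2)
P = W @ np.linalg.pinv(W)   # orthogonal projector onto span(W)
# P approaches diag(1, 1, 0, 0, 0): the subspace is recovered, while the
# columns of W need not align with the individual principal components.
```

The rule's homogeneity is visible in the update: every column of `W` obeys the same equation, which is the property the proposed PCA algorithm preserves.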
Zhang Xinding; Zhang Qinghua; Wang, Z. D.
2006-09-15
We propose a feasible scheme to achieve holonomic quantum computation in a decoherence-free subspace (DFS) with trapped ions. By the application of appropriate bichromatic laser fields on the designated ions, we are able to construct two noncommutable single-qubit gates and one controlled-phase gate using the holonomic scenario in the encoded DFS.
An analogue of the Littlewood-Paley theorem for orthoprojectors onto wavelet subspaces
NASA Astrophysics Data System (ADS)
Kudryavtsev, S. N.
2016-06-01
We prove an analogue of the Littlewood-Paley theorem for orthoprojectors onto wavelet subspaces corresponding to a non-isotropic multiresolution analysis generated by the tensor product of smooth scaling functions of one variable with sufficiently rapid decay at infinity.
Hesse, Christian W
2007-01-01
Accurate estimates of the dimension and an (orthogonal) basis of the signal subspace of noise corrupted multi-channel measurements are essential for accurate identification and extraction of any signals of interest within that subspace. For most biomedical signals comprising very large numbers of channels, including the magnetoencephalogram (MEG), the "true" number of underlying signals, although ultimately unknown, is unlikely to be of the same order as the number of measurements, and has to be estimated from the available data. This work examines several second-order statistical approaches to signal subspace (dimension) estimation with respect to their underlying assumptions and their performance in high-dimensional measurement spaces using 151-channel MEG data. The purpose is to identify which of these methods might be most appropriate for modeling the signal subspace structure of high-density MEG data recorded under controlled conditions, and what the practical consequences are with regard to the subsequent application of biophysical modeling and statistical source separation techniques.
Using spectral subspaces to improve infrared spectroscopy prediction of soil properties
NASA Astrophysics Data System (ADS)
Sila, Andrew; Shepherd, Keith D.; Pokhariyal, Ganesh P.; Towett, Erick; Weullow, Elvis; Nyambura, Mercy K.
2015-04-01
We propose a method for improving soil property predictions using local calibration models trained on datasets in spectral subspaces rather than in a global space. Previous studies have shown that local calibrations based on subsets of spectra selected by spectral similarity can improve model prediction performance where there is large population variance. Searching for relevant subspaces within a spectral collection to construct local models could result in models with high power and small prediction errors, but optimal methods for selecting local samples are not clear. Using a self-organizing map (SOM) method we obtained four mid-infrared subspaces for 1,907 soil sample spectra collected from 19 different countries by the Africa Soil Information Service. Subspace means for the four subspaces and five selected soil properties were: pH, 6.0, 6.1, 6.0, 5.6; Mehlich-3 Al, 358, 974, 614, 1032 (mg/kg); Mehlich-3 Ca, 363, 1161, 526, 4276 (mg/kg); Total Carbon, 0.4, 1.1, 0.6, 2.3 (% by weight); and Clay (%), 16.8, 46.4, 27.7, 63.3. Spectral subspaces were also obtained using a cosine similarity method to calculate the angle between the entire sample spectra space and the spectra of 10 pure soil minerals. We found the sample soil spectra to be similar to four pure minerals, distributed as: Halloysite (n1=214), Illite (n2=743), Montmorillonite (n3=914) and Quartz (n4=32). Cross-validated partial least squares regression models were developed using two-thirds of the sample spectra from each subspace for the five soil properties. We evaluated prediction performance of the models using the root mean square error of prediction (RMSEP) for a one-third holdout set. Local models significantly improved prediction performance compared with the global model. The SOM method reduced RMSEP for total carbon by 10% (global RMSEP = 0.41), Mehlich-3 Ca by 17% (global RMSEP = 1880), Mehlich-3 Al by 21% (global RMSEP = 206), and clay content by 6% (global RMSEP = 13.6), but not for pH. Individual SOM
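The cosine-similarity assignment step described above is easy to sketch (assuming NumPy; the Gaussian "spectra" below are synthetic stand-ins, not real mineral spectra): normalize each spectrum and each reference, then assign every sample to the reference with which it makes the smallest angle, i.e. the largest cosine similarity.

```python
import numpy as np

def assign_subspaces(spectra, references):
    """Assign each row of `spectra` to the most similar row of `references`
    by cosine similarity (smallest angle between unit vectors)."""
    S = spectra / np.linalg.norm(spectra, axis=1, keepdims=True)
    R = references / np.linalg.norm(references, axis=1, keepdims=True)
    return np.argmax(S @ R.T, axis=1)      # index of the closest reference

rng = np.random.default_rng(0)
wav = np.linspace(0.0, 1.0, 50)
# three synthetic "pure mineral" reference spectra: Gaussian absorption bands
refs = np.stack([np.exp(-((wav - c) ** 2) / 0.01) for c in (0.2, 0.5, 0.8)])
# ten noisy samples resembling the third reference (band centered at 0.8)
samples = refs[2] + 0.05 * rng.standard_normal((10, 50))
labels = assign_subspaces(samples, refs)
# every sample is assigned to the third reference (index 2)
```

A local calibration model would then be fitted separately to the samples within each resulting subspace, which is the structure the abstract's RMSEP comparison evaluates.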
Brunton, Steven L; Brunton, Bingni W; Proctor, Joshua L; Kutz, J Nathan
2016-01-01
In this work, we explore finite-dimensional linear representations of nonlinear dynamical systems by restricting the Koopman operator to an invariant subspace spanned by specially chosen observable functions. The Koopman operator is an infinite-dimensional linear operator that evolves functions of the state of a dynamical system. Dominant terms in the Koopman expansion are typically computed using dynamic mode decomposition (DMD). DMD uses linear measurements of the state variables, and it has recently been shown that this may be too restrictive for nonlinear systems. Choosing the right nonlinear observable functions to form an invariant subspace where it is possible to obtain linear reduced-order models, especially those that are useful for control, is an open challenge. Here, we investigate the choice of observable functions for Koopman analysis that enable the use of optimal linear control techniques on nonlinear problems. First, to include a cost on the state of the system, as in linear quadratic regulator (LQR) control, it is helpful to include these states in the observable subspace, as in DMD. However, we find that this is only possible when there is a single isolated fixed point, as systems with multiple fixed points or more complicated attractors are not globally topologically conjugate to a finite-dimensional linear system, and cannot be represented by a finite-dimensional linear Koopman subspace that includes the state. We then present a data-driven strategy to identify relevant observable functions for Koopman analysis by leveraging a new algorithm to determine relevant terms in a dynamical system by ℓ1-regularized regression of the data in a nonlinear function space; we also show how this algorithm is related to DMD. Finally, we demonstrate the usefulness of nonlinear observable subspaces in the design of Koopman operator optimal control laws for fully nonlinear systems using techniques from linear optimal control.
Carroll, C M; Carroll, S M; Overgoor, M L; Tobin, G; Barker, J H
1997-07-01
Ischemic preconditioning of the myocardium with repeated brief periods of ischemia and reperfusion prior to prolonged ischemia significantly reduces subsequent myocardial infarction. Following ischemic preconditioning, two "windows of opportunity" (early and late) exist, during which time prolonged ischemia can occur with reduced infarction size. The early window occurs at approximately 4 hours and the late window at 24 hours following ischemic preconditioning of the myocardium. We investigated if ischemic preconditioning of skeletal muscle prior to flap creation improved subsequent flap survival and perfusion immediately or 24 hours following ischemic preconditioning. Currently, no data exist on the utilization of ischemic preconditioning in this fashion. The animal model used was the latissimus dorsi muscle of adult male Sprague-Dawley rats. Animals were assigned to three groups, and the right or left latissimus dorsi muscle was chosen randomly in each animal. Group 1 (n = 12) was the control group, in which the entire latissimus dorsi muscle was elevated acutely without ischemic preconditioning. Group 2 (n = 8) investigated the effects of ischemic preconditioning in the early window. In this group, the latissimus dorsi muscle was elevated immediately following preconditioning. Group 3 (n = 8) investigated the effects of ischemic preconditioning in the late window, with elevation of the latissimus dorsi muscle 24 hours following ischemic preconditioning. The preconditioning regimen used in groups 2 and 3 was two 30-minute episodes of normothermic global ischemia with intervening 10-minute episodes of reperfusion. Latissimus dorsi muscle ischemia was created by occlusion of the thoracodorsal artery and vein and the intercostal perforators, after isolation of the muscle on these vessels. Muscle perfusion was assessed by a laser-Doppler perfusion imager. One week after flap elevation, muscle necrosis was quantified in all groups by means of computer-assisted digital
Brunton, Steven L.; Brunton, Bingni W.; Proctor, Joshua L.; Kutz, J. Nathan
2016-01-01
In this work, we explore finite-dimensional linear representations of nonlinear dynamical systems by restricting the Koopman operator to an invariant subspace spanned by specially chosen observable functions. The Koopman operator is an infinite-dimensional linear operator that evolves functions of the state of a dynamical system. Dominant terms in the Koopman expansion are typically computed using dynamic mode decomposition (DMD). DMD uses linear measurements of the state variables, and it has recently been shown that this may be too restrictive for nonlinear systems. Choosing the right nonlinear observable functions to form an invariant subspace where it is possible to obtain linear reduced-order models, especially those that are useful for control, is an open challenge. Here, we investigate the choice of observable functions for Koopman analysis that enable the use of optimal linear control techniques on nonlinear problems. First, to include a cost on the state of the system, as in linear quadratic regulator (LQR) control, it is helpful to include these states in the observable subspace, as in DMD. However, we find that this is only possible when there is a single isolated fixed point, as systems with multiple fixed points or more complicated attractors are not globally topologically conjugate to a finite-dimensional linear system, and cannot be represented by a finite-dimensional linear Koopman subspace that includes the state. We then present a data-driven strategy to identify relevant observable functions for Koopman analysis by leveraging a new algorithm to determine relevant terms in a dynamical system by ℓ1-regularized regression of the data in a nonlinear function space; we also show how this algorithm is related to DMD. Finally, we demonstrate the usefulness of nonlinear observable subspaces in the design of Koopman operator optimal control laws for fully nonlinear systems using techniques from linear optimal control. PMID:26919740
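The DMD step that the abstract builds on can be sketched compactly (assuming NumPy; the 2x2 system below is an illustrative linear example, not the paper's test cases). For data generated by a linear map, exact DMD recovers that map's spectrum; for a nonlinear system, the same recipe applied to well-chosen nonlinear observables approximates the Koopman operator restricted to their span.

```python
import numpy as np

def dmd(X, Y, r):
    """Exact DMD on snapshot pairs (columns of Y are the columns of X
    advanced one step). Returns eigenvalues and modes of the best-fit
    linear map, rank-truncated to r."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    U, s, V = U[:, :r], s[:r], Vt[:r].T
    Atilde = U.T @ Y @ V / s            # low-rank representation of the map
    evals, evecs = np.linalg.eig(Atilde)
    modes = (Y @ V / s) @ evecs         # exact DMD modes
    return evals, modes

# Snapshots of a known linear system: DMD must recover its spectrum.
A = np.array([[0.9, 0.2],
              [0.0, 0.5]])
x = np.array([1.0, 1.0])
snaps = [x]
for _ in range(20):
    x = A @ x
    snaps.append(x)
S = np.column_stack(snaps)
evals, modes = dmd(S[:, :-1], S[:, 1:], r=2)
# evals recovers A's eigenvalues {0.5, 0.9} to machine precision; for a
# nonlinear system one would instead stack nonlinear observables g(x).
```

The paper's point is precisely the choice of those observables `g(x)`: a Koopman-invariant subspace that contains the state only exists globally in special cases, such as a single isolated fixed point.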
Finding Chemical Reaction Paths with a Multilevel Preconditioning Protocol
Kale, Seyit; Sode, Olaseni; Weare, Jonathan; Dinner, Aaron R.
2014-11-07
Finding transition paths for chemical reactions can be computationally costly owing to the level of quantum-chemical theory needed for accuracy. Here, we show that a multilevel preconditioning scheme that was recently introduced (Tempkin et al. J. Chem. Phys. 2014, 140, 184114) can be used to accelerate quantum-chemical string calculations. We demonstrate the method by finding minimum-energy paths for two well-characterized reactions: tautomerization of malonaldehyde and Claisen rearrangement of chorismate to prephenate. For these reactions, we show that preconditioning density functional theory (DFT) with a semiempirical method reduces the computational cost for reaching a converged path that is an optimum under DFT by several fold. The approach also shows promise for free energy calculations when thermal noise can be controlled.
Finding Chemical Reaction Paths with a Multilevel Preconditioning Protocol
Kale, Seyit; Sode, Olaseni; Weare, Jonathan; ...
2014-11-07
Finding transition paths for chemical reactions can be computationally costly owing to the level of quantum-chemical theory needed for accuracy. Here, we show that a multilevel preconditioning scheme that was recently introduced (Tempkin et al. J. Chem. Phys. 2014, 140, 184114) can be used to accelerate quantum-chemical string calculations. We demonstrate the method by finding minimum-energy paths for two well-characterized reactions: tautomerization of malonaldehyde and Claisen rearrangement of chorismate to prephenate. For these reactions, we show that preconditioning density functional theory (DFT) with a semiempirical method reduces the computational cost for reaching a converged path that is an optimum under DFT by several fold. The approach also shows promise for free energy calculations when thermal noise can be controlled.
Molecular mechanisms of ischemic preconditioning in the kidney
Haase, Volker H.
2015-01-01
More effective therapeutic strategies for the prevention and treatment of acute kidney injury (AKI) are needed to improve the high morbidity and mortality associated with this frequently encountered clinical condition. Ischemic and/or hypoxic preconditioning attenuates susceptibility to ischemic injury, which results from both oxygen and nutrient deprivation and accounts for most cases of AKI. While multiple signaling pathways have been implicated in renoprotection, this review will focus on oxygen-regulated cellular and molecular responses that enhance the kidney's tolerance to ischemia and promote renal repair. Central mediators of cellular adaptation to hypoxia are hypoxia-inducible factors (HIFs). HIFs play a crucial role in ischemic/hypoxic preconditioning through the reprogramming of cellular energy metabolism, and by coordinating adenosine and nitric oxide signaling with antiapoptotic, oxidative stress, and immune responses. The therapeutic potential of HIF activation for the treatment and prevention of ischemic injuries will be critically examined in this review. PMID:26311114
Shape reanalysis and sensitivities utilizing preconditioned iterative boundary solvers
NASA Technical Reports Server (NTRS)
Guru Prasad, K.; Kane, J. H.
1992-01-01
The computational advantages associated with the utilization of preconditioned iterative equation solvers are quantified for the reanalysis of perturbed shapes using continuum structural boundary element analysis (BEA). Both single- and multi-zone three-dimensional problems are examined. Significant reductions in computer time are obtained by making use of previously computed solution vectors and preconditioners in subsequent analyses. The effectiveness of this technique is demonstrated for the computation of shape response sensitivities required in shape optimization. Computer times and accuracies achieved using the preconditioned iterative solvers are compared with those obtained via direct solvers and implicit differentiation of the boundary integral equations. It is concluded that this approach, employing preconditioned iterative equation solvers in reanalysis and sensitivity analysis, can be competitive with, if not superior to, approaches involving direct solvers.
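The reanalysis idea above, reusing the previous solution as the initial guess and the previous preconditioner after a small shape perturbation, can be illustrated with a minimal sketch (assuming NumPy; the dense random SPD matrix and Jacobi preconditioner are stand-ins for a BEA system, not the paper's solver).

```python
import numpy as np

def pcg(A, b, x0, M_inv, tol=1e-10, max_iter=500):
    """Preconditioned conjugate gradients; M_inv applies the preconditioner."""
    x = x0.copy()
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    iters = 0
    while np.linalg.norm(r) > tol * np.linalg.norm(b) and iters < max_iter:
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        z_new = M_inv(r_new)
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
        iters += 1
    return x, iters

rng = np.random.default_rng(0)
n = 200
Q = np.linalg.qr(rng.standard_normal((n, n)))[0]
A = Q @ np.diag(np.linspace(1.0, 100.0, n)) @ Q.T   # SPD "stiffness" matrix
b = rng.standard_normal(n)
jacobi = lambda r: r / np.diag(A)                   # preconditioner built once

x_cold, it_cold = pcg(A, b, np.zeros(n), jacobi)
A2 = A + 1e-3 * np.eye(n)                           # small shape perturbation
# reanalysis: reuse the old solution as initial guess, and the old preconditioner
x_warm, it_warm = pcg(A2, b, x_cold, jacobi)
# the warm-started solve converges in noticeably fewer iterations
```

The savings reported in the abstract come from exactly this effect, compounded over the many nearby solves required for sensitivities in shape optimization.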
Finding Chemical Reaction Paths with a Multilevel Preconditioning Protocol
2015-01-01
Finding transition paths for chemical reactions can be computationally costly owing to the level of quantum-chemical theory needed for accuracy. Here, we show that a multilevel preconditioning scheme that was recently introduced (Tempkin et al. J. Chem. Phys.2014, 140, 184114) can be used to accelerate quantum-chemical string calculations. We demonstrate the method by finding minimum-energy paths for two well-characterized reactions: tautomerization of malonaldehyde and Claissen rearrangement of chorismate to prephanate. For these reactions, we show that preconditioning density functional theory (DFT) with a semiempirical method reduces the computational cost for reaching a converged path that is an optimum under DFT by several fold. The approach also shows promise for free energy calculations when thermal noise can be controlled. PMID:25516726
HMC algorithm with multiple time scale integration and mass preconditioning
NASA Astrophysics Data System (ADS)
Urbach, C.; Jansen, K.; Shindler, A.; Wenger, U.
2006-01-01
We present a variant of the HMC algorithm with mass preconditioning (Hasenbusch acceleration) and multiple time scale integration. We have tested this variant for standard Wilson fermions at β=5.6 and at pion masses ranging from 380 to 680 MeV. We show that in this situation its performance is comparable to the recently proposed HMC variant with domain decomposition as preconditioner. We give an update of the "Berlin Wall" figure, comparing the performance of our variant of the HMC algorithm to other published performance data. Advantages of the HMC algorithm with mass preconditioning and multiple time scale integration are that it is straightforward to implement and can be used in combination with a wide variety of lattice Dirac operators.
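The multiple-time-scale integrator at the heart of this variant can be sketched with a toy split Hamiltonian (pure Python; the quadratic "slow" and "fast" forces below are illustrative stand-ins for the expensive and the mass-preconditioned cheap parts of the fermion action, not a lattice Dirac operator).

```python
def nested_leapfrog(q, p, F1, F2, dt, n_outer, m):
    """Leapfrog with two time scales: the expensive force F1 is evaluated
    once per outer step of size dt, while the cheap, stiff force F2 is
    integrated with m inner leapfrog steps of size dt/m."""
    for _ in range(n_outer):
        p += 0.5 * dt * F1(q)
        for _ in range(m):
            p += 0.5 * (dt / m) * F2(q)
            q += (dt / m) * p
            p += 0.5 * (dt / m) * F2(q)
        p += 0.5 * dt * F1(q)
    return q, p

# Toy split: V1 = 0.05*q^2 is the "slow" piece, V2 = 0.5*q^2 the "fast" one,
# so the exact dynamics is a harmonic oscillator with V = 0.55*q^2.
F1 = lambda q: -0.1 * q
F2 = lambda q: -1.0 * q
q, p = nested_leapfrog(1.0, 0.0, F1, F2, dt=0.1, n_outer=100, m=5)
H = 0.5 * p ** 2 + 0.55 * q ** 2
# H stays near its initial value 0.55: the scheme is symplectic and
# reversible, which is what the HMC accept/reject step requires.
```

Mass preconditioning plays the role of making the expensive force small and smooth, so it can safely live on the coarse outer time scale while the cheap term absorbs the stiffness.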
Object-oriented design of preconditioned iterative methods
Bruaset, A.M.
1994-12-31
In this talk the author discusses how object-oriented programming techniques can be used to develop a flexible software package for preconditioned iterative methods. The ideas described have been used to implement the linear algebra part of Diffpack, which is a collection of C++ class libraries that provides high-level tools for the solution of partial differential equations. In particular, this software package is aimed at rapid development of PDE-based numerical simulators, primarily using finite element methods.
Parallel Domain Decomposition Preconditioning for Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Chan, Tony F.; Tang, Wei-Pai; Kutler, Paul (Technical Monitor)
1998-01-01
This viewgraph presentation gives an overview of the parallel domain decomposition preconditioning for computational fluid dynamics. Details are given on some difficult fluid flow problems, stabilized spatial discretizations, and Newton's method for solving the discretized flow equations. Schur complement domain decomposition is described through basic formulation, simplifying strategies (including iterative subdomain and Schur complement solves, matrix element dropping, localized Schur complement computation, and supersparse computations), and performance evaluation.
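The basic Schur complement formulation mentioned above can be shown in miniature (assuming NumPy; the small diagonally dominant matrix is an illustrative stand-in for a discretized flow problem): order the unknowns as (interior of subdomain 1, interior of subdomain 2, interface), eliminate the interiors, and solve the small interface system first.

```python
import numpy as np

rng = np.random.default_rng(0)

def schur_solve(A, b, n1, n2):
    """Solve A x = b for the block structure
       [[A11,   0, A1g],
        [  0, A22, A2g],
        [Ag1, Ag2, Agg]]
    by eliminating the subdomain interiors and solving the interface
    (Schur complement) system first."""
    i1, i2 = slice(0, n1), slice(n1, n1 + n2)
    g = slice(n1 + n2, None)
    A11, A22, Agg = A[i1, i1], A[i2, i2], A[g, g]
    A1g, A2g, Ag1, Ag2 = A[i1, g], A[i2, g], A[g, i1], A[g, i2]
    S = Agg - Ag1 @ np.linalg.solve(A11, A1g) - Ag2 @ np.linalg.solve(A22, A2g)
    rhs = b[g] - Ag1 @ np.linalg.solve(A11, b[i1]) - Ag2 @ np.linalg.solve(A22, b[i2])
    xg = np.linalg.solve(S, rhs)                     # interface unknowns
    x1 = np.linalg.solve(A11, b[i1] - A1g @ xg)      # back-substitute interiors
    x2 = np.linalg.solve(A22, b[i2] - A2g @ xg)
    return np.concatenate([x1, x2, xg])

# Small SPD test matrix: two 4-unknown subdomains coupled only through a
# 2-node interface.
n1 = n2 = 4
A1g = rng.uniform(-1, 1, (n1, 2))
A2g = rng.uniform(-1, 1, (n2, 2))
A = np.block([[4 * np.eye(n1), np.zeros((n1, n2)), A1g],
              [np.zeros((n2, n1)), 4 * np.eye(n2), A2g],
              [A1g.T, A2g.T, 10 * np.eye(2)]])
b = rng.standard_normal(10)
x = schur_solve(A, b, n1, n2)
# x matches the direct solve of the full system
```

The simplifying strategies listed in the presentation (iterative subdomain and Schur solves, element dropping, localized and supersparse computation) are all ways of approximating the exact `solve` calls above so the scheme scales in parallel.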
Preconditioned Alternating Projection Algorithms for Maximum a Posteriori ECT Reconstruction
Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng
2012-01-01
We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators arising from the convex functions that define the TV-norm and the constraint involved in the problem. This characterization of the solution via proximity operators, which define two projection operators, naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce into the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We prove convergence of the preconditioned alternating projection algorithm theoretically. In numerical experiments, the performance of our algorithm, with an appropriately selected preconditioning matrix, is compared with that of the conventional MAP expectation-maximization (MAP-EM) algorithm with a TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner significantly outperforms EM-TV in all aspects, including convergence speed, noise in the reconstructed images, and image quality. It also outperforms the nested EM-TV in convergence speed while providing comparable image quality. PMID:23271835
The evolving concept of physiological ischemia training vs. ischemia preconditioning.
Ni, Jun; Lu, Hongjian; Lu, Xiao; Jiang, Minghui; Peng, Qingyun; Ren, Caili; Xiang, Jie; Mei, Chengyao; Li, Jianan
2015-11-01
Ischemic heart diseases are the leading cause of death, with increasing numbers of patients worldwide. Despite advances in revascularization techniques, angiogenic therapies remain highly attractive. Physiological ischemia training, first proposed in our laboratory, refers to reversible ischemia training of normal skeletal muscles, using a tourniquet or isometric contraction to cause physiological ischemia for about 4 weeks, in order to trigger molecular and cellular mechanisms that promote angiogenesis and the formation of collateral vessels and protect remote ischemic areas. Physiological ischemia training therapy augments angiogenesis in the ischemic myocardium by inducing differential expression of proteins involved in energy metabolism, cell migration, protein folding, and generation. It upregulates the expression of vascular endothelial growth factor, induces angiogenesis, and protects the myocardium when infarction occurs by increasing circulating endothelial progenitor cells and enhancing their migration, which is in accordance with physical training in heart disease rehabilitation. These findings may lead to a new approach of therapeutic angiogenesis for patients with ischemic heart diseases. On the basis of the promising results in animal studies, studies were also conducted in patients with coronary artery disease, without any adverse effect in vivo, indicating that physiological ischemia training therapy is a safe, effective and non-invasive angiogenic approach for cardiovascular rehabilitation. Preconditioning is considered to be the most protective intervention against myocardial ischemia-reperfusion injury to date. Physiological ischemia training is different from preconditioning. This review summarizes the preclinical and clinical data on physiological ischemia training and its differences from preconditioning.
Preconditioning the bidomain model with almost linear complexity
NASA Astrophysics Data System (ADS)
Pierre, Charles
2012-01-01
The bidomain model is widely used in electrocardiology to simulate the spreading of excitation in the myocardium and electrocardiograms. It consists of a system of two parabolic reaction-diffusion equations coupled with an ODE system. Its discretisation yields an ill-conditioned system matrix to be inverted at each time step: simulations based on the bidomain model are therefore associated with high computational costs. In this paper we propose a preconditioning for the bidomain model, either for an isolated heart or in an extended framework including a coupling with the surrounding tissues (the torso). The preconditioning is based on a formulation of the discrete problem that is shown to be symmetric positive semi-definite. A block LU decomposition of the system together with a heuristic approximation (referred to as the monodomain approximation) are the key ingredients for the preconditioning definition. Numerical results are provided for two test cases: a 2D test case on a realistic slice of the thorax based on a segmented heart medical image geometry, and a 3D test case involving a small cubic slab of tissue with orthotropic anisotropy. The analysis of the resulting computational cost (both in terms of CPU time and of iteration number) shows an almost linear complexity with the problem size, i.e. of type n log^α(n) (for some constant α), which is the optimal complexity for such problems.
Islet preconditioning via multimodal microfluidic modulation of intermittent hypoxia.
Lo, Joe F; Wang, Yong; Blake, Alexander; Yu, Gene; Harvat, Tricia A; Jeon, Hyojin; Oberholzer, Jose; Eddington, David T
2012-02-21
Simultaneous stimulation of ex vivo pancreatic islets with dynamic oxygen and glucose is a critical technique for studying how hypoxia alters glucose-stimulated response, especially in transplant environments. Standard techniques using a hypoxic chamber cannot provide both oxygen and glucose modulations while monitoring stimulus-secretion coupling factors in real time. Using a novel microfluidic device with integrated glucose and oxygen modulations, we quantified hypoxic impairment of islet response by calcium influx, mitochondrial potentials, and insulin secretion. Glucose-induced calcium response magnitude and phase were suppressed by hypoxia, while mitochondrial hyperpolarization and insulin secretion decreased in coordination. More importantly, hypoxic response was improved by preconditioning islets to intermittent hypoxia (IH, 1 min/1 min 5-21% cycling for 1 h), translating to improved insulin secretion. Moreover, blocking mitochondrial K(ATP) channels removed preconditioning benefits of IH, similar to mechanisms in preconditioned cardiomyocytes. Additionally, the multimodal device can be applied to a variety of dynamic oxygen-metabolic studies in other ex vivo tissues.
Thermal Preconditioning of MIMS Type K Thermocouples to Reduce Drift
NASA Astrophysics Data System (ADS)
Webster, E. S.
2017-01-01
Type K thermocouples are the most widely used temperature sensors in industry and are often used in the convenient mineral-insulated metal-sheathed (MIMS) format. The MIMS format provides almost total immunity to oxide-related drift in the 800°C-1000°C range. However, crystalline ordering of the atomic structure causes drift in the range 200°C-600°C. Problematically, the effects of this ordering are reversible, leading to hysteresis in some applications. Typically, MIMS cable is subjected to a post-manufacturing high-temperature recrystallization anneal to remove cold-work and place the thermocouple in a `known state.' However, variations in the temperatures and times of these exposures can lead to variations in the `as-received state.' This study gives guidelines on the best thermal preconditioning of 3 mm MIMS Type K thermocouples in order to minimize drift and achieve the most reproducible temperature measurements. Experimental results demonstrate the consequences of using Type K MIMS thermocouples in different states, including the as-received state, after a high-temperature recrystallization anneal, and after preconditioning anneals at 200°C, 300°C, 400°C, and 500°C. It is also shown that meaningful calibration is possible with the use of regular preconditioning anneals.
Financial preconditions for successful community initiatives for the uninsured.
Song, Paula H; Smith, Dean G
2007-01-01
Community-based initiatives are increasingly being implemented as a strategy to address the health needs of the community, with a growing body of evidence on successes of various initiatives. This study addresses financial status indicators (preconditions) that might predict where community-based initiatives might have a better chance for success. We evaluated five community-based initiatives funded by the Communities in Charge (CIC) program sponsored by the Robert Wood Johnson Foundation. These initiatives focus on increasing access by easing financial barriers to care for the uninsured. At each site, we collected information on financial status indicators and interviewed key personnel from health services delivery and financing organizations. With full acknowledgment of the caveats associated with generalizations based on a small number of observations, we suggest four financial preconditions associated with successful initiation of CIC programs: (1) uncompensated care levels that negatively affect profitability, (2) reasonable financial stability of providers, (3) stable health insurance market, and (4) the potential to create new sources of funding. In general, sites that demonstrate successful program initiation are financially stressed enough by uncompensated care to gain the attention of local healthcare providers. However, they are not so strained and so concerned about revenue sources that they cannot afford to participate in the initiative. In addition to political and managerial indicators, we suggest that planning for community-based initiatives should include financial indicators of current health services delivery and financing organizations and consideration of whether they meet preconditions for success.
Preconditioned iterative methods for space-time fractional advection-diffusion equations
NASA Astrophysics Data System (ADS)
Zhao, Zhi; Jin, Xiao-Qing; Lin, Matthew M.
2016-08-01
In this paper, we propose practical numerical methods for solving a class of initial-boundary value problems of space-time fractional advection-diffusion equations. First, we propose an implicit method based on two-sided Grünwald formulae and discuss its stability and consistency. Then, we develop the preconditioned generalized minimal residual (preconditioned GMRES) method and preconditioned conjugate gradient normal residual (preconditioned CGNR) method with easily constructed preconditioners. Importantly, because the resulting systems are Toeplitz-like, the fast Fourier transform can be applied to significantly reduce the computational cost. We perform numerical experiments to demonstrate the efficiency of our preconditioners, even in cases with variable coefficients.
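As a rough illustration of the FFT idea described above, the sketch below solves a generic symmetric Toeplitz system with GMRES, preconditioned by Strang's circulant approximation applied in O(n log n) via the FFT. The test matrix is an assumed toy example, not the paper's Grünwald discretization.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.sparse.linalg import LinearOperator, gmres

# Toy symmetric, diagonally dominant Toeplitz system (assumed example).
n = 64
col = 2.0 ** (-np.arange(n))
col[0] = 3.0
T = toeplitz(col)

# Strang circulant preconditioner: copy the central diagonals of T.
c = np.zeros(n)
c[: n // 2] = col[: n // 2]
c[n // 2 + 1:] = col[1: n // 2][::-1]
c_eig = np.fft.fft(c)                      # circulant eigenvalues via FFT

def apply_Cinv(v):
    # C^{-1} v in O(n log n): the circulant is diagonalized by the Fourier matrix.
    return np.real(np.fft.ifft(np.fft.fft(v) / c_eig))

M = LinearOperator((n, n), matvec=apply_Cinv)
b = np.ones(n)
x, info = gmres(T, b, M=M, atol=1e-12)
```

Each preconditioner application costs two FFTs and a pointwise division, instead of the O(n^2) (or worse) cost of a direct triangular solve.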
Wakai, Takuma; Narasimhan, Purnima; Sakata, Hiroyuki; Wang, Eric; Yoshioka, Hideyuki; Kinouchi, Hiroyuki; Chan, Pak H
2016-12-01
Previous studies have shown that intraparenchymal transplantation of neural stem cells ameliorates neurological deficits in animals with intracerebral hemorrhage. However, hemoglobin in the host brain environment causes massive grafted cell death and reduces the effectiveness of this approach. Several studies have shown that preconditioning induced by sublethal hypoxia can markedly improve the tolerance of treated subjects to more severe insults. Therefore, we investigated whether hypoxic preconditioning enhances neural stem cell resilience to the hemorrhagic stroke environment and improves therapeutic effects in mice. To assess whether hypoxic preconditioning enhances neural stem cell survival when exposed to hemoglobin, neural stem cells were exposed to 5% hypoxia for 24 hours before exposure to hemoglobin. To study the effectiveness of hypoxic preconditioning on grafted-neural stem cell recovery, neural stem cells subjected to hypoxic preconditioning were grafted into the parenchyma 3 days after intracerebral hemorrhage. Hypoxic preconditioning significantly enhanced viability of the neural stem cells exposed to hemoglobin and increased grafted-cell survival in the intracerebral hemorrhage brain. Hypoxic preconditioning also increased neural stem cell secretion of vascular endothelial growth factor. Finally, transplanted neural stem cells with hypoxic preconditioning exhibited enhanced tissue-protective capability that accelerated behavioral recovery. Our results suggest that hypoxic preconditioning in neural stem cells improves efficacy of stem cell therapy for intracerebral hemorrhage.
Entanglement properties of positive operators with ranges in completely entangled subspaces
NASA Astrophysics Data System (ADS)
Sengupta, R.; Arvind; Singh, Ajit Iqbal
2014-12-01
We prove that the projection on a completely entangled subspace S of maximum dimension obtained by Parthasarathy [K. R. Parthasarathy, Proc. Indian Acad. Sci. Math. Sci. 114, 365 (2004), 10.1007/BF02829441] in a multipartite quantum system is not positive under partial transpose. We next show that a large number of positive operators with a range in S also have the same property. In this process we construct an orthonormal basis for S and provide a theorem to link the constructions of completely entangled subspaces due to Parthasarathy (as cited above), Bhat [B. V. R. Bhat, Int. J. Quantum Inf. 4, 325 (2006), 10.1142/S0219749906001797], and Johnston [N. Johnston, Phys. Rev. A 87, 064302 (2013), 10.1103/PhysRevA.87.064302].
NASA Astrophysics Data System (ADS)
Zhang, Yungang; Zhang, Bailing; Lu, Wenjin
2011-06-01
Histological images are important for the diagnosis of breast cancer. In this paper, we present a novel automatic breast cancer classification scheme based on histological images. The image features are extracted using the Curvelet Transform, statistics of the Gray Level Co-occurrence Matrix (GLCM), and Completed Local Binary Patterns (CLBP), respectively. The three different features are combined together and used for classification. A classifier ensemble approach, called Random Subspace Ensemble (RSE), is used to select and aggregate a set of base neural network classifiers for classification. The proposed multiple features and random subspace ensemble achieve a classification rate of 95.22% on a publicly available breast cancer image dataset, which compares favorably with the previously published result of 93.4%.
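A minimal sketch of the random-subspace ensemble idea: each base learner sees only a random subset of the features, and predictions are aggregated by majority vote. Nearest-centroid classifiers on synthetic data stand in for the paper's neural networks and image features; all sizes and parameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n, d = 600, 40
X = rng.standard_normal((n, d))
y = rng.integers(0, 2, size=n)
X[y == 1, :5] += 1.0                      # only 5 of 40 features are informative
Xtr, ytr, Xte, yte = X[:400], y[:400], X[400:], y[400:]

def fit_centroids(Xs, ys):
    # One centroid per class; nearest centroid is the base classifier.
    return np.stack([Xs[ys == c].mean(axis=0) for c in (0, 1)])

votes = np.zeros((Xte.shape[0], 2))
for _ in range(25):                       # 25 base classifiers
    feats = rng.choice(d, size=12, replace=False)   # random 12-D feature subspace
    C = fit_centroids(Xtr[:, feats], ytr)
    dists = np.linalg.norm(Xte[:, feats][:, None, :] - C[None, :, :], axis=2)
    votes[np.arange(len(Xte)), dists.argmin(axis=1)] += 1

pred = votes.argmax(axis=1)               # aggregate by majority vote
acc = (pred == yte).mean()
```

Because each learner trains in a different low-dimensional subspace, their errors decorrelate and the vote typically beats a single base classifier.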
Active subspace approach to reliability and safety assessments of small satellite separation
NASA Astrophysics Data System (ADS)
Hu, Xingzhi; Chen, Xiaoqian; Zhao, Yong; Tuo, Zhouhui; Yao, Wen
2017-02-01
Ever-increasing launch of small satellites demands an effective and efficient computer-aided analysis approach to shorten the ground test cycle and save the economic cost. However, the multiple influencing factors hamper the efficiency and accuracy of separation reliability assessment. In this study, a novel evaluation approach based on active subspace identification and response surface construction is established and verified. The formulation of small satellite separation is firstly derived, including equations of motion, separation and gravity forces, and quantity of interest. The active subspace reduces the dimension of uncertain inputs with minimum precision loss and a 4th degree multivariate polynomial regression (MPR) using cross validation is hand-coded for the propagation and error analysis. A common spring separation of small satellites is employed to demonstrate the accuracy and efficiency of the approach, which exhibits its potential use in widely existing needs of satellite separation analysis.
Experimental creation of quantum Zeno subspaces by repeated multi-spin projections in diamond
Kalb, N.; Cramer, J.; Twitchen, D. J.; Markham, M.; Hanson, R.; Taminiau, T. H.
2016-01-01
Repeated observations inhibit the coherent evolution of quantum states through the quantum Zeno effect. In multi-qubit systems this effect provides opportunities to control complex quantum states. Here, we experimentally demonstrate that repeatedly projecting joint observables of multiple spins creates quantum Zeno subspaces and simultaneously suppresses the dephasing caused by a quasi-static environment. We encode up to two logical qubits in these subspaces and show that the enhancement of the dephasing time with increasing number of projections follows a scaling law that is independent of the number of spins involved. These results provide experimental insight into the interplay between frequent multi-spin measurements and slowly varying noise and pave the way for tailoring the dynamics of multi-qubit systems through repeated projections. PMID:27713397
Pyshkin, P. V.; Luo, Da-Wei; Jing, Jun; You, J. Q.; Wu, Lian-Ao
2016-01-01
Holonomic quantum computation (HQC) may not show its full potential in quantum speedup due to the prerequisite of a long coherent runtime imposed by the adiabatic condition. Here we show that the conventional HQC can be dramatically accelerated by using external control fields, of which the effectiveness is exclusively determined by the integral of the control fields in the time domain. This control scheme can be realized with net zero energy cost and it is fault-tolerant against fluctuation and noise, significantly relaxing the experimental constraints. We demonstrate how to realize the scheme via decoherence-free subspaces. In this way we unify quantum robustness merits of this fault-tolerant control scheme, the conventional HQC and decoherence-free subspace, and propose an expedited holonomic quantum computation protocol. PMID:27886234
NASA Astrophysics Data System (ADS)
Barnum, Howard; Ortiz, Gerardo; Somma, Rolando; Viola, Lorenza
2005-12-01
We define what it means for a state in a convex cone of states on a space of observables to be generalized-entangled relative to a subspace of the observables, in a general ordered linear spaces framework for operational theories. This extends the notion of ordinary entanglement in quantum information theory to a much more general framework. Some important special cases are described, in which the distinguished observables are subspaces of the observables of a quantum system, leading to results like the identification of generalized unentangled states with Lie-group-theoretic coherent states when the special observables form an irreducibly represented Lie algebra. Some open problems, including that of generalizing the semigroup of local operations with classical communication to the convex cones setting, are discussed.
Iterated Class-Specific Subspaces for Speaker-Dependent Phoneme Classification
2008-01-01
to represent this speaker/phoneme combination. For the individual speaker experiments, we chose model order for each speaker/phoneme combination in... separately trained a model on each speaker/phoneme combination. In phoneme-class (PC) classifier training, we grouped all speakers of a given phoneme into a... single phoneme. When the subspace is limited, CSIS may be able to find a better statistical model of the distribution. A second piece of evidence that
Subspace based adaptive denoising of surface EMG from neurological injury patients
NASA Astrophysics Data System (ADS)
Liu, Jie; Ying, Dongwen; Zev Rymer, William; Zhou, Ping
2014-10-01
Objective: After neurological injuries such as spinal cord injury, voluntary surface electromyogram (EMG) signals recorded from affected muscles are often corrupted by interferences, such as spurious involuntary spikes and background noises produced by physiological and extrinsic/accidental origins, imposing difficulties for signal processing. Such interferences are difficult to mitigate with conventional methods. The aim of this study was to develop a subspace-based denoising method to suppress involuntary background spikes contaminating voluntary surface EMG recordings. Approach: The Karhunen-Loeve transform was utilized to decompose a noisy signal into a signal subspace and a noise subspace. An optimal estimate of the EMG signal is derived from the signal subspace and the noise power. Specifically, this estimator is capable of making a tradeoff between interference reduction and signal distortion. Since the estimator partially relies on the estimate of noise power, an adaptive method was presented to sequentially track the variation of interference power. The proposed method was evaluated using both semi-synthetic and real surface EMG signals. Main results: The experiments confirmed that the proposed method can effectively suppress interferences while keeping the distortion of the voluntary EMG signal at a low level. The proposed method can greatly facilitate further signal processing, such as onset detection of voluntary muscle activity. Significance: The proposed method can provide a powerful tool for suppressing background spikes and noise contaminating voluntary surface EMG signals of paretic muscles after neurological injuries, which is of great importance for their multi-purpose applications.
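The signal/noise subspace split can be sketched as follows: frame the signal, eigendecompose the empirical frame covariance (the Karhunen-Loeve basis), keep only directions whose variance clearly exceeds an estimated noise power, and reconstruct. This is a generic illustration on a synthetic sinusoid, not the paper's estimator or its adaptive noise tracking; the thresholding rule is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
n, frame = 4096, 32
t = np.arange(n)
clean = np.sin(2 * np.pi * t / 64)
noisy = clean + 0.3 * rng.standard_normal(n)

X = noisy.reshape(-1, frame)                 # stack non-overlapping frames as rows
R = X.T @ X / X.shape[0]                     # empirical frame covariance
w, V = np.linalg.eigh(R)                     # Karhunen-Loeve (eigen) basis
noise_pow = np.median(w)                     # crude noise-power estimate
keep = w > 2 * noise_pow                     # retain the signal subspace only
Xd = (X @ V[:, keep]) @ V[:, keep].T         # project and reconstruct
denoised = Xd.reshape(-1)

err_noisy = np.mean((noisy - clean) ** 2)
err_den = np.mean((denoised - clean) ** 2)
```

Projecting onto the few dominant eigen-directions discards most of the noise energy, which is spread evenly over all frame dimensions.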
Transfer and teleportation of quantum states encoded in decoherence-free subspace
Wei Hua; Deng Zhijao; Zhang Xiaolong; Feng Mang
2007-11-15
Quantum state transfer and teleportation, with qubits encoded in internal states of atoms in cavities, among spatially separated nodes of a quantum network in a decoherence-free subspace are proposed, based on a cavity-assisted interaction with single-photon pulses. We show in detail the implementation of a logic-qubit Hadamard gate and a two-logic-qubit conditional gate, and discuss the experimental feasibility of our scheme.
2007-11-02
DESE: another decimative subspace-based parameter estimation algorithm, recently proposed as Decimative Spectral Estimation [3]. In what follows... Sponsoring/Monitoring Agency Name(s) and Address(es): US Army Research, Development & Standardization Group (UK) PSC 802 Box 15 FPO AE 09499-1500 Sponsor... 1DXstackX∗stack. (12) C. DESE This algorithm was presented very recently [3]. Like HTLS, DESE also makes use of the SVD of a Hankel matrix and the
A repeatable inverse kinematics algorithm with linear invariant subspaces for mobile manipulators.
Tchoń, Krzysztof; Jakubiak, Janusz
2005-10-01
On the basis of a geometric characterization of repeatability we present a repeatable extended Jacobian inverse kinematics algorithm for mobile manipulators. The algorithm's dynamics have linear invariant subspaces in the configuration space. A standard Ritz approximation of platform controls results in a band-limited version of this algorithm. Computer simulations involving an RTR manipulator mounted on a kinematic car-type mobile platform are used in order to illustrate repeatability and performance of the algorithm.
NASA Astrophysics Data System (ADS)
Sahadevan, R.; Prakash, P.
2017-01-01
We show how the invariant subspace method can be extended to time fractional coupled nonlinear partial differential equations and used to construct their exact solutions. The effectiveness of the method is illustrated through the time fractional Hunter-Saxton equation, a time fractional coupled nonlinear diffusion system, a time fractional coupled Boussinesq equation, and a time fractional Whitham-Broer-Kaup system. We also explain how the maximal dimension of the invariant subspace admitted by time fractional coupled nonlinear partial differential equations can be estimated.
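Schematically, the generic invariant subspace reduction works as follows (this is the standard single-equation form of the method, not the paper's specific coupled constructions):

```latex
\text{If } W_n=\operatorname{span}\{f_1(x),\dots,f_n(x)\} \text{ satisfies } F[W_n]\subseteq W_n
\text{ for the nonlinear operator } F, \text{ then the ansatz }
u(x,t)=\sum_{i=1}^{n}c_i(t)\,f_i(x) \text{ in } \partial_t^{\alpha}u=F[u]
\text{ (Caputo derivative of order } \alpha\text{) reduces the PDE to the fractional ODE system}
\qquad \frac{d^{\alpha}c_i}{dt^{\alpha}}=\Psi_i(c_1,\dots,c_n),
\qquad \text{where } F\Big[\textstyle\sum_i c_i f_i\Big]=\sum_i \Psi_i(c_1,\dots,c_n)\,f_i .
```

The coupled case proceeds analogously, with one invariant subspace per component and one fractional ODE system for all expansion coefficients.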
Immunity of information encoded in decoherence-free subspaces to particle loss
NASA Astrophysics Data System (ADS)
Migdał, Piotr; Banaszek, Konrad
2011-11-01
We demonstrate that for an ensemble of qudits, subjected to collective decoherence in the form of perfectly correlated random SU(d) unitaries, quantum superpositions stored in the decoherence-free subspace are fully immune against the removal of one particle. This provides a feasible scheme to protect quantum information encoded in the polarization state of a sequence of photons against both collective depolarization and one-photon loss and can be demonstrated with photon quadruplets using currently available technology.
A study of the Invariant Subspace Decomposition Algorithm for banded symmetric matrices
Bischof, C.; Sun, X.; Tsao, A.; Turnbull, T.
1994-06-01
In this paper, we give an overview of the Invariant Subspace Decomposition Algorithm for banded symmetric matrices and describe a sequential implementation of this algorithm. Our implementation uses a specialized routine for performing banded matrix multiplication together with successive band reduction, yielding a sequential algorithm that is competitive for large problems with the LAPACK QR code in computing all of the eigenvalues and eigenvectors of a dense symmetric matrix. Performance results are given on a variety of machines.
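For context, a baseline sketch of the banded symmetric eigenproblem the algorithm targets, solved here with LAPACK's banded driver (via SciPy) rather than the Invariant Subspace Decomposition Algorithm itself; the matrix size and bandwidth are illustrative.

```python
import numpy as np
from scipy.linalg import eig_banded, eigh

rng = np.random.default_rng(1)
n, u = 200, 2                               # matrix order and half-bandwidth

# Assemble a dense symmetric banded test matrix A.
A = np.zeros((n, n))
for k in range(u + 1):
    d = rng.standard_normal(n - k)
    A += np.diag(d, k)
    if k:
        A += np.diag(d, -k)

# Pack the upper bands: a_band[u + i - j, j] = A[i, j] for max(0, j-u) <= i <= j.
a_band = np.zeros((u + 1, n))
for j in range(n):
    for i in range(max(0, j - u), j + 1):
        a_band[u + i - j, j] = A[i, j]

w_band, v_band = eig_banded(a_band, lower=False)   # banded symmetric solver
w_dense = eigh(A, eigvals_only=True)               # dense reference
```

The banded driver never forms the dense matrix, which is exactly the storage and work advantage that specialized banded algorithms exploit.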
A signal subspace approach for modeling the hemodynamic response function in fMRI.
Hossein-Zadeh, Gholam-Ali; Ardekani, Babak A; Soltanian-Zadeh, Hamid
2003-10-01
Many fMRI analysis methods use a model for the hemodynamic response function (HRF). Common models of the HRF, such as the Gaussian or Gamma functions, have parameters that are usually selected a priori by the data analyst. A new method is presented that characterizes the HRF over a wide range of parameters via three basis signals derived using principal component analysis (PCA). Covering the HRF variability, these three basis signals together with the stimulation pattern define signal subspaces which are applicable to both linear and nonlinear modeling and identification of the HRF and for various activation detection strategies. Analysis of simulated fMRI data using the proposed signal subspace showed increased detection sensitivity compared to the case of using a previously proposed trigonometric subspace. The methodology was also applied to activation detection in both event-related and block design experimental fMRI data using both linear and nonlinear modeling of the HRF. The activated regions were consistent with previous studies, indicating the ability of the proposed approach in detecting brain activation without a priori assumptions about the shape parameters of the HRF. The utility of the proposed basis functions in identifying the HRF is demonstrated by estimating the HRF in different activated regions.
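A sketch of deriving a small basis from a family of candidate HRFs with PCA: sample the HRF over a parameter grid and keep the top few principal directions. The gamma parameterization and parameter ranges below are assumptions for illustration, not the paper's choices.

```python
import numpy as np

t = np.arange(0.0, 25.0, 0.5)              # time axis in seconds

def gamma_hrf(t, peak, disp):
    # Gamma-shaped response peaking near `peak` with dispersion `disp`
    # (hypothetical parameterization, normalized to unit energy).
    h = (t / peak) ** (peak / disp) * np.exp(-(t - peak) / disp)
    return h / np.linalg.norm(h)

# Sample the HRF family over an assumed parameter range.
H = np.array([gamma_hrf(t, p, d)
              for p in np.linspace(4.0, 8.0, 9)
              for d in np.linspace(0.8, 1.2, 5)])

U, s, Vt = np.linalg.svd(H, full_matrices=False)
basis = Vt[:3]                             # three basis signals
captured = (s[:3] ** 2).sum() / (s ** 2).sum()
```

Any HRF in the sampled family is then well approximated by a linear combination of the three basis signals, so detection and identification can proceed in this low-dimensional subspace without fixing the shape parameters a priori.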
N-Screen Aware Multicriteria Hybrid Recommender System Using Weight Based Subspace Clustering
Ullah, Farman; Lee, Sungchang
2014-01-01
This paper presents a recommender system for N-screen services in which users have multiple devices with different capabilities. In N-screen services, a user can use various devices at different locations and times and can change a device while the service is running. N-screen aware recommendation seeks to improve the user experience with recommended content by considering the user N-screen device attributes such as screen resolution, media codec, remaining battery time, and access network and the user temporal usage pattern information that are not considered in existing recommender systems. For N-screen aware recommendation support, this work introduces a user device profile collaboration agent, manager, and N-screen control server to acquire and manage the user N-screen devices profile. Furthermore, a multicriteria hybrid framework is suggested that incorporates the N-screen devices information with user preferences and demographics. In addition, we propose an individual feature and subspace weight based clustering (IFSWC) to assign different weights to each subspace and each feature within a subspace in the hybrid framework. The proposed system improves the accuracy, precision, scalability, sparsity, and cold start issues. The simulation results demonstrate the effectiveness of the proposed system and support these claims. PMID:25152921
Shahbazi Avarvand, Forooz; Ewald, Arne; Nolte, Guido
2012-01-01
To address the problem of mixing in EEG or MEG connectivity analysis we exploit that noninteracting brain sources do not contribute systematically to the imaginary part of the cross-spectrum. Firstly, we propose to apply the existing subspace method "RAP-MUSIC" to the subspace found from the dominant singular vectors of the imaginary part of the cross-spectrum rather than to the conventionally used covariance matrix. Secondly, to estimate the specific sources interacting with each other, we use a modified LCMV-beamformer approach in which the source direction for each voxel was determined by maximizing the imaginary coherence with respect to a given reference. These two methods are applicable in this form only if the number of interacting sources is even, because odd-dimensional subspaces collapse to even-dimensional ones. Simulations show that (a) RAP-MUSIC based on the imaginary part of the cross-spectrum accurately finds the correct source locations, that (b) conventional RAP-MUSIC fails to do so since it is highly influenced by noninteracting sources, and that (c) the second method correctly identifies those sources which are interacting with the reference. The methods are also applied to real data for a motor paradigm, resulting in the localization of four interacting sources presumably in sensory-motor areas.
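The premise exploited above can be checked with a toy computation (this is not RAP-MUSIC itself): instantaneous mixing of a single source yields a cross-spectrum whose imaginary part vanishes, while a genuinely time-lagged interaction does not.

```python
import numpy as np

rng = np.random.default_rng(3)
nfft = 256
s = rng.standard_normal(nfft * 80)
x = s                                     # channel 1
y_mix = 0.7 * s                           # zero-lag copy: pure "mixing"
y_lag = np.roll(s, 5)                     # lagged copy: true interaction

def max_imag_cross(a, b):
    # Cross-spectrum averaged over non-overlapping frames (Welch-style).
    A = np.fft.rfft(a.reshape(-1, nfft), axis=1)
    B = np.fft.rfft(b.reshape(-1, nfft), axis=1)
    return np.abs(np.imag((A * np.conj(B)).mean(axis=0))).max()
```

For the zero-lag pair the cross-spectrum is a real multiple of the source power spectrum, so its imaginary part is numerically zero; the 5-sample lag introduces a frequency-dependent phase and hence a large imaginary part.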
Hyperbaric Oxygen Preconditioning Provides Preliminary Protection Against Doxorubicin Cardiotoxicity
Tezcan, Orhan; Karahan, Oguz; Alan, Mustafa; Ekinci, Cenap; Yavuz, Celal; Demirtas, Sinan; Ekinci, Aysun; Caliskan, Ahmet
2017-01-01
Background: Doxorubicin (DOX) is generally recognized to have important cardiotoxic side effects. Studies are contradictory about the interaction between hyperbaric oxygen (HBO2) therapy and doxorubicin-induced cardiomyotoxicity. Recent data suggest that HBO2 therapy can lead to preconditioning of the myocardium while generating oxidative stress. Herein we have investigated the effect of HBO2 therapy in a DOX-induced cardiomyocyte injury animal model. Methods: Twenty-one rats were divided into three equal groups as follows: 1) Group 1 is a control group (without any intervention), used for evaluating the basal cardiac structures and determining the normal values of cardiac and serum oxidative markers; 2) Group 2 is the doxorubicin group (single dose i.p. 20 mg/kg doxorubicin), for detecting the cardiotoxic and systemic effects of doxorubicin; 3) Group 3 is the doxorubicin and HBO2 group (100% oxygen at 2.5 atmospheres for 90 minutes, daily), for evaluating the effect of HBO2 in doxorubicin-induced cardiotoxicity. At the end of the protocols, the hearts were harvested and blood samples (2 ml) were obtained. Results: The doxorubicin-treated animals (Group 2) had increased oxidative stress markers (both cardiac and serum) and severe cardiac injury as compared to the basal findings in the control group. Nevertheless, the highest cardiac oxidative stress index was detected in Group 3 (control vs. Group 3, p = 0.01). However, histological examination revealed that cardiac structures were well preserved in Group 3 when compared with Group 2. Conclusions: Our results suggest that HBO2 preconditioning appears to be protective in the doxorubicin-induced cardiotoxicity model. Future studies are required to better elucidate the basis of this preconditioning effect of HBO2. PMID:28344418
Steps to translate preconditioning from basic research to the clinic
Bahjat, Frances R; Gesuete, Raffaella; Stenzel-Poore, Mary P
2012-01-01
Efforts to treat cardiovascular and cerebrovascular diseases often focus on the mitigation of ischemia-reperfusion (I/R) injury. Many treatments or “preconditioners” are known to provide substantial protection against the I/R injury when administered prior to the event. Brief periods of ischemia itself have been validated as a means to achieve neuroprotection in many experimental disease settings, in multiple organ systems, and in multiple species suggesting a common pathway leading to tolerance. In addition, pharmacological agents that act as potent preconditioners have been described. Experimental induction of neuroprotection using these various preconditioning paradigms has provided a unique window into the brain’s endogenous protective mechanisms. Moreover, preconditioning agents themselves hold significant promise as clinical-stage therapies for prevention of I/R injury. The aim of this article is to explore several key steps involved in the preclinical validation of preconditioning agents prior to the conduct of clinical studies in humans. Drug development is difficult, expensive and relies on multi-factorial analysis of data from diverse disciplines. Importantly, there is no single path for the preclinical development of a novel therapeutic and no proven strategy to ensure success in clinical translation. Rather, the conduct of a diverse array of robust preclinical studies reduces the risk of clinical failure by varying degrees depending upon the relevance of preclinical models and drug pharmacology to humans. A strong sense of urgency and high tolerance of failure are often required to achieve success in the development of novel treatment paradigms for complex human conditions. PMID:23504609
A new search subspace to compensate failure of cavity-based localization of ligand-binding sites.
Singh, Kalpana; Lahiri, Tapobrata
2017-01-31
The common exercise adopted in almost all ligand-binding site (LBS) prediction methods is to reduce the search space considerably, to a meager fraction of the whole protein. This exercise assumes that LBS are mostly localized within a particular search subspace, cavities, which topologically appear as valleys within the protein surface. Extraction of cavities is therefore considered the most important preprocessing step for predicting LBS. However, prediction of LBS based on the cavity search subspace fails for some proteins. To solve this problem, a new search subspace was introduced that successfully localized LBS in most of the proteins used in this work for which the cavity-based method MetaPocket 2.0 failed. This work therefore augments existing binding-site prediction methods through its applicability to the complementary set of proteins for which cavity-based methods may fail. Also, to decide for which proteins the new subspace should be explored instead of the cavity subspace, a decision framework based on a simple heuristic is built, using geometric parameters of cavities extracted through MetaPocket 2.0. The option of selecting the new or the cavity search subspace can be predicted correctly for nearly 87.5% of test proteins.
Ischemic preconditioning enhances integrity of coronary endothelial tight junctions
Li, Zhao; Jin, Zhu-Qiu
2012-08-31
Highlights: • Cardiac tight junctions are present between coronary endothelial cells. • Ischemic preconditioning preserves the structural and functional integrity of tight junctions. • Myocardial edema is prevented in hearts subjected to ischemic preconditioning. • Ischemic preconditioning enhances translocation of ZO-2 from cytosol to cytoskeleton. -- Abstract: Ischemic preconditioning (IPC) is one of the most effective procedures known to protect hearts against ischemia/reperfusion (IR) injury. Tight junction (TJ) barriers occur between coronary endothelial cells. TJs provide barrier function to maintain the homeostasis of the inner environment of tissues. However, the effect of IPC on the structure and function of cardiac TJs remains unknown. We tested the hypothesis that myocardial IR injury ruptures the structure of TJs and impairs endothelial permeability whereas IPC preserves the structural and functional integrity of TJs in the blood-heart barrier. Langendorff hearts from C57BL/6J mice were prepared and perfused with Krebs-Henseleit buffer. Cardiac function, creatine kinase release, and myocardial edema were measured. Cardiac TJ function was evaluated by measuring Evans blue-conjugated albumin (EBA) content in the extravascular compartment of hearts. Expression and translocation of zonula occludens (ZO)-2 in IR and IPC hearts were detected with Western blot. A subset of hearts was processed for the observation of ultra-structure of cardiac TJs with transmission electron microscopy. There were clear TJs between coronary endothelial cells of mouse hearts. IR caused the collapse of TJs whereas IPC sustained the structure of TJs. IR increased extravascular EBA content in the heart and myocardial edema but decreased the expression of ZO-2 in the cytoskeleton. IPC maintained the structure of TJs. Cardiac EBA content and edema were reduced in IPC hearts. IPC
Weighted graph based ordering techniques for preconditioned conjugate gradient methods
NASA Technical Reports Server (NTRS)
Clift, Simon S.; Tang, Wei-Pai
1994-01-01
We describe the basis of a matrix ordering heuristic for improving the incomplete factorization used in preconditioned conjugate gradient techniques applied to anisotropic PDEs. Several new matrix ordering techniques, derived from well-known algorithms in combinatorial graph theory, which attempt to implement this heuristic, are described. These ordering techniques are tested against a number of matrices arising from linear anisotropic PDEs, and compared with other matrix ordering techniques. A variation of reverse Cuthill-McKee (RCM) ordering is shown to generally improve the quality of incomplete factorization preconditioners.
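The reorder-then-factorize idea described above can be sketched with standard tools: apply a reverse Cuthill-McKee permutation before computing an incomplete factorization used as a CG preconditioner. The anisotropic model matrix, the anisotropy strength, and the drop tolerance below are assumptions for illustration, not the paper's test problems or its specific ordering heuristics.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import reverse_cuthill_mckee
from scipy.sparse.linalg import spilu, LinearOperator, cg

# Anisotropic 5-point Laplacian on an n x n grid (illustrative test matrix).
n = 32
eps = 100.0  # anisotropy strength (assumed value)
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
A = (sp.kron(sp.identity(n), T) + eps * sp.kron(T, sp.identity(n))).tocsr()
b = np.ones(A.shape[0])

# Reorder with reverse Cuthill-McKee before the incomplete factorization.
perm = reverse_cuthill_mckee(A, symmetric_mode=True)
Ap = A[perm, :][:, perm].tocsc()

# Incomplete LU of the reordered SPD M-matrix as a CG preconditioner
# (here the ILU factors behave like an incomplete Cholesky factorization).
ilu = spilu(Ap, drop_tol=1e-4)
M = LinearOperator(Ap.shape, ilu.solve)

xp, info = cg(Ap, b[perm], M=M, maxiter=500)
x = np.empty_like(xp)
x[perm] = xp  # undo the permutation to recover the solution in the original ordering
```

The quality of the incomplete factors, and hence the CG iteration count, generally depends on the ordering, which is the effect the paper quantifies.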
Incomplete block factorization preconditioning for indefinite elliptic problems
Guo, Chun-Hua
1996-12-31
The application of the finite difference method to approximate the solution of an indefinite elliptic problem produces a linear system whose coefficient matrix is block tridiagonal and symmetric indefinite. Such a linear system can be solved efficiently by a conjugate residual method, particularly when combined with a good preconditioner. We show that a specific incomplete block factorization exists for the indefinite matrix if the mesh size is reasonably small, and that this factorization can serve as an efficient preconditioner. Some effort is made to estimate the eigenvalues of the preconditioned matrix. Numerical results are also given.
Preconditioning with a decoupled rowwise ordering on the CM-5
Toledo, S.
1995-12-01
Decoupled rowwise ordering is an ordering scheme for two-dimensional grids, tailored for preconditioning 5-point difference equations arising from discretizations of partial differential equations. This paper describes the ordering scheme and implementations of a conjugate gradient solver and SSOR preconditioners which use the decoupled rowwise and the red-black ordering schemes on the CM-5 parallel supercomputer. The decoupled rowwise preconditioner leads to faster convergence than the red-black preconditioner, and it reduces the solution time by a factor of 1.5 to 2.5 over a nonpreconditioned solver on a variety of test problems.
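The SSOR-preconditioned CG iteration underlying this work can be sketched in serial form as follows. The 5-point Poisson matrix and the relaxation parameter are assumed values for illustration; the ordering-specific parallel aspects that are the paper's contribution are not reproduced here.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, cg, spsolve_triangular

# 5-point Poisson matrix on an n x n grid (illustrative problem).
n = 32
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
A = (sp.kron(sp.identity(n), T) + sp.kron(T, sp.identity(n))).tocsr()
b = np.ones(A.shape[0])

omega = 1.2  # relaxation parameter (assumed, not tuned)
d = A.diagonal()
lower = (sp.diags(d / omega) + sp.tril(A, k=-1)).tocsr()  # D/omega + L
upper = lower.T.tocsr()                                   # = D/omega + U since A is symmetric

def ssor_solve(r):
    # Apply M^{-1} for M = omega/(2-omega) * (D/omega + L) D^{-1} (D/omega + U):
    # forward sweep, diagonal scaling, backward sweep.
    y = spsolve_triangular(lower, r, lower=True)
    y *= d * (2.0 - omega) / omega
    return spsolve_triangular(upper, y, lower=False)

M = LinearOperator(A.shape, ssor_solve)
x, info = cg(A, b, M=M, maxiter=500)
```

The triangular sweeps are inherently sequential in the natural ordering; schemes like decoupled rowwise or red-black ordering restructure them so the sweeps expose parallelism.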
Preconditioning methods for ideal and multiphase fluid flows
NASA Astrophysics Data System (ADS)
Gupta, Ashish
The objective of this study is to develop a preconditioning method for an ideal and multiphase multispecies compressible fluid flow solver using a homogeneous equilibrium mixture model. The mathematical model for fluid flow going through phase change uses density and temperature in the formulation, where the density represents the multiphase mixture density. The change of phase of the fluid is then explicitly determined using the equation of state of the fluid, which only requires temperature and mixture density. The method developed is based on a finite-volume framework in which the numerical fluxes are computed using Roe's approximate Riemann solver and the modified Harten-Lax-van Leer (HLLC) scheme. All-speed Roe and HLLC flux based schemes have been developed either by using preconditioning or by directly modifying dissipation to reduce the effect of acoustic speed in the numerical dissipation when the Mach number decreases. Preconditioning approaches proposed by Briley, Taylor and Whitfield, Eriksson, and Turkel are studied in this research, whereas low-dissipation schemes proposed by Rieper and by Thornber, Mosedale, Drikakis, Youngs and Williams are also considered. Various preconditioners are evaluated in terms of development, performance, accuracy and limitations in simulations at various Mach numbers. A generalized preconditioner is derived which possesses a well conditioned eigensystem for multiphase multispecies flow simulations. Validation and verification of the solution procedure are carried out on several small model problems with comparison to experimental, theoretical, and other numerical results. Preconditioning methods are evaluated using three basic geometries: (1) a bump in a channel, (2) flow over a NACA0012 airfoil, and (3) flow over a cylinder; results are then compared with theoretical and numerical results. Multiphase capabilities of the solver are evaluated in cryogenic and non-cryogenic conditions. For cryogenic conditions the solver is evaluated by predicting
Chiueh, C.C. . E-mail: chiueh@tmu.edu.tw; Andoh, Tsugunobu; Chock, P. Boon
2005-09-01
Hormesis, a stress tolerance, can be induced by ischemic preconditioning stress. In addition to preconditioning, it may be induced by other means, such as gas anesthetics. Preconditioning mechanisms, which may be mediated by reprogramming survival genes and proteins, are obscure. A known neurotoxicant, 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP), causes less neurotoxicity in mice that are preconditioned. Pharmacological evidence suggests that the signaling pathway of ·NO-cGMP-PKG (protein kinase G) may mediate the preconditioning phenomenon. We developed a human SH-SY5Y cell model for investigating the ·NO-mediated signaling pathway, gene regulation, and protein expression following a sublethal preconditioning stress caused by a brief 2-h serum deprivation. Preconditioned human SH-SY5Y cells are more resistant against severe oxidative stress and apoptosis caused by lethal serum deprivation and 1-methyl-4-phenylpyridinium (MPP+). Both sublethal and lethal oxidative stress caused by serum withdrawal increased neuronal nitric oxide synthase (nNOS/NOS1) expression and ·NO levels to a similar extent. In addition to free radical scavengers, inhibition of nNOS, guanylyl cyclase, and PKG blocks hormesis induced by preconditioning. S-nitrosothiols and 6-Br-cGMP produce a cytoprotection mimicking the action of preconditioning tolerance. There are two distinct cGMP-mediated survival pathways: (i) the up-regulation of a redox protein thioredoxin (Trx) for elevating mitochondrial levels of antioxidant protein Mn superoxide dismutase (MnSOD) and antiapoptotic protein Bcl-2, and (ii) the activation of mitochondrial ATP-sensitive potassium channels [K(ATP)]. Preconditioning induction of Trx increased tolerance against MPP+, which was blocked by Trx mRNA antisense oligonucleotide and Trx reductase inhibitor. It is concluded that Trx plays a pivotal role in ·NO-dependent preconditioning hormesis against
A fast method for a generalized nonlocal elastic model
NASA Astrophysics Data System (ADS)
Du, Ning; Wang, Hong; Wang, Che
2015-09-01
We develop a numerical method for a generalized nonlocal elastic model, which is expressed as a composition of a Riesz potential operator with a fractional differential operator, by composing a collocation method with a finite difference discretization. By carefully exploring the structure of the coefficient matrix of the numerical method, we develop a preconditioned fast Krylov subspace method, which reduces the computational cost to O(N log N) per iteration and the memory requirement to O(N). The use of the preconditioner significantly reduces the number of iterations, and the preconditioner can be inverted in O(N log N) operations. Numerical results show the utility of the method.
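The O(N log N) per-iteration cost in methods of this kind typically comes from FFT-based matrix-vector products with a structured (e.g. Toeplitz) matrix, combined with an FFT-diagonalizable (e.g. circulant) preconditioner. The sketch below shows that generic mechanism on a shifted 1D Laplacian Toeplitz matrix with a Strang circulant preconditioner; the paper's actual nonlocal-elasticity operator and preconditioner are not reproduced.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

# Symmetric positive definite Toeplitz matrix: first column of a shifted
# 1D Laplacian stencil (illustrative stand-in for the structured operator).
N = 256
c = np.zeros(N)
c[0], c[1] = 2.1, -1.0

# Toeplitz matvec in O(N log N) via circulant embedding of size 2N.
emb_fft = np.fft.fft(np.concatenate([c, [0.0], c[:0:-1]]))

def matvec(x):
    return np.fft.ifft(emb_fft * np.fft.fft(x, 2 * N))[:N].real

A = LinearOperator((N, N), matvec, dtype=float)

# Strang circulant preconditioner: copy the central diagonals of the
# Toeplitz matrix into a circulant, which the FFT diagonalizes.
k = np.arange(N)
s = np.where(k <= N // 2, c, c[(N - k) % N])
s_fft = np.fft.fft(s).real  # circulant eigenvalues (real by symmetry)

def prec(r):
    # Apply the inverse circulant in O(N log N).
    return np.fft.ifft(np.fft.fft(r) / s_fft).real

M = LinearOperator((N, N), prec, dtype=float)
b = np.ones(N)
x, info = cg(A, b, M=M, maxiter=200)
```

No matrix is ever stored densely: both the operator and the preconditioner act through length-N (or 2N) FFTs, which is the source of the O(N) memory and O(N log N) work per iteration.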
Support vector machine classifiers for large data sets.
Gertz, E. M.; Griffin, J. D.
2006-01-31
This report concerns the generation of support vector machine classifiers for solving the pattern recognition problem in machine learning. Several methods are proposed based on interior point methods for convex quadratic programming. Software implementations are developed by adapting the object-oriented package OOQP to the problem structure and by using the software package PETSc to perform time-intensive computations in a distributed setting. Linear systems arising from classification problems with moderately large numbers of features are solved by using two techniques: a parallel direct solver, and a Krylov subspace method incorporating novel preconditioning strategies. Numerical results are provided, and computational experience is discussed.
Revised numerical wrapper for PIES code
NASA Astrophysics Data System (ADS)
Raburn, Daniel; Reiman, Allan; Monticello, Donald
2015-11-01
A revised external numerical wrapper has been developed for the Princeton Iterative Equilibrium Solver (PIES code), which is capable of calculating 3D MHD equilibria with islands. The numerical wrapper has been demonstrated to greatly improve the rate of convergence in numerous cases corresponding to equilibria in the TFTR device where magnetic islands are present. The numerical wrapper makes use of a Jacobian-free Newton-Krylov solver along with adaptive preconditioning and a sophisticated subspace-restricted Levenberg-Marquardt backtracking algorithm. The details of the numerical wrapper and several sample results are presented.
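A Jacobian-free Newton-Krylov iteration of the kind the wrapper is built around can be sketched with scipy's generic implementation. The toy boundary-value residual below is a stand-in for an equilibrium residual; the wrapper's adaptive preconditioning and Levenberg-Marquardt backtracking are not reproduced.

```python
import numpy as np
from scipy.optimize import newton_krylov

# Toy nonlinear boundary-value residual F(u) = -u'' + u^3 - 1 = 0 on a
# uniform grid with zero Dirichlet ends (illustrative problem only).
n = 64
h = 1.0 / (n + 1)

def residual(u):
    up = np.concatenate([[0.0], u, [0.0]])
    lap = (up[:-2] - 2.0 * up[1:-1] + up[2:]) / h**2
    return -lap + u**3 - 1.0

# Jacobian-free Newton-Krylov: the Jacobian action is approximated by
# finite-difference directional derivatives inside the inner LGMRES
# solver, so no Jacobian matrix is ever formed or stored.
u = newton_krylov(residual, np.zeros(n), method='lgmres', f_tol=1e-10)
```

Because only residual evaluations are needed, this structure lets an external wrapper drive a legacy solver (here played by `residual`) without access to its internals.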
Evaluating Sparse Linear System Solvers on Scalable Parallel Architectures
2008-10-01
…iterations will be necessary to assure sufficient accuracy whenever we do not use a direct method to solve (1.3) or (1.5). The overall SPIKE algorithm… When boosting is activated, SPIKE is not used as a direct solver but rather as a preconditioner; in this case outer iterations via a Krylov subspace method… Preconditioning aims to improve the robustness of iterative methods by transforming the system into M⁻¹Ax = M⁻¹f, or AM⁻¹(Mx) = f.
A new approach to the solution of boundary value problems involving complex configurations
NASA Technical Reports Server (NTRS)
Rubbert, P. E.; Bussoletti, J. E.; Johnson, F. T.; Sidwell, K. W.; Rowe, W. S.; Samant, S. S.; Sengupta, G.; Weatherill, W. H.; Burkhart, R. H.; Woo, A. C.
1986-01-01
A new approach for solving certain types of boundary value problems about complex configurations is presented. Numerical algorithms from such diverse fields as finite elements, preconditioned Krylov subspace methods, discrete Fourier analysis, and integral equations are combined to take advantage of the memory, speed and architecture of current and emerging supercomputers. Although the approach has application to many branches of computational physics, the present effort is concentrated in areas of Computational Fluid Dynamics (CFD) such as steady nonlinear aerodynamics, time harmonic unsteady aerodynamics, and aeroacoustics. The most significant attribute of the approach is that it can handle truly arbitrary boundary geometries and eliminates the difficult task of generating surface fitted grids.
NASA Astrophysics Data System (ADS)
Bosch, Jessica; Stoll, Martin; Benner, Peter
2014-04-01
We consider the efficient solution of the Cahn-Hilliard variational inequality using an implicit time discretization, which is formulated as an optimal control problem with pointwise constraints on the control. By applying a semi-smooth Newton method combined with a Moreau-Yosida regularization technique for handling the control constraints we show superlinear convergence in function space. At the heart of this method lies the solution of large and sparse linear systems for which we propose the use of preconditioned Krylov subspace solvers using an effective Schur complement approximation. Numerical results illustrate the competitiveness of this approach.
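Schur complement preconditioning of a symmetric saddle-point system, as used for the linear solves above, can be sketched generically: a block-diagonal preconditioner diag(A, S) applied inside MINRES. The toy system below is a stand-in, not the Cahn-Hilliard Newton system, and the Schur complement is formed exactly only because the example is tiny; in practice an approximation is used, as in the paper.

```python
import numpy as np
import scipy.sparse as sp
from scipy.linalg import cho_factor, cho_solve
from scipy.sparse.linalg import LinearOperator, minres, factorized

# Toy symmetric saddle-point system [[A, B^T], [B, 0]] (illustrative only).
n = 40
A = (sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) * (n + 1)).tocsc()
B = (sp.diags([-1.0, 1.0], [0, 1], shape=(n - 1, n)) * (n + 1)).tocsr()
K = sp.bmat([[A, B.T], [B, None]]).tocsr()
rhs = np.ones(K.shape[0])

# Block-diagonal preconditioner diag(A, S) with the exact Schur complement
# S = B A^{-1} B^T, formed densely here because the problem is small.
Asolve = factorized(A)
S = B @ np.column_stack([Asolve(row) for row in B.toarray()])
cho = cho_factor(S)

def prec(r):
    # Apply diag(A, S)^{-1} blockwise.
    return np.concatenate([Asolve(r[:n]), cho_solve(cho, r[n:])])

M = LinearOperator(K.shape, prec, dtype=float)
x, info = minres(K, rhs, M=M, maxiter=200)
```

With the exact Schur complement, the preconditioned operator has only three distinct eigenvalues, so MINRES converges in three iterations; a good Schur complement approximation aims to preserve most of that clustering at far lower cost.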
NASA Astrophysics Data System (ADS)
Muthuvalu, Mohana Sundaram
2016-06-01
In this paper, the performance of preconditioned Gauss-Seidel iterative methods for solving dense linear systems arising from Fredholm integral equations of the second kind is investigated. The formulation and implementation of the preconditioned Gauss-Seidel methods are presented. Numerical results are included in order to verify the performance of the methods.
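A baseline (unpreconditioned) Gauss-Seidel iteration on a dense system of this type can be sketched as follows; the paper's specific preconditioners are not reproduced. The Nyström-discretized kernel below is an assumed illustrative example chosen so the matrix is diagonally dominant and the iteration converges.

```python
import numpy as np

def gauss_seidel(A, b, tol=1e-10, maxiter=1000):
    """Plain Gauss-Seidel iteration for a dense system Ax = b.
    Convergence is guaranteed for strictly diagonally dominant A."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(maxiter):
        for i in range(n):
            # Sweep using the freshly updated entries x[:i].
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
        if np.linalg.norm(b - A @ x) <= tol * np.linalg.norm(b):
            break
    return x

# Dense system from a Nystrom discretization of a second-kind Fredholm
# equation u(s) + (1/2) * integral of k(s,t) u(t) dt = 1 (assumed kernel).
m = 50
t = (np.arange(m) + 0.5) / m
K = np.exp(-np.abs(t[:, None] - t[None, :]))
A = np.eye(m) + 0.5 * K / m
b = np.ones(m)
x = gauss_seidel(A, b)
```

A preconditioned variant applies the same sweep to a transformed system PAx = Pb, with P chosen so the iteration matrix has a smaller spectral radius.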
40 CFR 85.2218 - Preconditioned idle test-EPA 91.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 18 2010-07-01 2010-07-01 false Preconditioned idle test-EPA 91. 85.2218 Section 85.2218 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS... Tests § 85.2218 Preconditioned idle test—EPA 91. (a) General requirements—(1) Exhaust gas...
40 CFR 85.2218 - Preconditioned idle test-EPA 91.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 18 2011-07-01 2011-07-01 false Preconditioned idle test-EPA 91. 85.2218 Section 85.2218 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS... Tests § 85.2218 Preconditioned idle test—EPA 91. (a) General requirements—(1) Exhaust gas...
40 CFR 85.2220 - Preconditioned two speed idle test-EPA 91.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 18 2010-07-01 2010-07-01 false Preconditioned two speed idle test-EPA 91. 85.2220 Section 85.2220 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR... Warranty Short Tests § 85.2220 Preconditioned two speed idle test—EPA 91. (a) General...
Sensory Preconditioning in Newborn Rabbits: From Common to Distinct Odor Memories
ERIC Educational Resources Information Center
Coureaud, Gerard; Tourat, Audrey; Ferreira, Guillaume
2013-01-01
This study evaluated whether olfactory preconditioning is functional in newborn rabbits and based on joined or independent memory of odorants. First, after exposure to odorants A+B, the conditioning of A led to high responsiveness to odorant B. Second, responsiveness to B persisted after amnesia of A. Third, preconditioning was also functional…
Condition number analysis and preconditioning of the finite cell method
NASA Astrophysics Data System (ADS)
de Prenter, F.; Verhoosel, C. V.; van Zwieten, G. J.; van Brummelen, E. H.
2017-04-01
The (Isogeometric) Finite Cell Method - in which a domain is immersed in a structured background mesh - suffers from conditioning problems when cells with small volume fractions occur. In this contribution, we establish a rigorous scaling relation between the condition number of (I)FCM system matrices and the smallest cell volume fraction. Ill-conditioning stems either from basis functions being small on cells with small volume fractions, or from basis functions being nearly linearly dependent on such cells. Based on these two sources of ill-conditioning, an algebraic preconditioning technique is developed, which is referred to as Symmetric Incomplete Permuted Inverse Cholesky (SIPIC). A detailed numerical investigation of the effectiveness of the SIPIC preconditioner in improving (I)FCM condition numbers and in improving the convergence speed and accuracy of iterative solvers is presented for the Poisson problem and for two- and three-dimensional problems in linear elasticity, in which Nitsche's method is applied in either the normal or tangential direction. The accuracy of the preconditioned iterative solver enables mesh convergence studies of the finite cell method.
Parallelizable approximate solvers for recursions arising in preconditioning
Shapira, Y.
1996-12-31
For the recursions used in the Modified Incomplete LU (MILU) preconditioner, namely, the incomplete decomposition, forward elimination and back substitution processes, a parallelizable approximate solver is presented. The present analysis shows that the solutions of the recursions depend only weakly on their initial conditions and may be interpreted to indicate that the inexact solution is close, in some sense, to the exact one. The method is based on a domain decomposition approach, suitable for parallel implementations with message passing architectures. It requires a fixed number of communication steps per preconditioned iteration, independently of the number of subdomains or the size of the problem. The overlapping subdomains are either cubes (suitable for mesh-connected arrays of processors) or constructed by the data-flow rule of the recursions (suitable for line-connected arrays with possibly SIMD or vector processors). Numerical examples show that, in both cases, the overhead in the number of iterations required for convergence of the preconditioned iteration is small relative to the speed-up gained.
Preconditioning 2D Integer Data for Fast Convex Hull Computations.
Cadenas, José Oswaldo; Megson, Graham M; Luengo Hendriks, Cris L
2016-01-01
In order to accelerate computing the convex hull on a set of n points, a heuristic procedure is often applied to reduce the number of points to a set of s points, s ≤ n, which also contains the same hull. We present an algorithm to precondition 2D data with integer coordinates bounded by a box of size p × q before building a 2D convex hull, with three distinct advantages. First, we prove that under the condition min(p, q) ≤ n the algorithm executes in O(n) time; second, no explicit sorting of data is required; and third, the reduced set of s points forms a simple polygonal chain and thus can be directly pipelined into an O(n) time convex hull algorithm. This paper empirically evaluates and quantifies the speedup gained by preconditioning a set of points by a method based on the proposed algorithm before using common convex hull algorithms to build the final hull. A speedup factor of at least four is consistently found from experiments on various datasets when the condition min(p, q) ≤ n holds; the smaller the ratio min(p, q)/n is in the dataset, the greater the speedup factor achieved.
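The general reduce-then-hull idea can be illustrated with a simpler, well-known filter: discard every point strictly inside the quadrilateral spanned by the extreme points in x and y (the Akl-Toussaint heuristic). Note this is only a generic sketch of point-set preconditioning, not the paper's grid-based O(n) chain algorithm.

```python
import numpy as np

def precondition_points(pts):
    """Drop points strictly inside the quadrilateral spanned by the extreme
    points in x and y (Akl-Toussaint heuristic; illustrative stand-in for
    the paper's grid-based reduction). pts is an (n, 2) float array."""
    anchors = [pts[pts[:, 0].argmin()], pts[pts[:, 1].argmin()],
               pts[pts[:, 0].argmax()], pts[pts[:, 1].argmax()]]
    # Deduplicate while keeping counterclockwise order: left, bottom, right, top.
    poly = []
    for a in anchors:
        if not any(np.array_equal(a, p) for p in poly):
            poly.append(a)
    if len(poly) < 3:
        return pts  # degenerate anchor set: nothing can be safely discarded
    keep = []
    for p in pts:
        # Strictly inside a CCW convex polygon <=> strictly left of every edge.
        inside = all(
            (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]) > 0
            for a, b in zip(poly, poly[1:] + poly[:1]))
        if not inside:
            keep.append(p)
    return np.array(keep)
```

Only strictly interior points are removed, so the convex hull of the reduced set is identical to the hull of the original set, which is the correctness property any such preconditioning step must preserve.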
Acupuncture for preconditioning of expectancy and/or Pavlovian extinction.
Lundeberg, Thomas; Lund, Iréne
2008-12-01
Both specific and non-specific factors, as well as the therapist, may play a role in acupuncture therapy. Recent results suggest that verum acupuncture has specific physiological effects and that patients' expectations and beliefs regarding a potentially beneficial treatment modulate activity in the reward and self-appraisal systems in the brain. We suggest that acupuncture treatment may partly be regarded and used as an intervention that preconditions expectancy, which results in both conditional reflexes and conditioning of expected reward and self-appraisal. If so, acupuncture should preferably be applied before the start of the specific treatment (drug or behavioural intervention which is given with the intention of achieving a specific outcome) to enhance the specific and non-specific effects. This hypothesis is further supported by the suggestion that acupuncture may be viewed as a neural stimulus that triggers Pavlovian extinction. If this is the case, acupuncture should preferably be applied repeatedly (i.e. in a learning process) before the start of the specific treatment to initiate the extinction of previous unpleasant associations like pain or anxiety. Our clinical data suggest that acupuncture may precondition expectancy and conditional reflexes as well as induce Pavlovian extinction. Based on the above we suggest that acupuncture should be tried (as an adjunct) before any specific therapy.
Sound preconditioning therapy inhibits ototoxic hearing loss in mice.
Roy, Soumen; Ryals, Matthew M; Van den Bruele, Astrid Botty; Fitzgerald, Tracy S; Cunningham, Lisa L
2013-11-01
Therapeutic drugs with ototoxic side effects cause significant hearing loss for thousands of patients annually. Two major classes of ototoxic drugs are cisplatin and the aminoglycoside antibiotics, both of which are toxic to mechanosensory hair cells, the receptor cells of the inner ear. A critical need exists for therapies that protect the inner ear without inhibiting the therapeutic efficacy of these drugs. The induction of heat shock proteins (HSPs) inhibits both aminoglycoside- and cisplatin-induced hair cell death and hearing loss. We hypothesized that exposure to sound that is titrated to stress the inner ear without causing permanent damage would induce HSPs in the cochlea and inhibit ototoxic drug–induced hearing loss. We developed a sound exposure protocol that induces HSPs without causing permanent hearing loss. We used this protocol in conjunction with a newly developed mouse model of cisplatin ototoxicity and found that preconditioning mouse inner ears with sound has a robust protective effect against cisplatin-induced hearing loss and hair cell death. Sound therapy also provided protection against aminoglycoside-induced hearing loss. These data indicate that sound preconditioning protects against both classes of ototoxic drugs, and they suggest that sound therapy holds promise for preventing hearing loss in patients receiving these drugs.
Ischemic preconditioning stimulates sodium and proton transport in isolated rat hearts.
Ramasamy, R; Liu, H; Anderson, S; Lundmark, J; Schaefer, S
1995-01-01
One or more brief periods of ischemia, termed preconditioning, dramatically limits infarct size and reduces intracellular acidosis during subsequent ischemia, potentially via enhanced sarcolemmal proton efflux mechanisms. To test the hypothesis that preconditioning increases the functional activity of sodium-dependent proton efflux pathways, isolated rat hearts were subjected to 30 min of global ischemia with or without preconditioning. Intracellular sodium (Nai) was assessed using 23Na magnetic resonance spectroscopy, and the activity of the Na-H exchanger and Na-K-2Cl cotransporter was measured by transiently exposing the hearts to an acid load (NH4Cl washout). Creatine kinase release was reduced by greater than 60% in the preconditioned hearts (P < 0.05) and was associated with improved functional recovery on reperfusion. Preconditioning increased Nai by 6.24 +/- 2.04 U, resulting in a significantly higher level of Nai before ischemia than in the control hearts. Nai increased significantly at the onset of ischemia (8.48 +/- 1.21 vs. 2.57 +/- 0.81 U, preconditioned vs. control hearts; P < 0.01). Preconditioning did not reduce Nai accumulation during ischemia, but the decline in Nai during the first 5 min of reperfusion was significantly greater in the preconditioned than in the control hearts (13.48 +/- 1.73 vs. 2.54 +/- 0.41 U; P < 0.001). Exposure of preconditioned hearts to ethylisopropylamiloride or bumetanide in the last reperfusion period limited the increase in Nai during ischemia and reduced the beneficial effects of preconditioning. After the NH4Cl prepulse, preconditioned hearts acidified significantly more than control hearts and had significantly more rapid recovery of pH (preconditioned, delta pH = 0.35 +/- 0.04 U over 5 min; control, delta pH = 0.15 +/- 0.02 U over 5 min). This rapid pH recovery was not affected by inhibition of the Na-K-2Cl cotransporter but was abolished by inhibition of the Na-H exchanger. These results demonstrate that
Kamon, M.; Phillips, J.R.
1994-12-31
In this paper techniques are presented for preconditioning equations generated by discretizing constrained vector integral equations associated with magnetoquasistatic analysis. Standard preconditioning approaches often fail on these problems. The authors present a specialized preconditioning technique and prove convergence bounds independent of the constraint equations and electromagnetic excitation frequency. Computational results from analyzing several electronic packaging examples are given to demonstrate that the new preconditioning approach can sometimes reduce the number of GMRES iterations by more than an order of magnitude.
Severino, Patricia Cardoso; Muller, Gabriele do Amaral Silva; Vandresen-Filho, Samuel; Tasca, Carla Inês
2011-10-10
The search for novel, less invasive therapeutic strategies to treat neurodegenerative diseases has stimulated scientists to investigate the mechanisms involved in preconditioning. Preconditioning has been reported to occur in many organs and tissues. In the brain, the modulation of glutamatergic transmission is an important and promising target for the use of effective neuroprotective agents. Glutamatergic excitotoxicity is a factor common to neurodegenerative diseases and acute events such as cerebral ischemia, traumatic brain injury and epilepsy. In this review we focus on neuroprotection and preconditioning by chemical agents. Specifically, we discuss chemical preconditioning models using N-methyl-D-aspartate (NMDA) pre-treatment, which has been demonstrated to lead to neuroprotection against seizures and damage to neuronal tissue induced by quinolinic acid (QA). Here we attempt to gather important results obtained in the study of cellular and molecular mechanisms involved in NMDA preconditioning and neuroprotection.
Pradillo, J M; Hurtado, O; Romera, C; Cárdenas, A; Fernández-Tomé, P; Alonso-Escolano, D; Lorenzo, P; Moro, M A; Lizasoain, I
2006-01-01
A short ischemic event (ischemic preconditioning) can result in subsequent resistance to severe ischemic injury (ischemic tolerance). Glutamate is released after ischemia and produces cell death. It has been described that after ischemic preconditioning, the release of glutamate is reduced. We have shown that an in vitro model of ischemic preconditioning produces upregulation of glutamate transporters which mediates brain tolerance. We have now decided to investigate whether ischemic preconditioning-induced glutamate transporter upregulation also takes place in vivo, its cellular localization and the mechanisms by which this upregulation is controlled. A period of 10 min of temporary middle cerebral artery occlusion was used as a model of ischemic preconditioning in rat. EAAT1, EAAT2 and EAAT3 glutamate transporters were found in brain from control animals. Ischemic preconditioning produced an up-regulation of EAAT2 and EAAT3 but not of EAAT1 expression. Ischemic preconditioning-induced increase in EAAT3 expression was reduced by the TNF-alpha converting enzyme inhibitor BB1101. Intracerebral administration of either anti-TNF-alpha antibody or of a TNFR1 antisense oligodeoxynucleotide also inhibited ischemic preconditioning-induced EAAT3 up-regulation. Immunohistochemical studies suggest that, whereas the expression of EAAT3 is located in both neuronal cytoplasm and plasma membrane, ischemic preconditioning-induced up-regulation of EAAT3 is mainly localized at the plasma membrane level. In summary, these results demonstrate that in vivo ischemic preconditioning increases the expression of EAAT2 and EAAT3 glutamate transporters, with the upregulation of the latter being at least partly mediated by the TNF-alpha converting enzyme/TNF-alpha/TNFR1 pathway.
Argon Induces Protective Effects in Cardiomyocytes during the Second Window of Preconditioning.
Mayer, Britta; Soppert, Josefin; Kraemer, Sandra; Schemmel, Sabrina; Beckers, Christian; Bleilevens, Christian; Rossaint, Rolf; Coburn, Mark; Goetzenich, Andreas; Stoppe, Christian
2016-07-19
Increasing evidence indicates that argon has organoprotective properties. So far, the underlying mechanisms remain poorly understood. Therefore, we investigated the effect of argon preconditioning in cardiomyocytes within the first and second window of preconditioning. Primary isolated cardiomyocytes from neonatal rats were subjected to 50% argon for 1 h, and subsequently exposed to a sublethal dosage of hypoxia (<1% O₂) for 5 h either within the first (0-3 h) or second window (24-48 h) of preconditioning. Subsequently, the cell viability and proliferation was measured. The argon-induced effects were assessed by evaluation of mRNA and protein expression after preconditioning. Argon preconditioning did not show any cardioprotective effects in the early window of preconditioning, whereas it leads to a significant increase of cell viability 24 h after preconditioning compared to untreated cells (p = 0.015) independent of proliferation. Argon-preconditioning significantly increased the mRNA expression of heat shock protein (HSP) B1 (HSP27) (p = 0.048), superoxide dismutase 2 (SOD2) (p = 0.001), vascular endothelial growth factor (VEGF) (p < 0.001) and inducible nitric oxide synthase (iNOS) (p = 0.001). No difference was found with respect to activation of pro-survival kinases in the early and late window of preconditioning. The findings provide the first evidence of argon-induced effects on the survival of cardiomyocytes during the second window of preconditioning, which may be mediated through the induction of HSP27, SOD2, VEGF and iNOS.
Clinical Application of Preconditioning and Postconditioning to Achieve Neuroprotection
Dezfulian, Cameron; Garrett, Matthew; Gonzalez, Nestor R.
2012-01-01
Ischemic conditioning is a form of endogenous protection induced by transient, subcritical ischemia in a tissue. Organs with high sensitivity to ischemia, such as the heart, the brain, and the spinal cord, represent the most critical and potentially promising targets for therapeutic applications of ischemic conditioning. Numerous preclinical investigations have systematically studied the molecular pathways and potential benefits of both pre- and post-conditioning with promising results. The purpose of this review is to summarize the present knowledge on cerebral pre- and post-conditioning, with an emphasis on the clinical application of these forms of neuroprotection. Methods: A systematic Medline search for the terms preconditioning and postconditioning was performed. Publications related to the nervous system and to human applications were selected and analyzed. Findings: Pre- and post-conditioning appear to provide similar levels of neuroprotection. The preconditioning window of benefit can be subdivided into early and late effects, depending on whether the effect appears immediately after the sublethal stress or with a delay of days. In general, early effects have been associated with post-translational modification of critical proteins (membrane receptors, mitochondrial respiratory chain) while late effects are the result of gene up- or down-regulation. Transient ischemic attacks appear to represent a form of clinically relevant preconditioning by inducing ischemic tolerance in the brain and reducing the severity of subsequent strokes. Remote forms of ischemic pre- and post-conditioning have been more commonly used in clinical studies, as the remote application reduces the risk of injuring the target tissue for which protection is pursued. Limb transient ischemia is the preferred method of induction of remote conditioning with evidence supporting its safety. Clinical studies in a variety of populations at risk of central nervous damage including carotid disease
Bernsen, Erik; Dijkstra, Henk A.; Thies, Jonas; Wubs, Fred W.
2010-10-20
In present-day forward time-stepping ocean-climate models capturing both the wind-driven and thermohaline circulation components, a substantial amount of CPU time is needed in a so-called spin-up simulation to determine an equilibrium solution. In this paper, we present a methodology based on Jacobian-Free Newton-Krylov methods to reduce the computational time for such a spin-up problem. We apply the method to an idealized configuration of a state-of-the-art ocean model, the Modular Ocean Model version 4 (MOM4). It is shown that a typical speed-up of a factor 10-25 with respect to the original MOM4 code can be achieved and that this speed-up increases with increasing horizontal resolution.
An adaptive subspace trust-region method for frequency-domain seismic full waveform inversion
NASA Astrophysics Data System (ADS)
Zhang, Huan; Li, Xiaofan; Song, Hanjie; Liu, Shaolin
2015-05-01
Full waveform inversion is currently considered a promising seismic imaging method for obtaining high-resolution, quantitative images of the subsurface. It is a nonlinear, ill-posed inverse problem whose sensitivity to incorrect initial models and noisy data is the main obstacle to its widespread application to real data. Local optimization methods, including Newton's method and gradient methods, tend to converge to local minima, while global optimization algorithms such as simulated annealing are computationally costly. To confront this issue, in this paper we investigate the possibility of applying the trust-region method to the full waveform inversion problem. Unlike line search methods, trust-region methods force the new trial step to lie within a certain neighborhood of the current iterate. Theoretically, trust-region methods are reliable and robust, and they have very strong convergence properties. The capability of this inversion technique is tested with the synthetic Marmousi velocity model and the SEG/EAGE Salt model. Numerical examples demonstrate that the adaptive subspace trust-region method can provide solutions closer to the global minimum than the conventional approximate-Hessian approach and the L-BFGS method, with a higher convergence rate. In addition, the match between the inverted model and the true model remains excellent even when the initial model deviates far from the true model. Inversion results with noisy data also exhibit the remarkable capability of the adaptive subspace trust-region method for low signal-to-noise data inversions. These promising numerical results suggest that the adaptive subspace trust-region method is suitable for full waveform inversion, as it has stronger convergence and a higher convergence rate.
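The defining feature of the method class discussed above, confining each trial step to a ball around the current iterate rather than searching along a line, can be sketched with a generic trust-region Newton-CG run on a standard test function. This illustrates the class of methods only, not the paper's adaptive subspace variant, and a real FWI objective is far harder than this smooth test problem.

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der, rosen_hess_prod

# Generic trust-region Newton-CG minimization of the 2D Rosenbrock function.
# Each trial step solves a quadratic subproblem restricted to a ball of
# radius Delta around the current iterate; Delta shrinks or grows depending
# on how well the quadratic model predicted the actual decrease.
res = minimize(rosen, np.zeros(2), jac=rosen_der, hessp=rosen_hess_prod,
               method='trust-ncg', options={'gtol': 1e-8})
```

The Hessian enters only through Hessian-vector products (`hessp`), which is also how large-scale inversion codes avoid forming second derivatives explicitly.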
Sparsity-aware tight frame learning with adaptive subspace recognition for multiple fault diagnosis
NASA Astrophysics Data System (ADS)
Zhang, Han; Chen, Xuefeng; Du, Zhaohui; Yang, Boyuan
2017-09-01
It is a challenging problem to design dictionaries that sparsely represent diverse fault information and simultaneously discriminate different fault sources. This paper therefore describes and analyzes a novel multiple-feature recognition framework that incorporates the tight frame learning technique with an adaptive subspace recognition strategy. The proposed framework consists of four stages. First, by introducing the tight frame constraint into the popular dictionary learning model, the tight frame learning model is formulated as a nonconvex optimization problem that can be solved by alternately applying a hard thresholding operation and a singular value decomposition. Second, noise is effectively eliminated through transform sparse coding techniques. Third, the denoised signal is decoupled into discriminative feature subspaces by each tight frame filter. Finally, guided by elaborately designed fault-related sensitivity indexes, latent fault feature subspaces are adaptively recognized and multiple faults are diagnosed simultaneously. Extensive numerical experiments are subsequently implemented to investigate the sparsifying capability of the learned tight frame as well as its comprehensive denoising performance. Most importantly, the feasibility and superiority of the proposed framework are verified through multiple fault diagnosis of motor bearings. Compared with state-of-the-art fault detection techniques, several important advantages were observed: first, the framework incorporates the physical prior into a data-driven strategy, so that multiple fault features with similar oscillation morphology can be adaptively decoupled. Second, a tight frame dictionary learned directly from the noisy observation can significantly promote the sparsity of fault features compared to analytical tight frames. Third, a satisfactory complete signal space description property is guaranteed and thus
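The alternation of hard thresholding and singular value decomposition described in the first stage can be illustrated on a toy orthogonal (hence trivially tight) dictionary. This is a simplified sketch of the general tight-frame learning model, with all names, sizes, and the synthetic data our own assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def hard_threshold(A, k):
    """Keep the k largest-magnitude coefficients per column, zero the rest."""
    out = np.zeros_like(A)
    idx = np.argsort(-np.abs(A), axis=0)[:k]
    np.put_along_axis(out, idx, np.take_along_axis(A, idx, axis=0), axis=0)
    return out

def learn_tight_frame(X, k=2, iters=20):
    """Alternate sparse coding (hard thresholding) with a Procrustes-type
    SVD update that keeps the dictionary orthogonal, i.e. a tight frame."""
    d = X.shape[0]
    D = np.linalg.qr(rng.standard_normal((d, d)))[0]   # orthogonal start
    for _ in range(iters):
        A = hard_threshold(D.T @ X, k)                 # sparse coding step
        U, _, Vt = np.linalg.svd(X @ A.T)              # dictionary update
        D = U @ Vt                                     # stays orthogonal
    return D, A

# synthetic signals that are 2-sparse in a hidden orthogonal basis
d, n = 8, 200
B = np.linalg.qr(rng.standard_normal((d, d)))[0]
codes = hard_threshold(rng.standard_normal((d, n)), 2)
X = B @ codes
D, A = learn_tight_frame(X, k=2)
err = np.linalg.norm(X - D @ A) / np.linalg.norm(X)
```

The SVD step is what enforces the frame constraint exactly at every iteration, which is the structural property the abstract attributes to the learned dictionary.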
NASA Astrophysics Data System (ADS)
Xu, Y.; Tuttas, S.; Heogner, L.; Stilla, U.
2016-06-01
This paper presents an approach for the classification of photogrammetric point clouds of scaffolding components on a construction site, in preparation for automatic construction-site monitoring via reconstruction of an as-built Building Information Model (as-built BIM). Points belonging to the tubes and toeboards of scaffolds are distinguished via a subspace clustering process and a principal component analysis (PCA) algorithm. The overall workflow includes four essential processing steps. Initially, the spherical support region of each point is selected. In the second step, the normalized cut algorithm, based on spectral clustering theory, is introduced for subspace clustering, so as to select suitable subspace clusters of points and avoid outliers. In the third step, the feature of each point is calculated by measuring distances between points and the plane of the local reference frame defined by PCA within the cluster. Finally, point types are distinguished and labelled through a supervised classification method using a random forest algorithm. The effectiveness and applicability of the proposed steps are investigated on both simulated test data and a real scenario. The results of the two experiments reveal that the proposed approach is suitable for classifying points belonging to linear-shaped objects with differently shaped cross sections. For the tests using a synthetic point cloud, the classification accuracy reaches 80%, even when the data are contaminated by noise and outliers. In the real scenario, our method achieves a classification accuracy of better than 63%, without using any information about the normal vector of the local surface.
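The PCA-based local shape feature of the third step can be sketched with plain NumPy: the eigenvalues of each point's local covariance separate linear (tube-like) from planar (toeboard-like) neighborhoods. The spectral clustering and random forest stages are omitted here, and all parameter values and the toy scene are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def pca_shape_features(points, radius=0.15):
    """Per-point linearity and planarity from the eigenvalues of the local
    covariance over a spherical support region (l1 >= l2 >= l3)."""
    feats = np.zeros((len(points), 2))
    for i, p in enumerate(points):
        nbrs = points[np.linalg.norm(points - p, axis=1) < radius]
        if len(nbrs) < 3:
            continue
        ev = np.linalg.eigvalsh(np.cov(nbrs.T))[::-1]   # descending order
        l1, l2, l3 = np.maximum(ev, 1e-12)
        feats[i] = [(l1 - l2) / l1, (l2 - l3) / l1]     # linearity, planarity
    return feats

# toy scene: a thin tube along x and a flat board offset in y
t = rng.uniform(0, 1, 300)
tube = np.c_[t, 0.01 * rng.standard_normal(300), 0.01 * rng.standard_normal(300)]
board = np.c_[rng.uniform(0, 1, (300, 2)), 0.01 * rng.standard_normal(300)] + [0, 2, 0]
pts = np.vstack([tube, board])
f = pca_shape_features(pts)
```

Tube points have one dominant covariance eigenvalue (high linearity), while board points have two (high planarity), which is why eigenvalue-ratio features can feed a supervised classifier in the final step.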
Ogawa, Takahiro; Haseyama, Miki
2016-10-10
This paper presents adaptive subspace-based inverse projections via division into multiple sub-problems (ASIP-DIMS) for missing image data restoration. In the proposed method, a target problem for estimating missing image data is divided into multiple sub-problems, and each sub-problem is iteratively solved under constraints from other known image data. The solution of each sub-problem is calculated by projection onto a subspace model of image patches, a procedure we call "subspace-based inverse projection" for simplicity. The proposed method can use higher-dimensional subspaces to find unique solutions in each sub-problem, making successful restoration feasible because a high level of image representation performance is preserved. This is the main contribution of this paper. Furthermore, the proposed method generates several subspaces from known training examples and derives a new criterion within the above framework to adaptively select the optimal subspace for each target patch. In this way, the proposed method realizes missing image data restoration using ASIP-DIMS. Since the method can estimate any kind of missing image data, its potential in two image restoration tasks, image inpainting and super-resolution, based on several methods for multivariate analysis, is also demonstrated in this paper.
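The core subspace-based inverse projection step, fitting subspace coefficients on the known pixels and synthesizing the missing ones, can be sketched as a least-squares problem. This is a minimal single-subspace version with our own names and synthetic data; the adaptive multi-subspace selection of ASIP-DIMS is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(2)

def subspace_inpaint(y, known, U, mean):
    """Estimate a patch from its known pixels by inverse projection onto a
    learned subspace: fit coefficients on the observed rows of the basis,
    then synthesize the full patch and fill only the missing entries."""
    c, *_ = np.linalg.lstsq(U[known], y[known] - mean[known], rcond=None)
    x_hat = mean + U @ c
    out = y.copy()
    out[~known] = x_hat[~known]          # keep the observed pixels intact
    return out

# learn a PCA subspace from example patches lying near a 3-D subspace
d, r, n = 16, 3, 400
basis = rng.standard_normal((d, r))
train = basis @ rng.standard_normal((r, n)) + 0.01 * rng.standard_normal((d, n))
mean = train.mean(axis=1)
U = np.linalg.svd(train - mean[:, None])[0][:, :r]

x_true = basis @ rng.standard_normal(r)
known = np.ones(d, bool)
known[:5] = False                        # first 5 "pixels" are missing
y = np.where(known, x_true, 0.0)
x_rec = subspace_inpaint(y, known, U, mean)
err = np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)
```

The least-squares fit is overdetermined as long as the number of known pixels exceeds the subspace dimension, which is the uniqueness condition the abstract alludes to.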
A subspace-based parameter estimation algorithm for Nakagami-m fading channels
NASA Astrophysics Data System (ADS)
Dianat, Sohail; Rao, Raghuveer
2010-04-01
Estimation of channel fading parameters is an important task in the design of communication links such as maximum ratio combining (MRC). The MRC weights are directly related to the fading channel coefficients. In this paper, we propose a subspace based parameter estimation algorithm for the estimation of the parameters of Nakagami-m fading channels in the presence of additive white Gaussian noise. Comparisons of our proposed approach are made with other techniques available in the literature. The performance of the algorithm with respect to the Cramer-Rao bound (CRB) is investigated. Computer simulation results for different signal to noise ratios (SNR) are presented.
Slater Functions for Y to Cd Atoms by the Distance between Subspaces
NASA Astrophysics Data System (ADS)
de la Vega, J. M. García; Miguel, B.
1995-05-01
Slater functions for the atoms Y-Cd have been formulated by the distance between subspaces method. Basis sets proposed here are single- and double-zeta size and have been constructed using numerical Hartree-Fock functions as reference. A comparative study with Clementi and Roetti basis sets of the same size has been carried out, obtaining a uniform criterion for the behavior of the series of atoms Y-Cd when the number of d electrons is varied. The new basis sets provide a better simulation of some atomic properties and appear to be appropriate for molecular and solid state calculations.
The Subspace Projected Approximate Matrix (SPAM) Modification of the Davidson Method
NASA Astrophysics Data System (ADS)
Shepard, Ron; Wagner, Albert F.; Tilson, Jeffrey L.; Minkoff, Michael
2001-09-01
A modification of the iterative matrix diagonalization method of Davidson is presented that is applicable to the symmetric eigenvalue problem. This method is based on subspace projections of a sequence of one or more approximate matrices. The purpose of these approximate matrices is to improve the efficiency of the solution of the desired eigenpairs by reducing the number of matrix-vector products that must be computed with the exact matrix. Several applications are presented. These are chosen to show the range of applicability of the method, the convergence behavior for a wide range of matrix types, and also the wide range of approaches that may be employed to generate approximate matrices.
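A minimal Davidson iteration conveys the structure that SPAM modifies: Rayleigh-Ritz in a growing subspace, plus a cheap approximation (here simply the diagonal of the matrix) used to precondition the residual. This sketch assumes a symmetric, diagonally dominant matrix and is not the SPAM algorithm itself:

```python
import numpy as np

def davidson_smallest(A, tol=1e-8, max_iter=100):
    """Davidson iteration for the smallest eigenpair of a symmetric,
    diagonally dominant matrix, using a diagonal preconditioner."""
    n = A.shape[0]
    V = np.zeros((n, 0))
    t = np.eye(n, 1)                     # start from e_1
    diag = np.diag(A)
    theta, u = None, None
    for _ in range(max_iter):
        # orthonormalize the new direction against the search space
        for _ in range(2):
            t = t - V @ (V.T @ t)
        norm = np.linalg.norm(t)
        if norm < 1e-12:
            break
        V = np.hstack([V, t / norm])
        # Rayleigh-Ritz in the current subspace
        w, S = np.linalg.eigh(V.T @ A @ V)
        theta, s = w[0], S[:, 0]
        u = V @ s
        r = A @ u - theta * u            # residual of the Ritz pair
        if np.linalg.norm(r) < tol:
            return theta, u
        # Davidson correction: diagonal-preconditioned residual
        denom = theta - diag
        denom = np.where(np.abs(denom) < 1e-2, 1e-2, denom)  # avoid blowup
        t = (r / denom)[:, None]
    return theta, u

rng = np.random.default_rng(3)
n = 50
A = np.diag(np.arange(1.0, n + 1)) + 1e-2 * rng.standard_normal((n, n))
A = (A + A.T) / 2
theta, u = davidson_smallest(A)
```

SPAM replaces the exact matrix-vector products used to build the projected problem with products against a sequence of approximate matrices; the loop structure above is the unmodified baseline it improves upon.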
Universal quantum computation in a neutral-atom decoherence-free subspace
Brion, E.; Pedersen, L. H.; Moelmer, K.; Chutia, S.; Saffman, M.
2007-03-15
In this paper, we propose a way to achieve protected universal computation in a neutral-atom quantum computer subject to collective dephasing. Our proposal relies on the existence of a decoherence-free subspace (DFS), resulting from symmetry properties of the errors. After briefly describing the physical system and the error model considered, we show how to encode information into the DFS and build a complete set of safe universal gates. Finally, we provide numerical simulations for the fidelity of the different gates in the presence of time-dependent phase errors and discuss their performance and practical feasibility.
Random subspaces for encryption based on a private shared Cartesian frame
Bartlett, Stephen D.; Hayden, Patrick; Spekkens, Robert W.
2005-11-15
A private shared Cartesian frame is a novel form of private shared correlation that allows for both private classical and quantum communication. Cryptography using a private shared Cartesian frame has the remarkable property that asymptotically, if perfect privacy is demanded, the private classical capacity is three times the private quantum capacity. We demonstrate that if the requirement for perfect privacy is relaxed, then it is possible to use the properties of random subspaces to nearly triple the private quantum capacity, almost closing the gap between the private classical and quantum capacities.
Aerodynamic shape optimization using preconditioned conjugate gradient methods
NASA Technical Reports Server (NTRS)
Burgreen, Greg W.; Baysal, Oktay
1993-01-01
In an effort to further improve upon the latest advancements made in aerodynamic shape optimization procedures, a systematic study is performed to examine several current solution methodologies as applied to various aspects of the optimization procedure. It is demonstrated that preconditioned conjugate gradient-like methodologies dramatically decrease the computational effort required for such procedures. The design problem investigated is the shape optimization of the upper and lower surfaces of an initially symmetric (NACA 0012) airfoil in inviscid transonic flow at zero angle of attack. The complete surface shape is represented using a Bezier-Bernstein polynomial. The present optimization method then automatically obtains supercritical airfoil shapes over a variety of freestream Mach numbers. Furthermore, the best optimization strategy examined resulted in a factor of 8 decrease in computational time as well as a factor of 4 decrease in memory over the most efficient strategies in current use.
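The preconditioned conjugate gradient kernel underlying such gradient-like methodologies is standard. A minimal Jacobi-preconditioned version, with our own variable names and a synthetic SPD test matrix, looks like:

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=500):
    """Preconditioned conjugate gradient for SPD A.  M_inv applies the
    inverse preconditioner; here a Jacobi (diagonal) scaling is used."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p        # preconditioned search direction
        rz = rz_new
    return x

rng = np.random.default_rng(4)
n = 100
Q = rng.standard_normal((n, n))
A = Q @ Q.T + n * np.eye(n)              # well-conditioned SPD test matrix
b = rng.standard_normal(n)
d = np.diag(A)
x = pcg(A, b, lambda r: r / d)
```

The preconditioner only enters through the application of `M_inv` to the residual, so more elaborate preconditioners can be swapped in without changing the iteration.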
Hyperbaric oxygen therapy and preconditioning for ischemic and hemorrhagic stroke
Hu, Sheng-li; Feng, Hua; Xi, Guo-hua
2016-01-01
To date, therapeutic methods for ischemic and hemorrhagic stroke remain limited. The lack of oxygen supply is critical for brain injury following stroke. Hyperbaric oxygen (HBO), an approach in which patients breathe 100% pure oxygen at over 101 kPa, has been shown to facilitate oxygen delivery and increase oxygen supply. Hence, HBO has the potential to produce beneficial effects in stroke. Indeed, accumulating basic and clinical evidence has demonstrated that HBO therapy and preconditioning can induce neuroprotective functions via different mechanisms. Nevertheless, the lack of clinical translational studies limits the application of HBO. More translational studies and clinical trials are needed in the future to develop effective HBO protocols. PMID:28217297
Preconditioned Mixed Spectral Element Methods for Elasticity and Stokes Problems
NASA Technical Reports Server (NTRS)
Pavarino, Luca F.
1996-01-01
Preconditioned iterative methods for the indefinite systems obtained by discretizing the linear elasticity and Stokes problems with mixed spectral elements in three dimensions are introduced and analyzed. The resulting stiffness matrices have the structure of saddle point problems with a penalty term, which is associated with the Poisson ratio for elasticity problems or with stabilization techniques for Stokes problems. The main results of this paper show that the convergence rate of the resulting algorithms is independent of the penalty parameter and the number of spectral elements ν, and mildly dependent on the spectral degree η via the inf-sup constant. The preconditioners proposed for the whole indefinite system are block-diagonal and block-triangular. Numerical experiments presented in the final section show that these algorithms are a practical and efficient strategy for the iterative solution of the indefinite problems arising from mixed spectral element discretizations of elliptic systems.
Is There A Place For Cerebral Preconditioning In The Clinic?
Keep, Richard F.; Wang, Michael M.; Xiang, Jianming; Hua, Ya; Xi, Guohua
2010-01-01
Preconditioning (PC) describes a phenomenon whereby a sub-injury inducing stress can protect against a later injurious stress. Great strides have been made in identifying the mechanisms of PC-induced protection in animal models of brain injury. While these may help elucidate potential therapeutic targets, there are questions over the clinical utility of cerebral PC, primarily because of questions over the need to give the PC stimulus prior to the injury, narrow therapeutic windows and safety. The object of this review is to address the question of whether there may indeed be a clinical use for cerebral PC and to discuss the deficiencies in our knowledge of PC that may hamper such clinical translation. PMID:20563278
A frequency dependent preconditioned wavelet method for atmospheric tomography
NASA Astrophysics Data System (ADS)
Yudytskiy, Mykhaylo; Helin, Tapio; Ramlau, Ronny
2013-12-01
Atmospheric tomography, i.e. the reconstruction of the turbulence in the atmosphere, is a main task for the adaptive optics systems of the next generation telescopes. For extremely large telescopes, such as the European Extremely Large Telescope, this problem becomes overly complex and an efficient algorithm is needed to reduce numerical costs. Recently, a conjugate gradient method based on wavelet parametrization of turbulence layers was introduced [5]. An iterative algorithm can only be numerically efficient when the number of iterations required for a sufficient reconstruction is low. A way to achieve this is to design an efficient preconditioner. In this paper we propose a new frequency-dependent preconditioner for the wavelet method. In the context of a multi conjugate adaptive optics (MCAO) system simulated on the official end-to-end simulation tool OCTOPUS of the European Southern Observatory we demonstrate robustness and speed of the preconditioned algorithm. We show that three iterations are sufficient for a good reconstruction.
Parallel preconditioning for the solution of nonsymmetric banded linear systems
Amodio, P.; Mazzia, F.
1994-12-31
Many computational techniques require the solution of banded linear systems. Common examples derive from the solution of partial differential equations and of boundary value problems. In particular, the authors are interested in the parallel solution of block Hessenberg linear systems Gx = f arising from the solution of ordinary differential equations by means of boundary value methods (BVMs), although the preconditioning considered may be applied to any block banded linear system. BVMs have been extensively investigated in the last few years and their stability properties give promising results. A new class of BVMs called Reverse Adams, which are BV-A-stable for orders up to 6 and BV-A₀-stable for orders up to 9, has been studied.
Mohammadzadeh, Alireza; Jafari, Naser; Babapoursaatlou, Behzad; Doustkami, Hossein; Hosseinian, Adallat; Hasanpour, Mohammad
2012-01-01
The present study investigated the effectiveness of staged preconditioning in both remote and target organs. After ischemic preconditioning (IP), the myocardial release of biochemical markers, including creatine phosphokinase (CPK), cardiac creatine kinase (CK-MB), cardiac troponin T (cTnT) and lactate dehydrogenase (LDH), was evaluated in patients who underwent CABG with and without staged preconditioning. Sixty-one patients entered the study: 32 in the staged-preconditioning group and 29 in the control group. All patients underwent on-pump CABG using cardiopulmonary bypass (CPB) techniques. In the staged-preconditioning group, patients underwent two stages of IP, on a remote organ (upper limb) and on the target organ. Each stage of preconditioning consisted of 3 cycles of ischemia followed by reperfusion. Serum levels of the biochemical markers were measured postoperatively at 24, 48 and 72 h. Serum CK-MB, CPK and LDH levels were significantly lower in the staged-preconditioning group than in the control group; CK-MB release in the staged-preconditioning patients was reduced by 51% compared with controls over the 72 h after CABG. These results suggest that myocardial injury was attenuated by the effect of three rounds of both remote and target organ IP.
NASA Astrophysics Data System (ADS)
Warner, Dennis B.
1984-02-01
Recognition of the socioeconomic preconditions for successful rural water-supply and sanitation projects in developing countries is the key to identifying a new project. Preconditions are the social, economic and technical characteristics defining the project environment. There are two basic types of preconditions: those existing at the time of the initial investigation and those induced by subsequent project activities. Successful project identification is dependent upon an accurate recognition of existing constraints and a carefully tailored package of complementary investments intended to overcome the constraints. This paper discusses the socioeconomic aspects of preconditions in the context of a five-step procedure for project identification. The procedure includes: (1) problem identification; (2) determination of socioeconomic status; (3) technology selection; (4) utilization of support conditions; and (5) benefit estimation. Although the establishment of specific preconditions should be based upon the types of projects likely to be implemented, the paper outlines a number of general relationships regarding favourable preconditions in water and sanitation planning. These relationships are used within the above five-step procedure to develop a set of general guidelines for the application of preconditions in the identification of rural water-supply and sanitation projects.
Exercise preconditioning of myocardial infarct size in dogs is triggered by calcium.
Parra, Víctor M; Macho, Pilar; Sánchez, Gina; Donoso, Paulina; Domenech, Raúl J
2015-03-01
We previously showed that exercise induces early and late myocardial preconditioning in dogs and that these effects are mediated through activation of nicotinamide adenine dinucleotide phosphate reduced form (NADPH) oxidase. Since intracoronary administration of calcium induces preconditioning and exercise enhances calcium inflow to the cell, we studied whether this effect of exercise triggers exercise preconditioning independently of its hemodynamic effects. In 81 dogs, we analyzed the effect of blocking sarcolemmal L-type Ca channels with a low dose of verapamil on early and late preconditioning by exercise, and in another 50 dogs we studied the effect of verapamil on NADPH oxidase activation in early exercise preconditioning. Exercise reduced myocardial infarct size by 76% and 52% (early and late windows, respectively; P < 0.001 for both), and these effects were abolished by a single low dose of verapamil given before exercise. This dose of verapamil did not modify the effect of exercise on metabolic and hemodynamic parameters. In addition, verapamil blocked the activation of NADPH oxidase during early preconditioning. The protective effect of exercise preconditioning on myocardial infarct size is therefore triggered, at least in part, by the increased calcium inflow to the cell during exercise and, during the early window, is mediated by NADPH oxidase activation.
Selvaraj, Uma Maheswari; Ortega, Sterling B; Hu, Ruilong; Gilchrist, Robert; Kong, Xiangmei; Partin, Alexander; Plautz, Erik J; Klein, Robyn S; Gidday, Jeffrey M; Stowe, Ann M
2017-03-01
Repetitive hypoxic preconditioning creates long-lasting, endogenous protection in a mouse model of stroke, characterized by reductions in leukocyte-endothelial adherence, inflammation, and infarct volumes. The constitutively expressed chemokine CXCL12 can be upregulated by hypoxia and limits leukocyte entry into brain parenchyma during central nervous system inflammatory autoimmune disease. We therefore hypothesized that the sustained tolerance to stroke induced by repetitive hypoxic preconditioning is mediated, in part, by long-term CXCL12 upregulation at the blood-brain barrier (BBB). In male Swiss Webster mice, repetitive hypoxic preconditioning elevated cortical CXCL12 protein levels, and the number of cortical CXCL12+ microvessels, for at least two weeks after the last hypoxic exposure. Repetitive hypoxic preconditioning-treated mice maintained more CXCL12-positive vessels than untreated controls following transient focal stroke, despite cortical decreases in CXCL12 mRNA and protein. Continuous administration of the CXCL12 receptor (CXCR4) antagonist AMD3100 for two weeks following repetitive hypoxic preconditioning countered the increase in CXCL12-positive microvessels, both prior to and following stroke. AMD3100 blocked the protective post-stroke reductions in leukocyte diapedesis, including macrophages and NK cells, and blocked the protective effect of repetitive hypoxic preconditioning on lesion volume, but had no effect on blood-brain barrier dysfunction. These data suggest that CXCL12 upregulation prior to stroke onset, and its actions following stroke, contribute to the endogenous, anti-inflammatory phenotype induced by repetitive hypoxic preconditioning.
Can anaerobic performance be improved by remote ischemic preconditioning?
Lalonde, François; Curnier, Daniel Y
2015-01-01
Remote ischemic preconditioning (RIPC) provides a substantial benefit for heart protection during surgery, and recent literature suggests it may also enhance sports performance. The aim of this study was to investigate the effect of RIPC on anaerobic performance. Seventeen healthy, regularly active participants took part in the project (9 women and 8 men, mean age 28 ± 8 years). In a crossover design, participants were randomly assigned to an RIPC intervention (four cycles of 5 minutes of ischemia followed by 5 minutes of reperfusion, induced with a pressure cuff) or a SHAM intervention. After the intervention, participants were tested for alactic anaerobic performance (6 seconds of effort) followed by a Wingate test (lactic system) on an electromagnetic cycle ergometer. The following parameters were evaluated: average power (in watts), peak power (in watts), rating of perceived exertion, fatigue index (in watts per second), time to reach peak power (in seconds), minimum power (in watts), average power-to-weight ratio (in watts per kilogram), and maximum power-to-weight ratio (in watts per kilogram). Peak power for the Wingate test was 794 W for RIPC vs. 777 W for the control condition (p = 0.208); average power was 529 W vs. 520 W (p = 0.079); perceived exertion for RIPC was 9/10 on the Borg scale vs. 10/10 for the control condition (p = 0.123). Remote ischemic preconditioning did not offer any significant benefit for anaerobic performance.
Whetzel, T P; Stevenson, T R; Sharman, R B; Carlsen, R C
1997-12-01
It has been well documented that ischemic preconditioning limits ischemic-reperfusion injury in cardiac muscle, but the ability of ischemic preconditioning to limit skeletal muscle injury is less clear. Previous reports have emphasized the beneficial effects of ischemic preconditioning on skeletal muscle structure and capillary perfusion but have not evaluated muscle function. We investigated the morphologic and functional consequences of ischemic preconditioning, followed by a 2-hour period of tourniquet ischemia on muscles in the rat hindlimb. The 2-hour ischemia was imposed without preconditioning, or was preceded by three brief (10 minutes on/10 minutes off) preischemic conditioning intervals. We compared muscle morphology, isometric contractile function, and muscle fatigue properties in predominantly fast-twitch, tibialis anterior muscles 3 (n = 8) and 7 (n = 8) days after ischemia-reperfusion. Two hours of ischemia, followed by reperfusion, results in a 20 percent reduction of muscle mass (p < 0.05) and a 33 percent reduction in tetanic tension (p < 0.05) when compared with controls (n = 8) at 3 days. The same protocol, when preceded by ischemic preconditioning, results in similar decreases in muscle mass and contractile function. Neuromuscular transmission was also impaired in both ischemic groups 7 days after ischemia. Nerve-evoked maximum tetanic tension was 69 percent of the tension produced by direct muscle stimulation in the ischemia group and 65 percent of direct tension in the ischemic preconditioning/ischemia group. In summary, ischemic preconditioning, using the same protocol reported to be effective in limiting infarct size in porcine muscle, had no significant benefit in limiting injury or improving recovery in the ischemic rat tibialis anterior. The value of ischemic preconditioning in reducing imposed ischemic-reperfusion-induced functional deficits in skeletal muscle remains to be demonstrated.
Saga, Norio; Katamoto, Shizuo; Naito, Hisashi
2008-01-01
The purpose of this study was to clarify whether heat preconditioning results in less eccentric exercise-induced muscle damage and muscle soreness, and whether the repeated bout effect is enhanced by heat preconditioning prior to eccentric exercise. Nine untrained male volunteers aged 23 ± 3 years participated in this study. Heat preconditioning included treatment with a microwave hyperthermia unit (150 W, 20 min) that was randomly applied to one of the subject's arms (MW); the other arm was used as a control (CON). One day after heat preconditioning, the subjects performed 24 maximal isokinetic eccentric contractions of the elbow flexors at 30°·s⁻¹ (ECC1). One week after ECC1, the subjects repeated the procedure (ECC2). After each bout of exercise, maximal voluntary contraction (MVC), range of motion (ROM) of the elbow joint, upper arm circumference, blood creatine kinase (CK) activity and muscle soreness were measured. The subjects experienced both conditions at an interval of 3 weeks. MVC and ROM in the MW arm were significantly higher than those in the CON arm (p < 0.05) for ECC1; however, heat preconditioning had no significant effect on upper arm circumference, blood CK activity, or muscle soreness following ECC1 and ECC2. Heat preconditioning may protect human skeletal muscle from eccentric exercise-induced muscle damage after a single bout of eccentric exercise but does not appear to promote the repeated bout effect after a second bout of eccentric exercise. Key points: There have been few studies of the effects of heat preconditioning on muscle damage caused by eccentric exercise and on the repeated bout effect after a second bout of eccentric exercise. Heat preconditioning with microwave hyperthermia may attenuate eccentric exercise-induced muscle damage. Heat preconditioning does not enhance the repeated bout effect. PMID:24150151
Fructose-1,6-biphosphate in rat intestinal preconditioning: involvement of nitric oxide
Sola, A; Rosello-Catafau, J; Gelpi, E; Hotter, G
2001-01-01
BACKGROUND AND AIMS—Inhibition of glyceraldehyde-3-phosphate dehydrogenase (GAPDH) by nitric oxide (NO) in intestinal preconditioning could modify the rate of formation of glycolytic intermediates. Fructose-1,6-biphosphate (F16BP) is a glycolytic intermediate that protects tissue from ischaemia/reperfusion injury. We evaluated if F16BP may be endogenously accumulated as a consequence of GAPDH inhibition by NO during intestinal preconditioning in rats. METHODS—We assessed: (1) effect of preconditioning on F16BP content; (2) effect of NO on GAPDH activity before and during sustained ischaemia; and (3) protective effect of F16BP in control, ischaemic, and preconditioned animals with or without administration of N-nitro-L-arginine methyl ester (L-NAME), NO donor, or F16BP. RESULTS—Preconditioned rats showed a significant transient decrease in GAPDH activity and also maintained basal F16BP levels longer than ischaemic rats. L-NAME administration to preconditioned rats reversed these effects. F16BP administration to ischaemic rats decreased protein release in the perfusate. Administration of F16BP to L-NAME treated rats attenuated the harmful effect of L-NAME. CONCLUSIONS—Our study indicates that F16BP may be endogenously accumulated in preconditioned rats as a consequence of inhibition of GAPDH by NO, and this may contribute to the protection observed in intestinal preconditioning. Keywords: fructose-1,6-biphosphate; glyceraldehyde- 3-phosphate dehydrogenase; intestinal preconditioning; ischaemia/reperfusion injury; nitric oxide PMID:11156636
Stetler, R. Anne; Leak, Rehana K.; Gan, Yu; Li, Peiying; Hu, Xiaoming; Jing, Zheng; Chen, Jun; Zigmond, Michael J.; Gao, Yanqin
2014-01-01
Preconditioning is a phenomenon in which brief episodes of a sublethal insult induce robust protection against subsequent lethal injuries. Preconditioning has been observed in multiple organisms and can occur in the brain as well as other tissues. Extensive animal studies suggest that the brain can be preconditioned to resist acute injuries, such as ischemic stroke, neonatal hypoxia/ischemia, trauma, and agents that are used in models of neurodegenerative diseases, such as Parkinson's disease and Alzheimer's disease. Effective preconditioning stimuli are numerous and diverse, ranging from transient ischemia, hypoxia, hyperbaric oxygen, hypothermia and hyperthermia, to exposure to neurotoxins and pharmacological agents. The phenomenon of "cross-tolerance," in which a sublethal stress protects against a different type of injury, suggests that different preconditioning stimuli may confer protection against a wide range of injuries. Research conducted over the past few decades indicates that brain preconditioning is complex, involving multiple effectors such as metabolic inhibition, activation of extra- and intracellular defense mechanisms, a shift in the neuronal excitatory/inhibitory balance, and reduction in inflammatory sequelae. An improved understanding of brain preconditioning should help us identify innovative therapeutic strategies that prevent or at least reduce neuronal damage in susceptible patients. In this review, we focus on the experimental evidence of preconditioning in the brain and systematically survey the models used to develop paradigms for neuroprotection, and then discuss the clinical potential of brain preconditioning. In the subsequent component of this two-part series, we will discuss the cellular and molecular events that are likely to underlie these phenomena. PMID:24389580
Poly-IC preconditioning protects against cerebral and renal ischemia-reperfusion injury.
Packard, Amy E B; Hedges, Jason C; Bahjat, Frances R; Stevens, Susan L; Conlin, Michael J; Salazar, Andres M; Stenzel-Poore, Mary P
2012-02-01
Preconditioning induces ischemic tolerance, which confers robust protection against ischemic damage. We show marked protection with polyinosinic polycytidylic acid (poly-IC) preconditioning in three models of murine ischemia-reperfusion injury. Poly-IC preconditioning induced protection against ischemia modeled in vitro in brain cortical cells and in vivo in models of brain ischemia and renal ischemia. Further, unlike other Toll-like receptor (TLR) ligands, which generally induce significant inflammatory responses, poly-IC elicits only modest systemic inflammation. Results show that poly-IC is a new powerful prophylactic treatment that offers promise as a clinical therapeutic strategy to minimize damage in patient populations at risk of ischemic injury.
Eigenmode Analysis of Boundary Conditions for One-Dimensional Preconditioned Euler Equations
NASA Technical Reports Server (NTRS)
Darmofal, David L.
1998-01-01
An analysis of the effect of local preconditioning on boundary conditions for the subsonic, one-dimensional Euler equations is presented. Decay rates for the eigenmodes of the initial boundary value problem are determined for different boundary conditions. Riemann invariant boundary conditions based on the unpreconditioned Euler equations are shown to be reflective with preconditioning, and, at low Mach numbers, disturbances do not decay. Other boundary conditions are investigated which are non-reflective with preconditioning and numerical results are presented confirming the analysis.
Characteristic time-stepping or local preconditioning of the Euler equations
NASA Technical Reports Server (NTRS)
Van Leer, Bram; Lee, Wen-Tzong; Roe, Philip L.
1991-01-01
A derivation is presented of a local preconditioning matrix for the multidimensional Euler equations that reduces the spread of the characteristic speeds to the lowest attainable value. Numerical experiments applying this preconditioning matrix to an explicit upwind discretization of the two-dimensional Euler equations show that it significantly increases the rate of convergence to a steady solution. It is predicted that local preconditioning will also simplify convergence-acceleration boundary procedures such as the Karni (1991) procedure for the far field and the Mazaheri and Roe (1991) procedure for a solid wall.
Yu, Yinan; Diamantaras, Konstantinos I; McKelvey, Tomas; Kung, Sun-Yuan
2016-12-07
In kernel-based classification models, given limited computational power and storage capacity, operations over the full kernel matrix become prohibitive. In this paper, we propose a new supervised learning framework using kernel models for sequential data processing. The framework is based on two components that both aim at enhancing the classification capability with a subset selection scheme. The first part is a subspace projection technique in the reproducing kernel Hilbert space using a CLAss-specific Subspace Kernel representation for kernel approximation. In the second part, we propose a novel structural risk minimization algorithm called adaptive margin slack minimization, which iteratively improves the classification accuracy through adaptive data selection. We motivate each part separately, and then integrate them into learning frameworks for large-scale data. We propose two such frameworks: memory-efficient sequential processing for sequential data processing and parallelized sequential processing for distributed computing with sequential data acquisition. We test our methods on several benchmark data sets and compare them with state-of-the-art techniques to verify the validity of the proposed techniques.
Renaut, R.; He, Q.
1994-12-31
A new parallel iterative algorithm for unconstrained optimization by multisplitting is proposed. In this algorithm the original problem is split into a set of small optimization subproblems which are solved using well-known sequential algorithms. These algorithms are iterative in nature, e.g. the DFP variable metric method. Here the authors use sequential algorithms based on an inexact subspace search, which is an extension of the usual idea of an inexact line search. Essentially, the idea of the inexact line search for nonlinear minimization is that at each iteration one only finds an approximate minimum in the line search direction. Hence by inexact subspace search, they mean that, instead of finding the minimum of the subproblem at each iteration, they perform an incomplete downhill search to obtain an approximate minimum. Some convergence and numerical results for this algorithm are presented. Further, the original theory is generalized to the situation with a singular Hessian. Applications to nonlinear least squares problems are presented, with experimental results from implementations on an Intel iPSC/860 Hypercube with 64 nodes as well as on the Intel Paragon.
A tensor-based subspace approach for bistatic MIMO radar in spatial colored noise.
Wang, Xianpeng; Wang, Wei; Li, Xin; Wang, Junxiang
2014-02-25
In this paper, a new tensor-based subspace approach is proposed to estimate the direction of departure (DOD) and the direction of arrival (DOA) for bistatic multiple-input multiple-output (MIMO) radar in the presence of spatial colored noise. Firstly, the received signals can be packed into a third-order measurement tensor by exploiting the inherent structure of the matched filter. Then, the measurement tensor can be divided into two sub-tensors, and a cross-covariance tensor is formulated to eliminate the spatial colored noise. Finally, the signal subspace is constructed by utilizing the higher-order singular value decomposition (HOSVD) of the cross-covariance tensor, and the DOD and DOA can be obtained through the estimation of signal parameters via rotational invariance technique (ESPRIT) algorithm, which are paired automatically. Since the multidimensional inherent structure and the cross-covariance tensor technique are used, the proposed method provides better angle estimation performance than Chen's method, the ESPRIT algorithm and the multi-SVD method. Simulation results confirm the effectiveness and the advantage of the proposed method.
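The cross-covariance step is what removes the colored noise: noise that is independent between the two data partitions averages out of their cross-covariance, while the shared signal survives. The following is a minimal matrix (non-tensor) NumPy sketch of that idea only; the array size, angle, snapshot count, and noise level are illustrative assumptions, not values from the paper, and the HOSVD/ESPRIT stages are not reproduced.

```python
import numpy as np

# Two data sets sharing one signal but with mutually independent noise:
# their cross-covariance retains the signal subspace and suppresses noise.
rng = np.random.default_rng(4)
M, N = 6, 2000                       # sensors, snapshots (illustrative)
a = np.exp(1j * np.pi * np.arange(M) * np.sin(np.deg2rad(15)))  # steering vector
s = rng.standard_normal(N) + 1j * rng.standard_normal(N)        # source signal

noise1 = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
noise2 = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
X1 = np.outer(a, s) + 0.5 * noise1
X2 = np.outer(a, s) + 0.5 * noise2

R12 = X1 @ X2.conj().T / N           # cross-covariance: noise terms average out
U, sv, _ = np.linalg.svd(R12)
signal_dir = U[:, 0]                 # dominant left singular vector ~ span{a}
alignment = abs(signal_dir.conj() @ a) / np.linalg.norm(a)
```

The singular value gap sv[0] >> sv[1] is what makes the signal subspace recoverable even though neither X1 nor X2 alone has noise-free statistics.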
Reweighted mass center based object-oriented sparse subspace clustering for hyperspectral images
NASA Astrophysics Data System (ADS)
Zhai, Han; Zhang, Hongyan; Zhang, Liangpei; Li, Pingxiang
2016-10-01
Considering the inevitable obstacles faced by pixel-based clustering methods, such as salt-and-pepper noise, high computational complexity, and the lack of spatial information, a reweighted mass center based object-oriented sparse subspace clustering (RMC-OOSSC) algorithm for hyperspectral images (HSIs) is proposed. First, the mean-shift segmentation method is utilized to oversegment the HSI to obtain meaningful objects. Second, a distance-reweighted mass center learning model is presented to extract representative and discriminative features for each object. Third, assuming that all the objects are sampled from a union of subspaces, it is natural to apply the sparse subspace clustering (SSC) algorithm to the HSI. To address the high correlation among the hyperspectral objects, a weighting scheme is adopted to ensure that highly correlated objects are preferred in the procedure of sparse representation, to reduce the representation errors. Two widely used hyperspectral datasets were utilized to test the performance of the proposed RMC-OOSSC algorithm, obtaining high clustering accuracies (overall accuracy) of 71.98% and 89.57%, respectively. The experimental results show that the proposed method clearly improves the clustering performance with respect to other state-of-the-art clustering methods, and it significantly reduces the computational time.
Separation of vibrating and static SAR object signatures via an orthogonal subspace transformation
NASA Astrophysics Data System (ADS)
Pepin, Matthew; Hayat, Majeed M.
2012-05-01
When vibrating objects are present in a Synthetic Aperture Radar image, they induce a modulation in the collected pulse-to-pulse Doppler signal. At higher frequencies (up to a sampling limit dictated by half the PRF) the modulation is low in amplitude, due to the physical limits of vibrating structures, and is swamped by the Doppler from static objects (clutter). This paper presents an orthogonal subspace transform that separates the modulation of a vibrating object from the static clutter. After the transformation, the major frequencies of the vibration are estimated with asymptotically (as the number of pulses increases) decreasing variance and bias. Although the effects of and SAR image artifacts from vibrating objects are widely known, their utility has been limited to high signal-to-noise, low-frequency vibrating objects. The method presented here lowers the minimum required signal-to-noise ratio of the vibrating object relative to other methods. Additionally, vibrations over the full (azimuth-sampled) frequency range, from one over the aperture time to the pulse repetition frequency (PRF), are measured equally with respect to the noise level at each specific frequency. After separation of the vibrating and static object signal subspaces, any of the many spectral estimation methods can be applied to estimate the vibration spectrum.
Abdel-Khalik, Hany S.; Zhang, Qiong
2014-05-20
The development of hybrid Monte-Carlo-Deterministic (MC-DT) approaches, taking place over the past few decades, has primarily focused on shielding and detection applications where the analysis requires a small number of responses, i.e. at the detector location(s). This work further develops a recently introduced global variance reduction approach, denoted the SUBSPACE approach, which is designed to allow the use of MC simulation, currently limited to benchmarking calculations, for routine engineering calculations. By way of demonstration, the SUBSPACE approach is applied to assembly-level calculations used to generate the few-group homogenized cross-sections. These models are typically expensive and need to be executed on the order of 10^{3} - 10^{5} times to properly characterize the few-group cross-sections for downstream core-wide calculations. Applicability to k-eigenvalue core-wide models is also demonstrated in this work. Given the favorable results obtained in this work, we believe the applicability of the MC method for reactor analysis calculations could be realized in the near future.
High Resolution DOA Estimation Using Unwrapped Phase Information of MUSIC-Based Noise Subspace
NASA Astrophysics Data System (ADS)
Ichige, Koichi; Saito, Kazuhiko; Arai, Hiroyuki
This paper presents a high resolution Direction-Of-Arrival (DOA) estimation method using unwrapped phase information of the MUSIC-based noise subspace. Super-resolution DOA estimation methods such as MUSIC, Root-MUSIC and ESPRIT have attracted great attention because of their excellent performance in estimating the DOAs of incident signals. These methods achieve high accuracy in a good propagation environment, but may fail to estimate DOAs in severe environments such as low Signal-to-Noise Ratio (SNR), a small number of snapshots, or incident waves arriving from closely spaced angles. In the MUSIC method, the spectrum is calculated from the absolute value of the inner product between the array response and the noise eigenvectors, meaning that MUSIC employs only amplitude characteristics and does not use any phase characteristics. Recalling that phase characteristics play an important role in signal and image processing, we expect that DOA estimation accuracy can be further improved by using phase information in addition to the MUSIC spectrum. This paper develops a procedure to obtain an accurate spectrum for DOA estimation using unwrapped and differentiated phase information of the MUSIC-based noise subspace. Performance of the proposed method is evaluated through computer simulation in comparison with some conventional estimation methods.
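The baseline being refined here, the amplitude-only MUSIC spectrum, can be sketched compactly in NumPy for a uniform linear array. The array size, angles, SNR, and grid below are illustrative assumptions, and the phase-unwrapping refinement the paper proposes is not included.

```python
import numpy as np

# Amplitude-only MUSIC pseudospectrum for a uniform linear array (sketch).
M, d, K = 8, 0.5, 2                  # sensors, spacing (wavelengths), sources
true_doas = [-20.0, 25.0]            # illustrative DOAs in degrees
N = 200                              # snapshots
rng = np.random.default_rng(0)

def steering(theta_deg):
    theta = np.deg2rad(theta_deg)
    return np.exp(2j * np.pi * d * np.arange(M) * np.sin(theta))

A = np.stack([steering(t) for t in true_doas], axis=1)
S = rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))
noise = 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
X = A @ S + noise

R = X @ X.conj().T / N               # sample covariance
_, V = np.linalg.eigh(R)             # eigenvalues in ascending order
En = V[:, :M - K]                    # noise subspace

grid = np.arange(-90.0, 90.0, 0.25)
spec = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2
                 for t in grid])

# pick the two highest local maxima as DOA estimates
interior = (spec[1:-1] > spec[:-2]) & (spec[1:-1] > spec[2:])
cand, vals = grid[1:-1][interior], spec[1:-1][interior]
est = np.sort(cand[np.argsort(vals)[-K:]])
```

Note that `spec` uses only `np.linalg.norm`, i.e. the magnitude of the projection onto the noise subspace; this is exactly the discarded phase information that the paper's method exploits.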
Multiple Dipole Sources Localization from the Scalp EEG Using a High-resolution Subspace Approach.
Ding, Lei; He, Bin
2005-01-01
We have developed a new algorithm, FINE, to enhance the spatial resolution and localization accuracy for closely spaced sources, in the framework of subspace source localization. Computer simulations were conducted in the present study to evaluate the performance of FINE, as compared with classic subspace source localization algorithms, i.e. MUSIC and RAP-MUSIC, in a realistic-geometry head model by means of the boundary element method (BEM). The results show that FINE could distinguish superficial simulated sources with distances as low as 8.5 mm, and deep simulated sources with distances as low as 16.3 mm. Our results also show that the accuracy of source orientation estimates from FINE is better than that of MUSIC and RAP-MUSIC for closely spaced sources. Motor potentials, obtained during finger movements in a human subject, were analyzed using FINE. The detailed neural activity distribution within the contralateral premotor areas and supplementary motor areas (SMA) is revealed by FINE as compared with MUSIC. The present study suggests that FINE has excellent spatial resolution in imaging neural sources.
3D deformable image matching: a hierarchical approach over nested subspaces
NASA Astrophysics Data System (ADS)
Musse, Olivier; Heitz, Fabrice; Armspach, Jean-Paul
2000-06-01
This paper presents a fast hierarchical method to perform dense deformable inter-subject matching of 3D MR images of the brain. To recover the complex morphological variations in neuroanatomy, a hierarchy of 3D deformation fields is estimated by minimizing a global energy function over a sequence of nested subspaces. The nested subspaces, generated from a single scaling function, consist of deformation fields constrained at different scales. The highly nonlinear energy function, describing the interactions between the target and the source images, is minimized using a coarse-to-fine continuation strategy over this hierarchy. The resulting deformable matching method shows low sensitivity to local minima and is able to track large nonlinear deformations with moderate computational load. The performance of the approach is assessed both on simulated 3D transformations and on a real database of 3D brain MR images from different individuals. The method has proven efficient in putting into correspondence the principal anatomical structures of the brain. An application to atlas-based MRI segmentation, by transporting a labeled segmentation map onto patient data, is also presented.
Adaptive subspace detection of extended target in white Gaussian noise using sinc basis
NASA Astrophysics Data System (ADS)
Zhang, Xiao-Wei; Li, Ming; Qu, Jian-She; Yang, Hui
2016-01-01
For high resolution radar (HRR), the problem of detecting an extended target is considered in this paper. Based on a single observation, a new two-step detection based on sparse representation (TSDSR) method is proposed to detect the extended target in the presence of Gaussian noise with unknown covariance. In the new method, a Sinc dictionary is introduced to sparsely represent the high resolution range profile (HRRP). Meanwhile, adaptive subspace pursuit (ASP) is presented to recover the HRRP embedded in the Gaussian noise and estimate the noise covariance matrix. Based on the Sinc dictionary and the estimated noise covariance matrix, a one-step subspace detector (OSSD) for the first-order Gaussian (FOG) model without secondary data is adopted to realise the extended target detection. Finally, the proposed TSDSR method is applied to raw HRR data. Experimental results demonstrate that HRRPs of different targets can be sparsely represented very well with the Sinc dictionary. Moreover, the new method can estimate the noise power with small error and achieves good detection performance.
NASA Astrophysics Data System (ADS)
La Cour, Brian R.; Ostrove, Corey I.
2017-01-01
This paper describes a novel approach to solving unstructured search problems using a classical, signal-based emulation of a quantum computer. The classical nature of the representation allows one to perform subspace projections in addition to the usual unitary gate operations. Although bandwidth requirements will limit the scale of problems that can be solved by this method, it can nevertheless provide a significant computational advantage for problems of limited size. In particular, we find that, for the same number of noisy oracle calls, the proposed subspace projection method provides a higher probability of success for finding a solution than does a single application of Grover's algorithm on the same device.
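For reference, the baseline being compared against, Grover's algorithm, can be emulated classically by iterating an oracle phase flip and an inversion about the mean on a length-N amplitude vector. A sketch under illustrative assumptions (N = 16, one marked item, noiseless oracle); the paper's subspace-projection refinement is not modeled here.

```python
import numpy as np

# Classical state-vector emulation of Grover search for one marked item.
N, marked = 16, 11                          # search space size, marked index
psi = np.full(N, 1.0 / np.sqrt(N))          # uniform superposition

def oracle(v):
    w = v.copy()
    w[marked] = -w[marked]                  # phase-flip the marked amplitude
    return w

def diffusion(v):
    return 2.0 * v.mean() - v               # inversion about the mean

k = int(np.floor(np.pi / 4.0 * np.sqrt(N))) # near-optimal iteration count (3)
for _ in range(k):
    psi = diffusion(oracle(psi))

p_success = psi[marked] ** 2                # probability of measuring the marked item
```

For N = 16 and 3 iterations the success probability is sin^2(7 arcsin(1/4)), roughly 0.96, which is the single-application figure a subspace-projection method would be compared against.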
Cai, Yunfeng; Bai, Zhaojun; Pask, John E.; Sukumar, N.
2013-12-15
The iterative diagonalization of a sequence of large ill-conditioned generalized eigenvalue problems is a computational bottleneck in quantum mechanical methods employing a nonorthogonal basis for ab initio electronic structure calculations. We propose a hybrid preconditioning scheme to effectively combine global and locally accelerated preconditioners for rapid iterative diagonalization of such eigenvalue problems. In partition-of-unity finite-element (PUFE) pseudopotential density-functional calculations employing a nonorthogonal basis, we show that the hybrid preconditioned block steepest descent method is a cost-effective eigensolver, outperforming current state-of-the-art global preconditioning schemes, and as efficient for the ill-conditioned generalized eigenvalue problems produced by PUFE as the locally optimal block preconditioned conjugate-gradient method is for the well-conditioned standard eigenvalue problems produced by planewave methods.
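The flavor of a preconditioned steepest descent eigensolver can be sketched generically: at each step, precondition the eigenvalue residual and perform Rayleigh-Ritz on the span of the current iterate and the preconditioned residual. The sketch below uses a standard (not generalized) symmetric eigenproblem, a single vector rather than a block, and a simple Jacobi preconditioner; all are illustrative assumptions, not the paper's PUFE setting or hybrid preconditioner.

```python
import numpy as np

# Preconditioned steepest descent with Rayleigh-Ritz for the smallest
# eigenpair of a symmetric matrix (generic sketch, illustrative test matrix).
n = 100
A = np.diag(np.arange(1.0, n + 1))
A[0, 1] = A[1, 0] = 0.3               # small off-diagonal coupling
M_inv = 1.0 / np.diag(A)              # Jacobi preconditioner (assumption)

rng = np.random.default_rng(2)
x = rng.standard_normal(n)
x /= np.linalg.norm(x)
for _ in range(200):
    Ax = A @ x
    lam = x @ Ax                      # Rayleigh quotient
    r = Ax - lam * x                  # eigenvalue residual
    if np.linalg.norm(r) < 1e-10:
        break
    w = M_inv * r                     # preconditioned residual
    Q, _ = np.linalg.qr(np.stack([x, w], axis=1))
    T = Q.T @ A @ Q                   # Rayleigh-Ritz on span{x, w}
    evals, evecs = np.linalg.eigh(T)
    x = Q @ evecs[:, 0]               # keep the lowest Ritz vector
lam = x @ (A @ x)
```

Block versions enlarge the trial subspace with several iterates and preconditioned residuals at once; the quality of `M_inv` is what governs the iteration count on ill-conditioned problems.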
Preconditioned alternating direction method of multipliers for inverse problems with constraints
NASA Astrophysics Data System (ADS)
Jiao, Yuling; Jin, Qinian; Lu, Xiliang; Wang, Weijie
2017-02-01
We propose a preconditioned alternating direction method of multipliers (ADMM) to solve linear inverse problems in Hilbert spaces with constraints, where the feature of the sought solution under a linear transformation is captured by a possibly non-smooth convex function. During each iteration step, our method avoids solving large linear systems by choosing a suitable preconditioning operator. In case the data is given exactly, we prove the convergence of our preconditioned ADMM without assuming the existence of a Lagrange multiplier. In case the data is corrupted by noise, we propose a stopping rule using information on noise level and show that our preconditioned ADMM is a regularization method; we also propose a heuristic rule when the information on noise level is unavailable or unreliable and give its detailed analysis. Numerical examples are presented to test the performance of the proposed method.
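In finite dimensions, the structure of ADMM for an l1-regularized linear inverse problem, min (1/2)||Ax - b||^2 + lam*||z||_1 subject to x = z, looks as follows. This sketch solves the x-subproblem exactly by factoring A^T A + rho*I once, which is feasible only for small problems; the paper's contribution is precisely to avoid such large linear solves by choosing a preconditioning operator, a refinement not reproduced here. All problem sizes and parameters are illustrative.

```python
import numpy as np

# Standard (unpreconditioned) ADMM for a small sparse-recovery problem.
rng = np.random.default_rng(1)
m, n = 60, 30
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[:4] = [3.0, -2.0, 1.5, 2.5]             # sparse ground truth
b = A @ x_true + 0.01 * rng.standard_normal(m)

lam, rho = 0.1, 1.0
x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
Q = np.linalg.inv(A.T @ A + rho * np.eye(n))   # direct factor: the costly step
for _ in range(200):
    x = Q @ (A.T @ b + rho * (z - u))          # x-update (linear solve)
    v = x + u
    z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)  # soft threshold
    u = u + x - z                              # scaled dual update
```

The z-update is the proximal map of the non-smooth term, so the only expensive piece is the x-update; replacing `Q @ (...)` with a cheap preconditioned step is what a preconditioned ADMM changes.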
Gao, Zhi-xin; Rao, Jin; Li, Yuan-hai
2017-01-01
Postoperative cognitive dysfunction is a crucial public health issue that has been increasingly studied in efforts to reduce symptoms or prevent its occurrence. However, effective advances remain lacking. Hyperbaric oxygen preconditioning has proved to protect vital organs, such as the heart, liver, and brain. Recently, it has been introduced and widely studied in the prevention of postoperative cognitive dysfunction, with promising results. However, the neuroprotective mechanisms underlying this phenomenon remain controversial. This review summarizes and highlights the definition and application of hyperbaric oxygen preconditioning, the perniciousness and pathogenetic mechanism underlying postoperative cognitive dysfunction, and the effects that hyperbaric oxygen preconditioning has on postoperative cognitive dysfunction. Finally, we conclude that hyperbaric oxygen preconditioning is an effective and feasible method to prevent, alleviate, and improve postoperative cognitive dysfunction, and that its mechanism of action is very complex, involving the stimulation of endogenous antioxidant and anti-inflammation defense systems.
Kulinskiĭ, V I; Gavrilina, T V; Minakina, L N; Kovtun, V Iu
2006-01-01
Different types of hypoxic preconditioning (hypoxic, circulatory, hemic, and tissue hypoxia) increase tolerance to complete global cerebral ischemia at early time points (hours). Biochemico-pharmacological analysis with the use of selective agonists and antagonists showed the importance of adenosine A1 receptors and K+(ATP) channels in the mechanisms of the neuroprotective effect and natural tolerance. A general scheme of the investigated mechanisms of different types of hypoxic preconditioning is proposed.
Preconditioned domain decomposition scheme for three-dimensional aerodynamic sensitivity analysis
NASA Technical Reports Server (NTRS)
Eleshaky, Mohammed E.; Baysal, Oktay
1993-01-01
A preconditioned domain decomposition scheme is introduced for the solution of the 3D aerodynamic sensitivity equation. This scheme uses the iterative GMRES procedure to solve the effective sensitivity equation of the boundary-interface cells in the sensitivity analysis domain-decomposition scheme. Excluding the dense matrices and the effect of cross terms between boundary-interfaces is found to produce an efficient preconditioning matrix.
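The core iterative building block here, GMRES applied to a left-preconditioned system M^{-1}A x = M^{-1}b, can be sketched compactly. The matrix, right-hand side, and Jacobi preconditioner below are illustrative stand-ins, not the aerodynamic sensitivity equations or the boundary-interface preconditioner of the paper.

```python
import numpy as np

def gmres_left(A, b, M_inv, tol=1e-10, maxit=50):
    # Minimal left-preconditioned GMRES (full, no restarts): Arnoldi on M^{-1}A.
    n = len(b)
    r0 = M_inv(b)                              # preconditioned residual, x0 = 0
    beta = np.linalg.norm(r0)
    Q = np.zeros((n, maxit + 1))
    H = np.zeros((maxit + 1, maxit))
    Q[:, 0] = r0 / beta
    for k in range(maxit):
        v = M_inv(A @ Q[:, k])
        for j in range(k + 1):                 # modified Gram-Schmidt
            H[j, k] = Q[:, j] @ v
            v -= H[j, k] * Q[:, j]
        H[k + 1, k] = np.linalg.norm(v)
        e1 = np.zeros(k + 2)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:k + 2, :k + 1], e1, rcond=None)
        if (np.linalg.norm(e1 - H[:k + 2, :k + 1] @ y) < tol * beta
                or H[k + 1, k] < 1e-14):
            break
        Q[:, k + 1] = v / H[k + 1, k]
    return Q[:, :k + 1] @ y

rng = np.random.default_rng(3)
n = 40
A = np.diag(np.arange(1.0, n + 1)) + 0.01 * rng.standard_normal((n, n))
b = rng.standard_normal(n)
M_inv = lambda v: v / np.diag(A)               # Jacobi preconditioner (assumption)
x = gmres_left(A, b, M_inv)
resid = np.linalg.norm(A @ x - b) / np.linalg.norm(b)
```

Because the preconditioned operator here is close to the identity, convergence takes only a few Arnoldi steps; the paper's point is analogous, that a cheap approximation of the boundary-interface operator (dropping dense blocks and cross terms) suffices as M.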
Ischemic Preconditioning and Placebo Intervention Improves Resistance Exercise Performance.
Marocolo, Moacir; Willardson, Jeffrey M; Marocolo, Isabela C; da Mota, Gustavo Ribeiro; Simão, Roberto; Maior, Alex S
2016-05-01
This study evaluated the effect of ischemic preconditioning (IPC) on resistance exercise performance in the lower limbs. Thirteen men participated in a randomized crossover design that involved 3 separate sessions (IPC, PLACEBO, and control). A 12-repetition maximum (12RM) load for the leg extension exercise was assessed through test and retest sessions before the first experimental session. The IPC session consisted of 4 cycles of 5 minutes of occlusion at 220 mm Hg of pressure alternated with 5 minutes of reperfusion at 0 mm Hg for a total of 40 minutes. The PLACEBO session consisted of 4 cycles of 5 minutes of cuff administration at 20 mm Hg of pressure alternated with 5 minutes of pseudo-reperfusion at 0 mm Hg for a total of 40 minutes. The occlusion and reperfusion phases were conducted alternately between the thighs, with subjects remaining seated. No ischemic pressure was applied during the control (CON) session and subjects sat passively for 40 minutes. Eight minutes after IPC, PLACEBO, or CON, subjects performed 3 sets of the leg extension to repetition maximum (2-minute rest between sets) with the predetermined 12RM load. Four minutes after the third set for each condition, blood lactate was assessed. The results showed that for the first set, the number of repetitions significantly increased for both the IPC (13.08 ± 2.11; p = 0.0036) and PLACEBO (13.15 ± 0.88; p = 0.0016) conditions, but not for the CON (11.88 ± 1.07; p > 0.99) condition. In addition, the IPC and PLACEBO conditions resulted in significantly more repetitions vs. the CON condition on the first set (p = 0.015; p = 0.007) and second set (p = 0.011; p = 0.019), but not on the third set (p = 0.68; p > 0.99). No difference (p = 0.465) was found in the fatigue index and lactate concentration between conditions. These results indicate that IPC and PLACEBO IPC may have small beneficial effects on repetition performance over a CON condition. Owing to potential for greater discomfort associated
The significance of the washout period in preconditioning.
Salie, Ruduwaan; Lochner, Amanda; Loubser, Dirk J
2017-01-24
Exposure of the heart to 5 min global ischaemia (I) followed by 5 min reperfusion (R) (ischaemic preconditioning, IPC) or transient Beta 2-adrenergic receptor (B2-AR) stimulation with formoterol (B2PC), followed by 5 min washout before index ischaemia, elicits cardioprotection against subsequent sustained ischaemia. Since the washout period during preconditioning is essential for subsequent cardioprotection, the aim of this study was to investigate the involvement of protein kinase A (PKA), reactive oxygen species (ROS), extracellular signal-regulated kinase (ERK), PKB/Akt, p38 MAPK and c-jun N-terminal kinase (JNK) during this period. Isolated perfused rat hearts were exposed to IPC (1x5min I / 5min R) or B2PC (1x5min Formoterol / 5min R) followed by 35 min regional ischaemia and reperfusion. Inhibitors for PKA (Rp-8CPT-cAMP)(16μM), ROS (NAC)(300μM), PKB (A-6730)(2.5μM), ERKp44/p42 (PD98059)(10μM), p38MAPK (SB239063)(1μM) or JNK (SP600125)(10μM) were administered for 5 minutes before 5 minutes global ischaemia / 5 min reperfusion (IPC) or for 5 minutes before and during administration of formoterol (B2PC) prior to regional ischaemia, reperfusion and infarct size (IS) determination. Hearts exposed to B2PC or IPC were freeze-clamped during the washout period for Western blot analysis of PKB, ERKp44/p42, p38MAPK and JNK. The PKA blocker abolished both B2PC and IPC, while NAC significantly increased IS of IPC but not of B2PC. Western blot analysis showed that ERKp44/p42 and PKB activation during washout after B2PC compared to IPC was significantly increased. IPC compared to B2PC showed significant p38MAPK and JNKp54/p46 activation. PKB and ERK inhibition or p38MAPK and JNK inhibition during the washout period of B2PC and IPC respectively, significantly increased IS. PKA activation before regional ischaemia is a prerequisite for cardioprotection in both B2PC and IPC. However, ROS was crucial only in IPC. Kinase activation during the washout phase of IPC and B2
Effects of Thermal Preconditioning on Tissue Susceptibility to Histotripsy.
Vlaisavljevich, Eli; Xu, Zhen; Arvidson, Alexa; Jin, Lifang; Roberts, William; Cain, Charles
2015-11-01
Histotripsy is a non-invasive ablation method that mechanically fractionates tissue by controlling acoustic cavitation. Previous work has revealed that tissue mechanical properties play a significant role in the histotripsy process, with stiffer tissues being more resistant to histotripsy-induced tissue damage. In this study, we propose a thermal pretreatment strategy to precondition tissues before histotripsy. We hypothesize that a thermal pretreatment can be used to alter tissue stiffness by modulating collagen composition, thus changing tissue susceptibility to histotripsy. More specifically, we hypothesize that tissues will soften and become more susceptible to histotripsy when preheated at ∼60°C because of collagen denaturation, but that tissues will rapidly stiffen and become less susceptible to histotripsy when preheated at ∼90°C because of collagen contraction. To test this hypothesis, a controlled temperature water bath was used to heat various ex vivo bovine tissues (tongue, artery, liver, kidney medulla, tendon and urethra). After heating, the Young's modulus of each tissue sample was measured using a tissue elastometer, and changes in tissue composition (i.e., collagen structure/density) were analyzed histologically. The susceptibility of tissues to histotripsy was investigated by treating the samples using a 750-kHz histotripsy transducer. Results revealed a decrease in stiffness and an increase in susceptibility to histotripsy for tissues (except urethra) preheated to 58°C. In contrast, preheating to 90°C increased tissue stiffness and reduced susceptibility to histotripsy for all tissues except tendon, which was significantly softened due to collagen hydrolysis into gelatin. On the basis of these results, a final set of experiments was conducted to determine the feasibility of using high-intensity focused ultrasound to provide the thermal pretreatment. Overall, the results of this study indicate the initial feasibility of a thermal
Key cognitive preconditions for the evolution of language.
Donald, Merlin
2017-02-01
Languages are socially constructed systems of expression, generated interactively in social networks, which can be assimilated by the individual brain as it develops. Languages co-evolved with culture, reflecting the changing complexity of human culture as it acquired the properties of a distributed cognitive system. Two key preconditions set the stage for the evolution of such cultures: a very general ability to rehearse and refine skills (evident early in hominin evolution in toolmaking), and the emergence of material culture as an external (to the brain) memory record that could retain and accumulate knowledge across generations. The ability to practice and rehearse skill provided immediate survival-related benefits in that it expanded the physical powers of early hominins, but the same adaptation also provided the imaginative substrate for a system of "mimetic" expression, such as found in ritual and pantomime, and in proto-words, which performed an expressive function somewhat like the home signs of deaf non-signers. The hominid brain continued to adapt to the increasing importance and complexity of culture as human interactions with material culture became more complex; above all, this entailed a gradual expansion in the integrative systems of the brain, especially those involved in the metacognitive supervision of self-performances. This supported a style of embodied mimetic imagination that improved the coordination of shared activities such as fire tending, but also in rituals and reciprocal mimetic games. The time-depth of this mimetic adaptation, and its role in both the construction and acquisition of languages, explains the importance of mimetic expression in the media, religion, and politics. Spoken language evolved out of voco-mimesis, and emerged long after the more basic abilities needed to refine skill and share intentions, probably coinciding with the common ancestor of sapient humans. Self-monitoring and self-supervised practice were necessary
Paracrine repercussions of preconditioning on angiogenesis and apoptosis of endothelial cells.
Raymond, Marc-André; Vigneault, Normand; Luyckx, Valerie; Hébert, Marie-Josée
2002-02-22
The mechanisms of cytoprotection conferred by stress preconditioning remain largely uncharacterized in endothelial cells (EC). We report that stress preconditioning of EC with serum starvation induces the release of soluble mediator(s) that confer resistance to apoptosis, increase proliferation, and enhance angiogenesis in a second set of "non-preconditioned" EC. Preconditioning was found to target specifically the mitochondrial control of apoptosis in EC with increased protein levels of Bcl-2, decreased protein levels of Bax, and decreased cytosolic release of cytochrome c. Regulators of apoptosis acting upstream and downstream of the mitochondria such as p53, cIAP-1, cIAP-2, and XIAP were not altered. Mediators classically associated with preconditioning in other cell types such as adenosine, opioids, and nitric oxide are not implicated in this cytoprotective loop. Blockade of protein kinase C-dependent signaling inhibited cytoprotection of EC. Further characterization of this paracrine pathway should provide insights into the molecular regulation of preconditioning in endothelial cells.
Turner, Ryan C.; Naser, Zachary J.; Lucke-Wold, Brandon P.; Logsdon, Aric F.; Vangilder, Reyna L.; Matsumoto, Rae R.; Huber, Jason D.; Rosen, Charles L.
2017-01-01
Aim: Over 7 million traumatic brain injuries (TBI) are reported each year in the United States. However, treatments and neuroprotection following TBI are limited because secondary injury cascades are poorly understood. Lipopolysaccharide (LPS) administration before controlled cortical impact can contribute to neuroprotection. However, the underlying mechanisms and whether LPS preconditioning confers neuroprotection against closed-head injuries remain unclear. Methods: The authors hypothesized that preconditioning with a low dose of LPS (0.2 mg/kg) would regulate glial reactivity and protect against diffuse axonal injury induced by weight drop. LPS was administered 7 days prior to TBI. LPS administration reduced locomotion, which recovered completely by the time of injury. Results: LPS preconditioning significantly reduced the post-injury gliosis response near the corpus callosum, possibly by downregulating the oncostatin M receptor. These novel findings demonstrate a protective role of LPS preconditioning against diffuse axonal injury. LPS preconditioning successfully prevented neurodegeneration near the corpus callosum, as measured by Fluoro-Jade B. Conclusion: Further work is required to elucidate whether LPS preconditioning confers long-term protection against behavioral deficits and to elucidate the biochemical mechanisms responsible for LPS-induced neuroprotective effects. PMID:28164149
Convergence Acceleration of the Navier-Stokes Equations Through Time-Derivative Preconditioning
NASA Technical Reports Server (NTRS)
Merkle, Charles L.; Venkateswaran, Sankaran; Deshpande, Manish
1996-01-01
Chorin's method of artificial compressibility is extended to both compressible and incompressible fluids by using physical arguments to define artificial fluid properties that make up a local preconditioning matrix. In particular, perturbation expansions are used to provide appropriate temporal derivatives for the equations of motion at both low speeds and low Reynolds numbers. These limiting forms are then combined into a single function that smoothly merges into the physical time derivatives at high speeds so that the equations are left unchanged at transonic, high Reynolds number conditions. The effectiveness of the resulting preconditioning procedures for the Navier-Stokes equations is demonstrated over wide ranges of speed and Reynolds number by means of stability results and computational solutions. Nevertheless, the preconditioned equations sometimes fail to provide a solution for applications for which the non-preconditioned equations converge. Often this is because the reduced dissipation in the preconditioned equations results in an unsteady solution, while the more dissipative non-preconditioned equations reach a steady state. Problems of this type represent a computational challenge; it is important to distinguish between non-convergence of algorithms and the non-existence of steady-state solutions.
Late cardiac preconditioning by exercise in dogs is mediated by mitochondrial potassium channels.
Parra, Víctor M; Macho, Pilar; Domenech, Raúl J
2010-09-01
We previously showed that exercise induces myocardial preconditioning in dogs and that early preconditioning is mediated through mitochondrial adenosine triphosphate-sensitive potassium channels. We decided to study whether late preconditioning by exercise is also mediated through these channels. Forty-eight dogs, surgically instrumented and trained to run daily, were randomly assigned to 4 groups: (1) Nonpreconditioned dogs: under anesthesia, the coronary artery was occluded for 1 hour and then reperfused for 4.5 hours. (2) Late preconditioned dogs: similar to group 1, but the dogs ran on the treadmill for 5 periods of 5 minutes each, 24 hours before the coronary occlusion. (3) Late preconditioned dogs plus 5-hydroxydecanoate (5HD): similar to group 2, but 5HD was administered before the coronary occlusion. (4) Nonpreconditioned dogs plus 5HD: similar to group 1, but 5HD was administered before the coronary occlusion. Infarct size (percent of the risk region) decreased by 56% with exercise (P < 0.05), and this effect was abolished with 5HD. 5HD by itself did not modify infarct size. Exercise did not induce myocardial ischemia, and hemodynamics during the ischemia-reperfusion period did not differ among groups. These effects were independent of changes in collateral flow to the ischemic region. We conclude that late cardiac preconditioning by exercise is mediated through mitochondrial adenosine triphosphate-sensitive potassium channels.
Atorvastatin preconditioning improves the forward blood flow in the no-reflow rats.
Shao, Liang; Zhang, Yong; Ma, Aiqun; Zhang, Ping; Wu, Dayin; Li, Wenzhu; Wang, Jue; Liu, Kun; Wang, Zhaohui
2014-02-01
Atorvastatin is not only an antilipemic but is also used as an anti-inflammatory medicine in heart disease. Our working hypothesis was that atorvastatin preconditioning could improve forward blood flow in no-reflow rats with associated inflammation. We found that two doses of atorvastatin preconditioning (20 and 5 mg/kg/day) alleviated the deterioration of early cardiac diastolic function in rats with inflammation, as detected by echocardiography and haemodynamics. This benefit derived from the effect of atorvastatin preconditioning on improving forward blood flow and preserving infarcted cardiomyocytes, as estimated by Thioflavin S and TTC staining in rats with myocardial ischemia/reperfusion. The improvement in forward blood flow was in turn ascribed to the reduction of microvascular microthrombi and myocardial fibrosis, observed by MSB and Masson's trichrome staining with atorvastatin preconditioning. Finally, we found that atorvastatin preconditioning reduced inflammatory factors, such as tumor necrosis factor-α and fibrinogen-like protein 2, both in myocardium and in mononuclear cells, which probably contribute to microcirculation dysfunction in no-reflow rats, as detected by immunohistochemistry staining, western blot, and ELISA, respectively. In conclusion, atorvastatin preconditioning could alleviate the deterioration of early cardiac diastolic function and improve forward blood flow in no-reflow rats, attributable to reduced TNF-α and fgl-2 expression.
Determinants of Delayed Preconditioning Against Myocardial Stunning in Chronically-Instrumented Pigs
2009-01-01
To test the hypothesis that a critical stenosis prevents delayed preconditioning against stunning, studies were conducted in pigs chronically-instrumented with occluders and segment-shortening crystals. In the setting of a critical stenosis, a preconditioning stimulus of repetitive brief occlusions resulted in infarction. Thereafter, a single 10-minute occlusion was used as the preconditioning stimulus. Delayed preconditioning against stunning was documented on subsequent days by the deficit-of-function following brief repetitive occlusions. In contrast to experiments in the naïve heart, the deficit-of-function improved on the day after a single 10-minute occlusion (from 60±14 to 24±6 arbitrary units, p=0.003), and similar improvement occurred when reperfusion was performed through a critical stenosis (32±6 units, p=0.02 vs. naïve and p=0.34 vs. no stenosis). Delayed preconditioning also reduced the frequency of ventricular fibrillation, and produced a 4-fold increase in both calcium-dependent and calcium-independent NOS activity. Thus, a critical stenosis did not prevent delayed preconditioning against stunning. PMID:20160844
do Amaral e Silva Müller, Gabrielle; Vandresen-Filho, Samuel; Tavares, Carolina Pereira; Menegatti, Angela C O; Terenzi, Hernán; Tasca, Carla Inês; Severino, Patricia Cardoso
2013-05-01
Preconditioning induced by N-methyl-D-aspartate (NMDA) has been used as a therapeutic tool against later neuronal insults. NMDA preconditioning affords neuroprotection against convulsions and cellular damage induced by the NMDA receptor agonist quinolinic acid (QA), with time-window dependence. This study aimed to evaluate the molecular alterations promoted by NMDA and to compare these alterations across different time periods related to the presence or absence of neuroprotection. Putative mechanisms related to NMDA preconditioning were evaluated via proteomic analysis in a time-window study. After administration of a subconvulsant, protective dose of NMDA to mice, hippocampi were removed (1, 24 or 72 h later) and total protein was analyzed by 2DE gels and identified by MALDI-TOF. Differential protein expression across the induction times of NMDA preconditioning was observed. In the hippocampus of protected mice (24 h), four proteins were found to be up-regulated: HSP70(B), aspartyl-tRNA synthetase, phosphatidylethanolamine binding protein and creatine kinase. Two other proteins, HSP70(A) and V-type proton ATPase, were found down-regulated. Proteomic analysis showed that the neuroprotection induced by NMDA preconditioning altered signaling pathways, cell energy maintenance, and protein synthesis and processing. These events may act to attenuate the excitotoxicity process during the activation of neuroprotection promoted by NMDA preconditioning.
Choi, Ji Ye; Park, Jeong-Min; Yi, Joo Mi; Leem, Sun-Hee; Kang, Tae-Hong
2015-09-08
The capacity of tumor cells for nucleotide excision repair (NER) is a major determinant of the efficacy of and resistance to DNA-damaging chemotherapeutics, such as cisplatin. Here, we demonstrate, using lesion-specific monoclonal antibodies, that NER capacity is enhanced in human lung cancer cells after preconditioning with DNA-damaging agents. Preconditioning of cells with a nonlethal dose of UV radiation facilitated the kinetics of subsequent cisplatin repair and vice versa. A dual-incision assay confirmed that the enhanced NER capacity was sustained for 2 days. Checkpoint activation by ATR kinase and expression of NER factors were not altered significantly by the preconditioning, whereas association of XPA, the rate-limiting factor in NER, with chromatin was accelerated. In preconditioned cells, SIRT1 expression was increased, and this resulted in a decrease in acetylated XPA. Inhibition of SIRT1 abrogated the preconditioning-induced predominant XPA binding to DNA lesions. Taken together, these data indicate that the upregulated NER capacity in preconditioned lung cancer cells is caused partly by an increased level of SIRT1, which modulates XPA sensitivity to DNA damage. This study provides insights into the molecular mechanism of chemoresistance through acquisition of enhanced DNA repair capacity in cancer cells.
Hu, Xiaowu; Yang, Junjie; Wang, Ying; Zhang, You; Ii, Masaaki; Shen, Zhenya; Hui, Jie
2015-01-01
Background: Cell-based angiogenesis is a promising treatment for ischemic diseases; however, survival of implanted cells is impaired by the ischemic microenvironment. In this study, mesenchymal stem cells (MSCs) for cell transplantation were preconditioned with trimetazidine (TMZ). We hypothesized that TMZ enhances the survival rate of MSCs under hypoxic stimuli through up-regulation of HIF1-α. Methods and results: Bone marrow-derived rat mesenchymal stem cells were preconditioned with 10 μM TMZ for 6 h. TMZ preconditioning of MSCs remarkably increased cell viability and the expression of HIF1-α and Bcl-2 when cells were under hypoxia/reoxygenation (H/R) stimuli, but the protective effects of TMZ were abolished after knockdown of HIF-1α. Three days after implantation of the cells into the peri-ischemic zone in a rat myocardial ischemia-reperfusion (I/R) injury model, survival of the TMZ-preconditioned MSCs was high. Furthermore, capillary density and cardiac function were significantly better in the rats implanted with TMZ-preconditioned MSCs 28 days after cell injection. Conclusions: TMZ preconditioning increased the survival rate of MSCs through up-regulation of HIF1-α, thus contributing to neovascularization and improved cardiac function in rats subjected to myocardial I/R injury. PMID:26629255
Kelty, Jonathan D; Noseworthy, Peter A; Feder, Martin E; Robertson, R Meldrum; Ramirez, Jan-Marino
2002-01-01
As with other tissues, exposing the mammalian CNS to nonlethal heat stress (i.e., thermal preconditioning) increases levels of heat-shock proteins (Hsps) such as Hsp70 and enhances the viability of neurons under subsequent stress. Using a medullary slice preparation from a neonatal mouse, including the site of the neural network that generates respiratory rhythm (the pre-Bötzinger complex), we show that thermal preconditioning has an additional fundamental effect, protection of synaptic function. Relative to a 30 degrees C baseline, initial thermal stress (40 degrees C) greatly increased the frequency of synaptic currents recorded without pharmacological manipulation by approximately 17-fold (p < 0.01) and of miniature postsynaptic currents (mPSCs) elicited by GABA (20-fold), glutamate (10-fold), and glycine (36-fold). Thermal preconditioning (15 min at 40 degrees C) eliminated the increase in frequency of overall synaptic transmission during acute thermal stress and greatly attenuated the frequency increases of GABAergic, glutamatergic, and glycinergic mPSCs (for each, p < 0.05). Moreover, without thermal preconditioning, incubation of slices in solution containing inducible Hsp70 (Hsp72) mimicked the effect of thermal preconditioning on the stress-induced release of neurotransmitter. That preconditioning and exogenous Hsp72 can affect and preserve normal physiological function has important therapeutic implications.
Caicedo, Alexander; Varon, Carolina; Hunyadi, Borbala; Papademetriou, Maria; Tachtsidis, Ilias; Van Huffel, Sabine
2016-01-01
Clinical data comprise a large number of synchronously collected biomedical signals that are measured at different locations. Deciphering the interrelationships of these signals can yield important information about their dependence, providing some useful clinical diagnostic data. For instance, by computing the coupling between Near-Infrared Spectroscopy signals (NIRS) and systemic variables, the status of the hemodynamic regulation mechanisms can be assessed. In this paper we introduce an algorithm for the decomposition of NIRS signals into additive components. The algorithm, SIgnal DEcomposition based on Oblique Subspace Projections (SIDE-ObSP), assumes that the measured NIRS signal is a linear combination of the systemic measurements, following the linear regression model y = Ax + ϵ. SIDE-ObSP decomposes the output such that each component in the decomposition represents the sole linear influence of one corresponding regressor variable. This decomposition scheme aims at providing a better understanding of the relation between NIRS and systemic variables, and to provide a framework for the clinical interpretation of regression algorithms, thereby facilitating their introduction into clinical practice. SIDE-ObSP combines oblique subspace projections (ObSP) with the structure of a mean average system in order to define adequate signal subspaces. To guarantee smoothness in the estimated regression parameters, as observed in normal physiological processes, we impose a Tikhonov regularization using a matrix differential operator. We evaluate the performance of SIDE-ObSP by using a synthetic dataset, and present two case studies in the field of cerebral hemodynamics monitoring using NIRS. In addition, we compare the performance of this method with other system identification techniques. In the first case study, data from 20 neonates during the first 3 days of life were used; here SIDE-ObSP decoupled the influence of changes in arterial oxygen saturation from the
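The additive decomposition described above can be sketched in a few lines: under the linear model y = Ax + ϵ, each component A_j x_j isolates the linear influence of one regressor on the measured signal. The minimal sketch below uses ordinary least squares on hypothetical synthetic regressors; the actual SIDE-ObSP method replaces this with oblique subspace projections and Tikhonov-regularized parameters, which are not reproduced here.

```python
import numpy as np

# Hypothetical systemic regressors standing in for measured signals
rng = np.random.default_rng(0)
n = 200
t = np.linspace(0.0, 10.0, n)
A = np.column_stack([np.sin(t), np.cos(0.5 * t), t / 10.0])
x_true = np.array([2.0, -1.0, 0.5])
y = A @ x_true + 0.05 * rng.standard_normal(n)

# Fit the linear model y = A x + eps by ordinary least squares
x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)

# Additive decomposition: component j is the sole linear influence
# of regressor j on the measured signal
components = [A[:, j] * x_hat[j] for j in range(A.shape[1])]
residual = y - A @ x_hat

# The components plus the residual reassemble the measurement exactly
reassembled = sum(components) + residual
```

The decomposition is exact by construction; what distinguishes methods like SIDE-ObSP is how the coefficients are estimated when regressors are correlated.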
Subspace Dimensionality: A Tool for Automated QC in Seismic Array Processing
NASA Astrophysics Data System (ADS)
Rowe, C. A.; Stead, R. J.; Begnaud, M. L.
2013-12-01
Because of the great resolving power of seismic arrays, the application of automated processing to array data is critically important in treaty verification work. A significant problem in array analysis is the inclusion of bad sensor channels in the beamforming process. We are testing an approach to automated, on-the-fly quality control (QC) to aid in the identification of poorly performing sensor channels prior to beam-forming in routine event detection or location processing. The idea stems from methods used for large computer servers, where monitoring traffic at enormous numbers of nodes is impractical on a node-by-node basis, so the dimensionality of the node traffic is instead monitored for anomalies that could represent malware, cyber-attacks or other problems. The technique relies upon the use of subspace dimensionality or principal components of the overall system traffic. The subspace technique is not new to seismology, but its most common application has been limited to comparing waveforms to an a priori collection of templates for detecting highly similar events in a swarm or seismic cluster. In the established template application, a detector functions in a manner analogous to waveform cross-correlation, applying a statistical test to assess the similarity of the incoming data stream to known templates for events of interest. In our approach, we seek not to detect matching signals; instead, we examine the signal subspace dimensionality in much the same way that the method addresses node traffic anomalies in large computer systems. Signal anomalies recorded on seismic arrays affect the dimensional structure of the array-wide time-series. We have shown previously that this observation is useful in identifying real seismic events, either by looking at the raw signal or derivatives thereof (entropy, kurtosis), but here we explore the effects of malfunctioning channels on the dimension of the data and its derivatives, and how to leverage this effect for
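The dimensionality idea can be illustrated with a small sketch (hypothetical data, not the authors' pipeline): channels recording a coherent wavefield plus modest noise yield a nearly one-dimensional sample covariance, and a malfunctioning channel inflates the number of principal components needed to explain the variance.

```python
import numpy as np

def effective_dimension(X, energy=0.95):
    """Number of principal components needed to capture the given
    fraction of total variance across array channels (rows of X)."""
    Xc = X - X.mean(axis=1, keepdims=True)
    s = np.linalg.svd(Xc, compute_uv=False)
    frac = np.cumsum(s**2) / np.sum(s**2)
    return int(np.searchsorted(frac, energy) + 1)

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 500)
coherent = np.sin(2.0 * np.pi * 5.0 * t)     # wavefield seen by every sensor

# Ten healthy channels: coherent signal plus modest sensor noise
healthy = coherent + 0.05 * rng.standard_normal((10, t.size))
# Same array with one malfunctioning channel emitting pure noise
faulty = healthy.copy()
faulty[-1] = 5.0 * rng.standard_normal(t.size)

d_healthy = effective_dimension(healthy)
d_faulty = effective_dimension(faulty)
```

A jump in the effective dimension flags the array for channel-level inspection before beams are formed.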
EEG Subspace Analysis and Classification Using Principal Angles for Brain-Computer Interfaces
NASA Astrophysics Data System (ADS)
Ashari, Rehab Bahaaddin
Brain-Computer Interfaces (BCIs) help paralyzed people who have lost some or all of their ability to communicate with and control the outside environment through loss of voluntary muscle control. Most BCIs are based on the classification of multichannel electroencephalography (EEG) signals recorded from users as they respond to external stimuli or perform various mental activities. The classification process is fraught with difficulties caused by electrical noise, signal artifacts, and nonstationarity. One approach to reducing the effects of similar difficulties in other domains is the use of principal angles between subspaces, which has been applied mostly to video sequences. This dissertation studies and examines different ideas using principal angles and subspace concepts. It introduces a novel mathematical approach for comparing sets of EEG signals for use in new BCI technology. The success of the presented results shows that principal angles are also a useful approach to the classification of EEG signals recorded during a BCI typing application. In this application, the appearance of a subject's desired letter is detected by identifying a P300 wave within a one-second window of EEG following the flash of a letter. Smoothing the signals before using them is the only preprocessing step implemented in this study. The smoothing process, based on minimizing the second derivative in time, is implemented to increase the classification accuracy instead of using a bandpass filter that relies on assumptions about the frequency content of EEG. This study examines four different ways of removing outliers based on the principal angles and shows that the outlier removal methods did not help in the presented situations. One of the concepts this dissertation focused on is the effect of the number of trials on the classification accuracies. The achievement of good classification results by using a small number of trials, starting from only two trials
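A minimal sketch of the underlying computation (standard linear algebra, not the dissertation's code): the principal angles between two subspaces are the arccosines of the singular values of Q_A^T Q_B, where Q_A and Q_B are orthonormal bases for the subspaces.

```python
import numpy as np

def principal_angles(A, B):
    """Principal angles (radians) between the column spaces of A and B."""
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    # Singular values of Qa^T Qb are the cosines of the principal angles
    s = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    return np.arccos(np.clip(s, -1.0, 1.0))

rng = np.random.default_rng(2)
X = rng.standard_normal((8, 3))
M = np.array([[1.0, 2.0, 0.0], [0.0, 1.0, 1.0], [1.0, 0.0, 1.0]])  # invertible

# Same column space under an invertible change of basis: all angles ~ 0
angles_same = principal_angles(X, X @ M)
# Orthogonal coordinate subspaces: all angles equal pi/2
E = np.eye(8)
angles_orth = principal_angles(E[:, :2], E[:, 2:4])
```

For classification, a trial's subspace can be compared against per-class subspaces and assigned to the class with the smallest angles; `scipy.linalg.subspace_angles` implements the same computation.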
2014-12-01
signals classification (MUSIC) subspace direction-finding algorithm are evaluated in this thesis. Additionally, two performance enhancements are presented: one that reduces the MUSIC computational load and one that provides a method of utilizing collector motion to resolve DOA ambiguities.
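For reference, the core of the MUSIC algorithm evaluated in theses of this kind can be sketched as follows: a textbook implementation for a uniform linear array with hypothetical parameters, not the thesis code.

```python
import numpy as np

def music_doa(X, n_sources, d=0.5):
    """MUSIC pseudospectrum over a degree grid for a uniform linear array.
    X: (sensors, snapshots) complex baseband data; d: spacing in wavelengths."""
    m = X.shape[0]
    R = X @ X.conj().T / X.shape[1]              # sample covariance
    _, vecs = np.linalg.eigh(R)                  # eigenvalues ascending
    En = vecs[:, : m - n_sources]                # noise subspace
    grid = np.linspace(-90.0, 90.0, 361)
    P = np.empty(grid.size)
    for i, theta in enumerate(np.deg2rad(grid)):
        a = np.exp(-2j * np.pi * d * np.arange(m) * np.sin(theta))
        # Pseudospectrum peaks where the steering vector is orthogonal
        # to the noise subspace
        P[i] = 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)
    return grid, P

# Two uncorrelated sources at -20 and +30 degrees, 8-element half-wavelength ULA
rng = np.random.default_rng(3)
m, n = 8, 400
angles = np.deg2rad([-20.0, 30.0])
A = np.exp(-2j * np.pi * 0.5 * np.outer(np.arange(m), np.sin(angles)))
S = (rng.standard_normal((2, n)) + 1j * rng.standard_normal((2, n))) / np.sqrt(2)
X = A @ S + 0.05 * (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n)))

grid, P = music_doa(X, n_sources=2)
# Crude peak picking: strongest grid points at least 5 degrees apart
peaks = []
for idx in np.argsort(P)[::-1]:
    if all(abs(grid[idx] - p) > 5.0 for p in peaks):
        peaks.append(grid[idx])
    if len(peaks) == 2:
        break
peaks.sort()
```

The eigendecomposition of the sample covariance dominates the cost, which is one reason computational-load reductions of the kind mentioned above are of practical interest.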
Kochen-Specker Theorem as a Precondition for Quantum Computing
NASA Astrophysics Data System (ADS)
Nagata, Koji; Nakamura, Tadao
2016-12-01
We study the relation between the Kochen-Specker theorem (the KS theorem) and quantum computing. The KS theorem rules out a realistic theory of the KS type. We consider the realistic theory of the KS type in which the results of measurements are either +1 or -1. We discuss an inconsistency between the realistic theory of the KS type and the controllability of quantum computing: we have to give up controllability if we accept the realistic theory of the KS type. We then discuss an inconsistency between the realistic theory of the KS type and the observability of quantum computing, using the double-slit experiment, the most basic experiment in quantum mechanics, which can serve as a simple detector for a Pauli observable. We cannot accept the realistic theory of the KS type to simulate the double-slit experiment in a significant specific case; the realistic theory of the KS type cannot depict a quantum detector. In short, we have to give up both observability and controllability if we accept the realistic theory of the KS type. Therefore, the KS theorem is a precondition for quantum computing, i.e., the realistic theory of the KS type should be ruled out.
Positive Indian Ocean Dipole events precondition southeast Australia bushfires
NASA Astrophysics Data System (ADS)
Cai, W.; Cowan, T.; Raupach, M.
2009-10-01
The devastating “Black Saturday” bushfire inferno in the southeast Australian state of Victoria in early February 2009 and the “Ash Wednesday” bushfires in February 1983 were both preceded by a positive Indian Ocean Dipole (pIOD) event. Is there a systematic pIOD linkage beyond these two natural disasters? We show that out of 21 significant bushfire seasons since 1950, 11 were preceded by a pIOD. During Victoria's wet season, particularly spring, a pIOD contributes to lower rainfall and higher temperatures, exacerbating the dry conditions and increasing the fuel load leading into summer. Consequently, pIODs are effective in preconditioning Victoria for bushfires, more so than El Niño events, as seen in the impact on soil moisture on interannual time scales and in multi-decadal changes since the 1950s. Given that the recent increase in pIOD occurrences is consistent with what is expected from global warming, an increased bushfire risk in the future is likely across southeast Australia.
Human sensory preconditioning in a flavor preference paradigm.
Privitera, Gregory J; Mulcahey, Colleen P; Orlowski, Cassandra M
2012-10-01
This experiment adapted a sensory preconditioning (SPC) procedure using human participants to determine if conditioning (Cond) to one flavor (the conditioned flavor) will enhance liking for another flavor (the SPC flavor) associated with it prior to training. Participants in one of three groups (N=40 per group) consumed and rated plain or sweetened cherry and grape kool-aids in four phases. In baseline and SPC phase, ratings for a plain cherry, grape, and cherry-grape mixture were similar. In training, one flavor was sweetened (SPC+Cond and Cond Only groups) or unsweetened (SPC Only group) and ratings increased only for the flavor that was sweetened. In test, Group SPC+Cond rated the conditioned flavor and the SPC flavor as more liked and tasting sweeter. Group Cond Only rated only the conditioned flavor as more liked and tasting sweeter. Group SPC Only showed no change in ratings from baseline to test. These are the first data to show SPC learning using a flavor preference paradigm with human participants.
Cardioprotection of ischemic preconditioning in rats involves upregulating adiponectin.
Wang, Hui; Wu, Wenjing; Duan, Jun; Ma, Ming; Kong, Wei; Ke, Yuannan; Li, Gang; Zheng, Jingang
2017-02-20
It has been reported that ischemic preconditioning (IPC) and adiponectin (APN) are cardioprotective in many cardiovascular disorders. However, whether APN mediates the effect of IPC on myocardial injury has not been elucidated. This study was conducted to investigate whether IPC affects myocardial ischemic injury by increasing APN expression. Male adult rats with cardiac knockdowns of APN and its receptors via intramyocardial small-interfering RNA injection were subjected to IPC and then myocardial infarction (MI) at 24 h post-IPC. Globular APN (gAd) was injected at 10 min before MI. APN mRNA and protein levels in myocardium, as well as the plasma APN concentration, were markedly high at 6 and 12 h after IPC. IPC ameliorated myocardial injury, as evidenced by improved cardiac function and a reduced infarct size. Compared with the control MI group, rats in the IPC + MI group had elevated left ventricular ejection fraction and fractional shortening, and a smaller MI size (P<0.05). However, these protective effects were attenuated in the absence of APN and APN receptors, with accompanying inhibition of AMP-activated protein kinase (AMPK) phosphorylation, and were restored by gAd treatment in wild-type rats, in which AMPK phosphorylation increased (P<0.05). Overall, our results suggest that the cardioprotective effects of IPC are partially due to upregulation of APN, and provide further insight into IPC-mediated signaling effects.
Sirtinol abrogates late phase of cardiac ischemia preconditioning in rats.
Safari, Fereshteh; Shekarforoosh, Shahnaz; Hashemi, Tahmineh; Namvar Aghdash, Simin; Fekri, Asefeh; Safari, Fatemeh
2016-09-27
The aim of this study was to investigate the effect of sirtinol, an inhibitor of sirtuin NAD-dependent histone deacetylases, on myocardial ischemia reperfusion injury following early and late ischemia preconditioning (IPC). Rats underwent sustained ischemia and reperfusion (IR) alone or preceded by early or late IPC. Sirtinol (S) was administered before IPC. Arrhythmias were evaluated based on the Lambeth model. Infarct size (IS) was measured using triphenyltetrazolium chloride staining. The transcription level of antioxidant-coding genes was assessed by real-time PCR. In the early and late IPC groups, IS and the number of arrhythmias were significantly decreased (P < 0.05 and P < 0.01 vs IR, respectively). In S + early IPC, the incidence of arrhythmias and IS were not different from the early IPC group. However, in S + late IPC the IS differed from the late IPC group (P < 0.05). In late IPC but not early IPC, transcription levels of catalase (P < 0.01) and Mn-SOD (P < 0.05) increased, although this upregulation was not significant in the S + late IPC group. Our results are consistent with the notion that different mechanisms are responsible for early and late IPC. In addition, sirtuin NAD-dependent histone deacetylases may be implicated in late IPC-induced cardioprotection.
Cerenkov luminescence tomography based on preconditioning orthogonal matching pursuit
NASA Astrophysics Data System (ADS)
Liu, Haixiao; Hu, Zhenhua; Wang, Kun; Tian, Jie; Yang, Xin
2015-03-01
Cerenkov luminescence imaging (CLI) is a novel optical imaging method that has been proposed as a potential substitute for traditional radionuclide imaging such as positron emission tomography (PET) and single-photon emission computed tomography (SPECT). This imaging method inherits the high sensitivity of nuclear medicine and the low cost of optical molecular imaging. To obtain the depth information of the radioactive isotope, Cerenkov luminescence tomography (CLT) is established and the 3D distribution of the isotope is reconstructed. However, because of strong absorption and scattering, reconstruction of the CLT sources reduces to an ill-posed linear system that is difficult to solve. In this work, the sparse nature of the light source was taken into account and the preconditioning orthogonal matching pursuit (POMP) method was established to effectively reduce the ill-posedness and obtain better reconstruction accuracy. To demonstrate the accuracy and speed of this algorithm, a heterogeneous numerical phantom experiment and an in vivo mouse experiment were conducted. Both the simulation result and the mouse experiment showed that our reconstruction method provides more accurate results than the traditional Tikhonov regularization method and the ordinary orthogonal matching pursuit (OMP) method. Our reconstruction method will provide technical support for biological applications of Cerenkov luminescence.
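The ordinary OMP baseline mentioned above can be sketched as follows; the preconditioning step that distinguishes POMP is specific to the paper and is not reproduced here. The dictionary, problem sizes, and support are illustrative.

```python
import numpy as np

def omp(A, y, sparsity, tol=1e-10):
    """Orthogonal matching pursuit: greedy sparse approximation of y ~ A x."""
    x = np.zeros(A.shape[1])
    support = []
    r = y.copy()
    for _ in range(sparsity):
        # Pick the column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ r)))
        if j not in support:
            support.append(j)
        # Least-squares refit on the current support, then update the residual
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef
        if np.linalg.norm(r) < tol:
            break
    x[support] = coef
    return x

# Noiseless toy problem: recover a 3-sparse source vector from 80 measurements
rng = np.random.default_rng(4)
m, n = 80, 120
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)           # unit-norm dictionary columns
x_true = np.zeros(n)
x_true[[7, 42, 99]] = [1.5, -2.0, 0.8]
y = A @ x_true
x_hat = omp(A, y, sparsity=3)
```

In tomographic settings the system matrix comes from a light-propagation model rather than random draws, and its poor conditioning is exactly what motivates preconditioning the greedy selection.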
Ischemic Preconditioning Enhances Muscle Endurance during Sustained Isometric Exercise.
Tanaka, D; Suga, T; Tanaka, T; Kido, K; Honjo, T; Fujita, S; Hamaoka, T; Isaka, T
2016-07-01
Ischemic preconditioning (IPC) enhances whole-body exercise endurance. However, it is poorly understood whether the beneficial effects originate from systemic (e.g., cardiovascular) or peripheral (e.g., skeletal muscle) adaptations. The present study examined the effects of IPC on local muscle endurance during fatiguing isometric exercise. 12 male subjects performed sustained isometric unilateral knee-extension exercise at 20% of maximal voluntary contraction until failure. Prior to the exercise, subjects completed IPC or control (CON) treatments. During the exercise trial, electromyography activity and near-infrared spectroscopy-derived deoxygenation in skeletal muscle were continuously recorded. Endurance time to task failure was significantly longer in IPC than in CON (mean±SE; 233±9 vs. 198±9 s, P<0.001). Quadriceps electromyography activity was not significantly different between IPC and CON. In contrast, deoxygenation dynamics in the quadriceps vastus lateralis muscle were significantly faster in IPC than in CON (27.1±3.4 vs. 35.0±3.6 s, P<0.01). The present study found that IPC can enhance muscular endurance during fatiguing isometric exercise. Moreover, IPC accelerated muscle deoxygenation dynamics during the exercise. Therefore, we suggest that the beneficial effects of IPC on exercise performance may originate in enhanced mitochondrial metabolism in skeletal muscle.
Prolonged preconditioning with natural honey against myocardial infarction injuries.
Eteraf-Oskouei, Tahereh; Shaseb, Elnaz; Ghaffary, Saba; Najafi, Moslem
2013-07-01
Potential protective effects of prolonged preconditioning with natural honey against myocardial infarction were investigated. Male Wistar rats were pre-treated with honey (1%, 2% and 4%) for 45 days; their hearts were then isolated, mounted on a Langendorff apparatus, and perfused with a modified Krebs-Henseleit solution during 30 min of regional ischemia followed by 120 min of reperfusion. Two important indices of ischemia-induced damage (infarct size and arrhythmias) were determined by computerized planimetry and ECG analysis, respectively. Honey (1% and 2%) reduced infarct size from 23±3.1% (control) to 9.7±2.4 and 9.5±2.3%, respectively (P<0.001). During ischemia, honey (1%) significantly reduced (P<0.05) the number and duration of episodes of ventricular tachycardia (VT). Honey (1% and 2%) also significantly decreased the number of ventricular ectopic beats (VEBs). In addition, the incidence and duration of reversible ventricular fibrillation (Rev VF) were lowered by honey 2% (P<0.05). During reperfusion, honey produced significant reductions in the incidence of VT, total and reversible VF, and the duration and number of VT episodes. The results showed cardioprotective effects of prolonged pre-treatment of rats with honey against myocardial infarction. The antioxidants and energy sources (glucose and fructose) in honey, together with improved hemodynamic function, may contribute to these protective effects.
Cho, Soojin; Park, Jong-Woong; Sim, Sung-Han
2015-04-08
Wireless sensor networks (WSNs) enable a new paradigm for structural identification and monitoring of civil infrastructure. Conventional structural monitoring systems based on wired sensors and centralized data acquisition systems are costly to install and maintain. WSNs have emerged as a technology that can overcome such difficulties, making deployment of a dense array of sensors on large civil structures both feasible and economical. However, as opposed to wired sensor networks, in which centralized data acquisition and processing is common practice, WSNs require decentralized computing algorithms to reduce data transmission due to the limitations of wireless communication. In this paper, the stochastic subspace identification (SSI) technique is selected for system identification, and SSI-based decentralized system identification (SDSI) is proposed to be implemented in a WSN composed of Imote2 wireless sensors that measure acceleration. The SDSI is tightly scheduled in the hierarchical WSN, and its performance is experimentally verified in a laboratory test using a 5-story shear building model.
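The SSI machinery at the heart of SDSI can be illustrated in a few lines: estimate output correlations, factor a block Toeplitz matrix of them by SVD to obtain the observability matrix, and exploit its shift invariance to recover the state matrix and modal frequencies. The following is a minimal single-channel, covariance-driven sketch, not the decentralized Imote2 implementation; the function name and parameter choices are illustrative assumptions.

```python
import numpy as np

def ssi_cov(y, i, order, dt):
    """Covariance-driven stochastic subspace identification (minimal sketch).
    y: single-channel acceleration record; i: number of block rows;
    order: model order; dt: sampling interval in seconds."""
    n = len(y)
    # estimated output correlations R_k at lags 0 .. 2i-1
    R = [y[k:] @ y[:n - k] / (n - k) for k in range(2 * i)]
    # block Toeplitz matrix of correlations, which factors as O * C
    T = np.array([[R[a + i - b - 1] for b in range(i)] for a in range(i)])
    U, s, _ = np.linalg.svd(T)
    O = U[:, :order] * np.sqrt(s[:order])          # observability matrix
    # shift invariance of O yields the state matrix A by least squares
    A = np.linalg.lstsq(O[:-1], O[1:], rcond=None)[0]
    lam = np.linalg.eigvals(A)
    return np.sort(np.abs(np.log(lam)) / (2 * np.pi * dt))  # frequencies, Hz
```

Applied to a clean sinusoidal response record, the identified frequencies recover the excitation frequency; multi-channel and decentralized variants stack sensor outputs into block rows instead of scalars.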
Joint DOA and multi-pitch estimation based on subspace techniques
NASA Astrophysics Data System (ADS)
Xi Zhang, Johan; Christensen, Mads Græsbøll; Jensen, Søren Holdt; Moonen, Marc
2012-12-01
In this article, we present a novel method for high-resolution joint direction-of-arrival (DOA) and multi-pitch estimation based on subspaces decomposed from a spatio-temporal data model. The resulting estimator is termed multi-channel harmonic MUSIC (MC-HMUSIC). It is capable of resolving sources under adverse conditions that defeat traditional methods, for example when multiple sources impinge on the array from approximately the same angle or with similar pitches. The effectiveness of the method is demonstrated on simulated anechoic array recordings with source signals from real recorded speech and a clarinet. Furthermore, a statistical evaluation with synthetic signals shows increased robustness in DOA and fundamental frequency estimation, as compared with a state-of-the-art reference method.
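For intuition, the conventional narrowband MUSIC DOA estimator for a uniform linear array, the building block that MC-HMUSIC extends with harmonic (pitch) structure, can be sketched as follows. This is a plain-MUSIC illustration under stated assumptions, not the authors' MC-HMUSIC; the function name and grid choices are illustrative.

```python
import numpy as np

def music_doa(X, n_sources, d=0.5):
    """Narrowband MUSIC DOA estimation for a uniform linear array (sketch).
    X: (n_sensors, n_snapshots) complex baseband snapshots;
    d: sensor spacing in wavelengths."""
    m, n = X.shape
    R = X @ X.conj().T / n                       # sample spatial covariance
    _, V = np.linalg.eigh(R)                     # eigenvalues in ascending order
    En = V[:, :m - n_sources]                    # noise-subspace eigenvectors
    angles = np.linspace(-90.0, 90.0, 721)       # 0.25-degree scan grid
    k = np.arange(m)[:, None]
    A = np.exp(-2j * np.pi * d * k * np.sin(np.radians(angles)))  # steering matrix
    # pseudospectrum: large where steering vectors are orthogonal to noise subspace
    P = 1.0 / np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)
    return angles, P
```

Peaks of the pseudospectrum `P` over the angle grid give the DOA estimates; the harmonic extension replaces the single steering vector with a stacked set at integer multiples of a candidate fundamental frequency.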
Subspace-based identification of a nonlinear spacecraft in the time and frequency domains
NASA Astrophysics Data System (ADS)
Noël, J. P.; Marchesiello, S.; Kerschen, G.
2014-02-01
The objective of the present paper is to address the identification of a strongly nonlinear satellite structure. To this end, two nonlinear subspace identification methods formulated in the time and frequency domains are exploited, referred to as the TNSI and FNSI methods, respectively. The modal parameters of the underlying linear structure and the coefficients of the nonlinearities will be estimated by these two approaches based on periodic random measurements. Their respective merits will also be discussed in terms of both accuracy and computational efficiency and the use of stabilisation diagrams in nonlinear system identification will be introduced. The application of interest is the SmallSat spacecraft developed by EADS-Astrium, which possesses an impact-type nonlinear device consisting of eight mechanical stops limiting the motion of an inertia wheel mounted on an elastomeric interface. This application is challenging for several reasons including the non-smooth nature of the nonlinearities, high modal density and high non-proportional damping.
Approximation from Shift-Invariant Subspaces of L^2(R^d)
1991-07-06
[Abstract excerpt garbled in extraction; only fragments are recoverable.] ... until we have introduced some additional terminology and stated our main results. Associated to any closed subspace S of L^2(R^d) and f in L^2(R^d), ... having proven (2.1) suggests that the calculation of integrals and inner products involving functions from S(Φ) should be taken over the torus T^d.
Optimized virtual orbital subspace for faster GW calculations in localized basis
NASA Astrophysics Data System (ADS)
Bruneval, Fabien
2016-12-01
The popularity of the GW approximation to the self-energy for accessing the quasiparticle energies of molecules is constantly increasing. Like other methods that address electronic correlation, the GW self-energy unfortunately converges very slowly with respect to basis-set size, which precludes the calculation of accurate quasiparticle energies for large molecules. Here we propose a method to mitigate this issue that relies on two steps: (i) the definition of a reduced virtual orbital subspace, based on a much smaller basis set; (ii) accounting for the remainder through the simpler one-ring approximation to the self-energy. We assess the quality of the corrected quasiparticle energies for simple molecules, and finally we show an application to large graphene chunks to demonstrate the numerical efficiency of the scheme.
Kerfriden, P.; Schmidt, K.M.; Rabczuk, T.; Bordas, S.P.A.
2013-01-01
We propose to identify process zones in heterogeneous materials by tailored statistical tools. The process zone is redefined as the part of the structure where the random process cannot be correctly approximated in a low-dimensional deterministic space. Such a low-dimensional space is obtained by a spectral analysis performed on pre-computed solution samples. A greedy algorithm is proposed to identify both process zone and low-dimensional representative subspace for the solution in the complementary region. In addition to the novelty of the tools proposed in this paper for the analysis of localised phenomena, we show that the reduced space generated by the method is a valid basis for the construction of a reduced order model. PMID:27069423
Initial Results in Power System Identification from Injected Probing Signals Using a Subspace Method
Zhou, Ning; Pierre, John W.; Hauer, John F.
2006-08-01
In this paper, the authors use the Numerical algorithm for Subspace State Space System IDentification (N4SID) to extract dynamic parameters from phasor measurements collected on the western North American Power Grid. The data were obtained during tests on June 7, 2000, and they represent wide-area response to several kinds of probing signals, including Low-Level Pseudo-Random Noise (LLPRN) and Single-Mode Square Wave (SMSW) injected at the Celilo terminal of the Pacific HVDC Intertie (PDCI). An identified model is validated using a cross-validation method. Also, the obtained electromechanical modes are compared with the results from Prony analysis of a ringdown and with signal analysis of ambient data measured under similar operating conditions. The consistent results show that methods in this class can be highly effective even when the probing signal is small.
Accelerating the weighted histogram analysis method by direct inversion in the iterative subspace
Zhang, Cheng; Lai, Chun-Liang; Pettitt, B. Montgomery
2016-01-01
The weighted histogram analysis method (WHAM) for free energy calculations is a valuable tool for producing free energy differences with minimal error. Given multiple simulations, WHAM obtains from the distribution overlaps the optimal statistical estimator of the density of states, from which the free energy differences can be computed. The WHAM equations are often solved by an iterative procedure. In this work, we use a well-known linear algebra algorithm that allows for more rapid convergence to the solution. We find that the computational complexity of the iterative solution to WHAM and the closely related multiple Bennett acceptance ratio (MBAR) method can be improved by using the method of direct inversion in the iterative subspace (DIIS). We give examples from a lattice model, a simple liquid and an aqueous protein solution. PMID:27453632
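DIIS accelerates a generic fixed-point iteration x <- g(x) by extrapolating over a short history of iterates so as to minimize the linearized residual g(x) - x; applied to the WHAM self-consistency equations, each iterate would be the vector of window free energies. Below is a minimal generic sketch; the function name, history length, and tolerances are assumptions, not the authors' code.

```python
import numpy as np

def diis_solve(g, x0, max_hist=5, tol=1e-10, max_iter=200):
    """Accelerate the fixed-point iteration x <- g(x) by DIIS (Pulay mixing).
    Stores recent iterates and residuals and extrapolates with coefficients
    that minimize the residual norm subject to sum(c) = 1."""
    xs, rs = [], []
    x = np.asarray(x0, float)
    for _ in range(max_iter):
        gx = g(x)
        r = gx - x                          # residual of the fixed-point equation
        if np.linalg.norm(r) < tol:
            return x
        xs.append(gx); rs.append(r)
        if len(xs) > max_hist:              # keep a short history
            xs.pop(0); rs.pop(0)
        n = len(rs)
        # DIIS least-squares problem with a Lagrange multiplier row/column
        B = np.ones((n + 1, n + 1)); B[-1, -1] = 0.0
        B[:n, :n] = np.array([[ri @ rj for rj in rs] for ri in rs])
        rhs = np.zeros(n + 1); rhs[-1] = 1.0
        c = np.linalg.lstsq(B, rhs, rcond=None)[0][:n]
        x = sum(ci * xi for ci, xi in zip(c, xs))   # extrapolated iterate
    return x
```

For a contraction such as g(x) = cos(x), the extrapolated sequence reaches the fixed point in far fewer iterations than plain substitution.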
Bischof, C.; Sun, X.; Huss-Lederman, S.; Tsao, A.; Turnbull, T.
1994-06-01
In this paper, we discuss work in progress on a complete eigensolver based on the Invariant Subspace Decomposition Algorithm for dense symmetric matrices (SYISDA). We describe a recently developed acceleration technique that substantially reduces the overall work required by this algorithm and review the algorithmic highlights of a distributed-memory implementation of this approach. These include a fast matrix-matrix multiplication algorithm, a new approach to parallel band reduction and tridiagonalization, and a harness for coordinating the divide-and-conquer parallelism in the problem. We present performance results for the dominant kernel, dense matrix multiplication, as well as for the overall SYISDA implementation on the Intel Touchstone Delta and the Intel Paragon.
Engineering of a quantum state by time-dependent decoherence-free subspaces
NASA Astrophysics Data System (ADS)
Wu, S. L.
2015-03-01
We apply the time-dependent decoherence-free subspace theory to a Markovian open quantum system in order to present a proposal for a quantum-state engineering program. By quantifying the purity of the quantum state, we verify that the quantum-state engineering process designed via our method is completely unitary for any total engineering time. Even when the controls on the open quantum system are not perfect, the asymptotic purity remains robust. Owing to its ability to completely resist decoherence and its lack of restrictions on the total engineering time, our proposal is suitable for multitask quantum-state engineering programs. This proposal is therefore not only useful for realizing quantum-state engineering experimentally; it may also help in building practical quantum simulation and quantum information devices.
Maronidis, Anastasios; Bolis, Dimitris; Tefas, Anastasios; Pitas, Ioannis
2011-10-01
In this paper, the robustness of appearance-based subspace learning techniques to geometrical transformations of the images is explored. A number of such techniques are presented and tested using four facial expression databases. A strong correlation between recognition accuracy and image registration error has been observed. Although it is common knowledge that appearance-based methods are sensitive to image registration errors, no systematic experiments have been reported in the literature. Based on these experiments, enriching the training set with translated, scaled and rotated images is proposed to counter the low robustness of these techniques in facial expression recognition. Moreover, person-dependent training is shown to be much more accurate for facial expression recognition than generic learning.
Bayesian estimation of Karhunen-Loève expansions; A random subspace approach
NASA Astrophysics Data System (ADS)
Chowdhary, Kenny; Najm, Habib N.
2016-08-01
One of the most widely-used procedures for dimensionality reduction of high dimensional data is Principal Component Analysis (PCA). More broadly, low-dimensional stochastic representation of random fields with finite variance is provided via the well known Karhunen-Loève expansion (KLE). The KLE is analogous to a Fourier series expansion for a random process, where the goal is to find an orthogonal transformation for the data such that the projection of the data onto this orthogonal subspace is optimal in the L2 sense, i.e., which minimizes the mean square error. In practice, this orthogonal transformation is determined by performing an SVD (Singular Value Decomposition) on the sample covariance matrix or on the data matrix itself. Sampling error is typically ignored when quantifying the principal components, or, equivalently, basis functions of the KLE. Furthermore, it is exacerbated when the sample size is much smaller than the dimension of the random field. In this paper, we introduce a Bayesian KLE procedure, allowing one to obtain a probabilistic model on the principal components, which can account for inaccuracies due to limited sample size. The probabilistic model is built via Bayesian inference, from which the posterior becomes the matrix Bingham density over the space of orthonormal matrices. We use a modified Gibbs sampling procedure to sample on this space and then build probabilistic Karhunen-Loève expansions over random subspaces to obtain a set of low-dimensional surrogates of the stochastic process. We illustrate this probabilistic procedure with a finite dimensional stochastic process inspired by Brownian motion.
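The classical, sampling-error-blind step the paper starts from — estimating the KLE/PCA basis by an SVD of the centered data matrix — can be sketched as follows; the Bayesian treatment via the matrix Bingham posterior and Gibbs sampling is not attempted here. Function and variable names are illustrative assumptions.

```python
import numpy as np

def kle_from_samples(Y, n_modes):
    """Empirical Karhunen-Loeve expansion via SVD of the centered data matrix
    (the point estimate that ignores sampling error). Sketch only.
    Y: (n_samples, n_points) realizations of the random field."""
    mean = Y.mean(axis=0)
    U, s, Vt = np.linalg.svd(Y - mean, full_matrices=False)
    n = Y.shape[0]
    eigvals = s[:n_modes] ** 2 / (n - 1)       # KL eigenvalues (mode variances)
    modes = Vt[:n_modes]                       # orthonormal spatial basis functions
    coeffs = U[:, :n_modes] * s[:n_modes]      # KL coordinates of each sample
    return mean, eigvals, modes, coeffs
```

The Bayesian procedure in the paper would replace the single `modes` matrix with posterior samples of orthonormal matrices, propagating the uncertainty that this SVD point estimate discards when n_samples is small.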
Huang, Jian; Yuen, Pong C; Chen, Wen-Sheng; Lai, Jian Huang
2007-08-01
This paper addresses the problem of automatically tuning multiple kernel parameters for the kernel-based linear discriminant analysis (LDA) method. The kernel approach has been proposed to solve face recognition problems under complex distribution by mapping the input space to a high-dimensional feature space. Some recognition algorithms such as the kernel principal components analysis, kernel Fisher discriminant, generalized discriminant analysis, and kernel direct LDA have been developed in the last five years. The experimental results show that the kernel-based method is a good and feasible approach to tackle the pose and illumination variations. One of the crucial factors in the kernel approach is the selection of kernel parameters, which highly affects the generalization capability and stability of the kernel-based learning methods. In view of this, we propose an eigenvalue-stability-bounded margin maximization (ESBMM) algorithm to automatically tune the multiple parameters of the Gaussian radial basis function kernel for the kernel subspace LDA (KSLDA) method, which is developed based on our previously developed subspace LDA method. The ESBMM algorithm improves the generalization capability of the kernel-based LDA method by maximizing the margin maximization criterion while maintaining the eigenvalue stability of the kernel-based LDA method. An in-depth investigation on the generalization performance on pose and illumination dimensions is performed using the YaleB and CMU PIE databases. The FERET database is also used for benchmark evaluation. Compared with the existing PCA-based and LDA-based methods, our proposed KSLDA method, with the ESBMM kernel parameter estimation algorithm, gives superior performance.
Hayakawa, Kentaro; Okazaki, Rentaro; Morioka, Kazuhito; Nakamura, Kozo; Tanaka, Sakae; Ogata, Toru
2014-12-01
The inflammatory response following spinal cord injury (SCI) has both harmful and beneficial effects; however, it can be modulated for therapeutic benefit. Endotoxin/lipopolysaccharide (LPS) preconditioning, a well-established method for modifying the immune reaction, has been shown to attenuate damage induced by stroke and brain trauma in rodent models. Although such effects likely are conveyed by tissue-repairing functions of the inflammatory response, the mechanisms that control the effects have not yet been elucidated. The present study preconditioned C57BL6/J mice with 0.05 mg/kg of LPS 48 hr before inducing contusion SCI to investigate the effect of LPS preconditioning on the activation of macrophages/microglia. We found that LPS preconditioning promotes the polarization of M1/M2 macrophages/microglia toward an M2 phenotype in the injured spinal cord on quantitative real-time polymerase chain reaction, enzyme-linked immunosorbent assay, and immunohistochemical analyses. Flow cytometric analyses reveal that LPS preconditioning facilitates M2 activation in resident microglia but not in infiltrating macrophages. Augmented M2 activation was accompanied by vascularization around the injured lesion, resulting in improvement in both tissue reorganization and functional recovery. Furthermore, we found that M2 activation induced by LPS preconditioning is regulated by interleukin-10 gene expression, which was preceded by the transcriptional activation of interferon regulatory factor (IRF)-3, as demonstrated by Western blotting and an IRF-3 binding assay. Altogether, our findings demonstrate that LPS preconditioning has a therapeutic effect on SCI through the modulation of M1/M2 polarization of resident microglia. The present study suggests that controlling M1/M2 polarization through endotoxin signal transduction could become a promising therapeutic strategy for various central nervous system diseases. © 2014 Wiley Periodicals, Inc.
Pinto, Mauro Cunha Xavier; Lima, Isabel Vieira de Assis; da Costa, Flávia Lage Pessoa; Rosa, Daniela Valadão; Mendes-Goulart, Vânia Aparecida; Resende, Rodrigo Ribeiro; Romano-Silva, Marco Aurélio; de Oliveira, Antônio Carlos Pinheiro; Gomez, Marcus Vinícius; Gomez, Renato Santiago
2015-02-01
Brain preconditioning is a protective mechanism that can be activated by sub-lethal stimulation of NMDA receptors (NMDAR) and used to achieve neuroprotection in models of stroke and neurodegenerative disease. Inhibitors of glycine transporter type 1 (GlyT1) modulate glutamatergic neurotransmission through the NMDAR, suggesting an alternative therapeutic strategy for brain preconditioning. The aim of this work was to evaluate the effects of brain preconditioning induced by NFPS, a GlyT1 inhibitor, against NMDA-induced excitotoxicity in mouse hippocampus, and to study its neurochemical mechanisms. C57BL/6 mice (male, 10 weeks old) were preconditioned by intraperitoneal injection of NFPS at doses of 1.25, 2.5 or 5.0 mg/kg, 24 h before intrahippocampal injection of NMDA. Neuronal death was evaluated by Fluoro-Jade C staining, and neurochemical parameters were evaluated by gas chromatography-mass spectrometry, scintillation spectrometry and western blot. We observed that NFPS preconditioning reduced neuronal death in the CA1 region of hippocampus submitted to NMDA-induced excitotoxicity. Amino acid (glycine and glutamate) uptake and content were increased in the hippocampus of animals treated with NFPS 5.0 mg/kg, which was associated with increased expression of the type-2 glycine transporter (GlyT2) and of glutamate transporters (EAAT1, EAAT2 and EAAT3). The expression of GlyT1 was reduced in animals treated with NFPS. Interestingly, preconditioning reduced expression of GluN2B subunits of the NMDAR, but did not change the expression of GluN1 or GluN2A at any tested dose. Our study suggests that NFPS preconditioning induces resistance against excitotoxicity, which is associated with neurochemical changes and reduced expression of GluN2B-containing NMDARs.
2013-01-01
Introduction Mesenchymal stem cells (MSCs) have the potential for treatment of diabetic cardiomyopathy; however, the repair capability of MSCs declines with age and disease. MSCs from diabetic animals exhibit impaired survival, proliferation, and differentiation and therefore require a strategy to improve their function. The aim of the study was to develop a preconditioning strategy to augment the ability of MSCs from diabetes patients to repair the diabetic heart. Methods Diabetes was induced in C57BL/6 mice (6 to 8 weeks) with streptozotocin injections (55 mg/kg) for 5 consecutive days. MSCs isolated from diabetic animals were preconditioned with medium from cardiomyocytes exposed to oxidative stress and high glucose (HG/H-CCM). Results Gene expression of VEGF, ANG-1, GATA-4, Nkx2.5, MEF2c, PCNA, and eNOS was upregulated after preconditioning with HG/H-CCM, as evidenced by reverse transcriptase/polymerase chain reaction (RT-PCR). Concurrently, increased AKT phosphorylation, proliferation, angiogenic ability, and reduced levels of apoptosis were observed in HG/H-CCM-preconditioned diabetic MSCs compared with nontreated controls. HG/H-CCM-preconditioned diabetic-mouse-derived MSCs (dmMSCs) were transplanted in diabetic animals and demonstrated increased homing concomitant with augmented heart function. Gene expression of angiogenic and cardiac markers was significantly upregulated in conjunction with paracrine factors (IGF-1, HGF, SDF-1, FGF-2) and, in addition, reduced fibrosis, apoptosis, and increased angiogenesis were observed in diabetic hearts 4 weeks after transplantation of preconditioned dmMSCs compared with hearts with nontreated diabetic MSCs. Conclusions Preconditioning with HG/H-CCM enhances survival, proliferation, and the angiogenic ability of dmMSCs, augmenting their ability to improve function in a diabetic heart. PMID:23706645
Calik, Michael W; Shankarappa, Sahadev A; Langert, Kelly A; Stubbs, Evan B
2015-01-01
A short-term exposure to moderately intense physical exercise affords a novel measure of protection against autoimmune-mediated peripheral nerve injury. Here, we investigated the mechanism by which forced exercise attenuates the development and progression of experimental autoimmune neuritis (EAN), an established animal model of Guillain-Barré syndrome. Adult male Lewis rats remained sedentary (control) or were preconditioned with forced exercise (1.2 km/day × 3 weeks) prior to P2-antigen induction of EAN. Sedentary rats developed a monophasic course of EAN beginning on postimmunization day 12.3 ± 0.2 and reaching peak severity on day 17.0 ± 0.3 (N = 12). By comparison, forced-exercise preconditioned rats exhibited a similar monophasic course but with significant (p < .05) reduction of disease severity. Analysis of popliteal lymph nodes revealed a protective effect of exercise preconditioning on leukocyte composition and egress. Compared with sedentary controls, forced exercise preconditioning promoted a sustained twofold retention of P2-antigen responsive leukocytes. The percentage distribution of pro-inflammatory (Th1) lymphocytes retained in the nodes from sedentary EAN rats (5.1 ± 0.9%) was significantly greater than that present in nodes from forced-exercise preconditioned EAN rats (2.9 ± 0.6%) or from adjuvant controls (2.0 ± 0.3%). In contrast, the percentage of anti-inflammatory (Th2) lymphocytes (7-10%) and that of cytotoxic T lymphocytes (∼20%) remained unaltered by forced exercise preconditioning. These data do not support an exercise-inducible shift in Th1:Th2 cell bias. Rather, preconditioning with forced exercise elicits a sustained attenuation of EAN severity, in part, by altering the composition and egress of autoreactive proinflammatory (Th1) lymphocytes from draining lymph nodes.
Gardner, David; Woodward, Carol S.; Evans, Katherine J
2015-01-01
Efficient solution of global climate models requires effectively handling disparate length and time scales. Implicit solution approaches allow time integration of the physical system with a time step dictated by the accuracy of the processes of interest rather than by the stability of the fastest time scales present. Implicit approaches, however, require the solution of nonlinear systems within each time step. Usually, Newton's method is applied to these systems. Each iteration of Newton's method, in turn, requires the solution of a linear model of the nonlinear system. This model employs the Jacobian of the problem-defining nonlinear residual, but this Jacobian can be costly to form. If a Krylov linear solver is used for the solution of the linear system, the action of the Jacobian matrix on a given vector is required. In the case of spectral element methods, the Jacobian is not calculated but only implemented through matrix-vector products. The matrix-vector multiply can also be approximated by a finite difference, which may introduce a loss of accuracy in the overall nonlinear solver. In this paper, we review the advantages and disadvantages of finite-difference approximations of these matrix-vector products for climate dynamics within the spectral-element based shallow-water dynamical core of the Community Atmosphere Model (CAM).
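The finite-difference matrix-vector product under discussion replaces the exact action J(u)v by a directional difference of the nonlinear residual. A minimal sketch follows, using one common heuristic for the perturbation size; the specific eps formula varies across JFNK codes and is an assumption here, not the CAM implementation.

```python
import numpy as np

def jfnk_jv(F, u, v, eps=None):
    """Finite-difference approximation of the Jacobian-vector product J(u) v
    used inside Jacobian-free Newton-Krylov solvers (sketch).
    F: nonlinear residual function; u: current state; v: Krylov direction."""
    nv = np.linalg.norm(v)
    if nv == 0.0:
        return np.zeros_like(u)
    if eps is None:
        # heuristic perturbation: balances truncation against round-off error
        eps = np.sqrt(np.finfo(float).eps) * (1.0 + np.linalg.norm(u)) / nv
    # first-order directional difference: (F(u + eps v) - F(u)) / eps ~= J(u) v
    return (F(u + eps * v) - F(u)) / eps
```

Each Krylov iteration then costs one extra residual evaluation instead of a Jacobian assembly; the truncation error of this first-order difference is the accuracy loss the paper analyzes.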
Zou, Ling; Zhao, Haihua; Zhang, Hongbin
2015-09-01
The majority of the existing reactor system analysis codes were developed using low-order numerical schemes in both space and time. In many nuclear thermal–hydraulics applications, it is desirable to use higher-order numerical schemes to reduce numerical errors. High-resolution spatial discretization schemes provide high order spatial accuracy in smooth regions and capture sharp spatial discontinuity without nonphysical spatial oscillations. In this work, we adapted an existing high-resolution spatial discretization scheme on staggered grids in two-phase flow applications. Fully implicit time integration schemes were also implemented to reduce numerical errors from operator-splitting types of time integration schemes. The resulting nonlinear system has been successfully solved using the Jacobian-free Newton–Krylov (JFNK) method. The high-resolution spatial discretization and high-order fully implicit time integration numerical schemes were tested and numerically verified for several two-phase test problems, including a two-phase advection problem, a two-phase advection with phase appearance/disappearance problem, and the water faucet problem. Numerical results clearly demonstrated the advantages of using such high-resolution spatial and high-order temporal numerical schemes to significantly reduce numerical diffusion and therefore improve accuracy. Our study also demonstrated that the JFNK method is stable and robust in solving two-phase flow problems, even when phase appearance/disappearance exists.
Zou, Ling; Zhao, Haihua; Zhang, Hongbin
2016-03-09
This work represents a first-of-its-kind successful application to employ advanced numerical methods in solving realistic two-phase flow problems with two-fluid six-equation two-phase flow model. These advanced numerical methods include high-resolution spatial discretization scheme with staggered grids (high-order) fully implicit time integration schemes, and Jacobian-free Newton–Krylov (JFNK) method as the nonlinear solver. The computer code developed in this work has been extensively validated with existing experimental flow boiling data in vertical pipes and rod bundles, which cover wide ranges of experimental conditions, such as pressure, inlet mass flux, wall heat flux and exit void fraction. Additional code-to-code benchmark with the RELAP5-3D code further verifies the correct code implementation. The combined methods employed in this work exhibit strong robustness in solving two-phase flow problems even when phase appearance (boiling) and realistic discrete flow regimes are considered. Transitional flow regimes used in existing system analysis codes, normally introduced to overcome numerical difficulty, were completely removed in this work. As a result, this in turn provides the possibility to utilize more sophisticated flow regime maps in the future to further improve simulation accuracy.
Salinger, Andy; Evans, Katherine J; Lemieux, Jean-Francois; Holland, David; Payne, Tony; Price, Stephen; Knoll, Dana
2011-01-01
We have implemented the Jacobian-free Newton-Krylov (JFNK) method for solving the first-order ice sheet momentum equation in order to improve the numerical performance of the Community Ice Sheet Model (CISM), the land ice component of the Community Earth System Model (CESM). Our JFNK implementation is based on significant re-use of existing code. For example, our physics-based preconditioner uses the original Picard linear solver in CISM. For several test cases spanning a range of geometries and boundary conditions, our JFNK implementation is 1.84-3.62 times more efficient than the standard Picard solver in CISM. Importantly, this computational gain of JFNK over the Picard solver increases when refining the grid. Global convergence of the JFNK solver has been significantly improved by rescaling the equation for the basal boundary condition and through the use of an inexact Newton method. While a diverse set of test cases shows that our JFNK implementation is usually robust, for some problems it may fail to converge with increasing resolution (as does the Picard solver). Globalization through parameter continuation did not remedy this problem, and future work to improve robustness will explore a combination of Picard and JFNK and the use of homotopy methods.
Preconditioning of Interplanetary Space Due to Transient CME Disturbances
NASA Astrophysics Data System (ADS)
Temmer, M.; Reiss, M. A.; Nikolic, L.; Hofmeister, S. J.; Veronig, A. M.
2017-02-01
Interplanetary space is structured mainly by high-speed solar wind streams emanating from coronal holes and by transient disturbances such as coronal mass ejections (CMEs). While high-speed solar wind streams represent a continuous outflow, CMEs abruptly disrupt the rather steady structure, causing large deviations from quiet solar wind conditions. For the first time, we quantify the duration of disturbed conditions (preconditioning) of interplanetary space caused by CMEs. To this end, we investigate the plasma speed component of the solar wind and the impact of in situ detected interplanetary CMEs (ICMEs), compared to different background solar wind models (ESWF, WSA, persistence model) for the time range 2011–2015. We quantify, in terms of standard error measures, the deviations between modeled background solar wind speed and observed solar wind speed. Using the mean absolute error, we obtain an average deviation for quiet solar activity within a range of 75.1–83.1 km s‑1. Compared to this baseline level, periods within the ICME interval showed an increase of 18%–32% above the expected background, and the period of two days after the ICME displayed an increase of 9%–24%. We obtain a total duration of enhanced deviations of about three and up to six days after the ICME start, which is much longer than the average duration of an ICME disturbance itself (∼1.3 days); we conclude that interplanetary space needs ∼2–5 days to recover from the impact of ICMEs. The obtained results have strong implications for studying CME propagation behavior and also for space weather forecasting.
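The deviation measure used in the study above is straightforward to reproduce: the mean absolute error between a modeled background wind speed series and the observed series, and the percentage increase of a disturbed period's error over the quiet-time baseline. A minimal sketch with made-up numbers, not the study's data:

```python
import numpy as np

def mean_absolute_error(observed, modeled):
    """MAE between observed and modeled solar wind speed (km/s)."""
    observed = np.asarray(observed, dtype=float)
    modeled = np.asarray(modeled, dtype=float)
    return float(np.mean(np.abs(observed - modeled)))

def percent_above_baseline(mae_period, mae_quiet):
    """Relative increase (in %) of a period's MAE over the quiet baseline."""
    return 100.0 * (mae_period - mae_quiet) / mae_quiet
```

With a quiet-time MAE near 80 km/s, the paper's reported 18%–32% ICME-interval increase corresponds to period MAEs of roughly 94–106 km/s under this definition.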
Resveratrol preconditioning protects against cerebral ischemic injury via Nrf2
Narayanan, Srinivasan V.; Dave, Kunjan R.; Saul, Isa; Perez-Pinzon, Miguel A.
2015-01-01
Background and Purpose: Nuclear erythroid 2-related factor 2 (Nrf2) is an astrocyte-enriched transcription factor previously shown to upregulate cellular antioxidant systems in response to ischemia. While resveratrol preconditioning (RPC) has emerged as a potential neuroprotective therapy, the involvement of Nrf2 in RPC-induced neuroprotection and mitochondrial reactive oxygen species (ROS) production following cerebral ischemia remains unclear. The goal of our study was to determine the contribution of Nrf2 to RPC and its effects on mitochondrial function. Methods: We used rodent astrocyte cultures and an in vivo stroke model with RPC. An Nrf2 DNA-binding ELISA and protein analysis via Western blotting of downstream Nrf2 targets were performed to determine RPC-induced activation of Nrf2 in rat and mouse astrocytes. Following RPC, mitochondrial function was determined by measuring ROS production and mitochondrial respiration in both wild-type (WT) and Nrf2−/− mice. Infarct volume was measured to determine neuroprotection, while protein levels were measured by immunoblotting. Results: We report that Nrf2 is activated by RPC in rodent astrocyte cultures, and that loss of Nrf2 reduced RPC-mediated neuroprotection in a mouse model of focal cerebral ischemia. In addition, we observed that wild-type and Nrf2−/− cortical mitochondria exhibited increased uncoupling and ROS production following RPC treatments. Finally, Nrf2−/− astrocytes exhibited decreased mitochondrial antioxidant expression and were unable to upregulate cellular antioxidants following RPC treatment. Conclusion: Nrf2 contributes to RPC-induced neuroprotection by maintaining mitochondrial coupling and antioxidant protein expression. PMID:25908459
Exercise preconditioning attenuates pressure overload-induced pathological cardiac hypertrophy
Xu, Tongyi; Tang, Hao; Zhang, Ben; Cai, Chengliang; Liu, Xiaohong; Han, Qingqi; Zou, Liangjian
2015-01-01
Pathological cardiac hypertrophy, a common response of the heart to a variety of cardiovascular diseases, is typically associated with myocyte remodeling, fibrotic replacement, and cardiac dysfunction. Exercise preconditioning (EP) increases the myocardial mechanical load and enhances tolerance of cardiac ischemia-reperfusion injury (IRI); its role in pathological cardiac hypertrophy, however, is less well characterized. To determine the effect of EP in pathological cardiac hypertrophy, male 10-wk-old Sprague-Dawley rats (n=30) were subjected to 4 weeks of EP followed by 4-8 weeks of pressure overload (transverse aortic constriction, TAC) to induce pathological remodeling. TAC in untrained controls (n=30) led to pathological cardiac hypertrophy and depressed systolic function. We observed that left ventricular wall thickness at end-diastole, heart size, heart weight-to-body weight ratio, heart weight-to-tibia length ratio, cross-sectional area of cardiomyocytes, and the reactivation of fetal genes (atrial natriuretic peptide and brain natriuretic peptide) were markedly increased, while left ventricular internal dimension at end-diastole and systolic function were significantly decreased by TAC at 4 wks after operation (P < 0.01), all of which were effectively inhibited by EP treatment (P < 0.05); the differences in these parameters, however, were smaller at 8 wks after operation. Furthermore, EP treatment inhibited degradation of IκBα, decreased NF-κB p65 subunit levels in the nuclear fraction, and thereby reduced IL-2 levels in the myocardium of rats subjected to TAC. EP can effectively attenuate pathological cardiac hypertrophic responses induced by TAC, possibly through inhibition of IκB degradation and blockade of the NF-κB signaling pathway in the early stage of pathological cardiac hypertrophy. PMID:25755743
Glaciations in response to climate variations preconditioned by evolving topography.
Pedersen, Vivi Kathrine; Egholm, David Lundbek
2013-01-10
Landscapes modified by glacial erosion show a distinct distribution of surface area with elevation (hypsometry). In particular, the height of these regions is influenced by climatic gradients controlling the altitude where glacial and periglacial processes are the most active, and as a result, surface area is focused just below the snowline altitude. Yet the effect of this distinct glacial hypsometric signature on glacial extent and therefore on continued glacial erosion has not previously been examined. Here we show how this topographic configuration influences the climatic sensitivity of Alpine glaciers, and how the development of a glacial hypsometric distribution influences the intensity of glaciations on timescales of more than a few glacial cycles. We find that the relationship between variations in climate and the resulting variation in areal extent of glaciation changes drastically with the degree of glacial modification in the landscape. First, in landscapes with novel glaciations, a nearly linear relationship between climate and glacial area exists. Second, in previously glaciated landscapes with extensive area at a similar elevation, highly nonlinear and rapid glacial expansions occur with minimal climate forcing, once the snowline reaches the hypsometric maximum. Our results also show that erosion associated with glaciations before the mid-Pleistocene transition at around 950,000 years ago probably preconditioned the landscape--producing glacial landforms and hypsometric maxima--such that ongoing cooling led to a significant change in glacial extent and erosion, resulting in more extensive glaciations and valley deepening in the late Pleistocene epoch. We thus provide a mechanism that explains previous observations from exposure dating and low-temperature thermochronology in the European Alps, and suggest that there is a strong topographic control on the most recent Quaternary period glaciations.
Downward-Propagating Temperature Anomalies in the Preconditioned Polar Stratosphere.
NASA Astrophysics Data System (ADS)
Zhou, Shuntai; Miller, Alvin J.; Wang, Julian; Angell, James K.
2002-04-01
Dynamical links of the Northern Hemisphere stratosphere and troposphere are studied, with an emphasis on whether stratospheric changes have a direct effect on tropospheric weather and climate. In particular, downward propagation of stratospheric anomalies of polar temperature in the winter-spring season is examined based upon 22 years of NCEP-NCAR reanalysis data. It is found that the polar stratosphere is sometimes preconditioned, which allows a warm anomaly to propagate from the upper stratosphere to the troposphere, and sometimes it prohibits downward propagation. The Arctic Oscillation (AO) is more clearly seen in the former case. To understand what dynamical conditions dictate the stratospheric property of downward propagation, the upper-stratospheric warming episodes with very large anomalies (such as stratospheric sudden warming) are selected and divided into two categories according to their downward-propagating features. Eliassen-Palm (E-P) diagnostics and wave propagation theories are used to examine the characteristics of wave-mean flow interactions in the two different categories. It is found that in the propagating case the initial wave forcing is very large and the polar westerly wind is reversed. As a result, dynamically induced anomalies propagate down as the critical line descends. A positive feedback is that the dramatic change in zonal wind alters the refractive index in a way favorable for continuous poleward transport of wave energy. The second pulse of wave flux conducts polar warm anomalies farther down. Consequently, the upper-tropospheric circulations are changed, in particular, the subtropical North Atlantic jet stream shifts to the south by 5 degrees of latitude, and the alignment of the jet stream becomes more zonal, which is similar to the negative phase of the North Atlantic Oscillation (NAO).
Meclizine Preconditioning Protects the Kidney Against Ischemia-Reperfusion Injury.
Kishi, Seiji; Campanholle, Gabriela; Gohil, Vishal M; Perocchi, Fabiana; Brooks, Craig R; Morizane, Ryuji; Sabbisetti, Venkata; Ichimura, Takaharu; Mootha, Vamsi K; Bonventre, Joseph V
2015-09-01
Global or local ischemia contributes to the pathogenesis of acute kidney injury (AKI). Currently there are no specific therapies to prevent AKI. Potentiation of glycolytic metabolism and attenuation of mitochondrial respiration may decrease cell injury and reduce reactive oxygen species generation from the mitochondria. Meclizine, an over-the-counter anti-nausea and anti-dizziness drug, was identified in a 'nutrient-sensitized' chemical screen. Pretreatment with 100 mg/kg of meclizine, 17 h prior to ischemia, protected mice from ischemia-reperfusion injury (IRI). Serum creatinine levels at 24 h after IRI were 0.13 ± 0.06 mg/dl (sham, n = 3), 1.59 ± 0.10 mg/dl (vehicle, n = 8) and 0.89 ± 0.11 mg/dl (meclizine, n = 8). Kidney injury was significantly decreased in meclizine-treated mice compared with the vehicle group (p < 0.001). Protection was also seen when meclizine was administered 24 h prior to ischemia. Meclizine reduced inflammation, mitochondrial oxygen consumption, oxidative stress, mitochondrial fragmentation, and tubular injury. Meclizine-preconditioned kidney tubular epithelial cells, exposed to blockade of glycolytic and oxidative metabolism with 2-deoxyglucose and NaCN, had reduced LDH and cytochrome c release. Meclizine upregulated glycolysis in glucose-containing media and reduced cellular ATP levels in galactose-containing media. Meclizine inhibited the Kennedy pathway and caused rapid accumulation of phosphoethanolamine. Phosphoethanolamine recapitulated meclizine-induced protection both in vitro and in vivo.
Human amniotic fluid stem cell preconditioning improves their regenerative potential.
Rota, Cinzia; Imberti, Barbara; Pozzobon, Michela; Piccoli, Martina; De Coppi, Paolo; Atala, Anthony; Gagliardini, Elena; Xinaris, Christodoulos; Benedetti, Valentina; Fabricio, Aline S C; Squarcina, Elisa; Abbate, Mauro; Benigni, Ariela; Remuzzi, Giuseppe; Morigi, Marina
2012-07-20
Human amniotic fluid stem (hAFS) cells, a novel class of broadly multipotent stem cells that share characteristics of both embryonic and adult stem cells, have been regarded as a promising candidate for cell therapy. Taking advantage of the well-established murine model of acute kidney injury (AKI), we studied the proregenerative effect of hAFS cells in immunodeficient mice injected with the nephrotoxic drug cisplatin. Infusion of hAFS cells in cisplatin-treated mice improved renal function and limited tubular damage, although not to control level, and prolonged animal survival. Human AFS cells engrafted the injured kidney predominantly in the peritubular region without acquiring tubular epithelial markers. Human AFS cells exerted an antiapoptotic effect, activated Akt, and stimulated proliferation of tubular cells, possibly via local release of factors, including interleukin-6, vascular endothelial growth factor, and stromal cell-derived factor-1, which we documented in vitro to be produced by hAFS cells. The therapeutic potential of hAFS cells was enhanced by cell pretreatment with glial cell line-derived neurotrophic factor (GDNF), which markedly ameliorated renal function and tubular injury by increasing stem cell homing to the tubulointerstitial compartment. In in vitro studies, GDNF increased hAFS cell production of growth factors, motility, and expression of receptors involved in cell homing and survival. These findings indicate that hAFS cells can promote functional recovery and contribute to renal regeneration in AKI mice via local production of mitogenic and prosurvival factors. The effects of hAFS cells can be remarkably enhanced by GDNF preconditioning.
Gauss-Newton inspired preconditioned optimization in large deformation diffeomorphic metric mapping
NASA Astrophysics Data System (ADS)
Hernandez, Monica
2014-10-01
In this work, we propose a novel preconditioned optimization method in the paradigm of Large Deformation Diffeomorphic Metric Mapping (LDDMM). The preconditioned update scheme is formulated for the non-stationary and the stationary parameterizations of diffeomorphisms, yielding three different LDDMM methods. The preconditioning matrices are inspired by the Hessian approximation used in the Gauss-Newton method. The derivatives are computed using Frechet differentials. Thus, optimization is performed in a Sobolev space, in contrast to the L2 optimization commonly used in the non-rigid registration literature. The proposed LDDMM methods have been evaluated and compared with their respective implementations of gradient descent optimization. Evaluation has been performed using real and simulated images from the Non-rigid Image Registration Evaluation Project (NIREP). The experiments conducted in this work show that our preconditioned LDDMM methods achieved performance similar or superior to the well-established gradient descent non-stationary LDDMM in the great majority of cases. Moreover, preconditioned optimization showed a substantial reduction in execution time with an affordable increase in memory usage per iteration. Additional experiments indicate that optimization using Frechet differentials is preferable to optimization using L2 differentials.
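Optimization in a Sobolev space, as above, amounts to smoothing the L2 gradient with the inverse of a differential operator before taking a step. A one-dimensional sketch of that idea, applying K = (Id − α∇²)⁻¹ in the Fourier domain (the 1-D setting and the value of α are illustrative assumptions, not the paper's exact kernel):

```python
import numpy as np

def sobolev_gradient_1d(g_l2, alpha=1.0):
    """Map an L2 gradient to a Sobolev (H^1) gradient by applying
    K = (Id - alpha * Laplacian)^(-1), which is diagonal in the Fourier basis."""
    n = g_l2.size
    k = 2.0 * np.pi * np.fft.fftfreq(n)   # discrete angular frequencies
    symbol = 1.0 + alpha * k**2           # Fourier symbol of Id - alpha*Laplacian
    return np.real(np.fft.ifft(np.fft.fft(g_l2) / symbol))
```

High frequencies are damped by 1/(1 + αk²), so the resulting descent direction stays smooth (as required for diffeomorphic updates); as α → 0 the scheme reduces to plain L2 gradient descent.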
Helium preconditioning attenuates hypoxia/ischemia-induced injury in the developing brain.
Liu, Yi; Xue, Feng; Liu, Guoke; Shi, Xin; Liu, Yun; Liu, Wenwu; Luo, Xu; Sun, Xuejun; Kang, Zhimin
2011-02-28
Recent studies show helium may be a neuroprotective gas. This study aimed to examine the short- and long-term neuroprotective effects of helium preconditioning in an established neonatal cerebral hypoxia-ischemia (HI) model. Seven-day-old rat pups were subjected to left common carotid artery ligation and then 90 min of hypoxia (8% oxygen at 37°C). The preconditioning group inhaled 70% helium-30% oxygen for three 5-min periods separated by 5-min intervals, 24 h before the HI insult. Pups were decapitated 24 h after HI and brain morphological injury was assessed by 2,3,5-triphenyltetrazolium chloride (TTC), Nissl and TUNEL staining. Caspase-3 activity in the brain was measured. Five weeks after HI, postural reflex testing and Morris water maze testing were conducted. Our results showed that helium preconditioning reduced the infarct ratio, increased the number of surviving neurons, and inhibited apoptosis at the early stage of the HI insult. Furthermore, sensorimotor function and cognitive function were improved significantly in rats with helium preconditioning. The results indicate that helium preconditioning attenuates HI-induced brain injury.
Analysis of physics-based preconditioning for single-phase subchannel equations
Hansel, J. E.; Ragusa, J. C.; Allu, S.; Berrill, M. A.; Clarno, K. T.
2013-07-01
The (single-phase) subchannel approximations are used throughout nuclear engineering to provide efficient flow simulation because the computational burden is much smaller than for computational fluid dynamics (CFD) simulations, and empirical relations have been developed and validated to provide accurate solutions in appropriate flow regimes. Here, the subchannel equations have been recast in a residual form suitable for a multi-physics framework. The eigenvalue spectrum of the Jacobian matrix, along with several potential physics-based preconditioning approaches, is evaluated, and the potential for improved convergence from preconditioning is assessed. The physics-based preconditioner options include several forms of reduced equations that decouple the subchannels by neglecting crossflow, conduction, and/or both turbulent momentum and energy exchange between subchannels. Eigenvalue analysis shows that preconditioning moves clusters of eigenvalues away from zero and toward one. A test problem is run with and without preconditioning. Without preconditioning, the solution failed to converge using GMRES, but application of any of the preconditioners allowed the solution to converge. (authors)
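The effect described above, preconditioning pulling eigenvalues toward one, can be seen on a toy system: even a simple Jacobi (diagonal) preconditioner applied to a diagonally dominant matrix clusters the spectrum near unity, which is the property Krylov solvers such as GMRES reward. This is a generic illustration, not the subchannel Jacobian itself:

```python
import numpy as np

# A small diagonally dominant stand-in for a Jacobian with spread-out eigenvalues.
A = np.array([[10.0,  1.0,  0.0],
              [ 1.0, 20.0,  1.0],
              [ 0.0,  1.0, 30.0]])

# Jacobi (diagonal) preconditioner: the crudest form of decoupling the rows.
M_inv = np.diag(1.0 / np.diag(A))

eigs_raw = np.linalg.eigvals(A)          # spread roughly over 10 to 30
eigs_pre = np.linalg.eigvals(M_inv @ A)  # clustered near 1

# Distance of each spectrum from the ideal cluster at 1.
spread_raw = np.max(np.abs(eigs_raw - 1.0))
spread_pre = np.max(np.abs(eigs_pre - 1.0))
```

The physics-based preconditioners in the paper (dropping crossflow, conduction, or turbulent exchange) play the same role as `M_inv` here, but retain far more of the coupling than a bare diagonal.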
Zhang, Qichun; Bian, Huimin; Guo, Liwei; Zhu, Huaxu
2016-01-01
Pharmacologic preconditioning is an intriguing and emerging approach adopted to prevent ischemia/reperfusion injury. Neuroprotection is the cardinal effect among the pleiotropic actions of berberine. Here we investigated whether berberine could also act as a preconditioning stimulus to attenuate hypoxia-induced neuronal death. Male Sprague-Dawley rats subjected to middle cerebral artery occlusion (MCAO) and rat primary cortical neurons undergoing oxygen and glucose deprivation (OGD) were preconditioned with berberine (40 mg/kg for 24 h in vivo, and 10-6 mol/L for 2 h in vitro, respectively). The neurological deficits and cerebral water contents of MCAO rats were evaluated. Autophagy and apoptosis were further determined in primary neurons in vitro. Berberine preconditioning (BP) was shown to ameliorate the neurological deficits, decrease cerebral water content, and promote neurogenesis in MCAO rats. Decreased LDH release from OGD-treated neurons was observed with BP, an effect blocked by LY294002 (20 µmol/L), GSK690693 (10 µmol/L), or YC-1 (25 µmol/L). Furthermore, BP stimulated autophagy and inhibited apoptosis by modulating the autophagy-associated proteins LC3, Beclin-1 and p62, and the apoptosis-modulating proteins caspase 3, caspase 8, caspase 9, PARP and BCL-2/Bax. In conclusion, berberine acts as a preconditioning stimulus that exhibits neuroprotection by promoting autophagy and decreasing anoxia-induced apoptosis. PMID:27158406
Riepe, M W; Esclaire, F; Kasischke, K; Schreiber, S; Nakase, H; Kempski, O; Ludolph, A C; Dirnagl, U; Hugon, J
1997-03-01
A short ischemic episode preceding sustained ischemia is known to increase tolerance against ischemic cell death. We report early-onset long-lasting neuroprotection against in vitro hypoxia by preceding selective chemical inhibition of oxidative phosphorylation: "chemical preconditioning." The amplitude of CA1 population spikes (psap) in hippocampal slices prepared from control animals (control slices) was 31 +/- 27% (mean +/- SD) upon 45-min recovery from 15-min in vitro hypoxia. In slices prepared from animals treated in vivo with 20 mg/kg 3-nitropropionate (3-np) 1-24 h prior to slice preparation (preconditioned slices), psap improved to 90 +/- 15% (p < 0.01). Posthypoxic oxygen free radicals were reduced to 65 +/- 10% (mean +/- SD) of control in preconditioned slices (p < 0.05). Posthypoxic neuronal density improved from 52 +/- 15% (mean +/- SD) in control slices to 97 +/- 23% in preconditioned slices (p < 0.001). Glibenclamide, an antagonist at KATP-channels, partly reversed increased hypoxic tolerance. We conclude that chemical preconditioning induces early-onset long-lasting tolerance against in vitro hypoxia. Ultimately, this strategy may be applicable as a neuroprotective strategy in humans.
SCHMUTHS, HEIKE; BACHMANN, KONRAD; WEBER, W. EBERHARD; HORRES, RALF; HOFFMANN, MATTHIAS H.
2006-01-01
• Background and Aims Germination and establishment of seeds are complex traits affected by a wide range of internal and external influences. The effects of parental temperature preconditioning and temperature during germination on germination and establishment of Arabidopsis thaliana were examined. • Methods Seeds from parental plants grown at 14 and at 22 °C were screened for germination (protrusion of radicle) and establishment (greening of cotyledons) at three different temperatures (10, 18 and 26 °C). Seventy-three accessions from across the entire distribution range of A. thaliana were included. • Key Results Multifactorial analyses of variances revealed significant differences in the effects of genotypes, preconditioning, temperature treatment, and their interactions on duration of germination and establishment. Reaction norms showed an enormous range of plasticity among the preconditioning and different germination temperatures. Correlations of percentage total germination and establishment after 38 d with the geographical origin of accessions were only significant for 14 °C preconditioning but not for 22 °C preconditioning. Correlations with temperature and precipitation on the origin of the accessions were mainly found at the lower germination temperatures (10 and 18 °C) and were absent at higher germination temperatures (26 °C). • Conclusions Overall, the data show huge variation of germination and establishment among natural accessions of A. thaliana and might serve as a valuable source for further germination and plasticity studies. PMID:16464878